Compare commits


145 Commits

Author SHA1 Message Date
Ruihang Xia
1bd53567b4 try to run on self-hosted runner
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-13 16:01:50 +08:00
Weny Xu
803940cfa4 feat: enable azblob tests (#1765)
* feat: enable azblob tests

* fix: add missing arg
2023-06-13 07:44:57 +00:00
Weny Xu
420ae054b3 chore: add debug log for heartbeat (#1770) 2023-06-13 07:43:26 +00:00
Lei, HUANG
0f1e061f24 fix: compile issue on develop and workaround to fix failing tests cau… (#1771)
* fix: compile issue on develop and workaround to fix failing tests caused by logstore file lock

* Apply suggestions from code review

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-06-13 07:30:16 +00:00
Lei, HUANG
7961de25ad feat: persist compaction time window (#1757)
* feat: persist compaction time window

* refactor: remove useless compaction window fields

* chore: revert some useless change

* fix: some CR comments

* fix: comment out unstable sqlness test

* revert commented sqlness
2023-06-13 10:15:42 +08:00
Lei, HUANG
f7d98e533b chore: fix compaction caused race condition (#1759)
* fix: set max_files_in_l0 in unit tests to avoid compaction

* refactor: pass while EngineConfig

* fix: comment out unstable sqlness test

* revert commented sqlness
2023-06-12 11:19:42 +00:00
Ruihang Xia
b540d640cf fix: unstable order with union operation (#1763)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-12 18:16:24 +08:00
Eugene Tolbakov
51a4d660b7 feat(to_unixtime): add timestamp types as arguments (#1632)
* feat(to_unixtime): add timestamp types as arguments

* feat(to_unixtime): change the return type

* feat(to_unixtime): address code review issues

* feat(to_unixtime): fix fmt issue
2023-06-12 17:21:49 +08:00
Ruihang Xia
1b2381502e fix: bring EnforceSorting rule forward (#1754)
* fix: bring EnforceSorting rule forward

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated rules

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* wrap remove logic into a method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-12 07:29:08 +00:00
Yingwen
0e937be3f5 fix(storage): Use region_write_buffer_size as default value (#1760) 2023-06-12 15:05:17 +08:00
Weny Xu
564c183607 chore: make MetaKvBackend public (#1761) 2023-06-12 14:13:26 +08:00
Ruihang Xia
8c78368374 refactor: replace #[snafu(backtrace)] with Location (#1753)
* remove snafu backtrace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-12 11:55:33 +08:00
Lei, HUANG
67c16dd631 feat: optimize some parquet writer parameter (#1758) 2023-06-12 11:46:45 +08:00
Lei, HUANG
ddcee052b2 fix: order by optimization (#1748)
* add some debug log

* fix: use lazy parquet reader in MitoTable::scan_to_stream to avoid IO in plan stage

* fix: unit tests

* fix: order-by optimization

* add some tests

* fix: move metric names to metrics.rs

* fix: some cr comments
2023-06-12 11:45:43 +08:00
王听正
7efcf868d5 refactor: Remove MySQL related options from Datanode (#1756)
* refactor: Remove MySQL related options from Datanode

remove mysql_addr and mysql_runtime_size in datanode.rs, remove command line argument mysql_addr in cmd/src/datanode.rs

#1739

* feat: remove --mysql-addr from command line

in pre commit, sqlness can not find --mysql-addrr, because we remove it

issue#1739

* refactor: remove --mysql-addr from command line

in pre commit, sqlness can not find --mysql-addrr, because we remove it

issue#1739
2023-06-12 11:00:24 +08:00
dennis zhuang
f08f726bec test: s3 manifest (#1755)
* feat: change default manifest options

* test: s3 manifest

* feat: revert checkpoint_margin to 10

* Update src/object-store/src/test_util.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-06-09 10:28:41 +00:00
Ning Sun
7437820bdc ci: correct data type for input and event check (#1752) 2023-06-09 13:59:56 +08:00
Lei, HUANG
910c950717 fix: jemalloc error does not implement Error (#1747) 2023-06-09 04:00:50 +00:00
Zou Wei
f91cd250f8 feat:make version() show greptime info. (#1749)
* feat:impl get_version() to return greptime info.

* fix: refactor test case.
2023-06-09 11:38:52 +08:00
Yingwen
115d9eea8d chore: Log version and arguments (#1744) 2023-06-09 11:38:08 +08:00
Ning Sun
bc8f236806 ci: fix using env in job.if context (#1751) 2023-06-09 11:28:29 +08:00
Yiran
fdbda51c25 chore: update document links in README.md (#1745) 2023-06-09 10:05:24 +08:00
Ning Sun
e184826353 ci: allow triggering nightly release manually (#1746)
ci: allow triggering nightly manually
2023-06-09 10:04:44 +08:00
Yingwen
5b8e54e60e feat: Add HTTP API for cpu profiling (#1694)
* chore: print source error in mem-prof

* feat(common-pprof): add pprof crate

* feat(servers): Add pprof handler to router

refactor the mem_prof handler to avoid checking feature while
registering router

* feat(servers): pprof handler support different output type

* docs(common-pprof): Add readme

* feat(common-pprof): Build guard using code in pprof-rs's example

* feat(common-pprof): use prost

* feat: don't add timeout to perf api

* feat: add feature pprof

* feat: update readme

* test: fix tests

* feat: close region in TestBase

* feat(pprof): addres comments
2023-06-07 15:25:16 +08:00
Lei, HUANG
8cda1635cc feat: make jemalloc the default allocator (#1733)
* feat: add jemalloc metrics

* fix: dep format
2023-06-06 12:11:22 +00:00
Lei, HUANG
f63ddb57c3 fix: parquet time range predicate panic (#1735)
fix: parquet reader should use store schema to build time range predicate
2023-06-06 19:11:45 +08:00
fys
d2a8fd9890 feat: add route admin api in metasrv (#1734)
* feat: add route admin api in metasrv

* fix: add license
2023-06-06 18:00:02 +08:00
LFC
91026a6820 chore: clean up some of my todos (#1723)
* chore: clean up some of my todos

* fix: ci
2023-06-06 17:25:04 +08:00
Ruihang Xia
7a60bfec2a fix: empty result type on prom query endpoint (#1732)
* adjust return type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-06 15:40:54 +08:00
Niwaka
a103614fd2 feat: support /api/v1/series for Prometheus (#1620)
* feat: support /api/v1/series for Prometheus

* chore: error handling

* feat: update tests
2023-06-06 10:29:16 +08:00
Yingwen
1b4976b077 feat: Adds some metrics for write path and flush (#1726)
* feat: more metrics

* feat: Add preprocess elapsed

* chore(storage): rename metric

* test: fix tests
2023-06-05 21:35:44 +08:00
Lei, HUANG
166fb8871e chore: bump greptimedb version 0.4.0 (#1724) 2023-06-05 18:41:53 +08:00
Yingwen
466f258266 feat(servers): collect samples by metric (#1706) 2023-06-03 17:17:52 +08:00
Ruihang Xia
94228285a7 feat: convert values to vector directly (#1704)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-03 12:41:13 +08:00
JeremyHi
3d7185749d feat: insert with stream (#1703)
* feat: insert with stream

* chore: by CR
2023-06-03 03:58:00 +00:00
LFC
5004cf6d9a feat: make grpc insert requests in a batch (#1687)
* feat: make Prometheus remote write in a batch

* rebase

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments
2023-06-02 09:06:48 +00:00
Ruihang Xia
8e69aef973 feat: serialize/deserialize support for PromQL plans (#1684)
* implement serializer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy and CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* register registry

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* enable promql plan for dist planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-02 16:14:05 +08:00
Ruihang Xia
2615718999 feat: merge scan for distributed execution (#1660)
* generate exec plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move DatanodeClients to client crate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* wip MergeScanExec::to_stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix default catalog

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix expand order of new stage

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move sqlness cases contains plan out of common dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor information schema to allow duplicated scan call

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: ignore two cases due to substrait

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reorganise sqlness common cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* redact round robin partition number

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* skip tranforming projection

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert common/order

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/dist_plan/merge_scan.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore region failover IT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result again and again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* unignore some tests about projection

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* enable failover tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-06-02 06:42:54 +00:00
fys
fe6e3daf81 fix: failed to insert data with u8 (#1701)
* fix: failed to insert data with u8 field

* remove unused code

* fix cr
2023-06-02 06:01:59 +00:00
ZonaHe
b7e1778ada feat: update dashboard to v0.2.6 (#1700)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-06-02 13:26:07 +08:00
Lei, HUANG
ccd666aa9b fix: avoid writing manifest and wal if no files are actually flushed (#1698)
* fix: avoid writing manifest and wal if no files are actually flushed

* fix: simplify log
2023-06-02 13:16:59 +08:00
JeremyHi
2aa442c86d feat: exists API for KVStore (#1695)
* feat: exists API for kv

* chore: add unit test
2023-06-02 12:35:04 +08:00
Weny Xu
f811ae4665 fix: enable region failover test (#1699)
fix: fix region failover test
2023-06-02 12:05:37 +08:00
Ruihang Xia
e5b6f8654a feat: optimizer rule to pass expected output ordering hint (#1675)
* move type convertsion rule into optimizer dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement order_hint rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* it works!

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use column name instead

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* accomplish test case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update lock file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-02 03:43:51 +00:00
Ruihang Xia
ff6d11ddc7 chore: ignore symbol link target file (#1696)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-02 10:42:44 +08:00
Ruihang Xia
878c6bf75a fix: do not alias relation before join (#1693)
* fix: do not alias relation before join

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/promql/src/error.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-06-01 14:24:37 +00:00
LFC
ce440606a9 fix: sqlness failed due to region failover wrongly kicks in for dropp… (#1690)
fix: sqlness failed due to region failover wrongly kicks in for dropped or renamed table
2023-06-01 21:47:47 +08:00
fys
5fd7250dca fix: invalidate route cache on renaming table (#1691)
* fix: sqlness test

* remove unnecessary clone

* fix cr
2023-06-01 20:43:31 +08:00
Ruihang Xia
5a5e88353c fix: do not change timestamp index column while planning aggr (#1688)
* fix: do not change timestamp index column while planning aggr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove println

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-01 20:17:18 +08:00
Ruihang Xia
ef15de5f17 ci: always upload sqlness log (#1692)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-01 20:01:26 +08:00
fys
86adac1532 chore: reject table creation when partitions exceeds peer number (#1654)
* chore: table creation is rejected, when partition_num exceeds peer_num

* chore: modify no_active_datanode error msg

* fix: ut

* fix sqlness test and add limit for select peer in region_failover

* upgrade greptime-proto

* self cr

* fix: cargo sqlness

* chore: add table info in select ctx for failover

* fix sqlness
2023-06-01 09:05:17 +00:00
Ning Sun
e7a410573b test: fix sqlx compatibility and adds integration test for sqlx (#1686)
* test: fix sqlx compatibility and adds integration test for sqlx

* test: correct insert statements
2023-06-01 15:43:13 +08:00
Yingwen
548f0d1e2a feat: Add app version metric (#1685)
* feat: Add app version metric

* chore: use greptimedb instead of greptime
2023-06-01 14:31:08 +08:00
Zheming Li
5467ea496f feat: Add column supports at first or after the existing columns (#1621)
* feat: Add column supports at first or after the existing columns

* Update src/common/query/Cargo.toml

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-06-01 02:13:00 +00:00
Yingwen
70e17ead68 fix: Print source error in subprocedure failure message (#1683)
* fix: print source error in subprocedure failed error

* feat: print source error in subprocedure failure message
2023-06-01 09:51:31 +08:00
dennis zhuang
ae8203fafa fix: prepare statement doesn't support insert clause (#1680)
* fix: insert clause doesn't support prepare statement

* fix: manifeste dir

* fix: format

* fix: temp path
2023-05-31 20:14:58 +08:00
Ruihang Xia
ac3666b841 chore(deps): bump arrow/parquet to 40.0, datafuson to the latest HEAD (#1677)
* fix compile error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deprecated substrait

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update deps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* downgrade opendal to 0.33.1

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change finish's impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore failing cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-31 18:55:02 +08:00
Weny Xu
0460f3ae30 test: add write test for region failover (#1673)
* test: add write test for region failover

* test: add written data assertion after failover

* test: support more storage types
2023-05-31 15:42:00 +08:00
Yingwen
9d179802b8 feat: Add a global TTL option for all tables (#1679)
* feat: Add a global TTL option for all tables

* docs: update config examples

* chore: print start command and options when standalone/frontend starts
2023-05-31 15:36:25 +08:00
Lei, HUANG
72b6bd11f7 feat: adapt window reader to order rules (#1671)
* feat: adapt window reader to order rules

* fix: add asc sort test case
2023-05-31 03:36:17 +00:00
Xuanwo
6b08a5f94e chore: Bump OpenDAL to v0.36 (#1678)
* chore: Bump OpenDAL to v0.36

Signed-off-by: Xuanwo <github@xuanwo.io>

* Fix

Signed-off-by: Xuanwo <github@xuanwo.io>

---------

Signed-off-by: Xuanwo <github@xuanwo.io>
2023-05-31 11:12:40 +08:00
dennis zhuang
00104bef76 feat: supports CTE query (#1674)
* feat: supports CTE query

* test: move cte test to standalone
2023-05-30 12:08:49 +00:00
Zou Wei
ae81c7329d feat: support azblob storage. (#1659)
* feat:support azblob storage.

* test:add some tests.

* refactor:use if-let.
2023-05-30 19:59:38 +08:00
Yingwen
c5f6d7c99a refactor: update proto and rename incorrect region_id fields (#1670) 2023-05-30 15:19:04 +09:00
Weny Xu
bb1b71bcf0 feat: acquire table_id from region_id (#1656)
feat: acquire table_id from region_id
2023-05-30 03:36:47 +00:00
Weny Xu
a4b884406a feat: add invalidate cache step (#1658)
* feat: add invalidate cache step

* refactor: refactor TableIdent

* chore: apply suggestions from CR
2023-05-30 11:17:59 +08:00
dennis zhuang
ab5dfd31ec feat: sql dialect for different protocols (#1631)
* feat: add SqlDialect to query context

* feat: use session in postgrel handlers

* chore: refactor sql dialect

* feat: use different dialects for different sql protocols

* feat: adds GreptimeDbDialect

* refactor: replace GenericDialect with GreptimeDbDialect

* feat: save user info to session

* fix: compile error

* fix: test
2023-05-30 09:52:35 +08:00
Yingwen
563ce59071 feat: Add request type and result code to grpc metrics (#1664) 2023-05-30 09:51:08 +08:00
LFC
51b23664f7 feat: update table metadata in lock (#1634)
* feat: using distributed lock to guard against the concurrent updating of table metadatas in region failover procedure

* fix: resolve PR comments

* fix: resolve PR comments
2023-05-30 08:59:14 +08:00
Ruihang Xia
9e21632f23 fix: clippy warning (#1669)
* fix: clippy warning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* restore the removed common sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-30 08:55:24 +08:00
Ruihang Xia
b27c569ae0 refactor: add scan_to_stream() to Table trait to postpone the stream generation (#1639)
* add scan_to_stream to Table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl parquet stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reorganise adapters

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement scan_to_stream for mito table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add location info

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: table scan

* UT pass

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl project record batch

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix information schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove one todo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix errors generated by merge commit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add output_ordering method to record batch stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix rustfmt

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* enhance error types

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <mrsatangel@gmail.com>
2023-05-29 20:03:47 +08:00
Weny Xu
0eaae634fa fix: invalidate table route cache (#1663) 2023-05-29 18:49:23 +08:00
JeremyHi
8b9b5a0d3a feat: broadcast with mailbox (#1661)
feat: broad with mailbox
2023-05-29 15:11:50 +08:00
Lei, HUANG
78fab08b51 feat: window inferer (#1648)
* feat: window inferer

* doc: add some doc

* test: add a long missing unit test case for windowed reader

* add more tests

* fix: some CR comments
2023-05-29 14:41:00 +08:00
Weny Xu
d072947ef2 refactor: move code out of loop (#1657) 2023-05-27 13:31:13 +08:00
Weny Xu
4094907c09 fix: fix type casting issue (#1652)
* fix: fix type casting issue

* chore: apply suggestion from CR
2023-05-27 00:17:56 +08:00
Ruihang Xia
0da94930d5 feat: impl literal only PromQL query (#1641)
* refactor EmptyMetric to accept expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl literal only query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add empty line

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* support literal on HTTP gateway

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy (again)

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-26 23:27:03 +08:00
fys
f0a519b71b chore: reduce the number of requests for meta (#1647) 2023-05-26 17:25:18 +08:00
Yingwen
89366ba939 refactor: Holds histogram in the timer to avoid clone labels if possible (#1653)
* feat: use Histogram struct to impl Timer

* fix: fix compile errors

* feat: downgrade metrics-process

* fix: compiler errors
2023-05-26 17:12:03 +08:00
Yingwen
c042723fc9 feat: Record process metrics (#1646)
* feat(servers): Export process metrics

* chore: update metrics related deps to get the process-metrics printed

The latest process-metrics crate depends on metrics 0.21, we use metrics
0.20. This cause the process-metrics crate doesn't record the metrics
  when use metrics macros
2023-05-26 11:51:01 +08:00
Weny Xu
732784d3f8 feat: support to load missing region (#1651)
* feat: support to load missing region

* Update src/mito/src/table.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-05-26 03:30:46 +00:00
Ning Sun
332b3677ac feat: add metrics for ingested row count (#1645) 2023-05-26 10:57:27 +08:00
Weny Xu
6cd634b105 fix: fix typo (#1649) 2023-05-26 10:24:12 +08:00
Yinnan Yao
cd1ccb110b fix: install python3-pip in Dockerfile (#1644)
When I use docker build to build the image, I get an error that pip is missing. Add install python3-pip in Dockerfile.

Fixes: #1643

Signed-off-by: yaoyinnan <yaoyinnan@foxmail.com>
2023-05-25 23:00:39 +08:00
Weny Xu
953793143b feat: add invalidate table cache handler (#1633)
* feat: add invalidate table cache handler

* feat: setup invalidate table cache handler for frontend

* test: add test for invalidate table cache handler

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* fix: fix report_interval unit
2023-05-25 17:45:45 +08:00
Yingwen
8a7998cd25 feat(servers): Add metrics based on axum's example (#1638)
Log on error
2023-05-25 17:31:48 +09:00
LFC
eb24bab5df refactor: set the filters for testing logs (#1637)
minor: set the filters for testing logs
2023-05-25 11:07:57 +08:00
fys
8f9e9686fe chore: add metrics for table route getting (#1636)
chore: add metrics for getting table_route
2023-05-25 10:02:59 +08:00
shuiyisong
61a32d1b9c chore: add boxed error for custom error map (#1635)
* chore: add boxed error for custom error map

* chore: fix typo

* chore: add comment & update display msg

* chore: change name to other error
2023-05-24 12:54:52 +00:00
Weny Xu
74a6517bd0 refactor: move the common part of the heartbeat response handler to common (#1627)
* refactor: move heartbeat response handler to common

* chore: apply suggestions from CR
2023-05-24 07:55:06 +00:00
fys
fa4a497d75 feat: add cache for catalog kv backend (#1592)
* feat: add kvbackend cache

* fix: cargo fmt
2023-05-24 15:07:29 +08:00
Ning Sun
ddca0307d1 feat: more configurable logging levels (#1630)
* feat: make logging level more configurable

* chore: resolve lint warnings

* fix: correct default level for h2

* chore: update text copy
2023-05-24 14:47:41 +08:00
Weny Xu
3dc45f1c13 feat: implement CloseRegionHandler (#1569)
* feat: implement CloseRegionHandler

* feat: register heartbeat response handlers

* test: add tests for heartbeat response handlers

* fix: drop table does not release regions

* chore: apply suggestion from CR

* fix: fix close region issue

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: modify method name and add log

* refactor: refactor HeartbeatResponseHandler

* chore: apply suggestion from CR

* refactor: remove close method from Region trait

* chore: apply suggestion from CR

* chore: remove PartialEq from CloseTableResult

* chore: apply suggestion from CR
2023-05-23 15:44:27 +08:00
dennis zhuang
7c55783e53 feat!: reorganize the storage layout (#1609)
* feat: adds data_home to DataOptions

* refactor: split out object store stuffs from datanode instance

* feat: move data_home into FileConfig

* refactor: object storage layers

* feat: adds datanode path to procedure paths

* feat: temp commit

* refactor: clean code

* fix: forgot files

* fix: forgot files

* Update src/common/test-util/src/ports.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update tests/runner/src/env.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: compile error

* chore: cr comments

* fix: dependencies order in cargo

* fix: data path in test

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-05-23 13:58:26 +08:00
shuiyisong
5b304fa692 chore: add grpc query interceptor (#1626) 2023-05-23 13:57:54 +08:00
Weny Xu
9f67ad8bce fix: fix doesn't release closed regions issue (#1596)
* fix: fix close region issue

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* refactor: remove close method from Region trait

* chore: remove PartialEq from CloseTableResult
2023-05-23 11:40:12 +08:00
Weny Xu
e646490d16 chore: fix code styling (#1623) 2023-05-23 10:09:34 +08:00
JeremyHi
1225edb065 refactor: move rpc's commons to common-meta (#1625) 2023-05-23 10:07:24 +08:00
Lei, HUANG
8e7ec4626b refactor: remove useless error (#1624)
* refactor: remove useless

* fix: remove useless error variant
2023-05-22 22:55:27 +08:00
LFC
f64527da22 feat: region failover procedure (#1558)
* feat: region failover procedure
2023-05-22 19:54:52 +08:00
Yingwen
6dbceb1ad5 feat: Trigger flush based on global write buffer size (#1585)
* feat(storage): Add AllocTracker

* feat(storage): flush request wip

* feat(storage): support global write buffer size

* fix(storage): Test and fix size based strategy

* test(storage): Test AllocTracker

* test(storage): Test pick_by_write_buffer_full

* docs: Add flush config example

* test(storage): Test schedule_engine_flush

* feat(storage): Add metrics for write buffer size

* chore(flush): Add log when triggering flush by global buffer

* chore(storage): track allocation in update_stats
2023-05-22 19:00:30 +08:00
Ning Sun
067c5ee7ce feat: time_zone variable for mysql connections (#1607)
* feat: add timezone info to query context

* feat: parse mysql compatible time zone string

* feat: add method to timestamp for rendering timezone aware string

* feat: use timezone from session for time string rendering

* refactor: use querycontectref

* feat: implement session/timezone variable read/write

* style: resolve toml format

* test: update tests

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* Update src/session/src/context.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* refactor: address review issues

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-05-22 18:30:23 +08:00
Yingwen
32ad358323 fix(table-procedure): Open table in RegisterCatalog state (#1617)
* fix(table-procedure): on_register_catalog should use open_table

* test: Test recover RegisterCatalog state

* test: Fix subprocedure does not execute in test

* feat(mito): adjust procedure log level

* refactor: rename execute_parent_procedure

execute_parent_procedure -> execute_until_suspended_or_done
2023-05-22 17:54:02 +08:00
Chuanle Chen
77497ca46a feat: support /api/v1/label/<label_name>/values from Prometheus (#1604)
* feat: support `/api/v1/label/<label_name>/values` from Prometheus

* chore: apply CR

* chore: apply CR
2023-05-22 07:24:12 +00:00
JeremyHi
e5a215de46 chore: truncate route-table (#1619) 2023-05-22 14:54:40 +08:00
Ruihang Xia
e5aad0f607 feat: distributed planner basic (#1599)
* basic skeleton

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change QueryEngineState's constructor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* install extension planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tidy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-22 11:48:03 +08:00
QuenKar
edf6c0bf48 refactor: add "table engine" to datanode heartbeat. (#1616)
refactor:add "table engine" to datanode heartbeat.
2023-05-22 10:09:32 +08:00
Ruihang Xia
c3eeda7d84 refactor(frontend): adjust code structure (#1615)
* move  to expr_factory

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move configs into service_config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move GrpcQueryHandler into distributed.rs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-20 02:09:20 +08:00
Lei, HUANG
82f2b34f4d fix: wal replay ignore manifest entries (#1612)
* fix: wal replay ignore manifest entries

* test: add ut
2023-05-19 18:12:44 +08:00
Vanish
8764ce7845 feat: add delete WAL in drop_region (#1577)
* feat: add delete WAL in drop_region

* chore: fix typo err.

* feat: mark all SSTs deleted and remove the region from StorageEngine's region map.

* test: add test_drop_region for StorageEngine.

* chore: make clippy happy

* fix: fix conflict

* chore: CR.

* chore: CR

* chore: fix clippy

* fix: temp file life time
2023-05-18 18:02:34 +08:00
localhost
d76ddc575f fix: meta admin API get catalog table name error (#1603) 2023-05-18 14:27:40 +08:00
Weny Xu
68dfea0cfd fix: fix datanode cannot start while failing to open tables (#1601) 2023-05-17 20:56:13 +08:00
fys
57c02af55b feat: change default selector in meta from "LeaseBased" to "LoadBased" (#1598)
* feat: change default selector from "LeaseBased" to "LoadBased"

* fix: ut
2023-05-17 17:48:13 +08:00
Lei, HUANG
e8c2222a76 feat: add WindowedReader (#1532)
* feat: add WindowedReader

* fix: some cr comments

* feat: filter memtable by timestamp range

* fix: add source in error variants

* fix: some CR comments

* refactor: filter memtable in MapIterWrapper

* fix: clippy
2023-05-17 17:34:29 +08:00
JeremyHi
eb95a9e78b fix: sequence out of range (#1597) 2023-05-17 14:43:54 +08:00
zyy17
4920836021 refactor: support parsing env list (#1595)
* refactor: support parse env list

* refactor: set 'multiple = true' for metasrv_addr cli option and remove duplicated parsing
2023-05-17 14:37:08 +08:00
Huaijin
715e1a321f feat: implement /api/v1/labels for prometheus (#1580)
* feat: implement /api/v1/labels for prometheus

* fix: only gather match[]

* chore: fix typo

* chore: fix typo

* chore: change style

* fix: suggestion

* fix: suggestion

* chore: typo

* fix: fmt

* fix: add more test
2023-05-17 03:56:22 +00:00
localhost
a6ec79ee30 chore: add a uniform prefix to the metrics using the official recommendation of (#1590) 2023-05-17 11:08:49 +08:00
Lei, HUANG
e70d49b9cf feat: memtable stats (#1591)
* feat: memtable stats

* chore: add tests for timestamp subtraction

* feat: add `Value:as_timestamp` method
2023-05-17 11:07:07 +08:00
Weny Xu
ca75a7b744 fix: remove region number validation (#1593)
* fix: remove region number validation

* Update src/mito/src/engine.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-05-17 09:23:56 +08:00
localhost
3330957896 chore: add fmt for statement query (#1588)
* chore: add fmt for statement query

* chore: add test for query display
2023-05-16 16:14:11 +08:00
WU Jingdi
fb1ac0cb9c feat: support user config manifest compression (#1579)
* feat: support user config manifest compression

* chore: change style

* chore: enhance test
2023-05-16 11:02:59 +08:00
Niwaka
856ab5bea7 feat: make RepeatedTask invoke remove_outdated_meta method (#1578)
* feat: make RepeatedTask invoke remove_outdated_meta method

* fix: typo

* chore: improve error message
2023-05-16 10:21:35 +08:00
Eugene Tolbakov
122bd5f0ab feat(tql): add initial implementation for explain & analyze (#1427)
* feat(tql): resolve conflicts after merge,formatting and clippy issues, add sqlness tests, adjust explain with start, end, step

* feat(tql): adjust sqlness assertions
2023-05-16 07:28:24 +08:00
Ruihang Xia
2fd1075c4f fix: uses nextest in the Release CI (#1582)
* fix: uses nextest in the Release CI

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* install nextest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update Makefile

Co-authored-by: zyy17 <zyylsxm@gmail.com>

* update workflow yaml

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: zyy17 <zyylsxm@gmail.com>
2023-05-15 21:09:09 +08:00
fys
027707d969 feat: support frontend-meta heartbeat (#1555)
* feat: support frontend heartbeat

* fix: typo "reponse" -> "response"

* add ut

* enable start heartbeat task

* chore: frontend id is specified by metasrv, not in the frontend startup parameter

* fix typo

* self-cr

* cr

* cr

* cr

* remove unnecessary headers

* use the member id in the header as the node id
2023-05-15 09:54:45 +00:00
Yingwen
8d54d40b21 feat: Add FlushPicker to flush regions periodically (#1559)
* feat: Add FlushPicker

* feat(storage): Add close to StorageEngine

* style(storage): fix clippy

* feat(storage): Close regions in StorageEngine::close

* chore(storage): Clear requests on scheduler stop

* test(storage): Test flush picker

* feat(storage): Add metrics for auto flush

* feat(storage): Add flush reason and record it in metrics

* feat: Expose flush config

docs(config): Update config example

* refactor(storage): Run auto flush task in FlushScheduler

* refactor(storage): Add FlushItem trait to make FlushPicker easy to test
2023-05-15 17:29:28 +08:00
Ning Sun
497b1f9dc9 feat: metrics for storage engine (#1574)
* feat: add storage engine region count gauge

* test: remove catalog metrics because we can't get a correct number

* feat: add metrics for log store write and compaction

* fix: address review issues
2023-05-15 15:22:00 +08:00
LFC
4ae0b5e185 test: move instances tests to "tests-integration" (#1573)
* test: move standalone and distributed instances tests from "frontend" crate to "tests-integration"

* fix: resolve PR comments
2023-05-15 12:00:43 +08:00
Lei, HUANG
cfcfc72681 refactor: remove version column (#1576) 2023-05-15 11:03:37 +08:00
Weny Xu
66903d42e1 feat: implement OpenTableHandler (#1567)
* feat: implement OpenTableHandler

* chore: apply suggestion from CR

* chore: apply suggestion from CR
2023-05-15 10:47:28 +08:00
zyy17
4fc173acf0 refactor: support layered configuration (#1535)
* refactor: add a layered configuration by using config-rs

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add 'env_var_prefix' for 'load_options()' and remove duplicate default construction in frontend

* refactor: add test_config_precedence_order in standalone

* refactor: add 'test_config_precedence_order()' test case in metasrv

* refactor: add 'test_config_precedence_order()' test case in datanode

* refactor: refine the naming '*_env_var_*' -> '*_env_vars_*'

* refactor: fix clippy error

* refactor: refine error naming 'LoadConfig' -> 'LoadLayeredConfig' and add Location

* refactor: move 'env_vars_prefix' to clap options

* fix: use '__' as envrionment variables separator and simplify load_layered_options()

* refactor: derive 'Default' for StartCommand and use default function to simplify the test cases

* fix: clippy error

* chore: update comments

* chore(deps): update deps info

* refactor(naming): 'env_vars_prefix' -> 'env_prefix'

* refactor: simplify the code

* refactor: change some argument type of 'load_layered_options()'

* refactor: simplify the code

* refactor: remove unnecessary 'clone()'

* refactor: add 'GREPTIMEDB_*' prefix for env_prefix

* refactor: modify configuration precedence order: cli > config file > environment variables > default values

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-05-13 22:37:47 +08:00
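
The layered-configuration commit above (#1535) settles on the precedence CLI > config file > environment variables > defaults, with a `GREPTIMEDB_*` prefix and `__` as the environment-variable separator. A minimal sketch of that precedence follows; the variable name and helper below are illustrative only, not the code from the PR:

```rust
use std::env;

// Toy resolution following the precedence described in #1535:
// CLI option > config file value > environment variable > built-in default.
fn resolve(cli: Option<String>, file: Option<String>, env_key: &str, default: &str) -> String {
    cli.or(file)
        .or_else(|| env::var(env_key).ok())
        .unwrap_or_else(|| default.to_string())
}

fn main() {
    // "__" separates path segments, e.g. GREPTIMEDB_DATANODE__LOGGING__LEVEL -> logging.level
    // (the exact variable name here is an assumption for illustration).
    let level = resolve(
        None,
        Some("debug".to_string()),
        "GREPTIMEDB_DATANODE__LOGGING__LEVEL",
        "info",
    );
    println!("effective logging.level = {level}");
}
```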
Huaijin
f9a4326461 fix: unwrap() None in NULL value exist multi-field table during prometheus query_range (#1571)
* fix: NULL value in multi-field table meet error in prometheus query_range

* fix: suggestion

* chore: change style
2023-05-12 17:36:03 +08:00
Ning Sun
4151d7a8ea fix: allow cross-schema query on information_schema (#1568) 2023-05-11 16:54:28 +08:00
LFC
a4e106380b fix: refreshing Dashboard returns 404 (#1562)
* fix: refreshing Dashboard returns 404

* fix: refreshing Dashboard returns 404
2023-05-11 15:08:20 +08:00
Ruihang Xia
7a310cb056 docs: rfc of distributed planner (#1554)
* docs: rfc of distributed planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update docs/rfcs/2023-05-09-distributed-planner.md

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-05-11 14:45:32 +08:00
LFC
8fef32f8ef feat: enable tokio console in cluster mode (#1512)
* feat: enable tokio console subscriber

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments
2023-05-11 14:35:06 +08:00
Ning Sun
8c85fdec29 fix: correct schema/table count in catalog metrics (#1565) 2023-05-11 14:20:42 +08:00
ZonaHe
84f6b46437 feat: update dashboard to v0.2.5 (#1563)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-05-11 13:55:42 +08:00
Weny Xu
44aef6fcbd feat(datanode): iImplement the heartbeat response handler (#1547)
* feat(datanode): implement instruction handler

* chore: apply suggestion from CR

* refactor: refactor heartbeat response handler
2023-05-11 09:27:13 +08:00
JeremyHi
7a9dd5f0c8 feat: ignore mailbox message into stat (#1560) 2023-05-10 18:06:04 +08:00
WU Jingdi
486bb2ee8e feat: Compress manifest and checkpoint (#1497)
* feat: Compress manifest and checkpoint

* refactor: use file extention infer compression type

* chore: apply suggestions from CR

* Update src/storage/src/manifest/storage.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: CR advices

* chore: Fix bugs, strengthen test

* chore: Fix CR, strengthen test

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-05-10 07:53:06 +00:00
Weny Xu
020c55e260 refactor: change mailbox_messages to mailbox_message (#1557) 2023-05-10 07:17:11 +00:00
Yingwen
ee3e1dbdaa feat: Use LocalScheduler framework to implement FlushScheduler (#1531)
* test: simplify countdownlatch

* feat: impl Drop for LocalScheduler

* feat(storage): Impl FlushRequest and FlushHandler

* feat(storage): Use scheduler to handle flush job

* chore(storage): remove unused code

* feat(storage): Use new type pattern for RegionMap

* feat(storage): Remove on_success callback

* feat(storage): Address CR comments and add some metrics to flush
2023-05-10 07:16:51 +00:00
dennis zhuang
aa0c5b888c docs: update readme (#1549)
* docs: update readme

* Update README.md

Co-authored-by: Ning Sun <classicning@gmail.com>

* chore: cr comments

* chore: cr comments

---------

Co-authored-by: Ning Sun <classicning@gmail.com>
2023-05-10 14:36:07 +08:00
550 changed files with 29896 additions and 11305 deletions

View File

@@ -9,3 +9,9 @@ GT_OSS_BUCKET=OSS bucket
GT_OSS_ACCESS_KEY_ID=OSS access key id
GT_OSS_ACCESS_KEY=OSS access key
GT_OSS_ENDPOINT=OSS endpoint
# Settings for azblob test
GT_AZBLOB_CONTAINER=AZBLOB container
GT_AZBLOB_ACCOUNT_NAME=AZBLOB account name
GT_AZBLOB_ACCOUNT_KEY=AZBLOB account key
GT_AZBLOB_ENDPOINT=AZBLOB endpoint
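
These `GT_AZBLOB_*` settings back the newly enabled Azure Blob Storage tests (#1765 in the commit list above). As a rough, hypothetical sketch of how a test harness can gate itself on them (the repository's real test_util code is not shown here), the variables might be read like this:

```rust
use std::env;

// Hypothetical helper: collect the azblob test settings, or return None so the
// caller can skip the test when the environment is not configured.
fn azblob_test_config() -> Option<(String, String, String, String)> {
    Some((
        env::var("GT_AZBLOB_CONTAINER").ok()?,
        env::var("GT_AZBLOB_ACCOUNT_NAME").ok()?,
        env::var("GT_AZBLOB_ACCOUNT_KEY").ok()?,
        env::var("GT_AZBLOB_ENDPOINT").ok()?,
    ))
}

fn main() {
    match azblob_test_config() {
        Some((container, ..)) => println!("azblob tests enabled against container {container}"),
        None => println!("azblob settings missing; azblob tests would be skipped"),
    }
}
```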

View File

@@ -141,6 +141,7 @@ jobs:
- name: Run sqlness
run: cargo sqlness && ls /tmp
- name: Upload sqlness logs
if: always()
uses: actions/upload-artifact@v3
with:
name: sqlness-logs

View File

@@ -7,20 +7,29 @@ on:
- cron: '0 0 * * 1'
# Mannually trigger only builds binaries.
workflow_dispatch:
inputs:
dry_run:
description: 'Skip docker push and release steps'
type: boolean
default: true
skip_test:
description: 'Do not run tests during build'
type: boolean
default: false
name: Release
env:
RUST_TOOLCHAIN: nightly-2023-05-03
SCHEDULED_BUILD_VERSION_PREFIX: v0.3.0
SCHEDULED_BUILD_VERSION_PREFIX: v0.4.0
SCHEDULED_PERIOD: nightly
CARGO_PROFILE: nightly
# Controls whether to run tests, include unit-test, integration-test and sqlness.
DISABLE_RUN_TESTS: false
DISABLE_RUN_TESTS: ${{ inputs.skip_test || false }}
jobs:
build-macos:
@@ -30,22 +39,22 @@ jobs:
# The file format is greptime-<os>-<arch>
include:
- arch: aarch64-apple-darwin
os: macos-latest
os: self-hosted
file: greptime-darwin-arm64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
os: self-hosted
file: greptime-darwin-amd64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: aarch64-apple-darwin
os: macos-latest
os: self-hosted
file: greptime-darwin-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
os: self-hosted
file: greptime-darwin-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
@@ -84,13 +93,14 @@ jobs:
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
targets: ${{ matrix.arch }}
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Output package versions
run: protoc --version ; cargo version ; rustc --version ; gcc --version ; g++ --version
# - name: Run tests
# if: env.DISABLE_RUN_TESTS == 'false'
# run: make unit-test integration-test sqlness-test
- name: Run tests
if: env.DISABLE_RUN_TESTS == 'false'
run: make test sqlness-test
- name: Run cargo build
if: contains(matrix.arch, 'darwin') || contains(matrix.opts, 'pyo3_backend') == false
@@ -200,13 +210,14 @@ jobs:
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
targets: ${{ matrix.arch }}
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Output package versions
run: protoc --version ; cargo version ; rustc --version ; gcc --version ; g++ --version
- name: Run tests
if: env.DISABLE_RUN_TESTS == 'false'
run: make unit-test integration-test sqlness-test
run: make test sqlness-test
- name: Run cargo build
if: contains(matrix.arch, 'darwin') || contains(matrix.opts, 'pyo3_backend') == false
@@ -279,7 +290,7 @@ jobs:
name: Build docker image
needs: [build-linux, build-macos]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
if: github.repository == 'GreptimeTeam/greptimedb' && !(inputs.dry_run || false)
steps:
- name: Checkout sources
uses: actions/checkout@v3
@@ -292,7 +303,7 @@ jobs:
- name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
shell: bash
if: github.event_name == 'schedule'
if: github.event_name != 'push'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }}
@@ -300,7 +311,7 @@ jobs:
- name: Configure tag # If the release tag is v0.1.0, then the image version tag will be 0.1.0.
shell: bash
if: github.event_name != 'schedule'
if: github.event_name == 'push'
run: |
VERSION=${{ github.ref_name }}
echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV
@@ -365,7 +376,7 @@ jobs:
# Release artifacts only when all the artifacts are built successfully.
needs: [build-linux, build-macos, docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
if: github.repository == 'GreptimeTeam/greptimedb' && !(inputs.dry_run || false)
steps:
- name: Checkout sources
uses: actions/checkout@v3
@@ -375,7 +386,7 @@ jobs:
- name: Configure scheduled build version # the version would be ${SCHEDULED_BUILD_VERSION_PREFIX}-${SCHEDULED_PERIOD}-YYYYMMDD, like v0.2.0-nigthly-20230313.
shell: bash
if: github.event_name == 'schedule'
if: github.event_name != 'push'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-${{ env.SCHEDULED_PERIOD }}-$buildTime
@@ -393,13 +404,13 @@ jobs:
fi
- name: Create scheduled build git tag
if: github.event_name == 'schedule'
if: github.event_name != 'push'
run: |
git tag ${{ env.SCHEDULED_BUILD_VERSION }}
- name: Publish scheduled release # configure the different release title and tags.
uses: ncipollo/release-action@v1
if: github.event_name == 'schedule'
if: github.event_name != 'push'
with:
name: "Release ${{ env.SCHEDULED_BUILD_VERSION }}"
prerelease: ${{ env.prerelease }}
@@ -411,7 +422,7 @@ jobs:
- name: Publish release
uses: ncipollo/release-action@v1
if: github.event_name != 'schedule'
if: github.event_name == 'push'
with:
name: "${{ github.ref_name }}"
prerelease: ${{ env.prerelease }}
@@ -424,7 +435,7 @@ jobs:
name: Push docker image to alibaba cloud container registry
needs: [docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
if: github.repository == 'GreptimeTeam/greptimedb' && !(inputs.dry_run || false)
continue-on-error: true
steps:
- name: Checkout sources
@@ -445,7 +456,7 @@ jobs:
- name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
shell: bash
if: github.event_name == 'schedule'
if: github.event_name != 'push'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }}
@@ -453,7 +464,7 @@ jobs:
- name: Configure tag # If the release tag is v0.1.0, then the image version tag will be 0.1.0.
shell: bash
if: github.event_name != 'schedule'
if: github.event_name == 'push'
run: |
VERSION=${{ github.ref_name }}
echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV

.gitignore (vendored): 5 changed lines
View File

@@ -1,6 +1,8 @@
# Generated by Cargo
# will have compiled files and executables
/target/
# also ignore if it's a symbolic link
/target
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
@@ -39,3 +41,6 @@ benchmarks/data
# dashboard files
!/src/servers/dashboard/VERSION
/src/servers/dashboard/*
# Vscode workspace
*.code-workspace

Cargo.lock (generated): 2339 changed lines

File diff suppressed because it is too large.

View File

@@ -14,8 +14,10 @@ members = [
"src/common/grpc",
"src/common/grpc-expr",
"src/common/mem-prof",
"src/common/meta",
"src/common/procedure",
"src/common/procedure-test",
"src/common/pprof",
"src/common/query",
"src/common/recordbatch",
"src/common/runtime",
@@ -48,38 +50,40 @@ members = [
]
[workspace.package]
version = "0.2.0"
version = "0.4.0"
edition = "2021"
license = "Apache-2.0"
[workspace.dependencies]
arrow = { version = "37.0" }
arrow-array = "37.0"
arrow-flight = "37.0"
arrow-schema = { version = "37.0", features = ["serde"] }
arrow = { version = "40.0" }
arrow-array = "40.0"
arrow-flight = "40.0"
arrow-schema = { version = "40.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] }
# TODO(ruihang): use arrow-datafusion when it contains https://github.com/apache/arrow-datafusion/pull/6032
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "63e52dde9e44cac4b1f6c6e6b6bf6368ba3bd323" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "63e52dde9e44cac4b1f6c6e6b6bf6368ba3bd323" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "63e52dde9e44cac4b1f6c6e6b6bf6368ba3bd323" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "63e52dde9e44cac4b1f6c6e6b6bf6368ba3bd323" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "63e52dde9e44cac4b1f6c6e6b6bf6368ba3bd323" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "63e52dde9e44cac4b1f6c6e6b6bf6368ba3bd323" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "63e52dde9e44cac4b1f6c6e6b6bf6368ba3bd323" }
futures = "0.3"
futures-util = "0.3"
parquet = "37.0"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "4398d20c56d5f7939cc2960789cb1fa7dd18e6fe" }
itertools = "0.10"
parquet = "40.0"
paste = "1.0"
prost = "0.11"
rand = "0.8"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
snafu = { version = "0.7", features = ["backtraces"] }
sqlparser = "0.33"
sqlparser = "0.34"
tempfile = "3"
tokio = { version = "1.24.2", features = ["full"] }
tokio = { version = "1.28", features = ["full"] }
tokio-util = { version = "0.7", features = ["io-util", "compat"] }
tonic = { version = "0.9", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }

View File

@@ -33,13 +33,12 @@ docker-image: ## Build docker image.
##@ Test
.PHONY: unit-test
unit-test: ## Run unit test.
cargo test --workspace
test: nextest ## Run unit and integration tests.
cargo nextest run
.PHONY: integration-test
integration-test: ## Run integation test.
cargo test integration
.PHONY: nextest ## Install nextest tools.
nextest:
cargo --list | grep nextest || cargo install cargo-nextest --locked
.PHONY: sqlness-test
sqlness-test: ## Run sqlness test.

View File

@@ -100,64 +100,22 @@ Or if you built from docker:
docker run -p 4002:4002 -v "$(pwd):/tmp/greptimedb" greptime/greptimedb standalone start
```
For more startup options, greptimedb's **distributed mode** and information
about Kubernetes deployment, check our [docs](https://docs.greptime.com/).
Please see [the online document site](https://docs.greptime.com/getting-started/overview#install-greptimedb) for more installation options and [operations info](https://docs.greptime.com/user-guide/operations/overview).
### Connect
### Get started
1. Connect to GreptimeDB via standard [MySQL
client](https://dev.mysql.com/downloads/mysql/):
Read the [complete getting started guide](https://docs.greptime.com/getting-started/overview#connect) on our [official document site](https://docs.greptime.com/).
```
# The standalone instance listen on port 4002 by default.
mysql -h 127.0.0.1 -P 4002
```
2. Create table:
```SQL
CREATE TABLE monitor (
host STRING,
ts TIMESTAMP,
cpu DOUBLE DEFAULT 0,
memory DOUBLE,
TIME INDEX (ts),
PRIMARY KEY(host)) ENGINE=mito WITH(regions=1);
```
3. Insert some data:
```SQL
INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host1', 66.6, 1024, 1660897955000);
INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host2', 77.7, 2048, 1660897956000);
INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host3', 88.8, 4096, 1660897957000);
```
4. Query the data:
```SQL
SELECT * FROM monitor;
```
```TEXT
+-------+--------------------------+------+--------+
| host | ts | cpu | memory |
+-------+--------------------------+------+--------+
| host1 | 2022-08-19 16:32:35+0800 | 66.6 | 1024 |
| host2 | 2022-08-19 16:32:36+0800 | 77.7 | 2048 |
| host3 | 2022-08-19 16:32:37+0800 | 88.8 | 4096 |
+-------+--------------------------+------+--------+
3 rows in set (0.03 sec)
```
You can always cleanup test database by removing `/tmp/greptimedb`.
To write and query data, GreptimeDB is compatible with multiple [protocols and clients](https://docs.greptime.com/user-guide/client/overview).
## Resources
### Installation
- [Pre-built Binaries](https://github.com/GreptimeTeam/greptimedb/releases):
For Linux and macOS, you can easily download pre-built binaries that are ready to use. In most cases, downloading the version without PyO3 is sufficient. However, if you plan to run scripts in CPython (and use Python packages like NumPy and Pandas), you will need to download the version with PyO3 and install a Python with the same version as the Python in the PyO3 version. We recommend using virtualenv for the installation process to manage multiple Python versions.
- [Pre-built Binaries](https://greptime.com/download):
For Linux and macOS, you can easily download pre-built binaries including official releases and nightly builds that are ready to use.
In most cases, downloading the version without PyO3 is sufficient. However, if you plan to run scripts in CPython (and use Python packages like NumPy and Pandas), you will need to download the version with PyO3 and install a Python with the same version as the Python in the PyO3 version.
We recommend using virtualenv for the installation process to manage multiple Python versions.
- [Docker Images](https://hub.docker.com/r/greptime/greptimedb)(**recommended**): pre-built
Docker images, this is the easiest way to try GreptimeDB. By default it runs CPython script with `pyo3_backend` enabled.
- [`gtctl`](https://github.com/GreptimeTeam/gtctl): the command-line tool for
@@ -165,7 +123,7 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
### Documentation
- GreptimeDB [User Guide](https://docs.greptime.com/user-guide/concepts.html)
- GreptimeDB [User Guide](https://docs.greptime.com/user-guide/concepts/overview)
- GreptimeDB [Developer
Guide](https://docs.greptime.com/developer-guide/overview.html)
- GreptimeDB [internal code document](https://greptimedb.rs)

View File

@@ -9,6 +9,6 @@ arrow.workspace = true
clap = { version = "4.0", features = ["derive"] }
client = { path = "../src/client" }
indicatif = "0.17.1"
itertools = "0.10.5"
itertools.workspace = true
parquet.workspace = true
tokio.workspace = true

View File

@@ -26,7 +26,9 @@ use arrow::datatypes::{DataType, Float64Type, Int64Type};
use arrow::record_batch::RecordBatch;
use clap::Parser;
use client::api::v1::column::Values;
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest};
use client::api::v1::{
Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, InsertRequests,
};
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
@@ -107,8 +109,12 @@ async fn write_data(
columns,
row_count,
};
let requests = InsertRequests {
inserts: vec![request],
};
let now = Instant::now();
db.insert(request).await.unwrap();
db.insert(requests).await.unwrap();
let elapsed = now.elapsed();
total_rpc_elapsed_ms += elapsed.as_millis();
progress_bar.inc(row_count as _);
@@ -364,7 +370,7 @@ fn create_table_expr() -> CreateTableExpr {
primary_keys: vec!["VendorID".to_string()],
create_if_not_exists: false,
table_options: Default::default(),
region_ids: vec![0],
region_numbers: vec![0],
table_id: None,
engine: "mito".to_string(),
}

View File

@@ -24,7 +24,8 @@ tcp_nodelay = true
# WAL options, see `standalone.example.toml`.
[wal]
dir = "/tmp/greptimedb/wal"
# WAL data directory
# dir = "/tmp/greptimedb/wal"
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
@@ -34,7 +35,9 @@ sync_write = false
# Storage options, see `standalone.example.toml`.
[storage]
type = "File"
data_dir = "/tmp/greptimedb/data/"
data_home = "/tmp/greptimedb/"
# TTL for all tables. Disabled by default.
# global_ttl = "7d"
# Compaction options, see `standalone.example.toml`.
[storage.compaction]
@@ -48,16 +51,31 @@ max_purge_tasks = 32
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
gc_duration = '10m'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
# Storage flush options
[storage.flush]
# Max inflight flush tasks.
max_flush_tasks = 8
# Default write buffer size for a region.
region_write_buffer_size = "32MB"
# Interval to check whether a region needs flush.
picker_schedule_interval = "5m"
# Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
# Global write buffer size for all regions.
global_write_buffer_size = "1GB"
# Procedure storage options, see `standalone.example.toml`.
[procedure]
max_retry_times = 3
retry_delay = "500ms"
# Log options, see `standalone.example.toml`
[logging]
dir = "/tmp/greptimedb/logs"
level = "info"
# Log options
# [logging]
# Specify logs directory.
# dir = "/tmp/greptimedb/logs"
# Specify the log level [info | debug | error | warn]
# level = "info"

View File

@@ -58,6 +58,6 @@ connect_timeout_millis = 5000
tcp_nodelay = true
# Log options, see `standalone.example.toml`
[logging]
dir = "/tmp/greptimedb/logs"
level = "info"
# [logging]
# dir = "/tmp/greptimedb/logs"
# level = "info"

View File

@@ -15,6 +15,6 @@ selector = "LeaseBased"
use_memory_store = false
# Log options, see `standalone.example.toml`
[logging]
dir = "/tmp/greptimedb/logs"
level = "info"
# [logging]
# dir = "/tmp/greptimedb/logs"
# level = "info"

View File

@@ -78,8 +78,8 @@ addr = "127.0.0.1:4004"
# WAL options.
[wal]
# WAL data directory.
dir = "/tmp/greptimedb/wal"
# WAL data directory
# dir = "/tmp/greptimedb/wal"
# WAL file size in bytes.
file_size = "1GB"
# WAL purge threshold in bytes.
@@ -96,7 +96,9 @@ sync_write = false
# Storage type.
type = "File"
# Data directory, "/tmp/greptimedb/data" by default.
data_dir = "/tmp/greptimedb/data/"
data_home = "/tmp/greptimedb/"
# TTL for all tables. Disabled by default.
# global_ttl = "7d"
# Compaction options.
[storage.compaction]
@@ -113,10 +115,23 @@ max_purge_tasks = 32
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
gc_duration = '10m'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
# Storage flush options
[storage.flush]
# Max inflight flush tasks.
max_flush_tasks = 8
# Default write buffer size for a region.
region_write_buffer_size = "32MB"
# Interval to check whether a region needs flush.
picker_schedule_interval = "5m"
# Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
# Global write buffer size for all regions.
global_write_buffer_size = "1GB"
# Procedure storage options.
[procedure]
# Procedure max retry time.
@@ -125,8 +140,8 @@ max_retry_times = 3
retry_delay = "500ms"
# Log options
[logging]
# [logging]
# Specify logs directory.
dir = "/tmp/greptimedb/logs"
# dir = "/tmp/greptimedb/logs"
# Specify the log level [info | debug | error | warn]
level = "debug"
# level = "info"

View File

@@ -12,7 +12,9 @@ RUN apt-get update && apt-get install -y \
pkg-config \
python3 \
python3-dev \
&& pip install pyarrow
python3-pip \
&& pip3 install --upgrade pip \
&& pip3 install pyarrow
# Install Rust.
SHELL ["/bin/bash", "-c"]

View File

@@ -0,0 +1,137 @@
---
Feature Name: distributed-planner
Tracking Issue: TBD
Date: 2023-05-09
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
Distributed Planner
-------------------
# Summary
Enhance the logical planner to be aware of the distributed, multi-region table topology, so that queries can be executed in a "push computation down" manner rather than the current "pull data up" manner.
# Motivation
Querying distributively can leverage GreptimeDB's architecture to process datasets that exceed the capacity of a single node, or accelerate query execution by running it in parallel. This task includes two sub-tasks:
- Be able to transform the plan to push as much computation down to the data source as possible.
- Be able to handle pipeline breakers (like `Join` or `Sort`) on multiple computation nodes.
This is a relatively complex topic. To keep this RFC focused, I'll concentrate on the first one.
# Details
## Background: Partition and Region
GreptimeDB supports table partitioning, where the partition rule is set during table creation. Each partition can be further divided into one or more physical storage units known as "regions". Both partitions and regions are divided based on rows:
``` text
┌────────────────────────────────────┐
│ │
│ Table │
│ │
└─────┬────────────┬────────────┬────┘
│ │ │
│ │ │
┌─────▼────┐ ┌─────▼────┐ ┌─────▼────┐
│ Region 1 │ │ Region 2 │ │ Region 3 │
└──────────┘ └──────────┘ └──────────┘
Row 1~10 Row 11~20 Row 21~30
```
Generally speaking, the region is the minimum unit of data distribution, and we can also use it as the unit for distributing computation. This greatly simplifies the routing logic of the distributed planner: always schedule the computation to the node that currently opens the corresponding region. It also makes it easy to scale out more nodes for computing, since GreptimeDB's data is persisted on a shared storage backend like S3. But this is a bit beyond the scope of this specific topic.
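As a rough illustration of that routing rule, here is a minimal sketch; the names `RegionId`, `NodeId`, and `route` are illustrative placeholders, not an existing API in GreptimeDB:
``` rust
use std::collections::HashMap;

type RegionId = u64;
type NodeId = u64;

/// Route a region's sub-computation to the datanode that currently opens it.
fn route(region_to_node: &HashMap<RegionId, NodeId>, region: RegionId) -> Option<NodeId> {
    region_to_node.get(&region).copied()
}

fn main() {
    // Regions 1 and 2 live on node 42, region 3 on node 7.
    let placement = HashMap::from([(1, 42), (2, 42), (3, 7)]);
    assert_eq!(route(&placement, 3), Some(7));
}
```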
## Background: Commutativity
Commutativity is an attribute that describes whether two operations can exchange their application order: $P1(P2(R)) \Leftrightarrow P2(P1(R))$. If the equation holds, we can transform one expression into another form without changing its result. This is useful for rewriting SQL expressions, and is the theoretical basis of this RFC.
Take this SQL as an example:
``` sql
SELECT a FROM t WHERE a > 10;
```
As we know, projection and filter are commutative (todo: latex), so the query can be translated into the following two equivalent plan trees:
```text
┌─────────────┐ ┌─────────────┐
│Projection(a)│ │Filter(a>10) │
└──────▲──────┘ └──────▲──────┘
│ │
┌──────┴──────┐ ┌──────┴──────┐
│Filter(a>10) │ │Projection(a)│
└──────▲──────┘ └──────▲──────┘
│ │
┌──────┴──────┐ ┌──────┴──────┐
│ TableScan │ │ TableScan │
└─────────────┘ └─────────────┘
```
## Merge Operation
This RFC proposes adding a new expression node, `MergeScan`, to merge results from several regions in the frontend. It wraps the abstraction of remote data and execution, and exposes a `TableScan` interface to the upper level.
``` text
┌───────┼───────┐
│ │ │
│ ┌──┴──┐ │
│ └──▲──┘ │
│ │ │
│ ┌──┴──┐ │
│ └──▲──┘ │ ┌─────────────────────────────┐
│ │ │ │ │
│ ┌────┴────┐ │ │ ┌──────────┐ ┌───┐ ┌───┐ │
│ │MergeScan◄──┼────┤ │ Region 1 │ │ │ .. │ │ │
│ └─────────┘ │ │ └──────────┘ └───┘ └───┘ │
│ │ │ │
└─Frontend──────┘ └─Remote-Sources──────────────┘
```
This merge operation simply chains all the underlying remote data sources and returns `RecordBatch`es, just like a coalesce op. Each remote source is a gRPC query to a datanode via the Substrait logical plan interface; the sub-plan it carries is transformed and divided from the original query that comes to the frontend.
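A minimal sketch of that coalescing behavior, assuming a hypothetical `remote_source` that issues one gRPC query per region and yields record batches (placeholder types, not the actual frontend code):
``` rust
use futures::stream::{self, Stream, StreamExt};

struct RecordBatch; // placeholder for the real record batch type

/// One gRPC query against a single region on a datanode (stubbed out here).
fn remote_source(_region_id: u64) -> impl Stream<Item = RecordBatch> {
    stream::iter(Vec::<RecordBatch>::new())
}

/// MergeScan: chain every region's stream one after another, like a coalesce op.
fn merge_scan(region_ids: Vec<u64>) -> impl Stream<Item = RecordBatch> {
    stream::iter(region_ids).flat_map(remote_source)
}
```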
## Commutativity of MergeScan
Obviously, the position of `MergeScan` is the key to the distributed plan. The closer it sits to the underlying `TableScan`, the less computation is taken by the datanodes. Thus the goal is to pull the `MergeScan` up as far as possible. "Pull up" here means exchanging `MergeScan` with its parent node in the plan tree, so we should check the commutativity between the existing expression nodes and the `MergeScan`. Here I classify all the possibilities into five categories (a sketch of how they could be encoded follows the list):
- Commutative: $P1(P2(R)) \Leftrightarrow P2(P1(R))$
- filter
- projection
- operations that match the partition key
- Partial Commutative: $P1(P2(R)) \Leftrightarrow P1(P2(P1(R)))$
- $min(R) \rightarrow min(MERGE(min(R)))$
- $max(R) \rightarrow max(MERGE(max(R)))$
- Conditional Commutative: $P1(P2(R)) \Leftrightarrow P3(P2(P1(R)))$
- $count(R) \rightarrow sum(count(R))$
- Transformed Commutative: $P1(P2(R)) \Leftrightarrow P1(P3(R)) \Leftrightarrow P3(P1(R))$
- $avg(R) \rightarrow sum(R)/count(R)$
- Non-commutative
- sort
- join
- percentile
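To make the categories concrete, a purely illustrative encoding in Rust could look like the following; the names `Commutativity` and `Transformer` are assumptions of this sketch, not a decided API:
``` rust
use std::sync::Arc;

/// Placeholder for the planner's logical plan node type.
pub struct LogicalPlan;

/// Rewrites a node when pulling `MergeScan` up requires a new parent node.
pub trait Transformer: Send + Sync {
    fn transform(&self, node: &LogicalPlan) -> LogicalPlan;
}

/// Result of checking one plan node against `MergeScan`.
pub enum Commutativity {
    /// P1(P2(R)) <=> P2(P1(R)); e.g. filter, projection.
    Commutative,
    /// P1(P2(R)) <=> P1(P2(P1(R))); e.g. min, max.
    PartialCommutative,
    /// P1(P2(R)) <=> P3(P2(P1(R))); e.g. count -> sum(count).
    ConditionalCommutative(Arc<dyn Transformer>),
    /// P1(P2(R)) <=> P3(P1(R)); e.g. avg -> sum / count.
    TransformedCommutative(Arc<dyn Transformer>),
    /// Sort, join, percentile: stop pulling `MergeScan` up here.
    NonCommutative,
}
```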
## Steps to plan
After establishing the set of commutative relations for all expressions, we can begin transforming the logical plan. There are four steps:
- Add a merge node before table scan
- Evaluate commutativity in a bottom-up way, stopping at the first non-commutative node
- Divide the TableScan to scan over partitions
- Execute
First, insert the `MergeScan` on top of the bottom `TableScan` node. Then examine the commutativity starting from the `MergeScan` node and transform the plan tree based on the result. Stop this process at the first non-commutative node.
``` text
┌─────────────┐ ┌─────────────┐
│ Sort │ │ Sort │
└──────▲──────┘ └──────▲──────┘
│ │
┌─────────────┐ ┌──────┴──────┐ ┌──────┴──────┐
│ Sort │ │Projection(a)│ │ MergeScan │
└──────▲──────┘ └──────▲──────┘ └──────▲──────┘
│ │ │
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│Projection(a)│ │ MergeScan │ │Projection(a)│
└──────▲──────┘ └──────▲──────┘ └──────▲──────┘
│ │ │
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│ TableScan │ │ TableScan │ │ TableScan │
└─────────────┘ └─────────────┘ └─────────────┘
(a) (b) (c)
```
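A self-contained toy sketch of the same (a)→(c) pull-up, using a linear chain of nodes instead of a real plan tree; the node names and helpers are illustrative, not the planner's actual types:
``` rust
#[derive(Debug, PartialEq)]
enum Node {
    Sort,
    Projection,
    Filter,
    MergeScan,
    TableScan,
}

/// Only filter and projection commute with MergeScan in this toy model.
fn commutes_with_merge_scan(node: &Node) -> bool {
    matches!(node, Node::Projection | Node::Filter)
}

/// Plan is a chain from root (index 0) down to the leaf TableScan.
/// Insert MergeScan right above the TableScan, then swap it upward until
/// the first non-commutative parent (where step 2 stops).
fn pull_up_merge_scan(mut plan: Vec<Node>) -> Vec<Node> {
    let leaf = plan.len() - 1;
    assert_eq!(plan[leaf], Node::TableScan);
    plan.insert(leaf, Node::MergeScan);

    let mut pos = leaf; // MergeScan now sits right above the TableScan
    while pos > 0 && commutes_with_merge_scan(&plan[pos - 1]) {
        plan.swap(pos, pos - 1);
        pos -= 1;
    }
    plan
}

fn main() {
    // Sort -> Projection -> TableScan becomes Sort -> MergeScan -> Projection -> TableScan.
    let plan = vec![Node::Sort, Node::Projection, Node::TableScan];
    println!("{:?}", pull_up_merge_scan(plan));
}
```
In the real planner the plan is a tree rather than a chain, and the swap also has to handle the partial, conditional, and transformed categories by inserting extra nodes.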
Then, in the physical planning phase, convert the sub-tree below `MergeScan` into a remote query request and dispatch it to all the regions. The `MergeScan` then receives the results and feeds them to its parent node.
To keep the overall complexity down, any error in the procedure fails the entire query and cancels all other parts.
# Alternatives
## Spill
If we only consider the ability to process large datasets, we could enable DataFusion's spill feature to temporarily persist intermediate data to disk, like "swap" memory. But this would lead to very slow performance and large write amplification.
# Future Work
As described in the `Motivation` section, we can further explore the distributed planner at the physical execution level, by introducing mechanisms like Spark's shuffle to improve parallelism and reduce the stages caused by intermediate pipeline breakers.

View File

@@ -10,7 +10,7 @@ common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-time = { path = "../common/time" }
datatypes = { path = "../datatypes" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "e8abf8241c908448dce595399e89c89a40d048bd" }
greptime-proto.workspace = true
prost.workspace = true
snafu = { version = "0.7", features = ["backtraces"] }
tonic.workspace = true

View File

@@ -41,7 +41,7 @@ pub enum Error {
))]
ConvertColumnDefaultConstraint {
column: String,
#[snafu(backtrace)]
location: Location,
source: datatypes::error::Error,
},
@@ -52,7 +52,7 @@ pub enum Error {
))]
InvalidColumnDefaultConstraint {
column: String,
#[snafu(backtrace)]
location: Location,
source: datatypes::error::Error,
},
}

View File

@@ -18,6 +18,10 @@ use datatypes::prelude::ConcreteDataType;
use datatypes::types::TimestampType;
use datatypes::value::Value;
use datatypes::vectors::VectorRef;
use greptime_proto::v1::ddl_request::Expr;
use greptime_proto::v1::greptime_request::Request;
use greptime_proto::v1::query_request::Query;
use greptime_proto::v1::{DdlRequest, QueryRequest};
use snafu::prelude::*;
use crate::error::{self, Result};
@@ -224,6 +228,38 @@ pub fn push_vals(column: &mut Column, origin_count: usize, vector: VectorRef) {
column.null_mask = null_mask.into_vec();
}
/// Returns the type name of the [Request].
pub fn request_type(request: &Request) -> &'static str {
match request {
Request::Inserts(_) => "inserts",
Request::Query(query_req) => query_request_type(query_req),
Request::Ddl(ddl_req) => ddl_request_type(ddl_req),
Request::Delete(_) => "delete",
}
}
/// Returns the type name of the [QueryRequest].
fn query_request_type(request: &QueryRequest) -> &'static str {
match request.query {
Some(Query::Sql(_)) => "query.sql",
Some(Query::LogicalPlan(_)) => "query.logical_plan",
Some(Query::PromRangeQuery(_)) => "query.prom_range",
None => "query.empty",
}
}
/// Returns the type name of the [DdlRequest].
fn ddl_request_type(request: &DdlRequest) -> &'static str {
match request.expr {
Some(Expr::CreateDatabase(_)) => "ddl.create_database",
Some(Expr::CreateTable(_)) => "ddl.create_table",
Some(Expr::Alter(_)) => "ddl.alter",
Some(Expr::DropTable(_)) => "ddl.drop_table",
Some(Expr::FlushTable(_)) => "ddl.flush_table",
None => "ddl.empty",
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;

View File

@@ -23,4 +23,5 @@ pub mod prometheus {
pub mod v1;
pub use greptime_proto;
pub use prost::DecodeError;

View File

@@ -4,6 +4,9 @@ version.workspace = true
edition.workspace = true
license.workspace = true
[features]
testing = []
[dependencies]
api = { path = "../api" }
arc-swap = "1.0"
@@ -14,6 +17,7 @@ backoff = { version = "0.4", features = ["tokio"] }
common-catalog = { path = "../common/catalog" }
common-error = { path = "../common/error" }
common-grpc = { path = "../common/grpc" }
common-meta = { path = "../common/meta" }
common-query = { path = "../common/query" }
common-recordbatch = { path = "../common/recordbatch" }
common-runtime = { path = "../common/runtime" }
@@ -28,6 +32,7 @@ key-lock = "0.1"
lazy_static = "1.4"
meta-client = { path = "../meta-client" }
metrics.workspace = true
moka = { version = "0.11", features = ["future"] }
parking_lot = "0.12"
regex = "1.6"
serde = "1.0"
@@ -35,10 +40,12 @@ serde_json = "1.0"
session = { path = "../session" }
snafu = { version = "0.7", features = ["backtraces"] }
storage = { path = "../storage" }
store-api = { path = "../store-api" }
table = { path = "../table" }
tokio.workspace = true
[dev-dependencies]
catalog = { path = ".", features = ["testing"] }
common-test-util = { path = "../common/test-util" }
chrono.workspace = true
log-store = { path = "../log-store" }

View File

@@ -32,18 +32,18 @@ pub enum Error {
source
))]
CompileScriptInternal {
#[snafu(backtrace)]
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to open system catalog table, source: {}", source))]
OpenSystemCatalog {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
#[snafu(display("Failed to create system catalog table, source: {}", source))]
CreateSystemCatalog {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -54,7 +54,7 @@ pub enum Error {
))]
CreateTable {
table_info: String,
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -94,7 +94,7 @@ pub enum Error {
#[snafu(display("Table engine not found: {}, source: {}", engine_name, source))]
TableEngineNotFound {
engine_name: String,
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -132,7 +132,7 @@ pub enum Error {
#[snafu(display("Failed to open table, table info: {}, source: {}", table_info, source))]
OpenTable {
table_info: String,
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -147,13 +147,13 @@ pub enum Error {
#[snafu(display("Failed to read system catalog table records"))]
ReadSystemCatalog {
#[snafu(backtrace)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to create recordbatch, source: {}", source))]
CreateRecordBatch {
#[snafu(backtrace)]
location: Location,
source: common_recordbatch::error::Error,
},
@@ -162,7 +162,7 @@ pub enum Error {
source
))]
InsertCatalogRecord {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -173,7 +173,7 @@ pub enum Error {
))]
DeregisterTable {
request: DeregisterTableRequest,
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -182,69 +182,42 @@ pub enum Error {
#[snafu(display("Failed to scan system catalog table, source: {}", source))]
SystemCatalogTableScan {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
#[snafu(display("Failure during SchemaProvider operation, source: {}", source))]
SchemaProviderOperation {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("{source}"))]
Internal {
#[snafu(backtrace)]
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to execute system catalog table scan, source: {}", source))]
SystemCatalogTableScanExec {
#[snafu(backtrace)]
location: Location,
source: common_query::error::Error,
},
#[snafu(display("Cannot parse catalog value, source: {}", source))]
InvalidCatalogValue {
#[snafu(backtrace)]
location: Location,
source: common_catalog::error::Error,
},
#[snafu(display("Failed to perform metasrv operation, source: {}", source))]
MetaSrv {
#[snafu(backtrace)]
location: Location,
source: meta_client::error::Error,
},
#[snafu(display("Invalid table info in catalog, source: {}", source))]
InvalidTableInfoInCatalog {
#[snafu(backtrace)]
location: Location,
source: datatypes::error::Error,
},
#[snafu(display("Failed to serialize or deserialize catalog entry: {}", source))]
CatalogEntrySerde {
#[snafu(backtrace)]
source: common_catalog::error::Error,
},
#[snafu(display("Illegal access to catalog: {} and schema: {}", catalog, schema))]
QueryAccessDenied { catalog: String, schema: String },
#[snafu(display(
"Failed to get region stats, catalog: {}, schema: {}, table: {}, source: {}",
catalog,
schema,
table,
source
))]
RegionStats {
catalog: String,
schema: String,
table: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Invalid system table definition: {err_msg}"))]
InvalidSystemTableDef { err_msg: String, location: Location },
@@ -257,9 +230,12 @@ pub enum Error {
#[snafu(display("Table schema mismatch, source: {}", source))]
TableSchemaMismatch {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
#[snafu(display("A generic error has occurred, msg: {}", msg))]
Generic { msg: String, location: Location },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -280,14 +256,12 @@ impl ErrorExt for Error {
| Error::EmptyValue { .. }
| Error::ValueDeserialize { .. } => StatusCode::StorageUnavailable,
Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::Generic { .. } | Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::ReadSystemCatalog { source, .. } | Error::CreateRecordBatch { source } => {
source.status_code()
}
Error::InvalidCatalogValue { source, .. } | Error::CatalogEntrySerde { source } => {
Error::ReadSystemCatalog { source, .. } | Error::CreateRecordBatch { source, .. } => {
source.status_code()
}
Error::InvalidCatalogValue { source, .. } => source.status_code(),
Error::TableExists { .. } => StatusCode::TableAlreadyExists,
Error::TableNotExist { .. } => StatusCode::TableNotFound,
@@ -301,17 +275,16 @@ impl ErrorExt for Error {
| Error::OpenTable { source, .. }
| Error::CreateTable { source, .. }
| Error::DeregisterTable { source, .. }
| Error::RegionStats { source, .. }
| Error::TableSchemaMismatch { source } => source.status_code(),
| Error::TableSchemaMismatch { source, .. } => source.status_code(),
Error::MetaSrv { source, .. } => source.status_code(),
Error::SystemCatalogTableScan { source } => source.status_code(),
Error::SystemCatalogTableScanExec { source } => source.status_code(),
Error::InvalidTableInfoInCatalog { source } => source.status_code(),
Error::SystemCatalogTableScan { source, .. } => source.status_code(),
Error::SystemCatalogTableScanExec { source, .. } => source.status_code(),
Error::InvalidTableInfoInCatalog { source, .. } => source.status_code(),
Error::CompileScriptInternal { source }
| Error::SchemaProviderOperation { source }
| Error::Internal { source } => source.status_code(),
Error::CompileScriptInternal { source, .. } | Error::Internal { source, .. } => {
source.status_code()
}
Error::Unimplemented { .. } | Error::NotSupported { .. } => StatusCode::Unsupported,
Error::QueryAccessDenied { .. } => StatusCode::AccessDenied,

View File

@@ -19,13 +19,19 @@ use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use datafusion::datasource::streaming::{PartitionStream, StreamingTable};
use common_error::prelude::BoxedError;
use common_query::physical_plan::PhysicalPlanRef;
use common_query::prelude::Expr;
use common_recordbatch::{RecordBatchStreamAdaptor, SendableRecordBatchStream};
use datatypes::schema::SchemaRef;
use futures_util::StreamExt;
use snafu::ResultExt;
use table::table::adapter::TableAdapter;
use table::TableRef;
use store_api::storage::ScanRequest;
use table::error::{SchemaConversionSnafu, TablesRecordBatchSnafu};
use table::{Result as TableResult, Table, TableRef};
use self::columns::InformationSchemaColumns;
use crate::error::{DatafusionSnafu, Result, TableSchemaMismatchSnafu};
use crate::error::Result;
use crate::information_schema::tables::InformationSchemaTables;
use crate::{CatalogProviderRef, SchemaProvider};
@@ -59,40 +65,21 @@ impl SchemaProvider for InformationSchemaProvider {
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let table = match name.to_ascii_lowercase().as_ref() {
TABLES => {
let inner = Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
));
Arc::new(
StreamingTable::try_new(inner.schema().clone(), vec![inner]).with_context(
|_| DatafusionSnafu {
msg: format!("Failed to get InformationSchema table '{name}'"),
},
)?,
)
}
COLUMNS => {
let inner = Arc::new(InformationSchemaColumns::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
));
Arc::new(
StreamingTable::try_new(inner.schema().clone(), vec![inner]).with_context(
|_| DatafusionSnafu {
msg: format!("Failed to get InformationSchema table '{name}'"),
},
)?,
)
}
let stream_builder = match name.to_ascii_lowercase().as_ref() {
TABLES => Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
)) as _,
COLUMNS => Arc::new(InformationSchemaColumns::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
)) as _,
_ => {
return Ok(None);
}
};
let table = TableAdapter::new(table).context(TableSchemaMismatchSnafu)?;
Ok(Some(Arc::new(table)))
Ok(Some(Arc::new(InformationTable::new(stream_builder))))
}
async fn table_exist(&self, name: &str) -> Result<bool> {
@@ -100,3 +87,83 @@ impl SchemaProvider for InformationSchemaProvider {
Ok(self.tables.contains(&normalized_name))
}
}
// TODO(ruihang): make it a more generic trait:
// https://github.com/GreptimeTeam/greptimedb/pull/1639#discussion_r1205001903
pub trait InformationStreamBuilder: Send + Sync {
fn to_stream(&self) -> Result<SendableRecordBatchStream>;
fn schema(&self) -> SchemaRef;
}
pub struct InformationTable {
stream_builder: Arc<dyn InformationStreamBuilder>,
}
impl InformationTable {
pub fn new(stream_builder: Arc<dyn InformationStreamBuilder>) -> Self {
Self { stream_builder }
}
}
#[async_trait]
impl Table for InformationTable {
fn as_any(&self) -> &dyn Any {
self
}
fn schema(&self) -> SchemaRef {
self.stream_builder.schema()
}
fn table_info(&self) -> table::metadata::TableInfoRef {
unreachable!("Should not call table_info() of InformationTable directly")
}
/// Scan the table and returns a SendableRecordBatchStream.
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
// limit can be used to reduce the amount scanned
// from the datasource as a performance optimization.
// If set, it contains the amount of rows needed by the `LogicalPlan`,
// The datasource should return *at least* this number of rows if available.
_limit: Option<usize>,
) -> TableResult<PhysicalPlanRef> {
unimplemented!()
}
async fn scan_to_stream(&self, request: ScanRequest) -> TableResult<SendableRecordBatchStream> {
let projection = request.projection;
let projected_schema = if let Some(projection) = &projection {
Arc::new(
self.schema()
.try_project(projection)
.context(SchemaConversionSnafu)?,
)
} else {
self.schema()
};
let stream = self
.stream_builder
.to_stream()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)?
.map(move |batch| {
batch.and_then(|batch| {
if let Some(projection) = &projection {
batch.try_project(projection)
} else {
Ok(batch)
}
})
});
let stream = RecordBatchStreamAdaptor {
schema: projected_schema,
stream: Box::pin(stream),
output_ordering: None,
};
Ok(Box::pin(stream))
}
}

View File

@@ -18,8 +18,10 @@ use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::{
SEMANTIC_TYPE_FIELD, SEMANTIC_TYPE_PRIMARY_KEY, SEMANTIC_TYPE_TIME_INDEX,
};
use common_error::prelude::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::RecordBatch;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::datasource::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
@@ -29,7 +31,8 @@ use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{StringVectorBuilder, VectorRef};
use snafu::ResultExt;
use crate::error::{CreateRecordBatchSnafu, Result};
use super::InformationStreamBuilder;
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::CatalogProviderRef;
pub(super) struct InformationSchemaColumns {
@@ -71,6 +74,32 @@ impl InformationSchemaColumns {
}
}
impl InformationStreamBuilder for InformationSchemaColumns {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
struct InformationSchemaColumnsBuilder {
schema: SchemaRef,
catalog_name: String,
@@ -168,7 +197,7 @@ impl DfPartitionStream for InformationSchemaColumns {
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema().clone();
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,

View File

@@ -16,8 +16,10 @@ use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_error::prelude::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::RecordBatch;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::datasource::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
@@ -27,7 +29,8 @@ use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder};
use snafu::ResultExt;
use table::metadata::TableType;
use crate::error::{CreateRecordBatchSnafu, Result};
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::information_schema::InformationStreamBuilder;
use crate::CatalogProviderRef;
pub(super) struct InformationSchemaTables {
@@ -62,6 +65,32 @@ impl InformationSchemaTables {
}
}
impl InformationStreamBuilder for InformationSchemaTables {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
/// Builds the `information_schema.TABLE` table row by row
///
/// Columns are based on <https://www.postgresql.org/docs/current/infoschema-columns.html>
@@ -160,7 +189,7 @@ impl DfPartitionStream for InformationSchemaTables {
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema().clone();
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,

View File

@@ -12,9 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(trait_upcasting)]
#![feature(assert_matches)]
use std::any::Any;
use std::collections::HashMap;
use std::fmt::{Debug, Formatter};
use std::sync::Arc;
@@ -244,6 +246,8 @@ pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> (u64, Vec<Reg
let region_numbers = &table.table_info().meta.region_numbers;
region_number += region_numbers.len() as u64;
let engine = &table.table_info().meta.engine;
match table.region_stats() {
Ok(stats) => {
let stats = stats.into_iter().map(|stat| RegionStat {
@@ -254,6 +258,7 @@ pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> (u64, Vec<Reg
table_name: table_name.clone(),
}),
approximate_bytes: stat.disk_usage_bytes as i64,
attrs: HashMap::from([("engine_name".to_owned(), engine.clone())]),
..Default::default()
});

View File

@@ -266,6 +266,7 @@ impl LocalCatalogManager {
schema_name: t.schema_name.clone(),
table_name: t.table_name.clone(),
table_id: t.table_id,
region_numbers: vec![0],
};
let engine = self
.engine_manager

View File

@@ -20,6 +20,9 @@ pub(crate) const METRIC_CATALOG_MANAGER_CATALOG_COUNT: &str = "catalog.catalog_c
pub(crate) const METRIC_CATALOG_MANAGER_SCHEMA_COUNT: &str = "catalog.schema_count";
pub(crate) const METRIC_CATALOG_MANAGER_TABLE_COUNT: &str = "catalog.table_count";
pub(crate) const METRIC_CATALOG_KV_REMOTE_GET: &str = "catalog.kv.get.remote";
pub(crate) const METRIC_CATALOG_KV_GET: &str = "catalog.kv.get";
#[inline]
pub(crate) fn db_label(catalog: &str, schema: &str) -> (&'static str, String) {
(METRIC_DB_LABEL, build_db_string(catalog, schema))

View File

@@ -12,11 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::fmt::Debug;
use std::pin::Pin;
use std::sync::Arc;
pub use client::MetaKvBackend;
pub use client::{CachedMetaKvBackend, MetaKvBackend};
use futures::Stream;
use futures_util::StreamExt;
pub use manager::{RemoteCatalogManager, RemoteCatalogProvider, RemoteSchemaProvider};
@@ -26,6 +27,9 @@ use crate::error::Error;
mod client;
mod manager;
#[cfg(feature = "testing")]
pub mod mock;
#[derive(Debug, Clone)]
pub struct Kv(pub Vec<u8>, pub Vec<u8>);
@@ -70,10 +74,22 @@ pub trait KvBackend: Send + Sync {
}
return Ok(None);
}
/// MoveValue atomically renames the key to the given updated key.
async fn move_value(&self, from_key: &[u8], to_key: &[u8]) -> Result<(), Error>;
fn as_any(&self) -> &dyn Any;
}
pub type KvBackendRef = Arc<dyn KvBackend>;
#[async_trait::async_trait]
pub trait KvCacheInvalidator: Send + Sync {
async fn invalidate_key(&self, key: &[u8]);
}
pub type KvCacheInvalidatorRef = Arc<dyn KvCacheInvalidator>;
#[cfg(test)]
mod tests {
use async_stream::stream;
@@ -114,17 +130,29 @@ mod tests {
async fn delete_range(&self, _key: &[u8], _end: &[u8]) -> Result<(), Error> {
unimplemented!()
}
async fn move_value(&self, _from_key: &[u8], _to_key: &[u8]) -> Result<(), Error> {
unimplemented!()
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[tokio::test]
async fn test_get() {
let backend = MockKvBackend {};
let result = backend.get(0.to_string().as_bytes()).await;
assert_eq!(0.to_string().as_bytes(), result.unwrap().unwrap().0);
let result = backend.get(1.to_string().as_bytes()).await;
assert_eq!(1.to_string().as_bytes(), result.unwrap().unwrap().0);
let result = backend.get(2.to_string().as_bytes()).await;
assert_eq!(2.to_string().as_bytes(), result.unwrap().unwrap().0);
let result = backend.get(3.to_string().as_bytes()).await;
assert!(result.unwrap().is_none());
}

View File

@@ -12,17 +12,149 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::fmt::Debug;
use std::sync::Arc;
use std::time::Duration;
use async_stream::stream;
use common_telemetry::info;
use common_meta::rpc::store::{
CompareAndPutRequest, DeleteRangeRequest, MoveValueRequest, PutRequest, RangeRequest,
};
use common_telemetry::{info, timer};
use meta_client::client::MetaClient;
use meta_client::rpc::{CompareAndPutRequest, DeleteRangeRequest, PutRequest, RangeRequest};
use moka::future::{Cache, CacheBuilder};
use snafu::ResultExt;
use crate::error::{Error, MetaSrvSnafu};
use crate::remote::{Kv, KvBackend, ValueIter};
use super::KvCacheInvalidator;
use crate::error::{Error, GenericSnafu, MetaSrvSnafu, Result};
use crate::metrics::{METRIC_CATALOG_KV_GET, METRIC_CATALOG_KV_REMOTE_GET};
use crate::remote::{Kv, KvBackend, KvBackendRef, ValueIter};
const CACHE_MAX_CAPACITY: u64 = 10000;
const CACHE_TTL_SECOND: u64 = 10 * 60;
const CACHE_TTI_SECOND: u64 = 5 * 60;
pub type CacheBackendRef = Arc<Cache<Vec<u8>, Option<Kv>>>;
pub struct CachedMetaKvBackend {
kv_backend: KvBackendRef,
cache: CacheBackendRef,
}
#[async_trait::async_trait]
impl KvBackend for CachedMetaKvBackend {
fn range<'a, 'b>(&'a self, key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b,
{
self.kv_backend.range(key)
}
async fn get(&self, key: &[u8]) -> Result<Option<Kv>> {
let _timer = timer!(METRIC_CATALOG_KV_GET);
let init = async {
let _timer = timer!(METRIC_CATALOG_KV_REMOTE_GET);
self.kv_backend.get(key).await
};
let schema_provider = self.cache.try_get_with_by_ref(key, init).await;
schema_provider.map_err(|e| GenericSnafu { msg: e.to_string() }.build())
}
async fn set(&self, key: &[u8], val: &[u8]) -> Result<()> {
let ret = self.kv_backend.set(key, val).await;
if ret.is_ok() {
self.invalidate_key(key).await;
}
ret
}
async fn delete(&self, key: &[u8]) -> Result<()> {
let ret = self.kv_backend.delete_range(key, &[]).await;
if ret.is_ok() {
self.invalidate_key(key).await;
}
ret
}
async fn delete_range(&self, _key: &[u8], _end: &[u8]) -> Result<()> {
// TODO(fys): implement it
unimplemented!()
}
async fn compare_and_set(
&self,
key: &[u8],
expect: &[u8],
val: &[u8],
) -> Result<std::result::Result<(), Option<Vec<u8>>>> {
let ret = self.kv_backend.compare_and_set(key, expect, val).await;
if ret.is_ok() {
self.invalidate_key(key).await;
}
ret
}
async fn move_value(&self, from_key: &[u8], to_key: &[u8]) -> Result<()> {
let ret = self.kv_backend.move_value(from_key, to_key).await;
if ret.is_ok() {
self.invalidate_key(from_key).await;
self.invalidate_key(to_key).await;
}
ret
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[async_trait::async_trait]
impl KvCacheInvalidator for CachedMetaKvBackend {
async fn invalidate_key(&self, key: &[u8]) {
self.cache.invalidate(key).await
}
}
impl CachedMetaKvBackend {
pub fn new(client: Arc<MetaClient>) -> Self {
let cache = Arc::new(
CacheBuilder::new(CACHE_MAX_CAPACITY)
.time_to_live(Duration::from_secs(CACHE_TTL_SECOND))
.time_to_idle(Duration::from_secs(CACHE_TTI_SECOND))
.build(),
);
let kv_backend = Arc::new(MetaKvBackend { client });
Self { kv_backend, cache }
}
pub fn wrap(kv_backend: KvBackendRef) -> Self {
let cache = Arc::new(
CacheBuilder::new(CACHE_MAX_CAPACITY)
.time_to_live(Duration::from_secs(CACHE_TTL_SECOND))
.time_to_idle(Duration::from_secs(CACHE_TTI_SECOND))
.build(),
);
Self { kv_backend, cache }
}
pub fn cache(&self) -> &CacheBackendRef {
&self.cache
}
}
#[derive(Debug)]
pub struct MetaKvBackend {
pub client: Arc<MetaClient>,
@@ -51,7 +183,7 @@ impl KvBackend for MetaKvBackend {
}))
}
async fn get(&self, key: &[u8]) -> Result<Option<Kv>, Error> {
async fn get(&self, key: &[u8]) -> Result<Option<Kv>> {
let mut response = self
.client
.range(RangeRequest::new().with_key(key))
@@ -63,7 +195,7 @@ impl KvBackend for MetaKvBackend {
.map(|kv| Kv(kv.take_key(), kv.take_value())))
}
async fn set(&self, key: &[u8], val: &[u8]) -> Result<(), Error> {
async fn set(&self, key: &[u8], val: &[u8]) -> Result<()> {
let req = PutRequest::new()
.with_key(key.to_vec())
.with_value(val.to_vec());
@@ -71,7 +203,7 @@ impl KvBackend for MetaKvBackend {
Ok(())
}
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<(), Error> {
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<()> {
let req = DeleteRangeRequest::new().with_range(key.to_vec(), end.to_vec());
let resp = self.client.delete_range(req).await.context(MetaSrvSnafu)?;
info!(
@@ -89,7 +221,7 @@ impl KvBackend for MetaKvBackend {
key: &[u8],
expect: &[u8],
val: &[u8],
) -> Result<Result<(), Option<Vec<u8>>>, Error> {
) -> Result<std::result::Result<(), Option<Vec<u8>>>> {
let request = CompareAndPutRequest::new()
.with_key(key.to_vec())
.with_expect(expect.to_vec())
@@ -105,4 +237,14 @@ impl KvBackend for MetaKvBackend {
Ok(Err(response.take_prev_kv().map(|v| v.value().to_vec())))
}
}
async fn move_value(&self, from_key: &[u8], to_key: &[u8]) -> Result<()> {
let req = MoveValueRequest::new(from_key, to_key);
self.client.move_value(req).await.context(MetaSrvSnafu)?;
Ok(())
}
fn as_any(&self) -> &dyn Any {
self
}
}

View File

@@ -273,11 +273,7 @@ async fn initiate_schemas(
"Fetch schema from metasrv: {}.{}",
&catalog_name, &schema_name
);
increment_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_SCHEMA_COUNT,
1.0,
&[crate::metrics::db_label(&catalog_name, &schema_name)],
);
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_SCHEMA_COUNT, 1.0);
let backend = backend.clone();
let engine_manager = engine_manager.clone();
@@ -347,6 +343,43 @@ async fn iter_remote_tables<'a>(
}))
}
async fn print_regional_key_debug_info(
node_id: u64,
backend: KvBackendRef,
table_key: &TableGlobalKey,
) {
let regional_key = TableRegionalKey {
catalog_name: table_key.catalog_name.clone(),
schema_name: table_key.schema_name.clone(),
table_name: table_key.table_name.clone(),
node_id,
}
.to_string();
match backend.get(regional_key.as_bytes()).await {
Ok(Some(Kv(_, values_bytes))) => {
debug!(
"Node id: {}, TableRegionalKey: {}, value: {},",
node_id,
table_key,
String::from_utf8_lossy(&values_bytes),
);
}
Ok(None) => {
debug!(
"Node id: {}, TableRegionalKey: {}, value: None",
node_id, table_key,
);
}
Err(err) => {
debug!(
"Node id: {}, failed to fetch TableRegionalKey: {}, source: {}",
node_id, regional_key, err
);
}
}
}
/// Initiates all tables inside the catalog by fetching data from metasrv.
/// Return maximum table id in the schema.
async fn initiate_tables(
@@ -367,35 +400,63 @@ async fn initiate_tables(
.map(|(table_key, table_value)| {
let engine_manager = engine_manager.clone();
let schema = schema.clone();
let backend = backend.clone();
common_runtime::spawn_bg(async move {
let table_ref =
open_or_create_table(node_id, engine_manager, &table_key, &table_value).await?;
let table_info = table_ref.table_info();
let table_name = &table_info.name;
schema.register_table(table_name.clone(), table_ref).await?;
info!("Registered table {}", table_name);
Ok(table_info.ident.table_id)
match open_or_create_table(node_id, engine_manager, &table_key, &table_value).await
{
Ok(table_ref) => {
let table_info = table_ref.table_info();
let table_name = &table_info.name;
schema.register_table(table_name.clone(), table_ref).await?;
info!("Registered table {}", table_name);
Ok(Some(table_info.ident.table_id))
}
Err(err) => {
warn!(
"Node id: {}, failed to open table: {}, source: {}",
node_id, table_key, err
);
debug!(
"Node id: {}, TableGlobalKey: {}, value: {:?},",
node_id, table_key, table_value
);
print_regional_key_debug_info(node_id, backend, &table_key).await;
Ok(None)
}
}
})
})
.collect::<Vec<_>>();
let max_table_id = futures::future::try_join_all(joins)
let opened_table_ids = futures::future::try_join_all(joins)
.await
.context(ParallelOpenTableSnafu)?
.into_iter()
.collect::<Result<Vec<_>>>()?
.into_iter()
.flatten()
.collect::<Vec<_>>();
let opened = opened_table_ids.len();
let max_table_id = opened_table_ids
.into_iter()
.max()
.unwrap_or(MAX_SYS_TABLE_ID);
increment_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
table_num as f64,
&[crate::metrics::db_label(catalog_name, schema_name)],
);
info!(
"initialized tables in {}.{}, total: {}",
catalog_name, schema_name, table_num
"initialized tables in {}.{}, total: {}, opened: {}, failed: {}",
catalog_name,
schema_name,
table_num,
opened,
table_num - opened
);
Ok(max_table_id)
@@ -431,6 +492,7 @@ async fn open_or_create_table(
schema_name: schema_name.clone(),
table_name: table_name.clone(),
table_id,
region_numbers: region_numbers.clone(),
};
let engine =
engine_manager

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::collections::btree_map::Entry;
use std::collections::{BTreeMap, HashMap};
use std::fmt::{Display, Formatter};
@@ -19,9 +20,6 @@ use std::str::FromStr;
use std::sync::Arc;
use async_stream::stream;
use catalog::error::Error;
use catalog::helper::{CatalogKey, CatalogValue, SchemaKey, SchemaValue};
use catalog::remote::{Kv, KvBackend, ValueIter};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_recordbatch::RecordBatch;
use common_telemetry::logging::info;
@@ -36,6 +34,10 @@ use table::test_util::MemTable;
use table::TableRef;
use tokio::sync::RwLock;
use crate::error::Error;
use crate::helper::{CatalogKey, CatalogValue, SchemaKey, SchemaValue};
use crate::remote::{Kv, KvBackend, ValueIter};
pub struct MockKvBackend {
map: RwLock<BTreeMap<Vec<u8>, Vec<u8>>>,
}
@@ -139,14 +141,26 @@ impl KvBackend for MockKvBackend {
}
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<(), Error> {
let start = key.to_vec();
let end = end.to_vec();
let range = start..end;
let mut map = self.map.write().await;
map.retain(|k, _| !range.contains(k));
if end.is_empty() {
let _ = map.remove(key);
} else {
let start = key.to_vec();
let end = end.to_vec();
let range = start..end;
map.retain(|k, _| !range.contains(k));
}
Ok(())
}
async fn move_value(&self, _from_key: &[u8], _to_key: &[u8]) -> Result<(), Error> {
unimplemented!()
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[derive(Default)]

View File

@@ -30,12 +30,13 @@ use datatypes::schema::{ColumnSchema, RawSchema, SchemaRef};
use datatypes::vectors::{BinaryVector, TimestampMillisecondVector, UInt8Vector};
use serde::{Deserialize, Serialize};
use snafu::{ensure, OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use table::engine::{EngineContext, TableEngineRef};
use table::metadata::{TableId, TableInfoRef};
use table::requests::{
CreateTableRequest, DeleteRequest, InsertRequest, OpenTableRequest, TableOptions,
};
use table::{Table, TableRef};
use table::{Result as TableResult, Table, TableRef};
use crate::error::{
self, CreateSystemCatalogSnafu, EmptyValueSnafu, Error, InvalidEntryTypeSnafu, InvalidKeySnafu,
@@ -68,8 +69,12 @@ impl Table for SystemCatalogTable {
self.0.scan(projection, filters, limit).await
}
async fn scan_to_stream(&self, request: ScanRequest) -> TableResult<SendableRecordBatchStream> {
self.0.scan_to_stream(request).await
}
/// Insert values into table.
async fn insert(&self, request: InsertRequest) -> table::error::Result<usize> {
async fn insert(&self, request: InsertRequest) -> TableResult<usize> {
self.0.insert(request).await
}
@@ -77,7 +82,7 @@ impl Table for SystemCatalogTable {
self.0.table_info()
}
async fn delete(&self, request: DeleteRequest) -> table::Result<usize> {
async fn delete(&self, request: DeleteRequest) -> TableResult<usize> {
self.0.delete(request).await
}
@@ -93,6 +98,7 @@ impl SystemCatalogTable {
schema_name: INFORMATION_SCHEMA_NAME.to_string(),
table_name: SYSTEM_CATALOG_TABLE_NAME.to_string(),
table_id: SYSTEM_CATALOG_TABLE_ID,
region_numbers: vec![0],
};
let schema = build_system_catalog_schema();
let ctx = EngineContext::default();
@@ -508,7 +514,8 @@ mod tests {
Arc::new(NoopLogStore::default()),
object_store.clone(),
noop_compaction_scheduler,
),
)
.unwrap(),
object_store,
));
(dir, table_engine)

View File

@@ -62,7 +62,8 @@ impl DfTableSourceProvider {
TableReference::Bare { .. } => (),
TableReference::Partial { schema, .. } => {
ensure!(
schema.as_ref() == self.default_schema,
schema.as_ref() == self.default_schema
|| schema.as_ref() == INFORMATION_SCHEMA_NAME,
QueryAccessDeniedSnafu {
catalog: &self.default_catalog,
schema: schema.as_ref(),
@@ -74,7 +75,8 @@ impl DfTableSourceProvider {
} => {
ensure!(
catalog.as_ref() == self.default_catalog
&& schema.as_ref() == self.default_schema,
&& (schema.as_ref() == self.default_schema
|| schema.as_ref() == INFORMATION_SCHEMA_NAME),
QueryAccessDeniedSnafu {
catalog: catalog.as_ref(),
schema: schema.as_ref()
@@ -191,5 +193,25 @@ mod tests {
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_err());
let table_ref = TableReference::Partial {
schema: Cow::Borrowed("information_schema"),
table: Cow::Borrowed("columns"),
};
assert!(table_provider.resolve_table_ref(table_ref).is_ok());
let table_ref = TableReference::Full {
catalog: Cow::Borrowed("greptime"),
schema: Cow::Borrowed("information_schema"),
table: Cow::Borrowed("columns"),
};
assert!(table_provider.resolve_table_ref(table_ref).is_ok());
let table_ref = TableReference::Full {
catalog: Cow::Borrowed("dummy"),
schema: Cow::Borrowed("information_schema"),
table: Cow::Borrowed("columns"),
};
assert!(table_provider.resolve_table_ref(table_ref).is_err());
}
}

View File

@@ -14,8 +14,6 @@
#![feature(assert_matches)]
mod mock;
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
@@ -23,8 +21,10 @@ mod tests {
use std::sync::Arc;
use catalog::helper::{CatalogKey, CatalogValue, SchemaKey, SchemaValue};
use catalog::remote::mock::{MockKvBackend, MockTableEngine};
use catalog::remote::{
KvBackend, KvBackendRef, RemoteCatalogManager, RemoteCatalogProvider, RemoteSchemaProvider,
CachedMetaKvBackend, KvBackend, KvBackendRef, RemoteCatalogManager, RemoteCatalogProvider,
RemoteSchemaProvider,
};
use catalog::{CatalogManager, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
@@ -34,8 +34,6 @@ mod tests {
use table::engine::{EngineContext, TableEngineRef};
use table::requests::CreateTableRequest;
use crate::mock::{MockKvBackend, MockTableEngine};
#[tokio::test]
async fn test_backend() {
common_telemetry::init_default_ut_logging();
@@ -76,6 +74,52 @@ mod tests {
);
}
#[tokio::test]
async fn test_cached_backend() {
common_telemetry::init_default_ut_logging();
let backend = CachedMetaKvBackend::wrap(Arc::new(MockKvBackend::default()));
let default_catalog_key = CatalogKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
}
.to_string();
backend
.set(
default_catalog_key.as_bytes(),
&CatalogValue {}.as_bytes().unwrap(),
)
.await
.unwrap();
let ret = backend.get(b"__c-greptime").await.unwrap();
assert!(ret.is_some());
let _ = backend
.compare_and_set(
b"__c-greptime",
&CatalogValue {}.as_bytes().unwrap(),
b"123",
)
.await
.unwrap();
let ret = backend.get(b"__c-greptime").await.unwrap();
assert!(ret.is_some());
assert_eq!(&b"123"[..], &(ret.as_ref().unwrap().1));
let _ = backend.set(b"__c-greptime", b"1234").await;
let ret = backend.get(b"__c-greptime").await.unwrap();
assert!(ret.is_some());
assert_eq!(&b"1234"[..], &(ret.as_ref().unwrap().1));
backend.delete(b"__c-greptime").await.unwrap();
let ret = backend.get(b"__c-greptime").await.unwrap();
assert!(ret.is_none());
}
async fn prepare_components(
node_id: u64,
) -> (
@@ -84,17 +128,22 @@ mod tests {
Arc<RemoteCatalogManager>,
TableEngineManagerRef,
) {
let backend = Arc::new(MockKvBackend::default()) as KvBackendRef;
let cached_backend = Arc::new(CachedMetaKvBackend::wrap(
Arc::new(MockKvBackend::default()),
));
let table_engine = Arc::new(MockTableEngine::default());
let engine_manager = Arc::new(MemoryTableEngineManager::alias(
MITO_ENGINE.to_string(),
table_engine.clone(),
));
let catalog_manager =
RemoteCatalogManager::new(engine_manager.clone(), node_id, backend.clone());
RemoteCatalogManager::new(engine_manager.clone(), node_id, cached_backend.clone());
catalog_manager.start().await.unwrap();
(
backend,
cached_backend,
table_engine,
Arc::new(catalog_manager),
engine_manager as Arc<_>,

View File

@@ -4,6 +4,9 @@ version.workspace = true
edition.workspace = true
license.workspace = true
[features]
testing = []
[dependencies]
api = { path = "../api" }
arrow-flight.workspace = true
@@ -16,21 +19,24 @@ common-grpc-expr = { path = "../common/grpc-expr" }
common-query = { path = "../common/query" }
common-recordbatch = { path = "../common/recordbatch" }
common-time = { path = "../common/time" }
common-meta = { path = "../common/meta" }
common-telemetry = { path = "../common/telemetry" }
datafusion.workspace = true
datatypes = { path = "../datatypes" }
enum_dispatch = "0.3"
futures-util.workspace = true
moka = { version = "0.9", features = ["future"] }
parking_lot = "0.12"
prost.workspace = true
rand.workspace = true
snafu.workspace = true
tokio-stream = { version = "0.1", features = ["net"] }
tokio.workspace = true
tonic.workspace = true
[dev-dependencies]
datanode = { path = "../datanode" }
substrait = { path = "../common/substrait" }
tokio.workspace = true
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }
prost.workspace = true

View File

@@ -63,7 +63,7 @@ async fn run() {
create_if_not_exists: false,
table_options: Default::default(),
table_id: Some(TableId { id: 1024 }),
region_ids: vec![0],
region_numbers: vec![0],
engine: MITO_ENGINE.to_string(),
};

View File

@@ -12,32 +12,60 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::{Debug, Formatter};
use std::sync::{Arc, Mutex};
use std::time::Duration;
use client::Client;
use common_grpc::channel_manager::ChannelManager;
use meta_client::rpc::Peer;
use common_grpc::channel_manager::{ChannelConfig, ChannelManager};
use common_meta::peer::Peer;
use common_telemetry::info;
use moka::future::{Cache, CacheBuilder};
use crate::Client;
pub struct DatanodeClients {
channel_manager: ChannelManager,
clients: Cache<Peer, Client>,
started: Arc<Mutex<bool>>,
}
impl Default for DatanodeClients {
fn default() -> Self {
let config = ChannelConfig::new().timeout(Duration::from_secs(8));
Self {
channel_manager: ChannelManager::new(),
channel_manager: ChannelManager::with_config(config),
clients: CacheBuilder::new(1024)
.time_to_live(Duration::from_secs(30 * 60))
.time_to_idle(Duration::from_secs(5 * 60))
.build(),
started: Arc::new(Mutex::new(false)),
}
}
}
impl Debug for DatanodeClients {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("DatanodeClients")
.field("channel_manager", &self.channel_manager)
.finish()
}
}
impl DatanodeClients {
pub(crate) async fn get_client(&self, datanode: &Peer) -> Client {
pub fn start(&self) {
let mut started = self.started.lock().unwrap();
if *started {
return;
}
self.channel_manager.start_channel_recycle();
info!("Datanode clients manager is started!");
*started = true;
}
pub async fn get_client(&self, datanode: &Peer) -> Client {
self.clients
.get_with_by_ref(datanode, async move {
Client::with_manager_and_urls(
@@ -48,8 +76,8 @@ impl DatanodeClients {
.await
}
#[cfg(test)]
pub(crate) async fn insert_client(&self, datanode: Peer, client: Client) {
#[cfg(feature = "testing")]
pub async fn insert_client(&self, datanode: Peer, client: Client) {
self.clients.insert(datanode, client).await
}
}

View File

@@ -18,7 +18,7 @@ use api::v1::greptime_request::Request;
use api::v1::query_request::Query;
use api::v1::{
greptime_response, AffectedRows, AlterExpr, AuthHeader, CreateTableExpr, DdlRequest,
DeleteRequest, DropTableExpr, FlushTableExpr, GreptimeRequest, InsertRequest, PromRangeQuery,
DeleteRequest, DropTableExpr, FlushTableExpr, GreptimeRequest, InsertRequests, PromRangeQuery,
QueryRequest, RequestHeader,
};
use arrow_flight::{FlightData, Ticket};
@@ -29,6 +29,9 @@ use common_telemetry::{logging, timer};
use futures_util::{TryFutureExt, TryStreamExt};
use prost::Message;
use snafu::{ensure, ResultExt};
use tokio::sync::mpsc::Sender;
use tokio::sync::{mpsc, OnceCell};
use tokio_stream::wrappers::ReceiverStream;
use crate::error::{
ConvertFlightDataSnafu, IllegalDatabaseResponseSnafu, IllegalFlightMessagesSnafu,
@@ -47,6 +50,7 @@ pub struct Database {
dbname: String,
client: Client,
streaming_client: OnceCell<Sender<GreptimeRequest>>,
ctx: FlightContext,
}
@@ -56,8 +60,10 @@ impl Database {
Self {
catalog: catalog.into(),
schema: schema.into(),
dbname: "".to_string(),
client,
..Default::default()
streaming_client: OnceCell::new(),
ctx: FlightContext::default(),
}
}
@@ -70,9 +76,12 @@ impl Database {
/// environment
pub fn new_with_dbname(dbname: impl Into<String>, client: Client) -> Self {
Self {
catalog: "".to_string(),
schema: "".to_string(),
dbname: dbname.into(),
client,
..Default::default()
streaming_client: OnceCell::new(),
ctx: FlightContext::default(),
}
}
@@ -106,9 +115,25 @@ impl Database {
});
}
pub async fn insert(&self, request: InsertRequest) -> Result<u32> {
pub async fn insert(&self, requests: InsertRequests) -> Result<u32> {
let _timer = timer!(metrics::METRIC_GRPC_INSERT);
self.handle(Request::Insert(request)).await
self.handle(Request::Inserts(requests)).await
}
pub async fn insert_to_stream(&self, requests: InsertRequests) -> Result<()> {
let streaming_client = self
.streaming_client
.get_or_try_init(|| self.client_stream())
.await?;
let request = self.to_rpc_request(Request::Inserts(requests));
streaming_client.send(request).await.map_err(|e| {
error::ClientStreamingSnafu {
err_msg: e.to_string(),
}
.build()
})
}
pub async fn delete(&self, request: DeleteRequest) -> Result<u32> {
@@ -118,15 +143,7 @@ impl Database {
async fn handle(&self, request: Request) -> Result<u32> {
let mut client = self.client.make_database_client()?.inner;
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(request),
};
let request = self.to_rpc_request(request);
let response = client
.handle(request)
.await?
@@ -139,6 +156,27 @@ impl Database {
Ok(value)
}
#[inline]
fn to_rpc_request(&self, request: Request) -> GreptimeRequest {
GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(request),
}
}
async fn client_stream(&self) -> Result<Sender<GreptimeRequest>> {
let mut client = self.client.make_database_client()?.inner;
let (sender, receiver) = mpsc::channel::<GreptimeRequest>(65536);
let receiver = ReceiverStream::new(receiver);
client.handle_requests(receiver).await?;
Ok(sender)
}
pub async fn sql(&self, sql: &str) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_SQL);
self.do_get(Request::Query(QueryRequest {
@@ -209,22 +247,13 @@ impl Database {
async fn do_get(&self, request: Request) -> Result<Output> {
// FIXME(paomian): should be added some labels for metrics
let _timer = timer!(metrics::METRIC_GRPC_DO_GET);
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
dbname: self.dbname.clone(),
}),
request: Some(request),
};
let request = self.to_rpc_request(request);
let request = Ticket {
ticket: request.encode_to_vec().into(),
};
let mut client = self.client.make_flight_client()?;
// TODO(LFC): Streaming get flight data.
let flight_data: Vec<FlightData> = client
.mut_inner()
.do_get(request)

View File

@@ -34,13 +34,13 @@ pub enum Error {
#[snafu(display("Failed to convert FlightData, source: {}", source))]
ConvertFlightData {
#[snafu(backtrace)]
location: Location,
source: common_grpc::Error,
},
#[snafu(display("Column datatype error, source: {}", source))]
ColumnDataType {
#[snafu(backtrace)]
location: Location,
source: api::error::Error,
},
@@ -57,7 +57,7 @@ pub enum Error {
))]
CreateChannel {
addr: String,
#[snafu(backtrace)]
location: Location,
source: common_grpc::error::Error,
},
@@ -67,6 +67,9 @@ pub enum Error {
#[snafu(display("Illegal Database response: {err_msg}"))]
IllegalDatabaseResponse { err_msg: String },
#[snafu(display("Failed to send request with streaming: {}", err_msg))]
ClientStreaming { err_msg: String, location: Location },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -77,11 +80,12 @@ impl ErrorExt for Error {
Error::IllegalFlightMessages { .. }
| Error::ColumnDataType { .. }
| Error::MissingField { .. }
| Error::IllegalDatabaseResponse { .. } => StatusCode::Internal,
| Error::IllegalDatabaseResponse { .. }
| Error::ClientStreaming { .. } => StatusCode::Internal,
Error::Server { code, .. } => *code,
Error::FlightGet { source, .. } => source.status_code(),
Error::CreateChannel { source, .. } | Error::ConvertFlightData { source } => {
Error::CreateChannel { source, .. } | Error::ConvertFlightData { source, .. } => {
source.status_code()
}
Error::IllegalGrpcClientState { .. } => StatusCode::Unexpected,


@@ -13,6 +13,7 @@
// limitations under the License.
mod client;
pub mod client_manager;
mod database;
mod error;
pub mod load_balance;


@@ -10,7 +10,7 @@ name = "greptime"
path = "src/bin/greptime.rs"
[features]
mem-prof = ["tikv-jemallocator", "tikv-jemalloc-ctl"]
tokio-console = ["common-telemetry/tokio-console"]
[dependencies]
anymap = "1.0.0-beta.2"
@@ -24,32 +24,32 @@ common-recordbatch = { path = "../common/recordbatch" }
common-telemetry = { path = "../common/telemetry", features = [
"deadlock_detection",
] }
config = "0.13"
datanode = { path = "../datanode" }
either = "1.8"
frontend = { path = "../frontend" }
futures.workspace = true
meta-client = { path = "../meta-client" }
meta-srv = { path = "../meta-srv" }
metrics.workspace = true
nu-ansi-term = "0.46"
partition = { path = "../partition" }
query = { path = "../query" }
rustyline = "10.1"
serde.workspace = true
servers = { path = "../servers" }
session = { path = "../session" }
snafu.workspace = true
substrait = { path = "../common/substrait" }
tikv-jemalloc-ctl = { version = "0.5", optional = true }
tikv-jemallocator = { version = "0.5", optional = true }
tikv-jemallocator = "0.5"
tokio.workspace = true
toml = "0.5"
[dev-dependencies]
common-test-util = { path = "../common/test-util" }
rexpect = "0.5"
temp-env = "0.3"
serde.workspace = true
toml = "0.5"
[build-dependencies]
build-data = "0.1.3"


@@ -18,6 +18,10 @@ fn main() {
"cargo:rustc-env=GIT_COMMIT={}",
build_data::get_git_commit().unwrap_or_else(|_| DEFAULT_VALUE.to_string())
);
println!(
"cargo:rustc-env=GIT_COMMIT_SHORT={}",
build_data::get_git_commit_short().unwrap_or_else(|_| DEFAULT_VALUE.to_string())
);
println!(
"cargo:rustc-env=GIT_BRANCH={}",
build_data::get_git_branch().unwrap_or_else(|_| DEFAULT_VALUE.to_string())


@@ -20,7 +20,8 @@ use clap::Parser;
use cmd::error::Result;
use cmd::options::{Options, TopLevelOptions};
use cmd::{cli, datanode, frontend, metasrv, standalone};
use common_telemetry::logging::{error, info};
use common_telemetry::logging::{error, info, TracingOptions};
use metrics::gauge;
#[derive(Parser)]
#[clap(name = "greptimedb", version = print_version())]
@@ -31,6 +32,10 @@ struct Command {
log_level: Option<String>,
#[clap(subcommand)]
subcmd: SubCommand,
#[cfg(feature = "tokio-console")]
#[clap(long)]
tokio_console_addr: Option<String>,
}
pub enum Application {
@@ -42,13 +47,13 @@ pub enum Application {
}
impl Application {
async fn run(&mut self) -> Result<()> {
async fn start(&mut self) -> Result<()> {
match self {
Application::Datanode(instance) => instance.run().await,
Application::Frontend(instance) => instance.run().await,
Application::Metasrv(instance) => instance.run().await,
Application::Standalone(instance) => instance.run().await,
Application::Cli(instance) => instance.run().await,
Application::Datanode(instance) => instance.start().await,
Application::Frontend(instance) => instance.start().await,
Application::Metasrv(instance) => instance.start().await,
Application::Standalone(instance) => instance.start().await,
Application::Cli(instance) => instance.start().await,
}
}
@@ -159,28 +164,63 @@ fn print_version() -> &'static str {
)
}
#[cfg(feature = "mem-prof")]
fn short_version() -> &'static str {
env!("CARGO_PKG_VERSION")
}
// {app_name}-{branch_name}-{commit_short}
// The branch name (tag) of a release build should already contain the short
// version so the full version doesn't concat the short version explicitly.
fn full_version() -> &'static str {
concat!(
"greptimedb-",
env!("GIT_BRANCH"),
"-",
env!("GIT_COMMIT_SHORT")
)
}
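For illustration, assuming a build made from a hypothetical release tag v0.4.0 at short commit 1bd5356, the two helpers above would produce:

// Assumed values, purely illustrative:
assert_eq!(short_version(), "0.4.0");                    // CARGO_PKG_VERSION
assert_eq!(full_version(), "greptimedb-v0.4.0-1bd5356"); // greptimedb-{GIT_BRANCH}-{GIT_COMMIT_SHORT}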
fn log_env_flags() {
info!("command line arguments");
for argument in std::env::args() {
info!("argument: {}", argument);
}
}
#[global_allocator]
static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
#[tokio::main]
async fn main() -> Result<()> {
let cmd = Command::parse();
// TODO(dennis):
// 1. adds ip/port to app
let app_name = &cmd.subcmd.to_string();
let opts = cmd.load_options()?;
let logging_opts = opts.logging_options();
let tracing_opts = TracingOptions {
#[cfg(feature = "tokio-console")]
tokio_console_addr: cmd.tokio_console_addr.clone(),
};
common_telemetry::set_panic_hook();
common_telemetry::init_default_metrics_recorder();
let _guard = common_telemetry::init_global_logging(app_name, logging_opts);
let _guard = common_telemetry::init_global_logging(app_name, logging_opts, tracing_opts);
// Report app version as gauge.
gauge!("app_version", 1.0, "short_version" => short_version(), "version" => full_version());
// Log version and argument flags.
info!(
"short_version: {}, full_version: {}",
short_version(),
full_version()
);
log_env_flags();
let mut app = cmd.build(opts).await?;
tokio::select! {
result = app.run() => {
result = app.start() => {
if let Err(err) = result {
error!(err; "Fatal error occurs!");
}


@@ -28,7 +28,7 @@ pub struct Instance {
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
pub async fn start(&mut self) -> Result<()> {
self.repl.run().await
}
@@ -53,8 +53,8 @@ impl Command {
if let Some(dir) = top_level_opts.log_dir {
logging_opts.dir = dir;
}
if let Some(level) = top_level_opts.log_level {
logging_opts.level = level;
if top_level_opts.log_level.is_some() {
logging_opts.level = top_level_opts.log_level;
}
Ok(Options::Cli(Box::new(logging_opts)))
}
@@ -107,7 +107,7 @@ mod tests {
let opts = cmd.load_options(TopLevelOptions::default()).unwrap();
let logging_opts = opts.logging_options();
assert_eq!("/tmp/greptimedb/logs", logging_opts.dir);
assert_eq!("info", logging_opts.level);
assert!(logging_opts.level.is_none());
assert!(!logging_opts.enable_jaeger_tracing);
}
@@ -129,6 +129,6 @@ mod tests {
.unwrap();
let logging_opts = opts.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opts.dir);
assert_eq!("debug", logging_opts.level);
assert_eq!("debug", logging_opts.level.as_ref().unwrap());
}
}


@@ -16,7 +16,8 @@ use std::path::PathBuf;
use std::sync::Arc;
use std::time::Instant;
use catalog::remote::MetaKvBackend;
use catalog::remote::CachedMetaKvBackend;
use client::client_manager::DatanodeClients;
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::prelude::ErrorExt;
use common_query::Output;
@@ -24,7 +25,6 @@ use common_recordbatch::RecordBatches;
use common_telemetry::logging;
use either::Either;
use frontend::catalog::FrontendCatalogManager;
use frontend::datanode::DatanodeClients;
use meta_client::client::MetaClientBuilder;
use partition::manager::PartitionRuleManager;
use partition::route::TableRoutes;
@@ -253,9 +253,7 @@ async fn create_query_engine(meta_addr: &str) -> Result<DatafusionQueryEngine> {
.context(StartMetaClientSnafu)?;
let meta_client = Arc::new(meta_client);
let backend = Arc::new(MetaKvBackend {
client: meta_client.clone(),
});
let cached_meta_backend = Arc::new(CachedMetaKvBackend::new(meta_client.clone()));
let table_routes = Arc::new(TableRoutes::new(meta_client));
let partition_manager = Arc::new(PartitionRuleManager::new(table_routes));
@@ -263,11 +261,18 @@ async fn create_query_engine(meta_addr: &str) -> Result<DatafusionQueryEngine> {
let datanode_clients = Arc::new(DatanodeClients::default());
let catalog_list = Arc::new(FrontendCatalogManager::new(
backend,
cached_meta_backend.clone(),
cached_meta_backend,
partition_manager,
datanode_clients,
));
let state = Arc::new(QueryEngineState::new(catalog_list, Default::default()));
let state = Arc::new(QueryEngineState::new(
catalog_list,
false,
None,
None,
Default::default(),
));
Ok(DatafusionQueryEngine::new(state))
}


@@ -23,14 +23,13 @@ use snafu::ResultExt;
use crate::error::{MissingConfigSnafu, Result, ShutdownDatanodeSnafu, StartDatanodeSnafu};
use crate::options::{Options, TopLevelOptions};
use crate::toml_loader;
pub struct Instance {
datanode: Datanode,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
pub async fn start(&mut self) -> Result<()> {
self.datanode.start().await.context(StartDatanodeSnafu)
}
@@ -85,61 +84,54 @@ struct StartCommand {
rpc_addr: Option<String>,
#[clap(long)]
rpc_hostname: Option<String>,
#[clap(long)]
mysql_addr: Option<String>,
#[clap(long)]
metasrv_addr: Option<String>,
#[clap(long, multiple = true, value_delimiter = ',')]
metasrv_addr: Option<Vec<String>>,
#[clap(short, long)]
config_file: Option<String>,
#[clap(long)]
data_dir: Option<String>,
data_home: Option<String>,
#[clap(long)]
wal_dir: Option<String>,
#[clap(long)]
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
#[clap(long, default_value = "GREPTIMEDB_DATANODE")]
env_prefix: String,
}
impl StartCommand {
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
let mut opts: DatanodeOptions = if let Some(path) = &self.config_file {
toml_loader::from_file!(path)?
} else {
DatanodeOptions::default()
};
let mut opts: DatanodeOptions = Options::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
DatanodeOptions::env_list_keys(),
)?;
if let Some(dir) = top_level_opts.log_dir {
opts.logging.dir = dir;
}
if let Some(level) = top_level_opts.log_level {
opts.logging.level = level;
if top_level_opts.log_level.is_some() {
opts.logging.level = top_level_opts.log_level;
}
if let Some(addr) = self.rpc_addr.clone() {
opts.rpc_addr = addr;
if let Some(addr) = &self.rpc_addr {
opts.rpc_addr = addr.clone();
}
if self.rpc_hostname.is_some() {
opts.rpc_hostname = self.rpc_hostname.clone();
}
if let Some(addr) = self.mysql_addr.clone() {
opts.mysql_addr = addr;
}
if let Some(node_id) = self.node_id {
opts.node_id = Some(node_id);
}
if let Some(meta_addr) = self.metasrv_addr.clone() {
if let Some(metasrv_addrs) = &self.metasrv_addr {
opts.meta_client_options
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs = meta_addr
.split(',')
.map(&str::trim)
.map(&str::to_string)
.collect::<_>();
.metasrv_addrs = metasrv_addrs.clone();
opts.mode = Mode::Distributed;
}
@@ -150,16 +142,20 @@ impl StartCommand {
.fail();
}
if let Some(data_dir) = self.data_dir.clone() {
opts.storage.store = ObjectStoreConfig::File(FileConfig { data_dir });
if let Some(data_home) = &self.data_home {
opts.storage.store = ObjectStoreConfig::File(FileConfig {
data_home: data_home.clone(),
});
}
if let Some(wal_dir) = self.wal_dir.clone() {
opts.wal.dir = wal_dir;
if let Some(wal_dir) = &self.wal_dir {
opts.wal.dir = Some(wal_dir.clone());
}
if let Some(http_addr) = self.http_addr.clone() {
opts.http_opts.addr = http_addr
if let Some(http_addr) = &self.http_addr {
opts.http_opts.addr = http_addr.clone();
}
if let Some(http_timeout) = self.http_timeout {
opts.http_opts.timeout = Duration::from_secs(http_timeout)
}
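The switch to Option<Vec<String>> with value_delimiter = ',' lets clap split the metasrv address list itself, which is why the manual split(',')/trim/collect chain disappears above. A small standalone sketch of the same flag shape (hypothetical struct, not from this diff):

use clap::Parser;

#[derive(Debug, Parser)]
struct Demo {
    #[clap(long, multiple = true, value_delimiter = ',')]
    metasrv_addr: Option<Vec<String>>,
}

fn main() {
    // "--metasrv-addr 127.0.0.1:3002,127.0.0.1:3003" arrives already split:
    let demo = Demo::parse_from(["demo", "--metasrv-addr", "127.0.0.1:3002,127.0.0.1:3003"]);
    assert_eq!(
        demo.metasrv_addr,
        Some(vec!["127.0.0.1:3002".to_string(), "127.0.0.1:3003".to_string()])
    );
}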
@@ -191,6 +187,7 @@ mod tests {
use servers::Mode;
use super::*;
use crate::options::ENV_VAR_SEP;
#[test]
fn test_read_from_config_file() {
@@ -202,8 +199,6 @@ mod tests {
rpc_addr = "127.0.0.1:3001"
rpc_hostname = "127.0.0.1"
rpc_runtime_size = 8
mysql_addr = "127.0.0.1:4406"
mysql_runtime_size = 2
[meta_client_options]
metasrv_addrs = ["127.0.0.1:3002"]
@@ -212,7 +207,7 @@ mod tests {
tcp_nodelay = true
[wal]
dir = "/tmp/greptimedb/wal"
dir = "/other/wal"
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
@@ -221,7 +216,7 @@ mod tests {
[storage]
type = "File"
data_dir = "/tmp/greptimedb/data/"
data_home = "/tmp/greptimedb/"
[storage.compaction]
max_inflight_tasks = 3
@@ -232,6 +227,7 @@ mod tests {
checkpoint_margin = 9
gc_duration = '7s'
checkpoint_on_startup = true
compress = true
[logging]
level = "debug"
@@ -248,10 +244,9 @@ mod tests {
cmd.load_options(TopLevelOptions::default()).unwrap() else { unreachable!() };
assert_eq!("127.0.0.1:3001".to_string(), options.rpc_addr);
assert_eq!("127.0.0.1:4406".to_string(), options.mysql_addr);
assert_eq!(2, options.mysql_runtime_size);
assert_eq!(Some(42), options.node_id);
assert_eq!("/other/wal", options.wal.dir.unwrap());
assert_eq!(Duration::from_secs(600), options.wal.purge_interval);
assert_eq!(1024 * 1024 * 1024, options.wal.file_size.0);
assert_eq!(1024 * 1024 * 1024 * 50, options.wal.purge_threshold.0);
@@ -270,11 +265,12 @@ mod tests {
assert!(tcp_nodelay);
match &options.storage.store {
ObjectStoreConfig::File(FileConfig { data_dir, .. }) => {
assert_eq!("/tmp/greptimedb/data/", data_dir)
ObjectStoreConfig::File(FileConfig { data_home, .. }) => {
assert_eq!("/tmp/greptimedb/", data_home)
}
ObjectStoreConfig::S3 { .. } => unreachable!(),
ObjectStoreConfig::Oss { .. } => unreachable!(),
ObjectStoreConfig::Azblob { .. } => unreachable!(),
};
assert_eq!(
@@ -291,11 +287,12 @@ mod tests {
checkpoint_margin: Some(9),
gc_duration: Some(Duration::from_secs(7)),
checkpoint_on_startup: true,
compress: true
},
options.storage.manifest,
);
assert_eq!("debug".to_string(), options.logging.level);
assert_eq!("debug", options.logging.level.unwrap());
assert_eq!("/tmp/greptimedb/test/logs".to_string(), options.logging.dir);
}
@@ -310,7 +307,7 @@ mod tests {
if let Options::Datanode(opt) = (StartCommand {
node_id: Some(42),
metasrv_addr: Some("127.0.0.1:3002".to_string()),
metasrv_addr: Some(vec!["127.0.0.1:3002".to_string()]),
..Default::default()
})
.load_options(TopLevelOptions::default())
@@ -320,7 +317,7 @@ mod tests {
}
assert!((StartCommand {
metasrv_addr: Some("127.0.0.1:3002".to_string()),
metasrv_addr: Some(vec!["127.0.0.1:3002".to_string()]),
..Default::default()
})
.load_options(TopLevelOptions::default())
@@ -348,6 +345,126 @@ mod tests {
let logging_opt = options.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level);
assert_eq!("debug", logging_opt.level.as_ref().unwrap());
}
#[test]
fn test_config_precedence_order() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "distributed"
enable_memory_catalog = false
node_id = 42
rpc_addr = "127.0.0.1:3001"
rpc_hostname = "127.0.0.1"
rpc_runtime_size = 8
[meta_client_options]
timeout_millis = 3000
connect_timeout_millis = 5000
tcp_nodelay = true
[wal]
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
read_batch_size = 128
sync_write = false
[storage]
type = "File"
data_home = "/tmp/greptimedb/"
[storage.compaction]
max_inflight_tasks = 3
max_files_in_level0 = 7
max_purge_tasks = 32
[storage.manifest]
checkpoint_on_startup = true
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
let env_prefix = "DATANODE_UT";
temp_env::with_vars(
vec![
(
// storage.manifest.gc_duration = 9s
vec![
env_prefix.to_string(),
"storage".to_uppercase(),
"manifest".to_uppercase(),
"gc_duration".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("9s"),
),
(
// storage.compaction.max_purge_tasks = 99
vec![
env_prefix.to_string(),
"storage".to_uppercase(),
"compaction".to_uppercase(),
"max_purge_tasks".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("99"),
),
(
// meta_client_options.metasrv_addrs = 127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003
vec![
env_prefix.to_string(),
"meta_client_options".to_uppercase(),
"metasrv_addrs".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003"),
),
],
|| {
let command = StartCommand {
config_file: Some(file.path().to_str().unwrap().to_string()),
wal_dir: Some("/other/wal/dir".to_string()),
env_prefix: env_prefix.to_string(),
..Default::default()
};
let Options::Datanode(opts) =
command.load_options(TopLevelOptions::default()).unwrap() else {unreachable!()};
// Should be read from env, env > default values.
assert_eq!(
opts.storage.manifest.gc_duration,
Some(Duration::from_secs(9))
);
assert_eq!(
opts.meta_client_options.unwrap().metasrv_addrs,
vec![
"127.0.0.1:3001".to_string(),
"127.0.0.1:3002".to_string(),
"127.0.0.1:3003".to_string()
]
);
// Should be read from config file, config file > env > default values.
assert_eq!(opts.storage.compaction.max_purge_tasks, 32);
// Should be read from cli, cli > config file > env > default values.
assert_eq!(opts.wal.dir.unwrap(), "/other/wal/dir");
// Should be default value.
assert_eq!(
opts.storage.manifest.checkpoint_margin,
DatanodeOptions::default()
.storage
.manifest
.checkpoint_margin
);
},
);
}
}


@@ -15,6 +15,7 @@
use std::any::Any;
use common_error::prelude::*;
use config::ConfigError;
use rustyline::error::ReadlineError;
use snafu::Location;
@@ -23,59 +24,46 @@ use snafu::Location;
pub enum Error {
#[snafu(display("Failed to start datanode, source: {}", source))]
StartDatanode {
#[snafu(backtrace)]
location: Location,
source: datanode::error::Error,
},
#[snafu(display("Failed to shutdown datanode, source: {}", source))]
ShutdownDatanode {
#[snafu(backtrace)]
location: Location,
source: datanode::error::Error,
},
#[snafu(display("Failed to start frontend, source: {}", source))]
StartFrontend {
#[snafu(backtrace)]
location: Location,
source: frontend::error::Error,
},
#[snafu(display("Failed to shutdown frontend, source: {}", source))]
ShutdownFrontend {
#[snafu(backtrace)]
location: Location,
source: frontend::error::Error,
},
#[snafu(display("Failed to build meta server, source: {}", source))]
BuildMetaServer {
#[snafu(backtrace)]
location: Location,
source: meta_srv::error::Error,
},
#[snafu(display("Failed to start meta server, source: {}", source))]
StartMetaServer {
#[snafu(backtrace)]
location: Location,
source: meta_srv::error::Error,
},
#[snafu(display("Failed to shutdown meta server, source: {}", source))]
ShutdownMetaServer {
#[snafu(backtrace)]
location: Location,
source: meta_srv::error::Error,
},
#[snafu(display("Failed to read config file: {}, source: {}", path, source))]
ReadConfig {
path: String,
source: std::io::Error,
location: Location,
},
#[snafu(display("Failed to parse config, source: {}", source))]
ParseConfig {
source: toml::de::Error,
location: Location,
},
#[snafu(display("Missing config, msg: {}", msg))]
MissingConfig { msg: String, location: Location },
@@ -84,14 +72,14 @@ pub enum Error {
#[snafu(display("Illegal auth config: {}", source))]
IllegalAuthConfig {
#[snafu(backtrace)]
location: Location,
source: servers::auth::Error,
},
#[snafu(display("Unsupported selector type, {} source: {}", selector_type, source))]
UnsupportedSelectorType {
selector_type: String,
#[snafu(backtrace)]
location: Location,
source: meta_srv::error::Error,
},
@@ -113,46 +101,58 @@ pub enum Error {
#[snafu(display("Failed to request database, sql: {sql}, source: {source}"))]
RequestDatabase {
sql: String,
#[snafu(backtrace)]
location: Location,
source: client::Error,
},
#[snafu(display("Failed to collect RecordBatches, source: {source}"))]
CollectRecordBatches {
#[snafu(backtrace)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to pretty print Recordbatches, source: {source}"))]
PrettyPrintRecordBatches {
#[snafu(backtrace)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to start Meta client, source: {}", source))]
StartMetaClient {
#[snafu(backtrace)]
location: Location,
source: meta_client::error::Error,
},
#[snafu(display("Failed to parse SQL: {}, source: {}", sql, source))]
ParseSql {
sql: String,
#[snafu(backtrace)]
location: Location,
source: query::error::Error,
},
#[snafu(display("Failed to plan statement, source: {}", source))]
PlanStatement {
#[snafu(backtrace)]
location: Location,
source: query::error::Error,
},
#[snafu(display("Failed to encode logical plan in substrait, source: {}", source))]
SubstraitEncodeLogicalPlan {
#[snafu(backtrace)]
location: Location,
source: substrait::error::Error,
},
#[snafu(display("Failed to load layered config, source: {}", source))]
LoadLayeredConfig {
source: ConfigError,
location: Location,
},
#[snafu(display("Failed to start catalog manager, source: {}", source))]
StartCatalogManager {
location: Location,
source: catalog::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -160,31 +160,29 @@ pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::StartDatanode { source } => source.status_code(),
Error::StartFrontend { source } => source.status_code(),
Error::ShutdownDatanode { source } => source.status_code(),
Error::ShutdownFrontend { source } => source.status_code(),
Error::StartMetaServer { source } => source.status_code(),
Error::ShutdownMetaServer { source } => source.status_code(),
Error::BuildMetaServer { source } => source.status_code(),
Error::StartDatanode { source, .. } => source.status_code(),
Error::StartFrontend { source, .. } => source.status_code(),
Error::ShutdownDatanode { source, .. } => source.status_code(),
Error::ShutdownFrontend { source, .. } => source.status_code(),
Error::StartMetaServer { source, .. } => source.status_code(),
Error::ShutdownMetaServer { source, .. } => source.status_code(),
Error::BuildMetaServer { source, .. } => source.status_code(),
Error::UnsupportedSelectorType { source, .. } => source.status_code(),
Error::ReadConfig { .. } | Error::ParseConfig { .. } | Error::MissingConfig { .. } => {
StatusCode::InvalidArguments
}
Error::IllegalConfig { .. } | Error::InvalidReplCommand { .. } => {
StatusCode::InvalidArguments
}
Error::IllegalAuthConfig { .. } => StatusCode::InvalidArguments,
Error::MissingConfig { .. }
| Error::LoadLayeredConfig { .. }
| Error::IllegalConfig { .. }
| Error::InvalidReplCommand { .. }
| Error::IllegalAuthConfig { .. } => StatusCode::InvalidArguments,
Error::ReplCreation { .. } | Error::Readline { .. } => StatusCode::Internal,
Error::RequestDatabase { source, .. } => source.status_code(),
Error::CollectRecordBatches { source } | Error::PrettyPrintRecordBatches { source } => {
Error::CollectRecordBatches { source, .. }
| Error::PrettyPrintRecordBatches { source, .. } => source.status_code(),
Error::StartMetaClient { source, .. } => source.status_code(),
Error::ParseSql { source, .. } | Error::PlanStatement { source, .. } => {
source.status_code()
}
Error::StartMetaClient { source } => source.status_code(),
Error::ParseSql { source, .. } | Error::PlanStatement { source } => {
source.status_code()
}
Error::SubstraitEncodeLogicalPlan { source } => source.status_code(),
Error::SubstraitEncodeLogicalPlan { source, .. } => source.status_code(),
Error::StartCatalogManager { source, .. } => source.status_code(),
}
}


@@ -16,30 +16,31 @@ use std::sync::Arc;
use clap::Parser;
use common_base::Plugins;
use common_telemetry::logging;
use frontend::frontend::FrontendOptions;
use frontend::grpc::GrpcOptions;
use frontend::influxdb::InfluxdbOptions;
use frontend::instance::{FrontendInstance, Instance as FeInstance};
use frontend::mysql::MysqlOptions;
use frontend::opentsdb::OpentsdbOptions;
use frontend::postgres::PostgresOptions;
use frontend::prom::PromOptions;
use frontend::service_config::{InfluxdbOptions, PromOptions};
use meta_client::MetaClientOptions;
use servers::auth::UserProviderRef;
use servers::tls::{TlsMode, TlsOption};
use servers::{auth, Mode};
use snafu::ResultExt;
use crate::error::{self, IllegalAuthConfigSnafu, Result};
use crate::error::{self, IllegalAuthConfigSnafu, Result, StartCatalogManagerSnafu};
use crate::options::{Options, TopLevelOptions};
use crate::toml_loader;
pub struct Instance {
frontend: FeInstance,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
pub async fn start(&mut self) -> Result<()> {
self.frontend
.catalog_manager()
.start()
.await
.context(StartCatalogManagerSnafu)?;
self.frontend
.start()
.await
@@ -89,7 +90,7 @@ impl SubCommand {
}
}
#[derive(Debug, Parser)]
#[derive(Debug, Default, Parser)]
pub struct StartCommand {
#[clap(long)]
http_addr: Option<String>,
@@ -107,8 +108,8 @@ pub struct StartCommand {
config_file: Option<String>,
#[clap(short, long)]
influxdb_enable: Option<bool>,
#[clap(long)]
metasrv_addr: Option<String>,
#[clap(long, multiple = true, value_delimiter = ',')]
metasrv_addr: Option<Vec<String>>,
#[clap(long)]
tls_mode: Option<TlsMode>,
#[clap(long)]
@@ -119,31 +120,36 @@ pub struct StartCommand {
user_provider: Option<String>,
#[clap(long)]
disable_dashboard: Option<bool>,
#[clap(long, default_value = "GREPTIMEDB_FRONTEND")]
env_prefix: String,
}
impl StartCommand {
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
let mut opts: FrontendOptions = if let Some(path) = &self.config_file {
toml_loader::from_file!(path)?
} else {
FrontendOptions::default()
};
let mut opts: FrontendOptions = Options::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
FrontendOptions::env_list_keys(),
)?;
if let Some(dir) = top_level_opts.log_dir {
opts.logging.dir = dir;
}
if let Some(level) = top_level_opts.log_level {
opts.logging.level = level;
if top_level_opts.log_level.is_some() {
opts.logging.level = top_level_opts.log_level;
}
let tls_option = TlsOption::new(
let tls_opts = TlsOption::new(
self.tls_mode.clone(),
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
);
if let Some(addr) = self.http_addr.clone() {
opts.http_options.get_or_insert_with(Default::default).addr = addr;
if let Some(addr) = &self.http_addr {
if let Some(http_opts) = &mut opts.http_options {
http_opts.addr = addr.clone()
}
}
if let Some(disable_dashboard) = self.disable_dashboard {
@@ -152,47 +158,44 @@ impl StartCommand {
.disable_dashboard = disable_dashboard;
}
if let Some(addr) = self.grpc_addr.clone() {
opts.grpc_options = Some(GrpcOptions {
addr,
..Default::default()
});
if let Some(addr) = &self.grpc_addr {
if let Some(grpc_opts) = &mut opts.grpc_options {
grpc_opts.addr = addr.clone()
}
}
if let Some(addr) = self.mysql_addr.clone() {
opts.mysql_options = Some(MysqlOptions {
addr,
tls: tls_option.clone(),
..Default::default()
});
if let Some(addr) = &self.mysql_addr {
if let Some(mysql_opts) = &mut opts.mysql_options {
mysql_opts.addr = addr.clone();
mysql_opts.tls = tls_opts.clone();
}
}
if let Some(addr) = self.prom_addr.clone() {
opts.prom_options = Some(PromOptions { addr });
if let Some(addr) = &self.prom_addr {
opts.prom_options = Some(PromOptions { addr: addr.clone() });
}
if let Some(addr) = self.postgres_addr.clone() {
opts.postgres_options = Some(PostgresOptions {
addr,
tls: tls_option,
..Default::default()
});
if let Some(addr) = &self.postgres_addr {
if let Some(postgres_opts) = &mut opts.postgres_options {
postgres_opts.addr = addr.clone();
postgres_opts.tls = tls_opts;
}
}
if let Some(addr) = self.opentsdb_addr.clone() {
opts.opentsdb_options = Some(OpentsdbOptions {
addr,
..Default::default()
});
if let Some(addr) = &self.opentsdb_addr {
if let Some(opentsdb_addr) = &mut opts.opentsdb_options {
opentsdb_addr.addr = addr.clone();
}
}
if let Some(enable) = self.influxdb_enable {
opts.influxdb_options = Some(InfluxdbOptions { enable });
}
if let Some(metasrv_addr) = self.metasrv_addr.clone() {
if let Some(metasrv_addrs) = &self.metasrv_addr {
opts.meta_client_options
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs = metasrv_addr
.split(',')
.map(&str::trim)
.map(&str::to_string)
.collect::<Vec<_>>();
.metasrv_addrs = metasrv_addrs.clone();
opts.mode = Mode::Distributed;
}
@@ -200,6 +203,9 @@ impl StartCommand {
}
async fn build(self, opts: FrontendOptions) -> Result<Instance> {
logging::info!("Frontend start command: {:#?}", self);
logging::info!("Frontend options: {:#?}", opts);
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let mut instance = FeInstance::try_new_distributed(&opts, plugins.clone())
@@ -231,27 +237,23 @@ mod tests {
use std::time::Duration;
use common_test_util::temp_dir::create_named_temp_file;
use frontend::service_config::GrpcOptions;
use servers::auth::{Identity, Password, UserProviderRef};
use super::*;
use crate::options::ENV_VAR_SEP;
#[test]
fn test_try_from_start_command() {
let command = StartCommand {
http_addr: Some("127.0.0.1:1234".to_string()),
grpc_addr: None,
prom_addr: Some("127.0.0.1:4444".to_string()),
mysql_addr: Some("127.0.0.1:5678".to_string()),
postgres_addr: Some("127.0.0.1:5432".to_string()),
opentsdb_addr: Some("127.0.0.1:4321".to_string()),
influxdb_enable: Some(false),
config_file: None,
metasrv_addr: None,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: Some(false),
..Default::default()
};
let Options::Frontend(opts) =
@@ -299,7 +301,7 @@ mod tests {
[http_options]
addr = "127.0.0.1:4000"
timeout = "30s"
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
@@ -307,20 +309,9 @@ mod tests {
write!(file, "{}", toml_str).unwrap();
let command = StartCommand {
http_addr: None,
grpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
influxdb_enable: None,
config_file: Some(file.path().to_str().unwrap().to_string()),
metasrv_addr: None,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: Some(false),
..Default::default()
};
let Options::Frontend(fe_opts) =
@@ -335,27 +326,16 @@ mod tests {
fe_opts.http_options.as_ref().unwrap().timeout
);
assert_eq!("debug".to_string(), fe_opts.logging.level);
assert_eq!("debug", fe_opts.logging.level.as_ref().unwrap());
assert_eq!("/tmp/greptimedb/test/logs".to_string(), fe_opts.logging.dir);
}
#[tokio::test]
async fn test_try_from_start_command_to_anymap() {
let command = StartCommand {
http_addr: None,
grpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
influxdb_enable: None,
config_file: None,
metasrv_addr: None,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
disable_dashboard: Some(false),
..Default::default()
};
let plugins = load_frontend_plugins(&command.user_provider);
@@ -377,20 +357,8 @@ mod tests {
#[test]
fn test_top_level_options() {
let cmd = StartCommand {
http_addr: None,
grpc_addr: None,
mysql_addr: None,
prom_addr: None,
postgres_addr: None,
opentsdb_addr: None,
influxdb_enable: None,
config_file: None,
metasrv_addr: None,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: None,
disable_dashboard: Some(false),
..Default::default()
};
let options = cmd
@@ -402,6 +370,116 @@ mod tests {
let logging_opt = options.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level);
assert_eq!("debug", logging_opt.level.as_ref().unwrap());
}
#[test]
fn test_config_precedence_order() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "distributed"
[http_options]
addr = "127.0.0.1:4000"
[meta_client_options]
timeout_millis = 3000
connect_timeout_millis = 5000
tcp_nodelay = true
[mysql_options]
addr = "127.0.0.1:4002"
"#;
write!(file, "{}", toml_str).unwrap();
let env_prefix = "FRONTEND_UT";
temp_env::with_vars(
vec![
(
// mysql_options.addr = 127.0.0.1:14002
vec![
env_prefix.to_string(),
"mysql_options".to_uppercase(),
"addr".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:14002"),
),
(
// mysql_options.runtime_size = 11
vec![
env_prefix.to_string(),
"mysql_options".to_uppercase(),
"runtime_size".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("11"),
),
(
// http_options.addr = 127.0.0.1:24000
vec![
env_prefix.to_string(),
"http_options".to_uppercase(),
"addr".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:24000"),
),
(
// meta_client_options.metasrv_addrs = 127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003
vec![
env_prefix.to_string(),
"meta_client_options".to_uppercase(),
"metasrv_addrs".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003"),
),
],
|| {
let command = StartCommand {
config_file: Some(file.path().to_str().unwrap().to_string()),
http_addr: Some("127.0.0.1:14000".to_string()),
env_prefix: env_prefix.to_string(),
..Default::default()
};
let top_level_opts = TopLevelOptions {
log_dir: None,
log_level: Some("error".to_string()),
};
let Options::Frontend(fe_opts) =
command.load_options(top_level_opts).unwrap() else {unreachable!()};
// Should be read from env, env > default values.
assert_eq!(fe_opts.mysql_options.as_ref().unwrap().runtime_size, 11);
assert_eq!(
fe_opts.meta_client_options.unwrap().metasrv_addrs,
vec![
"127.0.0.1:3001".to_string(),
"127.0.0.1:3002".to_string(),
"127.0.0.1:3003".to_string()
]
);
// Should be read from config file, config file > env > default values.
assert_eq!(
fe_opts.mysql_options.as_ref().unwrap().addr,
"127.0.0.1:4002"
);
// Should be read from cli, cli > config file > env > default values.
assert_eq!(
fe_opts.http_options.as_ref().unwrap().addr,
"127.0.0.1:14000"
);
// Should be default value.
assert_eq!(
fe_opts.grpc_options.as_ref().unwrap().addr,
GrpcOptions::default().addr
);
},
);
}
}


@@ -21,4 +21,3 @@ pub mod frontend;
pub mod metasrv;
pub mod options;
pub mod standalone;
mod toml_loader;


@@ -20,16 +20,15 @@ use meta_srv::bootstrap::MetaSrvInstance;
use meta_srv::metasrv::MetaSrvOptions;
use snafu::ResultExt;
use crate::error::Result;
use crate::error::{self, Result};
use crate::options::{Options, TopLevelOptions};
use crate::{error, toml_loader};
pub struct Instance {
instance: MetaSrvInstance,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
pub async fn start(&mut self) -> Result<()> {
self.instance
.start()
.await
@@ -79,7 +78,7 @@ impl SubCommand {
}
}
#[derive(Debug, Parser)]
#[derive(Debug, Default, Parser)]
struct StartCommand {
#[clap(long)]
bind_addr: Option<String>,
@@ -97,32 +96,38 @@ struct StartCommand {
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
#[clap(long, default_value = "GREPTIMEDB_METASRV")]
env_prefix: String,
}
impl StartCommand {
fn load_options(&self, top_level_opts: TopLevelOptions) -> Result<Options> {
let mut opts: MetaSrvOptions = if let Some(path) = &self.config_file {
toml_loader::from_file!(path)?
} else {
MetaSrvOptions::default()
};
let mut opts: MetaSrvOptions = Options::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
None,
)?;
if let Some(dir) = top_level_opts.log_dir {
opts.logging.dir = dir;
}
if let Some(level) = top_level_opts.log_level {
opts.logging.level = level;
if top_level_opts.log_level.is_some() {
opts.logging.level = top_level_opts.log_level;
}
if let Some(addr) = self.bind_addr.clone() {
opts.bind_addr = addr;
if let Some(addr) = &self.bind_addr {
opts.bind_addr = addr.clone();
}
if let Some(addr) = self.server_addr.clone() {
opts.server_addr = addr;
if let Some(addr) = &self.server_addr {
opts.server_addr = addr.clone();
}
if let Some(addr) = self.store_addr.clone() {
opts.store_addr = addr;
if let Some(addr) = &self.store_addr {
opts.store_addr = addr.clone();
}
if let Some(selector_type) = &self.selector {
opts.selector = selector_type[..]
.try_into()
@@ -133,9 +138,10 @@ impl StartCommand {
opts.use_memory_store = true;
}
if let Some(http_addr) = self.http_addr.clone() {
opts.http_opts.addr = http_addr;
if let Some(http_addr) = &self.http_addr {
opts.http_opts.addr = http_addr.clone();
}
if let Some(http_timeout) = self.http_timeout {
opts.http_opts.timeout = Duration::from_secs(http_timeout);
}
@@ -167,6 +173,7 @@ mod tests {
use meta_srv::selector::SelectorType;
use super::*;
use crate::options::ENV_VAR_SEP;
#[test]
fn test_read_from_cmd() {
@@ -174,11 +181,8 @@ mod tests {
bind_addr: Some("127.0.0.1:3002".to_string()),
server_addr: Some("127.0.0.1:3002".to_string()),
store_addr: Some("127.0.0.1:2380".to_string()),
config_file: None,
selector: Some("LoadBased".to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
..Default::default()
};
let Options::Metasrv(options) =
@@ -206,14 +210,8 @@ mod tests {
write!(file, "{}", toml_str).unwrap();
let cmd = StartCommand {
bind_addr: None,
server_addr: None,
store_addr: None,
selector: None,
config_file: Some(file.path().to_str().unwrap().to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
..Default::default()
};
let Options::Metasrv(options) =
@@ -223,7 +221,7 @@ mod tests {
assert_eq!("127.0.0.1:2379".to_string(), options.store_addr);
assert_eq!(15, options.datanode_lease_secs);
assert_eq!(SelectorType::LeaseBased, options.selector);
assert_eq!("debug".to_string(), options.logging.level);
assert_eq!("debug", options.logging.level.as_ref().unwrap());
assert_eq!("/tmp/greptimedb/test/logs".to_string(), options.logging.dir);
}
@@ -233,11 +231,8 @@ mod tests {
bind_addr: Some("127.0.0.1:3002".to_string()),
server_addr: Some("127.0.0.1:3002".to_string()),
store_addr: Some("127.0.0.1:2380".to_string()),
config_file: None,
selector: Some("LoadBased".to_string()),
use_memory_store: false,
http_addr: None,
http_timeout: None,
..Default::default()
};
let options = cmd
@@ -249,6 +244,74 @@ mod tests {
let logging_opt = options.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opt.dir);
assert_eq!("debug", logging_opt.level);
assert_eq!("debug", logging_opt.level.as_ref().unwrap());
}
#[test]
fn test_config_precedence_order() {
let mut file = create_named_temp_file();
let toml_str = r#"
server_addr = "127.0.0.1:3002"
datanode_lease_secs = 15
selector = "LeaseBased"
use_memory_store = false
[http_options]
addr = "127.0.0.1:4000"
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
let env_prefix = "METASRV_UT";
temp_env::with_vars(
vec![
(
// bind_addr = 127.0.0.1:14002
vec![env_prefix.to_string(), "bind_addr".to_uppercase()].join(ENV_VAR_SEP),
Some("127.0.0.1:14002"),
),
(
// server_addr = 127.0.0.1:13002
vec![env_prefix.to_string(), "server_addr".to_uppercase()].join(ENV_VAR_SEP),
Some("127.0.0.1:13002"),
),
(
// http_options.addr = 127.0.0.1:24000
vec![
env_prefix.to_string(),
"http_options".to_uppercase(),
"addr".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:24000"),
),
],
|| {
let command = StartCommand {
http_addr: Some("127.0.0.1:14000".to_string()),
config_file: Some(file.path().to_str().unwrap().to_string()),
env_prefix: env_prefix.to_string(),
..Default::default()
};
let Options::Metasrv(opts) =
command.load_options(TopLevelOptions::default()).unwrap() else {unreachable!()};
// Should be read from env, env > default values.
assert_eq!(opts.bind_addr, "127.0.0.1:14002");
// Should be read from config file, config file > env > default values.
assert_eq!(opts.server_addr, "127.0.0.1:3002");
// Should be read from cli, cli > config file > env > default values.
assert_eq!(opts.http_opts.addr, "127.0.0.1:14000");
// Should be default value.
assert_eq!(opts.store_addr, "127.0.0.1:2379");
},
);
}
}


@@ -11,10 +11,19 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_telemetry::logging::LoggingOptions;
use config::{Config, Environment, File, FileFormat};
use datanode::datanode::DatanodeOptions;
use frontend::frontend::FrontendOptions;
use meta_srv::metasrv::MetaSrvOptions;
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use crate::error::{LoadLayeredConfigSnafu, Result};
pub const ENV_VAR_SEP: &str = "__";
pub const ENV_LIST_SEP: &str = ",";
pub struct MixOptions {
pub fe_opts: FrontendOptions,
@@ -30,6 +39,12 @@ pub enum Options {
Cli(Box<LoggingOptions>),
}
#[derive(Clone, Debug, Default)]
pub struct TopLevelOptions {
pub log_dir: Option<String>,
pub log_level: Option<String>,
}
impl Options {
pub fn logging_options(&self) -> &LoggingOptions {
match self {
@@ -40,10 +55,220 @@ impl Options {
Options::Cli(opts) => opts,
}
}
/// Load the configuration from multiple sources and merge them.
/// The precedence order is: config file > environment variables > default values.
/// `env_prefix` is the prefix of the environment variables, e.g. `FRONTEND` for variables named like `FRONTEND__xxx`.
/// The function uses a dunder (double underscore) `__` as the separator inside environment variable names; for example,
/// `DATANODE__STORAGE__MANIFEST__CHECKPOINT_MARGIN` maps to the `DatanodeOptions.storage.manifest.checkpoint_margin` field in the configuration.
/// `list_keys` is the list of keys that should be parsed as lists; for example, pass `Some(&["meta_client_options.metasrv_addrs"])` to parse `GREPTIMEDB_METASRV__META_CLIENT_OPTIONS__METASRV_ADDRS` as a list.
/// The comma `,` is used as the separator for list values, e.g. `127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003`.
pub fn load_layered_options<'de, T: Serialize + Deserialize<'de> + Default>(
config_file: Option<&str>,
env_prefix: &str,
list_keys: Option<&[&str]>,
) -> Result<T> {
let default_opts = T::default();
let env_source = {
let mut env = Environment::default();
if !env_prefix.is_empty() {
env = env.prefix(env_prefix);
}
if let Some(list_keys) = list_keys {
env = env.list_separator(ENV_LIST_SEP);
for key in list_keys {
env = env.with_list_parse_key(key);
}
}
env.try_parsing(true)
.separator(ENV_VAR_SEP)
.ignore_empty(true)
};
// Add default values and environment variables as the sources of the configuration.
let mut layered_config = Config::builder()
.add_source(Config::try_from(&default_opts).context(LoadLayeredConfigSnafu)?)
.add_source(env_source);
// Add config file as the source of the configuration if it is specified.
if let Some(config_file) = config_file {
layered_config = layered_config.add_source(File::new(config_file, FileFormat::Toml));
}
let opts = layered_config
.build()
.context(LoadLayeredConfigSnafu)?
.try_deserialize()
.context(LoadLayeredConfigSnafu)?;
Ok(opts)
}
}
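Every StartCommand::load_options in this change delegates to the loader above and then applies its own CLI flags on top of the result, which is where the cli > config file > env > default ordering checked in the tests comes from. A minimal call sketch, with an assumed (illustrative) config path:

// The file path is illustrative; passing `None` skips the file source entirely.
let opts: DatanodeOptions = Options::load_layered_options(
    Some("/etc/greptimedb/datanode.toml"),
    "GREPTIMEDB_DATANODE",
    DatanodeOptions::env_list_keys(),
)?;
// With this prefix, GREPTIMEDB_DATANODE__STORAGE__MANIFEST__CHECKPOINT_MARGIN=9
// fills opts.storage.manifest.checkpoint_margin unless the TOML file also sets it.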
#[derive(Clone, Debug, Default)]
pub struct TopLevelOptions {
pub log_dir: Option<String>,
pub log_level: Option<String>,
#[cfg(test)]
mod tests {
use std::io::Write;
use std::time::Duration;
use common_test_util::temp_dir::create_named_temp_file;
use datanode::datanode::{DatanodeOptions, ObjectStoreConfig};
use super::*;
#[test]
fn test_load_layered_options() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "distributed"
enable_memory_catalog = false
rpc_addr = "127.0.0.1:3001"
rpc_hostname = "127.0.0.1"
rpc_runtime_size = 8
mysql_addr = "127.0.0.1:4406"
mysql_runtime_size = 2
[meta_client_options]
timeout_millis = 3000
connect_timeout_millis = 5000
tcp_nodelay = true
[wal]
dir = "/tmp/greptimedb/wal"
file_size = "1GB"
purge_threshold = "50GB"
purge_interval = "10m"
read_batch_size = 128
sync_write = false
[storage.compaction]
max_inflight_tasks = 3
max_files_in_level0 = 7
max_purge_tasks = 32
[logging]
level = "debug"
dir = "/tmp/greptimedb/test/logs"
"#;
write!(file, "{}", toml_str).unwrap();
let env_prefix = "DATANODE_UT";
temp_env::with_vars(
// These environment variables override the defaults, but keys that also appear in the config file keep the file's values.
vec![
(
// storage.manifest.checkpoint_margin = 99
vec![
env_prefix.to_string(),
"storage".to_uppercase(),
"manifest".to_uppercase(),
"checkpoint_margin".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("99"),
),
(
// storage.type = S3
vec![
env_prefix.to_string(),
"storage".to_uppercase(),
"type".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("S3"),
),
(
// storage.bucket = mybucket
vec![
env_prefix.to_string(),
"storage".to_uppercase(),
"bucket".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("mybucket"),
),
(
// storage.manifest.gc_duration = 42s
vec![
env_prefix.to_string(),
"storage".to_uppercase(),
"manifest".to_uppercase(),
"gc_duration".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("42s"),
),
(
// storage.manifest.checkpoint_on_startup = true
vec![
env_prefix.to_string(),
"storage".to_uppercase(),
"manifest".to_uppercase(),
"checkpoint_on_startup".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("true"),
),
(
// wal.dir = /other/wal/dir
vec![
env_prefix.to_string(),
"wal".to_uppercase(),
"dir".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("/other/wal/dir"),
),
(
// meta_client_options.metasrv_addrs = 127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003
vec![
env_prefix.to_string(),
"meta_client_options".to_uppercase(),
"metasrv_addrs".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:3001,127.0.0.1:3002,127.0.0.1:3003"),
),
],
|| {
let opts: DatanodeOptions = Options::load_layered_options(
Some(file.path().to_str().unwrap()),
env_prefix,
DatanodeOptions::env_list_keys(),
)
.unwrap();
// Check the configs from environment variables.
assert_eq!(opts.storage.manifest.checkpoint_margin, Some(99));
match opts.storage.store {
ObjectStoreConfig::S3(s3_config) => {
assert_eq!(s3_config.bucket, "mybucket".to_string());
}
_ => panic!("unexpected store type"),
}
assert_eq!(
opts.storage.manifest.gc_duration,
Some(Duration::from_secs(42))
);
assert!(opts.storage.manifest.checkpoint_on_startup);
assert_eq!(
opts.meta_client_options.unwrap().metasrv_addrs,
vec![
"127.0.0.1:3001".to_string(),
"127.0.0.1:3002".to_string(),
"127.0.0.1:3003".to_string()
]
);
// Should be the values from config file, not environment variables.
assert_eq!(opts.wal.dir.unwrap(), "/tmp/greptimedb/wal");
// Should be default values.
assert_eq!(opts.node_id, None);
},
);
}
}


@@ -21,14 +21,11 @@ use common_telemetry::logging::LoggingOptions;
use datanode::datanode::{Datanode, DatanodeOptions, ProcedureConfig, StorageConfig, WalConfig};
use datanode::instance::InstanceRef;
use frontend::frontend::FrontendOptions;
use frontend::grpc::GrpcOptions;
use frontend::influxdb::InfluxdbOptions;
use frontend::instance::{FrontendInstance, Instance as FeInstance};
use frontend::mysql::MysqlOptions;
use frontend::opentsdb::OpentsdbOptions;
use frontend::postgres::PostgresOptions;
use frontend::prom::PromOptions;
use frontend::prometheus::PrometheusOptions;
use frontend::service_config::{
GrpcOptions, InfluxdbOptions, MysqlOptions, OpentsdbOptions, PostgresOptions, PromOptions,
PrometheusOptions,
};
use serde::{Deserialize, Serialize};
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
@@ -41,7 +38,6 @@ use crate::error::{
};
use crate::frontend::load_frontend_plugins;
use crate::options::{MixOptions, Options, TopLevelOptions};
use crate::toml_loader;
#[derive(Parser)]
pub struct Command {
@@ -156,7 +152,7 @@ pub struct Instance {
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
pub async fn start(&mut self) -> Result<()> {
// Start the datanode instance before starting services, to avoid requests coming in before internal components are started.
self.datanode
.start_instance()
@@ -184,7 +180,7 @@ impl Instance {
}
}
#[derive(Debug, Parser)]
#[derive(Debug, Default, Parser)]
struct StartCommand {
#[clap(long)]
http_addr: Option<String>,
@@ -212,43 +208,46 @@ struct StartCommand {
tls_key_path: Option<String>,
#[clap(long)]
user_provider: Option<String>,
#[clap(long, default_value = "GREPTIMEDB_STANDALONE")]
env_prefix: String,
}
impl StartCommand {
fn load_options(&self, top_level_options: TopLevelOptions) -> Result<Options> {
let enable_memory_catalog = self.enable_memory_catalog;
let config_file = &self.config_file;
let mut opts: StandaloneOptions = if let Some(path) = config_file {
toml_loader::from_file!(path)?
} else {
StandaloneOptions::default()
};
let mut opts: StandaloneOptions = Options::load_layered_options(
self.config_file.as_deref(),
self.env_prefix.as_ref(),
None,
)?;
opts.enable_memory_catalog = enable_memory_catalog;
opts.enable_memory_catalog = self.enable_memory_catalog;
let mut fe_opts = opts.clone().frontend_options();
let mut logging = opts.logging.clone();
let dn_opts = opts.datanode_options();
opts.mode = Mode::Standalone;
if let Some(dir) = top_level_options.log_dir {
logging.dir = dir;
}
if let Some(level) = top_level_options.log_level {
logging.level = level;
opts.logging.dir = dir;
}
fe_opts.mode = Mode::Standalone;
if let Some(addr) = self.http_addr.clone() {
fe_opts.http_options = Some(HttpOptions {
addr,
..Default::default()
});
if top_level_options.log_level.is_some() {
opts.logging.level = top_level_options.log_level;
}
if let Some(addr) = self.rpc_addr.clone() {
let tls_opts = TlsOption::new(
self.tls_mode.clone(),
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
);
if let Some(addr) = &self.http_addr {
if let Some(http_opts) = &mut opts.http_options {
http_opts.addr = addr.clone()
}
}
if let Some(addr) = &self.rpc_addr {
// The frontend gRPC addr must not conflict with the datanode's default gRPC addr.
let datanode_grpc_addr = DatanodeOptions::default().rpc_addr;
if addr == datanode_grpc_addr {
if addr.eq(&datanode_grpc_addr) {
return IllegalConfigSnafu {
msg: format!(
"gRPC listen address conflicts with datanode reserved gRPC addr: {datanode_grpc_addr}",
@@ -256,56 +255,42 @@ impl StartCommand {
}
.fail();
}
fe_opts.grpc_options = Some(GrpcOptions {
addr,
..Default::default()
});
if let Some(grpc_opts) = &mut opts.grpc_options {
grpc_opts.addr = addr.clone()
}
}
if let Some(addr) = self.mysql_addr.clone() {
fe_opts.mysql_options = Some(MysqlOptions {
addr,
..Default::default()
})
if let Some(addr) = &self.mysql_addr {
if let Some(mysql_opts) = &mut opts.mysql_options {
mysql_opts.addr = addr.clone();
mysql_opts.tls = tls_opts.clone();
}
}
if let Some(addr) = self.prom_addr.clone() {
fe_opts.prom_options = Some(PromOptions { addr })
if let Some(addr) = &self.prom_addr {
opts.prom_options = Some(PromOptions { addr: addr.clone() })
}
if let Some(addr) = self.postgres_addr.clone() {
fe_opts.postgres_options = Some(PostgresOptions {
addr,
..Default::default()
})
if let Some(addr) = &self.postgres_addr {
if let Some(postgres_opts) = &mut opts.postgres_options {
postgres_opts.addr = addr.clone();
postgres_opts.tls = tls_opts;
}
}
if let Some(addr) = self.opentsdb_addr.clone() {
fe_opts.opentsdb_options = Some(OpentsdbOptions {
addr,
..Default::default()
});
if let Some(addr) = &self.opentsdb_addr {
if let Some(opentsdb_addr) = &mut opts.opentsdb_options {
opentsdb_addr.addr = addr.clone();
}
}
if self.influxdb_enable {
fe_opts.influxdb_options = Some(InfluxdbOptions { enable: true });
opts.influxdb_options = Some(InfluxdbOptions { enable: true });
}
let tls_option = TlsOption::new(
self.tls_mode.clone(),
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
);
if let Some(mut mysql_options) = fe_opts.mysql_options {
mysql_options.tls = tls_option.clone();
fe_opts.mysql_options = Some(mysql_options);
}
if let Some(mut postgres_options) = fe_opts.postgres_options {
postgres_options.tls = tls_option;
fe_opts.postgres_options = Some(postgres_options);
}
let fe_opts = opts.clone().frontend_options();
let logging = opts.logging.clone();
let dn_opts = opts.datanode_options();
Ok(Options::Standalone(Box::new(MixOptions {
fe_opts,
@@ -317,6 +302,7 @@ impl StartCommand {
async fn build(self, fe_opts: FrontendOptions, dn_opts: DatanodeOptions) -> Result<Instance> {
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
info!("Standalone start command: {:#?}", self);
info!(
"Standalone frontend options: {:#?}, datanode options: {:#?}",
fe_opts, dn_opts
@@ -351,6 +337,7 @@ async fn build_frontend(
#[cfg(test)]
mod tests {
use std::default::Default;
use std::io::Write;
use std::time::Duration;
@@ -359,23 +346,13 @@ mod tests {
use servers::Mode;
use super::*;
use crate::options::ENV_VAR_SEP;
#[tokio::test]
async fn test_try_from_start_command_to_anymap() {
let command = StartCommand {
http_addr: None,
rpc_addr: None,
prom_addr: None,
mysql_addr: None,
postgres_addr: None,
opentsdb_addr: None,
config_file: None,
influxdb_enable: false,
enable_memory_catalog: false,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
..Default::default()
};
let plugins = load_frontend_plugins(&command.user_provider);
@@ -441,19 +418,9 @@ mod tests {
"#;
write!(file, "{}", toml_str).unwrap();
let cmd = StartCommand {
http_addr: None,
rpc_addr: None,
prom_addr: None,
mysql_addr: None,
postgres_addr: None,
opentsdb_addr: None,
config_file: Some(file.path().to_str().unwrap().to_string()),
influxdb_enable: false,
enable_memory_catalog: false,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
..Default::default()
};
let Options::Standalone(options) = cmd.load_options(TopLevelOptions::default()).unwrap() else {unreachable!()};
@@ -484,7 +451,7 @@ mod tests {
);
assert!(fe_opts.influxdb_options.as_ref().unwrap().enable);
assert_eq!("/tmp/greptimedb/test/wal", dn_opts.wal.dir);
assert_eq!("/tmp/greptimedb/test/wal", dn_opts.wal.dir.unwrap());
match &dn_opts.storage.store {
datanode::datanode::ObjectStoreConfig::S3(s3_config) => {
assert_eq!(
@@ -497,26 +464,15 @@ mod tests {
}
}
assert_eq!("debug".to_string(), logging_opts.level);
assert_eq!("debug", logging_opts.level.as_ref().unwrap());
assert_eq!("/tmp/greptimedb/test/logs".to_string(), logging_opts.dir);
}
#[test]
fn test_top_level_options() {
let cmd = StartCommand {
http_addr: None,
rpc_addr: None,
prom_addr: None,
mysql_addr: None,
postgres_addr: None,
opentsdb_addr: None,
config_file: None,
influxdb_enable: false,
enable_memory_catalog: false,
tls_mode: None,
tls_cert_path: None,
tls_key_path: None,
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
..Default::default()
};
let Options::Standalone(opts) = cmd
@@ -529,6 +485,90 @@ mod tests {
};
assert_eq!("/tmp/greptimedb/test/logs", opts.logging.dir);
assert_eq!("debug", opts.logging.level);
assert_eq!("debug", opts.logging.level.unwrap());
}
#[test]
fn test_config_precedence_order() {
let mut file = create_named_temp_file();
let toml_str = r#"
mode = "standalone"
[http_options]
addr = "127.0.0.1:4000"
[logging]
level = "debug"
"#;
write!(file, "{}", toml_str).unwrap();
let env_prefix = "STANDALONE_UT";
temp_env::with_vars(
vec![
(
// logging.dir = /other/log/dir
vec![
env_prefix.to_string(),
"logging".to_uppercase(),
"dir".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("/other/log/dir"),
),
(
// logging.level = info
vec![
env_prefix.to_string(),
"logging".to_uppercase(),
"level".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("info"),
),
(
// http_options.addr = 127.0.0.1:24000
vec![
env_prefix.to_string(),
"http_options".to_uppercase(),
"addr".to_uppercase(),
]
.join(ENV_VAR_SEP),
Some("127.0.0.1:24000"),
),
],
|| {
let command = StartCommand {
config_file: Some(file.path().to_str().unwrap().to_string()),
http_addr: Some("127.0.0.1:14000".to_string()),
env_prefix: env_prefix.to_string(),
..Default::default()
};
let top_level_opts = TopLevelOptions {
log_dir: None,
log_level: None,
};
let Options::Standalone(opts) =
command.load_options(top_level_opts).unwrap() else {unreachable!()};
// Should be read from env, env > default values.
assert_eq!(opts.logging.dir, "/other/log/dir");
// Should be read from config file, config file > env > default values.
assert_eq!(opts.logging.level.as_ref().unwrap(), "debug");
// Should be read from cli, cli > config file > env > default values.
assert_eq!(
opts.fe_opts.http_options.as_ref().unwrap().addr,
"127.0.0.1:14000"
);
// Should be default value.
assert_eq!(
opts.fe_opts.grpc_options.unwrap().addr,
GrpcOptions::default().addr
);
},
);
}
}


@@ -1,94 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
macro_rules! from_file {
($path: expr) => {
toml::from_str(
&std::fs::read_to_string($path)
.context(crate::error::ReadConfigSnafu { path: $path })?,
)
.context(crate::error::ParseConfigSnafu)
};
}
pub(crate) use from_file;
#[cfg(test)]
mod tests {
use std::fs::File;
use std::io::Write;
use common_test_util::temp_dir::create_temp_dir;
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use super::*;
use crate::error::Result;
#[derive(Clone, PartialEq, Debug, Deserialize, Serialize)]
#[serde(default)]
struct MockConfig {
path: String,
port: u32,
host: String,
}
impl Default for MockConfig {
fn default() -> Self {
Self {
path: "test".to_string(),
port: 0,
host: "localhost".to_string(),
}
}
}
#[test]
fn test_from_file() -> Result<()> {
let config = MockConfig {
path: "/tmp".to_string(),
port: 999,
host: "greptime.test".to_string(),
};
let dir = create_temp_dir("test_from_file");
let test_file = format!("{}/test.toml", dir.path().to_str().unwrap());
let s = toml::to_string(&config).unwrap();
assert!(s.contains("host") && s.contains("path") && s.contains("port"));
let mut file = File::create(&test_file).unwrap();
file.write_all(s.as_bytes()).unwrap();
let loaded_config: MockConfig = from_file!(&test_file)?;
assert_eq!(loaded_config, config);
// Only host in file
let mut file = File::create(&test_file).unwrap();
file.write_all("host='greptime.test'\n".as_bytes()).unwrap();
let loaded_config: MockConfig = from_file!(&test_file)?;
assert_eq!(loaded_config.host, "greptime.test");
assert_eq!(loaded_config.port, 0);
assert_eq!(loaded_config.path, "test");
// Truncate the file.
let file = File::create(&test_file).unwrap();
file.set_len(0).unwrap();
let loaded_config: MockConfig = from_file!(&test_file)?;
assert_eq!(loaded_config, MockConfig::default());
Ok(())
}
}


@@ -51,7 +51,7 @@ mod tests {
#[ignore]
#[test]
fn test_repl() {
let data_dir = create_temp_dir("data");
let data_home = create_temp_dir("data");
let wal_dir = create_temp_dir("wal");
let mut bin_path = PathBuf::from(env!("CARGO_MANIFEST_DIR"));
@@ -65,7 +65,7 @@ mod tests {
"start",
"--rpc-addr=0.0.0.0:4321",
"--node-id=1",
&format!("--data-dir={}", data_dir.path().display()),
&format!("--data-home={}", data_home.path().display()),
&format!("--wal-dir={}", wal_dir.path().display()),
])
.stdout(Stdio::null())


@@ -15,6 +15,7 @@
pub mod bit_vec;
pub mod buffer;
pub mod bytes;
pub mod paths;
#[allow(clippy::all)]
pub mod readable_size;


@@ -0,0 +1,25 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Path constants for table engines, cluster states and WAL
/// All paths are relative to `data_home` (file storage) or to the root path (S3, OSS, etc.).
/// WAL dir for local file storage
pub const WAL_DIR: &str = "wal/";
/// Data dir for table engines
pub const DATA_DIR: &str = "data/";
/// Cluster state dir
pub const CLUSTER_DIR: &str = "cluster/";
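// Illustrative sketch (not part of this change): callers are expected to join these
// constants onto the storage root, e.g. the local `data_home` or an object-store root
// path. A hypothetical helper, assuming plain string concatenation:
fn wal_dir_under(data_home: &str) -> String {
    // "/greptimedb/" + "wal/" => "/greptimedb/wal/"
    format!("{data_home}{WAL_DIR}")
}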


@@ -36,9 +36,6 @@ pub enum Error {
location: Location,
source: serde_json::error::Error,
},
#[snafu(display("Failed to parse node id: {}", key))]
ParseNodeId { key: String, location: Location },
}
impl ErrorExt for Error {
@@ -47,7 +44,6 @@ impl ErrorExt for Error {
Error::InvalidCatalog { .. }
| Error::DeserializeCatalogEntryValue { .. }
| Error::SerializeCatalogEntryValue { .. } => StatusCode::Unexpected,
Error::ParseNodeId { .. } => StatusCode::InvalidArguments,
}
}


@@ -29,6 +29,7 @@ snafu.workspace = true
tokio.workspace = true
tokio-util.workspace = true
url = "2.3"
paste = "1.0"
[dev-dependencies]
common-test-util = { path = "../test-util" }


@@ -17,9 +17,10 @@ use std::io;
use std::str::FromStr;
use async_compression::tokio::bufread::{BzDecoder, GzipDecoder, XzDecoder, ZstdDecoder};
use async_compression::tokio::write;
use bytes::Bytes;
use futures::Stream;
use tokio::io::{AsyncRead, BufReader};
use tokio::io::{AsyncRead, AsyncWriteExt, BufReader};
use tokio_util::io::{ReaderStream, StreamReader};
use crate::error::{self, Error, Result};
@@ -73,37 +74,107 @@ impl CompressionType {
!matches!(self, &Self::Uncompressed)
}
pub fn convert_async_read<T: AsyncRead + Unpin + Send + 'static>(
&self,
s: T,
) -> Box<dyn AsyncRead + Unpin + Send> {
pub const fn file_extension(&self) -> &'static str {
match self {
CompressionType::Gzip => Box::new(GzipDecoder::new(BufReader::new(s))),
CompressionType::Bzip2 => Box::new(BzDecoder::new(BufReader::new(s))),
CompressionType::Xz => Box::new(XzDecoder::new(BufReader::new(s))),
CompressionType::Zstd => Box::new(ZstdDecoder::new(BufReader::new(s))),
CompressionType::Uncompressed => Box::new(s),
}
}
pub fn convert_stream<T: Stream<Item = io::Result<Bytes>> + Unpin + Send + 'static>(
&self,
s: T,
) -> Box<dyn Stream<Item = io::Result<Bytes>> + Send + Unpin> {
match self {
CompressionType::Gzip => {
Box::new(ReaderStream::new(GzipDecoder::new(StreamReader::new(s))))
}
CompressionType::Bzip2 => {
Box::new(ReaderStream::new(BzDecoder::new(StreamReader::new(s))))
}
CompressionType::Xz => {
Box::new(ReaderStream::new(XzDecoder::new(StreamReader::new(s))))
}
CompressionType::Zstd => {
Box::new(ReaderStream::new(ZstdDecoder::new(StreamReader::new(s))))
}
CompressionType::Uncompressed => Box::new(s),
Self::Gzip => "gz",
Self::Bzip2 => "bz2",
Self::Xz => "xz",
Self::Zstd => "zst",
Self::Uncompressed => "",
}
}
}
macro_rules! impl_compression_type {
($(($enum_item:ident, $prefix:ident)),*) => {
paste::item! {
impl CompressionType {
pub async fn encode(&self, content: impl AsRef<[u8]>) -> io::Result<Vec<u8>> {
match self {
$(
CompressionType::$enum_item => {
let mut buffer = Vec::with_capacity(content.as_ref().len());
let mut encoder = write::[<$prefix Encoder>]::new(&mut buffer);
encoder.write_all(content.as_ref()).await?;
encoder.shutdown().await?;
Ok(buffer)
}
)*
CompressionType::Uncompressed => Ok(content.as_ref().to_vec()),
}
}
pub async fn decode(&self, content: impl AsRef<[u8]>) -> io::Result<Vec<u8>> {
match self {
$(
CompressionType::$enum_item => {
let mut buffer = Vec::with_capacity(content.as_ref().len() * 2);
let mut encoder = write::[<$prefix Decoder>]::new(&mut buffer);
encoder.write_all(content.as_ref()).await?;
encoder.shutdown().await?;
Ok(buffer)
}
)*
CompressionType::Uncompressed => Ok(content.as_ref().to_vec()),
}
}
pub fn convert_async_read<T: AsyncRead + Unpin + Send + 'static>(
&self,
s: T,
) -> Box<dyn AsyncRead + Unpin + Send> {
match self {
$(CompressionType::$enum_item => Box::new([<$prefix Decoder>]::new(BufReader::new(s))),)*
CompressionType::Uncompressed => Box::new(s),
}
}
pub fn convert_stream<T: Stream<Item = io::Result<Bytes>> + Unpin + Send + 'static>(
&self,
s: T,
) -> Box<dyn Stream<Item = io::Result<Bytes>> + Send + Unpin> {
match self {
$(CompressionType::$enum_item => Box::new(ReaderStream::new([<$prefix Decoder>]::new(StreamReader::new(s)))),)*
CompressionType::Uncompressed => Box::new(s),
}
}
}
#[cfg(test)]
mod tests {
use super::CompressionType;
$(
#[tokio::test]
async fn [<test_ $enum_item:lower _compression>]() {
let string = "foo_bar".as_bytes().to_vec();
let compress = CompressionType::$enum_item
.encode(&string)
.await
.unwrap();
let decompress = CompressionType::$enum_item
.decode(&compress)
.await
.unwrap();
assert_eq!(decompress, string);
})*
#[tokio::test]
async fn test_uncompression() {
let string = "foo_bar".as_bytes().to_vec();
let compress = CompressionType::Uncompressed
.encode(&string)
.await
.unwrap();
let decompress = CompressionType::Uncompressed
.decode(&compress)
.await
.unwrap();
assert_eq!(decompress, string);
}
}
}
};
}
impl_compression_type!((Gzip, Gzip), (Bzip2, Bz), (Xz, Xz), (Zstd, Zstd));
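// Illustrative sketch (not part of this change), assuming the macro above expands as
// written: each variant gains async `encode`/`decode` helpers, and `file_extension`
// maps a variant to its conventional suffix.
async fn _gzip_roundtrip_example() -> std::io::Result<()> {
    let original = b"foo_bar".to_vec();
    let compressed = CompressionType::Gzip.encode(&original).await?;
    let restored = CompressionType::Gzip.decode(&compressed).await?;
    assert_eq!(restored, original);
    assert_eq!(CompressionType::Gzip.file_extension(), "gz");
    Ok(())
}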


@@ -41,9 +41,6 @@ pub enum Error {
#[snafu(display("empty host: {}", url))]
EmptyHostPath { url: String, location: Location },
#[snafu(display("Invalid path: {}", path))]
InvalidPath { path: String, location: Location },
#[snafu(display("Invalid url: {}, error :{}", url, source))]
InvalidUrl {
url: String,
@@ -51,12 +48,6 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to decompression, source: {}", source))]
Decompression {
source: object_store::Error,
location: Location,
},
#[snafu(display("Failed to build backend, source: {}", source))]
BuildBackend {
source: object_store::Error,
@@ -148,9 +139,6 @@ pub enum Error {
location: Location,
},
#[snafu(display("Missing required field: {}", name))]
MissingRequiredField { name: String, location: Location },
#[snafu(display("Buffered writer closed"))]
BufferedWriterClosed { location: Location },
}
@@ -173,16 +161,13 @@ impl ErrorExt for Error {
| InvalidConnection { .. }
| InvalidUrl { .. }
| EmptyHostPath { .. }
| InvalidPath { .. }
| InferSchema { .. }
| ReadParquetSnafu { .. }
| ParquetToSchema { .. }
| ParseFormat { .. }
| MergeSchema { .. }
| MissingRequiredField { .. } => StatusCode::InvalidArguments,
| MergeSchema { .. } => StatusCode::InvalidArguments,
Decompression { .. }
| JoinHandle { .. }
JoinHandle { .. }
| ReadRecordBatch { .. }
| WriteRecordBatch { .. }
| EncodeRecordBatch { .. }
@@ -203,11 +188,9 @@ impl ErrorExt for Error {
InferSchema { location, .. } => Some(*location),
ReadParquetSnafu { location, .. } => Some(*location),
ParquetToSchema { location, .. } => Some(*location),
Decompression { location, .. } => Some(*location),
JoinHandle { location, .. } => Some(*location),
ParseFormat { location, .. } => Some(*location),
MergeSchema { location, .. } => Some(*location),
MissingRequiredField { location, .. } => Some(*location),
WriteObject { location, .. } => Some(*location),
ReadRecordBatch { location, .. } => Some(*location),
WriteRecordBatch { location, .. } => Some(*location),
@@ -217,7 +200,6 @@ impl ErrorExt for Error {
UnsupportedBackendProtocol { location, .. } => Some(*location),
EmptyHostPath { location, .. } => Some(*location),
InvalidPath { location, .. } => Some(*location),
InvalidUrl { location, .. } => Some(*location),
InvalidConnection { location, .. } => Some(*location),
UnsupportedCompressionType { location, .. } => Some(*location),


@@ -110,6 +110,7 @@ impl ArrowDecoder for arrow::csv::reader::Decoder {
}
}
#[allow(deprecated)]
impl ArrowDecoder for arrow::json::RawDecoder {
fn decode(&mut self, buf: &[u8]) -> result::Result<usize, ArrowError> {
self.decode(buf)


@@ -17,6 +17,7 @@ use std::str::FromStr;
use std::sync::Arc;
use arrow::csv;
#[allow(deprecated)]
use arrow::csv::reader::infer_reader_schema as infer_csv_schema;
use arrow::record_batch::RecordBatch;
use arrow_schema::{Schema, SchemaRef};
@@ -113,8 +114,7 @@ pub struct CsvConfig {
impl CsvConfig {
fn builder(&self) -> csv::ReaderBuilder {
let mut builder = csv::ReaderBuilder::new()
.with_schema(self.file_schema.clone())
let mut builder = csv::ReaderBuilder::new(self.file_schema.clone())
.with_delimiter(self.delimiter)
.with_batch_size(self.batch_size)
.has_header(self.has_header);
@@ -160,6 +160,7 @@ impl FileOpener for CsvOpener {
}
}
#[allow(deprecated)]
#[async_trait]
impl FileFormat for CsvFormat {
async fn infer_schema(&self, store: &ObjectStore, path: &str) -> Result<Schema> {


@@ -20,6 +20,7 @@ use std::sync::Arc;
use arrow::datatypes::SchemaRef;
use arrow::json::reader::{infer_json_schema_from_iterator, ValueIter};
use arrow::json::writer::LineDelimited;
#[allow(deprecated)]
use arrow::json::{self, RawReaderBuilder};
use arrow::record_batch::RecordBatch;
use arrow_schema::Schema;
@@ -129,6 +130,7 @@ impl JsonOpener {
}
}
#[allow(deprecated)]
impl FileOpener for JsonOpener {
fn open(&self, meta: FileMeta) -> DataFusionResult<FileOpenFuture> {
open_with_decoder(
@@ -159,8 +161,7 @@ pub async fn stream_to_json(
impl DfRecordBatchEncoder for json::Writer<SharedBuffer, LineDelimited> {
fn write(&mut self, batch: &RecordBatch) -> Result<()> {
self.write(batch.clone())
.context(error::WriteRecordBatchSnafu)
self.write(batch).context(error::WriteRecordBatchSnafu)
}
}


@@ -0,0 +1,6 @@
hostname,environment,usage_user,usage_system,usage_idle,usage_nice,usage_iowait,usage_irq,usage_softirq,usage_steal,usage_guest,usage_guest_nice,ts
host_0,test,32,58,36,72,61,21,53,12,59,72,2023-04-01T00:00:00+00:00
host_1,staging,12,32,50,84,19,73,38,37,72,2,2023-04-01T00:00:00+00:00
host_2,test,98,5,40,95,64,39,21,63,53,94,2023-04-01T00:00:00+00:00
host_3,test,98,95,7,48,99,67,14,86,36,23,2023-04-01T00:00:00+00:00
host_4,test,32,44,11,53,64,9,17,39,20,7,2023-04-01T00:00:00+00:00


@@ -33,6 +33,8 @@ pub enum StatusCode {
Internal = 1003,
/// Invalid arguments.
InvalidArguments = 1004,
/// The task is cancelled.
Cancelled = 1005,
// ====== End of common status code ================
// ====== Begin of SQL related status code =========
@@ -100,6 +102,7 @@ impl StatusCode {
| StatusCode::Unsupported
| StatusCode::Unexpected
| StatusCode::InvalidArguments
| StatusCode::Cancelled
| StatusCode::InvalidSyntax
| StatusCode::PlanQuery
| StatusCode::EngineExecuteQuery
@@ -125,6 +128,7 @@ impl StatusCode {
| StatusCode::Unsupported
| StatusCode::Unexpected
| StatusCode::Internal
| StatusCode::Cancelled
| StatusCode::PlanQuery
| StatusCode::EngineExecuteQuery
| StatusCode::StorageUnavailable


@@ -8,6 +8,8 @@ license.workspace = true
proc-macro = true
[dependencies]
common-telemetry = { path = "../telemetry" }
backtrace = "0.3"
quote = "1.0"
syn = "1.0"
proc-macro2 = "1.0"


@@ -15,11 +15,13 @@
mod range_fn;
use proc_macro::TokenStream;
use quote::{quote, quote_spanned};
use quote::{quote, quote_spanned, ToTokens};
use range_fn::process_range_fn;
use syn::parse::Parser;
use syn::spanned::Spanned;
use syn::{parse_macro_input, DeriveInput, ItemStruct};
use syn::{
parse_macro_input, AttributeArgs, DeriveInput, ItemFn, ItemStruct, Lit, Meta, NestedMeta,
};
/// Make struct implemented trait [AggrFuncTypeStore], which is necessary when writing UDAF.
/// This derive macro is expect to be used along with attribute macro [as_aggr_func_creator].
@@ -114,3 +116,109 @@ pub fn as_aggr_func_creator(_args: TokenStream, input: TokenStream) -> TokenStre
pub fn range_fn(args: TokenStream, input: TokenStream) -> TokenStream {
process_range_fn(args, input)
}
/// Attribute macro that prints the caller of the annotated function.
/// Each caller is printed as its filename and call-site line number.
///
/// The macro works by injecting tracking code as the first statement of the annotated
/// function body. The tracking code uses [backtrace-rs](https://crates.io/crates/backtrace) to
/// resolve the callers, so the annotated crate must depend on the `backtrace` crate.
///
/// # Arguments
/// - `depth`: The max depth of call stack to print. Optional, defaults to 1.
///
/// # Example
/// ```rust, ignore
///
/// #[print_caller(depth = 3)]
/// fn foo() {}
/// ```
#[proc_macro_attribute]
pub fn print_caller(args: TokenStream, input: TokenStream) -> TokenStream {
let mut depth = 1;
let args = parse_macro_input!(args as AttributeArgs);
for meta in args.iter() {
if let NestedMeta::Meta(Meta::NameValue(name_value)) = meta {
let ident = name_value
.path
.get_ident()
.expect("Expected an ident!")
.to_string();
if ident == "depth" {
let Lit::Int(i) = &name_value.lit else { panic!("Expected 'depth' to be a valid int!") };
depth = i.base10_parse::<usize>().expect("Invalid 'depth' value");
break;
}
}
}
let tokens: TokenStream = quote! {
{
let curr_file = file!();
let bt = backtrace::Backtrace::new();
let call_stack = bt
.frames()
.iter()
.skip_while(|f| {
!f.symbols().iter().any(|s| {
s.filename()
.map(|p| p.ends_with(curr_file))
.unwrap_or(false)
})
})
.skip(1)
.take(#depth);
let call_stack = call_stack
.map(|f| {
f.symbols()
.iter()
.map(|s| {
let filename = s
.filename()
.map(|p| format!("{:?}", p))
.unwrap_or_else(|| "unknown".to_string());
let lineno = s
.lineno()
.map(|l| format!("{}", l))
.unwrap_or_else(|| "unknown".to_string());
format!("filename: {}, lineno: {}", filename, lineno)
})
.collect::<Vec<String>>()
.join(", ")
})
.collect::<Vec<_>>();
match call_stack.len() {
0 => common_telemetry::info!("unable to find call stack"),
1 => common_telemetry::info!("caller: {}", call_stack[0]),
_ => {
let mut s = String::new();
s.push_str("[\n");
for e in call_stack {
s.push_str("\t");
s.push_str(&e);
s.push_str("\n");
}
s.push_str("]");
common_telemetry::info!("call stack: {}", s)
}
}
}
}
.into();
let stmt = match syn::parse(tokens) {
Ok(stmt) => stmt,
Err(e) => return e.into_compile_error().into(),
};
let mut item = parse_macro_input!(input as ItemFn);
item.block.stmts.insert(0, stmt);
item.into_token_stream().into()
}
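// Illustrative sketch (not part of this change): the injected statement runs before the
// annotated function body and logs its call sites via `common_telemetry`. The attribute
// cannot be used inside this proc-macro crate itself, and the log text below is an
// assumption derived from the format strings above.
//
// #[print_caller(depth = 2)]
// fn handle_flush() { /* ... */ }
//
// // roughly logs: caller: filename: "src/flush.rs", lineno: 42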


@@ -16,13 +16,16 @@ use std::fmt;
use std::str::FromStr;
use std::sync::Arc;
use common_query::error::{self, Result, UnsupportedInputDataTypeSnafu};
use common_query::error::{InvalidFuncArgsSnafu, Result, UnsupportedInputDataTypeSnafu};
use common_query::prelude::{Signature, Volatility};
use common_time::timestamp::TimeUnit;
use common_time::Timestamp;
use datatypes::prelude::ConcreteDataType;
use datatypes::types::StringType;
use datatypes::vectors::{Int64Vector, StringVector, Vector, VectorRef};
use datatypes::types::TimestampType;
use datatypes::vectors::{
Int64Vector, StringVector, TimestampMicrosecondVector, TimestampMillisecondVector,
TimestampNanosecondVector, TimestampSecondVector, Vector, VectorRef,
};
use snafu::ensure;
use crate::scalars::function::{Function, FunctionContext};
@@ -42,18 +45,33 @@ fn convert_to_seconds(arg: &str) -> Option<i64> {
}
}
fn process_vector(vector: &dyn Vector) -> Vec<Option<i64>> {
(0..vector.len())
.map(|i| paste::expr!((vector.get(i)).as_timestamp().map(|ts| ts.value())))
.collect::<Vec<Option<i64>>>()
}
impl Function for ToUnixtimeFunction {
fn name(&self) -> &str {
NAME
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::timestamp_second_datatype())
Ok(ConcreteDataType::int64_datatype())
}
fn signature(&self) -> Signature {
Signature::exact(
vec![ConcreteDataType::String(StringType)],
Signature::uniform(
1,
vec![
ConcreteDataType::string_datatype(),
ConcreteDataType::int32_datatype(),
ConcreteDataType::int64_datatype(),
ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::timestamp_millisecond_datatype(),
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_nanosecond_datatype(),
],
Volatility::Immutable,
)
}
@@ -61,7 +79,7 @@ impl Function for ToUnixtimeFunction {
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 1,
error::InvalidFuncArgsSnafu {
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly one, have: {}",
columns.len()
@@ -79,6 +97,42 @@ impl Function for ToUnixtimeFunction {
.collect::<Vec<_>>(),
)))
}
ConcreteDataType::Int64(_) | ConcreteDataType::Int32(_) => {
let array = columns[0].to_arrow_array();
Ok(Arc::new(Int64Vector::try_from_arrow_array(&array).unwrap()))
}
ConcreteDataType::Timestamp(ts) => {
let array = columns[0].to_arrow_array();
let value = match ts {
TimestampType::Second(_) => {
let vector = paste::expr!(TimestampSecondVector::try_from_arrow_array(
array
)
.unwrap());
process_vector(&vector)
}
TimestampType::Millisecond(_) => {
let vector = paste::expr!(
TimestampMillisecondVector::try_from_arrow_array(array).unwrap()
);
process_vector(&vector)
}
TimestampType::Microsecond(_) => {
let vector = paste::expr!(
TimestampMicrosecondVector::try_from_arrow_array(array).unwrap()
);
process_vector(&vector)
}
TimestampType::Nanosecond(_) => {
let vector = paste::expr!(TimestampNanosecondVector::try_from_arrow_array(
array
)
.unwrap());
process_vector(&vector)
}
};
Ok(Arc::new(Int64Vector::from(value)))
}
_ => UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
@@ -97,28 +151,37 @@ impl fmt::Display for ToUnixtimeFunction {
#[cfg(test)]
mod tests {
use common_query::prelude::TypeSignature;
use datatypes::prelude::ConcreteDataType;
use datatypes::types::StringType;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder};
use datatypes::scalars::ScalarVector;
use datatypes::timestamp::TimestampSecond;
use datatypes::value::Value;
use datatypes::vectors::StringVector;
use datatypes::vectors::{StringVector, TimestampSecondVector};
use super::{ToUnixtimeFunction, *};
use crate::scalars::Function;
#[test]
fn test_to_unixtime() {
fn test_string_to_unixtime() {
let f = ToUnixtimeFunction::default();
assert_eq!("to_unixtime", f.name());
assert_eq!(
ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::int64_datatype(),
f.return_type(&[]).unwrap()
);
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::Exact(valid_types),
volatility: Volatility::Immutable
} if valid_types == vec![ConcreteDataType::String(StringType)]
Signature {
type_signature: TypeSignature::Uniform(1, valid_types),
volatility: Volatility::Immutable
} if valid_types == vec![
ConcreteDataType::string_datatype(),
ConcreteDataType::int32_datatype(),
ConcreteDataType::int64_datatype(),
ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::timestamp_millisecond_datatype(),
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_nanosecond_datatype(),
]
));
let times = vec![
@@ -145,4 +208,106 @@ mod tests {
}
}
}
#[test]
fn test_int_to_unixtime() {
let f = ToUnixtimeFunction::default();
assert_eq!("to_unixtime", f.name());
assert_eq!(
ConcreteDataType::int64_datatype(),
f.return_type(&[]).unwrap()
);
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::Uniform(1, valid_types),
volatility: Volatility::Immutable
} if valid_types == vec![
ConcreteDataType::string_datatype(),
ConcreteDataType::int32_datatype(),
ConcreteDataType::int64_datatype(),
ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::timestamp_millisecond_datatype(),
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_nanosecond_datatype(),
]
));
let times = vec![Some(3_i64), None, Some(5_i64), None];
let results = vec![Some(3), None, Some(5), None];
let args: Vec<VectorRef> = vec![Arc::new(Int64Vector::from(times.clone()))];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
for (i, _t) in times.iter().enumerate() {
let v = vector.get(i);
if i == 1 || i == 3 {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::Int64(ts) => {
assert_eq!(ts, (*results.get(i).unwrap()).unwrap());
}
_ => unreachable!(),
}
}
}
#[test]
fn test_timestamp_to_unixtime() {
let f = ToUnixtimeFunction::default();
assert_eq!("to_unixtime", f.name());
assert_eq!(
ConcreteDataType::int64_datatype(),
f.return_type(&[]).unwrap()
);
assert!(matches!(f.signature(),
Signature {
type_signature: TypeSignature::Uniform(1, valid_types),
volatility: Volatility::Immutable
} if valid_types == vec![
ConcreteDataType::string_datatype(),
ConcreteDataType::int32_datatype(),
ConcreteDataType::int64_datatype(),
ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::timestamp_millisecond_datatype(),
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_nanosecond_datatype(),
]
));
let times: Vec<Option<TimestampSecond>> = vec![
Some(TimestampSecond::new(123)),
None,
Some(TimestampSecond::new(42)),
None,
];
let results = vec![Some(123), None, Some(42), None];
let ts_vector: TimestampSecondVector = build_vector_from_slice(&times);
let args: Vec<VectorRef> = vec![Arc::new(ts_vector)];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
for (i, _t) in times.iter().enumerate() {
let v = vector.get(i);
if i == 1 || i == 3 {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::Int64(ts) => {
assert_eq!(ts, (*results.get(i).unwrap()).unwrap());
}
_ => unreachable!(),
}
}
}
fn build_vector_from_slice<T: ScalarVector>(items: &[Option<T::RefItem<'_>>]) -> T {
let mut builder = T::Builder::with_capacity(items.len());
for item in items {
builder.push(*item);
}
builder.finish()
}
}


@@ -17,3 +17,6 @@ common-time = { path = "../time" }
datatypes = { path = "../../datatypes" }
snafu = { version = "0.7", features = ["backtraces"] }
table = { path = "../../table" }
[dev-dependencies]
paste = "1.0"


@@ -12,9 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::add_column::location::LocationType;
use api::v1::add_column::Location;
use api::v1::alter_expr::Kind;
use api::v1::{column_def, AlterExpr, CreateTableExpr, DropColumns, RenameTable};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_query::AddColumnLocation;
use datatypes::schema::{ColumnSchema, RawSchema};
use snafu::{ensure, OptionExt, ResultExt};
use table::metadata::TableId;
@@ -24,9 +27,12 @@ use table::requests::{
use crate::error::{
ColumnNotFoundSnafu, InvalidColumnDefSnafu, MissingFieldSnafu, MissingTimestampColumnSnafu,
Result, UnrecognizedTableOptionSnafu,
Result, UnknownLocationTypeSnafu, UnrecognizedTableOptionSnafu,
};
const LOCATION_TYPE_FIRST: i32 = LocationType::First as i32;
const LOCATION_TYPE_AFTER: i32 = LocationType::After as i32;
/// Convert an [`AlterExpr`] to an [`AlterTableRequest`]
pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
let catalog_name = expr.catalog_name;
@@ -50,6 +56,7 @@ pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
Ok(AddColumnRequest {
column_schema: schema,
is_key: ac.is_key,
location: parse_location(ac.location)?,
})
})
.collect::<Result<Vec<_>>>()?;
@@ -163,10 +170,10 @@ pub fn create_expr_to_request(
Some(expr.desc)
};
let region_ids = if expr.region_ids.is_empty() {
let region_numbers = if expr.region_numbers.is_empty() {
vec![0]
} else {
expr.region_ids
expr.region_numbers
};
let table_options =
@@ -178,7 +185,7 @@ pub fn create_expr_to_request(
table_name: expr.table_name,
desc,
schema,
region_numbers: region_ids,
region_numbers,
primary_key_indices,
create_if_not_exists: expr.create_if_not_exists,
table_options,
@@ -186,8 +193,26 @@ pub fn create_expr_to_request(
})
}
fn parse_location(location: Option<Location>) -> Result<Option<AddColumnLocation>> {
match location {
Some(Location {
location_type: LOCATION_TYPE_FIRST,
..
}) => Ok(Some(AddColumnLocation::First)),
Some(Location {
location_type: LOCATION_TYPE_AFTER,
after_cloumn_name,
}) => Ok(Some(AddColumnLocation::After {
column_name: after_cloumn_name,
})),
Some(Location { location_type, .. }) => UnknownLocationTypeSnafu { location_type }.fail(),
None => Ok(None),
}
}
#[cfg(test)]
mod tests {
use api::v1::add_column::location::LocationType;
use api::v1::{AddColumn, AddColumns, ColumnDataType, ColumnDef, DropColumn};
use datatypes::prelude::ConcreteDataType;
@@ -209,6 +234,7 @@ mod tests {
default_constraint: vec![],
}),
is_key: false,
location: None,
}],
})),
};
@@ -228,6 +254,80 @@ mod tests {
ConcreteDataType::float64_datatype(),
add_column.column_schema.data_type
);
assert_eq!(None, add_column.location);
}
#[test]
fn test_alter_expr_with_location_to_request() {
let expr = AlterExpr {
catalog_name: "".to_string(),
schema_name: "".to_string(),
table_name: "monitor".to_string(),
kind: Some(Kind::AddColumns(AddColumns {
add_columns: vec![
AddColumn {
column_def: Some(ColumnDef {
name: "mem_usage".to_string(),
datatype: ColumnDataType::Float64 as i32,
is_nullable: false,
default_constraint: vec![],
}),
is_key: false,
location: Some(Location {
location_type: LocationType::First.into(),
after_cloumn_name: "".to_string(),
}),
},
AddColumn {
column_def: Some(ColumnDef {
name: "cpu_usage".to_string(),
datatype: ColumnDataType::Float64 as i32,
is_nullable: false,
default_constraint: vec![],
}),
is_key: false,
location: Some(Location {
location_type: LocationType::After.into(),
after_cloumn_name: "ts".to_string(),
}),
},
],
})),
};
let alter_request = alter_expr_to_request(expr).unwrap();
assert_eq!(alter_request.catalog_name, "");
assert_eq!(alter_request.schema_name, "");
assert_eq!("monitor".to_string(), alter_request.table_name);
let mut add_columns = match alter_request.alter_kind {
AlterKind::AddColumns { columns } => columns,
_ => unreachable!(),
};
let add_column = add_columns.pop().unwrap();
assert!(!add_column.is_key);
assert_eq!("cpu_usage", add_column.column_schema.name);
assert_eq!(
ConcreteDataType::float64_datatype(),
add_column.column_schema.data_type
);
assert_eq!(
Some(AddColumnLocation::After {
column_name: "ts".to_string()
}),
add_column.location
);
let add_column = add_columns.pop().unwrap();
assert!(!add_column.is_key);
assert_eq!("mem_usage", add_column.column_schema.name);
assert_eq!(
ConcreteDataType::float64_datatype(),
add_column.column_schema.data_type
);
assert_eq!(Some(AddColumnLocation::First), add_column.location);
}
#[test]


@@ -16,7 +16,6 @@ use std::collections::HashMap;
use api::helper::ColumnDataTypeWrapper;
use api::v1::{Column, DeleteRequest as GrpcDeleteRequest};
use datatypes::data_type::DataType;
use datatypes::prelude::ConcreteDataType;
use snafu::{ensure, ResultExt};
use table::requests::DeleteRequest;
@@ -41,14 +40,11 @@ pub fn to_table_delete_request(request: GrpcDeleteRequest) -> Result<DeleteReque
let datatype: ConcreteDataType = ColumnDataTypeWrapper::try_new(datatype)
.context(ColumnDataTypeSnafu)?
.into();
let vector_builder = &mut datatype.create_mutable_vector(row_count);
add_values_to_builder(vector_builder, values, row_count, null_mask)?;
let vector = add_values_to_builder(datatype, values, row_count, null_mask)?;
ensure!(
key_column_values
.insert(column_name.clone(), vector_builder.to_vector())
.insert(column_name.clone(), vector)
.is_none(),
IllegalDeleteRequestSnafu {
reason: format!("Duplicated column '{column_name}' in delete request.")


@@ -14,7 +14,6 @@
use std::any::Any;
use api::DecodeError;
use common_error::ext::ErrorExt;
use common_error::prelude::{Snafu, StatusCode};
use snafu::Location;
@@ -28,15 +27,12 @@ pub enum Error {
table_name: String,
},
#[snafu(display("Failed to convert bytes to insert batch, source: {}", source))]
DecodeInsert { source: DecodeError },
#[snafu(display("Illegal delete request, reason: {reason}"))]
IllegalDeleteRequest { reason: String, location: Location },
#[snafu(display("Column datatype error, source: {}", source))]
ColumnDataType {
#[snafu(backtrace)]
location: Location,
source: api::error::Error,
},
@@ -58,19 +54,13 @@ pub enum Error {
InvalidColumnProto { err_msg: String, location: Location },
#[snafu(display("Failed to create vector, source: {}", source))]
CreateVector {
#[snafu(backtrace)]
location: Location,
source: datatypes::error::Error,
},
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, location: Location },
#[snafu(display("Invalid column default constraint, source: {}", source))]
ColumnDefaultConstraint {
#[snafu(backtrace)]
source: datatypes::error::Error,
},
#[snafu(display(
"Invalid column proto definition, column: {}, source: {}",
column,
@@ -78,13 +68,13 @@ pub enum Error {
))]
InvalidColumnDef {
column: String,
#[snafu(backtrace)]
location: Location,
source: api::error::Error,
},
#[snafu(display("Unrecognized table option: {}", source))]
UnrecognizedTableOption {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -93,6 +83,12 @@ pub enum Error {
#[snafu(display("The column name already exists, column: {}", column))]
ColumnAlreadyExists { column: String, location: Location },
#[snafu(display("Unknown location type: {}", location_type))]
UnknownLocationType {
location_type: i32,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -102,9 +98,7 @@ impl ErrorExt for Error {
match self {
Error::ColumnNotFound { .. } => StatusCode::TableColumnNotFound,
Error::DecodeInsert { .. } | Error::IllegalDeleteRequest { .. } => {
StatusCode::InvalidArguments
}
Error::IllegalDeleteRequest { .. } => StatusCode::InvalidArguments,
Error::ColumnDataType { .. } => StatusCode::Internal,
Error::DuplicatedTimestampColumn { .. } | Error::MissingTimestampColumn { .. } => {
@@ -113,12 +107,11 @@ impl ErrorExt for Error {
Error::InvalidColumnProto { .. } => StatusCode::InvalidArguments,
Error::CreateVector { .. } => StatusCode::InvalidArguments,
Error::MissingField { .. } => StatusCode::InvalidArguments,
Error::ColumnDefaultConstraint { source, .. } => source.status_code(),
Error::InvalidColumnDef { source, .. } => source.status_code(),
Error::UnrecognizedTableOption { .. } => StatusCode::InvalidArguments,
Error::UnexpectedValuesLength { .. } | Error::ColumnAlreadyExists { .. } => {
StatusCode::InvalidArguments
}
Error::UnexpectedValuesLength { .. }
| Error::ColumnAlreadyExists { .. }
| Error::UnknownLocationType { .. } => StatusCode::InvalidArguments,
}
}


@@ -13,6 +13,7 @@
// limitations under the License.
use std::collections::{HashMap, HashSet};
use std::sync::Arc;
use api::helper::ColumnDataTypeWrapper;
use api::v1::column::{SemanticType, Values};
@@ -25,10 +26,16 @@ use common_time::timestamp::Timestamp;
use common_time::{Date, DateTime};
use datatypes::data_type::{ConcreteDataType, DataType};
use datatypes::prelude::{ValueRef, VectorRef};
use datatypes::scalars::ScalarVector;
use datatypes::schema::SchemaRef;
use datatypes::types::TimestampType;
use datatypes::types::{Int16Type, Int8Type, TimestampType, UInt16Type, UInt8Type};
use datatypes::value::Value;
use datatypes::vectors::MutableVector;
use datatypes::vectors::{
BinaryVector, BooleanVector, DateTimeVector, DateVector, Float32Vector, Float64Vector,
Int32Vector, Int64Vector, PrimitiveVector, StringVector, TimestampMicrosecondVector,
TimestampMillisecondVector, TimestampNanosecondVector, TimestampSecondVector, UInt32Vector,
UInt64Vector,
};
use snafu::{ensure, OptionExt, ResultExt};
use table::metadata::TableId;
use table::requests::InsertRequest;
@@ -68,6 +75,7 @@ pub fn find_new_columns(schema: &SchemaRef, columns: &[Column]) -> Result<Option
columns_to_add.push(AddColumn {
column_def,
is_key: *semantic_type == TAG_SEMANTIC_TYPE,
location: None,
});
new_columns.insert(column_name.to_string());
}
@@ -257,7 +265,7 @@ pub fn build_create_expr_from_insertion(
create_if_not_exists: true,
table_options: Default::default(),
table_id: table_id.map(|id| api::v1::TableId { id }),
region_ids: vec![0], // TODO:(hl): region id should be allocated by frontend
region_numbers: vec![0], // TODO:(hl): region number should be allocated by frontend
engine: engine.to_string(),
};
@@ -286,15 +294,10 @@ pub fn to_table_insert_request(
let datatype: ConcreteDataType = ColumnDataTypeWrapper::try_new(datatype)
.context(ColumnDataTypeSnafu)?
.into();
let vector_builder = &mut datatype.create_mutable_vector(row_count);
add_values_to_builder(vector_builder, values, row_count, null_mask)?;
let vector = add_values_to_builder(datatype, values, row_count, null_mask)?;
ensure!(
columns_values
.insert(column_name.clone(), vector_builder.to_vector())
.is_none(),
columns_values.insert(column_name.clone(), vector).is_none(),
ColumnAlreadyExistsSnafu {
column: column_name
}
@@ -311,28 +314,16 @@ pub fn to_table_insert_request(
}
pub(crate) fn add_values_to_builder(
builder: &mut Box<dyn MutableVector>,
data_type: ConcreteDataType,
values: Values,
row_count: usize,
null_mask: Vec<u8>,
) -> Result<()> {
let data_type = builder.data_type();
let values = convert_values(&data_type, values);
) -> Result<VectorRef> {
if null_mask.is_empty() {
ensure!(
values.len() == row_count,
UnexpectedValuesLengthSnafu {
reason: "If null_mask is empty, the length of values must be equal to row_count."
}
);
values.iter().try_for_each(|value| {
builder
.try_push_value_ref(value.as_value_ref())
.context(CreateVectorSnafu)
})?;
Ok(values_to_vector(&data_type, values))
} else {
let builder = &mut data_type.create_mutable_vector(row_count);
let values = convert_values(&data_type, values);
let null_mask = BitVec::from_vec(null_mask);
ensure!(
null_mask.count_ones() + values.len() == row_count,
@@ -353,8 +344,53 @@ pub(crate) fn add_values_to_builder(
}
}
}
Ok(builder.to_vector())
}
}
fn values_to_vector(data_type: &ConcreteDataType, values: Values) -> VectorRef {
match data_type {
ConcreteDataType::Boolean(_) => Arc::new(BooleanVector::from(values.bool_values)),
ConcreteDataType::Int8(_) => Arc::new(PrimitiveVector::<Int8Type>::from_iter_values(
values.i8_values.into_iter().map(|x| x as i8),
)),
ConcreteDataType::Int16(_) => Arc::new(PrimitiveVector::<Int16Type>::from_iter_values(
values.i16_values.into_iter().map(|x| x as i16),
)),
ConcreteDataType::Int32(_) => Arc::new(Int32Vector::from_vec(values.i32_values)),
ConcreteDataType::Int64(_) => Arc::new(Int64Vector::from_vec(values.i64_values)),
ConcreteDataType::UInt8(_) => Arc::new(PrimitiveVector::<UInt8Type>::from_iter_values(
values.u8_values.into_iter().map(|x| x as u8),
)),
ConcreteDataType::UInt16(_) => Arc::new(PrimitiveVector::<UInt16Type>::from_iter_values(
values.u16_values.into_iter().map(|x| x as u16),
)),
ConcreteDataType::UInt32(_) => Arc::new(UInt32Vector::from_vec(values.u32_values)),
ConcreteDataType::UInt64(_) => Arc::new(UInt64Vector::from_vec(values.u64_values)),
ConcreteDataType::Float32(_) => Arc::new(Float32Vector::from_vec(values.f32_values)),
ConcreteDataType::Float64(_) => Arc::new(Float64Vector::from_vec(values.f64_values)),
ConcreteDataType::Binary(_) => Arc::new(BinaryVector::from(values.binary_values)),
ConcreteDataType::String(_) => Arc::new(StringVector::from_vec(values.string_values)),
ConcreteDataType::Date(_) => Arc::new(DateVector::from_vec(values.date_values)),
ConcreteDataType::DateTime(_) => Arc::new(DateTimeVector::from_vec(values.datetime_values)),
ConcreteDataType::Timestamp(unit) => match unit {
TimestampType::Second(_) => {
Arc::new(TimestampSecondVector::from_vec(values.ts_second_values))
}
TimestampType::Millisecond(_) => Arc::new(TimestampMillisecondVector::from_vec(
values.ts_millisecond_values,
)),
TimestampType::Microsecond(_) => Arc::new(TimestampMicrosecondVector::from_vec(
values.ts_microsecond_values,
)),
TimestampType::Nanosecond(_) => Arc::new(TimestampNanosecondVector::from_vec(
values.ts_nanosecond_values,
)),
},
ConcreteDataType::Null(_) | ConcreteDataType::List(_) | ConcreteDataType::Dictionary(_) => {
unreachable!()
}
}
Ok(())
}
fn convert_values(data_type: &ConcreteDataType, values: Values) -> Vec<Value> {
@@ -380,22 +416,34 @@ fn convert_values(data_type: &ConcreteDataType, values: Values) -> Vec<Value> {
.into_iter()
.map(|val| val.into())
.collect(),
ConcreteDataType::Int8(_) => values.i8_values.into_iter().map(|val| val.into()).collect(),
ConcreteDataType::Int8(_) => values
.i8_values
.into_iter()
// Safety: the i32 field only holds i8-range data here, so casting i32 to i8 is safe.
.map(|val| (val as i8).into())
.collect(),
ConcreteDataType::Int16(_) => values
.i16_values
.into_iter()
.map(|val| val.into())
// Safety: the i32 field only holds i16-range data here, so casting i32 to i16 is safe.
.map(|val| (val as i16).into())
.collect(),
ConcreteDataType::Int32(_) => values
.i32_values
.into_iter()
.map(|val| val.into())
.collect(),
ConcreteDataType::UInt8(_) => values.u8_values.into_iter().map(|val| val.into()).collect(),
ConcreteDataType::UInt8(_) => values
.u8_values
.into_iter()
// Safety: the u32 field only holds u8-range data here, so casting u32 to u8 is safe.
.map(|val| (val as u8).into())
.collect(),
ConcreteDataType::UInt16(_) => values
.u16_values
.into_iter()
.map(|val| val.into())
// Safety: the u32 field only holds u16-range data here, so casting u32 to u16 is safe.
.map(|val| (val as u16).into())
.collect(),
ConcreteDataType::UInt32(_) => values
.u32_values
@@ -418,12 +466,12 @@ fn convert_values(data_type: &ConcreteDataType, values: Values) -> Vec<Value> {
.map(|val| val.into())
.collect(),
ConcreteDataType::DateTime(_) => values
.i64_values
.datetime_values
.into_iter()
.map(|v| Value::DateTime(v.into()))
.collect(),
ConcreteDataType::Date(_) => values
.i32_values
.date_values
.into_iter()
.map(|v| Value::Date(v.into()))
.collect(),
@@ -459,26 +507,21 @@ fn is_null(null_mask: &BitVec, idx: usize) -> Option<bool> {
#[cfg(test)]
mod tests {
use std::any::Any;
use std::sync::Arc;
use std::{assert_eq, unimplemented, vec};
use std::{assert_eq, vec};
use api::helper::ColumnDataTypeWrapper;
use api::v1::column::{self, SemanticType, Values};
use api::v1::{Column, ColumnDataType};
use common_base::BitVec;
use common_catalog::consts::MITO_ENGINE;
use common_query::physical_plan::PhysicalPlanRef;
use common_query::prelude::Expr;
use common_time::timestamp::Timestamp;
use datatypes::data_type::ConcreteDataType;
use datatypes::schema::{ColumnSchema, SchemaBuilder, SchemaRef};
use datatypes::schema::{ColumnSchema, SchemaBuilder};
use datatypes::types::{TimestampMillisecondType, TimestampSecondType, TimestampType};
use datatypes::value::Value;
use paste::paste;
use snafu::ResultExt;
use table::error::Result as TableResult;
use table::metadata::TableInfoRef;
use table::Table;
use super::*;
use crate::error;
@@ -666,26 +709,150 @@ mod tests {
assert_eq!(Value::Timestamp(Timestamp::new_millisecond(101)), ts.get(1));
}
#[test]
fn test_convert_values() {
let data_type = ConcreteDataType::float64_datatype();
let values = Values {
f64_values: vec![0.1, 0.2, 0.3],
..Default::default()
macro_rules! test_convert_values {
($grpc_data_type: ident, $values: expr, $concrete_data_type: ident, $expected_ret: expr) => {
paste! {
#[test]
fn [<test_convert_ $grpc_data_type _values>]() {
let values = Values {
[<$grpc_data_type _values>]: $values,
..Default::default()
};
let data_type = ConcreteDataType::[<$concrete_data_type _datatype>]();
let result = convert_values(&data_type, values);
assert_eq!(
$expected_ret,
result
);
}
}
};
let result = convert_values(&data_type, values);
assert_eq!(
vec![
Value::Float64(0.1.into()),
Value::Float64(0.2.into()),
Value::Float64(0.3.into())
],
result
);
}
test_convert_values!(
i8,
vec![1_i32, 2, 3],
int8,
vec![Value::Int8(1), Value::Int8(2), Value::Int8(3)]
);
test_convert_values!(
u8,
vec![1_u32, 2, 3],
uint8,
vec![Value::UInt8(1), Value::UInt8(2), Value::UInt8(3)]
);
test_convert_values!(
i16,
vec![1_i32, 2, 3],
int16,
vec![Value::Int16(1), Value::Int16(2), Value::Int16(3)]
);
test_convert_values!(
u16,
vec![1_u32, 2, 3],
uint16,
vec![Value::UInt16(1), Value::UInt16(2), Value::UInt16(3)]
);
test_convert_values!(
i32,
vec![1, 2, 3],
int32,
vec![Value::Int32(1), Value::Int32(2), Value::Int32(3)]
);
test_convert_values!(
u32,
vec![1, 2, 3],
uint32,
vec![Value::UInt32(1), Value::UInt32(2), Value::UInt32(3)]
);
test_convert_values!(
i64,
vec![1, 2, 3],
int64,
vec![Value::Int64(1), Value::Int64(2), Value::Int64(3)]
);
test_convert_values!(
u64,
vec![1, 2, 3],
uint64,
vec![Value::UInt64(1), Value::UInt64(2), Value::UInt64(3)]
);
test_convert_values!(
f32,
vec![1.0, 2.0, 3.0],
float32,
vec![
Value::Float32(1.0.into()),
Value::Float32(2.0.into()),
Value::Float32(3.0.into())
]
);
test_convert_values!(
f64,
vec![1.0, 2.0, 3.0],
float64,
vec![
Value::Float64(1.0.into()),
Value::Float64(2.0.into()),
Value::Float64(3.0.into())
]
);
test_convert_values!(
string,
vec!["1".to_string(), "2".to_string(), "3".to_string()],
string,
vec![
Value::String("1".into()),
Value::String("2".into()),
Value::String("3".into())
]
);
test_convert_values!(
binary,
vec!["1".into(), "2".into(), "3".into()],
binary,
vec![
Value::Binary(b"1".to_vec().into()),
Value::Binary(b"2".to_vec().into()),
Value::Binary(b"3".to_vec().into())
]
);
test_convert_values!(
date,
vec![1, 2, 3],
date,
vec![
Value::Date(1.into()),
Value::Date(2.into()),
Value::Date(3.into())
]
);
test_convert_values!(
datetime,
vec![1.into(), 2.into(), 3.into()],
datetime,
vec![
Value::DateTime(1.into()),
Value::DateTime(2.into()),
Value::DateTime(3.into())
]
);
#[test]
fn test_convert_timestamp_values() {
// second
@@ -733,49 +900,6 @@ mod tests {
assert_eq!(None, is_null(&null_mask, 99));
}
struct DemoTable;
#[async_trait::async_trait]
impl Table for DemoTable {
fn as_any(&self) -> &dyn Any {
self
}
fn schema(&self) -> SchemaRef {
let column_schemas = vec![
ColumnSchema::new("host", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("cpu", ConcreteDataType::float64_datatype(), true),
ColumnSchema::new("memory", ConcreteDataType::float64_datatype(), true),
ColumnSchema::new(
"ts",
ConcreteDataType::timestamp_millisecond_datatype(),
true,
)
.with_time_index(true),
];
Arc::new(
SchemaBuilder::try_from(column_schemas)
.unwrap()
.build()
.unwrap(),
)
}
fn table_info(&self) -> TableInfoRef {
unimplemented!()
}
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
_limit: Option<usize>,
) -> TableResult<PhysicalPlanRef> {
unimplemented!();
}
}
fn mock_insert_batch() -> (Vec<Column>, u32) {
let row_count = 2;


@@ -8,11 +8,15 @@ license.workspace = true
api = { path = "../../api" }
arrow-flight.workspace = true
async-trait = "0.1"
backtrace = "0.3"
common-base = { path = "../base" }
common-error = { path = "../error" }
common-function-macro = { path = "../function-macro" }
common-query = { path = "../query" }
common-meta = { path = "../meta" }
common-recordbatch = { path = "../recordbatch" }
common-runtime = { path = "../runtime" }
common-telemetry = { path = "../telemetry" }
dashmap = "5.4"
datafusion.workspace = true
datatypes = { path = "../../datatypes" }


@@ -13,9 +13,10 @@
// limitations under the License.
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use std::sync::{Arc, Mutex};
use std::time::Duration;
use common_telemetry::info;
use dashmap::mapref::entry::Entry;
use dashmap::DashMap;
use snafu::{OptionExt, ResultExt};
@@ -27,12 +28,14 @@ use tower::make::MakeConnection;
use crate::error::{CreateChannelSnafu, InvalidConfigFilePathSnafu, InvalidTlsConfigSnafu, Result};
const RECYCLE_CHANNEL_INTERVAL_SECS: u64 = 60;
const DEFAULT_REQUEST_TIMEOUT_SECS: u64 = 2;
#[derive(Clone, Debug)]
pub struct ChannelManager {
config: ChannelConfig,
client_tls_config: Option<ClientTlsConfig>,
pool: Arc<Pool>,
channel_recycle_started: Arc<Mutex<bool>>,
}
impl Default for ChannelManager {
@@ -48,19 +51,29 @@ impl ChannelManager {
pub fn with_config(config: ChannelConfig) -> Self {
let pool = Arc::new(Pool::default());
let cloned_pool = pool.clone();
common_runtime::spawn_bg(async {
recycle_channel_in_loop(cloned_pool, RECYCLE_CHANNEL_INTERVAL_SECS).await;
});
Self {
config,
client_tls_config: None,
pool,
channel_recycle_started: Arc::new(Mutex::new(false)),
}
}
pub fn start_channel_recycle(&self) {
let mut started = self.channel_recycle_started.lock().unwrap();
if *started {
return;
}
let pool = self.pool.clone();
common_runtime::spawn_bg(async {
recycle_channel_in_loop(pool, RECYCLE_CHANNEL_INTERVAL_SECS).await;
});
info!("Channel recycle is started, running in the background!");
*started = true;
}
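// Illustrative sketch (not part of this change): `with_config` no longer spawns the
// recycle task itself; callers that want idle channels evicted are expected to start
// it explicitly, e.g.:
//
//     let mgr = ChannelManager::with_config(ChannelConfig::default());
//     mgr.start_channel_recycle();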
pub fn with_tls_config(config: ChannelConfig) -> Result<Self> {
let mut cm = Self::with_config(config.clone());
@@ -224,8 +237,8 @@ pub struct ChannelConfig {
impl Default for ChannelConfig {
fn default() -> Self {
Self {
timeout: None,
connect_timeout: None,
timeout: Some(Duration::from_secs(DEFAULT_REQUEST_TIMEOUT_SECS)),
connect_timeout: Some(Duration::from_secs(4)),
concurrency_limit: None,
rate_limit: None,
initial_stream_window_size: None,
@@ -455,13 +468,7 @@ mod tests {
#[tokio::test]
async fn test_access_count() {
let pool = Arc::new(Pool::default());
let config = ChannelConfig::new();
let mgr = Arc::new(ChannelManager {
pool,
config,
client_tls_config: None,
});
let mgr = Arc::new(ChannelManager::new());
let addr = "test_uri";
let mut joins = Vec::with_capacity(10);
@@ -491,8 +498,8 @@ mod tests {
let default_cfg = ChannelConfig::new();
assert_eq!(
ChannelConfig {
timeout: None,
connect_timeout: None,
timeout: Some(Duration::from_secs(DEFAULT_REQUEST_TIMEOUT_SECS)),
connect_timeout: Some(Duration::from_secs(4)),
concurrency_limit: None,
rate_limit: None,
initial_stream_window_size: None,
@@ -553,7 +560,6 @@ mod tests {
#[test]
fn test_build_endpoint() {
let pool = Arc::new(Pool::default());
let config = ChannelConfig::new()
.timeout(Duration::from_secs(3))
.connect_timeout(Duration::from_secs(5))
@@ -567,11 +573,7 @@ mod tests {
.http2_adaptive_window(true)
.tcp_keepalive(Duration::from_secs(2))
.tcp_nodelay(true);
let mgr = ChannelManager {
pool,
config,
client_tls_config: None,
};
let mgr = ChannelManager::with_config(config);
let res = mgr.build_endpoint("test_addr");
@@ -580,18 +582,7 @@ mod tests {
#[tokio::test]
async fn test_channel_with_connector() {
let pool = Pool {
channels: DashMap::default(),
};
let pool = Arc::new(pool);
let config = ChannelConfig::new();
let mgr = ChannelManager {
pool,
config,
client_tls_config: None,
};
let mgr = ChannelManager::new();
let addr = "test_addr";
let res = mgr.get(addr);


@@ -32,9 +32,6 @@ pub enum Error {
location: Location,
},
#[snafu(display("Missing required field in protobuf, field: {}", field))]
MissingField { field: String, location: Location },
#[snafu(display(
"Write type mismatch, column name: {}, expected: {}, actual: {}",
column_name,
@@ -56,19 +53,13 @@ pub enum Error {
#[snafu(display("Failed to create RecordBatch, source: {}", source))]
CreateRecordBatch {
#[snafu(backtrace)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to convert Arrow type: {}", from))]
Conversion { from: String, location: Location },
#[snafu(display("Column datatype error, source: {}", source))]
ColumnDataType {
#[snafu(backtrace)]
source: api::error::Error,
},
#[snafu(display("Failed to decode FlightData, source: {}", source))]
DecodeFlightData {
source: api::DecodeError,
@@ -80,7 +71,7 @@ pub enum Error {
#[snafu(display("Failed to convert Arrow Schema, source: {}", source))]
ConvertArrowSchema {
#[snafu(backtrace)]
location: Location,
source: datatypes::error::Error,
},
}
@@ -90,7 +81,6 @@ impl ErrorExt for Error {
match self {
Error::InvalidTlsConfig { .. }
| Error::InvalidConfigFilePath { .. }
| Error::MissingField { .. }
| Error::TypeMismatch { .. }
| Error::InvalidFlightData { .. } => StatusCode::InvalidArguments,
@@ -98,9 +88,8 @@ impl ErrorExt for Error {
| Error::Conversion { .. }
| Error::DecodeFlightData { .. } => StatusCode::Internal,
Error::CreateRecordBatch { source } => source.status_code(),
Error::ColumnDataType { source } => source.status_code(),
Error::ConvertArrowSchema { source } => source.status_code(),
Error::CreateRecordBatch { source, .. } => source.status_code(),
Error::ConvertArrowSchema { source, .. } => source.status_code(),
}
}


@@ -23,7 +23,7 @@ pub type Result<T> = std::result::Result<T, Error>;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Failed to read OPT_PROF"))]
#[snafu(display("Failed to read OPT_PROF, source: {}", source))]
ReadOptProf { source: tikv_jemalloc_ctl::Error },
#[snafu(display("Memory profiling is not enabled"))]
@@ -32,13 +32,17 @@ pub enum Error {
#[snafu(display("Failed to build temp file from given path: {:?}", path))]
BuildTempPath { path: PathBuf, location: Location },
#[snafu(display("Failed to open temp file: {}", path))]
#[snafu(display("Failed to open temp file: {}, source: {}", path, source))]
OpenTempFile {
path: String,
source: std::io::Error,
},
#[snafu(display("Failed to dump profiling data to temp file: {:?}", path))]
#[snafu(display(
"Failed to dump profiling data to temp file: {:?}, source: {}",
path,
source
))]
DumpProfileData {
path: PathBuf,
source: tikv_jemalloc_ctl::Error,


@@ -0,0 +1,23 @@
[package]
name = "common-meta"
version.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
api = { path = "../../api" }
common-catalog = { path = "../catalog" }
common-error = { path = "../error" }
common-runtime = { path = "../runtime" }
common-telemetry = { path = "../telemetry" }
common-time = { path = "../time" }
serde.workspace = true
serde_json.workspace = true
snafu.workspace = true
store-api = { path = "../../store-api" }
table = { path = "../../table" }
tokio.workspace = true
[dev-dependencies]
chrono.workspace = true
datatypes = { path = "../../datatypes" }


@@ -0,0 +1,77 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_error::prelude::*;
use serde_json::error::Error as JsonError;
use snafu::Location;
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
pub enum Error {
#[snafu(display("Failed to encode object into json, source: {}", source))]
EncodeJson {
location: Location,
source: JsonError,
},
#[snafu(display("Failed to decode object from json, source: {}", source))]
DecodeJson {
location: Location,
source: JsonError,
},
#[snafu(display("Payload not exist"))]
PayloadNotExist { location: Location },
#[snafu(display("Failed to send message: {err_msg}"))]
SendMessage { err_msg: String, location: Location },
#[snafu(display("Failed to serde json, source: {}", source))]
SerdeJson {
source: serde_json::error::Error,
location: Location,
},
#[snafu(display("Corrupted table route data, err: {}", err_msg))]
RouteInfoCorrupted { err_msg: String, location: Location },
#[snafu(display("Illegal state from server, code: {}, error: {}", code, err_msg))]
IllegalServerState {
code: i32,
err_msg: String,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
use Error::*;
match self {
IllegalServerState { .. } => StatusCode::Internal,
SerdeJson { .. } | RouteInfoCorrupted { .. } => StatusCode::Unexpected,
SendMessage { .. } => StatusCode::Internal,
EncodeJson { .. } | DecodeJson { .. } | PayloadNotExist { .. } => {
StatusCode::Unexpected
}
}
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}


@@ -0,0 +1,17 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod handler;
pub mod mailbox;
pub mod utils;


@@ -0,0 +1,98 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use api::v1::meta::HeartbeatResponse;
use common_telemetry::error;
use crate::error::Result;
use crate::heartbeat::mailbox::{IncomingMessage, MailboxRef};
pub mod parse_mailbox_message;
#[cfg(test)]
mod tests;
pub type HeartbeatResponseHandlerExecutorRef = Arc<dyn HeartbeatResponseHandlerExecutor>;
pub type HeartbeatResponseHandlerRef = Arc<dyn HeartbeatResponseHandler>;
pub struct HeartbeatResponseHandlerContext {
pub mailbox: MailboxRef,
pub response: HeartbeatResponse,
pub incoming_message: Option<IncomingMessage>,
}
/// HandleControl
///
/// Controls the process of handling a heartbeat response.
#[derive(PartialEq)]
pub enum HandleControl {
Continue,
Done,
}
impl HeartbeatResponseHandlerContext {
pub fn new(mailbox: MailboxRef, response: HeartbeatResponse) -> Self {
Self {
mailbox,
response,
incoming_message: None,
}
}
}
/// HeartbeatResponseHandler
///
/// [`HeartbeatResponseHandler::is_acceptable`] returns true if the handler can handle the incoming [`HeartbeatResponseHandlerContext`].
///
/// [`HeartbeatResponseHandler::handle`] handles all or part of the incoming [`HeartbeatResponseHandlerContext`].
pub trait HeartbeatResponseHandler: Send + Sync {
fn is_acceptable(&self, ctx: &HeartbeatResponseHandlerContext) -> bool;
fn handle(&self, ctx: &mut HeartbeatResponseHandlerContext) -> Result<HandleControl>;
}
pub trait HeartbeatResponseHandlerExecutor: Send + Sync {
fn handle(&self, ctx: HeartbeatResponseHandlerContext) -> Result<()>;
}
pub struct HandlerGroupExecutor {
handlers: Vec<HeartbeatResponseHandlerRef>,
}
impl HandlerGroupExecutor {
pub fn new(handlers: Vec<HeartbeatResponseHandlerRef>) -> Self {
Self { handlers }
}
}
impl HeartbeatResponseHandlerExecutor for HandlerGroupExecutor {
fn handle(&self, mut ctx: HeartbeatResponseHandlerContext) -> Result<()> {
for handler in &self.handlers {
if !handler.is_acceptable(&ctx) {
continue;
}
match handler.handle(&mut ctx) {
Ok(HandleControl::Done) => break,
Ok(HandleControl::Continue) => {}
Err(e) => {
error!(e; "Error while handling: {:?}", ctx.response);
break;
}
}
}
Ok(())
}
}
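The trait documentation above spells out the handler contract: `is_acceptable` filters, `handle` does the work and tells the executor whether to continue. A minimal sketch of a custom handler wired into a `HandlerGroupExecutor` follows; it assumes this module is exposed from a crate referred to here as `common_meta`, and the `LogOpenRegionHandler` and `build_executor` names are purely illustrative.

```rust
use std::sync::Arc;

use common_meta::error::Result;
use common_meta::heartbeat::handler::parse_mailbox_message::ParseMailboxMessageHandler;
use common_meta::heartbeat::handler::{
    HandleControl, HandlerGroupExecutor, HeartbeatResponseHandler,
    HeartbeatResponseHandlerContext, HeartbeatResponseHandlerRef,
};
use common_meta::instruction::Instruction;

/// Illustrative handler: logs OpenRegion instructions and lets the chain continue.
struct LogOpenRegionHandler;

impl HeartbeatResponseHandler for LogOpenRegionHandler {
    fn is_acceptable(&self, ctx: &HeartbeatResponseHandlerContext) -> bool {
        // Only react once an earlier handler has parsed an OpenRegion instruction.
        matches!(
            ctx.incoming_message,
            Some((_, Instruction::OpenRegion(_)))
        )
    }

    fn handle(&self, ctx: &mut HeartbeatResponseHandlerContext) -> Result<HandleControl> {
        if let Some((meta, instruction)) = &ctx.incoming_message {
            println!("mailbox message {}: {}", meta.id, instruction);
        }
        // Continue so later handlers in the group also see this context.
        Ok(HandleControl::Continue)
    }
}

fn build_executor() -> HandlerGroupExecutor {
    // Handlers run in registration order; ParseMailboxMessageHandler fills
    // `incoming_message` before any handler that inspects it.
    let handlers: Vec<HeartbeatResponseHandlerRef> = vec![
        Arc::new(ParseMailboxMessageHandler),
        Arc::new(LogOpenRegionHandler),
    ];
    HandlerGroupExecutor::new(handlers)
}
```

Note that `HandlerGroupExecutor::handle` only logs a handler error and stops the chain, returning `Ok(())` either way, so a handler that must surface failures should report them through the mailbox rather than rely on the executor's return value.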

View File

@@ -0,0 +1,39 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::error::Result;
use crate::heartbeat::handler::{
HandleControl, HeartbeatResponseHandler, HeartbeatResponseHandlerContext,
};
use crate::heartbeat::utils::mailbox_message_to_incoming_message;
#[derive(Default)]
pub struct ParseMailboxMessageHandler;
impl HeartbeatResponseHandler for ParseMailboxMessageHandler {
fn is_acceptable(&self, _ctx: &HeartbeatResponseHandlerContext) -> bool {
true
}
fn handle(&self, ctx: &mut HeartbeatResponseHandlerContext) -> Result<HandleControl> {
if let Some(message) = &ctx.response.mailbox_message {
if message.payload.is_some() {
// mailbox_message_to_incoming_message returns an error if the payload is None
ctx.incoming_message = Some(mailbox_message_to_incoming_message(message.clone())?);
}
}
Ok(HandleControl::Continue)
}
}

View File

@@ -0,0 +1,36 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use tokio::sync::mpsc;
use crate::heartbeat::mailbox::{HeartbeatMailbox, MessageMeta};
use crate::instruction::{InstructionReply, SimpleReply};
#[tokio::test]
async fn test_heartbeat_mailbox() {
let (tx, mut rx) = mpsc::channel(8);
let mailbox = HeartbeatMailbox::new(tx);
let meta = MessageMeta::new_test(1, "test", "foo", "bar");
let reply = InstructionReply::OpenRegion(SimpleReply {
result: true,
error: None,
});
mailbox.send((meta.clone(), reply.clone())).await.unwrap();
let message = rx.recv().await.unwrap();
assert_eq!(message.0, meta);
assert_eq!(message.1, reply);
}

View File

@@ -0,0 +1,64 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use tokio::sync::mpsc::Sender;
use crate::error::{self, Result};
use crate::instruction::{Instruction, InstructionReply};
pub type IncomingMessage = (MessageMeta, Instruction);
pub type OutgoingMessage = (MessageMeta, InstructionReply);
#[derive(Debug, PartialEq, Eq, Clone)]
pub struct MessageMeta {
pub id: u64,
pub subject: String,
pub to: String,
pub from: String,
}
#[cfg(test)]
impl MessageMeta {
pub fn new_test(id: u64, subject: &str, to: &str, from: &str) -> Self {
MessageMeta {
id,
subject: subject.to_string(),
to: to.to_string(),
from: from.to_string(),
}
}
}
pub struct HeartbeatMailbox {
sender: Sender<OutgoingMessage>,
}
impl HeartbeatMailbox {
pub fn new(sender: Sender<OutgoingMessage>) -> Self {
Self { sender }
}
pub async fn send(&self, message: OutgoingMessage) -> Result<()> {
self.sender.send(message).await.map_err(|e| {
error::SendMessageSnafu {
err_msg: e.to_string(),
}
.build()
})
}
}
pub type MailboxRef = Arc<HeartbeatMailbox>;

View File

@@ -0,0 +1,58 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::meta::mailbox_message::Payload;
use api::v1::meta::MailboxMessage;
use common_time::util::current_time_millis;
use snafu::{OptionExt, ResultExt};
use crate::error::{self, Result};
use crate::heartbeat::mailbox::{IncomingMessage, MessageMeta, OutgoingMessage};
use crate::instruction::Instruction;
pub fn mailbox_message_to_incoming_message(m: MailboxMessage) -> Result<IncomingMessage> {
m.payload
.map(|payload| match payload {
Payload::Json(json) => {
let instruction: Instruction = serde_json::from_str(&json)?;
Ok((
MessageMeta {
id: m.id,
subject: m.subject,
to: m.to,
from: m.from,
},
instruction,
))
}
})
.transpose()
.context(error::DecodeJsonSnafu)?
.context(error::PayloadNotExistSnafu)
}
pub fn outgoing_message_to_mailbox_message(
(meta, reply): OutgoingMessage,
) -> Result<MailboxMessage> {
Ok(MailboxMessage {
id: meta.id,
subject: meta.subject,
from: meta.to,
to: meta.from,
timestamp_millis: current_time_millis(),
payload: Some(Payload::Json(
serde_json::to_string(&reply).context(error::EncodeJsonSnafu)?,
)),
})
}
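For the reply direction, a short sketch follows (again assuming the crate is referred to as `common_meta` and the protobuf types come from the `api` crate; `reply_to_pb` and the meta field values are illustrative). The point to notice is that `outgoing_message_to_mailbox_message` swaps `from` and `to`, so the reply is addressed back to the sender of the original message.

```rust
use api::v1::meta::MailboxMessage;
use common_meta::error::Result;
use common_meta::heartbeat::mailbox::MessageMeta;
use common_meta::heartbeat::utils::outgoing_message_to_mailbox_message;
use common_meta::instruction::{InstructionReply, SimpleReply};

fn reply_to_pb() -> Result<MailboxMessage> {
    // Meta of the message being replied to (values are illustrative).
    let meta = MessageMeta {
        id: 42,
        subject: "Open region".to_string(),
        to: "datanode-2".to_string(),
        from: "metasrv".to_string(),
    };
    let reply = InstructionReply::OpenRegion(SimpleReply {
        result: true,
        error: None,
    });
    // The resulting MailboxMessage has from = "datanode-2" and to = "metasrv",
    // with the reply serialized into a JSON payload.
    outgoing_message_to_mailbox_message((meta, reply))
}
```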

View File

@@ -0,0 +1,167 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::{Display, Formatter};
use serde::{Deserialize, Serialize};
use crate::{ClusterId, DatanodeId};
#[derive(Eq, Hash, PartialEq, Clone, Debug, Serialize, Deserialize)]
pub struct RegionIdent {
pub cluster_id: ClusterId,
pub datanode_id: DatanodeId,
pub table_ident: TableIdent,
pub region_number: u32,
}
impl Display for RegionIdent {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(
f,
"RegionIdent(datanode_id='{}.{}', table_id='{}', table_name='{}.{}.{}', table_engine='{}', region_no='{}')",
self.cluster_id,
self.datanode_id,
self.table_ident.table_id,
self.table_ident.catalog,
self.table_ident.schema,
self.table_ident.table,
self.table_ident.engine,
self.region_number
)
}
}
impl From<RegionIdent> for TableIdent {
fn from(region_ident: RegionIdent) -> Self {
region_ident.table_ident
}
}
#[derive(Eq, Hash, PartialEq, Clone, Debug, Serialize, Deserialize)]
pub struct TableIdent {
pub catalog: String,
pub schema: String,
pub table: String,
pub table_id: u32,
pub engine: String,
}
impl Display for TableIdent {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(
f,
"TableIdent(table_id='{}', table_name='{}.{}.{}', table_engine='{}')",
self.table_id, self.catalog, self.schema, self.table, self.engine,
)
}
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq, Clone)]
pub struct SimpleReply {
pub result: bool,
pub error: Option<String>,
}
impl Display for SimpleReply {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "(result={}, error={:?})", self.result, self.error)
}
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum Instruction {
OpenRegion(RegionIdent),
CloseRegion(RegionIdent),
InvalidateTableCache(TableIdent),
}
impl Display for Instruction {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
Self::OpenRegion(region) => write!(f, "Instruction::OpenRegion({})", region),
Self::CloseRegion(region) => write!(f, "Instruction::CloseRegion({})", region),
Self::InvalidateTableCache(table) => write!(f, "Instruction::Invalidate({})", table),
}
}
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq, Clone)]
#[serde(tag = "type", rename_all = "snake_case")]
pub enum InstructionReply {
OpenRegion(SimpleReply),
CloseRegion(SimpleReply),
InvalidateTableCache(SimpleReply),
}
impl Display for InstructionReply {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
Self::OpenRegion(reply) => write!(f, "InstructionReply::OpenRegion({})", reply),
Self::CloseRegion(reply) => write!(f, "InstructionReply::CloseRegion({})", reply),
Self::InvalidateTableCache(reply) => {
write!(f, "InstructionReply::Invalidate({})", reply)
}
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_serialize_instruction() {
let open_region = Instruction::OpenRegion(RegionIdent {
cluster_id: 1,
datanode_id: 2,
table_ident: TableIdent {
catalog: "foo".to_string(),
schema: "bar".to_string(),
table: "hi".to_string(),
table_id: 1024,
engine: "mito".to_string(),
},
region_number: 1,
});
let serialized = serde_json::to_string(&open_region).unwrap();
assert_eq!(
r#"{"type":"open_region","cluster_id":1,"datanode_id":2,"table_ident":{"catalog":"foo","schema":"bar","table":"hi","table_id":1024,"engine":"mito"},"region_number":1}"#,
serialized
);
let close_region = Instruction::CloseRegion(RegionIdent {
cluster_id: 1,
datanode_id: 2,
table_ident: TableIdent {
catalog: "foo".to_string(),
schema: "bar".to_string(),
table: "hi".to_string(),
table_id: 1024,
engine: "mito".to_string(),
},
region_number: 1,
});
let serialized = serde_json::to_string(&close_region).unwrap();
assert_eq!(
r#"{"type":"close_region","cluster_id":1,"datanode_id":2,"table_ident":{"catalog":"foo","schema":"bar","table":"hi","table_id":1024,"engine":"mito"},"region_number":1}"#,
serialized
);
}
}

View File

@@ -0,0 +1,35 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod table_route;
pub use crate::key::table_route::{TableRouteKey, TABLE_ROUTE_PREFIX};
pub const REMOVED_PREFIX: &str = "__removed";
pub fn to_removed_key(key: &str) -> String {
format!("{REMOVED_PREFIX}-{key}")
}
#[cfg(test)]
mod tests {
use crate::key::to_removed_key;
#[test]
fn test_to_removed_key() {
let key = "test_key";
let removed = "__removed-test_key";
assert_eq!(removed, to_removed_key(key));
}
}

View File

@@ -0,0 +1,97 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::meta::TableName;
use crate::key::to_removed_key;
pub const TABLE_ROUTE_PREFIX: &str = "__meta_table_route";
pub struct TableRouteKey<'a> {
pub table_id: u64,
pub catalog_name: &'a str,
pub schema_name: &'a str,
pub table_name: &'a str,
}
impl<'a> TableRouteKey<'a> {
pub fn with_table_name(table_id: u64, t: &'a TableName) -> Self {
Self {
table_id,
catalog_name: &t.catalog_name,
schema_name: &t.schema_name,
table_name: &t.table_name,
}
}
pub fn prefix(&self) -> String {
format!(
"{}-{}-{}-{}",
TABLE_ROUTE_PREFIX, self.catalog_name, self.schema_name, self.table_name
)
}
pub fn key(&self) -> String {
format!("{}-{}", self.prefix(), self.table_id)
}
pub fn removed_key(&self) -> String {
to_removed_key(&self.key())
}
}
#[cfg(test)]
mod tests {
use api::v1::meta::TableName;
use super::TableRouteKey;
#[test]
fn test_table_route_key() {
let key = TableRouteKey {
table_id: 123,
catalog_name: "greptime",
schema_name: "public",
table_name: "demo",
};
let prefix = key.prefix();
assert_eq!("__meta_table_route-greptime-public-demo", prefix);
let key_string = key.key();
assert_eq!("__meta_table_route-greptime-public-demo-123", key_string);
let removed = key.removed_key();
assert_eq!(
"__removed-__meta_table_route-greptime-public-demo-123",
removed
);
}
#[test]
fn test_with_table_name() {
let table_name = TableName {
catalog_name: "greptime".to_string(),
schema_name: "public".to_string(),
table_name: "demo".to_string(),
};
let key = TableRouteKey::with_table_name(123, &table_name);
assert_eq!(123, key.table_id);
assert_eq!("greptime", key.catalog_name);
assert_eq!("public", key.schema_name);
assert_eq!("demo", key.table_name);
}
}

View File

@@ -0,0 +1,26 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod error;
pub mod heartbeat;
pub mod instruction;
pub mod key;
pub mod peer;
pub mod rpc;
pub mod table_name;
pub type ClusterId = u64;
pub type DatanodeId = u64;
pub use instruction::RegionIdent;

View File

@@ -0,0 +1,58 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::{Display, Formatter};
use api::v1::meta::Peer as PbPeer;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Hash, Eq, PartialEq, Deserialize, Serialize)]
pub struct Peer {
/// Node identifier. Unique in a cluster.
pub id: u64,
pub addr: String,
}
impl From<PbPeer> for Peer {
fn from(p: PbPeer) -> Self {
Self {
id: p.id,
addr: p.addr,
}
}
}
impl From<Peer> for PbPeer {
fn from(p: Peer) -> Self {
Self {
id: p.id,
addr: p.addr,
}
}
}
impl Peer {
pub fn new(id: u64, addr: impl Into<String>) -> Self {
Self {
id,
addr: addr.into(),
}
}
}
impl Display for Peer {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "peer-{}({})", self.id, self.addr)
}
}

View File

@@ -14,25 +14,10 @@
pub mod lock;
pub mod router;
mod store;
pub mod store;
pub mod util;
use std::fmt::{Display, Formatter};
use api::v1::meta::{
KeyValue as PbKeyValue, Peer as PbPeer, ResponseHeader as PbResponseHeader,
TableName as PbTableName,
};
pub use router::{
CreateRequest, Partition, Region, RouteRequest, RouteResponse, Table, TableRoute,
};
use serde::{Deserialize, Serialize};
pub use store::{
BatchDeleteRequest, BatchDeleteResponse, BatchGetRequest, BatchGetResponse, BatchPutRequest,
BatchPutResponse, CompareAndPutRequest, CompareAndPutResponse, DeleteRangeRequest,
DeleteRangeResponse, MoveValueRequest, MoveValueResponse, PutRequest, PutResponse,
RangeRequest, RangeResponse,
};
use api::v1::meta::{KeyValue as PbKeyValue, ResponseHeader as PbResponseHeader};
#[derive(Debug, Clone)]
pub struct ResponseHeader(PbResponseHeader);
@@ -100,81 +85,6 @@ impl KeyValue {
}
}
#[derive(Debug, Clone, Hash, Eq, PartialEq, Deserialize, Serialize)]
pub struct TableName {
pub catalog_name: String,
pub schema_name: String,
pub table_name: String,
}
impl Display for TableName {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(
f,
"{}.{}.{}",
self.catalog_name, self.schema_name, self.table_name
)
}
}
impl TableName {
pub fn new(
catalog_name: impl Into<String>,
schema_name: impl Into<String>,
table_name: impl Into<String>,
) -> Self {
Self {
catalog_name: catalog_name.into(),
schema_name: schema_name.into(),
table_name: table_name.into(),
}
}
}
impl From<TableName> for PbTableName {
fn from(tb: TableName) -> Self {
Self {
catalog_name: tb.catalog_name,
schema_name: tb.schema_name,
table_name: tb.table_name,
}
}
}
impl From<PbTableName> for TableName {
fn from(tb: PbTableName) -> Self {
Self {
catalog_name: tb.catalog_name,
schema_name: tb.schema_name,
table_name: tb.table_name,
}
}
}
#[derive(Debug, Clone, Hash, Eq, PartialEq, Deserialize, Serialize)]
pub struct Peer {
pub id: u64,
pub addr: String,
}
impl From<PbPeer> for Peer {
fn from(p: PbPeer) -> Self {
Self {
id: p.id,
addr: p.addr,
}
}
}
impl Peer {
pub fn new(id: u64, addr: impl Into<String>) -> Self {
Self {
id,
addr: addr.into(),
}
}
}
#[cfg(test)]
mod tests {
use api::v1::meta::{Error, ResponseHeader as PbResponseHeader};

View File

@@ -16,16 +16,19 @@ use std::collections::{HashMap, HashSet};
use api::v1::meta::{
CreateRequest as PbCreateRequest, DeleteRequest as PbDeleteRequest, Partition as PbPartition,
Region as PbRegion, RouteRequest as PbRouteRequest, RouteResponse as PbRouteResponse,
Table as PbTable,
Peer as PbPeer, Region as PbRegion, RegionRoute as PbRegionRoute,
RouteRequest as PbRouteRequest, RouteResponse as PbRouteResponse, Table as PbTable,
TableRoute as PbTableRoute,
};
use serde::{Deserialize, Serialize, Serializer};
use snafu::{OptionExt, ResultExt};
use store_api::storage::{RegionId, RegionNumber};
use table::metadata::RawTableInfo;
use crate::error;
use crate::error::Result;
use crate::rpc::{util, Peer, TableName};
use crate::error::{self, Result};
use crate::peer::Peer;
use crate::rpc::util;
use crate::table_name::TableName;
#[derive(Debug, Clone)]
pub struct CreateRequest<'a> {
@@ -125,57 +128,135 @@ impl TryFrom<PbRouteResponse> for RouteResponse {
fn try_from(pb: PbRouteResponse) -> Result<Self> {
util::check_response_header(pb.header.as_ref())?;
let peers: Vec<Peer> = pb.peers.into_iter().map(Into::into).collect();
let get_peer = |index: u64| peers.get(index as usize).map(ToOwned::to_owned);
let mut table_routes = Vec::with_capacity(pb.table_routes.len());
for table_route in pb.table_routes.into_iter() {
let table = table_route
.table
.context(error::RouteInfoCorruptedSnafu {
err_msg: "table required",
})?
.try_into()?;
let mut region_routes = Vec::with_capacity(table_route.region_routes.len());
for region_route in table_route.region_routes.into_iter() {
let region = region_route
.region
.context(error::RouteInfoCorruptedSnafu {
err_msg: "'region' not found",
})?
.into();
let leader_peer = get_peer(region_route.leader_peer_index);
let follower_peers = region_route
.follower_peer_indexes
.into_iter()
.filter_map(get_peer)
.collect::<Vec<_>>();
region_routes.push(RegionRoute {
region,
leader_peer,
follower_peers,
});
}
table_routes.push(TableRoute {
table,
region_routes,
});
}
let table_routes = pb
.table_routes
.into_iter()
.map(|x| TableRoute::try_from_raw(&pb.peers, x))
.collect::<Result<Vec<_>>>()?;
Ok(Self { table_routes })
}
}
#[derive(Debug, Clone, Deserialize, Serialize)]
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct TableRoute {
pub table: Table,
pub region_routes: Vec<RegionRoute>,
region_leaders: HashMap<RegionNumber, Option<Peer>>,
}
impl TableRoute {
pub fn new(table: Table, region_routes: Vec<RegionRoute>) -> Self {
let region_leaders = region_routes
.iter()
.map(|x| (x.region.id as RegionNumber, x.leader_peer.clone()))
.collect::<HashMap<_, _>>();
Self {
table,
region_routes,
region_leaders,
}
}
pub fn try_from_raw(peers: &[PbPeer], table_route: PbTableRoute) -> Result<Self> {
let table = table_route
.table
.context(error::RouteInfoCorruptedSnafu {
err_msg: "'table' is empty in table route",
})?
.try_into()?;
let mut region_routes = Vec::with_capacity(table_route.region_routes.len());
for region_route in table_route.region_routes.into_iter() {
let region = region_route
.region
.context(error::RouteInfoCorruptedSnafu {
err_msg: "'region' is empty in region route",
})?
.into();
let leader_peer = peers
.get(region_route.leader_peer_index as usize)
.cloned()
.map(Into::into);
let follower_peers = region_route
.follower_peer_indexes
.into_iter()
.filter_map(|x| peers.get(x as usize).cloned().map(Into::into))
.collect::<Vec<_>>();
region_routes.push(RegionRoute {
region,
leader_peer,
follower_peers,
});
}
Ok(Self::new(table, region_routes))
}
pub fn try_into_raw(self) -> Result<(Vec<PbPeer>, PbTableRoute)> {
let mut peers = HashSet::new();
self.region_routes
.iter()
.filter_map(|x| x.leader_peer.as_ref())
.for_each(|p| {
peers.insert(p.clone());
});
self.region_routes
.iter()
.flat_map(|x| x.follower_peers.iter())
.for_each(|p| {
peers.insert(p.clone());
});
let mut peers = peers.into_iter().map(Into::into).collect::<Vec<PbPeer>>();
peers.sort_by_key(|x| x.id);
let find_peer = |peer_id: u64| -> u64 {
peers
.iter()
.enumerate()
.find_map(|(i, x)| {
if x.id == peer_id {
Some(i as u64)
} else {
None
}
})
.unwrap_or_else(|| {
panic!("Peer {peer_id} must be present when collecting all peers.")
})
};
let mut region_routes = Vec::with_capacity(self.region_routes.len());
for region_route in self.region_routes.into_iter() {
let leader_peer_index = region_route.leader_peer.map(|x| find_peer(x.id)).context(
error::RouteInfoCorruptedSnafu {
err_msg: "'leader_peer' is empty in region route",
},
)?;
let follower_peer_indexes = region_route
.follower_peers
.iter()
.map(|x| find_peer(x.id))
.collect::<Vec<_>>();
region_routes.push(PbRegionRoute {
region: Some(region_route.region.into()),
leader_peer_index,
follower_peer_indexes,
});
}
let table_route = PbTableRoute {
table: Some(self.table.into()),
region_routes,
};
Ok((peers, table_route))
}
pub fn find_leaders(&self) -> HashSet<Peer> {
self.region_routes
.iter()
@@ -197,9 +278,15 @@ impl TableRoute {
})
.collect()
}
pub fn find_region_leader(&self, region_number: RegionNumber) -> Option<&Peer> {
self.region_leaders
.get(&region_number)
.and_then(|x| x.as_ref())
}
}
#[derive(Debug, Clone, Deserialize, Serialize)]
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct Table {
pub id: u64,
pub table_name: TableName,
@@ -225,16 +312,26 @@ impl TryFrom<PbTable> for Table {
}
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
impl From<Table> for PbTable {
fn from(table: Table) -> Self {
PbTable {
id: table.id,
table_name: Some(table.table_name.into()),
table_schema: table.table_schema,
}
}
}
#[derive(Debug, Clone, Default, Deserialize, Serialize, PartialEq)]
pub struct RegionRoute {
pub region: Region,
pub leader_peer: Option<Peer>,
pub follower_peers: Vec<Peer>,
}
#[derive(Debug, Clone, Default, Deserialize, Serialize)]
#[derive(Debug, Clone, Default, Deserialize, Serialize, PartialEq)]
pub struct Region {
pub id: u64,
pub id: RegionId,
pub name: String,
pub partition: Option<Partition>,
pub attrs: HashMap<String, String>,
@@ -251,7 +348,18 @@ impl From<PbRegion> for Region {
}
}
#[derive(Debug, Clone, Deserialize, Serialize)]
impl From<Region> for PbRegion {
fn from(region: Region) -> Self {
Self {
id: region.id,
name: region.name,
partition: region.partition.map(Into::into),
attrs: region.attrs,
}
}
}
#[derive(Debug, Clone, Deserialize, Serialize, PartialEq)]
pub struct Partition {
#[serde(serialize_with = "as_utf8_vec")]
pub column_list: Vec<Vec<u8>>,
@@ -495,4 +603,101 @@ mod tests {
assert_eq!(2, region_route.follower_peers.get(0).unwrap().id);
assert_eq!("peer2", region_route.follower_peers.get(0).unwrap().addr);
}
#[test]
fn test_table_route_raw_conversion() {
let raw_peers = vec![
PbPeer {
id: 1,
addr: "a1".to_string(),
},
PbPeer {
id: 2,
addr: "a2".to_string(),
},
PbPeer {
id: 3,
addr: "a3".to_string(),
},
];
// region distribution:
// region id => leader peer id + [follower peer id]
// 1 => 2 + [1, 3]
// 2 => 1 + [2, 3]
let raw_table_route = PbTableRoute {
table: Some(PbTable {
id: 1,
table_name: Some(PbTableName {
catalog_name: "c1".to_string(),
schema_name: "s1".to_string(),
table_name: "t1".to_string(),
}),
table_schema: vec![],
}),
region_routes: vec![
PbRegionRoute {
region: Some(PbRegion {
id: 1,
name: "r1".to_string(),
partition: None,
attrs: HashMap::new(),
}),
leader_peer_index: 1,
follower_peer_indexes: vec![0, 2],
},
PbRegionRoute {
region: Some(PbRegion {
id: 2,
name: "r2".to_string(),
partition: None,
attrs: HashMap::new(),
}),
leader_peer_index: 0,
follower_peer_indexes: vec![1, 2],
},
],
};
let table_route = TableRoute {
table: Table {
id: 1,
table_name: TableName::new("c1", "s1", "t1"),
table_schema: vec![],
},
region_routes: vec![
RegionRoute {
region: Region {
id: 1,
name: "r1".to_string(),
partition: None,
attrs: HashMap::new(),
},
leader_peer: Some(Peer::new(2, "a2")),
follower_peers: vec![Peer::new(1, "a1"), Peer::new(3, "a3")],
},
RegionRoute {
region: Region {
id: 2,
name: "r2".to_string(),
partition: None,
attrs: HashMap::new(),
},
leader_peer: Some(Peer::new(1, "a1")),
follower_peers: vec![Peer::new(2, "a2"), Peer::new(3, "a3")],
},
],
region_leaders: HashMap::from([
(2, Some(Peer::new(1, "a1"))),
(1, Some(Peer::new(2, "a2"))),
]),
};
let from_raw = TableRoute::try_from_raw(&raw_peers, raw_table_route.clone()).unwrap();
assert_eq!(from_raw, table_route);
let into_raw = table_route.try_into_raw().unwrap();
assert_eq!(into_raw.0, raw_peers);
assert_eq!(into_raw.1, raw_table_route);
}
}
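The conversion test above exercises the protobuf round trip; the sketch below (assuming the crate is referred to as `common_meta`, that `RegionId` is still a plain integer alias here, and that `leader_of_region_1` is purely illustrative) shows the intended construction path: `TableRoute::new` derives the region-number-to-leader map that `find_region_leader` consults, and because `region_leaders` is a private field, callers outside the crate cannot build the struct literally.

```rust
use common_meta::peer::Peer;
use common_meta::rpc::router::{Region, RegionRoute, Table, TableRoute};
use common_meta::table_name::TableName;

fn leader_of_region_1() -> Option<Peer> {
    let table = Table {
        id: 1,
        table_name: TableName::new("greptime", "public", "demo"),
        table_schema: vec![],
    };
    let region_routes = vec![RegionRoute {
        region: Region {
            id: 1,
            ..Default::default()
        },
        leader_peer: Some(Peer::new(2, "127.0.0.1:3002")),
        follower_peers: vec![Peer::new(1, "127.0.0.1:3001")],
    }];
    // `new` fills the region-number -> leader map used by find_region_leader.
    let route = TableRoute::new(table, region_routes);
    route.find_region_leader(1).cloned()
}
```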

View File

@@ -18,7 +18,7 @@ use crate::error;
use crate::error::Result;
#[inline]
pub(crate) fn check_response_header(header: Option<&ResponseHeader>) -> Result<()> {
pub fn check_response_header(header: Option<&ResponseHeader>) -> Result<()> {
if let Some(header) = header {
if let Some(error) = &header.error {
let code = error.code;

View File

@@ -0,0 +1,69 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::{Display, Formatter};
use api::v1::meta::TableName as PbTableName;
use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Hash, Eq, PartialEq, Deserialize, Serialize)]
pub struct TableName {
pub catalog_name: String,
pub schema_name: String,
pub table_name: String,
}
impl Display for TableName {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str(&common_catalog::format_full_table_name(
&self.catalog_name,
&self.schema_name,
&self.table_name,
))
}
}
impl TableName {
pub fn new(
catalog_name: impl Into<String>,
schema_name: impl Into<String>,
table_name: impl Into<String>,
) -> Self {
Self {
catalog_name: catalog_name.into(),
schema_name: schema_name.into(),
table_name: table_name.into(),
}
}
}
impl From<TableName> for PbTableName {
fn from(table_name: TableName) -> Self {
Self {
catalog_name: table_name.catalog_name,
schema_name: table_name.schema_name,
table_name: table_name.table_name,
}
}
}
impl From<PbTableName> for TableName {
fn from(table_name: PbTableName) -> Self {
Self {
catalog_name: table_name.catalog_name,
schema_name: table_name.schema_name,
table_name: table_name.table_name,
}
}
}

View File

@@ -0,0 +1,16 @@
[package]
name = "common-pprof"
version.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
common-error = { path = "../error" }
pprof = { version = "0.11", features = [
"flamegraph",
"prost-codec",
"protobuf",
] }
prost.workspace = true
snafu.workspace = true
tokio.workspace = true

View File

@@ -0,0 +1,28 @@
# Profiling CPU
## Build GreptimeDB with `pprof` feature
```bash
cargo build --features=pprof
```
## HTTP API
Sample at 99 Hertz, for 5 seconds, output report in [protobuf format](https://github.com/google/pprof/blob/master/proto/profile.proto).
```bash
curl -s '0:4000/v1/prof/cpu' > /tmp/pprof.out
```
Then you can use the `pprof` command with the protobuf file.
```bash
go tool pprof -top /tmp/pprof.out
```
Sample at 99 Hertz, for 60 seconds, output report in flamegraph format.
```bash
curl -s '0:4000/v1/prof/cpu?seconds=60&output=flamegraph' > /tmp/pprof.svg
```
Sample at 49 Hertz, for 10 seconds, output report in text format.
```bash
curl -s '0:4000/v1/prof/cpu?seconds=10&frequency=49&output=text' > /tmp/pprof.txt
```

src/common/pprof/src/lib.rs
View File

@@ -0,0 +1,124 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::time::Duration;
use common_error::prelude::{ErrorExt, StatusCode};
use prost::Message;
use snafu::{Location, ResultExt, Snafu};
#[derive(Debug, Snafu)]
pub enum Error {
#[snafu(display(
"Failed to create profiler guard, source: {}, location: {}",
source,
location
))]
CreateGuard {
source: pprof::Error,
location: Location,
},
#[snafu(display("Failed to create report, source: {}, location: {}", source, location))]
CreateReport {
source: pprof::Error,
location: Location,
},
#[snafu(display(
"Failed to create flamegraph, source: {}, location: {}",
source,
location
))]
CreateFlamegraph {
source: pprof::Error,
location: Location,
},
#[snafu(display(
"Failed to create pprof report, source: {}, location: {}",
source,
location
))]
ReportPprof {
source: pprof::Error,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
StatusCode::Unexpected
}
fn as_any(&self) -> &dyn Any {
self
}
}
/// CPU profiler utility.
// Inspired by https://github.com/datafuselabs/databend/blob/67f445e83cd4eceda98f6c1c114858929d564029/src/common/base/src/base/profiling.rs
#[derive(Debug)]
pub struct Profiling {
/// Sample duration.
duration: Duration,
/// Sample frequency.
frequency: i32,
}
impl Profiling {
/// Creates a new profiler.
pub fn new(duration: Duration, frequency: i32) -> Profiling {
Profiling {
duration,
frequency,
}
}
/// Profiles and returns a generated pprof report.
pub async fn report(&self) -> Result<pprof::Report> {
let guard = pprof::ProfilerGuardBuilder::default()
.frequency(self.frequency)
.blocklist(&["libc", "libgcc", "pthread", "vdso"])
.build()
.context(CreateGuardSnafu)?;
tokio::time::sleep(self.duration).await;
guard.report().build().context(CreateReportSnafu)
}
/// Profiles and returns a generated flamegraph.
pub async fn dump_flamegraph(&self) -> Result<Vec<u8>> {
let mut body: Vec<u8> = Vec::new();
let report = self.report().await?;
report
.flamegraph(&mut body)
.context(CreateFlamegraphSnafu)?;
Ok(body)
}
/// Profiles and returns a generated proto.
pub async fn dump_proto(&self) -> Result<Vec<u8>> {
let report = self.report().await?;
// Generate a report in Google's pprof format.
let profile = report.pprof().context(ReportPprofSnafu)?;
let body = profile.encode_to_vec();
Ok(body)
}
}
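A minimal usage sketch, assuming the `common-pprof` package above is imported as `common_pprof` and a Tokio runtime with the `macros` feature is available: sample the current process at 99 Hz for five seconds (mirroring the first curl example in the README) and write the flamegraph to disk.

```rust
use std::time::Duration;

use common_pprof::Profiling;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // 99 Hz for 5 seconds; the guard samples until the sleep elapses.
    let profiling = Profiling::new(Duration::from_secs(5), 99);
    let svg = profiling.dump_flamegraph().await?;
    std::fs::write("/tmp/pprof.svg", svg)?;
    Ok(())
}
```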

Some files were not shown because too many files have changed in this diff.