Compare commits

...

594 Commits

Author SHA1 Message Date
Ruihang Xia
94409967be Merge branch 'main' into create-view 2024-04-22 21:08:22 +08:00
dependabot[bot]
54432df92f build(deps): bump rustls from 0.22.3 to 0.22.4 (#3764)
Bumps [rustls](https://github.com/rustls/rustls) from 0.22.3 to 0.22.4.
- [Release notes](https://github.com/rustls/rustls/releases)
- [Changelog](https://github.com/rustls/rustls/blob/main/CHANGELOG.md)
- [Commits](https://github.com/rustls/rustls/compare/v/0.22.3...v/0.22.4)

---
updated-dependencies:
- dependency-name: rustls
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-22 17:19:08 +08:00
dennis zhuang
8f2ce4abe8 feat: impl show collation and show charset statements (#3753)
* feat: impl show collation and show charset statements

* docs: add api docs
2024-04-20 06:01:32 +00:00
WU Jingdi
d077892e1c feat: support PromQL scalar (#3693) 2024-04-19 09:56:09 +00:00
LFC
cfed466fcd chore: update greptime-proto to main (#3743) 2024-04-19 06:38:34 +00:00
Ruihang Xia
0c5f4801b7 build: update toolchain to nightly-2024-04-18 (#3740)
* chore: update toolchain to nightly-2024-04-17

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix ut

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update fuzz test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update to nightly-2024-04-18

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add document

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update CI

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* avoid unnecessary allow clippy attrs

Signed-off-by: tison <wander4096@gmail.com>

* help the compiler find the clone is unnecessary and make clippy happy

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-04-19 05:42:34 +00:00
Eugene Tolbakov
2114b153e7 refactor: avoid unnecessary alloc by using unwrap_or_else (#3742)
feat(promql): address post-merge CR
2024-04-19 01:31:25 +00:00
LFC
314f2704d4 build(deps): update datafusion to latest and arrow to 51.0 (#3661)
* chore: update datafusion

* update sqlness case of time.sql

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: adjust range query partition

* fix: histogram incorrect result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: ignore filter pushdown temporarily

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: update limit sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: histogram with wrong distribution

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: update negative ordinal sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: bump df to cd7a00b

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve conflicts

* ignore test_range_filter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix promql exec panic

* fix "select count(*)" exec error

* re-enable the "test_range_filter" test since removing the filter pushdown seems unnecessary

* fix: range query schema error

* update sqlness results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve conflicts

* update datafusion, again

* fix pyo3 compile error, and update some sqlness results

* update decimal sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: promql literal

* fix udaf tests

* fix filter pushdown sqlness tests

* fix?: test_cast

* fix: rspy test fail due to datafusion `sin` signature change

* rebase main to see if there are any failed tests

* debug ci

* debug ci

* debug ci

* enforce input partition

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* debug ci

* fix ci

* fix ci

* debug ci

* debug ci

* debug ci

* fix sqlness

* feat: do not return error while creating a filter

* chore: remove array from error

* chore: replace todo with unimplemented

* Update src/flow/clippy.toml

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: WUJingdi <taylor-lagrange@qq.com>
Co-authored-by: discord9 <discord9@163.com>
Co-authored-by: evenyag <realevenyag@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-04-18 12:07:18 +00:00
Weny Xu
510782261d refactor: avoid unnecessary cloning (#3734)
refactor: using `TxnOpGetResponseSet`
2024-04-18 09:02:28 +00:00
Jeremyhi
20e8c3d864 chore: remove TableIdProvider (#3733) 2024-04-18 05:36:37 +00:00
Weny Xu
2a2a44883f refactor(meta): Ensure all moving values remain unchanged between two transactions (#3727)
* feat: implement `move_values`

* refactor: using `move_values`

* refactor: refactor executor

* chore: fix clippy

* refactor: remove `CasKeyChanged` error

* refactor: refactor `move_values`

* chore: update comments

* refactor: do not compare `dest_key`

* chore: update comments

* chore: apply suggestions from CR

* chore: remove `#[inline]`

* chore: check length of keys and dest_key
2024-04-18 05:35:54 +00:00
maco
4248dfcf36 feat: support invalidate schema name key cache (#3725)
* feat: support invalidate schema name key cache

* fix: remove pub for invalidate_schema_cache

* refactor: add DropMetadataBroadcast State Op

* fix: delete files
2024-04-18 04:02:06 +00:00
Yohan Wal
64945533dd feat: add tinytext, mediumtext and longtext data types (#3731) 2024-04-18 03:15:21 +00:00
Yohan Wal
ffc8074556 feat(fuzz): enable create-if-not-exists option (#3732) 2024-04-18 02:50:57 +00:00
Ruihang Xia
7e56bf250b docs: add style guide (#3730)
* docs: add style guide

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comments section

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comment order

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* about error handling

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* about error logging

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-17 11:28:02 +00:00
Ruihang Xia
7503992d61 add statement
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-17 19:13:54 +08:00
tison
50ae4dc174 refactor: merge RegionHandleResult into RegionHandleResponse (#3721)
* refactor: merge RegionHandleResult into RegionHandleResponse

Signed-off-by: tison <wander4096@gmail.com>

* RegionResponse to api::region

Signed-off-by: tison <wander4096@gmail.com>

* order

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-04-17 10:03:20 +00:00
Ruihang Xia
16aef70089 fix: remove ttl option from metadata region (#3726)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-17 09:13:53 +00:00
tison
786f43da91 chore: cleanup todos that should be panic (#3720)
Signed-off-by: tison <wander4096@gmail.com>
2024-04-17 05:04:14 +00:00
zyy17
3e9bda3267 ci: use greptimedb-ci-tester account (#3719) 2024-04-16 14:43:17 +00:00
Eugene Tolbakov
89d58538c7 chore(mito): set null value data size to i64 (#3722)
* chore(mito): set null value data size to i64

* chore(mito): move comment to a relevant place
2024-04-16 14:40:16 +00:00
Weny Xu
d12379106e feat(drop_table): support to rollback table metadata (#3692)
* feat: support to rollback table metadata

* refactor: store table route value instead of physical table route

* feat(drop_table): support to rollback table metadata

* test: add rollback tests for drop table

* fix: do not set region to readonly

* test: add sqlness tests

* feat: implement TombstoneManager

* test: add tests for TombstoneManager

* refactor: using TombstoneManager

* chore: remove unused code

* fix: fix typo

* refactor: using `on_restore_metadata`

* refactor: add `executor` to `DropTableProcedure`

* refactor: simplify the `TombstoneManager`

* refactor: refactor `Key`

* refactor: carry more info

* feat: add `destroy_table_metadata`

* refactor: remove redundant table_route_value

* feat: ensure the key is empty

* feat: introduce `table_metadata_keys`

* chore: carry more info

* chore: remove clone

* chore: apply suggestions from CR

* feat: delete metadata tombstone
2024-04-16 09:22:41 +00:00
Weny Xu
64941d848e fix(alter_table): ignore request outdated error (#3715) 2024-04-16 08:18:38 +00:00
Ruihang Xia
96a40e0300 feat: check partition rule (#3711)
* feat: check partition rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy and fmt

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct test comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-16 08:13:49 +00:00
Yingwen
d2e081c1f9 docs: update memtable config example (#3712) 2024-04-16 07:26:20 +00:00
tison
cdbdb04d93 refactor: remove redundant try_flush invocations (#3706)
* refactor: remove redundant try_flush invocations

Signed-off-by: tison <wander4096@gmail.com>

* fixup

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-04-16 06:35:55 +00:00
Lei, HUANG
5af87baeb0 feat: add filter_deleted option to avoid removing deletion markers (#3707)
* feat: add `filter_deleted` scan option to avoid removing deletion markers.

* refactor: move sort_batches_and_print to test_util
2024-04-16 06:34:41 +00:00
maco
d5a948a0a6 test: Add tests for KvBackend trait implement (#3700)
* test: add etcd

* optimize code

* test: add etcd tests

* fix: typos

* fix: taplo error and clippy

* avoid print

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-04-15 10:51:59 +00:00
Eugene Tolbakov
bbea651d08 feat(promql): parameterize lookback (#3630)
* feat(promql): parameterize lookback

* chore(promql): address CR, adjusted sqlness

* chore(promql): fmt

* chore(promql): fix accidental removal

* fix(promql): address CR

* fix(promql): address CR

* feat(promql): add initial lookback parameter grpc support

* fix: update greptime-proto revision

* chore: restore accidental removal
2024-04-15 09:11:21 +00:00
zyy17
8060c81e1d refactor: use toml2docs to generate config docs (#3704)
* refactor: use toml2docs to generate config docs

* ci: add docs check in 'check-typos-and-docs'
2024-04-15 09:08:32 +00:00
Jeremyhi
e6507aaf34 chore: debt 3696 (#3705) 2024-04-15 09:02:19 +00:00
Jeremyhi
87795248dd feat: get metasrv clusterinfo (#3696)
* feat: add doc for MetasrvOptions

* feat: register candidate before election

* feat: get all peers metasrv

* chore: simplify code

* chore: proto rev

* Update src/common/meta/src/cluster.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* Update src/meta-client/src/client.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* fmt

Signed-off-by: tison <wander4096@gmail.com>

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* impl<T: AsRef<[u8]>> From<T> for LeaderValue

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-04-15 08:10:48 +00:00
irenjj
7a04bfe50a feat: add strict mode to validate protocol strings (#3638)
* feat: add strict mode to validate protocol strings

* hotfix: fix test

* fix: fix return pair and test param

* test: add test for utf-8 validation

* fix: cargo check

* Update src/servers/src/prom_row_builder.rs

Co-authored-by: Eugene Tolbakov <ev.tolbakov@gmail.com>

* fix: fix param of without_strict_mode

* fix: change field name in HttpOptions

* fix: replace if else with match

* fix: replace all strict_mode with is_strict_mode

* fix: fix test_config_api

* fix: fix bench, add vm handshake, catch error

---------

Co-authored-by: Eugene Tolbakov <ev.tolbakov@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-04-15 07:53:48 +00:00
Yingwen
2f4726f7b5 refactor: Move manifest manager lock to MitoRegion (#3689)
* feat: remove manager inner wip

* feat: put manifest lock in region

* feat: don't update manifest if manager is stopped

* chore: address CR comments
2024-04-15 05:48:25 +00:00
dennis zhuang
75d85f9915 feat: impl table_constraints table for information_schema (#3698)
* feat: impl table_constraints table for information_schema

* test: update information_schema sqlness test

* test: adds table_constraints sqlness test
2024-04-15 03:59:16 +00:00
discord9
db329f6c80 feat(flow): transform substrait SELECT&WHERE&GROUP BY to Flow Plan (#3690)
* feat: transform substrait SELECT&WHERE&GROUP BY to Flow Plan

* chore: reexport from common/substrait

* feat: use datafusion Aggr Func to map to Flow aggr func

* chore: remove unwrap&split literal

* refactor: split transform.rs into smaller files

* feat: apply optimize for variadic fn

* refactor: split unit test

* chore: per review
2024-04-12 07:38:42 +00:00
Ning Sun
544c4a70f8 refactor: check error type before logging (#3697)
* refactor: check error type before logging

* chore: update log level for broken pipe

* refactor: leave a debugging output for non-critical errors
2024-04-12 02:18:14 +00:00
dimbtp
02f806fba9 fix: cli export "create table" with quoted names (#3684)
* fix: cli export `create table` with quoted names

* add test

* apply review comments

* fix to pass check

* remove eprintln for clippy check

* use prebuilt binary to avoid compile

* ci run coverage after build

* drop dirty hack test

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-04-11 06:56:14 +00:00
tison
9459ace33e ci: add CODEOWNERS file (#3691)
Signed-off-by: tison <wander4096@gmail.com>
2024-04-10 17:47:54 +00:00
Weny Xu
c1e005b148 refactor: drop table procedure (#3688)
* refactor: refactor drop table procedure

* refactor: refactor test utils
2024-04-10 12:22:10 +00:00
discord9
c00c1d95ee chore(flow): more comments&lint (#3680)
* chore: more comments&lint

* chore: per review

* chore: remove redundant dep
2024-04-10 03:31:22 +00:00
tison
5d739932c0 chore: remove TODO that has been done (#3683)
This TODO is done by https://github.com/GreptimeTeam/greptimedb/pull/3473.
2024-04-09 22:55:55 +00:00
Ruihang Xia
aab7367804 feat: try get pk values from cache when applying predicate to parquet (#3286)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-04-09 12:53:38 +00:00
Yohan Wal
34f935df66 chore: create database api change in protobuf (#3682) 2024-04-09 12:11:38 +00:00
Weny Xu
fda1523ced refactor: refactor alter table procedure (#3678)
* refactor: refactor alter table procedure

* chore: apply suggestions from CR

* chore: remove `alter_expr` and `alter_kind`
2024-04-09 10:35:51 +00:00
tison
2c0c7759ee feat: add checksum for checkpoint data (#3651)
* feat: add checksum for checkpoint data

Signed-off-by: tison <wander4096@gmail.com>

* add test

Signed-off-by: tison <wander4096@gmail.com>

* clippy

Signed-off-by: tison <wander4096@gmail.com>

* fix: checksum should calculate on uncompressed data

Signed-off-by: tison <wander4096@gmail.com>

* address comments

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-04-09 08:32:24 +00:00
Weny Xu
2398918adf feat(fuzz): support to create metric table (#3617)
Co-authored-by: tison <wander4096@gmail.com>
2024-04-09 06:00:04 +00:00
Ruihang Xia
50bea2f107 feat: treat all number types as field candidates (#3670)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-09 03:28:21 +00:00
JeremyHi
1629435888 chore: unify name metasrv (#3671)
chore: unify name
2024-04-09 03:03:26 +00:00
tison
b3c94a303b chore: add a fix-clippy Makefile target (#3677)
* chore: add a fix-clippy Makefile target

* Update Makefile
2024-04-09 02:59:55 +00:00
tison
883b7fce96 refactor: bundle the lightweight axum test client (#3669)
* refactor: bundle the lightweight axum test client

Signed-off-by: tison <wander4096@gmail.com>

* address comments

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-04-09 02:33:26 +00:00
discord9
ea9367f371 refactor(flow): func spec api&use Error not EvalError in mfp (#3657)
* refactor: func's specialization& use Error not EvalError

* docs: some pub item

* chore: typo

* docs: add comments for every pub item

* chore: per review

* chore: per review&derive Copy

* chore: per review&test for binary fn spec

* docs: comment explain how binary func spec works

* chore: minor style change

* fix: Error not EvalError
2024-04-09 02:32:02 +00:00
tison
2896e1f868 refactor: pass http method to metasrv http handler (#3667)
* refactor: pass http method to metasrv http handler

Signed-off-by: tison <wander4096@gmail.com>

* update maintenance endpoint

Signed-off-by: tison <wander4096@gmail.com>

* fixup

Signed-off-by: tison <wander4096@gmail.com>

* Update src/meta-srv/src/service/admin.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2024-04-09 02:26:42 +00:00
Lei, HUANG
183fccbbd6 chore: remove global_ttl config (#3673)
* chore: remove global_ttl config

* fix: clippy
2024-04-09 02:00:50 +00:00
Weny Xu
b51089fa61 fix: DeserializedValueWithBytes::from_inner misusing (#3676)
* fix: fix `DeserializedValueWithBytes::from_inner` misusing

* Update src/common/meta/src/key.rs

---------

Co-authored-by: tison <wander4096@gmail.com>
2024-04-09 01:48:35 +00:00
Yohan Wal
682b04cbe4 feat(fuzz): add create database target (#3675)
* feat(fuzz): add create database target

* chore(ci): add fuzz_create_database ci cfg
2024-04-09 01:33:29 +00:00
tison
e1d2f9a596 chore: improve contributor click in git-cliff (#3672)
Signed-off-by: tison <wander4096@gmail.com>
2024-04-08 18:15:00 +00:00
tison
2fca45b048 ci: setup-protoc always with token (#3674)
Signed-off-by: tison <wander4096@gmail.com>
2024-04-08 18:13:24 +00:00
Yingwen
3e1a125732 feat: add append mode to table options (#3624)
* feat: add append mode to table options

* test: add append mode test

* test: rename test tables

* chore: Add delete test for append mode
2024-04-08 13:42:58 +00:00
Mofeng
34b1427a82 fix(readme): fix link of Ingester-js (#3668) 2024-04-08 12:17:44 +00:00
discord9
28fd0dc276 feat(flow): render map&related tests (#3581)
* feat: render map&related tests

* chore: license header

* chore: update Cargo.lock&remove unused

* refactor: rename ComputeState to DataflowState

* chore: use org fork

* chore: fix typos

* chore: per review

* chore: more explain to use `VecDeque` in err collector

* chore: typos

* chore: more comment on `Plan::Let`

* chore: typos

* refactor mfp rendering

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: update `now` in closure

* feat: use insert_local

* chore: remove unused

* chore: per review

* chore: fmt comment

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2024-04-08 11:36:07 +00:00
Weny Xu
32b9639d7c feat(procedure): support to rollback (#3625)
* feat: add rollback method

* refactor: simplify the state control

* feat(procedure): support to rollback

* test: add tests for rollback

* feat: persist rollback procedure state

* feat: rollback procedure after restarting

* feat: add `CommitRollback`, `RollingBack` to ProcedureStateResponse

* chore: apply suggestions from CR

* feat: persist rollback error

* feat: add `is_support_rollback`

* chore: apply suggestions from CR

* chore: update greptime-proto

* chore: rename to `rollback_supported`

* chore: rename to `RollbackProcedureRecovered`
2024-04-08 11:23:23 +00:00
ZonaHe
9038e1b769 feat: update dashboard to v0.4.10 (#3663)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2024-04-08 16:35:41 +08:00
JeremyHi
12286f07ac feat: cluster information (#3631)
* chore: keep the same method order in KvBackend

* feat: make meta client able to get all node info of cluster

* feat: cluster info data model

* feat: frontend and datanode info

* feat: list node info

* chore: remove the method: is_started

* fix: scan key prefix

* chore: impl From for NodeInfoKey

* chore: doc for trait and struct

* chore: reuse the error

* chore: refactor two collect cluster info handlers

* chore: remove inline

* chore: refactor two collect cluster info handlers
2024-04-08 07:48:36 +00:00
tison
e920f95902 refactor: drop Table trait (#3654)
* refactor: drop Table trait

Signed-off-by: tison <wander4096@gmail.com>

* finish rename

Signed-off-by: tison <wander4096@gmail.com>

* Apply suggestions from code review

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>

* Update time_range_filter_test.rs

* Update src/query/src/tests/time_range_filter_test.rs

* apply comments

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2024-04-08 07:28:55 +00:00
Yohan Wal
c4798d1913 refactor: move create database to procedure (#3626)
* refactor: move create database to procedure

* feat: enable database creation of rpc

* chore: update the commit hash of greptime-proto
2024-04-08 07:05:55 +00:00
Ruihang Xia
2ede968c2b chore: bump version to 0.7.2 (#3658)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-08 06:33:29 +00:00
Yingwen
89db8c18c8 feat: Add timers to more mito methods (#3659)
* feat: add timers for more mito methods

* refactor: combine methods to get type name
2024-04-08 05:53:34 +00:00
LFC
aa0af6135d chore: add manifest related metrics (#3634)
* chore: add two manifest related metrics

* Update src/mito2/src/manifest/manager.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/mito2/src/metrics.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: resolve PR comments

* update cargo lock

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-04-08 05:53:08 +00:00
dennis zhuang
87e0189e58 fix!: columns table in information_schema misses some columns (#3639)
* fix: columns table in information_schema misses some columns

* fix: test_information_schema_dot_columns

* fix: fuzz test

* feat: adds srs_id and refactor some columns with constant vector

* fix: test_information_schema_dot_columns

* chore: update comment

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* build(deps): bump h2 from 0.3.24 to 0.3.26 (#3642)

Bumps [h2](https://github.com/hyperium/h2) from 0.3.24 to 0.3.26.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.26/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.24...v0.3.26)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump whoami from 1.4.1 to 1.5.1 (#3643)

Bumps [whoami](https://github.com/ardaku/whoami) from 1.4.1 to 1.5.1.
- [Changelog](https://github.com/ardaku/whoami/blob/v1/CHANGELOG.md)
- [Commits](https://github.com/ardaku/whoami/compare/v1.4.1...v1.5.1)

---
updated-dependencies:
- dependency-name: whoami
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* feat: adding victoriametrics remote write (#3641)

* feat: adding victoria metrics remote write

* test: add e2e tests for prom and vm remote writes

* fix: construct correct pk list with pre-existing pk (#3614)

* fix: construct correct pk list with pre-existing pk

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update UT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test(sqlness): release databases after tests (#3648)

* refactor: rename Greptime_Type to Greptime_type

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ning Sun <sunng@protonmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-04-08 03:20:49 +00:00
tison
7e8e9aba9d chore: generate release notes with git-cliff (#3650)
* chore: generate release notes with git-cliff

Signed-off-by: tison <wander4096@gmail.com>

* chore: newlines

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-04-08 03:09:35 +00:00
tison
c93b76ae5f ci: bump license header checker action version (#3655) 2024-04-08 10:38:03 +08:00
Weny Xu
097a0371dc test(sqlness): release databases after tests (#3648) 2024-04-07 09:35:34 +00:00
Ruihang Xia
b9890ab870 fix: construct correct pk list with pre-existing pk (#3614)
* fix: construct correct pk list with pre-existing pk

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update UT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-07 08:11:52 +00:00
Ning Sun
b32e0bba9c feat: adding victoriametrics remote write (#3641)
* feat: adding victoria metrics remote write

* test: add e2e tests for prom and vm remote writes
2024-04-07 07:09:21 +00:00
dependabot[bot]
fe1a0109d8 build(deps): bump whoami from 1.4.1 to 1.5.1 (#3643)
Bumps [whoami](https://github.com/ardaku/whoami) from 1.4.1 to 1.5.1.
- [Changelog](https://github.com/ardaku/whoami/blob/v1/CHANGELOG.md)
- [Commits](https://github.com/ardaku/whoami/compare/v1.4.1...v1.5.1)

---
updated-dependencies:
- dependency-name: whoami
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-05 19:02:36 -07:00
dependabot[bot]
11995eb52e build(deps): bump h2 from 0.3.24 to 0.3.26 (#3642)
Bumps [h2](https://github.com/hyperium/h2) from 0.3.24 to 0.3.26.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.26/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.24...v0.3.26)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-05 19:02:19 -07:00
dimbtp
86d377d028 fix: move object store read/write timer into inner (#3627)
* fix: move object store read/write timer into inner

* add Drop for PrometheusMetricWrapper

* call await on async read/write

* apply review comments

* get rid of option on timer
2024-04-03 12:24:34 +00:00
Lei, HUANG
ddeb73fbb7 fix: mistakenly removes compaction inputs on failure (#3635)
* fix: mistakenly removes compaction inputs on failure

* test: add test for compaction failure

---------

Co-authored-by: evenyag <realevenyag@gmail.com>
2024-04-03 11:54:20 +00:00
niebayes
d33435fa84 feat: introduce wal benchmarker (#3446)
* feat: introduce wal benchmarker

* chore: add log store metrics

* chore: add some comments to wal benchmarker

* fix: ci

* chore: add more metrics for kafka logstore

* chore: add more timers for kafka logstore

* chore: add more configs

* chore: move humantime to common dependencies

* refactor: refactor wal benchmarker

* fix: apply suggestions from code review

* doc: add a simple README for wal benchmarker

* fix: Cargo.toml

* fix: clippy

* chore: rename wal.rs to wal_bench.rs

* fix: compile
2024-04-03 03:16:05 +00:00
Weny Xu
a0f243c128 feat(procedure): enable auto split large value (#3628)
* chore: add comments

* chore: remove `pub`

* chore: rename to `merge_multiple_values`

* chore: fix typo

* feat(procedure): enable auto split large value

* chore: apply suggestions from CR

* chore: rename to `max_metadata_value_size`

* chore: remove the NoneAsEmptyString

* chore: set default max_metadata_value_size to 1500KiB
2024-04-02 12:13:59 +00:00
JeremyHi
a61fb98e4a refactor: alter logical tables (#3618)
* refactor: on prepare

* refactor: on create regions

* refactor: update metadata
2024-04-02 06:21:34 +00:00
Weny Xu
6c316d268f feat(procedure): auto split large value to multiple values (#3605)
* feat: implement MultipleValuesStream

* refactor: move KeySet to common-procedure

* refactor: move MultipleValuesStream to common-procedure

* refactor: refactor String to KeySet

* fix: fix dropping `collecting` unexpectedly

* fix: fix typo

* refactor: add the fast path of put

* refactor: remove `single_value_collector`

* refactor: use `extend` instead of `push`

* test: add more tests for `KvStateStore`

* test(etcd_store): add more tests for `KvStateStore`

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor: refactor with async_stream

* Update src/common/procedure/src/store/util.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-04-01 12:04:29 +00:00
Lei, HUANG
5e24448b96 feat: reject invalid timestamp ranges in copy statement (#3623)
* chore: reject invalid timestamp ranges in copy statement

* tests: add unit tests
2024-04-01 08:25:31 +00:00
JohnsonLee
d6b2d1dfb8 feat: Support outputting various date styles for postgresql (#3602)
* test: add integration_test for datetime style

* feat: support various datestyle for postgres

* doc: rewrite the comment about merge_datestyle_value

* test: add more test to illustrate valid datestyle input
2024-04-01 07:31:36 +00:00
Yingwen
bfd32571d9 fix: run purge jobs in another scheduler (#3621) 2024-04-01 03:18:14 +00:00
JeremyHi
0eb023bb23 feat: group requests by peer (#3619) 2024-04-01 03:10:22 +00:00
dennis zhuang
4a5bb698a9 feat: impl show index and show columns (#3577)
* feat: impl show index and show columns

* fix: show index from database

* fix: canonicalize table name

* refactor: show parsers
2024-03-29 18:34:52 +00:00
Eugene Tolbakov
18d676802a feat(function): add timestamp epoch integer support for to_timezone (#3620)
* feat(function): add timestamp epoch integer support for to_timezone

* chore: fmt
2024-03-29 18:33:24 +00:00
JeremyHi
93da45f678 feat: let alter table procedure can only alter physical table (#3613)
* feat: let alter table procedure can only alter physical table

* chore: rm unnecessary todo
2024-03-29 09:50:33 +00:00
Ruihang Xia
7a19f66be0 ci: ignore type in sqlness sql and result files (#3616)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-29 09:39:37 +00:00
Ning Sun
500f9f10fc feat: allow cross-schema query in promql (#3545)
* feat: add __schema__ tag for promql parser

* feat: disable matcher op other than equals

* test: add more test to ensure context getting reset

* test: add integration test

* test: refactor tests

* refactor: remove duplicated test code

* refactor: update according to review comments

* test: add sqlness test for cross schema scenario

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-29 07:41:01 +00:00
JeremyHi
f49cd0ca18 refactor: cache invalidator (#3611)
* chore: remove some alias

* refactor: cache invalidator
2024-03-29 07:33:51 +00:00
Yingwen
ffbb132f27 feat: Implement an unordered scanner for append mode (#3598)
* feat: ScanInput

* refactor: seq scan use scan input

* chore: implement unordered scan

* feat: use unordered scan for append table

* fix: unordered scan panic

* docs: update mermaid

* chore: address comment

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2024-03-29 07:25:35 +00:00
Eugene Tolbakov
14267c2aed feat(tql): add initial support for start,stop,step as sql functions (#3507)
* feat(tql): add initial support for start,stop,step as sql functions

* fix(tql): remove unwraps, adjust fmt

* fix(tql): address taplo issue

* feat(tql): update parse_tql_query logic

* fix(tql): change query parsing logic to use parser instead of delimiter

* fix(tql): add timestamp function support, add sqlness tests

* fix(tql): add lookback optional param for tql eval

* fix(tql): adjust tests for now() function

* fix(tql): introduce the tqlerror to differentiate failures on parsing, evaluation and simplification stages

* fix(tql): add tests for explain/analyze

* feat(tql): add lookback support for explain/analyze, update tests

* feat(tql): add more sqlness tests

* chore(tql): extract common logic for eval, analyze and explain into a single function

* feat(tql): address CR points

* feat(tql): use snafu for tql errors, add more docs

* feat(tql): address CR points
2024-03-29 06:37:25 +00:00
Ruihang Xia
77cc7216af feat: support 2+2 and /status/buildinfo (#3604)
* feat: implement buildinfo endpoint

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor prom result struct

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format toml file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/servers/src/http/prometheus_resp.rs

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-29 06:31:39 +00:00
Zhenchi
63681f0e4d refactor(table): remove unused table requests (#3603)
* refactor(table): remove unused requests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: clippy

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: compile

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-03-28 11:31:14 +00:00
Ruihang Xia
06a90527a3 fix: adjust status code to http error code map (#3601)
* fix: adjust status code to http error code map

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-28 08:54:51 +00:00
niebayes
d5ba2fcf9d test: add more integration test for kafka wal (#3190)
* test: add integration tests for kafka wal

* chore: rebase main

* chore: unify naming convention for wal config

* chore: add register loaders switch

* chore: alter tables by adding a new column

* chore: move rand to dev-dependencies

* chore: update Cargo.lock
2024-03-28 06:55:18 +00:00
dennis zhuang
e3b37ee2c9 fix: canonicalize catalog and schema names (#3600) 2024-03-28 06:40:15 +00:00
dennis zhuang
5d7ce08358 feat: adds metric engine to information_schema engines table (#3599)
* feat: adds metric engine to information_schema engines table

* fix: support value for metric engine
2024-03-28 06:37:34 +00:00
JeremyHi
92a8e863de chore: do not reply for broadcast msg (#3595) 2024-03-27 11:39:23 +00:00
JeremyHi
9428cb8e7c feat: remove support for logical tables in the create table procedure (#3592)
* feat: Remove support for logical tables in the create table procedure

* chore: remove the redundant table ids alloc

* chore: minor fix
2024-03-27 10:03:42 +00:00
Weny Xu
5addb7d75a test: add tests for drop databases (#3594)
* refactor: minimize visibility of drop database steps

* feat: implement as_any

* refactor: move common functions to test_util

* test: add tests for drop databases

* fix: fix deleting physical table route unexpectedly
2024-03-27 09:18:37 +00:00
Weny Xu
623c930736 refactor: refactor drop table executor (#3589)
* refactor: refactor drop table executor

* chore: apply suggestions from CR
2024-03-27 06:29:54 +00:00
JeremyHi
5fa01e7a96 feat: create regions persist true (#3590)
* feat: change open-region-step's status persist as true

* feat: avoid cloning

* fix: fix unit test
2024-03-27 06:26:58 +00:00
Yingwen
922b1a9b66 feat: Implement append mode for a region (#3558)
* feat: add dedup option to merge reader

* test: test merger

* feat: append mode option

* feat: implement append mode for regions

* feat: only allow put under append mode

* feat: always create builder

* test: test append mode

* style: fix clippy

* test: trigger compaction

* chore: fix compiler errors
2024-03-27 03:21:22 +00:00
shuiyisong
653697f1d5 chore: add back core dependency (#3588) 2024-03-26 19:53:22 -07:00
JohnsonLee
83643eb195 feat: Support printing postgresql's bytea data type in its "hex" and "escape" format (#3567)
* feat: support set variable statement of session

* feat: support printing postgresql's bytea data type in its "hex" and "escape" format in ugly way

* refactor: add 'SessionConfigValue' type and unify the name

* doc: add license header

* refactor: confine coupling with 'sql::ast::Value' in SessionConfigValue

* refactor: move all bytea wrapper into bytea.rs

* fix: remove unused import in context.rs and postgres.rs

* refactor: rename 'set_configuration_parameter' to 'set_session_config'

rename 'set_configuration_parameter' in statement_.rs to 'set_session_config'

* refactor: use mod to organize options via macro

* refactor: re-model the session config value with static type

* test: add integration test

* refactor: move the encode bytea by format type logic into encoder

refactor: use Arc<DashMap> instead of DashMap in QueryContext

refactor: use Arc<DashMap> instead of DashMap in QueryContext

    Avoid expensive clone

refactor: use unreachable!() instead of unimplemented!()

refactor: move the encode bytea by format type logic into encoder

test: add binary format integration test case

* test: add ut for byte related type

* doc: remove TODO of bytea_output

* refactor: simplify the implementation with simple struct instead of complex typing

* fix: typo of 'Available'

* fix compile

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-03-27 01:54:41 +00:00
tison
d83279567b feat(auth): watch file user provider (#3566)
* feat(auth): watch file user provider

Signed-off-by: tison <wander4096@gmail.com>

* impl

Signed-off-by: tison <wander4096@gmail.com>

* use debouncer

Signed-off-by: tison <wander4096@gmail.com>

* add test

Signed-off-by: tison <wander4096@gmail.com>

* clippy

Signed-off-by: tison <wander4096@gmail.com>

* add path for FileWatch snafu

Signed-off-by: tison <wander4096@gmail.com>

* Apply comments

Signed-off-by: tison <wander4096@gmail.com>

* fix compile

Signed-off-by: tison <wander4096@gmail.com>

* drop notify-debouncer-full dep

Signed-off-by: tison <wander4096@gmail.com>

* empty to allow all

Signed-off-by: tison <wander4096@gmail.com>

* more test and log

Signed-off-by: tison <wander4096@gmail.com>

* relax the wait period

Signed-off-by: tison <wander4096@gmail.com>

* avoid sleep

Signed-off-by: tison <wander4096@gmail.com>

* Revert "avoid sleep"

This reverts commit d7a0be1dea.

* avoid sleep

Signed-off-by: tison <wander4096@gmail.com>

* cargo fmt

Signed-off-by: tison <wander4096@gmail.com>

* tidy dep

Signed-off-by: tison <wander4096@gmail.com>

* adjust

Signed-off-by: tison <wander4096@gmail.com>

* try be stable on CI

Signed-off-by: tison <wander4096@gmail.com>

* debugging

Signed-off-by: tison <wander4096@gmail.com>

* debugging

Signed-off-by: tison <wander4096@gmail.com>

* watch on the dir

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-27 01:19:18 +00:00
tison
150454b1fd chore: Delete CODE_OF_CONDUCT.md (#3578)
Leverage GitHub's feature to reuse `GreptimeTeam/.github` content.

This depends on https://github.com/GreptimeTeam/.github/pull/5.
2024-03-26 09:43:05 -07:00
JeremyHi
58c7858cd4 feat: update physical table schema on alter logical tables (#3585)
* feat: update physical table schema on alter

* feat: alter logical table in sql path

* feat: invalidate cache step1

* feat: invalidate cache step2

* feat: invalidate cache step3

* feat: invalidate cache step4

* fix: failed ut

* fix: standalone cache invalidator

* feat: log the count of already finished

* feat: re-invalidate cache

* chore: by comment

* chore: Update src/common/meta/src/ddl/create_logical_tables.rs

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-03-26 14:29:53 +00:00
dimbtp
dd18d8c97b build(deps): remove some unused dependencies (#3582)
* build(deps): remove some unused dependencies

* add `arc-swap` dependency back
2024-03-26 12:48:28 +00:00
Lei, HUANG
175929426a feat: support time range in copy table (#3583)
* feat: support specifying time range in copy table statement

* chore: update sqlness results

* fix: sqlness
2024-03-26 11:24:28 +00:00
Ruihang Xia
8f9676aad2 fix: incorrect version info in (#3586)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-26 09:31:01 +00:00
Ruihang Xia
74565151e9 fix: update pk_cache in compat reader (#3576)
* fix: update pk_cache in compat reader

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update document

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* avoid mysterious bug

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-26 08:31:00 +00:00
Ruihang Xia
83c1b485ea chore: limit OpenDAL's feature gates (#3584)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-26 07:54:06 +00:00
JeremyHi
c2dd1136fe feat: batch alter logical tables (#3569)
* feat: add unit test for alter logical tables

* Update src/common/meta/src/ddl/alter_table.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* feat: add some comments

* chore: add debug_assert_eq

* chore: fix some nits

* chore: remove the method batch_get_table_routes

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-26 07:07:23 +00:00
tison
7c1c6e8b8c refactor: try upgrade regex-automata (#3575)
* refactor: try upgrade regex-automata

Signed-off-by: tison <wander4096@gmail.com>

* try fix

Signed-off-by: tison <wander4096@gmail.com>

* always check match with next_eoi_state

Signed-off-by: tison <wander4096@gmail.com>

* add a guard to prevent over moving the state

Signed-off-by: tison <wander4096@gmail.com>

* tidy

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-26 04:28:14 +00:00
Yingwen
62d8bbb10c ci: use single commit on the deployment branch (#3580) 2024-03-25 21:04:57 -07:00
Weny Xu
bf14d33962 feat: implement the drop database procedure (#3541)
* refactor: remove Sync trait of Procedure

* refactor: remove unnecessary async

* feat: implement the drop database procedure

* refactor: refactor DdlManager register_loaders

* feat: register the DropDatabaseProcedureLoader

* chore: fmt toml

* feat: support to submit DropDatabaseTask

* feat: support drop database stmt

* fix: empty the tables stream

* fix: ensure the factory always exists

* test: update sqlness results

* chore: correct comments

* test: update sqlness results

* test: update sqlness results

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2024-03-25 06:12:47 +00:00
tison
0f1747b80d chore: retain original headers (#3572)
Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-03-25 03:53:51 +00:00
Ruihang Xia
992c7ec71b feat: update physical table's schema on creating logical table (#3570)
* feat: update physical table's schema on creating logical table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove debug code

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak ut const

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* invalidate physical table cache

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-25 03:19:30 +00:00
x³u³
2ad0b24efa fix: set http response charset to utf-8 when using table format (#3571) 2024-03-25 03:13:01 +00:00
Ruihang Xia
2b2fd80bf4 feat: return new added columns in region server's extension response (#3533)
* feat: adapt the new proto response

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update interfaces

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* write columns to extension

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use physical column's schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sort logical columns by name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format code

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* return physical table's column

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/common/meta/src/datanode_manager.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* implement sort column logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* proxy create table procedure to create logical table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add unit test for sort_columns

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2024-03-23 09:31:16 +00:00
x³u³
24886b9530 test: add a parameter type mismatch test case to sql integration test (#3568) 2024-03-22 17:43:20 +00:00
tison
8345f1753c chore: avoid confusing TryFrom (#3565)
Signed-off-by: tison <wander4096@gmail.com>
2024-03-22 11:16:36 +00:00
tison
3420a010e6 refactor: reduce one clone by carefully pass ready boundary (#3543)
* refactor: reduce one clone by carefully pass ready boundary

Signed-off-by: tison <wander4096@gmail.com>

* defensive handle None

Signed-off-by: tison <wander4096@gmail.com>

* tidy code a bit

Signed-off-by: tison <wander4096@gmail.com>

* except batch exist

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-22 04:46:17 +00:00
discord9
9f020aa414 fix(flow): Arrange get range with batch unaligned (#3552)
* fix: Arrange get range with batch unaligned

* chore: per review

* refactor: sort at apply_updates
2024-03-22 04:08:37 +00:00
tison
c9ac72e7f8 ci: use a PAT to list all writers (#3559)
Signed-off-by: tison <wander4096@gmail.com>
2024-03-21 20:25:01 -07:00
Lei, HUANG
86fb9d8ac7 refactor: remove redundant PromStoreProtocolHandler::write (#3553)
refactor: remove redundant PromStoreProtocolHandler::write API and rename PromStoreProtocolHandler::write_fast to write
2024-03-22 02:09:00 +00:00
Lei, HUANG
1f0fc40287 fix: performance degradation caused by config change (#3556) 2024-03-21 12:23:52 +00:00
tison
8b7a5aaa4a refactor: handle error for http format (#3548)
* refactor: handle error for http format

Signed-off-by: tison <wander4096@gmail.com>

* finish format handling

Signed-off-by: tison <wander4096@gmail.com>

* simplify auth error

Signed-off-by: tison <wander4096@gmail.com>

* fix

Signed-off-by: tison <wander4096@gmail.com>

* clippy format

Signed-off-by: tison <wander4096@gmail.com>

* no longer set greptime-db-format on influxdb error

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-21 07:29:11 +00:00
Weny Xu
856a4e1e4f refactor: refactor CacheInvalidator (#3550)
* refactor: refactor InvalidateCache Instruction

* refactor: refactor CacheInvalidator
2024-03-20 10:18:28 +00:00
Yingwen
39b69f1e3b refactor!: Renames the new memtable to PartitionTreeMemtable (#3547)
* refactor: rename mod merge_tree to partition_tree

* refactor: rename merge_tree

* refactor: change merge tree comment

* refactor: rename merge tree struct

* refactor: memtable options
2024-03-20 06:40:41 +00:00
tison
bbcdb28b7c chore: fix comment in fetch-dashboard-assets.sh (#3546) 2024-03-20 06:18:14 +00:00
YCCD
6377982501 feat: Able to pretty print sql query result in http output (#3539)
* feat: Able to pretty print sql query result in http output

* fix: add some tests

* fix: add some space, delete fn into_payload, and impl Display for TableResponse
2024-03-20 03:25:17 +00:00
Lei, HUANG
ddbcff68dd feat: support append-only mode in time-series memtable (#3540)
* feat: support append-only mode in time-series memtable

* fix: rename sort_and_dedup to sort
2024-03-19 20:37:54 +00:00
WU Jingdi
5b315c2d40 feat: support multi params in promql range function macro (#3464)
feat: support multi params in promql range function
2024-03-19 20:36:51 +00:00
Ruihang Xia
9816d2a08b fix: clone data instead of moving it - homemade future is dangerous (#3542)
* fix: clone data instead of moving it - homemade future is dangerous

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comment

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-19 13:46:55 +00:00
Ning Sun
a99d6eb3f9 feat: update pgwire to 0.20 for improved performance (#3538) 2024-03-19 10:11:05 +00:00
discord9
2c115bc22a feat(flow): shared in-memory state for dataflow operator (#3508)
* feat: Arrangement shared state

* feat: arrange&tests

* docs: detailed&tests for get

* chore: license

* refactor: opt out ts expr&tests: internal ts

* docs: remove some TODOs

* feat: use smallvec size of 2

* refactor: per review

* chore: per review

* chore: per review

* chore: remove redundant clone

* feat: return max expire time&docs: more explain cur expire config
2024-03-19 10:03:05 +00:00
Yingwen
641592644d feat: support per table memtable options (#3524)
* feat: add memtable builder to region

* refactor: rename memtable_builder in worker to default_memtable_builder

* fix: return error instead of using default compaction options

Support deserializing memtable and compaction options from the option
map

* feat: optional memtable options

* feat: add MemtableBuilderProvider to create builders

* feat: change default memtable and skip deserializing dedup

* chore: update test and comment

* chore: test invalid type

* feat: metric engine use new memtable manually

* feat: expose more memtable configs

* feat: add memtable options to valid option list

* test: add test

* test: sqlness test

* chore: serde workspace

* chore: remove comments
2024-03-19 08:50:10 +00:00
Weny Xu
fa0f3555d4 refactor: introduce the DropTableExecutor (#3534)
* refactor: introduce the DropTableExecutor

* fix: register the dropping regions

* test: add tests for on_prepare

* chore: add TODO comment
2024-03-19 08:29:12 +00:00
ZonaHe
3cad844acd feat: update dashboard to v0.4.9 (#3531)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2024-03-19 03:18:42 +00:00
JeremyHi
cf25cf984b chore: avoid unnecessary cloning (#3537) 2024-03-18 13:24:13 +00:00
shuiyisong
3acd5bfad0 chore: http header with metrics (#3536)
* chore: bring write cost to output

* chore: add write cost to greptimev1result

* chore: add metrics to influxdb write resp header

* chore: add metrics to prom store

* chore: add metrics to otlp

* chore: add debug log

* fix: prom remote read with output

* fix: prom queries don't output metrics header

* chore: extract header value

* chore: refactor code

* chore: fix cr issue
2024-03-18 11:21:19 +00:00
Weny Xu
343525dab8 refactor: remove removed-prefixed keys (#3535) 2024-03-18 11:07:30 +00:00
tison
0afac58e4d feat(metasrv): implement maintenance (#3527)
* feat(metasrv): implement maintenance

Signed-off-by: tison <wander4096@gmail.com>

* fixup and test

Signed-off-by: tison <wander4096@gmail.com>

* Add coauthors

Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: xifyang <595482900@qq.com>

* tidy code

Signed-off-by: tison <wander4096@gmail.com>

* Apply suggestions from code review

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* always read kv_backend maintenance state

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: xifyang <595482900@qq.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2024-03-18 09:41:14 +00:00
tison
393ea44de0 docs: improve fn comments (#3526)
Signed-off-by: tison <wander4096@gmail.com>
2024-03-18 03:18:01 +00:00
tison
44731fd653 docs: readme style and project status (#3528)
* docs: readme style

Signed-off-by: tison <wander4096@gmail.com>

* more opening

Signed-off-by: tison <wander4096@gmail.com>

* Project Status

Signed-off-by: tison <wander4096@gmail.com>

* tidy

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-18 03:03:25 +00:00
tison
d36a5a74d3 ci: unassign issues stale 14 days ago (#3529)
This closes https://github.com/GreptimeTeam/greptimedb/issues/3525.
2024-03-18 03:01:10 +00:00
Yingwen
74862f8c3f feat(mito): Checks whether a region should flush periodically (#3459)
* feat: handle flush periodically

* chore: call periodical method in loop

* feat: check periodical tasks on channel timeout

* refactor: use time provider to get time

Mock a time provider to test auto flush

* chore: fix typos

* refactor: rename mock time provider

* style: fix clippy

* chore: address comment
2024-03-15 06:41:28 +00:00
Weny Xu
a52aedec5b feat: implement the drop database parser (#3521)
* refactor: refactor drop table parser

* feat: implement drop database parser

* fix: canonicalize name of create database

* test: update sqlness result

* Update src/operator/src/statement.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-15 06:15:18 +00:00
tison
b6fac619a6 docs: revise README file (#3522)
* docs: revise README file

Signed-off-by: tison <wander4096@gmail.com>

* build prerequisite

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-15 04:22:35 +00:00
Weny Xu
a29e7ebb7d feat: acquire all locks in procedure (#3514)
* feat: acquire catalog and schema lock in region failover

* chore: remove unused code

* feat!: acquire catalog and schema lock in region migration

* feat: acquire catalog and schema lock in create table
2024-03-14 11:41:23 +00:00
Yingwen
8ca9e01455 feat: Partition memtables by time if compaction window is provided (#3501)
* feat: define time partitions

* feat: adapt time partitions to version

* feat: implement non write methods

* feat: add write one to memtable

* feat: implement write

* chore: fix warning

* fix: inner not set

* refactor: add collect_iter_timestamps

* test: test partitions

* chore: debug log

* chore: fix typos

* chore: log memtable id

* fix: empty check

* chore: log total parts

* chore: update comments
2024-03-14 11:13:01 +00:00
Weny Xu
3a326775ee ci: add bin options to reduce build burden (#3518)
chore: add bin options
2024-03-14 11:05:35 +00:00
Yingwen
5ad3b7984e docs: add v0.7 TSBS benchmark result (#3512)
* docs: add v0.7 TSBS benchmark result

* docs: add OS

* docs: fix format
2024-03-14 08:29:52 +00:00
Yingwen
4fc27bdc75 chore: bump version to v0.7.1 (#3510)
chore: bump version
2024-03-14 07:43:47 +00:00
LFC
e3c82568e5 fix: correctly generate sequences when the value is pre-existed (#3502) 2024-03-14 06:55:12 +00:00
tison
61f0703af8 feat: support decode gzip if influxdb write specify it (#3494)
* feat: support decode gzip if influxdb write specify it

Signed-off-by: tison <wander4096@gmail.com>

* address comments

Signed-off-by: tison <wander4096@gmail.com>

* simplify with tower_http DecompressionLayer

Signed-off-by: tison <wander4096@gmail.com>

* tidy some code

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-14 04:26:26 +00:00
Ruihang Xia
b85d7bb575 fix: decoding prometheus remote write proto doesn't reset the value (#3505)
* reset Sample

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* accomplish test assertion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert toml format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-14 03:08:14 +00:00
Ning Sun
d334d74986 fix!: remove error message from http header to avoid panic (#3506)
fix: remove error message from http header
2024-03-14 01:43:38 +00:00
Ning Sun
5ca8521e87 ci: attempt to setup docker cache for etcd (#3488)
* ci: attempt to setup docker cache for etcd

* ci: do not use file hash for cache key
2024-03-14 00:48:02 +00:00
Weny Xu
e4333969b4 feat(fuzz): add alter table target (#3503)
* feat(fuzz): validate semantic type of column

* feat(fuzz): add fuzz_alter_table target

* feat(fuzz): validate columns

* chore(ci): add fuzz_alter_table ci cfg
2024-03-13 14:11:47 +00:00
Zhenchi
b55905cf66 feat(fuzz): add insert target (#3499)
* fix(common-time): allow building nanos timestamp from parts split from i64::MIN

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat(fuzz): add insert target

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: cleanup cargo.toml and polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-03-13 10:03:03 +00:00
WU Jingdi
fb4da05f25 fix: adjust fill behavior of range query (#3489) 2024-03-13 09:20:34 +00:00
Zhenchi
904484b525 fix(common-time): allow building nanos timestamp from parts split from i64::MIN (#3493)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-03-13 02:46:00 +00:00
tison
cafb4708ce refactor: validate constraints eagerly (#3472)
* chore: validate constraints eagerly

Signed-off-by: tison <wander4096@gmail.com>

* use timestamp column

Signed-off-by: tison <wander4096@gmail.com>

* fixup

Signed-off-by: tison <wander4096@gmail.com>

* lint

Signed-off-by: tison <wander4096@gmail.com>

* compile

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-12 13:09:34 +00:00
Yingwen
7c895e2605 perf: more benchmarks for memtables (#3491)
* chore: remove duplicate bench

* refactor: rename bench

* perf: add full scan bench for memtable

* feat: filter bench and add time series to bench group

* chore: comment

* refactor: rename

* style: fix clippy
2024-03-12 12:02:58 +00:00
Lei, HUANG
9afe327bca feat: improve prom write requests decode performance (#3478)
* feat: optimize decode performance

* fix: some cr comments
2024-03-12 12:00:38 +00:00
discord9
58bd065c6b feat(flow): plan def (#3490)
* feat: plan def

* chore: add license

* docs: remove TODO done

* chore: add derive Ord
2024-03-12 10:59:07 +00:00
Yingwen
9aa8f756ab fix: allow passing extra table options (#3484)
* fix: do not check options in parser

* test: fix tests

* test: fix sqlness

* test: add sqlness test

* chore: log options

* chore: must specify compaction type

* feat: validate option key

* feat: add option key validation back
2024-03-12 07:03:52 +00:00
discord9
7639c227ca feat(flow): accumulator for aggr func (#3396)
* feat: Accumulator trait

* feat: add `OrdValue` accum&use enum_dispatch

* test: more accum test

* feat: eval aggr funcs

* chore: refactor test&fmt clippy

* refactor: less verbose

* test: more tests

* refactor: better err handling&use OrdValue for Count

* refactor: ignore null&more tests for error handle

* refactor: OrdValue accum

* chore: extract null check

* refactor: def&use fn signature

* chore: use extra cond with match guard

* chore: per review
2024-03-12 02:09:27 +00:00
tison
1255c1fc9e feat: to_timezone function (#3470)
* feat: to_timezone function

Signed-off-by: tison <wander4096@gmail.com>

* impl Function for ToTimezoneFunction

Signed-off-by: tison <wander4096@gmail.com>

* add test

Signed-off-by: tison <wander4096@gmail.com>

* Add original authors

Co-authored-by: parkma99 <park-ma@hotmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>

* fixup

Signed-off-by: tison <wander4096@gmail.com>

* address comments

Signed-off-by: tison <wander4096@gmail.com>

* add issue link

Signed-off-by: tison <wander4096@gmail.com>

* code refactor

Signed-off-by: tison <wander4096@gmail.com>

* further tidy

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: parkma99 <park-ma@hotmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-03-12 01:46:19 +00:00
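To show the kind of conversion a to_timezone function performs, here is a standalone Rust sketch built on chrono and chrono-tz; it only illustrates the underlying idea and is not GreptimeDB's implementation.

```rust
use chrono::{TimeZone, Utc};
use chrono_tz::Tz;

/// Renders a UTC timestamp (in seconds) in the named IANA time zone.
fn to_timezone(unix_seconds: i64, tz_name: &str) -> Option<String> {
    let tz: Tz = tz_name.parse().ok()?;
    let utc = Utc.timestamp_opt(unix_seconds, 0).single()?;
    Some(utc.with_timezone(&tz).to_rfc3339())
}

fn main() {
    // Prints the instant rendered with a +08:00 offset.
    println!("{:?}", to_timezone(1_710_000_000, "Asia/Shanghai"));
}
```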
Yingwen
06dcd0f6ed fix: freeze data buffer in shard (#3468)
* feat: call freeze if the active data buffer in a shard is full

* chore: more metrics

* chore: print metrics

* chore: enlarge freeze threshold

* test: test freeze

* test: fix config test
2024-03-11 14:51:06 +00:00
Weny Xu
0a4444a43a feat(fuzz): validate columns (#3485) 2024-03-11 11:34:50 +00:00
Ruihang Xia
b7ac8d6aa8 ci: use another mirror for etcd image (#3486)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-11 10:40:19 +00:00
Weny Xu
e767f37241 fix: fix f64 has no sufficient precision during parsing (#3483) 2024-03-11 09:28:40 +00:00
JeremyHi
da098f5568 fix: make max-txn-ops limit valid (#3481) 2024-03-11 09:27:51 +00:00
shuiyisong
aa953dcc34 fix: impl RecordBatchStream method explicitly (#3482)
fix: impl RecordBatchStream method explicitly
2024-03-11 09:07:10 +00:00
crwen
aa125a50f9 refactor: make http api return non-200 status codes (#3473)
* refactor: make http api return non-200 status codes

* recover some code
2024-03-11 03:38:36 +00:00
Ruihang Xia
d8939eb891 feat: clamp function (#3465)
* basic impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add unit tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* a little type exercise

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-11 03:26:10 +00:00
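The semantics of a clamp function are easy to show in plain Rust; the sketch below works on a nullable f64 column and is only an illustration, not the engine's vectorized implementation.

```rust
/// Clamps every value of a nullable f64 column into [lo, hi]; NULLs pass through.
fn clamp_column(values: &[Option<f64>], lo: f64, hi: f64) -> Vec<Option<f64>> {
    values.iter().map(|v| v.map(|x| x.clamp(lo, hi))).collect()
}

fn main() {
    let col = vec![Some(-5.0), Some(0.5), None, Some(9.0)];
    assert_eq!(
        clamp_column(&col, 0.0, 1.0),
        vec![Some(0.0), Some(0.5), None, Some(1.0)]
    );
}
```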
shuiyisong
0bb949787c refactor: introduce new Output with OutputMeta (#3466)
* refactor: introduce new output struct

* chore: add helper function

* chore: update comment

* chore: update commit

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: rename according to cr

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-11 02:24:09 +00:00
WU Jingdi
8c37c3fc0f feat: support first_value/last_value in range query (#3448)
* feat: support `first_value/last_value` in range query

* chore: add sqlness test on `count`

* chore: add test
2024-03-11 01:30:39 +00:00
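Conceptually, first_value and last_value pick the first and last row of each time-sorted range window. A small sketch of that semantics, for illustration only (the real range engine operates on vectors):

```rust
/// first_value / last_value over rows already sorted by timestamp within one window.
fn first_value<T: Copy>(rows: &[(i64, T)]) -> Option<T> {
    rows.first().map(|(_, v)| *v)
}

fn last_value<T: Copy>(rows: &[(i64, T)]) -> Option<T> {
    rows.last().map(|(_, v)| *v)
}

fn main() {
    let window = [(1_000_i64, 1.0), (2_000, 2.5), (3_000, 4.0)];
    assert_eq!(first_value(&window), Some(1.0));
    assert_eq!(last_value(&window), Some(4.0));
}
```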
gcmutator
21ff3620be chore: remove repetitive words (#3469)
remove repetitive words

Signed-off-by: gcmutator <329964069@qq.com>
2024-03-09 04:18:47 +00:00
Eugene Tolbakov
aeca0d8e8a feat(influxdb): add db query param support for v2 write api (#3445)
* feat(influxdb): add db query param support for v2 write api

* fix(influxdb): update authorize logic to get catalog and schema from query string

* fix(influxdb): address CR suggestions

* fix(influxdb): use the correct import
2024-03-08 08:17:57 +00:00
Weny Xu
a309cd018a fix: fix incorrect COM_STMT_PREPARE reply (#3463)
* fix: fix incorrect `COM_STMT_PREPARE` reply

* chore: use column name instead of index
2024-03-08 07:31:20 +00:00
Yingwen
3ee53360ee perf: Reduce decode overhead during pruning keys in the memtable (#3415)
* feat: reuse value buf

* feat: skip values to decode

* feat: prune shard

chore: fix compiler errors

refactor: shard prune metrics

* fix: panic on DedupReader::try_new

* fix: prune after next

* chore: num parts metrics

* feat: metrics and logs

* chore: data build cost

* chore: more logs

* feat: cache skip result

* chore: todo

* fix: index out of bound

* test: test codec

* fix: invalid offsets

* fix: skip binary

* fix: offset buffer reuse

* chore: comment

* test: test memtable filter

* style: fix clippy

* chore: fix compiler error
2024-03-08 02:54:00 +00:00
JeremyHi
352bd7b6fd feat: max-txn-ops option (#3458)
* feat: max-txn-ops limit

* chore: by comment
2024-03-08 02:34:40 +00:00
Weny Xu
3f3ef2e7af refactor: separate the quote char and value (#3455)
refactor: use ident instead of string
2024-03-07 08:24:09 +00:00
Weny Xu
a218f12bd9 test: add fuzz test for create table (#3441)
* feat: add create table fuzz test

* chore: add ci cfg for fuzz tests

* refactor: remove redundant nightly config

* chore: run fuzz test in debug mode

* chore: use ubuntu-latest

* fix: close connection

* chore: add cache in fuzz test ci

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: refactor the fuzz test action
2024-03-07 06:51:19 +00:00
ZonaHe
c884c56151 feat: update dashboard to v0.4.8 (#3450)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2024-03-07 04:06:07 +00:00
Weny Xu
9ec288cab9 chore: specify binary name (#3449) 2024-03-07 03:56:24 +00:00
LFC
1f1491e429 feat: impl some "set"s to adapt to some client apps (#3443) 2024-03-06 13:15:48 +00:00
Weny Xu
c52bc613e0 chore: add bin opt to build cmd (#3440) 2024-03-06 08:24:55 +00:00
shuiyisong
a9d42f7b87 fix: add support for influxdb basic auth (#3437) 2024-03-06 03:56:25 +00:00
tison
86ce2d8713 build(deps): upgrade opendal to 0.45.1 (#3432)
* build(deps): upgrade opendal to 0.45.1

Signed-off-by: tison <wander4096@gmail.com>

* Update src/object-store/Cargo.toml

Co-authored-by: Weny Xu <wenymedia@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-03-06 03:08:59 +00:00
Yingwen
5d644c0b7f chore: bump version to v0.7.0 (#3433) 2024-03-05 12:07:37 +00:00
Ruihang Xia
020635063c feat: implement multi-dim partition rule (#3409)
* generate expr rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement show create for new partition rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement row spliter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: fix failed tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fix lint issues

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: ignore tests for deprecated partition rule

* chore: remove unused partition rule tests setup

* test(sqlness): add basic partition tests

* test(multi_dim): add basic find region test

* address CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: WenyXu <wenymedia@gmail.com>
2024-03-05 11:39:15 +00:00
dependabot[bot]
97cbfcfe23 build(deps): bump mio from 0.8.10 to 0.8.11 (#3434)
Bumps [mio](https://github.com/tokio-rs/mio) from 0.8.10 to 0.8.11.
- [Release notes](https://github.com/tokio-rs/mio/releases)
- [Changelog](https://github.com/tokio-rs/mio/blob/master/CHANGELOG.md)
- [Commits](https://github.com/tokio-rs/mio/compare/v0.8.10...v0.8.11)

---
updated-dependencies:
- dependency-name: mio
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-03-05 11:04:14 +00:00
Lei, HUANG
7183fa198c refactor: make MergeTreeMemtable the default choice (#3430)
* refactor: make MergeTreeMemtable the default choice

* refactor: reformat

* chore: add doc to config
2024-03-05 10:00:08 +00:00
Lei, HUANG
02b18fbca1 feat: decode prom requests to grpc (#3425)
* hack: inline decode

* move to servers

* fix: samples lost

* add bench

* remove useless functions

* wip

* feat: remove object pools

* fix: minor issues

* fix: remove useless dep

* chore: rebase main

* format

* finish

* fix: format

* feat: introduce request pool

* try to fix license issue

* fix: clippy

* resolve comments

* fix: typo

* remove useless comments
2024-03-05 09:47:32 +00:00
shuiyisong
7b1c3503d0 fix: complete interceptors for all frontend entry (#3428) 2024-03-05 09:38:47 +00:00
liyang
6fd2ff49d5 ci: refine windows output env (#3431) 2024-03-05 08:38:28 +00:00
WU Jingdi
53f2a5846c feat: support tracing rule sampler (#3405)
* feat: support tracing rule sampler

* chore: simplify code
2024-03-05 15:40:02 +08:00
Yingwen
49157868f9 feat: Correct server metrics and add more metrics for scan (#3426)
* feat: drop timer on stream terminated

* refactor: combine metrics into a histogram vec

* refactor: frontend grpc metrics

* feat: add metrics middleware layer to grpc server

* refactor: move http metrics layer to metrics mod

* feat: bucket for grpc/http elapsed

* feat: remove duplicate metrics

* style: fix clippy

* fix: incorrect bucket of promql series

* feat: more metrics for mito

* feat: convert cost

* test: fix metrics test
2024-03-04 10:15:10 +00:00
Ruihang Xia
ae2c18e1cf docs(rfcs): multi-dimension partition rule (#3350)
* docs(rfcs): multi-dimension partition rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change math block type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update tracking issue

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update discussion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-04 08:10:54 +00:00
dennis zhuang
e6819412c5 refactor: show tables and show databases (#3423)
* refactor: show tables and show databases

* chore: clean code
2024-03-04 06:15:17 +00:00
tison
2a675e0794 docs: update pull_request_template.md (#3421) 2024-03-03 09:51:44 +00:00
JeremyHi
0edf1bbacc feat: reduce a clone of string (#3422) 2024-03-03 08:09:17 +00:00
Eugene Tolbakov
8609977b52 feat: add verbose support for tql explain/analyze (#3390)
* feat: add verbose support for tql explain/analyze

* chore: apply clippy suggestions

* feat: add sqlness tests

* fix: adjust sqlness replace rules

* fix: address CR (move tql explain/analyze inside common folder)

* fix: address CR(improve comments to indicate that verbose is optional)
2024-03-02 11:18:22 +00:00
JeremyHi
2d975e4f22 feat: tableref cache (#3420)
* feat: tableref cache

* chore: minor refactor

* chore: avoid to string

* chore: change log level

* feat: add metrics for prometheus remote write decode
2024-03-02 07:37:31 +00:00
Kould
00cbbc97ae feat: support Create Table ... Like (#3372)
* feat: support `Create Table ... Like`

* fix: `check_permission` for `Create Table ... Like`

* style: renaming `name` -> `table_name` & `target` -> `source_name` and make `Create Table ... Like` testcase more complicated

* rebase

* avoid _ fn

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-03-02 06:34:13 +00:00
niebayes
7d30c2484b fix: mitigate memory spike during startup (#3418)
* fix: fix memory spike during startup

* fix: allocate a region write ctx for each wal entry
2024-03-01 07:46:05 +00:00
Lei, HUANG
376409b857 feat: employ sparse key encoding for shard lookup (#3410)
* feat: employ short key encoding for shard lookup

* fix: license

* chore: simplify code

* refactor: only enable sparse encoding to speed lookup on metric engine

* fix: names
2024-03-01 06:22:15 +00:00
Ning Sun
d4a54a085b feat: add configuration for tls watch option (#3395)
* feat: add configuration for tls watch option

* test: sleep longer to ensure async task run

* test: update config api integration test

* refactor: rename function
2024-03-01 03:49:54 +00:00
dennis zhuang
c1a370649e fix: show table names not complete from information_schema (#3417) 2024-03-01 02:51:46 +00:00
JeremyHi
3cad9d989d fix: partition region id (#3414) 2024-02-29 09:09:59 +00:00
JohnsonLee
a50025269f feat: Support automatic DNS lookup for kafka bootstrap servers (#3379)
* feat: Support automatic DNS lookup for kafka bootstrap servers

* Revert "feat: Support automatic DNS lookup for kafka bootstrap servers"

This reverts commit 5baed7b01d.

* feat: Support automatic DNS lookup for Kafka broker

* fix: resolve broker endpoint in client manager

* fix: apply clippy lints

* refactor: simplify the code with clippy hints

* refactor: move resolve_broker_endpoint to common/wal/src/lib.rs

* test: add mock test for resolver_broker_endpoint

* refactor: accept niebayes's advice

* refactor: rename EndpointIpNotFound to EndpointIPV4NotFound

* refactor: remove mock test and simplify the implementation

* docs: add comments about test_vallid_host_ipv6

* Apply suggestions from code review

Co-authored-by: niebayes <niebayes@gmail.com>

* move more common code

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
Co-authored-by: niebayes <niebayes@gmail.com>
2024-02-29 07:29:20 +00:00
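Resolving a broker endpoint comes down to a DNS lookup on a host:port pair, preferring IPv4 (as the EndpointIPV4NotFound error above suggests). The sketch below mirrors the resolve_broker_endpoint mentioned in the commit but is a simplified standard-library version, not the actual implementation.

```rust
use std::io;
use std::net::{SocketAddr, ToSocketAddrs};

/// Resolves a "host:port" broker endpoint to a concrete IPv4 socket address.
fn resolve_broker_endpoint(endpoint: &str) -> io::Result<SocketAddr> {
    endpoint
        .to_socket_addrs()?
        .find(|addr| addr.is_ipv4())
        .ok_or_else(|| io::Error::new(io::ErrorKind::NotFound, "no IPv4 address resolved"))
}

fn main() -> io::Result<()> {
    let addr = resolve_broker_endpoint("localhost:9092")?;
    println!("resolved to {addr}");
    Ok(())
}
```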
JeremyHi
a3533c4ea0 feat: zero copy on split rows (#3407) 2024-02-28 13:27:52 +00:00
Lei, HUANG
3413fc0781 refactor: move some costly methods in DataBuffer::read out of read lock (#3406)
* refactor: move some costly methods in DataBuffer::read out of read lock

* refactor: also replace ShardReader with ShardReaderBuilder
2024-02-28 12:22:44 +00:00
tison
dc205a2c5d feat: enable ArrowFlight compression (#3403)
* feat: enable ArrowFlight compression

Signed-off-by: tison <wander4096@gmail.com>

* turn on features

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-28 08:55:44 +00:00
Lei, HUANG
a0a8e8c587 fix: some read metrics (#3404)
* fix: some read metrics

* chore: fix some metrics

* fix
2024-02-28 08:47:49 +00:00
Zhenchi
c3c80b92c8 feat(index): measure memory usage in global instead of single-column and add metrics (#3383)
* feat(index): measure memory usage in global instead of single-column and add metrics

* feat: add leading zeros to streamline memory usage

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: remove println

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-28 06:49:24 +00:00
Weny Xu
a8cbec824c refactor: refactor TableRouteManager (#3392)
* feat: introduce TableRouteStorage

* refactor: remove get & batch_get in TableRouteManager

* refactor: move txn related fn to TableRouteStorage

* chore: apply suggestions from CR

* chore(codecov): ignore tests-integration dir
2024-02-28 06:18:09 +00:00
tison
33d894c1f0 build: do not retry for connrefused (#3402)
* build: do not retry for connrefused

Signed-off-by: tison <wander4096@gmail.com>

* simplify layout

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-28 06:15:23 +00:00
Lei, HUANG
7942b8fae9 chore: add metrics for memtable read path (#3397)
* chore: add metrics for read path

* chore: add more metrics
2024-02-28 03:37:19 +00:00
Yingwen
b97f957489 feat: Use a partition level map to look up pk index (#3400)
* feat: partition level map

* test: test shard and builder

* fix: do not use pk index from shard builder

* feat: add multi key test

* fix: freeze shard before finding pk in shards
2024-02-28 03:17:09 +00:00
tison
f3d69e9563 chore: retry fetch dashboard assets (#3394)
Signed-off-by: tison <wander4096@gmail.com>
2024-02-27 10:07:21 +00:00
dennis zhuang
4b36c285f1 feat: flush or compact table and region functions (#3363)
* feat: adds Requester to process table flush and compaction request

* feat: admin_fn macros for administration functions

* test: add query result

* feat: impl flush_region, flush_table, compact_region, and flush_region functions

* docs: add Arguments to admin_fn macro

* chore: apply suggestion

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: apply suggestion

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: group_requests_by_peer and adds log

* Update src/common/macro/src/admin_fn.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* feat: adds todo for spawn thread

* feat: rebase with main

---------

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-27 08:57:38 +00:00
discord9
dbb1ce1a9b feat(flow): impl for MapFilterProject (#3359)
* feat: mfp impls

* fix: after rebase

* test: temporal filter mfp

* refactor: more comments&test

* test: permute

* fix: check input len when eval

* refactor: err handle&docs: more explain graph

* docs: better flowchart map,filter,project

* refactor: visit_* fallible

* chore: better temp lint allow

* fix: permute partially

* chore: remove duplicated checks

* docs: more explain&tests for clarity

* refactor: use ensure! instead
2024-02-27 08:13:55 +00:00
Ruihang Xia
3544c9334c feat!: new partition grammar - parser part (#3347)
* parser part

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test in sql

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* comment out and ignore some logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update region migration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* temporary disable region migration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* allow dead code

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-27 07:20:16 +00:00
Lei, HUANG
492a00969d feat: enable zstd compression and encodings in merge tree data part (#3380)
* feat: enable zstd compression in merge tree data part to save memory

* feat: also enable customized column encoding in DataPartEncoder
2024-02-27 06:54:56 +00:00
Yingwen
206666bff6 feat: Implement partition eviction and only add value size to write buffer size (#3393)
* feat: track key bytes in dict

* chore: done allocating on finish

* feat: evict keys

* chore: do not add to write buffer

* chore: only count value bytes

* fix: reset key bytes

* feat: remove write buffer manager from shards

* feat: change dict size compute method

* chore: adjust dictionary size by os memory
2024-02-27 06:28:57 +00:00
Weny Xu
7453d9779d fix: throw errors instead of panic (#3391)
* fix: throw errors instead of panic

* chore: apply suggestions from CR
2024-02-27 03:46:12 +00:00
liyang
8e3e0fd528 ci: add builder result outputs in release action (#3381) 2024-02-27 03:43:16 +00:00
dimbtp
b1e290f959 fix: range fix in modulo function tests (#3389)
fix: range fix for modulo tests
2024-02-26 15:50:23 +00:00
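The tricky part of modulo tests is usually sign behavior: in Rust, `%` is a remainder that follows the dividend's sign, while rem_euclid is always non-negative. The quick check below makes that concrete; it is a general note, not the specific range that this fix adjusted.

```rust
fn main() {
    // `%` takes the sign of the dividend, while rem_euclid never goes negative.
    assert_eq!(-7 % 3, -1);
    assert_eq!((-7_i32).rem_euclid(3), 2);
    assert_eq!(7 % -3, 1);
}
```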
Ruihang Xia
d8dc93fccc feat(grafana): enable shared tooltip, add raft engine throughput (#3387)
feat: enable shared tooltip, add raft engine throughput

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-26 11:31:15 +00:00
Ning Sun
3887d207b6 feat: make tls certificates/keys reloadable (part 1) (#3335)
* feat: make tls certificates/keys reloadable (part 1)

* feat: add notify watcher for cert/key files

* test: add unit test for watcher

* fix: correct usage of watcher

* fix: skip watch when tls disabled
2024-02-26 09:37:54 +00:00
Ruihang Xia
e859f0e67d chore: skip reorder workspace tables in taplo (#3388)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-26 08:57:49 +00:00
Ruihang Xia
ce397ebcc6 feat: change how region id maps to region worker (#3384)
* feat: change how region id maps to region worker

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add overflow test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-26 08:42:29 +00:00
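A common way to map region ids onto a fixed pool of workers is a modulo over the id, with care for overflow when narrowing the integer type. The sketch below is generic and not the exact scheme this PR adopted.

```rust
/// Picks a worker index for a region id; the cast happens after the modulo so
/// it cannot overflow even for ids near u64::MAX.
fn worker_index(region_id: u64, num_workers: usize) -> usize {
    (region_id % num_workers as u64) as usize
}

fn main() {
    assert_eq!(worker_index(u64::MAX, 8), 7);
}
```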
Yingwen
26011ed0b6 fix: resets dict builder keys counter and avoid unnecessary pruning (#3386)
* fix: dict builder resets num_keys on finish

* feat: skip empty shard and builder

* feat: avoid pruning if possible

Implementations:
- Apply all filters on the partition column
- If no filter to prune, skip decoding keys
2024-02-26 08:24:46 +00:00
Lei, HUANG
8087822ab2 refactor: change the receivers of merge tree components (#3378)
* refactor: change the receivers of Shard::read/DataBuffer::read/DataParts::read to &self instead of &mut self

* refactor: remove allow(dead_code) in merge tree
2024-02-26 06:50:55 +00:00
Yingwen
e481f073f5 feat: Implement dedup for the new memtable and expose the config (#3377)
* fix: KeyValues num_fields() is incorrect

* chore: fix warnings

* feat: support dedup

* feat: allow using the new memtable

* feat: serde default for config

* fix: resets pk index after finishing a dict
2024-02-25 13:06:01 +00:00
Lei, HUANG
606309f49a fix: remove unused imports in memtable_util.rs (#3376) 2024-02-25 09:23:28 +00:00
Yingwen
8059b95e37 feat: Implement iter for the new memtable (#3373)
* chore: read shard builder

* chore: reuse pk weights

* chore: prune key

* chore: shard reader wip

* refactor: shard builder DataBatch

* feat: merge shard readers

* feat: return shard id in shard readers

* feat: impl partition reader

* chore: impl partition read

* feat: impl iter tree

* chore: save last yield pk id

* style: fix clippy

* refactor: rename ShardReaderImpl to ShardReader

* chore: address CR comment
2024-02-25 07:42:16 +00:00
Lei, HUANG
afe4633320 feat: merge tree dedup reader (#3375)
* feat: add dedup option to merge tree component

* feat: impl dedup reader for shard reader

* refactor: DedupReader::new to DedupReader::try_new

* refactor: remove DedupReader::current_key field

* fix: some cr comments

* fix: fmt

* fix: remove shard_id method from DedupSource
2024-02-24 13:50:49 +00:00
Yingwen
abbfd23d4b feat: Add freeze and fork method to the memtable (#3374)
* feat: add fork method to the memtable

* feat: allow mark immutable returns result

* feat: use fork to create the mutable memtable

* feat: remove memtable builder from freeze

* chore: warnings

* fix: inspect error

* feat: iter returns result

* chore: maintains memtable id in region

* chore: update comment

* fix: remove region status if failed to freeze a memtable

* chore: update comment

* chore: iter should not require sync

* chore: implement freeze and fork for the new memtable
2024-02-24 12:11:16 +00:00
Yingwen
1df64f294b refactor: Remove Item from merger's Node trait (#3371)
* refactor: data reader returns reference to data batch

* refactor: use range to create merger

* chore: Reference RecordBatch in DataBatch

* fix: top node not read if no next node

* refactor: move timestamp_array_to_i64_slice to data mod

* style: fix clippy

* chore: derive copy for DataBatch

* chore: address CR comments
2024-02-24 07:19:48 +00:00
LFC
a6564e72b4 fix: treat "0" and "1" as valid boolean values. (#3370)
* Treat "0" and "1" as valid boolean values.

* Update src/sql/src/statements.rs

Co-authored-by: tison <wander4096@gmail.com>

* Fix tests.

---------

Co-authored-by: tison <wander4096@gmail.com>
2024-02-23 14:34:27 +00:00
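A sketch of the accepted textual forms, extended with "0"/"1" as the fix describes; the helper is illustrative only and is not the parser in src/sql.

```rust
/// Parses a textual boolean, accepting "0"/"1" alongside "true"/"false".
fn parse_bool(s: &str) -> Option<bool> {
    match s.trim().to_ascii_lowercase().as_str() {
        "true" | "1" => Some(true),
        "false" | "0" => Some(false),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_bool("1"), Some(true));
    assert_eq!(parse_bool("0"), Some(false));
    assert_eq!(parse_bool("TRUE"), Some(true));
    assert_eq!(parse_bool("maybe"), None);
}
```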
Lei, HUANG
1f1d1b4f57 feat: distinguish between different read paths (#3369)
* feat: distinguish between different read paths

* fix: reformat code
2024-02-23 12:40:39 +00:00
Yingwen
b144836935 feat: Implement write and fork for the new memtable (#3357)
* feat: write to a shard or a shard builder

* feat: freeze and fork for partition and shards

* chore: shard builder

* chore: change dict reader to support random access

* test: test write shard

* test: test write

* test: test memtable

* feat: add new and write_row to DataParts

* refactor: partition freeze shards

* refactor: write_with_pk_id

* style: fix clippy

* chore: add methods to get pk weights

* chore: fix compiler errors
2024-02-23 07:20:55 +00:00
dependabot[bot]
93d9f48dd7 build(deps): bump libgit2-sys from 0.16.1+1.7.1 to 0.16.2+1.7.2 (#3367)
Bumps [libgit2-sys](https://github.com/rust-lang/git2-rs) from 0.16.1+1.7.1 to 0.16.2+1.7.2.
- [Changelog](https://github.com/rust-lang/git2-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/git2-rs/commits)

---
updated-dependencies:
- dependency-name: libgit2-sys
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-23 14:30:09 +08:00
Lei, HUANG
90e9b69035 feat: impl merge reader for DataParts (#3361)
* feat: impl merge reader for DataParts

* fix: fmt

* fix: sort rows with pk and ts according to sequence desc

* fix: remove pk weight as pk index are already replace by weights

* fix: format

* fix: some cr comments

* fix: some cr comments

* refactor: simplify trait's associated types

* fix: some cr comments
2024-02-23 06:07:55 +00:00
LFC
2035e7bf4c refactor: set the actual bound port in server handler (#3353)
* refactor: set the actual bound port so we can use port 0 in testing

* Update src/servers/src/server.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* fmt

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-02-23 02:49:11 +00:00
Ruihang Xia
7341f23019 feat: skip filling NULL for put and delete requests (#3364)
* feat: optimize for sparse data

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove old structures

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-22 14:30:43 +00:00
tison
41ee0cdd5a build(deps): Upgrade opensrv to 0.7.0 (#3362)
* build(deps): Upgrade opensrv to 0.7.0

Signed-off-by: tison <wander4096@gmail.com>

* workaround X is not X by casting

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-22 13:11:26 +00:00
Kould
578dd8f87a feat: add isnull function (#3360)
* code fmt

* feat: add isnull function

* feat: add isnull function
2024-02-22 12:41:25 +00:00
Weny Xu
1dc4fec662 refactor: allocate table ids in the procedure (#3293)
* refactor: refactor the create logical tables

* test(create_logical_tables): add tests for on_prepare

* test(create_logical_tables): add tests for on_create_metadata

* refactor: rename to create_logical_tables_metadata

* chore: fmt toml

* chore: apply suggestions from CR
2024-02-22 10:53:28 +00:00
Ruihang Xia
f26505b625 fix: typo in lint config (#3358)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-22 08:56:33 +00:00
Ruihang Xia
8289b0dec2 ci: align docs workflow jobs with develop.yml (#3356)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-22 07:01:15 +00:00
Yingwen
53105b99e7 test: fix list_files_and_parse_table_name path issue on windows (#3349)
* fix: always converts path to slash

* chore: print

* chore: normalize dir

* chore: compile

* chore: rm print
2024-02-22 06:16:41 +00:00
dennis zhuang
564fe3beca feat: impl migrate_region and procedure_state SQL function (#3325)
* fix: logical region can't find region routes

* feat: fetch partitions info in batch

* refactor: rename batch functions

* refactor: rename DdlTaskExecutor to ProcedureExecutor

* feat: impl migrate_region and query_procedure_state for ProcedureExecutor

* feat: adds SQL function procedure_state and finish migrate_region impl

* fix: constant vector

* feat: unit tests for migrate_region and procedure_state

* test: test region migration by SQL

* fix: compile error after rebasing

* fix: clippy warnings

* feat: ensure procedure_state and migrate_region can be only called under greptime catalog

* fix: license header
2024-02-22 02:37:11 +00:00
SteveLauC
e9a2b0a9ee chore: use workspace-wide lints (#3352)
* chore: use workspace-wide lints

* respond to review
2024-02-22 01:01:10 +00:00
discord9
860b1e9d9e feat(flow): impl ScalarExpr&Scalar Function (#3283)
* feat: impl for ScalarExpr

* feat: plain functions

* refactor: simpler trait bound&tests

* chore: remove unused imports

* chore: fmt

* refactor: early ret on first error

* refactor: remove redundant match arm

* chore: per review

* doc: `support` fn

* chore: per review more

* chore: more per review

* fix: extract_bound

* chore: per review

* refactor: reduce nest
2024-02-21 12:53:16 +00:00
Yingwen
7c88d721c2 Merge pull request #3348
* feat: define functions for partitions

* feat: write partitions

* feat: fork and freeze partition

* feat: create iter by partition

* style: fix clippy

* chore: typos

* feat: add scan method to builder

* feat: check whether the builder should freeze first
2024-02-21 20:50:34 +08:00
Lei, HUANG
90169c868d feat: merge tree data parts (#3346)
* feat: add iter method for DataPart

* chore: rename iter to reader

* chore: some doc

* fix: resolve some comments

* fix: remove metadata in DataPart
2024-02-21 11:37:29 +00:00
tison
4c07606da6 refactor: put together HTTP headers (#3337)
* refactor: put together HTTP headers

Signed-off-by: tison <wander4096@gmail.com>

* do refactor

Signed-off-by: tison <wander4096@gmail.com>

* drop dirty commit

Signed-off-by: tison <wander4096@gmail.com>

* reduce changeset

Signed-off-by: tison <wander4096@gmail.com>

* fixup compilations

Signed-off-by: tison <wander4096@gmail.com>

* tidy files

Signed-off-by: tison <wander4096@gmail.com>

* drop common-api

Signed-off-by: tison <wander4096@gmail.com>

* fmt

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-21 09:51:10 +00:00
tison
a7bf458a37 chore: remove unused deprecated table_dir_with_catalog_and_schema (#3341) 2024-02-21 08:46:36 +00:00
tison
fa08085119 ci: upgrade actions to node20-based version (#3345)
* ci: upgrade actions to node20-based version

Signed-off-by: tison <wander4096@gmail.com>

* distinguish artifact name

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-21 08:09:09 +00:00
Lei, HUANG
86a98c80f5 feat: replace pk index with pk_weight during freeze (#3343)
* feat: replace pk index with pk_weight during freeze

* chore: add parameter to control pk_index replacement

* fix: dedup pk weights also

* fix: generate pk array before dedup
2024-02-21 08:05:25 +00:00
tison
085a380019 build(deps): axum-test-helper has included patch-1 (#3333)
Signed-off-by: tison <wander4096@gmail.com>
2024-02-21 07:49:42 +00:00
tison
d9a96344ee ci: try fix log location (#3342)
Signed-off-by: tison <wander4096@gmail.com>
2024-02-21 07:01:51 +00:00
Weny Xu
41656c8635 refactor: allocate table id in the procedure (#3271)
* refactor: replace TableMetadataManager with TableNameManager

* refactor: allocate table id in the procedure

* refactor: refactor client logical of handling retries

* feat(test_util): add TestCreateTableExprBuilder

* feat(test_util): add MockDatanodeManager

* feat(test_util): add new_ddl_context

* feat(test_util): add build_raw_table_info_from_expr

* feat(test_util): add MockDatanodeManager::new

* feat(procedure): add downcast_output_ref to Status

* test(create_table): add tests for CreateTableProcedure on_prepare

* refactor(ddl): rename handle_operate_region_error to add_peer_context_if_need

* test(create_table): add tests for CreateTableProcedure on_datanode_create_regions

* test(create_table): add tests for CreateTableProcedure on_create_metadata

* refactor(meta): use CreateTableExprBuilder

* feat(create_table): ensure number of partitions is greater than 0

* refactor: rename to add_peer_context_if_needed

* feat: add context for panic

* refactor: simplify the should_retry

* refactor: use Option<&T> instead of &Option<T>

* refactor: move downcast_output_ref under cfg(test)

* chore: fmt toml
2024-02-21 04:38:46 +00:00
tison
cf08a3de6b chore: support configure GITHUB_PROXY_URL when fetch dashboard assets (#3340)
Signed-off-by: tison <wander4096@gmail.com>
2024-02-21 02:38:14 +00:00
Yingwen
f087a843bb feat: Implement KeyDictBuilder for the merge tree memtable (#3334)
* feat: dict builder

* feat: write and scan dict builder

* chore: address CR comments
2024-02-20 15:39:17 +00:00
Lei, HUANG
450dfe324d feat: data buffer and related structs (#3329)
* feat: data buffer and related structs

* fix: some cr comments

* chore: remove freeze_threshold in DataBuffer

* fix: use LazyMutableVectorBuilder instead of two vectors; add option to control dedup

* fix: dedup rows according to both pk weights and timestamps

* fix: assemble DataBatch on demand
2024-02-20 09:22:45 +00:00
tison
3dfe4a2e5a chore: check dirs before create RaftEngine store (#3327)
* chore: check dirs before create RaftEngine store

Signed-off-by: tison <wander4096@gmail.com>

* fix impl

Signed-off-by: tison <wander4096@gmail.com>

* improve naming

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-20 07:48:15 +00:00
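The pre-flight check amounts to validating the store directory before the engine is opened. A generic sketch with std::fs follows; it shows the idea only, not the exact validation added here.

```rust
use std::fs;
use std::io;
use std::path::Path;

/// Ensures the store directory exists and is a directory before opening the WAL.
fn ensure_store_dir(dir: &Path) -> io::Result<()> {
    if dir.exists() && !dir.is_dir() {
        return Err(io::Error::new(
            io::ErrorKind::AlreadyExists,
            format!("{} exists but is not a directory", dir.display()),
        ));
    }
    fs::create_dir_all(dir)
}
```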
LFC
eded08897d test: add data compatibility test (#3109)
* test: data files compatibility test

* rework compatibility test

* revert unneeded changes

* revert unneeded changes

* debug CI

* Update .github/workflows/develop.yml

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-20 07:44:04 +00:00
Ruihang Xia
b1f54d8a03 fix: disable ansi control char when stdout is redirected (#3332)
* fix: disable ansi control char when stdout is redirected

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* don't touch file logging layer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable ansi for two file layers

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2024-02-20 06:42:56 +00:00
shuiyisong
bf5e1905cd refactor: bring metrics to http output (#3247)
* refactor: bring metrics to http output

* chore: remove unwrap

* chore: make walk plan accumulate

* chore: change field name and comment

* chore: add metrics to http resp header

* chore: move PrometheusJsonResponse to a separate file and impl IntoResponse

* chore: put metrics in prometheus resp header too
2024-02-20 03:25:18 +00:00
Zhenchi
6628c41c36 feat(metric-engine): set index options for data region (#3330)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-20 02:38:35 +00:00
Yingwen
43fd87e051 feat: Defines structs in the merge tree memtable (#3326)
* chore: define mods

* feat: memtable struct

* feat: define structs inside the tree
2024-02-19 11:43:19 +00:00
Zhenchi
40f43de27d fix(index): encode string type to original data to enable fst regex to work (#3324)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-19 10:52:19 +00:00
Zhenchi
4810c91a64 refactor(index): move option segment_row_count from WriteOptions to IndexOptions (#3307)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-19 08:03:41 +00:00
JeremyHi
6668d6b042 fix: split write metadata request (#3311)
* feat: add txn_helper

* fix: split the create metadata requests to avoid exceeding the txn limit

* fix: add license header

* chore: some improve
2024-02-19 07:33:09 +00:00
JeremyHi
aa569f7d6b feat: batch get physical table routes (#3319)
* feat: batch get physical table routes

* chore: by comment
2024-02-19 06:51:34 +00:00
dennis zhuang
8b73067815 feat: impl partitions and region_peers information schema (#3278)
* feat: impl partitions table

* fix: typo

* feat: impl region_peers information schema

* chore: rename region_peers to greptime_region_peers

* chore: rename statuses to upper case

* fix: comments

* chore: update partition result

* chore: remove redundant checking

* refactor: replace 42 with constant

* feat: fetch region routes in batch
2024-02-19 06:47:14 +00:00
liyang
1851c20c13 ci: add build artifacts needs in notification (#3320) 2024-02-19 06:42:09 +00:00
tison
29f11d7b7e ci: upgrade actions to avoid node16 deprecation warning (#3317)
* ci: upgrade actions to avoid node16 deprecation warning

Signed-off-by: tison <wander4096@gmail.com>

* try delete size label action

Signed-off-by: tison <wander4096@gmail.com>

* one more action

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-18 15:20:26 +00:00
Ruihang Xia
72cd443ba3 feat: organize tracing on query path (#3310)
* feat: organize tracing on query path

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* wrap json conversion to TracingContext's methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unnecessary .trace()

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/dist_plan/merge_scan.rs

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-18 15:04:57 +00:00
dependabot[bot]
df6260d525 fix: bump libgit2-sys from 0.16.1+1.7.1 to 0.16.2+1.7.2 (#3316)
build(deps): bump libgit2-sys from 0.16.1+1.7.1 to 0.16.2+1.7.2

Bumps [libgit2-sys](https://github.com/rust-lang/git2-rs) from 0.16.1+1.7.1 to 0.16.2+1.7.2.
- [Changelog](https://github.com/rust-lang/git2-rs/blob/master/CHANGELOG.md)
- [Commits](https://github.com/rust-lang/git2-rs/commits)

---
updated-dependencies:
- dependency-name: libgit2-sys
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-02-18 13:25:00 +00:00
Ruihang Xia
94fd51c263 ci: run CI jobs in draft PR (#3314)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-02-18 13:14:57 +00:00
liyang
3bc0c4feda ci: add release result send to slack (#3312)
* feat: add notify release result to slack

* chore: change build image output result name

---------

Co-authored-by: tison <wander4096@gmail.com>
2024-02-18 13:13:04 +00:00
tison
2a26c01412 fix: commit_short sqlness test case (#3313)
* ci: debug sqlness jobs

Signed-off-by: tison <wander4096@gmail.com>

* fix: commit_short test case

Signed-off-by: tison <wander4096@gmail.com>

* fixup! fix: commit_short test case

Signed-off-by: tison <wander4096@gmail.com>

* fixup! fixup! fix: commit_short test case

Signed-off-by: tison <wander4096@gmail.com>

* revert uploading

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-18 11:51:24 +00:00
tison
4e04a4e48f build: support build without git (#3309)
* build: support build without git

Signed-off-by: tison <wander4096@gmail.com>

* chore

Signed-off-by: tison <wander4096@gmail.com>

* address comment

Signed-off-by: tison <wander4096@gmail.com>

* fix syntax

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-18 10:30:01 +00:00
tison
b889d57b32 build(deps): Upgrade raft-engine dependency cascading (#3305)
* build(deps): Upgrade raft-engine dependency

Signed-off-by: tison <wander4096@gmail.com>

* genlock and upgrade toolchain

Signed-off-by: tison <wander4096@gmail.com>

* revert toolchain upgrade

Signed-off-by: tison <wander4096@gmail.com>

* Revert "revert toolchain upgrade"

This reverts commit 1c6dd9d7ba.

* toolchain backward and correct dep attr

Signed-off-by: tison <wander4096@gmail.com>

* clippy

Signed-off-by: tison <wander4096@gmail.com>

* revert all

Signed-off-by: tison <wander4096@gmail.com>

* redo

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-02-18 02:31:59 +00:00
Zhenchi
f9ce2708d3 feat(mito): add options to ignore building index for specific column ids (#3295)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-16 08:50:41 +00:00
Zhenchi
34050ea8b5 fix(index): sanitize S3 upload buffer size (#3300)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-16 06:45:31 +00:00
liyang
31ace9dd5c fix: $TARGET_BIN not found when docker run the image (#3297)
* fix: TARGET_BIN path

* chore: return available versions

* refactor: Use ENV instead of ARG in ci dockerfile

* chore: add TARGET_BIN ENV pass to ENTRYPOINT

* chore: add TARGET_BIN ENV pass to ENTRYPOINT

* chore: update entrypoint

* chore: update entrypoint
2024-02-15 11:33:05 +00:00
Cancai Cai
2a971b0fff chore: update link to official website (#3299) 2024-02-13 13:32:46 +00:00
Ruihang Xia
2f98fa0d97 fix: correct the case sensitivity behavior for PromQL (#3296)
* fix: correct the case sensitivity behavior for PromQL

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove debug code

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* consolidate sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* drop table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-08 03:26:32 +00:00
Hudson C. Dalprá
6b4be3a1cc fix(util): join_path function should not trim leading / (#3280)
* fix(util): join_path function should not trim leading `/`

Signed-off-by: Hudson C. Dalpra <dalpra.hcd@gmail.com>

* fix(util): making required changes at join_path function

* fix(util): added unit tests to match function comments

---------

Signed-off-by: Hudson C. Dalpra <dalpra.hcd@gmail.com>
2024-02-07 10:05:04 +00:00
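The fixed behavior is that joining keeps a leading "/" on the base path and only normalizes the separator in between. A hypothetical helper illustrating that contract (not the util::join_path in the repo):

```rust
/// Joins a base path and a name, preserving any leading '/' of the base and
/// avoiding doubled separators.
fn join_path(base: &str, name: &str) -> String {
    format!(
        "{}/{}",
        base.trim_end_matches('/'),
        name.trim_start_matches('/')
    )
}

fn main() {
    assert_eq!(join_path("/data/greptime/", "region.parquet"), "/data/greptime/region.parquet");
    assert_eq!(join_path("/data/greptime", "/region.parquet"), "/data/greptime/region.parquet");
}
```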
Zhenchi
141ed51dcc feat(mito): adjust seg size of inverted index to finer granularity instead of row group level (#3289)
* feat(mito): adjust seg size of inverted index to finer granularity instead of row group level

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: wrong metric

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: more suitable name

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: BitVec instead

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-07 08:20:00 +00:00
dennis zhuang
e5ec65988b feat: administration functions (#3236)
* feat: adds database() function to return current db

* refactor: refactor meta src and client with new protos

* feat: impl migrate_region and query_procedure_state for procedure service/client

* fix: format

* temp commit

* feat: impl migrate_region SQL function

* chore: clean code for review

* fix: license header

* fix: toml format

* chore: update proto dependency

* chore: apply suggestion

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* chore: apply suggestion

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* chore: apply suggestion

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* chore: apply suggestion

Co-authored-by: fys <40801205+fengys1996@users.noreply.github.com>

* chore: print key when parsing procedure id fails

* chore: comment

* chore: comment for MigrateRegionFunction

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
Co-authored-by: fys <40801205+fengys1996@users.noreply.github.com>
2024-02-07 01:12:32 +00:00
Zhenchi
dbf62f3273 chore(index): add BiError to fulfil the requirement of returning two errors (#3291)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-02-06 16:03:03 +00:00
Ruihang Xia
e4cd294ac0 feat: put all filter exprs in a filter plan separately (#3288)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-06 09:32:26 +00:00
fys
74bfb09195 feat: support cache for batch_get in CachedMetaKvBackend (#3277)
* feat: support cache for batch_get in CachedMetaKvBackend.

* add doc of CachedMetaKvBackend

* fix: cr

* fix: correct some words

* fix cr
2024-02-06 03:15:03 +00:00
shuiyisong
4cbdf64d52 chore: start plugins during standalone startup & comply with current catalog while changing database (#3282)
* chore: start plugins in standalone

* chore: respect current catalog in use statement for mysql

* chore: reduce unnecessary conversion to string

* chore: reduce duplicate code
2024-02-06 02:41:37 +00:00
Weny Xu
96f32a166a chore: share cache across jobs (#3284) 2024-02-05 09:30:22 +00:00
Weny Xu
770da02810 fix: fix incorrect StatusCode parsing (#3281)
* fix: fix incorrect StatusCode parsing

* chore: apply suggestions from CR
2024-02-05 08:06:43 +00:00
discord9
c62c67cf18 feat: Basic Definitions for Expression&Functions for Dataflow (#3267)
* added expression&func

* fix: EvalError derive&imports

* chore: add header

* feat: variadic func

* chore: minor adjust

* feat: accum

* feat: use accum for eval func

* feat: monotonic min/max as accumulative

* feat: support min/max Date&DateTime

* chore: fix compile error&add test(WIP)

* test: sum, count, min, max

* feat: remove trait impl for EvalError

* chore: remove all impl retain only type definitions

* refactor: nest datatypes errors

* fix: remove `.build()`

* fix: not derive Clone

* docs: add comment for types

* feat: more func&remove `CurrentDatabase`
2024-02-05 07:49:34 +00:00
Ruihang Xia
51feec2579 feat: use simple filter to prune memtable (#3269)
* switch on clippy warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: use simple filter to prune memtable

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deadcode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refine util function

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-02-04 11:35:55 +00:00
LFC
902570abf6 ci: fix nightly build (#3276)
* ci: fix nightly build

* ci: fix nightly build
2024-02-03 03:20:47 +00:00
shuiyisong
6ab3a88042 fix: use fe_opts after setup_frontend_plugins in standalone (#3275)
* chore: modify standalone startup opts

* chore: move frontend and datanode options
2024-02-02 10:36:08 +00:00
discord9
e89f5dc908 feat: support fraction part in timestamp (#3272)
* feat: support fraction part in timestamp

* test: with timezone
2024-02-01 08:51:26 +00:00
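Accepting a fractional-seconds part is mostly a matter of the parse format; with chrono, the `%.f` specifier consumes an optional dot plus digits. The minimal example below is illustration only, not the project's parser.

```rust
use chrono::NaiveDateTime;

fn main() {
    // `%.f` consumes the leading dot and an arbitrary number of fractional digits.
    let ts = NaiveDateTime::parse_from_str(
        "2024-02-01 08:51:26.123456",
        "%Y-%m-%d %H:%M:%S%.f",
    )
    .expect("timestamp with fractional seconds should parse");
    println!("{ts}");
}
```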
LFC
e375060b73 refactor: add same SST files (#3270)
* Make adding the same SST file multiple times possible, instead of panicking there.

* Update src/mito2/src/sst/version.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-31 07:21:30 +00:00
LFC
50d16d6330 ci: build GreptimeDB binary for later use (#3244)
* ci: build GreptimeDB binaries for later use

* debug CI

* try larger runner host

* Revert "try larger runner host"

This reverts commit 03c18c0f51.

* fix: resolve PR comments

* revert some unrelated action yamls

* fix CI

* use artifact upload v4 for faster upload and download speed
2024-01-31 03:02:27 +00:00
tison
60e760b168 fix: cli export database default value (#3259)
Signed-off-by: tison <wander4096@gmail.com>
2024-01-30 09:56:05 +00:00
dennis zhuang
43ef0820c0 fix: SQL insertion and column default constraint aware of timezone (#3266)
* fix: missing timezone when parsing sql value to greptimedb value

* test: sql_value_to_value

* fix: column default constraint missing timezone

* test: column def default constraint  with timezone

* test: adds sqlness test for default constraint aware of timezone

* fix: typo

* chore: comment
2024-01-30 09:15:38 +00:00
Ruihang Xia
e0e635105e feat: initial configuration for grafana dashboard (#3263)
* feat: initial configuration for grafana dashboard

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* lift up dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update readme

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add paths to CI config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak config details

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add OpenDAL traffic panel

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove sync count

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update grafana/README.md

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-30 09:11:50 +00:00
discord9
0a9361a63c feat: basic types for Dataflow Framework (#3186)
* feat: basic types for Dataflow Framework

* docs: license header

* chore: add name in todo, remove deprecated code

* chore: typo

* test: linear&repr unit test

* refactor: avoid mod.rs

* feat: Relation Type

* feat: unmat funcs

* feat: simple temporal filter(sliding window)

* refactor: impl Snafu for EvalError

* feat: cast as type

* feat: temporal filter

* feat: time index in RelationType

* refactor: move EvalError to expr

* refactor: error handling for func

* chore: fmt&comment

* make EvalError pub again

* refactor: move ScalarExpr to scalar.rs

* refactor: remove mod.rs for relation

* chore: slim down PR size

* chore: license header

* chore: per review

* chore: more fix per review

* chore: even more fix per review

* chore: fmt

* chore: fmt

* feat: impl From/Into ProtoRow instead

* chore: use cast not cast_with_opt&`Key` struct

* chore: new_unchecked

* feat: `Key::subset_of` method

* chore: toml in order
2024-01-30 07:48:22 +00:00
Weny Xu
ddbd0abe3b fix(Copy From): fix incorrect type casts (#3264)
* refactor: refactor RecordBatchStreamTypeAdapter

* fix(Copy From): fix incorrect type casts

* fix: unit tests

* chore: add comment
2024-01-30 07:16:36 +00:00
Ruihang Xia
a079955d38 chore: adjust storage engine related metrics (#3261)
* chore: adjust metrics to metric engine and mito engine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust more mito bucket

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-30 06:43:03 +00:00
JeremyHi
e5a2b0463a feat: Only allow inserts and deletes operations to be executed in parallel (#3257)
* feat: Only allow inserts and deletes operations to be executed in parallel.

* feat: add comment
2024-01-29 11:27:06 +00:00
Weny Xu
691b649f67 chore: switch to free machine (#3256)
* chore: switch to free machine

* chore: switch sqlness runner to 4core
2024-01-29 08:35:12 +00:00
Ruihang Xia
9a28a1eb5e fix: decouple columns in projection and prune (#3253)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-29 08:29:21 +00:00
LFC
fc25b7c4ff build: make binary name a Dockerfile "ARG" (#3254)
* build: make binary name a Dockerfile "ARG"

* Update Dockerfile
2024-01-29 08:00:40 +00:00
Ning Sun
d43c638515 ci: merge doc label actor and checker tasks (#3252) 2024-01-29 07:09:11 +00:00
shuiyisong
91c8c62d6f chore: adjust MySQL connection log (#3251)
* chore: adjust mysql connection log level

* chore: adjust import
2024-01-29 06:53:50 +00:00
JeremyHi
3201aea360 feat: create tables in batch on prom write (#3246)
* feat: create tables in batch on prom write

* feat: add logical table ids to log

* fix: missing table ids in response
2024-01-26 12:47:24 +00:00
Ruihang Xia
7da8f22cda fix: IntermediateWriter closes underlying writer twice (#3248)
* fix: IntermediateWriter closes underlying writer twice

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* close writer manually on error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-26 10:03:50 +00:00
Ning Sun
14b233c486 test: align chrono and some forked test deps to upstream (#3245)
* test: update chrono and its tests

* chore: switch some deps to upstream version

* test: update timestamp range in sqlness tests
2024-01-26 07:39:51 +00:00
discord9
d5648c18c1 docs: RFC of Dataflow Framework (#3185)
* docs: RFC of Dataflow Framework

* docs: middle layer&metadata store

* chore: fix typo

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* docs: add figure

* chore: use mermaid instead

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-26 07:13:28 +00:00
Ruihang Xia
b0c3be35fb feat: don't map semantic type in metric engine (#3243)
* feat: don't map semantic type in metric engine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicate set_null

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-26 03:50:05 +00:00
Ruihang Xia
5617b284c5 feat: return request outdated error on handling alter (#3239)
* feat: return request outdated error on handling alter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix tonic code mapping

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy, add comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix deadloop

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update UT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* address CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: Update log message

* Update src/common/meta/src/ddl/alter_table.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* fix compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-01-26 03:37:46 +00:00
Weny Xu
f99b08796d feat: add insert/select generator & translator (#3240)
* feat: add insert into expr generator & translator

* feat: add select expr generator & translator

* chore: apply suggestions from CR

* fix: fix unit tests
2024-01-26 02:59:17 +00:00
JeremyHi
1fab7ab75a feat: batch create ddl (#3194)
* feat: batch ddl to region request

* feat: return table ids

chore: by comment

chore: remove wal_options

chore: create logical tables lock key

feat: get metadata in procedure

* chore: by comment
2024-01-26 02:43:57 +00:00
Yingwen
3fa070a0cc fix: init parquet reader metrics twice (#3242) 2024-01-26 01:54:51 +00:00
Weny Xu
8bade8f8e4 fix: fix create table ddl return incorrect table id (#3232)
* fix: fix create table ddl return incorrect table id

* refactor: refactor param of Status::done_with_output
2024-01-25 13:58:43 +00:00
Wei
6c2f0c9f53 feat: read metadata from write cache (#3224)
* feat: read meta from write cache

* test: add case

* chore: cr comment

* chore: clippy

* chore: code style

* feat: put metadata to sst cache
2024-01-25 11:39:41 +00:00
LFC
2d57bf0d2a ci: adding DOCKER_BUILD_ROOT docker arg for dev build (#3241)
Update Dockerfile
2024-01-25 09:05:27 +00:00
Weny Xu
673a4bd4ef feat: add pg create alter table expr translator (#3206)
* feat: add pg create table expr translator

* feat: add pg alter table expr translator

* refactor: refactor MappedGenerator

* chore: apply suggestions from CR
2024-01-25 08:00:42 +00:00
LFC
fca44098dc ci: make git's "safe.directory" accept all (#3234)
* Update Dockerfile

* Update Dockerfile

* Update Dockerfile
2024-01-25 07:32:03 +00:00
Ning Sun
1bc4f25de2 feat: http sql api return schema on empty resultset (#3237)
* feat: return schema on empty resultset

* refactor: make schema a required field in http output

* test: update integration test and add schema output
2024-01-25 06:44:28 +00:00
Ning Sun
814924f0b6 fix: security update for shlex and h2 (#3227) 2024-01-24 13:31:34 +00:00
ZonaHe
b0a8046179 feat: update dashboard to v0.4.7 (#3229) 2024-01-24 08:50:21 +00:00
dennis zhuang
7323e9b36f feat: make Range Query’s default align behavior timezone-aware (#3219)
* feat: align Range Query to 1970-01-01 00:00:00 in the session timezone by default

* test: test with +23:00 timezone
2024-01-24 08:17:57 +00:00
Weny Xu
f82ddc9491 fix: fix MockInstance rebuild issue (#3218)
* fix: fix MockInstance rebuild issue

* chore: apply suggestions from CR
2024-01-24 07:52:47 +00:00
Ning Sun
1711ad4631 feat: add Arrow IPC output format for http rest api (#3177)
* feat: add arrow format output for sql api

* refactor: remove unwraps

* test: add test for arrow format

* chore: update cargo toml format

* fix: resolve lint warnings

* fix: ensure outputs size is one
2024-01-24 06:10:05 +00:00
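For context on the Arrow IPC output above: encoding a result batch in the Arrow IPC stream format can be sketched with arrow-rs as below. The column name and values are made up, and this is only an illustration of the format, not the server's actual handler.

```rust
use std::sync::Arc;

use arrow::array::{ArrayRef, Int64Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::error::ArrowError;
use arrow::ipc::writer::StreamWriter;
use arrow::record_batch::RecordBatch;

/// Encode one illustrative record batch as Arrow IPC stream bytes.
fn encode_ipc() -> Result<Vec<u8>, ArrowError> {
    // A single-column batch standing in for a query result (hypothetical data).
    let schema = Arc::new(Schema::new(vec![Field::new("val", DataType::Int64, false)]));
    let column: ArrayRef = Arc::new(Int64Array::from(vec![1_i64, 2, 3]));
    let batch = RecordBatch::try_new(schema.clone(), vec![column])?;

    // Write the batch in Arrow IPC stream format into an in-memory buffer.
    let mut writer = StreamWriter::try_new(Vec::new(), &schema)?;
    writer.write(&batch)?;
    writer.finish()?;
    writer.into_inner()
}
```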
LFC
f81e37f508 refactor: make http server built flexibly (#3225)
* refactor: make http server built flexibly

* Apply suggestions from code review

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* fix: resolve PR comments

* Fix CI.

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2024-01-24 03:45:08 +00:00
Weny Xu
d75cf86467 fix: only register region keeper while creating physical table (#3223)
fix: only register region keeper while creating physical table
2024-01-23 09:42:32 +00:00
Weny Xu
26535f577d feat: enable concurrent write (#3214)
* feat: enable concurrent write

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2024-01-23 09:20:12 +00:00
Ruihang Xia
8485c9af33 feat: read column and region info from state cache (#3222)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-23 09:10:27 +00:00
Weny Xu
19413eb345 refactor!: rename initialize_region_in_background to init_regions_in_background (#3216)
refactor!: change initialize_region_in_background to init_regions_in_background
2024-01-23 04:33:48 +00:00
Weny Xu
007b63dd9d fix: fix default value cannot accept negative number (#3217)
* fix: fix default value cannot accept negative number

* chore: apply suggestions from CR
2024-01-23 03:33:13 +00:00
Weny Xu
364754afa2 feat: add create alter table expr translator (#3203)
* feat: add create table expr translator

* feat: add alter table expr translator

* refactor: expose mod

* refactor: expr generator

* chore: ignore typos check for lorem_words

* feat: add string map helper functions

* chore: remove unit tests
2024-01-23 02:53:42 +00:00
Ruihang Xia
31787f4bfd feat!: switch prom remote write to metric engine (#3198)
* feat: switch prom remote write to metric engine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* fix compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* read physical table name from url

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove physical table from header

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix merge error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add with_metric_engine option to config remote write behavior

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* check parameter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add specific config param

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* default with_metric_engine to true

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update UT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2024-01-22 14:48:28 +00:00
dennis zhuang
6a12c27e78 feat: make query be aware of timezone setting (#3175)
* feat: make TypeConversionRule aware of the query context timezone setting

* chore: don't optimize explain command

* feat: parse string into timestamp with timezone

* fix: compile error

* chore: check the scalar value type in predicate

* chore: remove mut for engine context

* chore: return none if the scalar value is utf8 in time range predicate

* fix: some fixme

* feat: make Date and DateTime parsing from string values timezone-aware

* chore: tweak

* test: add datetime from_str test with timezone

* feat: construct function context from query context

* test: add timezone test for to_unixtime and date_format function

* fix: typo

* chore: apply suggestion

* test: adds string with timezone

* chore: apply CR suggestion

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* chore: apply suggestion

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2024-01-22 14:14:03 +00:00
shuiyisong
2bf4b08a6b chore: change default factor to compute memory size (#3211)
* chore: change default factor to compute memory size

* chore: update doc

* chore: update comment in example config

* chore: extract factor to const and update comments

* chore: update comment by cr suggestion

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2024-01-22 09:03:29 +00:00
Ruihang Xia
8cc7129397 fix: remove __name__ matcher from processed matcher list (#3213)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-22 08:50:28 +00:00
Ruihang Xia
278e4c8c30 feat: lazy initialize vector builder on write (#3210)
* feat: lazy initialize vector builder on write

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* avoid using ConstantVector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify expression

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/metric-engine/src/engine/create.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2024-01-22 07:00:04 +00:00
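The lazy initialization described in this commit defers buffer allocation until the first value arrives. A generic sketch of the pattern follows; the type and field names are illustrative, not the mito2 code.

```rust
/// A column builder whose backing buffer is only allocated on the first write,
/// so columns that never receive data cost nothing.
struct LazyBuilder {
    capacity: usize,
    values: Option<Vec<i64>>,
}

impl LazyBuilder {
    fn new(capacity: usize) -> Self {
        Self { capacity, values: None }
    }

    fn push(&mut self, v: i64) {
        // Allocate lazily on the first push only.
        let capacity = self.capacity;
        self.values
            .get_or_insert_with(|| Vec::with_capacity(capacity))
            .push(v);
    }

    fn len(&self) -> usize {
        self.values.as_ref().map_or(0, Vec::len)
    }
}
```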
Weny Xu
de13de1454 feat: introduce information schema provider cache (#3208)
fix: introduce information schema provider cache
2024-01-22 06:41:39 +00:00
Lei, HUANG
3834ea7422 feat: copy database from (#3164)
* wip: impl COPY DATABASE FROM parser

* wip: impl copy database from

* wip: add some ut

* wip: add continue_on_error option

* test: add sqlness cases for copy database

* fix: trailing newline

* fix: typo

* fix: some cr comments

* chore: resolve conflicts

* fix: some cr comments
2024-01-22 06:33:54 +00:00
Weny Xu
966875ee11 chore: bump opendal to v0.44.2 (#3209) 2024-01-21 11:47:18 +00:00
Wei
e5a8831fa0 refactor: read parquet metadata (#3199)
* feat: MetadataLoader

* refactor code

* chore: clippy

* chore: cr comment

* chore: add TODO

* chore: cr comment

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: clippy

---------

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-21 07:21:29 +00:00
Weny Xu
4278c858f3 feat: make procedure able to return output (#3201)
* feat: make procedure able to return output

* refactor: change Output to Any
2024-01-21 06:56:45 +00:00
Weny Xu
986f3bb07d refactor: reduce number of parquet metadata reads and enable reader buffer (#3197)
refactor: reduce the number of parquet metadata reads and enable the read buffer
2024-01-19 12:26:38 +00:00
Weny Xu
440cd00ad0 feat(tests-fuzz): add CreateTableExprGenerator & AlterTableExprGenerator (#3182)
* feat(tests-fuzz): add CreateTableExprGenerator

* refactor: move Column to root of ir mod

* feat: add AlterTableExprGenerator

* feat: add Serialize and Deserialize derive

* chore: refactor the AlterExprGenerator
2024-01-19 09:39:28 +00:00
dennis zhuang
5e89472b2e feat: adds parse options for SQL parser (#3193)
* feat: adds parse options for parser and timezone to scan request

* chore: remove timezone in ScanRequest

* feat: remove timezone from parse options and add type checking to partition columns

* fix: comment

* chore: apply suggestions

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: format

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-19 09:16:36 +00:00
Yiran
632edd05e5 docs: update SDK links in README.md (#3156)
* docs: update SDK links in README.md

* chore: typo
2024-01-19 08:54:43 +00:00
Zhenchi
2e4c48ae7a fix(index): S3 EntityTooSmall error (#3192)
* fix(index): S3 `EntityTooSmall` error

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: config api

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-19 02:57:07 +00:00
Ruihang Xia
cde5a36f5e feat: precise filter for mito parquet reader (#3178)
* impl SimpleFilterEvaluator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* time index and field filter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* finish parquet filter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove empty Batch

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix fmt

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update metric

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use projected schema from batch

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unnecessary error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-18 06:59:48 +00:00
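A precise, row-level filter of the kind this commit adds can be sketched as a small evaluator that compares one column against a literal. The enum and function below are illustrative only, not the actual `SimpleFilterEvaluator`.

```rust
/// Comparison operators supported by the sketch.
enum CmpOp {
    Eq,
    Gt,
    Lt,
}

/// Returns, for each value in `column`, whether it satisfies `op` against `literal`.
fn evaluate(column: &[i64], op: &CmpOp, literal: i64) -> Vec<bool> {
    column
        .iter()
        .map(|v| match op {
            CmpOp::Eq => *v == literal,
            CmpOp::Gt => *v > literal,
            CmpOp::Lt => *v < literal,
        })
        .collect()
}
```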
niebayes
63205907fb refactor: introduce common-wal to aggregate wal stuff (#3171)
* refactor: aggregate wal configs

* refactor: move wal options to common-wal

* chore: slim Cargo.toml

* fix: add missing crates

* fix: format

* chore: update comments

* chore: add testing feature gate for test_util

* fix: apply suggestions from code review

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* fix: apply suggestions from code review

* fix: compiling

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2024-01-18 03:49:37 +00:00
Wei
3d7d2fdb4a feat: auto config cache size according to memory size (#3165)
* feat: auto config cache and buffer size according to mem size

* feat: utils

* refactor: add util function to common config

* refactor: check cgroups

* refactor: code

* fix: test

* fix: test

* chore: cr comment

Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove default comment

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: Dennis Zhuang <killme2008@gmail.com>
2024-01-17 14:35:35 +00:00
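The auto-configuration above derives cache budgets from detected system memory. A hedged sketch of that idea; the 1/16 factor and the clamp bounds are invented for illustration and are not the project's defaults.

```rust
/// Derive a cache size from total system memory using a fixed factor,
/// clamped to a sane range. Factor and bounds are illustrative only.
fn auto_cache_size(total_memory_bytes: u64) -> u64 {
    const FACTOR: u64 = 16; // use 1/16 of system memory
    const MIN: u64 = 32 << 20; // 32 MiB floor
    const MAX: u64 = 4 << 30; // 4 GiB ceiling
    (total_memory_bytes / FACTOR).clamp(MIN, MAX)
}
```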
LFC
3cfd60e139 refactor: expose region edit in mito engine (#3179)
* refactor: expose region edit in mito engine

* feat: add a method for editing region directly

* fix: resolve PR comments

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-17 14:25:08 +00:00
shuiyisong
a29b9f71be chore: carry metrics in flight metadata from datanode to frontend (#3113)
* chore: carry metrics in flight metadata from datanode to frontend

* chore: fix typo

* fix: ignore metric flight message on client

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: add cr comment

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: add cr comment

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: update proto

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-17 11:38:03 +00:00
shuiyisong
ae160c2def fix: change back GREPTIME_DB_HEADER_NAME header key name (#3184)
fix: change back dbname header key
2024-01-17 09:30:32 +00:00
Ruihang Xia
fbd0197794 refactor: remove TableEngine (#3181)
* refactor: remove TableEngine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/table/src/table_reference.rs

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2024-01-17 04:18:37 +00:00
dennis zhuang
204b9433b8 feat: adds date_format function (#3167)
* feat: adds date_format function

* fix: compile error

* chore: use system timezone for FunctionContext and EvalContext

* test: as_formatted_string

* test: sqlness test

* chore: rename function
2024-01-17 03:24:40 +00:00
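The `date_format` function formats a timestamp with a strftime-style pattern. A minimal chrono-based sketch, assuming seconds-precision input; the pattern and epoch value are examples only, not the function's actual defaults.

```rust
use chrono::{TimeZone, Utc};

/// Format a Unix timestamp (seconds) with a strftime-style pattern.
fn date_format(unix_seconds: i64, pattern: &str) -> Option<String> {
    Utc.timestamp_opt(unix_seconds, 0)
        .single()
        .map(|dt| dt.format(pattern).to_string())
}

fn main() {
    // Prints Some("2024-01-17 00:00:00") for this example input.
    println!("{:?}", date_format(1705449600, "%Y-%m-%d %H:%M:%S"));
}
```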
niebayes
d020a3db23 chore: expose promql test to distributed instance (#3176) 2024-01-17 02:44:55 +00:00
JeremyHi
c6c4ea5e64 feat: tables stream with CatalogManager (#3180)
* feat: add tables for CatalogManager

* feat: replace table with tables

* Update src/catalog/src/information_schema/columns.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Update src/catalog/src/information_schema/columns.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Update src/catalog/src/information_schema/tables.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Update src/catalog/src/information_schema/tables.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* feat: tables for MemoryCatalogManager

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-01-17 02:31:15 +00:00
Weny Xu
7a1b856dfb feat: add tests-fuzz crate (#3173) 2024-01-16 09:02:09 +00:00
JeremyHi
c2edaffa5c feat: let tables API return a stream (#3170) 2024-01-15 12:36:39 +00:00
tison
189df91882 docs: Update README.md (#3168)
* docs: Update README.md

Complying with ASF policy we should refer to Apache projects in their full form in the first and most prominent usage.

* Update README.md

* Update README.md
2024-01-15 10:54:53 +00:00
tison
3ef86aac97 docs: add tracking issue for inverted-index RFC (#3169) 2024-01-15 10:54:35 +00:00
Wei
07de65d2ac test: engine with write cache (#3163)
* feat: write cache test for engine

* chore: unused

* chore: comment

* refactor: super to crate

* chore: cr comment

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: clippy

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-15 10:02:53 +00:00
Ning Sun
1294d6f6e1 feat: upgrade pgwire to 0.19 (#3157)
* feat: upgrade pgwire to 0.19

* fix: update pgwire to 0.19.1
2024-01-15 09:14:08 +00:00
Zhenchi
6f07d69155 feat(mito): enable inverted index (#3158)
* feat(mito): enable inverted index

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix typos

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix typos

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* incidentally resolved the incorrect filtering issue within the Metric Engine

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Update src/mito2/src/access_layer.rs

* Update src/mito2/src/test_util/scheduler_util.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: format -> join_dir

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: move intermediate_manager from arg of write_and_upload_sst to field of WriteCache

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: add IndexerBuilder

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix clippy

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-15 09:08:07 +00:00
WU Jingdi
816d94892c feat: support HTTP&gRPC&pg set timezone (#3125)
* feat: support HTTP&gRPC&pg set timezone

* chore: fix code advice

* chore: fix code advice
2024-01-15 06:29:31 +00:00
LFC
93f28c2a37 refactor: make grpc service able to be added dynamically (#3160) 2024-01-15 04:33:27 +00:00
Eugene Tolbakov
ca4d690424 feat: add modulo function (#3147)
* feat: add modulo function

* fix: address CR feedback
2024-01-13 00:24:25 +00:00
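The modulo function reduces to a remainder over two numeric arguments. A sketch that guards against division by zero; this is not the registered UDF itself, just the arithmetic it wraps.

```rust
/// Remainder of `a % b`, returning None when `b` is zero instead of panicking.
fn modulo(a: i64, b: i64) -> Option<i64> {
    a.checked_rem(b)
}

fn main() {
    assert_eq!(modulo(10, 3), Some(1));
    assert_eq!(modulo(10, 0), None);
}
```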
Weny Xu
75975adcb6 fix: fix tests failed on windows (#3155)
* fix: fix tests failed on windows

* feat: add comments

* Update src/object-store/src/util.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-12 09:12:11 +00:00
Ruihang Xia
527e523a38 fix: handle non-identical time index and field column in PromQL set operation (#3145)
* handle different field columns

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix and/unless on different time index

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-12 06:27:03 +00:00
Weny Xu
aad2afd3f2 chore: bump version to 0.6.0 (#3154) 2024-01-12 06:25:14 +00:00
Weny Xu
bf88b3b4a0 fix: fix store all wal options (#3149)
* fix: fix store all wal options

* fix: incorrect updating DatanodeTable value
2024-01-12 04:48:14 +00:00
Weny Xu
bf96ce3049 fix: print detailed error (#3146) 2024-01-12 04:02:32 +00:00
Weny Xu
430ffe0e28 fix(kafka): overwrite the EntryId with Offset while consuming records (#3148)
* fix(kafka): overwrite the EntryId with Offset while consuming the KafkaRecords

* fix: temporary workaround for incorrect entry id
2024-01-12 03:46:17 +00:00
Zhenchi
c1190bae7b feat(mito): support write cache for index file (#3144)
* feat(mito): support write cache for index file

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: merge main

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-12 02:40:56 +00:00
Ruihang Xia
0882da4d01 feat: support PromQL operations over the same metric (#3124)
* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update ut cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deadcode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-11 23:07:17 +00:00
Wei
8ec1e42754 feat: read data from write cache (#3128)
* feat: read from write cache

* chore: add read ranges test

* fix: use get instead of contains_key

* chore: clippy

* chore: cr comment

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: with_label_values

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-11 12:06:28 +00:00
Ruihang Xia
b00b49284e feat: manage kafka cluster in sqlness runner (#3143)
* feat: manage kafka cluster in sqlness runner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* pull up clippy config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: niebayes <niebayes@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: niebayes <niebayes@gmail.com>
2024-01-11 09:47:19 +00:00
Ruihang Xia
09b3c7029b feat: handle drop request for metric table (#3136)
* handle drop request

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust procedure manager

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add create table sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* insert/query metric table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* address CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/common/meta/src/kv_backend.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reuse region option for metadata region

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak variable name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2024-01-11 09:38:43 +00:00
niebayes
f5798e2833 fix: remove incorrect wal comments in config file (#3142)
fix: kafka config comments
2024-01-11 09:34:24 +00:00
Zhenchi
fd8fb641fd feat(parquet): introduce inverted index applier to reader (#3130)
* feat(parquet): introduce inverted index applier to reader

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: purger removes index file

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: add TODO for escape route

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: add TODO for escape route

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Update src/mito2/src/access_layer.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* Update src/mito2/src/sst/parquet/reader.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* feat: min-max index to prune row groups filtered by inverted index

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: file_meta.inverted_index_available -> file_meta.available_indexes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: add TODO for leveraging WriteCache

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: misset available indexes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add index file size

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: use smallvec to reduce heap allocation

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: add index size to disk usage

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2024-01-11 08:04:59 +00:00
Weny Xu
312e8e824e fix: save code in debug_assert! (#3137)
fix: save code in debug_assert!
2024-01-11 06:07:08 +00:00
Wei
29a7f301df feat: write and upload sst (#3106)
* feat: write and upload sst file

* refactor: unit test

* cr comment

* chore: typos

* chore: cr comment

* chore: conflict

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: fmt

* chore: style

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-11 02:34:16 +00:00
LFC
51a3fbc7bf refactor: change how frontend grpc services are orchestrated (#3134) 2024-01-11 02:26:44 +00:00
Lanqing Yang
d521bc9dc5 chore: impl KvBackend for MetaPeerClient (#3076) 2024-01-10 14:16:03 +00:00
Weny Xu
7fad4e8356 fix: incorrect parsing broker_endpoints env variable (#3135) 2024-01-10 13:59:49 +00:00
Ning Sun
b6033f62cd refactor: implement version as built-in function and use fixed mysql version (#3133)
* refactor:  implement version as built-in function

* test: add sqlness test for version()
2024-01-10 11:04:18 +00:00
dennis zhuang
fd3f23ea15 feat: adds runtime_metrics (#3127)
* feat: adds runtime_metrics

* fix: comment

* feat: refactor metrics table

* chore: ensure build_info and runtime_metrics are only available in the greptime catalog

* feat: adds timestamp column
2024-01-10 10:51:30 +00:00
niebayes
1b0e39a7f2 chore: stop exposing num_partitions (#3132) 2024-01-10 10:45:18 +00:00
Weny Xu
3ab370265a feat: expose the region migration replay_timeout argument (#3129)
* feat: expose region migration args

* fix: fix ci
2024-01-10 09:47:59 +00:00
Weny Xu
ec8266b969 refactor: refactor the locks in the procedure (#3126)
* feat: add lock key

* refactor: procedure lock keys

* chore: apply suggestions from CR
2024-01-10 09:46:39 +00:00
Zhenchi
490312bf57 fix: unstable time record test (#3131)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-10 09:41:52 +00:00
Ning Sun
1fc168bf6a feat: update our cross schema check to cross catalog (#3123) 2024-01-09 09:38:48 +00:00
Zhenchi
db98484796 feat(inverted_index): introduce SstIndexCreator (#3107)
* feat(inverted_index): introduce SstIndexCreator

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: tiny polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: distinguish intermediate store and index store

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: move comment as doc comment

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: column id as index name

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-09 09:24:16 +00:00
Ruihang Xia
7d0d2163d2 fix: expose unsupported datatype error on mysql protocol (#3121)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-09 09:13:53 +00:00
Ruihang Xia
c4582c05cc chore: change the default doc checkbox to no need (#3122)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-09 17:12:54 +08:00
niebayes
a0a31c8acc chore(remote_wal): remove topic alias (#3120)
chore: remove topic alias
2024-01-09 07:35:02 +00:00
tison
0db1861452 chore(python): Print Python interpreter version (#3118)
* chore(pyo3_backend): Print bundled Python interpreter version

Signed-off-by: tison <wander4096@gmail.com>

* print RustPython interpreter version on init

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-01-09 07:04:23 +00:00
Wei
225ae953d1 feat: add parquet metadata to cache (#3097)
* feat: parquet metadata to sst meta cache

* chore: clippy

* refactor: move code to access_layer

* chore: clone()
2024-01-09 07:00:42 +00:00
Lei, HUANG
2c1b1cecc8 chore: add bound check for raft-engine logstore (#3073)
* chore: add bound check for raft-engine logstore

* feat: add bound check to append_batch API

* chore: check entry id during replay

* chore: resolve conflicts

* feat: add allow_stale_entries option to forcibly obsolete wal entries

* chore: resolve some comments
2024-01-09 06:42:46 +00:00
Lei, HUANG
62db28b465 feat: add options to enable log recycle and periodical fsync (#3114)
* feat: add options to enable log recycle and periodical fsync

* fix: resolve review comments

* fix: conflicts
2024-01-09 06:41:23 +00:00
fys
6e860bc0fd feat: support grpc for otlp trace and metrics (#3105)
* feat: add grpc support for otlp trace and metrics

* cr: add some comment

* fix: ut

* fix: cr
2024-01-09 05:01:48 +00:00
Yingwen
8bd4a36136 feat(mito): Init the write cache in datanode (#3100)
* feat: add builder to build cache manager

* refactor: make MitoEngine::new async

* refactor: refactor object store creation

* refactor: add helper fn to attach layers

* feat: fn to build fs store

* feat: add write cache to engine

* feat: config write cache

* style: fix clippy

* test: fix test

* feat: add warning

* chore: add experimental prefix to configs

* test: fix config test

* test: test weighted size

* feat: add switch to enable write cache

* fix: update cache stats by using get

* style: use then
2024-01-09 04:40:22 +00:00
Ruihang Xia
af0c4c068a feat: support PromQL function vector (#3036)
* produce vector plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* work with OR

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply review sugg

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move common const strings to common_query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comment for GREPTIME_COUNT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-09 03:44:00 +00:00
dennis zhuang
26cbcb8b3a docs: update issue template (#3119) 2024-01-09 02:45:55 +00:00
niebayes
122b47210e chore: bump version to 0.5.1 (#3116) 2024-01-08 11:32:56 +00:00
tison
316d843482 feat: support CSV format in sql HTTP API (#3062)
* chore: fix typo

Signed-off-by: tison <wander4096@gmail.com>

* add csv format

Signed-off-by: tison <wander4096@gmail.com>

* flatten response

Signed-off-by: tison <wander4096@gmail.com>

* more flatten response

Signed-off-by: tison <wander4096@gmail.com>

* add CSV format

Signed-off-by: tison <wander4096@gmail.com>

* format InfluxdbV1Response

Signed-off-by: tison <wander4096@gmail.com>

* format ErrorResponse

Signed-off-by: tison <wander4096@gmail.com>

* propagate ErrorResponse to InfluxdbV1Response

Signed-off-by: tison <wander4096@gmail.com>

* format GreptimedbV1Response

Signed-off-by: tison <wander4096@gmail.com>

* format CsvResponse

Signed-off-by: tison <wander4096@gmail.com>

* impl IntoResponse for QueryResponse

Signed-off-by: tison <wander4096@gmail.com>

* promql

Signed-off-by: tison <wander4096@gmail.com>

* sql

Signed-off-by: tison <wander4096@gmail.com>

* compile

Signed-off-by: tison <wander4096@gmail.com>

* fixup aide

Signed-off-by: tison <wander4096@gmail.com>

* clear debt

Signed-off-by: tison <wander4096@gmail.com>

* fixup UT test_recordbatches_conversion

Signed-off-by: tison <wander4096@gmail.com>

* fixup IT cases

Signed-off-by: tison <wander4096@gmail.com>

* fixup more IT cases

Signed-off-by: tison <wander4096@gmail.com>

* fixup test-integration cases

Signed-off-by: tison <wander4096@gmail.com>

* update comment

Signed-off-by: tison <wander4096@gmail.com>

* fixup deserialize and most query < 1ms

Signed-off-by: tison <wander4096@gmail.com>

* fixup auth tests

Signed-off-by: tison <wander4096@gmail.com>

* fixup tests

Signed-off-by: tison <wander4096@gmail.com>

* fixup and align X-GreptimeDB headers

Signed-off-by: tison <wander4096@gmail.com>

* fixup compile

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-01-08 10:54:27 +00:00
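Returning CSV from the SQL HTTP API is mostly quoting and joining fields. A minimal sketch of that encoding step with RFC 4180-style escaping; none of these names come from the actual HTTP layer.

```rust
/// Quote a field if it contains a comma, quote, or newline; double embedded quotes.
fn escape_csv(field: &str) -> String {
    if field.contains(',') || field.contains('"') || field.contains('\n') {
        format!("\"{}\"", field.replace('"', "\"\""))
    } else {
        field.to_string()
    }
}

/// Render rows of string fields as CSV lines.
fn to_csv(rows: &[Vec<String>]) -> String {
    rows.iter()
        .map(|row| row.iter().map(|f| escape_csv(f)).collect::<Vec<_>>().join(","))
        .collect::<Vec<_>>()
        .join("\n")
}
```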
niebayes
8c58d3f85b test(remote_wal): add unit tests for kafka remote wal (#2993)
* test: add unit tests

* feat: introduce kafka runtime backed by testcontainers

* test: add test for kafka runtime

* fix: format

* chore: make kafka image ready to be used

* feat: add entry builder

* tmp

* test: add unit tests for client manager

* test: add some unit tests for kafka log store

* chore: resolve some todos

* chore: resolve some todos

* test: add unit tests for kafka log store

* chore: add deprecate develop branch warning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tmp: ready to move unit tests to an indie dir

* test: update unit tests for client manager

* test: add unit tests for meta srv remote wal

* fix: license

* fix: test

* refactor: kafka image

* doc: add doc example for kafka image

* chore: migrate kafka image to an indie PR

* fix: CR

* fix: CR

* fix: test

* fix: CR

* fix: update Cargo.toml

* fix: CR

* feat: skip test if no endpoints env

* fix: format

* test: rewrite parallel test with barrier

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-01-08 10:48:11 +00:00
LFC
fcacb100a2 chore: expose some codes to let other projects use them (#3115) 2024-01-08 06:32:01 +00:00
Weny Xu
58ada1dfef fix: check env before running kafka test (#3110)
* fix: check env before running kafka test

* Apply suggestions from code review

Co-authored-by: niebayes <niebayes@gmail.com>

---------

Co-authored-by: niebayes <niebayes@gmail.com>
2024-01-08 06:30:43 +00:00
Weny Xu
f78c467a86 chore: bump opendal to 0.44.1 (#3111) 2024-01-08 03:55:58 +00:00
niebayes
78303639db feat(remote_wal): split an entry if it's too large (#3092)
* feat: split an entry if it's too large

* chore: rewrite check records

* test: add some unit tests for record

* chore: rewrite entry splitting

* chore: add unit tests for build records

* chore: add more unit tests for record

* chore: rewrite encdec of record

* revert: ignored test

* fix: set limit for max_batch_size

* fix: clippy

* chore: remove heavy logging

* fix: CR

* fix: properly terminate

* fix: CR

* fix: compiling

* fix: sqlness

* fix: CR

* fix: license

* fix: license
2024-01-05 12:41:43 +00:00
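Splitting an oversized WAL entry into record-sized chunks can be sketched with `chunks`. The part bookkeeping below is illustrative; the real record format carries more metadata.

```rust
/// One piece of a split entry: the chunk index, total count, and payload.
struct RecordPart {
    index: usize,
    total: usize,
    data: Vec<u8>,
}

/// Split `entry` into parts no larger than `max_size` bytes (`max_size` must be > 0).
fn split_entry(entry: &[u8], max_size: usize) -> Vec<RecordPart> {
    let chunks: Vec<&[u8]> = entry.chunks(max_size).collect();
    let total = chunks.len();
    chunks
        .into_iter()
        .enumerate()
        .map(|(index, data)| RecordPart { index, total, data: data.to_vec() })
        .collect()
}
```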
JeremyHi
bd1a5dc265 feat: metric engine support alter (#3098)
* feat: metric engine support alter

* chore: by comment

* feat: get physical table route for frontend
2024-01-05 09:46:39 +00:00
Weny Xu
e0a43f37d7 chore: bump opendal to 0.44 (#3058)
* chore: bump opendal to 0.44

* fix: fix test_object_store_cache_policy

* Revert "fix: fix test_object_store_cache_policy"

This reverts commit 46c37c343f66114e0f6ee7a0a3b9ee2b79c810af.

* fix: fix test_object_store_cache_policy

* fix: fix test_file_backend_with_lru_cache

* chore: apply suggestions from CR

* fix(mito): fix mito2 cache

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2024-01-05 09:05:41 +00:00
zyy17
a89840f5f9 refactor(metrics): add 'greptime_' prefix for every metric (#3093)
* refactor(metrics): add 'greptimedb_' prefix for every metric

* chore: use 'greptime_' as prefix

* chore: add some prefix for new metrics

* chore: fix format error
2024-01-05 08:12:23 +00:00
dennis zhuang
c2db970687 feat: pushdown filters for some information_schema tables (#3091)
* feat: pushdown scan request to information_schema tables stream

* feat: supports filter pushdown for columns

* feat: supports filter pushdown for some information_schema tables

* fix: typo

* fix: predicate evaluate

* fix: typo

* test: predicates

* fix: comment

* fix: pub mod

* docs: improve comments

* fix: cr comments and supports like predicate

* chore: typo

* fix: cargo toml format

* chore: apply suggestion
2024-01-05 07:18:22 +00:00
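The filter pushdown above evaluates predicates while the information_schema rows are produced, instead of materializing the whole table first. A generic sketch with a hypothetical row type:

```rust
/// A hypothetical information_schema row: (table_schema, table_name).
type Row = (String, String);

/// Keep only rows whose schema equals `schema_filter`, evaluated while streaming
/// rather than after building the full table.
fn scan_with_pushdown<'a>(
    rows: impl Iterator<Item = Row> + 'a,
    schema_filter: &'a str,
) -> impl Iterator<Item = Row> + 'a {
    rows.filter(move |(schema, _)| schema == schema_filter)
}
```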
LFC
e0525dbfeb chore: expose some codes to let other projects use them (#3102) 2024-01-05 06:54:01 +00:00
Weny Xu
cdc9021160 feat(metric): implement role and region_disk_usage (#3095)
* feat(metric): implement `role` and `region_disk_usage`

* Update src/datanode/src/region_server.rs

* Update src/datanode/src/heartbeat.rs

---------

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2024-01-05 06:53:52 +00:00
dennis zhuang
702ea32538 docs: update the description of greptimedb project (#3099)
* docs: update the info of greptimedb project

* chore: move up SQL/PromQL
2024-01-05 03:06:02 +00:00
Weny Xu
342faa4e07 test: add tests for lease keeper with logical table (#3096) 2024-01-05 02:29:48 +00:00
tison
44ba131987 fix: improve redact sql regexp (#3080)
Signed-off-by: tison <wander4096@gmail.com>
2024-01-04 14:53:20 +00:00
Yingwen
96b6235f25 feat(mito): Add WriteCache struct and write SSTs to write cache (#2999)
* docs: remove todo

* feat: add upload cache

* feat: add cache to sst write path

* feat: add storage to part

* feat: add dir to part

* feat: revert storage name

* feat: flush use upload part writer

* feat: use upload part writer in compaction task

* refactor: upload part writer builds parquet writer

* chore: suppress warnings

* refactor: rename UploadCache to WriteCache

* refactor: move source to write_all()

* chore: typos

* chore: remove output mod

* feat: changes upload to async method

* docs: update cache

* chore: fix compiler errors

* docs: remove comment

* chore: simplify upload part

* refactor: remove option from cache manager param to access layer

* feat: remove cache home from file cache

* feat: write cache holds file cache

* feat: add recover and pub some methods

* feat: remove usages of UploadPartWriter

* refactor: move sst_file_path to sst mod

* refactor: use write cache in access layer

* refactor: remove upload

* style: fix clippy

* refactor: pub write cache method/structs
2024-01-04 10:53:43 +00:00
Weny Xu
f1a4750576 feat(tests-integration): add more region migration integration tests (#3094) 2024-01-04 08:18:46 +00:00
Zhenchi
d973cf81f0 feat(inverted_index): implement apply for SstIndexApplier (#3088)
* feat(inverted_index): implement apply for SstIndexApplier

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: rename metrics

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-04 07:33:03 +00:00
Weny Xu
284a496f54 feat: add logs for upgrading candidate region and updating metadata (#3077)
* feat: add logs for upgrading candidate region

* feat: add logs for update metadata

* chore: apply suggestions from CR
2024-01-04 06:57:07 +00:00
WU Jingdi
4d250ed054 fix: Optimize export metric behavior (#3047)
* fix: optimize export metric behavior

* chore: fix ci

* chore: update config format

* chore: fix format
2024-01-04 06:40:50 +00:00
LFC
ec43b9183d feat: table route for metric engine (#3053)
* feat: table route for metric engine

* feat: register logical regions

* fix: open logical region (#96)

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2024-01-04 06:30:17 +00:00
ZonaHe
b025bed45c feat: update dashboard to v0.4.6 (#3089)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2024-01-04 02:56:41 +00:00
Weny Xu
21694c2a1d feat: abort region migration if leader region peer is unexpected (#3086) 2024-01-03 11:46:51 +00:00
ClSlaid
5c66ce6e88 chore: remove unnecessary result wrappings (#3084)
patch: remove unnecessary result wrappings

Signed-off-by: 蔡略 <cailue@bupt.edu.cn>
2024-01-03 10:20:33 +00:00
Weny Xu
b2b752337b fix: fix non-physical error msg (#3087) 2024-01-03 09:40:03 +00:00
Weny Xu
aa22f9c94a refactor: allow procedure to acquire shared lock (#3061)
* feat: implement `KeyRwLock`

* refactor: use KeyRwLock instead of LockMap

* refactor: use StringKey instead of String

* chore: remove redundant code

* refactor: clean up stale KeyRwLock locks before granting a new lock

* feat: clean stale locks manually

* feat: sort lock keys in lexicographic order

* feat: ensure the ref count before dropping the rwlock

* feat: add more tests for rwlock

* feat: drop the key guards first

* feat: drops the key guards in the reverse order

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2024-01-03 08:05:45 +00:00
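The `KeyRwLock` mentioned in this commit hands out one reference-counted RwLock per lock key, so procedures touching different keys do not contend and readers of the same key can share it. A simplified std-only sketch (cleanup of stale entries is omitted, and names are illustrative):

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex, RwLock};

/// Hands out a shared RwLock per string key; readers on the same key can
/// proceed concurrently while a writer takes the key exclusively.
#[derive(Default)]
struct KeyRwLock {
    locks: Mutex<HashMap<String, Arc<RwLock<()>>>>,
}

impl KeyRwLock {
    fn get(&self, key: &str) -> Arc<RwLock<()>> {
        let mut locks = self.locks.lock().unwrap();
        locks.entry(key.to_string()).or_default().clone()
    }
}

fn main() {
    let registry = KeyRwLock::default();
    let lock = registry.get("catalog.schema.table");
    let _shared = lock.read().unwrap(); // a shared (read) lock on this key
}
```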
Weny Xu
611a8aa2fe feat(tests-integration): add a naive region migration integration test (#3078)
* fix: fix heartbeat handler ignore upgrade candidate instruction

* fix: fix handler did not inject wal options

* feat: expose `RegionMigrationProcedureTask`

* feat(tests-integration): add a naive region migration test

* chore: apply suggestions from CR

* feat: add test if the target region has migrated

* chore: apply suggestions from CR
2024-01-03 07:12:59 +00:00
Zhenchi
e4c71843e6 feat(inverted_index): get memory usage of appliers (#3081)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-03 06:56:56 +00:00
Zhenchi
e1ad7af10c feat(puffin): finish return written bytes (#3082)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-03 06:55:09 +00:00
Zhenchi
b9302e4f0d feat(inverted_index): Add applier builder to convert Expr to Predicates (Part 2) (#3068)
* feat(inverted_index.integration): Add applier builder to convert Expr to Predicates (Part 1)

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat(inverted_index.integration): Add applier builder to convert Expr to Predicates (Part 2)

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* test: add comparison unit tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* test: add eq_list unit tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* test: add in_list unit tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* test: add and unit tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* test: strip tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-01-03 05:14:40 +00:00
Yingwen
2e686fe053 feat(mito): Implement file cache (#3022)
* feat: recover cache

* feat: moka features

* test: tests for file cache

* chore: suppress warning

* fix: parse_index_key considers suffix

* feat: update cache

* feat: expose cache file path

* feat: use cache_path in test
2024-01-03 02:05:06 +00:00
Weny Xu
128d3717fa test(tests-integration): add a naive test with kafka wal (#3071)
* chore(tests-integration): add setup tests with kafka wal to README.md

* feat(tests-integration): add meta wal config

* fix(tests-integration): fix sign of both_instances_cases_with_kafka_wal

* chore(tests-integration): set num_topic to 3 for tests

* test(tests-integration): add a naive test with kafka wal

* chore: apply suggestions from CR
2024-01-02 09:05:20 +00:00
Weny Xu
2b181e91e0 refactor: unify the injection of WAL option (#3066)
* feat: add prepare_wal_option

* refactor: use integer hashmap

* feat: unify the injection of WAL option

* fix: fix procedure_flow_upgrade_candidate_with_retry

* chore: apply suggestions from CR
2024-01-02 07:40:02 +00:00
Weny Xu
d87ab06b28 feat: add kafka wal integration test utils (#3069)
* feat(tests-integration): add wal_config

* feat: add kafka wal integration test utils
2024-01-02 07:38:43 +00:00
Weny Xu
5653389063 feat!: correct the kafka config option (#3065)
* feat: correct the kafka config option

* refactor: rewrite the verbose comments
2024-01-02 07:31:37 +00:00
dimbtp
c4d7b0d91d feat: add some tables for information_schema (#3060)
* feat: add information_schema.optimizer_trace

* feat: add information_schema.parameters

* feat: add information_schema.profiling

* feat: add information_schema.referential_constraints

* feat: add information_schema.routines

* feat: add information_schema.schema_privileges

* feat: add information_schema.table_privileges

* feat: add information_schema.triggers

* fix: update sql test result

* feat: add information_schema.global_status

* feat: add information_schema.session_status

* fix: update sql test result

* fix: add TODO for some tables

* Update src/catalog/src/information_schema/memory_table/tables.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-01-02 04:10:59 +00:00
dimbtp
f735f739e5 feat: add information_schema.key_column_usage (#3057)
* feat: add information_schema.key_column_usage

* fix: follow #3057 review comments

* fix: add sql test for `key_column_usage` table

* fix: fix spell typo

* fix: resolve conflict in sql test result
2023-12-31 12:29:06 +00:00
dimbtp
6070e88077 feat: add information_schema.files (#3054)
* feat: add information_schema.files

* fix: update information_schema.result

* fix: change `EXTRA` field type to string
2023-12-31 02:08:16 +00:00
niebayes
9db168875c fix(remote_wal): some known issues (#3052)
* fix: some known issues

* fix: CR

* fix: CR

* chore: replace Mutex with RwLock
2023-12-30 15:28:10 +00:00
AntiTopQuark
4460af800f feat(TableRouteValue): add panic notes and type checks (#3031)
* refactor(TableRouteValue): add panic notes and type checks

* chore: add deprecate develop branch warning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add error defines and checks

* Update README.md

* update code format and fix tests

* update name of error

* delete unused note

* fix unsafe .expect() for region_route()

* update error name

* update unwrap

* update code format

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-30 13:02:26 +00:00
Zhenchi
69a53130c2 feat(inverted_index): Add applier builder to convert Expr to Predicates (Part 1) (#3034)
* feat(inverted_index.integration): Add applier builder to convert Expr to Predicates (Part 1)

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: add docs

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: typos

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Update src/mito2/src/sst/index/applier/builder.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: remove unwrap

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: error source

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-12-30 07:32:32 +00:00
Ning Sun
1c94d4c506 ci: fix duplicated doc issue (#3056) 2023-12-30 13:36:14 +08:00
Ning Sun
41e51d4ab3 chore: attempt to add doc issue in label task (#3021)
* chore: attempt to add doc issue in label task

* ci: check pr body for doc issue creation
2023-12-29 20:17:34 +08:00
dennis zhuang
11ae85b1cd feat: adds information_schema.schemata (#3051)
* feat: improve information_schema.columns

* feat: adds information_schema.schemata

* fix: instance test

* fix: comment
2023-12-29 09:22:31 +00:00
LFC
7551432cff refactor: merge standalone and metasrv table metadata allocators (#3035)
* refactor: merge standalone and metasrv table metadata allocators

* Update src/common/meta/src/ddl/table_meta.rs

Co-authored-by: niebayes <niebayes@gmail.com>

* Update src/common/meta/src/ddl/table_meta.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

---------

Co-authored-by: niebayes <niebayes@gmail.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2023-12-29 08:50:59 +00:00
Weny Xu
e16f093282 test(remote_wal): add sqlness with kafka wal (#3027)
* feat(sqlness): add kafka wal config

* chore: add sqlness with kafka wal ci config

* fix: fix config

* chore: apply suggestions from CR

* fix: add metasrv config to sqlness with kafka

* fix: replay memtable should start from flushed_entry_id + 1

* fix: should set append flag when opening the file

* feat: start wal allocator in standalone meta mode

* feat: append a noop record after kafka topic initialization

* test: ignore tests temporally

* test: change sqlness kafka wal config
2023-12-29 08:17:22 +00:00
Weny Xu
301ffc1d91 feat(remote_wal): append a noop record after kafka topic initialization (#3040)
* feat: append a noop record after kafka topic initialization

* chore: apply suggestions from CR

* feat: ignore the noop record during the read
2023-12-29 07:46:48 +00:00
Weny Xu
d22072f68b feat: expose region migration http endpoint (#3032)
* feat: add region migration endpoint

* feat: implement naive peer registry

* chore: apply suggestions from CR

* chore: rename `ContextFactoryImpl` to `DefaultContextFactory`

* chore: rename unregister to deregister

* refactor: use lease-based alive datanode checking
2023-12-29 06:57:00 +00:00
Weny Xu
b526d159c3 fix: replay memtable should start from flushed_entry_id + 1 (#3038)
* fix: replay memtable should start from flushed_entry_id + 1

* chore: apply suggestions from CR
2023-12-28 16:12:07 +00:00
ZonaHe
7152407428 feat: update dashboard to v0.4.5 (#3033)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-12-28 11:51:43 +00:00
Ruihang Xia
b58296de22 feat: Implement OR for PromQL (#3024)
* with anti-join

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl UnionDistinctOn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* unify schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add UTs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/promql/src/planner.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-12-28 06:56:17 +00:00
Yingwen
1d80a0f2d6 chore: Update CI badge in README.md (#3028)
chore: Update README.md

Fix CI badge
2023-12-28 05:59:27 +00:00
Ruihang Xia
286b9af661 chore: change all reference from develop to main (#3026)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-28 04:11:00 +00:00
dennis zhuang
af13eeaad3 feat: adds character_sets, collations and events etc. (#3017)
feat: adds character_sets, collations and events etc. to information_schema
2023-12-28 04:01:42 +00:00
Weny Xu
485a91f49a feat: implement handle upgrade region instruction (#3013)
* feat: implement task tracker

* feat: implement handle upgrade region instruction

* refactor: remove redundant code

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor: refactor wait_for_replay_millis to wait_for_replay_timeout

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2023-12-28 02:08:47 +00:00
Ruihang Xia
bd0eed7af9 chore: do not send message for xlarge PR (#3020)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-27 13:22:19 +00:00
zyy17
b8b1e98399 refactor: use string type instead of Option type for '--store-key-prefix' (#3018)
* refactor: use string type instead of Option type for '--store-key-prefix'

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: refine for code review comments

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-12-27 11:26:30 +00:00
Weny Xu
abeb32e042 feat: implement region migration manager (#3014)
* feat: implement region migration manager

* Update src/meta-srv/src/procedure/region_migration/manager.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* chore: apply suggestions from CR

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-12-27 10:50:10 +00:00
Weny Xu
840e94630d feat: implement param parsing of SubmitRegionMigrationTaskHandler (#3015)
* feat: implement param parsing of `SubmitMigrationTaskHandler`

* chore: apply suggestions from CR

* refactor: change `SubmitRegionMigrationTaskParams` to `SubmitRegionMigrationTaskRequest`
2023-12-27 10:08:54 +00:00
Wei
43e3a77263 fix: decimal128 ScalarValue to Value (#3019)
fix: decimal128 scalarvalue to value
2023-12-27 10:03:37 +00:00
WU Jingdi
d1ee1ba56a feat: support set timezone in db (#2992)
* feat: support set timezone in db

* chore: fix ci

* chore: fix code advice

* fix: rename `time_zone` to `timezone`
2023-12-27 09:19:39 +00:00
Ning Sun
feec4e289d feat: upgrade pgwire to 0.18 for corrected statement caching (#3010) 2023-12-27 03:02:25 +00:00
Ruihang Xia
718447c542 docs: RFC of enclosing column id (#2983)
* docs: RFC of enclosing column id

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update docs/rfcs/2023-12-22-enclose-column-id.md

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* accomplish the first point

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-12-27 02:54:53 +00:00
LFC
eadde72973 chore: "fix: revert unfinished route table change" (#3009)
Revert "fix: revert unfinished route table change (#3008)"

This reverts commit 8ce8a8f3c7.
2023-12-27 02:40:59 +00:00
Ning Sun
7c5c75568d chore: try to fix size labeller (#3012) 2023-12-26 21:36:44 +08:00
Ruihang Xia
1c9bf2e2a7 fix: change CI target repo to the origin one (#3011)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-26 21:15:44 +08:00
niebayes
d061bf3d07 feat(remote_wal): introduce kafka remote wal (#3001)
* feat: integrate remote wal to standalone

* fix: test

* chore: ready to debug kafka remote wal

* fix: test

* chore: add some logs for remote wal

* chore: add logs for topic manager

* fix: properly terminate stream consumer

* fix: properly handle TopicAlreadyExists error

* fix: parse config file error

* fix: properly handle last entry id

* chore: prepare for merge

* fix: test

* fix: typo

* fix: set replication_factor properly

* fix: CR

* test: tmp for test

* Revert "test: tmp for test"

This reverts commit 093a3e0038.

* fix: serde

* fix selector type deserialize
2023-12-26 12:35:24 +00:00
Ruihang Xia
8ce8a8f3c7 fix: revert unfinished route table change (#3008)
* Revert "refactor: hide `RegionRoute` behind `TableRouteValue` (#2989)"

This reverts commit 1641fd572a.

* Revert "feat: MetricsEngine table route (part 1) (#2952)"

This reverts commit 6ac47e939c.
2023-12-26 09:56:49 +00:00
Ruihang Xia
bf635a6c7c feat: replace ahash with murmur3 on generating tsid (#3007)
* feat: replace ahash with murmur3 on generating tsid

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change random seed

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-26 07:45:31 +00:00
LFC
196c06db14 feat: make logging to stdout configurable (#3003)
* feat: make logging to stdout configurable

* fix sqlness

* fix: resolve PR comments
2023-12-26 07:37:50 +00:00
Ruihang Xia
99565a3676 fix: update doc label on pr edit (#3005)
* fix: update doc label on pr edit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update token

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more trigger types

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-26 14:53:34 +08:00
Weny Xu
c902d43380 refactor(remote_wal): add StandaloneWalConfig (#3002)
* feat: integrate remote wal to standalone

* fix: test

* refactor: refactor standalone wal config

* chore: change default kafka port to 9092

* chore: apply suggestions from CR

---------

Co-authored-by: niebayes <niebayes@gmail.com>
2023-12-26 04:31:41 +00:00
Wei
95f172eb81 feat: convert FileMetaData to ParquetMetaData (#2980)
* feat: can convert parquet format FileMetaData to ParquetMetaData

* test: parquet metadata equal

* chore: test

* chore: cr comment

Co-authored-by: fys <40801205+fengys1996@users.noreply.github.com>

* chore: cr comment

* refactor: type name

* chore: cr comment

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: fys <40801205+fengys1996@users.noreply.github.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-12-26 04:27:36 +00:00
Ruihang Xia
3bd2f79841 chore(ci): auto doc labeler job (#2998)
* chore(ci): auto doc labeler job

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak rules

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-25 15:55:01 +00:00
tison
417be13400 feat: adopt human-panic crash reports (#2996)
* feat: adopt human-panic crash reports

Signed-off-by: tison <wander4096@gmail.com>

* build: fix dep toml conflict

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2023-12-25 13:23:18 +00:00
Ruihang Xia
0a9ad004a4 chore: bump version to 0.5.0 (#3000)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-25 13:14:59 +00:00
SSebo
cf561df854 feat: export runtime metric to prometheus (#2985)
* feat: export runtime metric to prometheus

* fix: exporting tokio metrics should enable tokio_unstable

* chore: add more check info to tokio-metric export
2023-12-25 12:47:22 +00:00
LFC
1641fd572a refactor: hide RegionRoute behind TableRouteValue (#2989)
* refactor: hide `RegionRoute` behind `TableRouteValue` (Metric Engine table route, part 2)

* Update src/common/meta/src/ddl/create_table.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Update src/meta-srv/src/procedure/region_migration/test_util.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* fix: resolve PR comments

* fix: rustfmt

* compatible with the old TableRouteValue

* Update src/common/meta/src/key/table_route.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2023-12-25 11:37:50 +00:00
Weny Xu
89129c99c8 chore: setup kafka standalone in coverage test (#2984)
* chore: setup kafka standalone in coverage test

* test: add a naive test for topic manager
2023-12-25 11:18:54 +00:00
Ruihang Xia
48cd22d459 feat: support querying metric engine from frontend (#2987)
* query one logical table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* map column id

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deadcode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove redundant column name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-25 10:47:57 +00:00
zyy17
0d42651047 feat: add '--store-key-prefix' so that multiple meta instances can share the same etcd backend (#2988)
* feat: add '--store-key-prefix' so that multiple meta instances can share the same etcd backend

* refactor: refine lock_key
2023-12-25 09:33:25 +00:00
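The idea behind the '--store-key-prefix' commit above is simply to namespace every metadata key under a per-instance prefix so that several meta instances can write to one shared etcd cluster without colliding. A minimal, hypothetical sketch of that key mapping (the `PrefixedStore` helper below is illustrative, not GreptimeDB's actual code):

```rust
/// Minimal sketch (not the actual GreptimeDB code): namespace every metadata
/// key under a per-instance prefix so several meta instances share one etcd.
struct PrefixedStore {
    prefix: String, // e.g. the value passed via `--store-key-prefix`
}

impl PrefixedStore {
    fn new(prefix: impl Into<String>) -> Self {
        Self { prefix: prefix.into() }
    }

    /// Turn a logical key into the physical key stored in etcd.
    fn physical_key(&self, key: &str) -> String {
        if self.prefix.is_empty() {
            key.to_string()
        } else {
            format!("{}/{}", self.prefix.trim_end_matches('/'), key)
        }
    }
}

fn main() {
    let a = PrefixedStore::new("cluster-a");
    let b = PrefixedStore::new("cluster-b");
    // The same logical key maps to distinct etcd keys per instance.
    assert_eq!(a.physical_key("__meta/table/1"), "cluster-a/__meta/table/1");
    assert_eq!(b.physical_key("__meta/table/1"), "cluster-b/__meta/table/1");
    println!("prefixing works");
}
```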
niebayes
bab198ae68 feat(remote_wal): impl kafka log store (#2971)
* feat: introduce client manager

* chore: add errors for client manager

* chore: add record utils

* chore: impl kafka log store

* chore: build kafka log store upon starting datanode

* chore: update comments for kafka log store

* chore: add a todo for getting entry offset

* fix: typo

* chore: remove unused

* chore: update comments

* fix: typo

* fix: resolve some review conversations

* chore: move commonly referenced crates to workspace Cargo.toml

* fix: style

* fix: style

* chore: unify topic name prefix

* chore: make backoff config configurable by users

* chore: properly use backoff config in wal config

* refactor: read/write of kafka log store

* fix: typo

* fix: typo

* fix: resolve review conversations
2023-12-25 09:21:52 +00:00
niebayes
d4ac8734bc refactor(remote_wal): entry id usage (#2986)
* chore: update comments for log store stuff

* refactor: entry id usage

* tmp update

* Revert "tmp update"

This reverts commit fcfcda2851.

* fix: resolve review conversations

* fix: resolve review conversations

* chore: remove entry offset
2023-12-25 07:30:27 +00:00
Lei, HUANG
4664cc601c fix: install nextest bin (#2990)
* use binstall to install nextest

* fix: change all docker files

* Update docker/dev-builder/centos/Dockerfile

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-25 06:01:47 +00:00
Weny Xu
06fd7fd210 feat: add skip_wal_replay to OpenRegion instruction (#2977)
feat: add skip_wal_replay to OpenRegion instruction
2023-12-23 06:42:21 +00:00
dennis zhuang
d7b2e791b9 fix: duplicate information_schema (#2979)
* fix: duplicate information_schema

* chore: style

* fix: comment in sqlness
2023-12-22 14:02:11 +00:00
niebayes
7d509e97f6 chore: move some commonly referenced crates to workspace Cargo.toml (#2981)
fix: resolve conflicts
2023-12-22 09:13:18 +00:00
Wei
a7349b573b feat: support fetch ranges in concurrent (#2959)
* feat: concurrent fetch ranges

* chore: cr comment

* chore: cr comment

* chore: clippy
2023-12-22 08:47:55 +00:00
niebayes
830a91c548 feat(remote_wal): implement topic allocation (#2970)
* chore: implement wal options allocator

* chore: implement round-robin topic selector

* feat: add shuffle to round-robin topic selector

* chore: implement kafka topic manager

* test: add tests for wal options allocator

* test: add wal provider to test config files

* test: leave todos for adding tests for remote wal

* fix: resolve review conversations

* fix: typo
2023-12-22 08:26:48 +00:00
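The topic-allocation commit above mentions a round-robin topic selector with a shuffled starting point. A stand-alone sketch of that selection strategy (simplified and hypothetical; the real implementation differs, e.g. it likely uses a proper RNG for the shuffle):

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Minimal sketch of a round-robin topic selector: topics are handed out in a
/// rotating order, and the starting cursor is randomized so that multiple
/// allocator instances do not all begin with topic 0.
struct RoundRobinTopicSelector {
    topics: Vec<String>,
    cursor: AtomicUsize,
}

impl RoundRobinTopicSelector {
    fn new(topics: Vec<String>) -> Self {
        // Cheap stand-in for a shuffle: derive a pseudo-random starting offset
        // from the current time instead of pulling in an RNG dependency.
        let seed = std::time::SystemTime::now()
            .duration_since(std::time::UNIX_EPOCH)
            .map(|d| d.subsec_nanos() as usize)
            .unwrap_or(0);
        let start = if topics.is_empty() { 0 } else { seed % topics.len() };
        Self { topics, cursor: AtomicUsize::new(start) }
    }

    /// Pick the next topic in round-robin order.
    fn select(&self) -> Option<&str> {
        if self.topics.is_empty() {
            return None;
        }
        let i = self.cursor.fetch_add(1, Ordering::Relaxed) % self.topics.len();
        Some(self.topics[i].as_str())
    }
}

fn main() {
    let selector = RoundRobinTopicSelector::new(vec![
        "greptimedb_wal_topic_0".to_string(),
        "greptimedb_wal_topic_1".to_string(),
        "greptimedb_wal_topic_2".to_string(),
    ]);
    for _ in 0..5 {
        println!("allocated: {}", selector.select().unwrap());
    }
}
```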
ZonaHe
6221e5b105 feat: update dashboard to v0.4.4 (#2978)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-12-22 06:37:39 +00:00
Ning Sun
43f01cc594 feat: add a default internal schema (#2974) 2023-12-22 06:25:19 +00:00
WU Jingdi
675767c023 feat: Support export metric in remote write format (#2928)
* feat: remote write metric task

* chore: pass standalone task to frontend

* chore: change name to system metric

* fix: add header and rename to export metrics
2023-12-22 02:09:12 +00:00
dennis zhuang
054bca359e feat: adds build_info table (#2969) 2023-12-21 12:28:09 +00:00
Weny Xu
ff8c10eae7 feat: add CatchupRequest to engine (#2939)
* chore: remove redundant code

* feat(mito): add CatchupRequest

feat: reopen region before replay if need

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-12-21 08:27:53 +00:00
Zhenchi
b5c5458798 refactor: remove unused code for pruning row groups (#2973)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-12-21 07:54:29 +00:00
Ruihang Xia
6a1f5751c6 feat(metric-engine): open and close metric regions (#2961)
* implementation

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add metrics

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* define consts

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* print region id with display

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* only ignore region not found

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-21 07:33:22 +00:00
LFC
8776b1204b feat: use explicitly set table id (#2945)
* feat: make standalone table metadata allocator able to use explicitly set table id

* rebase

* fix: resolve PR comments
2023-12-21 05:51:43 +00:00
Yiran
bad89185c2 ci: add user-doc label checker (#2967) 2023-12-21 03:04:03 +00:00
Weny Xu
6c1c7d8d24 feat: allow initializing regions in background (#2930)
* refactor: open regions in background

* feat: add status code of RegionNotReady

* feat: use RegionNotReady instead of RegionNotFound for a registering region

* chore: apply suggestions from CR

* feat: add status code of RegionBusy

* feat: return RegionBusy for mutually exclusive operations
2023-12-20 11:37:46 +00:00
niebayes
9da1f236d9 feat(remote_wal): add skeleton for remote wal related to datanode (#2941)
* refactor: refactor wal config

* test: update tests related to wal

* feat: introduce kafka wal config

* chore: augment proto with wal options

* feat: augment region open request with wal options

* feat: augment mito region with wal options

* feat: augment region create request with wal options

* refactor: refactor log store trait

* feat: add skeleton for kafka log store

* feat: generalize building log store when starting datanode

* feat: integrate wal options to region write

* chore: minor update

* refactor: remove wal options from region create/open requests

* fix: compilation issues

* chore: insert wal options into region options upon initializing region server

* chore: integrate wal options into region options

* chore: fill in kafka wal config

* chore: reuse namespaces while writing to wal

* chore: minor update

* chore: fetch wal options from region while handling truncate/flush

* fix: region options test

* fix: resolve some review conversations

* refactor: serde with wal options

* fix: resolve some review conversations
2023-12-20 09:01:17 +00:00
LFC
6ac47e939c feat: MetricsEngine table route (part 1) (#2952)
* refactor: make `TableRouteValue` an enum to add the variant of MetricsEngine table route

* fix: resolve PR comments

* Update src/common/meta/src/key/table_route.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2023-12-20 07:31:59 +00:00
Zhenchi
7d1724f832 feat(inverted_index.create): add index creator (#2960)
* feat(inverted_index.create): add read/write for external intermediate files

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: MAGIC_CODEC_V1 -> CODEC_V1_MAGIC

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: fix typos intermedia -> intermediate

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: typos

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat(inverted_index.create): add external sorter

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: fix typos intermedia -> intermediate

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: drop the stream as early as possible to avoid recursive calls to poll

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: project merge sorted stream

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add total_row_count to SortOutput

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: remove change of format

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat(inverted_index.create): add index creator

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add check for total_row_count

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: lazy set meta of writer

This reverts commit 63cb5bdb5c3a08406d978357d8167ca18ed1b83b.

* feat: lazyily provide inverted index writer

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish readability

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add push_with_name_n

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-12-20 07:02:13 +00:00
Weny Xu
d2f49cbc2e chore: remove debug = 1 of dev profile (#2966) 2023-12-20 04:37:46 +00:00
Weny Xu
97c3755ab6 feat(mito): add skip_wal_replay option to OpenRegionRequest (#2955)
* feat(mito): add skip_replay_wal option to OpenRegionRequest

* test: add tests for skip replay wal

* chore: rename `skip_replay_wal` to `skip_wal_replay`
2023-12-20 03:31:48 +00:00
JeremyHi
5f8c17514f chore: SelectorType use snake_case (#2962)
chore: let SelectorType use snake_case
2023-12-20 02:28:29 +00:00
niebayes
839e653e0d feat(remote_wal): add skeleton for remote wal related to meta srv (#2933)
* feat: introduce wal config and kafka config

* feat: introduce kafka topic manager and selector

* feat: introduce region wal options

* chore: build region wal options upon starting meta srv

* feat: integrate region wal options allocator into table meta allocator

* chore: add wal config to metasrv.example.toml

* chore: add region wal options map to create table procedure

* feat: augment region create request with wal options

* feat: augment DatanodeTableValue with region wal options map

* chore: encode region wal options upon constructing table creator

* feat: persist region wal options when creating table meta

* fix: sqlness test

* chore: set default wal provider to raft-engine

* refactor: refactor wal options

* chore: update wal options allocator

* refactor: rename region wal options to wal options

* chore: update usages of region wal options

* chore: add some comments to kafka

* chore: fill in kafka config

* test: add tests for serde wal config

* test: add tests for wal options

* refactor: refactor wal options allocator to enum

* refactor: store wal options into the request options instead

* fix: typo

* fix: typo

* refactor: move wal options map to region info

* refactor: rework serialization and deserialization of wal options

* refactor: use serde_json to encode wal options

* chore: rename wal_options_map to region_wal_options

* chore: resolve some review comments

* fix: typo

* refactor: replace kebab-case with snake_case

* fix: sqlness and coverage tests

* fix: typo

* fix: coverage test

* fix: coverage test

* chore: resolve some review conversations

* fix: resolve some review conversations

* chore: format comments in metasrv.example.toml

* chore: update import style

* feat: integrate wal options allocator to standalone mode

* test: add compatible test for OpenRegion

* test: add compatible test for UpdateRegionMetadata

* chore: remove based suffix from topic selector type
2023-12-19 12:43:47 +00:00
LFC
c7b36779c1 fix: correctly decode mysql timestamp binary value (#2946)
* fix: correctly decode mysql timestamp binary value

* fix: ut

* fix: resolve PR comments
2023-12-19 11:05:28 +00:00
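For context on what "decoding a MySQL timestamp binary value" involves: in the MySQL binary protocol a DATETIME/TIMESTAMP payload is length-prefixed (0, 4, 7, or 11 bytes), with the year as a little-endian u16 followed by month, day, and optional time and microsecond parts. The sketch below is a simplified, hedged decoder of that wire format, not the code from this PR:

```rust
/// Decoded calendar parts of a MySQL binary-protocol DATETIME/TIMESTAMP.
#[derive(Debug, Default, PartialEq)]
struct MysqlTimestamp {
    year: u16,
    month: u8,
    day: u8,
    hour: u8,
    minute: u8,
    second: u8,
    micros: u32,
}

/// Simplified decoder: the first byte is the payload length (0, 4, 7 or 11),
/// followed by year (u16 LE), month, day, then optional time and microseconds.
fn decode_binary_timestamp(buf: &[u8]) -> Option<MysqlTimestamp> {
    let (&len, rest) = buf.split_first()?;
    let mut ts = MysqlTimestamp::default();
    if len == 0 {
        return Some(ts); // zero value: 0000-00-00 00:00:00
    }
    if len < 4 || rest.len() < len as usize {
        return None;
    }
    ts.year = u16::from_le_bytes([rest[0], rest[1]]);
    ts.month = rest[2];
    ts.day = rest[3];
    if len >= 7 {
        ts.hour = rest[4];
        ts.minute = rest[5];
        ts.second = rest[6];
    }
    if len >= 11 {
        ts.micros = u32::from_le_bytes([rest[7], rest[8], rest[9], rest[10]]);
    }
    Some(ts)
}

fn main() {
    // 2023-12-19 11:05:28, no microseconds => 7-byte payload.
    let raw = [7u8, 0xE7, 0x07, 12, 19, 11, 5, 28];
    let ts = decode_binary_timestamp(&raw).unwrap();
    assert_eq!((ts.year, ts.month, ts.day), (2023, 12, 19));
    assert_eq!((ts.hour, ts.minute, ts.second), (11, 5, 28));
    println!("{ts:?}");
}
```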
Wei
bbcac3a541 fix: auto create datatype_extension missing (#2953) 2023-12-19 10:53:31 +00:00
dennis zhuang
600cde1ff2 fix: wrong link for selector (#2958) 2023-12-19 10:50:48 +00:00
Zhenchi
83de399bef feat(inverted_index.create): add external sorter (#2950)
* feat(inverted_index.create): add read/write for external intermediate files

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: MAGIC_CODEC_V1 -> CODEC_V1_MAGIC

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: fix typos intermedia -> intermediate

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: typos

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat(inverted_index.create): add external sorter

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: fix typos intermedia -> intermediate

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: drop the stream as early as possible to avoid recursive calls to poll

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: project merge sorted stream

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add total_row_count to SortOutput

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: remove change of format

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: rename segment null bitmap

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: test type alias

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: allow `memory_usage_threshold` to be None to turn off dumping

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: change segment_row_count type to NonZeroUsize

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: accept BytesRef instead

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add `push_n` to adapt mito2

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: add k-way merge TODO

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: more sorter cases

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: make the merge tree balance

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Update src/index/src/inverted_index/create/sort/external_sort.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: stable feature

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-12-19 08:14:37 +00:00
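The external sorter added above follows the classic external merge-sort pattern: buffer values in memory, dump a sorted run once a memory threshold is crossed, then k-way merge the runs. A toy, self-contained sketch of that pattern, with in-memory `Vec`s standing in for the intermediate files and a min-heap for the merge (illustrative only, not the `index` crate's implementation):

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// Toy external sorter: values are buffered until `threshold` items, then the
/// buffer is sorted and "dumped" as a run (a Vec here stands in for an
/// intermediate file). `finish` k-way merges all runs with a min-heap.
struct ExternalSorter {
    threshold: usize,
    buffer: Vec<u64>,
    runs: Vec<Vec<u64>>,
}

impl ExternalSorter {
    fn new(threshold: usize) -> Self {
        Self { threshold, buffer: Vec::new(), runs: Vec::new() }
    }

    fn push(&mut self, value: u64) {
        self.buffer.push(value);
        if self.buffer.len() >= self.threshold {
            self.dump();
        }
    }

    /// Sort the in-memory buffer and move it into a new run.
    fn dump(&mut self) {
        if self.buffer.is_empty() {
            return;
        }
        let mut run = std::mem::take(&mut self.buffer);
        run.sort_unstable();
        self.runs.push(run);
    }

    /// K-way merge of all sorted runs into one sorted output.
    fn finish(mut self) -> Vec<u64> {
        self.dump();
        // Heap entries: (value, run index, position inside that run), smallest first.
        let mut heap = BinaryHeap::new();
        for (i, run) in self.runs.iter().enumerate() {
            if let Some(&v) = run.first() {
                heap.push(Reverse((v, i, 0usize)));
            }
        }
        let mut out = Vec::new();
        while let Some(Reverse((v, run_idx, pos))) = heap.pop() {
            out.push(v);
            if let Some(&next) = self.runs[run_idx].get(pos + 1) {
                heap.push(Reverse((next, run_idx, pos + 1)));
            }
        }
        out
    }
}

fn main() {
    let mut sorter = ExternalSorter::new(3);
    for v in [9, 1, 7, 3, 8, 2, 6] {
        sorter.push(v);
    }
    assert_eq!(sorter.finish(), vec![1, 2, 3, 6, 7, 8, 9]);
    println!("sorted");
}
```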
Ruihang Xia
6b8dbcfb54 chore: update toolchain to 20231219 (#2932)
* update toolchain file, remove unused feature gates

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update action file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update to 12-19

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-19 07:24:08 +00:00
Wei
3e6a564f8e fix: blocking read timer lack of parameter (#2954) 2023-12-19 06:16:55 +00:00
Zhenchi
ccbd49777d fix: correct the error message for snappy compress (#2956)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-12-19 06:14:40 +00:00
Weny Xu
29fc2ea9d8 feat: fetch manifests in concurrent (#2951)
* feat: fetch manifests in concurrent

* chore: set fetching manifest concurrency limit to 16
2023-12-19 06:12:30 +00:00
WU Jingdi
d180e41230 fix: change range query time slot to [align_ts, align_ts + range) (#2938) 2023-12-19 02:35:14 +00:00
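The fix above changes each range-query time slot to the half-open interval [align_ts, align_ts + range): the left edge is included and the right edge already belongs to the next slot. A one-function sketch of that membership test (illustrative, not the engine's code):

```rust
/// A timestamp belongs to the slot starting at `align_ts` iff it falls in the
/// half-open interval [align_ts, align_ts + range).
fn in_slot(ts: i64, align_ts: i64, range: i64) -> bool {
    ts >= align_ts && ts < align_ts + range
}

fn main() {
    let (align_ts, range) = (1_000, 500);
    assert!(in_slot(1_000, align_ts, range));  // left edge included
    assert!(in_slot(1_499, align_ts, range));  // last value of the slot
    assert!(!in_slot(1_500, align_ts, range)); // right edge belongs to the next slot
}
```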
liyang
62d5fcbd76 refactor: greptimedb cluster sqlness test scripts (#2947) 2023-12-18 10:59:14 +00:00
Zhenchi
d339191e29 refactor: remove redundant .compat() (#2949)
* refactor: remove redundant `.compat()`

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* remove dep

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-12-18 10:36:45 +00:00
Zhenchi
029ff2f1e3 feat(inverted_index.create): add read/write for external intermediate files (#2942)
* feat(inverted_index.create): add read/write for external intermediate files

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: MAGIC_CODEC_V1 -> CODEC_V1_MAGIC

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: polish comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: fix typos intermedia -> intermediate

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: typos

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: futures_code -> asynchronous_codec

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: bump bytes to 1.5

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-12-18 09:44:48 +00:00
Ruihang Xia
9af9c0229a feat: create table procedure for metric engine, part 1 (#2943)
* implementation

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* initialize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove empty file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply review sugg

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-18 08:03:28 +00:00
WU Jingdi
4383a69876 fix!: remove range calendar as To option (#2940)
fix: remove range calendar as `To` option
2023-12-18 07:48:49 +00:00
LFC
033a065359 refactor: make sequence bounded with max value (#2937)
* refactor: make sequence bounded with max value

(cherry picked from commit 3a8eba6f863327a96b617cd86ee2d39fac30abb2)

* fix: resolve PR comments
2023-12-18 07:05:28 +00:00
dennis zhuang
262a79a170 feat: adds some tables to information_schema (#2935)
* feat: adds engines table to information_schema

* feat: adds COLUMN_PRIVILEGES and COLUMN_STATISTICS

* feat: refactor memory tables

* chore: rename memory_tables

* test: adds unit tests

* chore: format

* chore: style

* fix: by cr comments

* refactor: tables
2023-12-18 06:10:22 +00:00
Wei
5dc7ce1791 fix: typos and bit operations (#2944)
* fix: typos and bit operation

* fix: helper

* chore: add tests in decimal128.rs and interval.rs

* chore: test

* chore: change proto version

* chore: clippy
2023-12-18 03:06:11 +00:00
discord9
e35a494a3f test: fix an incorrect test script (#2934) 2023-12-14 14:27:36 +00:00
shuiyisong
5dba373ede chore: return json body under http status 401 (#2924)
* chore: change auth_fn to function and return response with json body

* chore: move unsupported to debug level

* chore: add docs and tests

* chore: rebase and update test
2023-12-14 10:01:12 +00:00
Weny Xu
518bac35bc feat: add log and tracing layers for Copy From statement (#2929)
feat: add log and tracing layers
2023-12-14 06:15:30 +00:00
Ning Sun
39f80876cd feat: upgrade rustls library family, opensrv-mysql and pgwire (#2927)
* feat: deps up

* fmt: toml format
2023-12-14 05:56:39 +00:00
LFC
181e16a11a refactor: make instance started separately (#2911)
* refactor: make the instance start separately, so it can be further integrated into other binaries

* fix: resolve PR comments

* fix: resolve PR comments
2023-12-14 03:44:29 +00:00
JeremyHi
99dda93f0e feat: sql with influxdb v1 result format (#2917)
* feat: sql with influxdb v1 result format

* chore: add unit tests

* feat: minor refactor

* chore: by comment

* chore: u128 to u64 since serde can't deserialize u128 in enum

* chore: by comment

* chore: apply suggestion

* chore: revert suggestion

* chore: try again

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-12-13 16:15:37 +00:00
ZonaHe
d3da128d66 feat: update dashboard to v0.4.3 (#2926)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-12-13 11:12:17 +00:00
WU Jingdi
370ec04a9d fix: use linear interpolation to implement range LINEAR fill strategy (#2903)
* fix: use linear interpolation to implement range LINEAR fill strategy

* chore: update test case

* chore: optimize linear interpolation implementation

* chore: update test and add comment
2023-12-13 09:53:35 +00:00
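The LINEAR fill strategy above estimates a missing value at time t by linear interpolation between the two nearest known points (t0, v0) and (t1, v1). A minimal sketch of that calculation (not the actual range-query code):

```rust
/// Linearly interpolate the value at `t` from two known points (t0, v0) and
/// (t1, v1). Returns None when the two points share the same timestamp.
fn lerp(t: i64, (t0, v0): (i64, f64), (t1, v1): (i64, f64)) -> Option<f64> {
    if t0 == t1 {
        return None;
    }
    let ratio = (t - t0) as f64 / (t1 - t0) as f64;
    Some(v0 + ratio * (v1 - v0))
}

fn main() {
    // Missing sample at t=15 between (10, 1.0) and (20, 3.0) => 2.0.
    assert_eq!(lerp(15, (10, 1.0), (20, 3.0)), Some(2.0));
    println!("interpolated");
}
```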
LFC
c13d2fd11d ci: correctly find the binary in dev-build when using "dev" profile (#2925) 2023-12-13 09:04:44 +00:00
Yue Deng
3d651522c2 feat: add build() function to return the database build info (#2919)
* feat: add build function and register it

build() function to return the database build info #2909

* refactor: fix typos and change code structure

* test: add test for build()

* refactor: cargo fmt and eliminate warnings

* Apply suggestions from code review

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* refactor: move system.sql to a new directory

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2023-12-13 09:02:00 +00:00
Wei
fec3fcf4ef feat: builder to vector without resetting (#2915)
* feat: finish_cloned() without resetting

* test: add unit cases

* chore: port comment

* chore: apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-12-13 06:48:09 +00:00
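The `finish_cloned()` idea in the commit above: a builder normally drains or resets its buffer when it produces a vector, while `finish_cloned` copies the buffered data so the builder can keep accumulating. A hedged, stand-alone sketch of that contrast (the real builders live in the datatypes/arrow layer; this toy type is not their API):

```rust
/// Toy value builder illustrating `finish` (drains the buffer) versus
/// `finish_cloned` (copies the buffer and leaves the builder untouched).
struct I64VectorBuilder {
    values: Vec<i64>,
}

impl I64VectorBuilder {
    fn new() -> Self {
        Self { values: Vec::new() }
    }

    fn push(&mut self, v: i64) {
        self.values.push(v);
    }

    /// Produce a vector and reset the builder, like the pre-existing behavior.
    fn finish(&mut self) -> Vec<i64> {
        std::mem::take(&mut self.values)
    }

    /// Produce a vector *without* resetting, so the builder keeps its data.
    fn finish_cloned(&self) -> Vec<i64> {
        self.values.clone()
    }
}

fn main() {
    let mut builder = I64VectorBuilder::new();
    builder.push(1);
    builder.push(2);

    let snapshot = builder.finish_cloned();
    builder.push(3); // builder is still usable with its previous contents

    assert_eq!(snapshot, vec![1, 2]);
    assert_eq!(builder.finish(), vec![1, 2, 3]);
    assert!(builder.finish_cloned().is_empty()); // drained by `finish`
}
```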
LFC
3555e1644c ci: able to choose cargo profile in dev-build action (#2923) 2023-12-13 04:14:16 +00:00
niebayes
c42168d7c2 chore: remove useless storage apis (#2904)
* chore: remove metadata.rs

* chore: remove snapshot.rs

* chore: remove chunk.rs

* chore: remove engine.rs

* chore: remove MIN_OP_TYPE from types.rs

* chore: remove region.rs

* chore: remove almost all code in requests.rs

* chore: remove WriteRequest from requests.rs

* chore: remove responses.rs

* chore: remove unused descriptors from descriptors.rs

* chore: remove unused consts from consts.rs

* chore: remove useless comments
2023-12-13 03:36:14 +00:00
Weny Xu
3c24ca1a7a test: add more tests for region migration procedure (#2895)
* test: add flow test for open candidate region with retryable error

* test: add flow test for upgrade candidate retry failed

* test: add flow test for upgrade candidate with retry
2023-12-13 03:00:58 +00:00
Wei
9531469660 fix: date - interval sqlness (#2912)
fix: make `date - interval` work
2023-12-12 12:45:38 +00:00
LFC
880ca2e786 fix: return error to client if prepare stmt param not match (#2918)
* fix: return error to client if prepare stmt param not match

* Update src/servers/src/mysql/handler.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-12 12:41:09 +00:00
Ruihang Xia
0ce2b50676 feat!: do not get TZ info from server local env (#2905)
* feat: do not get TZ info from server local env

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add empty line

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-12 12:41:05 +00:00
WU Jingdi
34635558d2 fix: support trailing commas in SQL (#2913)
* fix: support trailing commas in SQL

* fix: broken ci
2023-12-12 11:56:23 +00:00
Ruihang Xia
8a74bd36f5 style: rename *Adaptor to *Adapter (#2914)
* rename RecordBatchStreamAdaptor to RecordBatchStreamWrapper

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename ServerSqlQueryHandlerAdaptor to ServerSqlQueryHandlerAdapter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-12 09:45:09 +00:00
Weny Xu
cf6bba09fd refactor: use downgrading the region instead of closing region (#2863)
* refactor: use downgrading the region instead of closing region

* feat: enhance the tests for alive keeper

* feat: add a metric to track region lease expired

* chore: apply suggestions from CR

* chore: enable logging for test_distributed_handle_ddl_request

* refactor: simplify lease keeper

* feat: add metrics for lease keeper

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor: move OpeningRegionKeeper to common_meta

* feat: register operating regions to MemoryRegionKeeper
2023-12-12 09:24:17 +00:00
Ruihang Xia
89a0d3af1e feat: set default debug level of release and dev profiles to 1 (#2916)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-12 08:01:18 +00:00
tison
47e51545dd ci: prevent running nightly FT in forks (#2906) 2023-12-12 02:49:52 +00:00
Zhenchi
1e22f1cb4f feat(inverted_index.format): add writer (#2900)
* feat(inverted_index.format): add writer

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: remove clippy allow

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Update src/index/src/inverted_index/error.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-11 09:55:25 +00:00
dennis zhuang
cf8b6c77dc chore: clean up unused errors (#2901) 2023-12-11 09:08:50 +00:00
Yingwen
6a57f4975e perf(mito): scan SSTs and memtables in parallel (#2852)
* feat: seq scan support parallelism

* feat: scan region by parallelism in config

* feat: enlarge channel size

* chore: parallel builder logs

* feat: use parallel reader according to the number of sources

* chore: 128 channel size

* feat: add fetch cost metrics

* feat: add channel size to config

* feat: builder cost

* feat: logs

* feat: compiler error

* feat: fetch cost

* feat: convert cost

* chore: Revert "feat: logs"

This reverts commit 01e0df2c3a.

* chore: fix compiler errors

* feat: reduce channel size to 32

* chore: use workspace tokio-stream

* test: test scan in parallel

* chore: comment typos

* refactor: build all sources first

* test: test 0 parallelism

* feat: use parallel scan by default

* docs: update config example

* feat: log parallelism

* refactor: keep config in engine inner

* refactor: rename parallelism method

* docs: update docs

* test: fix config api test

* docs: update parallel scan comment

* feat: 0 for default parallelism
2023-12-11 06:43:17 +00:00
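The parallel scan described above reads multiple sources (SSTs and memtables) concurrently and funnels their batches through a bounded channel, with a configurable parallelism and channel size. A simplified, thread-based sketch of that shape (the real code is async and lives in mito2; the names and types below are illustrative):

```rust
use std::sync::mpsc;
use std::thread;

/// Scan several "sources" in parallel: each source gets its own thread and
/// sends its rows through one bounded channel; the receiver merges them.
/// A small channel bound applies backpressure, mirroring the configurable
/// channel size mentioned in the commit.
fn parallel_scan(sources: Vec<Vec<u64>>, channel_size: usize) -> Vec<u64> {
    let (tx, rx) = mpsc::sync_channel::<u64>(channel_size);

    let mut handles = Vec::new();
    for source in sources {
        let tx = tx.clone();
        handles.push(thread::spawn(move || {
            for row in source {
                // `send` blocks when the channel is full (backpressure).
                tx.send(row).expect("receiver alive");
            }
        }));
    }
    drop(tx); // close the channel once all producers finish

    let mut out: Vec<u64> = rx.into_iter().collect();
    for handle in handles {
        handle.join().expect("scan thread panicked");
    }
    // Order across sources is not deterministic; sort for a stable result.
    out.sort_unstable();
    out
}

fn main() {
    let sources = vec![vec![1, 4, 7], vec![2, 5, 8], vec![3, 6, 9]];
    assert_eq!(parallel_scan(sources, 32), (1..=9).collect::<Vec<_>>());
    println!("merged");
}
```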
tison
178018143d ci: prevent running nightly CI in forks (#2898)
Signed-off-by: tison <wander4096@gmail.com>
2023-12-11 02:34:59 +00:00
tison
73227bbafd chore: ignore MySQL client sent SELECT $$ (#2896)
Signed-off-by: tison <wander4096@gmail.com>
2023-12-11 02:27:22 +00:00
Weny Xu
5a99f098c5 test: add tests for region migration procedure (#2857)
* feat: add backward compatibility test for persistent ctx

* refactor: refactor State of region migration

* feat: add test utils for region migration tests

* test: add simple region migration tests

* chore: apply suggestions from CR
2023-12-08 08:47:09 +00:00
Ruihang Xia
7cf9945161 fix: re-enable ignored case test_query_prepared (#2892)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-08 08:35:56 +00:00
tison
bfb4794cfa fix: handle heartbeat shutdown gracefully (#2886)
* fix: handle heartbeat shutdown gracefully

Signed-off-by: tison <wander4096@gmail.com>

* improve logging

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2023-12-08 03:59:05 +00:00
Ruihang Xia
58183fe72f fix: align linear_regression to PromQL's behavior (#2879)
* fix: accept f64 and i64 as predict_linear's param

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use second instead of millisecond

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test to linear_regression

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-08 02:41:10 +00:00
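PromQL's predict_linear fits a least-squares line over (timestamp, value) pairs and extrapolates it, and the fix above makes the timestamps use seconds rather than milliseconds. A hedged, stand-alone sketch of that calculation (the shape of the computation only, not the engine's implementation, which extrapolates from the evaluation time):

```rust
/// Least-squares slope/intercept over (t_seconds, value) samples, then
/// extrapolate `predict_seconds` past the last sample. Timestamps are in
/// seconds, not milliseconds.
fn predict_linear(samples: &[(f64, f64)], predict_seconds: f64) -> Option<f64> {
    if samples.len() < 2 {
        return None;
    }
    let n = samples.len() as f64;
    let (mut sum_t, mut sum_v, mut sum_tt, mut sum_tv) = (0.0, 0.0, 0.0, 0.0);
    for &(t, v) in samples {
        sum_t += t;
        sum_v += v;
        sum_tt += t * t;
        sum_tv += t * v;
    }
    let denom = n * sum_tt - sum_t * sum_t;
    if denom == 0.0 {
        return None;
    }
    let slope = (n * sum_tv - sum_t * sum_v) / denom;
    let intercept = (sum_v - slope * sum_t) / n;
    let last_t = samples.last()?.0;
    Some(slope * (last_t + predict_seconds) + intercept)
}

fn main() {
    // A series growing by 1 per second: predicting 60s past t=9 gives ~69.
    let samples: Vec<(f64, f64)> = (0..10).map(|i| (i as f64, i as f64)).collect();
    let predicted = predict_linear(&samples, 60.0).unwrap();
    assert!((predicted - 69.0).abs() < 1e-9);
}
```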
Niwaka
09aa4b72a5 chore: update storage config example (#2887)
chore: update config example
2023-12-07 03:18:56 +00:00
dennis zhuang
43f32f4499 feat: impl date_add/date_sub functions (#2881)
* feat: adds date_add and date_sub function

* test: add date function

* fix: adds interval to date returns wrong result

* fix: header

* fix: typo

* fix: timestamp resolution

* fix: capacity

* chore: apply suggestion

* fix: wrong behavior when adding intervals to timestamp, date and datetime

* chore: remove unused error

* test: refactor and add some tests
2023-12-07 03:02:15 +00:00
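At its core, date_add/date_sub shifts a timestamp by an interval while respecting the timestamp's resolution (one of the follow-up fixes listed above). A minimal sketch using millisecond timestamps and an interval of whole days plus a millisecond remainder (a hypothetical helper, not the actual function; real intervals also carry months, which need calendar-aware logic):

```rust
/// A simplified interval: whole days plus a millisecond remainder.
struct Interval {
    days: i64,
    millis: i64,
}

const MILLIS_PER_DAY: i64 = 24 * 60 * 60 * 1_000;

/// date_add over a millisecond-resolution timestamp; date_sub is the same
/// operation with the interval negated.
fn date_add_millis(ts_millis: i64, interval: &Interval) -> i64 {
    ts_millis + interval.days * MILLIS_PER_DAY + interval.millis
}

fn date_sub_millis(ts_millis: i64, interval: &Interval) -> i64 {
    date_add_millis(ts_millis, &Interval { days: -interval.days, millis: -interval.millis })
}

fn main() {
    let ts = 1_700_000_000_000; // some timestamp in milliseconds
    let one_day = Interval { days: 1, millis: 0 };
    assert_eq!(date_add_millis(ts, &one_day) - ts, MILLIS_PER_DAY);
    assert_eq!(date_sub_millis(date_add_millis(ts, &one_day), &one_day), ts);
}
```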
tison
ea80570cb1 fix: mysql version function result (#2884)
Signed-off-by: tison <wander4096@gmail.com>
2023-12-06 16:14:09 +00:00
Niwaka
cfe3a2c55e feat!: support table ddl for custom storage (#2733)
* feat: support table ddl for custom_storage

* refactor: rename extract_variant_name to name

* chore: add blank

* chore: keep compatible

* feat: rename custom_stores to providers

* chore: rename

* chore: config

* refactor: add should_retry in client Error

* fix: test fail

* chore: remove unused options

* chore: remove unused import

* chore: remove the blanks.

* chore: revert

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-12-06 15:59:01 +00:00
Ruihang Xia
2cca267a32 chore: tweak status code of promql errors (#2883)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-12-06 13:50:53 +00:00
tison
f74715ce52 refactor: RegionEngine::handle_request always returns affected rows (#2874)
* refactor: RegionEngine::handle_request -> handle_execution

Signed-off-by: tison <wander4096@gmail.com>

* propagate refactor

Signed-off-by: tison <wander4096@gmail.com>

* revert spell change

Signed-off-by: tison <wander4096@gmail.com>

* propagate refactor

Signed-off-by: tison <wander4096@gmail.com>

* cargo clippy

Signed-off-by: tison <wander4096@gmail.com>

* propagate refactor

Signed-off-by: tison <wander4096@gmail.com>

* cargo fmt

Signed-off-by: tison <wander4096@gmail.com>

* more name clarification

Signed-off-by: tison <wander4096@gmail.com>

* revert rename

Signed-off-by: tison <wander4096@gmail.com>

* wrap affected rows into RegionResponse

Signed-off-by: tison <wander4096@gmail.com>

* flatten return AffectedRows

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2023-12-06 13:27:19 +00:00
Weny Xu
1141dbe946 chore: unify the meta metrics styling (#2875)
* chore: unify the meta metrics styling

* chore: apply suggestions from CR
2023-12-06 09:20:41 +00:00
ZonaHe
a415685bf1 feat: update dashboard to v0.4.2 (#2882)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-12-06 09:13:02 +00:00
1257 changed files with 114039 additions and 23045 deletions


@@ -3,13 +3,3 @@ linker = "aarch64-linux-gnu-gcc"
[alias]
sqlness = "run --bin sqlness-runner --"
[build]
rustflags = [
# lints
# TODO: use lint configuration in cargo https://github.com/rust-lang/cargo/issues/5034
"-Wclippy::print_stdout",
"-Wclippy::print_stderr",
"-Wclippy::implicit_clone",
]

.editorconfig Normal file

@@ -0,0 +1,10 @@
root = true
[*]
end_of_line = lf
indent_style = space
insert_final_newline = true
trim_trailing_whitespace = true
[{Makefile,**.mk}]
indent_style = tab


@@ -19,3 +19,8 @@ GT_GCS_BUCKET = GCS bucket
GT_GCS_SCOPE = GCS scope
GT_GCS_CREDENTIAL_PATH = GCS credential path
GT_GCS_ENDPOINT = GCS end point
# Settings for kafka wal test
GT_KAFKA_ENDPOINTS = localhost:9092
# Setting for fuzz tests
GT_MYSQL_ADDR = localhost:4002

.github/CODEOWNERS vendored Normal file

@@ -0,0 +1,27 @@
# GreptimeDB CODEOWNERS
# These owners will be the default owners for everything in the repo.
* @GreptimeTeam/db-approver
## [Module] Database Engine
/src/index @zhongzc
/src/mito2 @evenyag @v0y4g3r @waynexia
/src/query @evenyag
## [Module] Distributed
/src/common/meta @MichaelScofield
/src/common/procedure @MichaelScofield
/src/meta-client @MichaelScofield
/src/meta-srv @MichaelScofield
## [Module] Write Ahead Log
/src/log-store @v0y4g3r
/src/store-api @v0y4g3r
## [Module] Metrics Engine
/src/metric-engine @waynexia
/src/promql @waynexia
## [Module] Flow
/src/flow @zhongzc @waynexia


@@ -21,6 +21,7 @@ body:
- Locking issue
- Performance issue
- Unexpected error
- User Experience
- Other
validations:
required: true
@@ -33,9 +34,14 @@ body:
multiple: true
options:
- Standalone mode
- Distributed Cluster
- Storage Engine
- Query Engine
- Table Engine
- Write Protocols
- Metasrv
- Frontend
- Datanode
- Meta
- Other
validations:
required: true
@@ -77,6 +83,17 @@ body:
validations:
required: true
- type: input
id: greptimedb
attributes:
label: What version of GreptimeDB did you use?
description: |
Please provide the version of GreptimeDB. For example:
0.5.1 etc. You can get it by executing command line `greptime --version`.
placeholder: "0.5.1"
validations:
required: true
- type: textarea
id: logs
attributes:


@@ -40,9 +40,11 @@ runs:
- name: Upload artifacts
uses: ./.github/actions/upload-artifacts
if: ${{ inputs.build-android-artifacts == 'false' }}
env:
PROFILE_TARGET: ${{ inputs.cargo-profile == 'dev' && 'debug' || inputs.cargo-profile }}
with:
artifacts-dir: ${{ inputs.artifacts-dir }}
target-file: ./target/${{ inputs.cargo-profile }}/greptime
target-file: ./target/$PROFILE_TARGET/greptime
version: ${{ inputs.version }}
working-dir: ${{ inputs.working-dir }}


@@ -53,7 +53,7 @@ runs:
uses: docker/setup-buildx-action@v2
- name: Download amd64 artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
name: ${{ inputs.amd64-artifact-name }}
@@ -66,7 +66,7 @@ runs:
mv ${{ inputs.amd64-artifact-name }} amd64
- name: Download arm64 artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
if: ${{ inputs.arm64-artifact-name }}
with:
name: ${{ inputs.arm64-artifact-name }}


@@ -34,7 +34,7 @@ runs:
- name: Upload sqlness logs
if: ${{ failure() && inputs.disable-run-tests == 'false' }} # Only upload logs when the integration tests failed.
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: sqlness-logs
path: /tmp/greptime-*.log


@@ -67,7 +67,7 @@ runs:
- name: Upload sqlness logs
if: ${{ failure() }} # Only upload logs when the integration tests failed.
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: sqlness-logs
path: /tmp/greptime-*.log


@@ -25,7 +25,9 @@ inputs:
runs:
using: composite
steps:
- uses: arduino/setup-protoc@v1
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install rust toolchain
uses: dtolnay/rust-toolchain@master
@@ -38,7 +40,7 @@ runs:
uses: Swatinem/rust-cache@v2
- name: Install Python
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: '3.10'
@@ -62,15 +64,15 @@ runs:
- name: Upload sqlness logs
if: ${{ failure() }} # Only upload logs when the integration tests failed.
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: sqlness-logs
path: ${{ runner.temp }}/greptime-*.log
path: /tmp/greptime-*.log
retention-days: 3
- name: Build greptime binary
shell: pwsh
run: cargo build --profile ${{ inputs.cargo-profile }} --features ${{ inputs.features }} --target ${{ inputs.arch }}
run: cargo build --profile ${{ inputs.cargo-profile }} --features ${{ inputs.features }} --target ${{ inputs.arch }} --bin greptime
- name: Upload artifacts
uses: ./.github/actions/upload-artifacts

.github/actions/fuzz-test/action.yaml vendored Normal file

@@ -0,0 +1,13 @@
name: Fuzz Test
description: 'Fuzz test given setup and service'
inputs:
target:
description: "The fuzz target to test"
runs:
using: composite
steps:
- name: Run Fuzz Test
shell: bash
run: cargo fuzz run ${{ inputs.target }} --fuzz-dir tests-fuzz -D -s none -- -max_total_time=120
env:
GT_MYSQL_ADDR: 127.0.0.1:4002


@@ -15,7 +15,7 @@ runs:
# |- greptime-darwin-amd64-v0.5.0.sha256sum/greptime-darwin-amd64-v0.5.0.sha256sum
# ...
- name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
- name: Create git tag for release
if: ${{ github.event_name != 'push' }} # Meaning this is a scheduled or manual workflow.


@@ -73,7 +73,7 @@ runs:
using: composite
steps:
- name: Download artifacts
uses: actions/download-artifact@v3
uses: actions/download-artifact@v4
with:
path: ${{ inputs.artifacts-dir }}


@@ -6,7 +6,7 @@ inputs:
required: true
target-file:
description: The path of the target artifact
required: true
required: false
version:
description: Version of the artifact
required: true
@@ -18,11 +18,12 @@ runs:
using: composite
steps:
- name: Create artifacts directory
if: ${{ inputs.target-file != '' }}
working-directory: ${{ inputs.working-dir }}
shell: bash
run: |
mkdir -p ${{ inputs.artifacts-dir }} && \
mv ${{ inputs.target-file }} ${{ inputs.artifacts-dir }}
cp ${{ inputs.target-file }} ${{ inputs.artifacts-dir }}
# The compressed artifacts will use the following layout:
# greptime-linux-amd64-pyo3-v0.3.0sha256sum
@@ -49,15 +50,15 @@ runs:
run: Get-FileHash ${{ inputs.artifacts-dir }}.tar.gz -Algorithm SHA256 | select -ExpandProperty Hash > ${{ inputs.artifacts-dir }}.sha256sum
# Note: The artifacts will be double zip compressed(related issue: https://github.com/actions/upload-artifact/issues/39).
# However, when we use 'actions/download-artifact@v3' to download the artifacts, it will be automatically unzipped.
# However, when we use 'actions/download-artifact' to download the artifacts, it will be automatically unzipped.
- name: Upload artifacts
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.artifacts-dir }}
path: ${{ inputs.working-dir }}/${{ inputs.artifacts-dir }}.tar.gz
- name: Upload checksum
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: ${{ inputs.artifacts-dir }}.sha256sum
path: ${{ inputs.working-dir }}/${{ inputs.artifacts-dir }}.sha256sum

.github/doc-label-config.yml vendored Normal file

@@ -0,0 +1,4 @@
Doc not needed:
- '- \[x\] This PR does not require documentation updates.'
Doc update required:
- '- \[ \] This PR does not require documentation updates.'


@@ -1,8 +1,10 @@
I hereby agree to the terms of the [GreptimeDB CLA](https://gist.github.com/xtang/6378857777706e568c1949c7578592cc)
I hereby agree to the terms of the [GreptimeDB CLA](https://github.com/GreptimeTeam/.github/blob/main/CLA.md).
## Refer to a related PR or issue link (optional)
## What's changed and what's your intention?
_PLEASE DO NOT LEAVE THIS EMPTY !!!_
__!!! DO NOT LEAVE THIS BLOCK EMPTY !!!__
Please explain IN DETAIL what the changes are in this PR and why they are needed:
@@ -15,5 +17,4 @@ Please explain IN DETAIL what the changes are in this PR and why they are needed
- [ ] I have written the necessary rustdoc comments.
- [ ] I have added the necessary unit tests and integration tests.
## Refer to a related PR or issue link (optional)
- [x] This PR does not require documentation updates.


@@ -107,12 +107,9 @@ function deploy_greptimedb_cluster_with_s3_storage() {
--set storage.s3.bucket="$AWS_CI_TEST_BUCKET" \
--set storage.s3.region="$AWS_REGION" \
--set storage.s3.root="$DATA_ROOT" \
--set storage.s3.secretName=s3-credentials \
--set storage.credentials.secretName=s3-credentials \
--set storage.credentials.secretCreation.enabled=true \
--set storage.credentials.secretCreation.enableEncryption=false \
--set storage.credentials.secretCreation.data.access-key-id="$AWS_ACCESS_KEY_ID" \
--set storage.credentials.secretCreation.data.secret-access-key="$AWS_SECRET_ACCESS_KEY"
--set storage.credentials.accessKeyId="$AWS_ACCESS_KEY_ID" \
--set storage.credentials.secretAccessKey="$AWS_SECRET_ACCESS_KEY"
# Wait for greptimedb cluster to be ready.
while true; do


@@ -1,7 +1,7 @@
on:
push:
branches:
- develop
- main
paths-ignore:
- 'docs/**'
- 'config/**'
@@ -13,14 +13,14 @@ on:
name: Build API docs
env:
RUST_TOOLCHAIN: nightly-2023-10-21
RUST_TOOLCHAIN: nightly-2024-04-18
jobs:
apidoc:
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
@@ -40,3 +40,4 @@ jobs:
uses: JamesIves/github-pages-deploy-action@v4
with:
folder: target/doc
single-commit: true


@@ -55,10 +55,18 @@ on:
description: Build and push images to DockerHub and ACR
required: false
default: true
cargo_profile:
type: choice
description: The cargo profile to use in building GreptimeDB.
default: nightly
options:
- dev
- release
- nightly
# Use env variables to control all the release process.
env:
CARGO_PROFILE: nightly
CARGO_PROFILE: ${{ inputs.cargo_profile }}
# Controls whether to run tests, include unit-test, integration-test and sqlness.
DISABLE_RUN_TESTS: ${{ inputs.skip_test || vars.DEFAULT_SKIP_TEST }}
@@ -93,7 +101,7 @@ jobs:
version: ${{ steps.create-version.outputs.version }}
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -147,12 +155,12 @@ jobs:
runs-on: ${{ needs.allocate-runners.outputs.linux-amd64-runner }}
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Checkout greptimedb
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: ${{ inputs.repository }}
ref: ${{ inputs.commit }}
@@ -176,12 +184,12 @@ jobs:
runs-on: ${{ needs.allocate-runners.outputs.linux-arm64-runner }}
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
- name: Checkout greptimedb
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
repository: ${{ inputs.repository }}
ref: ${{ inputs.commit }}
@@ -208,7 +216,7 @@ jobs:
outputs:
build-result: ${{ steps.set-build-result.outputs.build-result }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -239,7 +247,7 @@ jobs:
runs-on: ubuntu-20.04
continue-on-error: true
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -273,7 +281,7 @@ jobs:
]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -298,7 +306,7 @@ jobs:
]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -322,14 +330,14 @@ jobs:
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:
- name: Notify nightly build successful result
- name: Notify dev build successful result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-result == 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has completed successfully."}
- name: Notify nightly build failed result
- name: Notify dev build failed result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-result != 'success' }}
with:


@@ -1,7 +1,7 @@
on:
merge_group:
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
types: [ opened, synchronize, reopened, ready_for_review ]
paths-ignore:
- 'docs/**'
- 'config/**'
@@ -9,9 +9,9 @@ on:
- '.dockerignore'
- 'docker/**'
- '.gitignore'
- 'grafana/**'
push:
branches:
- develop
- main
paths-ignore:
- 'docs/**'
@@ -20,6 +20,7 @@ on:
- '.dockerignore'
- 'docker/**'
- '.gitignore'
- 'grafana/**'
workflow_dispatch:
name: CI
@@ -29,27 +30,31 @@ concurrency:
cancel-in-progress: true
env:
RUST_TOOLCHAIN: nightly-2023-10-21
RUST_TOOLCHAIN: nightly-2024-04-18
jobs:
typos:
name: Spell Check with Typos
check-typos-and-docs:
name: Check typos and docs
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- uses: crate-ci/typos@v1.13.10
- name: Check the config docs
run: |
make config-docs && \
git diff --name-only --exit-code ./config/config.md \
|| (echo "'config/config.md' is not up-to-date, please run 'make config-docs'." && exit 1)
check:
name: Check
if: github.event.pull_request.draft == false
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ windows-latest-8-cores, ubuntu-20.04 ]
os: [ windows-latest, ubuntu-20.04 ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
@@ -57,37 +62,78 @@ jobs:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
# Shares across multiple jobs
# Shares with `Clippy` job
shared-key: "check-lint"
- name: Run cargo check
run: cargo check --locked --workspace --all-targets
toml:
name: Toml Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- uses: dtolnay/rust-toolchain@master
with:
toolchain: stable
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
# Shares across multiple jobs
shared-key: "check-toml"
- name: Install taplo
run: cargo +stable install taplo-cli --version ^0.8 --locked
run: cargo +stable install taplo-cli --version ^0.9 --locked
- name: Run taplo
run: taplo format --check
sqlness:
name: Sqlness Test
if: github.event.pull_request.draft == false
build:
name: Build GreptimeDB binaries
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-20.04-8-cores ]
os: [ ubuntu-20.04 ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- uses: Swatinem/rust-cache@v2
with:
# Shares across multiple jobs
shared-key: "build-binaries"
- name: Build greptime binaries
shell: bash
run: cargo build --bin greptime --bin sqlness-runner
- name: Pack greptime binaries
shell: bash
run: |
mkdir bins && \
mv ./target/debug/greptime bins && \
mv ./target/debug/sqlness-runner bins
- name: Print greptime binaries info
run: ls -lh bins
- name: Upload artifacts
uses: ./.github/actions/upload-artifacts
with:
artifacts-dir: bins
version: current
fuzztest:
name: Fuzz Test
needs: build
runs-on: ubuntu-latest
strategy:
matrix:
target: [ "fuzz_create_table", "fuzz_alter_table", "fuzz_create_database" ]
steps:
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
@@ -95,24 +141,95 @@ jobs:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
# Shares across multiple jobs
shared-key: "fuzz-test-targets"
- name: Set Rust Fuzz
shell: bash
run: |
sudo apt update && sudo apt install -y libfuzzer-14-dev
cargo install cargo-fuzz
- name: Download pre-built binaries
uses: actions/download-artifact@v4
with:
name: bins
path: .
- name: Unzip binaries
run: tar -xvf ./bins.tar.gz
- name: Run GreptimeDB
run: |
./bins/greptime standalone start&
- name: Fuzz Test
uses: ./.github/actions/fuzz-test
env:
CUSTOM_LIBFUZZER_PATH: /usr/lib/llvm-14/lib/libFuzzer.a
with:
target: ${{ matrix.target }}
sqlness:
name: Sqlness Test
needs: build
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-20.04 ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
- name: Download pre-built binaries
uses: actions/download-artifact@v4
with:
name: bins
path: .
- name: Unzip binaries
run: tar -xvf ./bins.tar.gz
- name: Run sqlness
run: cargo sqlness
run: RUST_BACKTRACE=1 ./bins/sqlness-runner -c ./tests/cases --bins-dir ./bins
- name: Upload sqlness logs
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: sqlness-logs
path: ${{ runner.temp }}/greptime-*.log
path: /tmp/greptime-*.log
retention-days: 3
sqlness-kafka-wal:
name: Sqlness Test with Kafka Wal
needs: build
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-20.04 ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
- name: Download pre-built binaries
uses: actions/download-artifact@v4
with:
name: bins
path: .
- name: Unzip binaries
run: tar -xvf ./bins.tar.gz
- name: Setup kafka server
working-directory: tests-integration/fixtures/kafka
run: docker compose -f docker-compose-standalone.yml up -d --wait
- name: Run sqlness
run: RUST_BACKTRACE=1 ./bins/sqlness-runner -w kafka -k 127.0.0.1:9092 -c ./tests/cases --bins-dir ./bins
- name: Upload sqlness logs
if: always()
uses: actions/upload-artifact@v4
with:
name: sqlness-logs-with-kafka-wal
path: /tmp/greptime-*.log
retention-days: 3
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
@@ -121,17 +238,19 @@ jobs:
components: rustfmt
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
# Shares across multiple jobs
shared-key: "check-rust-fmt"
- name: Run cargo fmt
run: cargo fmt --all -- --check
clippy:
name: Clippy
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
@@ -140,6 +259,10 @@ jobs:
components: clippy
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
# Shares across multiple jobs
# Shares with `Check` job
shared-key: "check-lint"
- name: Run cargo clippy
run: cargo clippy --workspace --all-targets -- -D warnings
@@ -148,8 +271,8 @@ jobs:
runs-on: ubuntu-20.04-8-cores
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: KyleMayes/install-llvm-action@v1
@@ -162,12 +285,19 @@ jobs:
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
# Shares cross multiple jobs
shared-key: "coverage-test"
- name: Docker Cache
uses: ScribeMD/docker-cache@0.3.7
with:
key: docker-${{ runner.os }}-coverage
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Install Python
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: '3.10'
- name: Install PyArrow Package
@@ -175,23 +305,45 @@ jobs:
- name: Setup etcd server
working-directory: tests-integration/fixtures/etcd
run: docker compose -f docker-compose-standalone.yml up -d --wait
- name: Setup kafka server
working-directory: tests-integration/fixtures/kafka
run: docker compose -f docker-compose-standalone.yml up -d --wait
- name: Run nextest cases
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend -F dashboard
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
GT_S3_REGION: ${{ secrets.S3_REGION }}
GT_S3_BUCKET: ${{ vars.AWS_CI_TEST_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}
GT_S3_REGION: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
GT_ETCD_ENDPOINTS: http://127.0.0.1:2379
GT_KAFKA_ENDPOINTS: 127.0.0.1:9092
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Codecov upload
uses: codecov/codecov-action@v2
uses: codecov/codecov-action@v4
with:
token: ${{ secrets.CODECOV_TOKEN }}
files: ./lcov.info
flags: rust
fail_ci_if_error: false
verbose: true
compat:
name: Compatibility Test
needs: build
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
- name: Download pre-built binaries
uses: actions/download-artifact@v4
with:
name: bins
path: .
- name: Unzip binaries
run: |
mkdir -p ./bins/current
tar -xvf ./bins.tar.gz --strip-components=1 -C ./bins/current
- run: ./tests/compat/test-compat.sh 0.6.0


@@ -14,7 +14,7 @@ jobs:
runs-on: ubuntu-20.04
steps:
- name: create an issue in doc repo
uses: dacbd/create-issue-action@main
uses: dacbd/create-issue-action@v1.2.1
with:
owner: GreptimeTeam
repo: docs
@@ -28,7 +28,7 @@ jobs:
runs-on: ubuntu-20.04
steps:
- name: create an issue in cloud repo
uses: dacbd/create-issue-action@main
uses: dacbd/create-issue-action@v1.2.1
with:
owner: GreptimeTeam
repo: greptimedb-cloud

.github/workflows/doc-label.yml vendored Normal file

@@ -0,0 +1,36 @@
name: "PR Doc Labeler"
on:
pull_request_target:
types: [opened, edited, synchronize, ready_for_review, auto_merge_enabled, labeled, unlabeled]
permissions:
pull-requests: write
contents: read
jobs:
triage:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-latest
steps:
- uses: github/issue-labeler@v3.4
with:
configuration-path: .github/doc-label-config.yml
enable-versioned-regex: false
repo-token: ${{ secrets.GITHUB_TOKEN }}
sync-labels: 1
- name: create an issue in doc repo
uses: dacbd/create-issue-action@v1.2.1
if: ${{ github.event.action == 'opened' && contains(github.event.pull_request.body, '- [ ] This PR does not require documentation updates.') }}
with:
owner: GreptimeTeam
repo: docs
token: ${{ secrets.DOCS_REPO_TOKEN }}
title: Update docs for ${{ github.event.issue.title || github.event.pull_request.title }}
body: |
A document change request is generated from
${{ github.event.issue.html_url || github.event.pull_request.html_url }}
- name: Check doc labels
uses: docker://agilepathway/pull-request-label-checker:latest
with:
one_of: Doc update required,Doc not needed
repo_token: ${{ secrets.GITHUB_TOKEN }}


@@ -9,9 +9,9 @@ on:
- '.dockerignore'
- 'docker/**'
- '.gitignore'
- 'grafana/**'
push:
branches:
- develop
- main
paths:
- 'docs/**'
@@ -20,6 +20,7 @@ on:
- '.dockerignore'
- 'docker/**'
- '.gitignore'
- 'grafana/**'
workflow_dispatch:
name: CI
@@ -32,39 +33,46 @@ jobs:
name: Spell Check with Typos
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
- uses: crate-ci/typos@v1.13.10
check:
name: Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
clippy:
name: Clippy
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
sqlness:
name: Sqlness Test
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-20.04 ]
steps:
- run: 'echo "No action required"'
sqlness-kafka-wal:
name: Sqlness Test with Kafka Wal
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-20.04 ]
steps:
- run: 'echo "No action required"'


@@ -3,7 +3,7 @@ name: License checker
on:
push:
branches:
- develop
- main
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
jobs:
@@ -11,6 +11,6 @@ jobs:
runs-on: ubuntu-20.04
name: license-header-check
steps:
- uses: actions/checkout@v2
- uses: actions/checkout@v4
- name: Check License Header
uses: korandoru/hawkeye@v3
uses: korandoru/hawkeye@v5


@@ -85,7 +85,7 @@ jobs:
version: ${{ steps.create-version.outputs.version }}
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -137,7 +137,7 @@ jobs:
]
runs-on: ${{ needs.allocate-runners.outputs.linux-amd64-runner }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -156,7 +156,7 @@ jobs:
]
runs-on: ${{ needs.allocate-runners.outputs.linux-arm64-runner }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -179,7 +179,7 @@ jobs:
outputs:
nightly-build-result: ${{ steps.set-nightly-build-result.outputs.nightly-build-result }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -211,7 +211,7 @@ jobs:
# The ACR have daily sync with DockerHub, so don't worry about the image not being updated.
continue-on-error: true
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -245,7 +245,7 @@ jobs:
]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -270,7 +270,7 @@ jobs:
]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0


@@ -12,19 +12,20 @@ concurrency:
cancel-in-progress: true
env:
RUST_TOOLCHAIN: nightly-2023-10-21
RUST_TOOLCHAIN: nightly-2024-04-18
jobs:
sqlness:
name: Sqlness Test
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ windows-latest-8-cores ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v4.1.0
- uses: arduino/setup-protoc@v1
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
@@ -44,19 +45,20 @@ jobs:
{"text": "Nightly CI failed for sqlness tests"}
- name: Upload sqlness logs
if: always()
uses: actions/upload-artifact@v3
uses: actions/upload-artifact@v4
with:
name: sqlness-logs
path: ${{ runner.temp }}/greptime-*.log
path: /tmp/greptime-*.log
retention-days: 3
test-on-windows:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: windows-latest-8-cores
timeout-minutes: 60
steps:
- run: git config --global core.autocrlf false
- uses: actions/checkout@v4.1.0
- uses: arduino/setup-protoc@v1
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install Rust toolchain
@@ -69,7 +71,7 @@ jobs:
- name: Install Cargo Nextest
uses: taiki-e/install-action@nextest
- name: Install Python
uses: actions/setup-python@v4
uses: actions/setup-python@v5
with:
python-version: '3.10'
- name: Install PyArrow Package
@@ -83,10 +85,10 @@ jobs:
env:
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
GT_S3_REGION: ${{ secrets.S3_REGION }}
GT_S3_BUCKET: ${{ vars.AWS_CI_TEST_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}
GT_S3_REGION: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Notify slack if failed
if: failure()

View File

@@ -9,10 +9,11 @@ on:
jobs:
sqlness-test:
name: Run sqlness test
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-22.04
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0

View File

@@ -13,7 +13,7 @@ jobs:
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: thehanimo/pr-title-checker@v1.3.4
- uses: thehanimo/pr-title-checker@v1.4.2
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
pass_on_octokit_error: false
@@ -22,7 +22,7 @@ jobs:
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: thehanimo/pr-title-checker@v1.3.4
- uses: thehanimo/pr-title-checker@v1.4.2
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
pass_on_octokit_error: false

View File

@@ -30,7 +30,7 @@ jobs:
runs-on: ubuntu-20.04-16-cores
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0

View File

@@ -82,7 +82,7 @@ on:
# Use env variables to control all the release process.
env:
# The arguments of building greptime.
RUST_TOOLCHAIN: nightly-2023-10-21
RUST_TOOLCHAIN: nightly-2024-04-18
CARGO_PROFILE: nightly
# Controls whether to run tests, include unit-test, integration-test and sqlness.
@@ -91,7 +91,7 @@ env:
# The scheduled version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-YYYYMMDD', like v0.2.0-nightly-20230313;
NIGHTLY_RELEASE_PREFIX: nightly
# Note: The NEXT_RELEASE_VERSION should be modified manually by every formal release.
NEXT_RELEASE_VERSION: v0.5.0
NEXT_RELEASE_VERSION: v0.8.0
jobs:
allocate-runners:
@@ -114,7 +114,7 @@ jobs:
version: ${{ steps.create-version.outputs.version }}
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -168,7 +168,7 @@ jobs:
]
runs-on: ${{ needs.allocate-runners.outputs.linux-amd64-runner }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -187,7 +187,7 @@ jobs:
]
runs-on: ${{ needs.allocate-runners.outputs.linux-arm64-runner }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -221,12 +221,14 @@ jobs:
arch: x86_64-apple-darwin
artifacts-dir-prefix: greptime-darwin-amd64-pyo3
runs-on: ${{ matrix.os }}
outputs:
build-macos-result: ${{ steps.set-build-macos-result.outputs.build-macos-result }}
needs: [
allocate-runners,
]
if: ${{ inputs.build_macos_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -240,6 +242,11 @@ jobs:
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
artifacts-dir: ${{ matrix.artifacts-dir-prefix }}-${{ needs.allocate-runners.outputs.version }}
- name: Set build macos result
id: set-build-macos-result
run: |
echo "build-macos-result=success" >> $GITHUB_OUTPUT
build-windows-artifacts:
name: Build Windows artifacts
strategy:
@@ -255,6 +262,8 @@ jobs:
features: pyo3_backend,servers/dashboard
artifacts-dir-prefix: greptime-windows-amd64-pyo3
runs-on: ${{ matrix.os }}
outputs:
build-windows-result: ${{ steps.set-build-windows-result.outputs.build-windows-result }}
needs: [
allocate-runners,
]
@@ -262,7 +271,7 @@ jobs:
steps:
- run: git config --global core.autocrlf false
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -276,6 +285,11 @@ jobs:
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
artifacts-dir: ${{ matrix.artifacts-dir-prefix }}-${{ needs.allocate-runners.outputs.version }}
- name: Set build windows result
id: set-build-windows-result
run: |
echo "build-windows-result=success" >> $Env:GITHUB_OUTPUT
release-images-to-dockerhub:
name: Build and push images to DockerHub
if: ${{ inputs.release_images || github.event_name == 'push' || github.event_name == 'schedule' }}
@@ -285,8 +299,10 @@ jobs:
build-linux-arm64-artifacts,
]
runs-on: ubuntu-2004-16-cores
outputs:
build-image-result: ${{ steps.set-build-image-result.outputs.build-image-result }}
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -299,6 +315,11 @@ jobs:
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
- name: Set build image result
id: set-build-image-result
run: |
echo "build-image-result=success" >> $GITHUB_OUTPUT
release-cn-artifacts:
name: Release artifacts to CN region
if: ${{ inputs.release_images || github.event_name == 'push' || github.event_name == 'schedule' }}
@@ -316,7 +337,7 @@ jobs:
# The ACR has a daily sync with DockerHub, so don't worry about the image not being updated.
continue-on-error: true
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -352,7 +373,7 @@ jobs:
]
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -375,7 +396,7 @@ jobs:
]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -400,7 +421,7 @@ jobs:
]
steps:
- name: Checkout
uses: actions/checkout@v3
uses: actions/checkout@v4
with:
fetch-depth: 0
@@ -413,3 +434,29 @@ jobs:
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
notification:
if: ${{ always() }} # Not requiring successful dependent jobs, always run.
name: Send notification to Greptime team
needs: [
release-images-to-dockerhub,
build-macos-artifacts,
build-windows-artifacts,
]
runs-on: ubuntu-20.04
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:
- name: Notify successful release result
uses: slackapi/slack-github-action@v1.25.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-image-result == 'success' && needs.build-windows-artifacts.outputs.build-windows-result == 'success' && needs.build-macos-artifacts.outputs.build-macos-result == 'success' }}
with:
payload: |
{"text": "GreptimeDB's release version has completed successfully."}
- name: Notify failed release result
uses: slackapi/slack-github-action@v1.25.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-image-result != 'success' || needs.build-windows-artifacts.outputs.build-windows-result != 'success' || needs.build-macos-artifacts.outputs.build-macos-result != 'success' }}
with:
payload: |
{"text": "GreptimeDB's release version has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/release.yml'."}

View File

@@ -1,26 +0,0 @@
name: size-labeler
on: [pull_request]
jobs:
labeler:
runs-on: ubuntu-latest
name: Label the PR size
steps:
- uses: codelytv/pr-size-labeler@v1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
s_label: 'Size: S'
s_max_size: '100'
m_label: 'Size: M'
m_max_size: '500'
l_label: 'Size: L'
l_max_size: '1000'
xl_label: 'Size: XL'
fail_if_xl: 'false'
message_if_xl: >
This PR exceeds the recommended size of 1000 lines.
Please make sure you are NOT addressing multiple issues with one PR.
Note this PR might be rejected due to its size.
github_api_url: 'api.github.com'
files_to_ignore: 'Cargo.lock'

21
.github/workflows/unassign.yml vendored Normal file
View File

@@ -0,0 +1,21 @@
name: Auto Unassign
on:
schedule:
- cron: '4 2 * * *'
workflow_dispatch:
permissions:
contents: read
issues: write
pull-requests: write
jobs:
auto-unassign:
name: Auto Unassign
runs-on: ubuntu-latest
steps:
- name: Auto Unassign
uses: tisonspieces/auto-unassign@main
with:
token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
repository: ${{ github.repository }}

4
.gitignore vendored
View File

@@ -46,3 +46,7 @@ benchmarks/data
*.code-workspace
venv/
# Fuzz tests
tests-fuzz/artifacts/
tests-fuzz/corpus/

View File

@@ -1,132 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
info@greptime.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

View File

@@ -10,7 +10,7 @@ Follow our [README](https://github.com/GreptimeTeam/greptimedb#readme) to get th
It can feel intimidating to contribute to a complex project, but it can also be exciting and fun. These general notes will help everyone participate in this communal activity.
- Follow the [Code of Conduct](https://github.com/GreptimeTeam/greptimedb/blob/develop/CODE_OF_CONDUCT.md)
- Follow the [Code of Conduct](https://github.com/GreptimeTeam/greptimedb/blob/main/CODE_OF_CONDUCT.md)
- Small changes make huge differences. We will happily accept a PR making a single character change if it helps move forward. Don't wait to have everything working.
- Check the closed issues before opening your issue.
- Try to follow the existing style of the code.
@@ -26,7 +26,7 @@ Pull requests are great, but we accept all kinds of other help if you like. Such
## Code of Conduct
Also, there are things we are not looking for because they don't match the goals of the product or benefit the community. Please read the [Code of Conduct](https://github.com/GreptimeTeam/greptimedb/blob/develop/CODE_OF_CONDUCT.md); we hope everyone keeps good manners and becomes a respected member.
Also, there are things we are not looking for because they don't match the goals of the product or benefit the community. Please read the [Code of Conduct](https://github.com/GreptimeTeam/greptimedb/blob/main/CODE_OF_CONDUCT.md); we hope everyone keeps good manners and becomes a respected member.
## License
@@ -50,7 +50,7 @@ GreptimeDB uses the [Apache 2.0 license](https://github.com/GreptimeTeam/greptim
- To ensure that the community is free and confident in its ability to use your contributions, please sign the Contributor License Agreement (CLA), which will be incorporated into the pull request process.
- Make sure all files have a proper license header (run `docker run --rm -v $(pwd):/github/workspace ghcr.io/korandoru/hawkeye-native:v3 format` from the project root).
- Make sure all your code is formatted and follows the [coding style](https://pingcap.github.io/style-guide/rust/).
- Make sure all your code is formatted and follows the [coding style](https://pingcap.github.io/style-guide/rust/) and [style guide](http://github.com/greptimeTeam/docs/style-guide.md).
- Make sure all unit tests pass (using `cargo test --workspace` or [nextest](https://nexte.st/index.html) via `cargo nextest run`).
- Make sure all clippy warnings are fixed (check locally with `cargo clippy --workspace --all-targets -- -D warnings`); a consolidated sketch of these commands follows.
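A consolidated local pre-submit run based on the commands from the checklist above (a sketch; adjust paths and options to your environment):

```shell
# Check license headers, lints, and tests before opening a PR.
docker run --rm -v $(pwd):/github/workspace ghcr.io/korandoru/hawkeye-native:v3 format
cargo clippy --workspace --all-targets -- -D warnings
cargo nextest run   # or: cargo test --workspace
```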

4364
Cargo.lock generated

File diff suppressed because it is too large.

View File

@@ -18,6 +18,7 @@ members = [
"src/common/grpc-expr",
"src/common/mem-prof",
"src/common/meta",
"src/common/plugins",
"src/common/procedure",
"src/common/procedure-test",
"src/common/query",
@@ -29,9 +30,11 @@ members = [
"src/common/time",
"src/common/decimal",
"src/common/version",
"src/common/wal",
"src/datanode",
"src/datatypes",
"src/file-engine",
"src/flow",
"src/frontend",
"src/log-store",
"src/meta-client",
@@ -52,85 +55,118 @@ members = [
"src/store-api",
"src/table",
"src/index",
"tests-fuzz",
"tests-integration",
"tests/runner",
]
resolver = "2"
[workspace.package]
version = "0.4.4"
version = "0.7.2"
edition = "2021"
license = "Apache-2.0"
[workspace.lints]
clippy.print_stdout = "warn"
clippy.print_stderr = "warn"
clippy.implicit_clone = "warn"
clippy.readonly_write_lock = "allow"
rust.unknown_lints = "deny"
# Remove this after https://github.com/PyO3/pyo3/issues/4094
rust.non_local_definitions = "allow"
[workspace.dependencies]
# We turn off default-features for some dependencies here so the workspaces which inherit them can
# selectively turn them on if needed, since we can override default-features = true (from false)
# for the inherited dependency but cannot do the reverse (override from true to false).
#
# See for more details: https://github.com/rust-lang/cargo/issues/11329
ahash = { version = "0.8", features = ["compile-time-rng"] }
aquamarine = "0.3"
arrow = { version = "47.0" }
arrow-array = "47.0"
arrow-flight = "47.0"
arrow-schema = { version = "47.0", features = ["serde"] }
arrow = { version = "51.0.0", features = ["prettyprint"] }
arrow-array = { version = "51.0.0", default-features = false, features = ["chrono-tz"] }
arrow-flight = "51.0"
arrow-ipc = { version = "51.0.0", default-features = false, features = ["lz4"] }
arrow-schema = { version = "51.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
axum = { version = "0.6", features = ["headers"] }
base64 = "0.21"
bigdecimal = "0.4.2"
bitflags = "2.4.1"
bytemuck = "1.12"
bytes = { version = "1.5", features = ["serde"] }
chrono = { version = "0.4", features = ["serde"] }
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
datafusion-substrait = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
clap = { version = "4.4", features = ["derive"] }
dashmap = "5.4"
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-functions = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
datafusion-substrait = { git = "https://github.com/apache/arrow-datafusion.git", rev = "34eda15b73a9e278af8844b30ed2f1c21c10359c" }
derive_builder = "0.12"
etcd-client = "0.12"
dotenv = "0.15"
# TODO(LFC): Wait for https://github.com/etcdv3/etcd-client/pull/76
etcd-client = { git = "https://github.com/MichaelScofield/etcd-client.git", rev = "4c371e9b3ea8e0a8ee2f9cbd7ded26e54a45df3b" }
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "b1d403088f02136bcebde53d604f491c260ca8e2" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "73ac0207ab71dfea48f30259ffdb611501b5ecb8" }
humantime = "2.1"
humantime-serde = "1.1"
itertools = "0.10"
lazy_static = "1.4"
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "abbd357c1e193cd270ea65ee7652334a150b628f" }
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "80b72716dcde47ec4161478416a5c6c21343364d" }
mockall = "0.11.4"
moka = "0.12"
notify = "6.1"
num_cpus = "1.16"
once_cell = "1.18"
opentelemetry-proto = { git = "https://github.com/waynexia/opentelemetry-rust.git", rev = "33841b38dda79b15f2024952be5f32533325ca02", features = [
opentelemetry-proto = { version = "0.5", features = [
"gen-tonic",
"metrics",
"trace",
] }
parquet = "47.0"
parquet = { version = "51.0.0", default-features = false, features = ["arrow", "async", "object_store"] }
paste = "1.0"
pin-project = "1.0"
prometheus = { version = "0.13.3", features = ["process"] }
prost = "0.12"
raft-engine = { git = "https://github.com/tikv/raft-engine.git", rev = "22dfb426cd994602b57725ef080287d3e53db479" }
raft-engine = { version = "0.4.1", default-features = false }
rand = "0.8"
regex = "1.8"
regex-automata = { version = "0.1", features = ["transducer"] }
regex-automata = { version = "0.4" }
reqwest = { version = "0.11", default-features = false, features = [
"json",
"rustls-tls-native-roots",
"stream",
"multipart",
] }
rskafka = "0.5"
rust_decimal = "1.33"
schemars = "0.8"
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
smallvec = "1"
serde_json = { version = "1.0", features = ["float_roundtrip"] }
serde_with = "3"
smallvec = { version = "1", features = ["serde"] }
snafu = "0.7"
# on branch v0.38.x
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "6a93567ae38d42be5c8d08b13c8ff4dde26502ef", features = [
sysinfo = "0.30"
# on branch v0.44.x
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "c919990bf62ad38d2b0c0a3bc90b26ad919d51b0", features = [
"visitor",
] }
strum = { version = "0.25", features = ["derive"] }
tempfile = "3"
tokio = { version = "1.28", features = ["full"] }
tokio = { version = "1.36", features = ["full"] }
tokio-stream = { version = "0.1" }
tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.7"
tonic = { version = "0.10", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
toml = "0.8.8"
tonic = { version = "0.11", features = ["tls"] }
uuid = { version = "1.7", features = ["serde", "v4", "fast-rng"] }
zstd = "0.13"
## workspaces members
api = { path = "src/api" }
@@ -151,7 +187,7 @@ common-grpc-expr = { path = "src/common/grpc-expr" }
common-macro = { path = "src/common/macro" }
common-mem-prof = { path = "src/common/mem-prof" }
common-meta = { path = "src/common/meta" }
common-pprof = { path = "src/common/pprof" }
common-plugins = { path = "src/common/plugins" }
common-procedure = { path = "src/common/procedure" }
common-procedure-test = { path = "src/common/procedure-test" }
common-query = { path = "src/common/query" }
@@ -161,20 +197,23 @@ common-telemetry = { path = "src/common/telemetry" }
common-test-util = { path = "src/common/test-util" }
common-time = { path = "src/common/time" }
common-version = { path = "src/common/version" }
common-wal = { path = "src/common/wal" }
datanode = { path = "src/datanode" }
datatypes = { path = "src/datatypes" }
file-engine = { path = "src/file-engine" }
frontend = { path = "src/frontend" }
index = { path = "src/index" }
log-store = { path = "src/log-store" }
meta-client = { path = "src/meta-client" }
meta-srv = { path = "src/meta-srv" }
mito = { path = "src/mito" }
metric-engine = { path = "src/metric-engine" }
mito2 = { path = "src/mito2" }
object-store = { path = "src/object-store" }
operator = { path = "src/operator" }
partition = { path = "src/partition" }
plugins = { path = "src/plugins" }
promql = { path = "src/promql" }
puffin = { path = "src/puffin" }
query = { path = "src/query" }
script = { path = "src/script" }
servers = { path = "src/servers" }
@@ -186,10 +225,10 @@ table = { path = "src/table" }
[workspace.dependencies.meter-macros]
git = "https://github.com/GreptimeTeam/greptime-meter.git"
rev = "abbd357c1e193cd270ea65ee7652334a150b628f"
rev = "80b72716dcde47ec4161478416a5c6c21343364d"
[profile.release]
debug = true
debug = 1
[profile.nightly]
inherits = "release"

View File

@@ -3,6 +3,7 @@ CARGO_PROFILE ?=
FEATURES ?=
TARGET_DIR ?=
TARGET ?=
BUILD_BIN ?= greptime
CARGO_BUILD_OPTS := --locked
IMAGE_REGISTRY ?= docker.io
IMAGE_NAMESPACE ?= greptime
@@ -45,6 +46,10 @@ ifneq ($(strip $(TARGET)),)
CARGO_BUILD_OPTS += --target ${TARGET}
endif
ifneq ($(strip $(BUILD_BIN)),)
CARGO_BUILD_OPTS += --bin ${BUILD_BIN}
endif
ifneq ($(strip $(RELEASE)),)
CARGO_BUILD_OPTS += --release
endif
@@ -65,7 +70,7 @@ endif
build: ## Build debug version greptime.
cargo ${CARGO_EXTENSION} build ${CARGO_BUILD_OPTS}
.POHNY: build-by-dev-builder
.PHONY: build-by-dev-builder
build-by-dev-builder: ## Build greptime by dev-builder.
docker run --network=host \
-v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry \
@@ -144,11 +149,12 @@ multi-platform-buildx: ## Create buildx multi-platform builder.
docker buildx inspect ${BUILDX_BUILDER_NAME} || docker buildx create --name ${BUILDX_BUILDER_NAME} --driver docker-container --bootstrap --use
##@ Test
.PHONY: test
test: nextest ## Run unit and integration tests.
cargo nextest run ${NEXTEST_OPTS}
.PHONY: nextest ## Install nextest tools.
nextest:
.PHONY: nextest
nextest: ## Install nextest tools.
cargo --list | grep nextest || cargo install cargo-nextest --locked
.PHONY: sqlness-test
@@ -163,6 +169,10 @@ check: ## Cargo check all the targets.
clippy: ## Check clippy rules.
cargo clippy --workspace --all-targets --all-features -- -D warnings
.PHONY: fix-clippy
fix-clippy: ## Fix clippy violations.
cargo clippy --workspace --all-targets --all-features --fix
.PHONY: fmt-check
fmt-check: ## Check code format.
cargo fmt --all -- --check
@@ -182,6 +192,16 @@ run-it-in-container: start-etcd ## Run integration tests in dev-builder.
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:latest \
make test sqlness-test BUILD_JOBS=${BUILD_JOBS}
##@ Docs
config-docs: ## Generate configuration documentation from toml files.
docker run --rm \
-v ${PWD}:/greptimedb \
-w /greptimedb/config \
toml2docs/toml2docs:latest \
-p '##' \
-t ./config-docs-template.md \
-o ./config.md
##@ General
# The help target prints out all targets with their descriptions organized
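The `config-docs` target above can be invoked like any other make target (a sketch; it assumes Docker is available locally, since the target runs the `toml2docs/toml2docs` image):

```shell
# Regenerate config/config.md from the TOML files under config/.
make config-docs
```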

246
README.md
View File

@@ -1,150 +1,159 @@
<p align="center">
<picture>
<source media="(prefers-color-scheme: light)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding.png">
<source media="(prefers-color-scheme: dark)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding-dark.png">
<img alt="GreptimeDB Logo" src="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@develop/docs/logo-text-padding.png" width="400px">
<source media="(prefers-color-scheme: light)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@main/docs/logo-text-padding.png">
<source media="(prefers-color-scheme: dark)" srcset="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@main/docs/logo-text-padding-dark.png">
<img alt="GreptimeDB Logo" src="https://cdn.jsdelivr.net/gh/GreptimeTeam/greptimedb@main/docs/logo-text-padding.png" width="400px">
</picture>
</p>
<h1 align="center">Cloud-scale, Fast and Efficient Time Series Database</h1>
<div align="center">
<h3 align="center">
The next-generation hybrid time-series/analytics processing database in the cloud
</h3>
<a href="https://greptime.com/product/cloud">GreptimeCloud</a> |
<a href="https://docs.greptime.com/">User guide</a> |
<a href="https://greptimedb.rs/">API Docs</a> |
<a href="https://github.com/GreptimeTeam/greptimedb/issues/3412">Roadmap 2024</a>
</h4>
<p align="center">
<a href="https://codecov.io/gh/GrepTimeTeam/greptimedb"><img src="https://codecov.io/gh/GrepTimeTeam/greptimedb/branch/develop/graph/badge.svg?token=FITFDI3J3C"></img></a>
&nbsp;
<a href="https://github.com/GreptimeTeam/greptimedb/actions/workflows/develop.yml"><img src="https://github.com/GreptimeTeam/greptimedb/actions/workflows/develop.yml/badge.svg" alt="CI"></img></a>
&nbsp;
<a href="https://github.com/greptimeTeam/greptimedb/blob/develop/LICENSE"><img src="https://img.shields.io/github/license/greptimeTeam/greptimedb"></a>
</p>
<a href="https://github.com/GreptimeTeam/greptimedb/releases/latest">
<img src="https://img.shields.io/github/v/release/GreptimeTeam/greptimedb.svg" alt="Version"/>
</a>
<a href="https://github.com/GreptimeTeam/greptimedb/releases/latest">
<img src="https://img.shields.io/github/release-date/GreptimeTeam/greptimedb.svg" alt="Releases"/>
</a>
<a href="https://hub.docker.com/r/greptime/greptimedb/">
<img src="https://img.shields.io/docker/pulls/greptime/greptimedb.svg" alt="Docker Pulls"/>
</a>
<a href="https://github.com/GreptimeTeam/greptimedb/actions/workflows/develop.yml">
<img src="https://github.com/GreptimeTeam/greptimedb/actions/workflows/develop.yml/badge.svg" alt="GitHub Actions"/>
</a>
<a href="https://codecov.io/gh/GrepTimeTeam/greptimedb">
<img src="https://codecov.io/gh/GrepTimeTeam/greptimedb/branch/main/graph/badge.svg?token=FITFDI3J3C" alt="Codecov"/>
</a>
<a href="https://github.com/greptimeTeam/greptimedb/blob/main/LICENSE">
<img src="https://img.shields.io/github/license/greptimeTeam/greptimedb" alt="License"/>
</a>
<p align="center">
<a href="https://twitter.com/greptime"><img src="https://img.shields.io/badge/twitter-follow_us-1d9bf0.svg"></a>
&nbsp;
<a href="https://www.linkedin.com/company/greptime/"><img src="https://img.shields.io/badge/linkedin-connect_with_us-0a66c2.svg"></a>
&nbsp;
<a href="https://greptime.com/slack"><img src="https://img.shields.io/badge/slack-GreptimeDB-0abd59?logo=slack" alt="slack" /></a>
</p>
<br/>
## What is GreptimeDB
<a href="https://greptime.com/slack">
<img src="https://img.shields.io/badge/slack-GreptimeDB-0abd59?logo=slack&style=for-the-badge" alt="Slack"/>
</a>
<a href="https://twitter.com/greptime">
<img src="https://img.shields.io/badge/twitter-follow_us-1d9bf0.svg?style=for-the-badge" alt="Twitter"/>
</a>
<a href="https://www.linkedin.com/company/greptime/">
<img src="https://img.shields.io/badge/linkedin-connect_with_us-0a66c2.svg?style=for-the-badge" alt="LinkedIn"/>
</a>
</div>
GreptimeDB is an open-source time-series database with a special focus on
scalability, analytical capabilities and efficiency. It's designed to work on
infrastructure of the cloud era, and users benefit from its elasticity and commodity
storage.
## Introduction
Our core developers have been building time-series data platforms
for years. Based on their best practices, GreptimeDB was built to give you:
**GreptimeDB** is an open-source time-series database focusing on efficiency, scalability, and analytical capabilities.
Designed to work on infrastructure of the cloud era, GreptimeDB benefits users with its elasticity and commodity storage, offering a fast and cost-effective **alternative to InfluxDB** and a **long-term storage for Prometheus**.
- A standalone binary that scales to a highly available distributed cluster, providing a transparent experience for cluster users
- Optimized columnar layout for handling time-series data; compacted, compressed, and stored on various storage backends
- Flexible indexes that tackle high-cardinality issues
- Distributed, parallel query execution, leveraging elastic computing resources
- Native SQL and Python scripting for advanced analytical scenarios
- Widely adopted database protocols and APIs, with native PromQL support
- Extensible table engine architecture for extensive workloads
## Why GreptimeDB
## Quick Start
Our core developers have been building time-series data platforms for years. Based on our best practices, GreptimeDB was built to give you:
### [GreptimePlay](https://greptime.com/playground)
* **Easy horizontal scaling**
Seamless scalability from a standalone binary at the edge to a robust, highly available distributed cluster in the cloud, with a transparent experience for both developers and administrators.
* **Analyzing time-series data**
Query your time-series data with SQL and PromQL. Use Python scripts to facilitate complex analytical tasks.
* **Cloud-native distributed database**
Fully open-source distributed cluster architecture that harnesses the power of cloud-native elastic computing resources.
* **Performant and Cost-effective**
Flexible indexing capabilities and a distributed, parallel-processing query engine tackle high-cardinality issues. An optimized columnar layout handles time-series data, which is compacted, compressed, and stored on various storage backends, particularly cloud object storage, with 50x cost efficiency.
* **Compatible with InfluxDB, Prometheus and more protocols**
Widely adopted database protocols and APIs, including MySQL, PostgreSQL, and Prometheus Remote Storage. [Read more](https://docs.greptime.com/user-guide/clients/overview).
## Try GreptimeDB
### 1. [GreptimePlay](https://greptime.com/playground)
Try out the features of GreptimeDB right from your browser.
### Build
### 2. [GreptimeCloud](https://console.greptime.cloud/)
#### Build from Source
Start instantly with a free cluster.
To compile GreptimeDB from source, you'll need:
### 3. Docker Image
- C/C++ Toolchain: provides the basic tools for compiling and linking. This is
available as `build-essential` on Ubuntu and under a similar name on other platforms.
- Rust: the easiest way to install Rust is to use
[`rustup`](https://rustup.rs/), which will check our `rust-toolchain` file and
install the correct Rust version for you.
- Protobuf: `protoc` is required for compiling `.proto` files. `protobuf` is
available from the major package managers on macOS and Linux distributions. You can
find installation instructions [here](https://grpc.io/docs/protoc-installation/).
**Note that `protoc` version needs to be >= 3.15** because we have used the `optional`
keyword. You can check it with `protoc --version`.
- python3-dev or python3-devel (optional, only needed if you want to run scripts in CPython; this also requires enabling the `pyo3_backend` feature when compiling, either via `cargo run -F pyo3_backend` or by adding `pyo3_backend` to `features.default` in src/script/Cargo.toml, e.g. `default = ["python", "pyo3_backend"]`): this installs the Python shared library required by the Python
scripting engine (in CPython mode). It is available as `python3-dev` on
Ubuntu (install it with `sudo apt install python3-dev`) and as
`python3-devel` on RPM-based distributions (e.g. Fedora, Red Hat, SuSE). macOS's
`Python3` package should include this shared library by default. More details on compiling with PyO3 can be found in [PyO3](https://pyo3.rs/v0.18.1/building_and_distribution#configuring-the-python-version)'s documentation.
The recommended way to install GreptimeDB locally is via Docker:
#### Build with Docker
A Docker image with the necessary dependencies is provided:
```
docker build --network host -f docker/Dockerfile -t greptimedb .
```shell
docker pull greptime/greptimedb
```
### Run
Start GreptimeDB from source code, in standalone mode:
Start a GreptimeDB container with:
```shell
docker run --rm --name greptime --net=host greptime/greptimedb standalone start
```
Read more about [Installation](https://docs.greptime.com/getting-started/installation/overview) in the docs.
## Getting Started
* [Quickstart](https://docs.greptime.com/getting-started/quick-start/overview)
* [Write Data](https://docs.greptime.com/user-guide/clients/overview)
* [Query Data](https://docs.greptime.com/user-guide/query-data/overview)
* [Operations](https://docs.greptime.com/user-guide/operations/overview)
## Build
Check the prerequisites (a quick version check is sketched after this list):
* [Rust toolchain](https://www.rust-lang.org/tools/install) (nightly)
* [Protobuf compiler](https://grpc.io/docs/protoc-installation/) (>= 3.15)
* Python toolchain (optional): required only if building with the PyO3 backend. More details on compiling with PyO3 can be found in its [documentation](https://pyo3.rs/v0.18.1/building_and_distribution#configuring-the-python-version).
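A quick way to confirm the toolchain before building (a minimal sketch; the exact versions reported will vary):

```shell
# Verify the build prerequisites are installed and recent enough.
rustc --version     # rustup selects the nightly toolchain pinned by the repository's rust-toolchain file
protoc --version    # must report libprotoc >= 3.15
```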
Build GreptimeDB binary:
```shell
make
```
Run a standalone server:
```shell
cargo run -- standalone start
```
Or, if you are using the Docker image:
```
docker run -p 4002:4002 -v "$(pwd):/tmp/greptimedb" greptime/greptimedb standalone start
```
Please see the online documentation site for more installation options and [operations info](https://docs.greptime.com/user-guide/operations/overview).
### Get started
Read the [complete getting started guide](https://docs.greptime.com/getting-started/overview) on our [official document site](https://docs.greptime.com/).
To write and query data, GreptimeDB is compatible with multiple [protocols and clients](https://docs.greptime.com/user-guide/clients/overview).
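For instance, once a local instance is running, you can connect with a stock MySQL client (a minimal sketch; the port 4002 comes from the Docker example above and is assumed here to be the MySQL-compatible endpoint):

```shell
# Connect to a locally running GreptimeDB over the MySQL wire protocol.
mysql -h 127.0.0.1 -P 4002
```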
## Resources
### Installation
- [Pre-built Binaries](https://greptime.com/download):
For Linux and macOS, you can easily download pre-built binaries including official releases and nightly builds that are ready to use.
In most cases, downloading the version without PyO3 is sufficient. However, if you plan to run scripts in CPython (and use Python packages like NumPy and Pandas), you will need to download the version with PyO3 and install a Python interpreter whose version matches the one the PyO3 build was compiled against.
We recommend using virtualenv to manage multiple Python versions during installation.
- [Docker Images](https://hub.docker.com/r/greptime/greptimedb) (**recommended**): pre-built
Docker images; this is the easiest way to try GreptimeDB. By default it runs CPython scripts with `pyo3_backend` enabled.
- [`gtctl`](https://github.com/GreptimeTeam/gtctl): the command-line tool for
Kubernetes deployment
### Documentation
- GreptimeDB [User Guide](https://docs.greptime.com/user-guide/concepts/overview)
- GreptimeDB [Developer
Guide](https://docs.greptime.com/developer-guide/overview.html)
- GreptimeDB [internal code document](https://greptimedb.rs)
## Extension
### Dashboard
- [The dashboard UI for GreptimeDB](https://github.com/GreptimeTeam/dashboard)
### SDK
- [GreptimeDB C++ Client](https://github.com/GreptimeTeam/greptimedb-client-cpp)
- [GreptimeDB Erlang Client](https://github.com/GreptimeTeam/greptimedb-client-erl)
- [GreptimeDB Go Client](https://github.com/GreptimeTeam/greptimedb-client-go)
- [GreptimeDB Java Client](https://github.com/GreptimeTeam/greptimedb-client-java)
- [GreptimeDB Python Client](https://github.com/GreptimeTeam/greptimedb-client-py) (WIP)
- [GreptimeDB Rust Client](https://github.com/GreptimeTeam/greptimedb-client-rust)
- [GreptimeDB JavaScript Client](https://github.com/GreptimeTeam/greptime-js-sdk)
- [GreptimeDB Go Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-go)
- [GreptimeDB Java Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-java)
- [GreptimeDB C++ Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-cpp)
- [GreptimeDB Erlang Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-erl)
- [GreptimeDB Rust Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-rust)
- [GreptimeDB JavaScript Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-js)
### Grafana Dashboard
Our official Grafana dashboard is available in the [grafana](grafana/README.md) directory.
## Project Status
This project is in its early stage and under heavy development. We move fast and
break things. Benchmarks on the development branch may not represent its potential
performance. We release pre-built binaries constantly for functional
evaluation. Do not use it in production at the moment.
For future plans, check out [GreptimeDB roadmap](https://github.com/GreptimeTeam/greptimedb/issues/669).
The current version has not yet reached General Availability standards.
In line with our Greptime 2024 Roadmap, we plan to deliver a production-ready
v1.0 release in August. [[Join Force]](https://github.com/GreptimeTeam/greptimedb/issues/3412)
## Community
@@ -154,29 +163,28 @@ and what went wrong. If you have any questions or if you would like to get involved in our
community, please check out:
- GreptimeDB Community on [Slack](https://greptime.com/slack)
- GreptimeDB GitHub [Discussions](https://github.com/GreptimeTeam/greptimedb/discussions)
- Greptime official [Website](https://greptime.com)
- GreptimeDB [GitHub Discussions forum](https://github.com/GreptimeTeam/greptimedb/discussions)
- Greptime official [website](https://greptime.com)
In addition, you may:
- View our official [Blog](https://greptime.com/blogs/index)
- View our official [Blog](https://greptime.com/blogs/)
- Connect with us on [LinkedIn](https://www.linkedin.com/company/greptime/)
- Follow us on [Twitter](https://twitter.com/greptime)
## License
GreptimeDB uses the [Apache 2.0 license][1] to strike a balance between
GreptimeDB uses the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0.txt) to strike a balance between
open contributions and allowing you to use the software however you want.
[1]: <https://github.com/greptimeTeam/greptimedb/blob/develop/LICENSE>
## Contributing
Please refer to [contribution guidelines](CONTRIBUTING.md) for more information.
Please refer to [contribution guidelines](CONTRIBUTING.md) and [internal concepts docs](https://docs.greptime.com/contributor-guide/overview.html) for more information.
## Acknowledgement
- GreptimeDB uses [Apache Arrow](https://arrow.apache.org/) as the memory model and [Apache Parquet](https://parquet.apache.org/) as the persistent file format.
- GreptimeDB's query engine is powered by [Apache Arrow DataFusion](https://github.com/apache/arrow-datafusion).
- [Apache OpenDAL (incubating)](https://opendal.apache.org) gives GreptimeDB a very general and elegant data access abstraction layer.
- GreptimeDB uses [Apache Arrow™](https://arrow.apache.org/) as the memory model and [Apache Parquet™](https://parquet.apache.org/) as the persistent file format.
- GreptimeDB's query engine is powered by [Apache Arrow DataFusion™](https://arrow.apache.org/datafusion/).
- [Apache OpenDAL™](https://opendal.apache.org) gives GreptimeDB a very general and elegant data access abstraction layer.
- GreptimeDB's meta service is based on [etcd](https://etcd.io/).
- GreptimeDB uses [RustPython](https://github.com/RustPython/RustPython) for experimental embedded python scripting.

View File

@@ -4,13 +4,35 @@ version.workspace = true
edition.workspace = true
license.workspace = true
[lints]
workspace = true
[dependencies]
api.workspace = true
arrow.workspace = true
chrono.workspace = true
clap = { version = "4.0", features = ["derive"] }
clap.workspace = true
client.workspace = true
common-base.workspace = true
common-telemetry.workspace = true
common-wal.workspace = true
dotenv.workspace = true
futures.workspace = true
futures-util.workspace = true
humantime.workspace = true
humantime-serde.workspace = true
indicatif = "0.17.1"
itertools.workspace = true
lazy_static.workspace = true
log-store.workspace = true
mito2.workspace = true
num_cpus.workspace = true
parquet.workspace = true
prometheus.workspace = true
rand.workspace = true
rskafka.workspace = true
serde.workspace = true
store-api.workspace = true
tokio.workspace = true
toml.workspace = true
uuid.workspace = true

11
benchmarks/README.md Normal file
View File

@@ -0,0 +1,11 @@
Benchmarkers for GreptimeDB
--------------------------------
## Wal Benchmarker
The WAL benchmarker evaluates the performance of GreptimeDB's Write-Ahead Log (WAL) component, measuring its read/write performance under various workloads generated by the benchmarker.
### How to use
To compile the benchmarker, go to the `greptimedb/benchmarks` directory and run `cargo build --release`. The compiled binary is then located at `greptimedb/target/release/wal_bench`.
Run `./wal_bench -h` to list the arguments the binary accepts. The `cfg-file` argument is worth noting: by providing a configuration file in TOML format, you avoid repeatedly specifying long lists of command-line arguments.
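A minimal invocation sketch, assuming the flag is spelled `--cfg-file` as the argument name above suggests (the config file path here is hypothetical):

```shell
# Build the benchmarker and run it with a TOML config instead of many CLI flags.
cd benchmarks
cargo build --release
../target/release/wal_bench --cfg-file ./wal-bench.toml   # hypothetical config path
```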

View File

@@ -0,0 +1,21 @@
# Refer to the documentation of `Args` in `benchmarks/src/wal.rs`.
wal_provider = "kafka"
bootstrap_brokers = ["localhost:9092"]
num_workers = 10
num_topics = 32
num_regions = 1000
num_scrapes = 1000
num_rows = 5
col_types = "ifs"
max_batch_size = "512KB"
linger = "1ms"
backoff_init = "10ms"
backoff_max = "1ms"
backoff_base = 2
backoff_deadline = "3s"
compression = "zstd"
rng_seed = 42
skip_read = false
skip_write = false
random_topics = true
report_metrics = false

View File

@@ -29,7 +29,7 @@ use client::api::v1::column::Values;
use client::api::v1::{
Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, InsertRequests, SemanticType,
};
use client::{Client, Database, Output, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use client::{Client, Database, OutputData, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use futures_util::TryStreamExt;
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
@@ -215,37 +215,7 @@ fn build_values(column: &ArrayRef) -> (Values, ColumnDataType) {
ColumnDataType::String,
)
}
DataType::Null
| DataType::Boolean
| DataType::Int8
| DataType::Int16
| DataType::Int32
| DataType::UInt8
| DataType::UInt16
| DataType::UInt32
| DataType::UInt64
| DataType::Float16
| DataType::Float32
| DataType::Date32
| DataType::Date64
| DataType::Time32(_)
| DataType::Time64(_)
| DataType::Duration(_)
| DataType::Interval(_)
| DataType::Binary
| DataType::FixedSizeBinary(_)
| DataType::LargeBinary
| DataType::LargeUtf8
| DataType::List(_)
| DataType::FixedSizeList(_, _)
| DataType::LargeList(_)
| DataType::Struct(_)
| DataType::Union(_, _)
| DataType::Dictionary(_, _)
| DataType::Decimal128(_, _)
| DataType::Decimal256(_, _)
| DataType::RunEndEncoded(_, _)
| DataType::Map(_, _) => todo!(),
_ => unimplemented!(),
}
}
@@ -258,7 +228,7 @@ fn create_table_expr(table_name: &str) -> CreateTableExpr {
catalog_name: CATALOG_NAME.to_string(),
schema_name: SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
desc: "".to_string(),
desc: String::default(),
column_defs: vec![
ColumnDef {
name: "VendorID".to_string(),
@@ -502,9 +472,9 @@ async fn do_query(num_iter: usize, db: &Database, table_name: &str) {
for i in 0..num_iter {
let now = Instant::now();
let res = db.sql(&query).await.unwrap();
match res {
Output::AffectedRows(_) | Output::RecordBatches(_) => (),
Output::Stream(stream) => {
match res.data {
OutputData::AffectedRows(_) | OutputData::RecordBatches(_) => (),
OutputData::Stream(stream) => {
stream.try_collect::<Vec<_>>().await.unwrap();
}
}

View File

@@ -0,0 +1,326 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(int_roundings)]
use std::fs;
use std::sync::Arc;
use std::time::Instant;
use api::v1::{ColumnDataType, ColumnSchema, SemanticType};
use benchmarks::metrics;
use benchmarks::wal_bench::{Args, Config, Region, WalProvider};
use clap::Parser;
use common_telemetry::info;
use common_wal::config::kafka::common::BackoffConfig;
use common_wal::config::kafka::DatanodeKafkaConfig as KafkaConfig;
use common_wal::config::raft_engine::RaftEngineConfig;
use common_wal::options::{KafkaWalOptions, WalOptions};
use itertools::Itertools;
use log_store::kafka::log_store::KafkaLogStore;
use log_store::raft_engine::log_store::RaftEngineLogStore;
use mito2::wal::Wal;
use prometheus::{Encoder, TextEncoder};
use rand::distributions::{Alphanumeric, DistString};
use rand::rngs::SmallRng;
use rand::SeedableRng;
use rskafka::client::partition::Compression;
use rskafka::client::ClientBuilder;
use store_api::logstore::LogStore;
use store_api::storage::RegionId;
async fn run_benchmarker<S: LogStore>(cfg: &Config, topics: &[String], wal: Arc<Wal<S>>) {
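// Evenly partition the regions across the workers; each worker writes and replays only its own chunk.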
let chunk_size = cfg.num_regions.div_ceil(cfg.num_workers);
let region_chunks = (0..cfg.num_regions)
.map(|id| {
build_region(
id as u64,
topics,
&mut SmallRng::seed_from_u64(cfg.rng_seed),
cfg,
)
})
.chunks(chunk_size as usize)
.into_iter()
.map(|chunk| Arc::new(chunk.collect::<Vec<_>>()))
.collect::<Vec<_>>();
let mut write_elapsed = 0;
let mut read_elapsed = 0;
if !cfg.skip_write {
info!("Benchmarking write ...");
let num_scrapes = cfg.num_scrapes;
let timer = Instant::now();
futures::future::join_all((0..cfg.num_workers).map(|i| {
let wal = wal.clone();
let regions = region_chunks[i as usize].clone();
tokio::spawn(async move {
for _ in 0..num_scrapes {
let mut wal_writer = wal.writer();
regions
.iter()
.for_each(|region| region.add_wal_entry(&mut wal_writer));
wal_writer.write_to_wal().await.unwrap();
}
})
}))
.await;
write_elapsed += timer.elapsed().as_millis();
}
if !cfg.skip_read {
info!("Benchmarking read ...");
let timer = Instant::now();
futures::future::join_all((0..cfg.num_workers).map(|i| {
let wal = wal.clone();
let regions = region_chunks[i as usize].clone();
tokio::spawn(async move {
for region in regions.iter() {
region.replay(&wal).await;
}
})
}))
.await;
read_elapsed = timer.elapsed().as_millis();
}
dump_report(cfg, write_elapsed, read_elapsed);
}
fn build_region(id: u64, topics: &[String], rng: &mut SmallRng, cfg: &Config) -> Region {
let wal_options = match cfg.wal_provider {
WalProvider::Kafka => {
assert!(!topics.is_empty());
WalOptions::Kafka(KafkaWalOptions {
topic: topics.get(id as usize % topics.len()).cloned().unwrap(),
})
}
WalProvider::RaftEngine => WalOptions::RaftEngine,
};
Region::new(
RegionId::from_u64(id),
build_schema(&parse_col_types(&cfg.col_types), rng),
wal_options,
cfg.num_rows,
cfg.rng_seed,
)
}
fn build_schema(col_types: &[ColumnDataType], mut rng: &mut SmallRng) -> Vec<ColumnSchema> {
col_types
.iter()
.map(|col_type| ColumnSchema {
column_name: Alphanumeric.sample_string(&mut rng, 5),
datatype: *col_type as i32,
semantic_type: SemanticType::Field as i32,
datatype_extension: None,
})
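// Always append a trailing millisecond-timestamp column named "ts" to the schema.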
.chain(vec![ColumnSchema {
column_name: "ts".to_string(),
datatype: ColumnDataType::TimestampMillisecond as i32,
semantic_type: SemanticType::Tag as i32,
datatype_extension: None,
}])
.collect()
}
fn dump_report(cfg: &Config, write_elapsed: u128, read_elapsed: u128) {
let cost_report = format!(
"write costs: {} ms, read costs: {} ms",
write_elapsed, read_elapsed,
);
let total_written_bytes = metrics::METRIC_WAL_WRITE_BYTES_TOTAL.get() as u128;
let write_throughput = if write_elapsed > 0 {
(total_written_bytes * 1000).div_floor(write_elapsed)
} else {
0
};
let total_read_bytes = metrics::METRIC_WAL_READ_BYTES_TOTAL.get() as u128;
let read_throughput = if read_elapsed > 0 {
(total_read_bytes * 1000).div_floor(read_elapsed)
} else {
0
};
let throughput_report = format!(
"total written bytes: {} bytes, total read bytes: {} bytes, write throuput: {} bytes/s ({} mb/s), read throughput: {} bytes/s ({} mb/s)",
total_written_bytes,
total_read_bytes,
write_throughput,
write_throughput.div_floor(1 << 20),
read_throughput,
read_throughput.div_floor(1 << 20),
);
let metrics_report = if cfg.report_metrics {
let mut buffer = Vec::new();
let encoder = TextEncoder::new();
let metrics = prometheus::gather();
encoder.encode(&metrics, &mut buffer).unwrap();
String::from_utf8(buffer).unwrap()
} else {
String::new()
};
info!(
r#"
Benchmark config:
{cfg:?}
Benchmark report:
{cost_report}
{throughput_report}
{metrics_report}"#
);
}
async fn create_topics(cfg: &Config) -> Vec<String> {
// Creates topics.
let client = ClientBuilder::new(cfg.bootstrap_brokers.clone())
.build()
.await
.unwrap();
let ctrl_client = client.controller_client().unwrap();
let (topics, tasks): (Vec<_>, Vec<_>) = (0..cfg.num_topics)
.map(|i| {
let topic = if cfg.random_topics {
format!(
"greptime_wal_bench_topic_{}_{}",
uuid::Uuid::new_v4().as_u128(),
i
)
} else {
format!("greptime_wal_bench_topic_{}", i)
};
let task = ctrl_client.create_topic(
topic.clone(),
1,
cfg.bootstrap_brokers.len() as i16,
2000,
);
(topic, task)
})
.unzip();
// Errors must be ignored since topics are allowed to be created more than once.
let _ = futures::future::try_join_all(tasks).await;
topics
}
fn parse_compression(comp: &str) -> Compression {
match comp {
"no" => Compression::NoCompression,
"gzip" => Compression::Gzip,
"lz4" => Compression::Lz4,
"snappy" => Compression::Snappy,
"zstd" => Compression::Zstd,
other => unreachable!("Unrecognized compression {other}"),
}
}
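/// Parses a column-type pattern such as "ifs" or "ifsx3": each character selects a column data type
/// ('i'/'I' -> Int64, 'f'/'F' -> Float64, 's'/'S' -> String), and an optional "xN" suffix repeats the
/// whole pattern N times.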
fn parse_col_types(col_types: &str) -> Vec<ColumnDataType> {
let parts = col_types.split('x').collect::<Vec<_>>();
assert!(parts.len() <= 2);
let pattern = parts[0];
let repeat = parts
.get(1)
.map(|r| r.parse::<usize>().unwrap())
.unwrap_or(1);
pattern
.chars()
.map(|c| match c {
'i' | 'I' => ColumnDataType::Int64,
'f' | 'F' => ColumnDataType::Float64,
's' | 'S' => ColumnDataType::String,
other => unreachable!("Cannot parse {other} as a column data type"),
})
.cycle()
.take(pattern.len() * repeat)
.collect()
}
fn main() {
// Sets the global log level to INFO and suppresses rskafka logs below ERROR.
std::env::set_var("UNITTEST_LOG_LEVEL", "info,rskafka=error");
common_telemetry::init_default_ut_logging();
let args = Args::parse();
let cfg = if !args.cfg_file.is_empty() {
toml::from_str(&fs::read_to_string(&args.cfg_file).unwrap()).unwrap()
} else {
Config::from(args)
};
// Validates arguments.
if cfg.num_regions < cfg.num_workers {
panic!("num_regions must be greater than or equal to num_workers");
}
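// The chained `min` below is zero iff any of these settings is zero, which would make the benchmark degenerate.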
if cfg
.num_workers
.min(cfg.num_topics)
.min(cfg.num_regions)
.min(cfg.num_scrapes)
.min(cfg.max_batch_size.as_bytes() as u32)
.min(cfg.bootstrap_brokers.len() as u32)
== 0
{
panic!("Invalid arguments");
}
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
.block_on(async {
match cfg.wal_provider {
WalProvider::Kafka => {
let topics = create_topics(&cfg).await;
let kafka_cfg = KafkaConfig {
broker_endpoints: cfg.bootstrap_brokers.clone(),
max_batch_size: cfg.max_batch_size,
linger: cfg.linger,
backoff: BackoffConfig {
init: cfg.backoff_init,
max: cfg.backoff_max,
base: cfg.backoff_base,
deadline: Some(cfg.backoff_deadline),
},
compression: parse_compression(&cfg.compression),
..Default::default()
};
let store = Arc::new(KafkaLogStore::try_new(&kafka_cfg).await.unwrap());
let wal = Arc::new(Wal::new(store));
run_benchmarker(&cfg, &topics, wal).await;
}
WalProvider::RaftEngine => {
// The benchmarker assumes the raft engine directory exists.
let store = RaftEngineLogStore::try_new(
"/tmp/greptimedb/raft-engine-wal".to_string(),
RaftEngineConfig::default(),
)
.await
.map(Arc::new)
.unwrap();
let wal = Arc::new(Wal::new(store));
run_benchmarker(&cfg, &[], wal).await;
}
}
});
}
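For readers skimming the diff, here is a minimal standalone sketch (not part of the committed code) of how the `col_types` pattern handled by `parse_col_types` above expands. It mirrors the same split/cycle/take logic but uses plain strings instead of `ColumnDataType`, so it compiles on its own; the function name `expand_col_types` is made up for illustration.

// Illustrative only: mirrors the parse_col_types expansion rules ("iix2" -> 4 columns).
fn expand_col_types(col_types: &str) -> Vec<&'static str> {
    let parts = col_types.split('x').collect::<Vec<_>>();
    assert!(parts.len() <= 2);
    let pattern = parts[0];
    let repeat = parts
        .get(1)
        .map(|r| r.parse::<usize>().unwrap())
        .unwrap_or(1);
    pattern
        .chars()
        .map(|c| match c {
            'i' | 'I' => "Int64",
            'f' | 'F' => "Float64",
            's' | 'S' => "String",
            other => unreachable!("Cannot parse {other} as a column data type"),
        })
        .cycle()
        .take(pattern.len() * repeat)
        .collect()
}

fn main() {
    assert_eq!(expand_col_types("ifs"), ["Int64", "Float64", "String"]);
    assert_eq!(expand_col_types("iix2"), ["Int64"; 4]);
}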

benchmarks/src/lib.rs

@@ -0,0 +1,16 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod metrics;
pub mod wal_bench;

benchmarks/src/metrics.rs

@@ -0,0 +1,39 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use lazy_static::lazy_static;
use prometheus::*;
/// Logstore label.
pub const LOGSTORE_LABEL: &str = "logstore";
/// Operation type label.
pub const OPTYPE_LABEL: &str = "optype";
lazy_static! {
/// Counters of bytes of each operation on a logstore.
pub static ref METRIC_WAL_OP_BYTES_TOTAL: IntCounterVec = register_int_counter_vec!(
"greptime_bench_wal_op_bytes_total",
"wal operation bytes total",
&[OPTYPE_LABEL],
)
.unwrap();
/// Counter of bytes of the append_batch operation.
pub static ref METRIC_WAL_WRITE_BYTES_TOTAL: IntCounter = METRIC_WAL_OP_BYTES_TOTAL.with_label_values(
&["write"],
);
/// Counter of bytes of the read operation.
pub static ref METRIC_WAL_READ_BYTES_TOTAL: IntCounter = METRIC_WAL_OP_BYTES_TOTAL.with_label_values(
&["read"],
);
}
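As a side note, here is a self-contained sketch (not part of the committed code) of the metrics flow these counters participate in: register an `IntCounterVec` with the `prometheus` crate, bump a labeled counter, then gather and render the metric families with `TextEncoder`, which is how the benchmarker builds its `metrics_report` when `--report-metrics` is set. The metric name `example_bytes_total` is a placeholder; only `prometheus` and `lazy_static` are assumed as dependencies.

use lazy_static::lazy_static;
use prometheus::{register_int_counter_vec, Encoder, IntCounterVec, TextEncoder};

lazy_static! {
    // A throwaway counter vec; the benchmarker uses greptime_bench_wal_op_bytes_total instead.
    static ref EXAMPLE_BYTES_TOTAL: IntCounterVec = register_int_counter_vec!(
        "example_bytes_total",
        "example operation bytes total",
        &["optype"],
    )
    .unwrap();
}

fn main() {
    // Bump the "write" series, as METRIC_WAL_WRITE_BYTES_TOTAL does for each wal entry.
    EXAMPLE_BYTES_TOTAL.with_label_values(&["write"]).inc_by(128);

    // Gather and encode, mirroring the report_metrics branch of the benchmarker.
    let mut buffer = Vec::new();
    TextEncoder::new()
        .encode(&prometheus::gather(), &mut buffer)
        .unwrap();
    println!("{}", String::from_utf8(buffer).unwrap());
}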

benchmarks/src/wal_bench.rs

@@ -0,0 +1,361 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::mem::size_of;
use std::sync::atomic::{AtomicI64, AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::time::Duration;
use api::v1::value::ValueData;
use api::v1::{ColumnDataType, ColumnSchema, Mutation, OpType, Row, Rows, Value, WalEntry};
use clap::{Parser, ValueEnum};
use common_base::readable_size::ReadableSize;
use common_wal::options::WalOptions;
use futures::StreamExt;
use mito2::wal::{Wal, WalWriter};
use rand::distributions::{Alphanumeric, DistString, Uniform};
use rand::rngs::SmallRng;
use rand::{Rng, SeedableRng};
use serde::{Deserialize, Serialize};
use store_api::logstore::LogStore;
use store_api::storage::RegionId;
use crate::metrics;
/// The wal provider.
#[derive(Clone, ValueEnum, Default, Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum WalProvider {
#[default]
RaftEngine,
Kafka,
}
#[derive(Parser)]
pub struct Args {
/// The provided configuration file.
/// The example configuration file can be found at `greptimedb/benchmarks/config/wal_bench.example.toml`.
#[clap(long, short = 'c')]
pub cfg_file: String,
/// The wal provider.
#[clap(long, value_enum, default_value_t = WalProvider::default())]
pub wal_provider: WalProvider,
/// The advertised addresses of the kafka brokers.
/// If there are multiple bootstrap brokers, separate their addresses with commas, e.g. "localhost:9092,localhost:9093".
#[clap(long, short = 'b', default_value = "localhost:9092")]
pub bootstrap_brokers: String,
/// The number of workers each running in a dedicated thread.
#[clap(long, default_value_t = num_cpus::get() as u32)]
pub num_workers: u32,
/// The number of kafka topics to be created.
#[clap(long, default_value_t = 32)]
pub num_topics: u32,
/// The number of regions.
#[clap(long, default_value_t = 1000)]
pub num_regions: u32,
/// The number of times each region is scraped.
#[clap(long, default_value_t = 1000)]
pub num_scrapes: u32,
/// The number of rows in each wal entry.
/// Each time a region is scraped, a wal entry containing this many rows is produced.
#[clap(long, default_value_t = 5)]
pub num_rows: u32,
/// The column types of the schema for each region.
/// Currently, three column types are supported:
/// - i = ColumnDataType::Int64
/// - f = ColumnDataType::Float64
/// - s = ColumnDataType::String
/// For e.g., "ifs" will be parsed as three columns: i64, f64, and string.
///
/// Additionally, a "x" sign can be provided to repeat the column types for a given number of times.
/// For e.g., "iix2" will be parsed as 4 columns: i64, i64, i64, and i64.
/// This feature is useful if you want to specify many columns.
#[clap(long, default_value = "ifs")]
pub col_types: String,
/// The maximum size of a batch of kafka records.
/// The default value is 512KB.
#[clap(long, default_value = "512KB")]
pub max_batch_size: ReadableSize,
/// The duration the kafka producer waits to accumulate a batch of records before sending it.
/// However, a batch is issued immediately if a record cannot fit into the current batch.
#[clap(long, default_value = "1ms")]
pub linger: String,
/// The initial backoff delay of the kafka consumer.
#[clap(long, default_value = "10ms")]
pub backoff_init: String,
/// The maximum backoff delay of the kafka consumer.
#[clap(long, default_value = "1s")]
pub backoff_max: String,
/// The exponential backoff rate of the kafka consumer, i.e. the next backoff = base * the current backoff.
#[clap(long, default_value_t = 2)]
pub backoff_base: u32,
/// The deadline of backoff. The backoff ends if the total backoff delay reaches the deadline.
#[clap(long, default_value = "3s")]
pub backoff_deadline: String,
/// The client-side compression algorithm for kafka records.
#[clap(long, default_value = "zstd")]
pub compression: String,
/// The seed of random number generators.
#[clap(long, default_value_t = 42)]
pub rng_seed: u64,
/// Skips the read phase (i.e. region replay) if set to true.
#[clap(long, default_value_t = false)]
pub skip_read: bool,
/// Skips the write phase if set to true.
#[clap(long, default_value_t = false)]
pub skip_write: bool,
/// Randomly generates topic names if set to true.
/// Useful when you want to run the benchmarker without reusing topics created by previous runs.
#[clap(long, default_value_t = false)]
pub random_topics: bool,
/// Logs out the gathered prometheus metrics when the benchmarker ends.
#[clap(long, default_value_t = false)]
pub report_metrics: bool,
}
/// Benchmarker config.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
pub wal_provider: WalProvider,
pub bootstrap_brokers: Vec<String>,
pub num_workers: u32,
pub num_topics: u32,
pub num_regions: u32,
pub num_scrapes: u32,
pub num_rows: u32,
pub col_types: String,
pub max_batch_size: ReadableSize,
#[serde(with = "humantime_serde")]
pub linger: Duration,
#[serde(with = "humantime_serde")]
pub backoff_init: Duration,
#[serde(with = "humantime_serde")]
pub backoff_max: Duration,
pub backoff_base: u32,
#[serde(with = "humantime_serde")]
pub backoff_deadline: Duration,
pub compression: String,
pub rng_seed: u64,
pub skip_read: bool,
pub skip_write: bool,
pub random_topics: bool,
pub report_metrics: bool,
}
impl From<Args> for Config {
fn from(args: Args) -> Self {
Self {
wal_provider: args.wal_provider,
bootstrap_brokers: args
.bootstrap_brokers
.split(',')
.map(ToString::to_string)
.collect::<Vec<_>>(),
num_workers: args.num_workers.min(num_cpus::get() as u32),
num_topics: args.num_topics,
num_regions: args.num_regions,
num_scrapes: args.num_scrapes,
num_rows: args.num_rows,
col_types: args.col_types,
max_batch_size: args.max_batch_size,
linger: humantime::parse_duration(&args.linger).unwrap(),
backoff_init: humantime::parse_duration(&args.backoff_init).unwrap(),
backoff_max: humantime::parse_duration(&args.backoff_max).unwrap(),
backoff_base: args.backoff_base,
backoff_deadline: humantime::parse_duration(&args.backoff_deadline).unwrap(),
compression: args.compression,
rng_seed: args.rng_seed,
skip_read: args.skip_read,
skip_write: args.skip_write,
random_topics: args.random_topics,
report_metrics: args.report_metrics,
}
}
}
/// The region used for wal benchmarker.
pub struct Region {
id: RegionId,
schema: Vec<ColumnSchema>,
wal_options: WalOptions,
next_sequence: AtomicU64,
next_entry_id: AtomicU64,
next_timestamp: AtomicI64,
rng: Mutex<Option<SmallRng>>,
num_rows: u32,
}
impl Region {
/// Creates a new region.
pub fn new(
id: RegionId,
schema: Vec<ColumnSchema>,
wal_options: WalOptions,
num_rows: u32,
rng_seed: u64,
) -> Self {
Self {
id,
schema,
wal_options,
next_sequence: AtomicU64::new(1),
next_entry_id: AtomicU64::new(1),
next_timestamp: AtomicI64::new(1655276557000),
rng: Mutex::new(Some(SmallRng::seed_from_u64(rng_seed))),
num_rows,
}
}
/// Scrapes the region and adds the generated entry to wal.
pub fn add_wal_entry<S: LogStore>(&self, wal_writer: &mut WalWriter<S>) {
let mutation = Mutation {
op_type: OpType::Put as i32,
sequence: self
.next_sequence
.fetch_add(self.num_rows as u64, Ordering::Relaxed),
rows: Some(self.build_rows()),
};
let entry = WalEntry {
mutations: vec![mutation],
};
metrics::METRIC_WAL_WRITE_BYTES_TOTAL.inc_by(Self::entry_estimated_size(&entry) as u64);
wal_writer
.add_entry(
self.id,
self.next_entry_id.fetch_add(1, Ordering::Relaxed),
&entry,
&self.wal_options,
)
.unwrap();
}
/// Replays the region.
pub async fn replay<S: LogStore>(&self, wal: &Arc<Wal<S>>) {
let mut wal_stream = wal.scan(self.id, 0, &self.wal_options).unwrap();
while let Some(res) = wal_stream.next().await {
let (_, entry) = res.unwrap();
metrics::METRIC_WAL_READ_BYTES_TOTAL.inc_by(Self::entry_estimated_size(&entry) as u64);
}
}
/// Computes the estimated size in bytes of the entry.
pub fn entry_estimated_size(entry: &WalEntry) -> usize {
let wrapper_size = size_of::<WalEntry>()
+ entry.mutations.capacity() * size_of::<Mutation>()
+ size_of::<Rows>();
let rows = entry.mutations[0].rows.as_ref().unwrap();
let schema_size = rows.schema.capacity() * size_of::<ColumnSchema>()
+ rows
.schema
.iter()
.map(|s| s.column_name.capacity())
.sum::<usize>();
let values_size = (rows.rows.capacity() * size_of::<Row>())
+ rows
.rows
.iter()
.map(|r| r.values.capacity() * size_of::<Value>())
.sum::<usize>();
wrapper_size + schema_size + values_size
}
fn build_rows(&self) -> Rows {
let cols = self
.schema
.iter()
.map(|col_schema| {
let col_data_type = ColumnDataType::try_from(col_schema.datatype).unwrap();
self.build_col(&col_data_type, self.num_rows)
})
.collect::<Vec<_>>();
let rows = (0..self.num_rows)
.map(|i| {
let values = cols.iter().map(|col| col[i as usize].clone()).collect();
Row { values }
})
.collect();
Rows {
schema: self.schema.clone(),
rows,
}
}
fn build_col(&self, col_data_type: &ColumnDataType, num_rows: u32) -> Vec<Value> {
let mut rng_guard = self.rng.lock().unwrap();
let rng = rng_guard.as_mut().unwrap();
match col_data_type {
ColumnDataType::TimestampMillisecond => (0..num_rows)
.map(|_| {
let ts = self.next_timestamp.fetch_add(1000, Ordering::Relaxed);
Value {
value_data: Some(ValueData::TimestampMillisecondValue(ts)),
}
})
.collect(),
ColumnDataType::Int64 => (0..num_rows)
.map(|_| {
let v = rng.sample(Uniform::new(0, 10_000));
Value {
value_data: Some(ValueData::I64Value(v)),
}
})
.collect(),
ColumnDataType::Float64 => (0..num_rows)
.map(|_| {
let v = rng.sample(Uniform::new(0.0, 5000.0));
Value {
value_data: Some(ValueData::F64Value(v)),
}
})
.collect(),
ColumnDataType::String => (0..num_rows)
.map(|_| {
let v = Alphanumeric.sample_string(rng, 10);
Value {
value_data: Some(ValueData::StringValue(v)),
}
})
.collect(),
_ => unreachable!(),
}
}
}
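The `Duration` fields of `Config` above (`linger`, `backoff_init`, `backoff_max`, `backoff_deadline`) are deserialized from human-readable strings via `humantime_serde`. Below is a small self-contained sketch (not part of the committed code; `BackoffSketch` and its fields are made up for illustration, and `serde`, `toml`, and `humantime-serde` are assumed as dependencies) showing how a TOML fragment in that style turns into `Duration` values.

use std::time::Duration;

use serde::Deserialize;

// A made-up, trimmed-down stand-in for the real Config: only two duration fields.
#[derive(Debug, Deserialize)]
struct BackoffSketch {
    #[serde(with = "humantime_serde")]
    linger: Duration,
    #[serde(with = "humantime_serde")]
    backoff_init: Duration,
}

fn main() {
    // The same "1ms" / "10ms" syntax used by the benchmarker's config file.
    let cfg: BackoffSketch =
        toml::from_str("linger = \"1ms\"\nbackoff_init = \"10ms\"").unwrap();
    assert_eq!(cfg.linger, Duration::from_millis(1));
    assert_eq!(cfg.backoff_init, Duration::from_millis(10));
}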

cliff.toml

@@ -0,0 +1,127 @@
# https://git-cliff.org/docs/configuration
[remote.github]
owner = "GreptimeTeam"
repo = "greptimedb"
[changelog]
header = ""
footer = ""
# template for the changelog body
# https://keats.github.io/tera/docs/#introduction
body = """
# {{ version }}
Release date: {{ timestamp | date(format="%B %d, %Y") }}
{%- set breakings = commits | filter(attribute="breaking", value=true) -%}
{%- if breakings | length > 0 %}
## Breaking changes
{% for commit in breakings %}
* {{ commit.github.pr_title }}\
{% if commit.github.username %} by \
{% set author = commit.github.username -%}
[@{{ author }}](https://github.com/{{ author }})
{%- endif -%}
{% if commit.github.pr_number %} in \
{% set number = commit.github.pr_number -%}
[#{{ number }}]({{ self::remote_url() }}/pull/{{ number }})
{%- endif %}
{%- endfor %}
{%- endif -%}
{%- set grouped_commits = commits | filter(attribute="breaking", value=false) | group_by(attribute="group") -%}
{% for group, commits in grouped_commits %}
### {{ group | striptags | trim | upper_first }}
{% for commit in commits %}
* {{ commit.github.pr_title }}\
{% if commit.github.username %} by \
{% set author = commit.github.username -%}
[@{{ author }}](https://github.com/{{ author }})
{%- endif -%}
{% if commit.github.pr_number %} in \
{% set number = commit.github.pr_number -%}
[#{{ number }}]({{ self::remote_url() }}/pull/{{ number }})
{%- endif %}
{%- endfor -%}
{% endfor %}
{%- if github.contributors | filter(attribute="is_first_time", value=true) | length != 0 %}
{% raw %}\n{% endraw -%}
## New Contributors
{% endif -%}
{% for contributor in github.contributors | filter(attribute="is_first_time", value=true) %}
* [@{{ contributor.username }}](https://github.com/{{ contributor.username }}) made their first contribution
{%- if contributor.pr_number %} in \
[#{{ contributor.pr_number }}]({{ self::remote_url() }}/pull/{{ contributor.pr_number }}) \
{%- endif %}
{%- endfor -%}
{% if github.contributors | length != 0 %}
{% raw %}\n{% endraw -%}
## All Contributors
We would like to thank the following contributors from the GreptimeDB community:
{%- set contributors = github.contributors | sort(attribute="username") | map(attribute="username") -%}
{%- set bots = ['dependabot[bot]'] %}
{% for contributor in contributors %}
{%- if bots is containing(contributor) -%}{% continue %}{%- endif -%}
{%- if loop.first -%}
[@{{ contributor }}](https://github.com/{{ contributor }})
{%- else -%}
, [@{{ contributor }}](https://github.com/{{ contributor }})
{%- endif -%}
{%- endfor %}
{%- endif %}
{% raw %}\n{% endraw %}
{%- macro remote_url() -%}
https://github.com/{{ remote.github.owner }}/{{ remote.github.repo }}
{%- endmacro -%}
"""
trim = true
[git]
# parse the commits based on https://www.conventionalcommits.org
conventional_commits = true
# filter out the commits that are not conventional
filter_unconventional = true
# process each line of a commit as an individual commit
split_commits = false
# regex for parsing and grouping commits
commit_parsers = [
{ message = "^feat", group = "<!-- 0 -->🚀 Features" },
{ message = "^fix", group = "<!-- 1 -->🐛 Bug Fixes" },
{ message = "^doc", group = "<!-- 3 -->📚 Documentation" },
{ message = "^perf", group = "<!-- 4 -->⚡ Performance" },
{ message = "^refactor", group = "<!-- 2 -->🚜 Refactor" },
{ message = "^style", group = "<!-- 5 -->🎨 Styling" },
{ message = "^test", group = "<!-- 6 -->🧪 Testing" },
{ message = "^chore\\(release\\): prepare for", skip = true },
{ message = "^chore\\(deps.*\\)", skip = true },
{ message = "^chore\\(pr\\)", skip = true },
{ message = "^chore\\(pull\\)", skip = true },
{ message = "^chore|^ci", group = "<!-- 7 -->⚙️ Miscellaneous Tasks" },
{ body = ".*security", group = "<!-- 8 -->🛡️ Security" },
{ message = "^revert", group = "<!-- 9 -->◀️ Revert" },
]
# protect breaking changes from being skipped due to matching a skipping commit_parser
protect_breaking_commits = false
# filter out the commits that are not matched by commit parsers
filter_commits = false
# regex for matching git tags
# tag_pattern = "v[0-9].*"
# regex for skipping tags
# skip_tags = ""
# regex for ignoring tags
ignore_tags = ".*-nightly-.*"
# sort the tags topologically
topo_order = false
# sort the commits inside sections by oldest/newest order
sort_commits = "oldest"
# limit the number of commits included in the changelog.
# limit_commits = 42


@@ -8,5 +8,6 @@ coverage:
ignore:
- "**/error*.rs" # ignore all error.rs files
- "tests/runner/*.rs" # ignore integration test runner
- "tests-integration/**/*.rs" # ignore integration tests
comment: # this is a top-level key
layout: "diff"


@@ -0,0 +1,19 @@
# Configurations
## Standalone Mode
{{ toml2docs "./standalone.example.toml" }}
## Cluster Mode
### Frontend
{{ toml2docs "./frontend.example.toml" }}
### Metasrv
{{ toml2docs "./metasrv.example.toml" }}
### Datanode
{{ toml2docs "./datanode.example.toml" }}

config/config.md

@@ -0,0 +1,376 @@
# Configurations
## Standalone Mode
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `mode` | String | `standalone` | The running mode of the datanode. It can be `standalone` or `distributed`. |
| `enable_telemetry` | Bool | `true` | Enable telemetry to collect anonymous usage data. |
| `default_timezone` | String | `None` | The default timezone of the server. |
| `http` | -- | -- | The HTTP server options. |
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
| `http.timeout` | String | `30s` | HTTP request timeout. |
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`. |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `mysql` | -- | -- | MySQL server options. |
| `mysql.enable` | Bool | `true` | Whether to enable. |
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
| `mysql.runtime_size` | Integer | `2` | The number of server worker threads. |
| `mysql.tls` | -- | -- | -- |
| `mysql.tls.mode` | String | `disable` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- `disable` (default value)<br/>- `prefer`<br/>- `require`<br/>- `verify-ca`<br/>- `verify-full` |
| `mysql.tls.cert_path` | String | `None` | Certificate file path. |
| `mysql.tls.key_path` | String | `None` | Private key file path. |
| `mysql.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload |
| `postgres` | -- | -- | PostgreSQL server options. |
| `postgres.enable` | Bool | `true` | Whether to enable. |
| `postgres.addr` | String | `127.0.0.1:4003` | The addr to bind the PostgreSQL server. |
| `postgres.runtime_size` | Integer | `2` | The number of server worker threads. |
| `postgres.tls` | -- | -- | PostgreSQL server TLS options, see `mysql_options.tls` section. |
| `postgres.tls.mode` | String | `disable` | TLS mode. |
| `postgres.tls.cert_path` | String | `None` | Certificate file path. |
| `postgres.tls.key_path` | String | `None` | Private key file path. |
| `postgres.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload |
| `opentsdb` | -- | -- | OpenTSDB protocol options. |
| `opentsdb.enable` | Bool | `true` | Whether to enable |
| `opentsdb.addr` | String | `127.0.0.1:4242` | OpenTSDB telnet API server address. |
| `opentsdb.runtime_size` | Integer | `2` | The number of server worker threads. |
| `influxdb` | -- | -- | InfluxDB protocol options. |
| `influxdb.enable` | Bool | `true` | Whether to enable InfluxDB protocol in HTTP API. |
| `prom_store` | -- | -- | Prometheus remote storage options |
| `prom_store.enable` | Bool | `true` | Whether to enable Prometheus remote write and read in HTTP API. |
| `prom_store.with_metric_engine` | Bool | `true` | Whether to store the data from Prometheus remote write in metric engine. |
| `wal` | -- | -- | The WAL options. |
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the wal is stored in the local file system by raft-engine.<br/>- `kafka`: it's remote wal that data is stored in Kafka. |
| `wal.dir` | String | `None` | The directory to store the WAL files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.file_size` | String | `256MB` | The size of the WAL segment file.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_threshold` | String | `4GB` | The threshold of the WAL size to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_interval` | String | `10m` | The interval to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.read_batch_size` | Integer | `128` | The read batch size.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.sync_write` | Bool | `false` | Whether to use sync write.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.enable_log_recycle` | Bool | `true` | Whether to reuse logically truncated log files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.prefill_log_files` | Bool | `false` | Whether to pre-create log files on start up.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.sync_period` | String | `10s` | Duration for fsyncing log files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.broker_endpoints` | Array | -- | The Kafka broker endpoints.<br/>**It's only used when the provider is `kafka`**. |
| `wal.max_batch_size` | String | `1MB` | The max size of a single producer batch.<br/>Warning: Kafka has a default limit of 1MB per message in a topic.<br/>**It's only used when the provider is `kafka`**. |
| `wal.linger` | String | `200ms` | The linger duration of a kafka batch producer.<br/>**It's only used when the provider is `kafka`**. |
| `wal.consumer_wait_timeout` | String | `100ms` | The consumer wait timeout.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_init` | String | `500ms` | The initial backoff delay.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_max` | String | `10s` | The maximum backoff delay.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_base` | Integer | `2` | The exponential backoff rate, i.e. next backoff = base * current backoff.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_deadline` | String | `5mins` | The deadline of retries.<br/>**It's only used when the provider is `kafka`**. |
| `metadata_store` | -- | -- | Metadata storage options. |
| `metadata_store.file_size` | String | `256MB` | Kv file size in bytes. |
| `metadata_store.purge_threshold` | String | `4GB` | Kv purge threshold. |
| `procedure` | -- | -- | Procedure storage options. |
| `procedure.max_retry_times` | Integer | `3` | Procedure max retry time. |
| `procedure.retry_delay` | String | `500ms` | Initial retry delay of procedures, increases exponentially |
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `/tmp/greptimedb/` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
| `storage.cache_path` | String | `None` | Cache configuration for object storage such as 'S3' etc.<br/>The local file cache directory. |
| `storage.cache_capacity` | String | `None` | The local file cache capacity in bytes. |
| `storage.bucket` | String | `None` | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
| `storage.root` | String | `None` | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.<br/>**It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
| `storage.access_key_id` | String | `None` | The access key id of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3` and `Oss`**. |
| `storage.secret_access_key` | String | `None` | The secret access key of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3`**. |
| `storage.access_key_secret` | String | `None` | The secret access key of the aliyun account.<br/>**It's only used when the storage type is `Oss`**. |
| `storage.account_name` | String | `None` | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.account_key` | String | `None` | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.scope` | String | `None` | The scope of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.credential_path` | String | `None` | The credential path of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.container` | String | `None` | The container of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.sas_token` | String | `None` | The sas token of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.endpoint` | String | `None` | The endpoint of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
| `storage.region` | String | `None` | The region of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
| `[[region_engine]]` | -- | -- | The region engine options. You can configure multiple region engines. |
| `region_engine.mito` | -- | -- | The Mito engine options. |
| `region_engine.mito.num_workers` | Integer | `8` | Number of region workers. |
| `region_engine.mito.worker_channel_size` | Integer | `128` | Request channel size of each worker. |
| `region_engine.mito.worker_request_batch_size` | Integer | `64` | Max batch size for a worker to handle requests. |
| `region_engine.mito.manifest_checkpoint_distance` | Integer | `10` | Number of meta action updates needed to trigger a new checkpoint for the manifest. |
| `region_engine.mito.compress_manifest` | Bool | `false` | Whether to compress manifest and checkpoint file by gzip (default false). |
| `region_engine.mito.max_background_jobs` | Integer | `4` | Max number of running background jobs |
| `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
| `region_engine.mito.global_write_buffer_size` | String | `1GB` | Global write buffer size for all regions. If not set, it defaults to 1/8 of OS memory, with a maximum of 1GB. |
| `region_engine.mito.global_write_buffer_reject_size` | String | `2GB` | Global write buffer size threshold to reject write requests. If not set, it defaults to 2 times `global_write_buffer_size`. |
| `region_engine.mito.sst_meta_cache_size` | String | `128MB` | Cache size for SST metadata. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/32 of OS memory, with a maximum of 128MB. |
| `region_engine.mito.vector_cache_size` | String | `512MB` | Cache size for vectors and arrow arrays. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB. |
| `region_engine.mito.page_cache_size` | String | `512MB` | Cache size for pages of SST row groups. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB. |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.scan_parallelism` | Integer | `0` | Parallelism to scan a region (default: 1/4 of cpu cores).<br/>- `0`: using the default value (1/4 of cpu cores).<br/>- `1`: scan in current thread.<br/>- `n`: scan in parallelism n. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.inverted_index` | -- | -- | The options for inverted index in Mito engine. |
| `region_engine.mito.inverted_index.create_on_flush` | String | `auto` | Whether to create the index on flush.<br/>- `auto`: automatically<br/>- `disable`: never |
| `region_engine.mito.inverted_index.create_on_compaction` | String | `auto` | Whether to create the index on compaction.<br/>- `auto`: automatically<br/>- `disable`: never |
| `region_engine.mito.inverted_index.apply_on_query` | String | `auto` | Whether to apply the index on query<br/>- `auto`: automatically<br/>- `disable`: never |
| `region_engine.mito.inverted_index.mem_threshold_on_create` | String | `64M` | Memory threshold for performing an external sort during index creation.<br/>Setting to empty will disable external sorting, forcing all sorting operations to happen in memory. |
| `region_engine.mito.inverted_index.intermediate_path` | String | `""` | File system path to store intermediate files for external sorting (default `{data_home}/index_intermediate`). |
| `region_engine.mito.memtable` | -- | -- | -- |
| `region_engine.mito.memtable.type` | String | `time_series` | Memtable type.<br/>- `time_series`: time-series memtable<br/>- `partition_tree`: partition tree memtable (experimental) |
| `region_engine.mito.memtable.index_max_keys_per_shard` | Integer | `8192` | The max number of keys in one shard.<br/>Only available for `partition_tree` memtable. |
| `region_engine.mito.memtable.data_freeze_threshold` | Integer | `32768` | The max rows of data inside the actively writing buffer in one shard.<br/>Only available for `partition_tree` memtable. |
| `region_engine.mito.memtable.fork_dictionary_bytes` | String | `1GiB` | Max dictionary bytes.<br/>Only available for `partition_tree` memtable. |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `export_metrics` | -- | -- | The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used by `greptimedb` to export its own metrics internally; it's different from a Prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended for collecting the metrics generated by the instance itself. |
| `export_metrics.self_import.db` | String | `None` | -- |
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The URL to send the metrics to, for example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | The HTTP headers carried by the Prometheus remote-write requests. |
## Cluster Mode
### Frontend
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `mode` | String | `standalone` | The running mode of the datanode. It can be `standalone` or `distributed`. |
| `default_timezone` | String | `None` | The default timezone of the server. |
| `heartbeat` | -- | -- | The heartbeat options. |
| `heartbeat.interval` | String | `18s` | Interval for sending heartbeat messages to the metasrv. |
| `heartbeat.retry_interval` | String | `3s` | Interval for retrying to send heartbeat messages to the metasrv. |
| `http` | -- | -- | The HTTP server options. |
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
| `http.timeout` | String | `30s` | HTTP request timeout. |
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`. |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `mysql` | -- | -- | MySQL server options. |
| `mysql.enable` | Bool | `true` | Whether to enable. |
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
| `mysql.runtime_size` | Integer | `2` | The number of server worker threads. |
| `mysql.tls` | -- | -- | -- |
| `mysql.tls.mode` | String | `disable` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- `disable` (default value)<br/>- `prefer`<br/>- `require`<br/>- `verify-ca`<br/>- `verify-full` |
| `mysql.tls.cert_path` | String | `None` | Certificate file path. |
| `mysql.tls.key_path` | String | `None` | Private key file path. |
| `mysql.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload |
| `postgres` | -- | -- | PostgreSQL server options. |
| `postgres.enable` | Bool | `true` | Whether to enable. |
| `postgres.addr` | String | `127.0.0.1:4003` | The addr to bind the PostgreSQL server. |
| `postgres.runtime_size` | Integer | `2` | The number of server worker threads. |
| `postgres.tls` | -- | -- | PostgreSQL server TLS options, see `mysql_options.tls` section. |
| `postgres.tls.mode` | String | `disable` | TLS mode. |
| `postgres.tls.cert_path` | String | `None` | Certificate file path. |
| `postgres.tls.key_path` | String | `None` | Private key file path. |
| `postgres.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload |
| `opentsdb` | -- | -- | OpenTSDB protocol options. |
| `opentsdb.enable` | Bool | `true` | Whether to enable |
| `opentsdb.addr` | String | `127.0.0.1:4242` | OpenTSDB telnet API server address. |
| `opentsdb.runtime_size` | Integer | `2` | The number of server worker threads. |
| `influxdb` | -- | -- | InfluxDB protocol options. |
| `influxdb.enable` | Bool | `true` | Whether to enable InfluxDB protocol in HTTP API. |
| `prom_store` | -- | -- | Prometheus remote storage options |
| `prom_store.enable` | Bool | `true` | Whether to enable Prometheus remote write and read in HTTP API. |
| `prom_store.with_metric_engine` | Bool | `true` | Whether to store the data from Prometheus remote write in metric engine. |
| `meta_client` | -- | -- | The metasrv client options. |
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
| `meta_client.timeout` | String | `3s` | Operation timeout. |
| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
| `meta_client.metadata_cache_max_capacity` | Integer | `100000` | The maximum capacity of the metadata cache. |
| `meta_client.metadata_cache_ttl` | String | `10m` | TTL of the metadata cache. |
| `meta_client.metadata_cache_tti` | String | `5m` | -- |
| `datanode` | -- | -- | Datanode options. |
| `datanode.client` | -- | -- | Datanode client options. |
| `datanode.client.timeout` | String | `10s` | -- |
| `datanode.client.connect_timeout` | String | `10s` | -- |
| `datanode.client.tcp_nodelay` | Bool | `true` | -- |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `export_metrics` | -- | -- | The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used by `greptimedb` to export its own metrics internally; it's different from a Prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended for collecting the metrics generated by the instance itself. |
| `export_metrics.self_import.db` | String | `None` | -- |
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The URL to send the metrics to, for example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | The HTTP headers carried by the Prometheus remote-write requests. |
### Metasrv
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `data_home` | String | `/tmp/metasrv/` | The working home directory. |
| `bind_addr` | String | `127.0.0.1:3002` | The bind address of metasrv. |
| `server_addr` | String | `127.0.0.1:3002` | The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost. |
| `store_addr` | String | `127.0.0.1:2379` | Etcd server address. |
| `selector` | String | `lease_based` | Datanode selector type.<br/>- `lease_based` (default value).<br/>- `load_based`<br/>For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector". |
| `use_memory_store` | Bool | `false` | Store data in memory. |
| `enable_telemetry` | Bool | `true` | Whether to enable greptimedb telemetry. |
| `store_key_prefix` | String | `""` | If it's not empty, the metasrv will store all data with this key prefix. |
| `procedure` | -- | -- | Procedure storage options. |
| `procedure.max_retry_times` | Integer | `12` | Procedure max retry time. |
| `procedure.retry_delay` | String | `500ms` | Initial retry delay of procedures, increases exponentially |
| `procedure.max_metadata_value_size` | String | `1500KiB` | Automatically split large values.<br/>GreptimeDB procedures use etcd as the default metadata storage backend.<br/>The maximum size of any etcd request is 1.5 MiB.<br/>1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key).<br/>Comment out `max_metadata_value_size` to disable splitting large values (no limit). |
| `failure_detector` | -- | -- | -- |
| `failure_detector.threshold` | Float | `8.0` | -- |
| `failure_detector.min_std_deviation` | String | `100ms` | -- |
| `failure_detector.acceptable_heartbeat_pause` | String | `3000ms` | -- |
| `failure_detector.first_heartbeat_estimate` | String | `1000ms` | -- |
| `datanode` | -- | -- | Datanode options. |
| `datanode.client` | -- | -- | Datanode client options. |
| `datanode.client.timeout` | String | `10s` | -- |
| `datanode.client.connect_timeout` | String | `10s` | -- |
| `datanode.client.tcp_nodelay` | Bool | `true` | -- |
| `wal` | -- | -- | -- |
| `wal.provider` | String | `raft_engine` | -- |
| `wal.broker_endpoints` | Array | -- | The broker endpoints of the Kafka cluster. |
| `wal.num_topics` | Integer | `64` | Number of topics to be created upon start. |
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default) |
| `wal.topic_name_prefix` | String | `greptimedb_wal_topic` | A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`. |
| `wal.replication_factor` | Integer | `1` | Expected number of replicas of each partition. |
| `wal.create_topic_timeout` | String | `30s` | The timeout above which a topic creation operation will be cancelled. |
| `wal.backoff_init` | String | `500ms` | The initial backoff for kafka clients. |
| `wal.backoff_max` | String | `10s` | The maximum backoff for kafka clients. |
| `wal.backoff_base` | Integer | `2` | Exponential backoff rate, i.e. next backoff = base * current backoff. |
| `wal.backoff_deadline` | String | `5mins` | Stop reconnecting if the total wait time reaches the deadline. If this config is missing, the reconnecting won't terminate. |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `export_metrics` | -- | -- | The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used by `greptimedb` to export its own metrics internally; it's different from a Prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended for collecting the metrics generated by the instance itself. |
| `export_metrics.self_import.db` | String | `None` | -- |
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The URL to send the metrics to, for example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | The HTTP headers carried by the Prometheus remote-write requests. |
### Datanode
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `mode` | String | `standalone` | The running mode of the datanode. It can be `standalone` or `distributed`. |
| `node_id` | Integer | `None` | The datanode identifier and should be unique in the cluster. |
| `require_lease_before_startup` | Bool | `false` | Start services after regions have obtained leases.<br/>It will block the datanode start if it can't receive leases in the heartbeat from metasrv. |
| `init_regions_in_background` | Bool | `false` | Initialize all regions in the background during the startup.<br/>By default, it provides services after all regions have been initialized. |
| `rpc_addr` | String | `127.0.0.1:3001` | The gRPC address of the datanode. |
| `rpc_hostname` | String | `None` | The hostname of the datanode. |
| `rpc_runtime_size` | Integer | `8` | The number of gRPC server worker threads. |
| `rpc_max_recv_message_size` | String | `512MB` | The maximum receive message size for gRPC server. |
| `rpc_max_send_message_size` | String | `512MB` | The maximum send message size for gRPC server. |
| `enable_telemetry` | Bool | `true` | Enable telemetry to collect anonymous usage data. |
| `heartbeat` | -- | -- | The heartbeat options. |
| `heartbeat.interval` | String | `3s` | Interval for sending heartbeat messages to the metasrv. |
| `heartbeat.retry_interval` | String | `3s` | Interval for retrying to send heartbeat messages to the metasrv. |
| `meta_client` | -- | -- | The metasrv client options. |
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
| `meta_client.timeout` | String | `3s` | Operation timeout. |
| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
| `meta_client.metadata_cache_max_capacity` | Integer | `100000` | The maximum capacity of the metadata cache. |
| `meta_client.metadata_cache_ttl` | String | `10m` | TTL of the metadata cache. |
| `meta_client.metadata_cache_tti` | String | `5m` | -- |
| `wal` | -- | -- | The WAL options. |
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the wal is stored in the local file system by raft-engine.<br/>- `kafka`: it's remote wal that data is stored in Kafka. |
| `wal.dir` | String | `None` | The directory to store the WAL files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.file_size` | String | `256MB` | The size of the WAL segment file.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_threshold` | String | `4GB` | The threshold of the WAL size to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_interval` | String | `10m` | The interval to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.read_batch_size` | Integer | `128` | The read batch size.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.sync_write` | Bool | `false` | Whether to use sync write.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.enable_log_recycle` | Bool | `true` | Whether to reuse logically truncated log files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.prefill_log_files` | Bool | `false` | Whether to pre-create log files on start up.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.sync_period` | String | `10s` | Duration for fsyncing log files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.broker_endpoints` | Array | -- | The Kafka broker endpoints.<br/>**It's only used when the provider is `kafka`**. |
| `wal.max_batch_size` | String | `1MB` | The max size of a single producer batch.<br/>Warning: Kafka has a default limit of 1MB per message in a topic.<br/>**It's only used when the provider is `kafka`**. |
| `wal.linger` | String | `200ms` | The linger duration of a kafka batch producer.<br/>**It's only used when the provider is `kafka`**. |
| `wal.consumer_wait_timeout` | String | `100ms` | The consumer wait timeout.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_init` | String | `500ms` | The initial backoff delay.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_max` | String | `10s` | The maximum backoff delay.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_base` | Integer | `2` | The exponential backoff rate, i.e. next backoff = base * current backoff.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_deadline` | String | `5mins` | The deadline of retries.<br/>**It's only used when the provider is `kafka`**. |
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `/tmp/greptimedb/` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
| `storage.cache_path` | String | `None` | Cache configuration for object storage such as 'S3' etc.<br/>The local file cache directory. |
| `storage.cache_capacity` | String | `None` | The local file cache capacity in bytes. |
| `storage.bucket` | String | `None` | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
| `storage.root` | String | `None` | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.<br/>**It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
| `storage.access_key_id` | String | `None` | The access key id of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3` and `Oss`**. |
| `storage.secret_access_key` | String | `None` | The secret access key of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3`**. |
| `storage.access_key_secret` | String | `None` | The secret access key of the aliyun account.<br/>**It's only used when the storage type is `Oss`**. |
| `storage.account_name` | String | `None` | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.account_key` | String | `None` | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.scope` | String | `None` | The scope of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.credential_path` | String | `None` | The credential path of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.container` | String | `None` | The container of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.sas_token` | String | `None` | The sas token of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.endpoint` | String | `None` | The endpoint of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
| `storage.region` | String | `None` | The region of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
| `[[region_engine]]` | -- | -- | The region engine options. You can configure multiple region engines. |
| `region_engine.mito` | -- | -- | The Mito engine options. |
| `region_engine.mito.num_workers` | Integer | `8` | Number of region workers. |
| `region_engine.mito.worker_channel_size` | Integer | `128` | Request channel size of each worker. |
| `region_engine.mito.worker_request_batch_size` | Integer | `64` | Max batch size for a worker to handle requests. |
| `region_engine.mito.manifest_checkpoint_distance` | Integer | `10` | Number of meta action updates needed to trigger a new checkpoint for the manifest. |
| `region_engine.mito.compress_manifest` | Bool | `false` | Whether to compress manifest and checkpoint file by gzip (default false). |
| `region_engine.mito.max_background_jobs` | Integer | `4` | Max number of running background jobs |
| `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
| `region_engine.mito.global_write_buffer_size` | String | `1GB` | Global write buffer size for all regions. If not set, it defaults to 1/8 of OS memory, with a maximum of 1GB. |
| `region_engine.mito.global_write_buffer_reject_size` | String | `2GB` | Global write buffer size threshold to reject write requests. If not set, it defaults to 2 times `global_write_buffer_size`. |
| `region_engine.mito.sst_meta_cache_size` | String | `128MB` | Cache size for SST metadata. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/32 of OS memory, with a maximum of 128MB. |
| `region_engine.mito.vector_cache_size` | String | `512MB` | Cache size for vectors and arrow arrays. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB. |
| `region_engine.mito.page_cache_size` | String | `512MB` | Cache size for pages of SST row groups. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB. |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.scan_parallelism` | Integer | `0` | Parallelism to scan a region (default: 1/4 of cpu cores).<br/>- `0`: using the default value (1/4 of cpu cores).<br/>- `1`: scan in current thread.<br/>- `n`: scan in parallelism n. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.inverted_index` | -- | -- | The options for inverted index in Mito engine. |
| `region_engine.mito.inverted_index.create_on_flush` | String | `auto` | Whether to create the index on flush.<br/>- `auto`: automatically<br/>- `disable`: never |
| `region_engine.mito.inverted_index.create_on_compaction` | String | `auto` | Whether to create the index on compaction.<br/>- `auto`: automatically<br/>- `disable`: never |
| `region_engine.mito.inverted_index.apply_on_query` | String | `auto` | Whether to apply the index on query<br/>- `auto`: automatically<br/>- `disable`: never |
| `region_engine.mito.inverted_index.mem_threshold_on_create` | String | `64M` | Memory threshold for performing an external sort during index creation.<br/>Setting to empty will disable external sorting, forcing all sorting operations to happen in memory. |
| `region_engine.mito.inverted_index.intermediate_path` | String | `""` | File system path to store intermediate files for external sorting (default `{data_home}/index_intermediate`). |
| `region_engine.mito.memtable` | -- | -- | -- |
| `region_engine.mito.memtable.type` | String | `time_series` | Memtable type.<br/>- `time_series`: time-series memtable<br/>- `partition_tree`: partition tree memtable (experimental) |
| `region_engine.mito.memtable.index_max_keys_per_shard` | Integer | `8192` | The max number of keys in one shard.<br/>Only available for `partition_tree` memtable. |
| `region_engine.mito.memtable.data_freeze_threshold` | Integer | `32768` | The max rows of data inside the actively writing buffer in one shard.<br/>Only available for `partition_tree` memtable. |
| `region_engine.mito.memtable.fork_dictionary_bytes` | String | `1GiB` | Max dictionary bytes.<br/>Only available for `partition_tree` memtable. |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing that will be sampled and exported.<br/>Valid range is `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1 and ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `export_metrics` | -- | -- | The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally; it is different from Prometheus scraping. |
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval at which metrics are exported. |
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended for collecting the metrics generated by the process itself. |
| `export_metrics.self_import.db` | String | `None` | -- |
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The URL to send the metrics to, for example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers carried by the Prometheus remote-write requests. |
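Several of the defaults above are derived from the host's memory with a hard cap (1/8 of OS memory capped at 1GB, 1/32 at 128MB, 1/16 at 512MB). A minimal Rust sketch of that sizing rule, using an assumed 16GB host purely for illustration; the server computes the actual values itself:
```rust
/// Sketch of the "fraction of OS memory, capped" rule described in the table above.
fn capped_fraction(os_memory_bytes: u64, denominator: u64, cap_bytes: u64) -> u64 {
    (os_memory_bytes / denominator).min(cap_bytes)
}

fn main() {
    const GB: u64 = 1024 * 1024 * 1024;
    const MB: u64 = 1024 * 1024;
    let os_memory = 16 * GB; // hypothetical host with 16GB of RAM

    // global_write_buffer_size: 1/8 of OS memory, at most 1GB
    println!("write buffer: {} MB", capped_fraction(os_memory, 8, GB) / MB);
    // sst_meta_cache_size: 1/32 of OS memory, at most 128MB
    println!("sst meta cache: {} MB", capped_fraction(os_memory, 32, 128 * MB) / MB);
    // vector_cache_size / page_cache_size: 1/16 of OS memory, at most 512MB
    println!("vector/page cache: {} MB", capped_fraction(os_memory, 16, 512 * MB) / MB);
}
```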

View File

@@ -1,89 +1,430 @@
# Node running mode, see `standalone.example.toml`.
mode = "distributed"
# The datanode identifier, should be unique.
## The running mode of the datanode. It can be `standalone` or `distributed`.
mode = "standalone"
## The datanode identifier; it should be unique in the cluster.
## +toml2docs:none-default
node_id = 42
# gRPC server address, "127.0.0.1:3001" by default.
rpc_addr = "127.0.0.1:3001"
# Hostname of this node.
rpc_hostname = "127.0.0.1"
# The number of gRPC server worker threads, 8 by default.
rpc_runtime_size = 8
# Start services after regions have obtained leases.
# It will block the datanode start if it can't receive leases in the heartbeat from metasrv.
## Start services after regions have obtained leases.
## It will block the datanode start if it can't receive leases in the heartbeat from metasrv.
require_lease_before_startup = false
## Initialize all regions in the background during the startup.
## By default, it provides services after all regions have been initialized.
init_regions_in_background = false
## The gRPC address of the datanode.
rpc_addr = "127.0.0.1:3001"
## The hostname of the datanode.
## +toml2docs:none-default
rpc_hostname = "127.0.0.1"
## The number of gRPC server worker threads.
rpc_runtime_size = 8
## The maximum receive message size for gRPC server.
rpc_max_recv_message_size = "512MB"
## The maximum send message size for gRPC server.
rpc_max_send_message_size = "512MB"
## Enable telemetry to collect anonymous usage data.
enable_telemetry = true
## The heartbeat options.
[heartbeat]
# Interval for sending heartbeat messages to the Metasrv, 3 seconds by default.
## Interval for sending heartbeat messages to the metasrv.
interval = "3s"
# Metasrv client options.
## Interval for retrying to send heartbeat messages to the metasrv.
retry_interval = "3s"
## The metasrv client options.
[meta_client]
# Metasrv address list.
## The addresses of the metasrv.
metasrv_addrs = ["127.0.0.1:3002"]
# Heartbeat timeout, 500 milliseconds by default.
heartbeat_timeout = "500ms"
# Operation timeout, 3 seconds by default.
## Operation timeout.
timeout = "3s"
# Connect server timeout, 1 second by default.
## Heartbeat timeout.
heartbeat_timeout = "500ms"
## DDL timeout.
ddl_timeout = "10s"
## Connect server timeout.
connect_timeout = "1s"
# `TCP_NODELAY` option for accepted connections, true by default.
## `TCP_NODELAY` option for accepted connections.
tcp_nodelay = true
# WAL options, see `standalone.example.toml`.
## The configuration about the cache of the metadata.
metadata_cache_max_capacity = 100000
## TTL of the metadata cache.
metadata_cache_ttl = "10m"
# TTI of the metadata cache.
metadata_cache_tti = "5m"
## The WAL options.
[wal]
# WAL data directory
# dir = "/tmp/greptimedb/wal"
## The provider of the WAL.
## - `raft_engine`: the WAL is stored in the local file system by raft-engine.
## - `kafka`: remote WAL where the data is stored in Kafka.
provider = "raft_engine"
## The directory to store the WAL files.
## **It's only used when the provider is `raft_engine`**.
## +toml2docs:none-default
dir = "/tmp/greptimedb/wal"
## The size of the WAL segment file.
## **It's only used when the provider is `raft_engine`**.
file_size = "256MB"
## The threshold of the WAL size to trigger a flush.
## **It's only used when the provider is `raft_engine`**.
purge_threshold = "4GB"
## The interval to trigger a flush.
## **It's only used when the provider is `raft_engine`**.
purge_interval = "10m"
## The read batch size.
## **It's only used when the provider is `raft_engine`**.
read_batch_size = 128
## Whether to use sync write.
## **It's only used when the provider is `raft_engine`**.
sync_write = false
# Storage options, see `standalone.example.toml`.
## Whether to reuse logically truncated log files.
## **It's only used when the provider is `raft_engine`**.
enable_log_recycle = true
## Whether to pre-create log files on start up.
## **It's only used when the provider is `raft_engine`**.
prefill_log_files = false
## Duration for fsyncing log files.
## **It's only used when the provider is `raft_engine`**.
sync_period = "10s"
## The Kafka broker endpoints.
## **It's only used when the provider is `kafka`**.
broker_endpoints = ["127.0.0.1:9092"]
## The max size of a single producer batch.
## Warning: Kafka has a default limit of 1MB per message in a topic.
## **It's only used when the provider is `kafka`**.
max_batch_size = "1MB"
## The linger duration of a kafka batch producer.
## **It's only used when the provider is `kafka`**.
linger = "200ms"
## The consumer wait timeout.
## **It's only used when the provider is `kafka`**.
consumer_wait_timeout = "100ms"
## The initial backoff delay.
## **It's only used when the provider is `kafka`**.
backoff_init = "500ms"
## The maximum backoff delay.
## **It's only used when the provider is `kafka`**.
backoff_max = "10s"
## The exponential backoff rate, i.e. next backoff = base * current backoff.
## **It's only used when the provider is `kafka`**.
backoff_base = 2
## The deadline of retries.
## **It's only used when the provider is `kafka`**.
backoff_deadline = "5mins"
# Example of using S3 as the storage.
# [storage]
# type = "S3"
# bucket = "greptimedb"
# root = "data"
# access_key_id = "test"
# secret_access_key = "123456"
# endpoint = "https://s3.amazonaws.com"
# region = "us-west-2"
# Example of using Oss as the storage.
# [storage]
# type = "Oss"
# bucket = "greptimedb"
# root = "data"
# access_key_id = "test"
# access_key_secret = "123456"
# endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Example of using Azblob as the storage.
# [storage]
# type = "Azblob"
# container = "greptimedb"
# root = "data"
# account_name = "test"
# account_key = "123456"
# endpoint = "https://greptimedb.blob.core.windows.net"
# sas_token = ""
# Example of using Gcs as the storage.
# [storage]
# type = "Gcs"
# bucket = "greptimedb"
# root = "data"
# scope = "test"
# credential_path = "123456"
# endpoint = "https://storage.googleapis.com"
## The data storage options.
[storage]
# The working home directory.
## The working home directory.
data_home = "/tmp/greptimedb/"
## The storage type used to store the data.
## - `File`: the data is stored in the local file system.
## - `S3`: the data is stored in the S3 object storage.
## - `Gcs`: the data is stored in the Google Cloud Storage.
## - `Azblob`: the data is stored in the Azure Blob Storage.
## - `Oss`: the data is stored in the Aliyun OSS.
type = "File"
# TTL for all tables. Disabled by default.
# global_ttl = "7d"
# Cache configuration for object storage such as 'S3' etc.
# The local file cache directory
# cache_path = "/path/local_cache"
# The local file cache capacity in bytes.
# cache_capacity = "256MB"
## Cache configuration for object storage such as 'S3' etc.
## The local file cache directory.
## +toml2docs:none-default
cache_path = "/path/local_cache"
# Mito engine options
## The local file cache capacity in bytes.
## +toml2docs:none-default
cache_capacity = "256MB"
## The S3 bucket name.
## **It's only used when the storage type is `S3`, `Oss` and `Gcs`**.
## +toml2docs:none-default
bucket = "greptimedb"
## The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.
## **It's only used when the storage type is `S3`, `Oss` and `Azblob`**.
## +toml2docs:none-default
root = "greptimedb"
## The access key id of the aws account.
## It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.
## **It's only used when the storage type is `S3` and `Oss`**.
## +toml2docs:none-default
access_key_id = "test"
## The secret access key of the aws account.
## It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.
## **It's only used when the storage type is `S3`**.
## +toml2docs:none-default
secret_access_key = "test"
## The secret access key of the aliyun account.
## **It's only used when the storage type is `Oss`**.
## +toml2docs:none-default
access_key_secret = "test"
## The account name of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
account_name = "test"
## The account key of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
account_key = "test"
## The scope of the google cloud storage.
## **It's only used when the storage type is `Gcs`**.
## +toml2docs:none-default
scope = "test"
## The credential path of the google cloud storage.
## **It's only used when the storage type is `Gcs`**.
## +toml2docs:none-default
credential_path = "test"
## The container of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
container = "greptimedb"
## The sas token of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
sas_token = ""
## The endpoint of the S3 service.
## **It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**.
## +toml2docs:none-default
endpoint = "https://s3.amazonaws.com"
## The region of the S3 service.
## **It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**.
## +toml2docs:none-default
region = "us-west-2"
# Custom storage options
# [[storage.providers]]
# type = "S3"
# [[storage.providers]]
# type = "Gcs"
## The region engine options. You can configure multiple region engines.
[[region_engine]]
## The Mito engine options.
[region_engine.mito]
# Number of region workers
## Number of region workers.
num_workers = 8
# Request channel size of each worker
## Request channel size of each worker.
worker_channel_size = 128
# Max batch size for a worker to handle requests
## Max batch size for a worker to handle requests.
worker_request_batch_size = 64
# Number of meta action updated to trigger a new checkpoint for the manifest
## Number of meta action updated to trigger a new checkpoint for the manifest.
manifest_checkpoint_distance = 10
# Whether to compress manifest and checkpoint file by gzip (default false).
## Whether to compress manifest and checkpoint file by gzip (default false).
compress_manifest = false
# Max number of running background jobs
## Max number of running background jobs
max_background_jobs = 4
# Interval to auto flush a region if it has not flushed yet.
## Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
# Global write buffer size for all regions.
## Global write buffer size for all regions. If not set, it defaults to 1/8 of OS memory, with a maximum of 1GB.
global_write_buffer_size = "1GB"
# Global write buffer size threshold to reject write requests (default 2G).
## Global write buffer size threshold to reject write requests. If not set, it defaults to twice `global_write_buffer_size`.
global_write_buffer_reject_size = "2GB"
# Cache size for SST metadata (default 128MB). Setting it to 0 to disable the cache.
## Cache size for SST metadata. Set it to 0 to disable the cache.
## If not set, it defaults to 1/32 of OS memory, with a maximum of 128MB.
sst_meta_cache_size = "128MB"
# Cache size for vectors and arrow arrays (default 512MB). Setting it to 0 to disable the cache.
## Cache size for vectors and arrow arrays. Set it to 0 to disable the cache.
## If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB.
vector_cache_size = "512MB"
# Cache size for pages of SST row groups (default 512MB). Setting it to 0 to disable the cache.
## Cache size for pages of SST row groups. Set it to 0 to disable the cache.
## If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB.
page_cache_size = "512MB"
# Buffer size for SST writing.
## Buffer size for SST writing.
sst_write_buffer_size = "8MB"
# Log options, see `standalone.example.toml`
# [logging]
# dir = "/tmp/greptimedb/logs"
# level = "info"
## Parallelism to scan a region (default: 1/4 of cpu cores).
## - `0`: using the default value (1/4 of cpu cores).
## - `1`: scan in current thread.
## - `n`: scan in parallelism n.
scan_parallelism = 0
## Capacity of the channel to send data from parallel scan tasks to the main task.
parallel_scan_channel_size = 32
## Whether to allow stale WAL entries read during replay.
allow_stale_entries = false
## The options for inverted index in Mito engine.
[region_engine.mito.inverted_index]
## Whether to create the index on flush.
## - `auto`: automatically
## - `disable`: never
create_on_flush = "auto"
## Whether to create the index on compaction.
## - `auto`: automatically
## - `disable`: never
create_on_compaction = "auto"
## Whether to apply the index on query
## - `auto`: automatically
## - `disable`: never
apply_on_query = "auto"
## Memory threshold for performing an external sort during index creation.
## Setting to empty will disable external sorting, forcing all sorting operations to happen in memory.
mem_threshold_on_create = "64M"
## File system path to store intermediate files for external sorting (default `{data_home}/index_intermediate`).
intermediate_path = ""
[region_engine.mito.memtable]
## Memtable type.
## - `time_series`: time-series memtable
## - `partition_tree`: partition tree memtable (experimental)
type = "time_series"
## The max number of keys in one shard.
## Only available for `partition_tree` memtable.
index_max_keys_per_shard = 8192
## The max rows of data inside the actively writing buffer in one shard.
## Only available for `partition_tree` memtable.
data_freeze_threshold = 32768
## Max dictionary bytes.
## Only available for `partition_tree` memtable.
fork_dictionary_bytes = "1GiB"
## The logging options.
[logging]
## The directory to store the log files.
dir = "/tmp/greptimedb/logs"
## The log level. Can be `info`/`debug`/`warn`/`error`.
## +toml2docs:none-default
level = "info"
## Enable OTLP tracing.
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
## Whether to append logs to stdout.
append_stdout = true
## The percentage of tracing that will be sampled and exported.
## Valid range is `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.
## Ratios > 1 are treated as 1 and ratios < 0 are treated as 0.
[logging.tracing_sample_ratio]
default_ratio = 1.0
## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally; it is different from Prometheus scraping.
[export_metrics]
## Whether to enable exporting metrics.
enable = false
## The interval at which metrics are exported.
write_interval = "30s"
## For `standalone` mode, `self_import` is recommended for collecting the metrics generated by the process itself.
[export_metrics.self_import]
## +toml2docs:none-default
db = "information_schema"
[export_metrics.remote_write]
## The URL to send the metrics to, for example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`.
url = ""
## HTTP headers carried by the Prometheus remote-write requests.
headers = { }
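The Kafka retry options in the `[wal]` section above (`backoff_init`, `backoff_max`, `backoff_base`, `backoff_deadline`) describe an exponential backoff schedule. A minimal Rust sketch of that schedule, assuming the semantics stated in the comments (each delay is multiplied by the base, capped at the maximum, and retries stop at the deadline); an illustration, not the actual client code:
```rust
use std::time::Duration;

/// Compute the retry delays implied by the backoff settings (assumed semantics).
fn backoff_schedule(init: Duration, max: Duration, base: u32, deadline: Duration) -> Vec<Duration> {
    let mut delays = Vec::new();
    let mut current = init;
    let mut waited = Duration::ZERO;
    // Keep retrying until the accumulated wait would exceed the deadline.
    while waited + current <= deadline {
        delays.push(current);
        waited += current;
        current = (current * base).min(max);
    }
    delays
}

fn main() {
    // Values from this file: 500ms initial, 10s max, base 2, 5-minute deadline.
    let schedule = backoff_schedule(
        Duration::from_millis(500),
        Duration::from_secs(10),
        2,
        Duration::from_secs(300),
    );
    println!("{} retries, delays: {:?}", schedule.len(), schedule);
}
```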

View File

@@ -1,79 +1,192 @@
# Node running mode, see `standalone.example.toml`.
mode = "distributed"
## The running mode of the datanode. It can be `standalone` or `distributed`.
mode = "standalone"
## The default timezone of the server.
## +toml2docs:none-default
default_timezone = "UTC"
## The heartbeat options.
[heartbeat]
# Interval for sending heartbeat task to the Metasrv, 5 seconds by default.
interval = "5s"
# Interval for retry sending heartbeat task, 5 seconds by default.
retry_interval = "5s"
## Interval for sending heartbeat messages to the metasrv.
interval = "18s"
# HTTP server options, see `standalone.example.toml`.
## Interval for retrying to send heartbeat messages to the metasrv.
retry_interval = "3s"
## The HTTP server options.
[http]
## The address to bind the HTTP server.
addr = "127.0.0.1:4000"
## HTTP request timeout.
timeout = "30s"
## HTTP request body limit.
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
body_limit = "64MB"
# gRPC server options, see `standalone.example.toml`.
## The gRPC server options.
[grpc]
## The address to bind the gRPC server.
addr = "127.0.0.1:4001"
## The number of server worker threads.
runtime_size = 8
# MySQL server options, see `standalone.example.toml`.
## MySQL server options.
[mysql]
## Whether to enable.
enable = true
## The addr to bind the MySQL server.
addr = "127.0.0.1:4002"
## The number of server worker threads.
runtime_size = 2
# MySQL server TLS options, see `standalone.example.toml`.
# MySQL server TLS options.
[mysql.tls]
## TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
## - `disable` (default value)
## - `prefer`
## - `require`
## - `verify-ca`
## - `verify-full`
mode = "disable"
## Certificate file path.
## +toml2docs:none-default
cert_path = ""
## Private key file path.
## +toml2docs:none-default
key_path = ""
# PostgresSQL server options, see `standalone.example.toml`.
## Watch for certificate and key file changes and reload them automatically.
watch = false
## PostgresSQL server options.
[postgres]
## Whether to enable
enable = true
## The addr to bind the PostgresSQL server.
addr = "127.0.0.1:4003"
## The number of server worker threads.
runtime_size = 2
# PostgresSQL server TLS options, see `standalone.example.toml`.
## PostgresSQL server TLS options, see `mysql_options.tls` section.
[postgres.tls]
## TLS mode.
mode = "disable"
## Certificate file path.
## +toml2docs:none-default
cert_path = ""
## Private key file path.
## +toml2docs:none-default
key_path = ""
# OpenTSDB protocol options, see `standalone.example.toml`.
## Watch for certificate and key file changes and reload them automatically.
watch = false
## OpenTSDB protocol options.
[opentsdb]
## Whether to enable
enable = true
## OpenTSDB telnet API server address.
addr = "127.0.0.1:4242"
## The number of server worker threads.
runtime_size = 2
# InfluxDB protocol options, see `standalone.example.toml`.
## InfluxDB protocol options.
[influxdb]
## Whether to enable InfluxDB protocol in HTTP API.
enable = true
# Prometheus remote storage options, see `standalone.example.toml`.
## Prometheus remote storage options
[prom_store]
## Whether to enable Prometheus remote write and read in HTTP API.
enable = true
## Whether to store the data from Prometheus remote write in metric engine.
with_metric_engine = true
# Metasrv client options, see `datanode.example.toml`.
## The metasrv client options.
[meta_client]
## The addresses of the metasrv.
metasrv_addrs = ["127.0.0.1:3002"]
## Operation timeout.
timeout = "3s"
# DDL timeouts options.
## Heartbeat timeout.
heartbeat_timeout = "500ms"
## DDL timeout.
ddl_timeout = "10s"
## Connect server timeout.
connect_timeout = "1s"
## `TCP_NODELAY` option for accepted connections.
tcp_nodelay = true
# Log options, see `standalone.example.toml`
# [logging]
# dir = "/tmp/greptimedb/logs"
# level = "info"
## The configuration about the cache of the metadata.
metadata_cache_max_capacity = 100000
# Datanode options.
## TTL of the metadata cache.
metadata_cache_ttl = "10m"
# TTI of the metadata cache.
metadata_cache_tti = "5m"
## Datanode options.
[datanode]
# Datanode client options.
## Datanode client options.
[datanode.client]
timeout = "10s"
connect_timeout = "10s"
tcp_nodelay = true
## The logging options.
[logging]
## The directory to store the log files.
dir = "/tmp/greptimedb/logs"
## The log level. Can be `info`/`debug`/`warn`/`error`.
## +toml2docs:none-default
level = "info"
## Enable OTLP tracing.
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
## Whether to append logs to stdout.
append_stdout = true
## The percentage of tracing that will be sampled and exported.
## Valid range is `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.
## Ratios > 1 are treated as 1 and ratios < 0 are treated as 0.
[logging.tracing_sample_ratio]
default_ratio = 1.0
## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally; it is different from Prometheus scraping.
[export_metrics]
## Whether to enable exporting metrics.
enable = false
## The interval at which metrics are exported.
write_interval = "30s"
## For `standalone` mode, `self_import` is recommended for collecting the metrics generated by the process itself.
[export_metrics.self_import]
## +toml2docs:none-default
db = "information_schema"
[export_metrics.remote_write]
## The URL to send the metrics to, for example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`.
url = ""
## HTTP headers carried by the Prometheus remote-write requests.
headers = { }

View File

@@ -1,33 +1,46 @@
# The working home directory.
## The working home directory.
data_home = "/tmp/metasrv/"
# The bind address of metasrv, "127.0.0.1:3002" by default.
## The bind address of metasrv.
bind_addr = "127.0.0.1:3002"
# The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost.
## The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost.
server_addr = "127.0.0.1:3002"
# Etcd server address, "127.0.0.1:2379" by default.
## Etcd server address.
store_addr = "127.0.0.1:2379"
# Datanode selector type.
# - "LeaseBased" (default value).
# - "LoadBased"
# For details, please see "https://docs.greptime.com/developer-guide/meta/selector".
selector = "LeaseBased"
# Store data in memory, false by default.
## Datanode selector type.
## - `lease_based` (default value).
## - `load_based`
## For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector".
selector = "lease_based"
## Store data in memory.
use_memory_store = false
# Whether to enable greptimedb telemetry, true by default.
## Whether to enable greptimedb telemetry.
enable_telemetry = true
# Log options, see `standalone.example.toml`
# [logging]
# dir = "/tmp/greptimedb/logs"
# level = "info"
## If it's not empty, the metasrv will store all data with this key prefix.
store_key_prefix = ""
# Procedure storage options.
## Procedure storage options.
[procedure]
# Procedure max retry time.
## Procedure max retry time.
max_retry_times = 12
# Initial retry delay of procedures, increases exponentially
## Initial retry delay of procedures, increases exponentially
retry_delay = "500ms"
## Automatically split large values.
## GreptimeDB procedures use etcd as the default metadata storage backend.
## The maximum size of any etcd request is 1.5 MiB:
## 1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key)
## Comment out `max_metadata_value_size` to disable splitting of large values (no limit).
max_metadata_value_size = "1500KiB"
# Failure detectors options.
[failure_detector]
threshold = 8.0
@@ -35,10 +48,96 @@ min_std_deviation = "100ms"
acceptable_heartbeat_pause = "3000ms"
first_heartbeat_estimate = "1000ms"
# # Datanode options.
# [datanode]
# # Datanode client options.
# [datanode.client_options]
# timeout = "10s"
# connect_timeout = "10s"
# tcp_nodelay = true
## Datanode options.
[datanode]
## Datanode client options.
[datanode.client]
timeout = "10s"
connect_timeout = "10s"
tcp_nodelay = true
[wal]
# Available WAL providers:
# - `raft_engine` (default): there is no raft-engine WAL config here, since metasrv currently only takes part in remote WAL.
# - `kafka`: metasrv **has to be** configured with the Kafka WAL config when the datanode uses the Kafka WAL provider.
provider = "raft_engine"
# Kafka wal config.
## The broker endpoints of the Kafka cluster.
broker_endpoints = ["127.0.0.1:9092"]
## Number of topics to be created upon start.
num_topics = 64
## Topic selector type.
## Available selector types:
## - `round_robin` (default)
selector_type = "round_robin"
## A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.
topic_name_prefix = "greptimedb_wal_topic"
## Expected number of replicas of each partition.
replication_factor = 1
## The timeout above which a topic creation operation will be cancelled.
create_topic_timeout = "30s"
## The initial backoff for kafka clients.
backoff_init = "500ms"
## The maximum backoff for kafka clients.
backoff_max = "10s"
## Exponential backoff rate, i.e. next backoff = base * current backoff.
backoff_base = 2
## Stop reconnecting if the total wait time reaches the deadline. If this config is missing, reconnection won't terminate.
backoff_deadline = "5mins"
## The logging options.
[logging]
## The directory to store the log files.
dir = "/tmp/greptimedb/logs"
## The log level. Can be `info`/`debug`/`warn`/`error`.
## +toml2docs:none-default
level = "info"
## Enable OTLP tracing.
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
## Whether to append logs to stdout.
append_stdout = true
## The percentage of tracing that will be sampled and exported.
## Valid range is `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.
## Ratios > 1 are treated as 1 and ratios < 0 are treated as 0.
[logging.tracing_sample_ratio]
default_ratio = 1.0
## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally; it is different from Prometheus scraping.
[export_metrics]
## Whether to enable exporting metrics.
enable = false
## The interval at which metrics are exported.
write_interval = "30s"
## For `standalone` mode, `self_import` is recommended for collecting the metrics generated by the process itself.
[export_metrics.self_import]
## +toml2docs:none-default
db = "information_schema"
[export_metrics.remote_write]
## The URL to send the metrics to, for example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`.
url = ""
## HTTP headers carried by the Prometheus remote-write requests.
headers = { }

View File

@@ -1,166 +1,477 @@
# Node running mode, "standalone" or "distributed".
## The running mode of the datanode. It can be `standalone` or `distributed`.
mode = "standalone"
# Whether to enable greptimedb telemetry, true by default.
## Enable telemetry to collect anonymous usage data.
enable_telemetry = true
# HTTP server options.
## The default timezone of the server.
## +toml2docs:none-default
default_timezone = "UTC"
## The HTTP server options.
[http]
# Server address, "127.0.0.1:4000" by default.
## The address to bind the HTTP server.
addr = "127.0.0.1:4000"
# HTTP request timeout, 30s by default.
## HTTP request timeout.
timeout = "30s"
# HTTP request body limit, 64Mb by default.
# the following units are supported: B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, PiB
## HTTP request body limit.
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
body_limit = "64MB"
# gRPC server options.
## The gRPC server options.
[grpc]
# Server address, "127.0.0.1:4001" by default.
## The address to bind the gRPC server.
addr = "127.0.0.1:4001"
# The number of server worker threads, 8 by default.
## The number of server worker threads.
runtime_size = 8
# MySQL server options.
## MySQL server options.
[mysql]
# Whether to enable
## Whether to enable.
enable = true
# Server address, "127.0.0.1:4002" by default.
## The addr to bind the MySQL server.
addr = "127.0.0.1:4002"
# The number of server worker threads, 2 by default.
## The number of server worker threads.
runtime_size = 2
# MySQL server TLS options.
[mysql.tls]
# TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
# - "disable" (default value)
# - "prefer"
# - "require"
# - "verify-ca"
# - "verify-full"
## TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
## - `disable` (default value)
## - `prefer`
## - `require`
## - `verify-ca`
## - `verify-full`
mode = "disable"
# Certificate file path.
## Certificate file path.
## +toml2docs:none-default
cert_path = ""
# Private key file path.
## Private key file path.
## +toml2docs:none-default
key_path = ""
# PostgresSQL server options.
## Watch for certificate and key file changes and reload them automatically.
watch = false
## PostgresSQL server options.
[postgres]
# Whether to enable
## Whether to enable
enable = true
# Server address, "127.0.0.1:4003" by default.
## The addr to bind the PostgresSQL server.
addr = "127.0.0.1:4003"
# The number of server worker threads, 2 by default.
## The number of server worker threads.
runtime_size = 2
# PostgresSQL server TLS options, see `[mysql_options.tls]` section.
## PostgresSQL server TLS options, see `mysql_options.tls` section.
[postgres.tls]
# TLS mode.
## TLS mode.
mode = "disable"
# certificate file path.
## Certificate file path.
## +toml2docs:none-default
cert_path = ""
# private key file path.
## Private key file path.
## +toml2docs:none-default
key_path = ""
# OpenTSDB protocol options.
## Watch for certificate and key file changes and reload them automatically.
watch = false
## OpenTSDB protocol options.
[opentsdb]
# Whether to enable
## Whether to enable
enable = true
# OpenTSDB telnet API server address, "127.0.0.1:4242" by default.
## OpenTSDB telnet API server address.
addr = "127.0.0.1:4242"
# The number of server worker threads, 2 by default.
## The number of server worker threads.
runtime_size = 2
# InfluxDB protocol options.
## InfluxDB protocol options.
[influxdb]
# Whether to enable InfluxDB protocol in HTTP API, true by default.
## Whether to enable InfluxDB protocol in HTTP API.
enable = true
# Prometheus remote storage options
## Prometheus remote storage options
[prom_store]
# Whether to enable Prometheus remote write and read in HTTP API, true by default.
## Whether to enable Prometheus remote write and read in HTTP API.
enable = true
## Whether to store the data from Prometheus remote write in metric engine.
with_metric_engine = true
# WAL options.
## The WAL options.
[wal]
# WAL data directory
# dir = "/tmp/greptimedb/wal"
# WAL file size in bytes.
## The provider of the WAL.
## - `raft_engine`: the WAL is stored in the local file system by raft-engine.
## - `kafka`: remote WAL where the data is stored in Kafka.
provider = "raft_engine"
## The directory to store the WAL files.
## **It's only used when the provider is `raft_engine`**.
## +toml2docs:none-default
dir = "/tmp/greptimedb/wal"
## The size of the WAL segment file.
## **It's only used when the provider is `raft_engine`**.
file_size = "256MB"
# WAL purge threshold.
## The threshold of the WAL size to trigger a flush.
## **It's only used when the provider is `raft_engine`**.
purge_threshold = "4GB"
# WAL purge interval in seconds.
## The interval to trigger a flush.
## **It's only used when the provider is `raft_engine`**.
purge_interval = "10m"
# WAL read batch size.
## The read batch size.
## **It's only used when the provider is `raft_engine`**.
read_batch_size = 128
# Whether to sync log file after every write.
## Whether to use sync write.
## **It's only used when the provider is `raft_engine`**.
sync_write = false
# Metadata storage options.
## Whether to reuse logically truncated log files.
## **It's only used when the provider is `raft_engine`**.
enable_log_recycle = true
## Whether to pre-create log files on start up.
## **It's only used when the provider is `raft_engine`**.
prefill_log_files = false
## Duration for fsyncing log files.
## **It's only used when the provider is `raft_engine`**.
sync_period = "10s"
## The Kafka broker endpoints.
## **It's only used when the provider is `kafka`**.
broker_endpoints = ["127.0.0.1:9092"]
## The max size of a single producer batch.
## Warning: Kafka has a default limit of 1MB per message in a topic.
## **It's only used when the provider is `kafka`**.
max_batch_size = "1MB"
## The linger duration of a kafka batch producer.
## **It's only used when the provider is `kafka`**.
linger = "200ms"
## The consumer wait timeout.
## **It's only used when the provider is `kafka`**.
consumer_wait_timeout = "100ms"
## The initial backoff delay.
## **It's only used when the provider is `kafka`**.
backoff_init = "500ms"
## The maximum backoff delay.
## **It's only used when the provider is `kafka`**.
backoff_max = "10s"
## The exponential backoff rate, i.e. next backoff = base * current backoff.
## **It's only used when the provider is `kafka`**.
backoff_base = 2
## The deadline of retries.
## **It's only used when the provider is `kafka`**.
backoff_deadline = "5mins"
## Metadata storage options.
[metadata_store]
# Kv file size in bytes.
## Kv file size in bytes.
file_size = "256MB"
# Kv purge threshold.
## Kv purge threshold.
purge_threshold = "4GB"
# Procedure storage options.
## Procedure storage options.
[procedure]
# Procedure max retry time.
## Procedure max retry time.
max_retry_times = 3
# Initial retry delay of procedures, increases exponentially
## Initial retry delay of procedures, increases exponentially
retry_delay = "500ms"
# Storage options.
[storage]
# The working home directory.
data_home = "/tmp/greptimedb/"
# Storage type.
type = "File"
# TTL for all tables. Disabled by default.
# global_ttl = "7d"
# Cache configuration for object storage such as 'S3' etc.
# cache_path = "/path/local_cache"
# The local file cache capacity in bytes.
# cache_capacity = "256MB"
# Example of using S3 as the storage.
# [storage]
# type = "S3"
# bucket = "greptimedb"
# root = "data"
# access_key_id = "test"
# secret_access_key = "123456"
# endpoint = "https://s3.amazonaws.com"
# region = "us-west-2"
# Mito engine options
# Example of using Oss as the storage.
# [storage]
# type = "Oss"
# bucket = "greptimedb"
# root = "data"
# access_key_id = "test"
# access_key_secret = "123456"
# endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
# Example of using Azblob as the storage.
# [storage]
# type = "Azblob"
# container = "greptimedb"
# root = "data"
# account_name = "test"
# account_key = "123456"
# endpoint = "https://greptimedb.blob.core.windows.net"
# sas_token = ""
# Example of using Gcs as the storage.
# [storage]
# type = "Gcs"
# bucket = "greptimedb"
# root = "data"
# scope = "test"
# credential_path = "123456"
# endpoint = "https://storage.googleapis.com"
## The data storage options.
[storage]
## The working home directory.
data_home = "/tmp/greptimedb/"
## The storage type used to store the data.
## - `File`: the data is stored in the local file system.
## - `S3`: the data is stored in the S3 object storage.
## - `Gcs`: the data is stored in the Google Cloud Storage.
## - `Azblob`: the data is stored in the Azure Blob Storage.
## - `Oss`: the data is stored in the Aliyun OSS.
type = "File"
## Cache configuration for object storage such as 'S3' etc.
## The local file cache directory.
## +toml2docs:none-default
cache_path = "/path/local_cache"
## The local file cache capacity in bytes.
## +toml2docs:none-default
cache_capacity = "256MB"
## The S3 bucket name.
## **It's only used when the storage type is `S3`, `Oss` and `Gcs`**.
## +toml2docs:none-default
bucket = "greptimedb"
## The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.
## **It's only used when the storage type is `S3`, `Oss` and `Azblob`**.
## +toml2docs:none-default
root = "greptimedb"
## The access key id of the aws account.
## It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.
## **It's only used when the storage type is `S3` and `Oss`**.
## +toml2docs:none-default
access_key_id = "test"
## The secret access key of the aws account.
## It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.
## **It's only used when the storage type is `S3`**.
## +toml2docs:none-default
secret_access_key = "test"
## The secret access key of the aliyun account.
## **It's only used when the storage type is `Oss`**.
## +toml2docs:none-default
access_key_secret = "test"
## The account name of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
account_name = "test"
## The account key of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
account_key = "test"
## The scope of the google cloud storage.
## **It's only used when the storage type is `Gcs`**.
## +toml2docs:none-default
scope = "test"
## The credential path of the google cloud storage.
## **It's only used when the storage type is `Gcs`**.
## +toml2docs:none-default
credential_path = "test"
## The container of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
container = "greptimedb"
## The sas token of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
sas_token = ""
## The endpoint of the S3 service.
## **It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**.
## +toml2docs:none-default
endpoint = "https://s3.amazonaws.com"
## The region of the S3 service.
## **It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**.
## +toml2docs:none-default
region = "us-west-2"
# Custom storage options
# [[storage.providers]]
# type = "S3"
# [[storage.providers]]
# type = "Gcs"
## The region engine options. You can configure multiple region engines.
[[region_engine]]
## The Mito engine options.
[region_engine.mito]
# Number of region workers
## Number of region workers.
num_workers = 8
# Request channel size of each worker
## Request channel size of each worker.
worker_channel_size = 128
# Max batch size for a worker to handle requests
## Max batch size for a worker to handle requests.
worker_request_batch_size = 64
# Number of meta action updated to trigger a new checkpoint for the manifest
## Number of meta action updated to trigger a new checkpoint for the manifest.
manifest_checkpoint_distance = 10
# Whether to compress manifest and checkpoint file by gzip (default false).
## Whether to compress manifest and checkpoint file by gzip (default false).
compress_manifest = false
# Max number of running background jobs
## Max number of running background jobs
max_background_jobs = 4
# Interval to auto flush a region if it has not flushed yet.
## Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
# Global write buffer size for all regions.
## Global write buffer size for all regions. If not set, it defaults to 1/8 of OS memory, with a maximum of 1GB.
global_write_buffer_size = "1GB"
# Global write buffer size threshold to reject write requests (default 2G).
## Global write buffer size threshold to reject write requests. If not set, it defaults to twice `global_write_buffer_size`.
global_write_buffer_reject_size = "2GB"
# Cache size for SST metadata (default 128MB). Setting it to 0 to disable the cache.
## Cache size for SST metadata. Set it to 0 to disable the cache.
## If not set, it defaults to 1/32 of OS memory, with a maximum of 128MB.
sst_meta_cache_size = "128MB"
# Cache size for vectors and arrow arrays (default 512MB). Setting it to 0 to disable the cache.
## Cache size for vectors and arrow arrays. Set it to 0 to disable the cache.
## If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB.
vector_cache_size = "512MB"
# Cache size for pages of SST row groups (default 512MB). Setting it to 0 to disable the cache.
## Cache size for pages of SST row groups. Set it to 0 to disable the cache.
## If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB.
page_cache_size = "512MB"
# Buffer size for SST writing.
## Buffer size for SST writing.
sst_write_buffer_size = "8MB"
# Log options
# [logging]
# Specify logs directory.
# dir = "/tmp/greptimedb/logs"
# Specify the log level [info | debug | error | warn]
# level = "info"
# whether enable tracing, default is false
# enable_otlp_tracing = false
# tracing exporter endpoint with format `ip:port`, we use grpc oltp as exporter, default endpoint is `localhost:4317`
# otlp_endpoint = "localhost:4317"
# The percentage of tracing will be sampled and exported. Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1. ratio > 1 are treated as 1. Fractions < 0 are treated as 0
# tracing_sample_ratio = 1.0
## Parallelism to scan a region (default: 1/4 of cpu cores).
## - `0`: using the default value (1/4 of cpu cores).
## - `1`: scan in current thread.
## - `n`: scan in parallelism n.
scan_parallelism = 0
## Capacity of the channel to send data from parallel scan tasks to the main task.
parallel_scan_channel_size = 32
## Whether to allow stale WAL entries read during replay.
allow_stale_entries = false
## The options for inverted index in Mito engine.
[region_engine.mito.inverted_index]
## Whether to create the index on flush.
## - `auto`: automatically
## - `disable`: never
create_on_flush = "auto"
## Whether to create the index on compaction.
## - `auto`: automatically
## - `disable`: never
create_on_compaction = "auto"
## Whether to apply the index on query
## - `auto`: automatically
## - `disable`: never
apply_on_query = "auto"
## Memory threshold for performing an external sort during index creation.
## Setting to empty will disable external sorting, forcing all sorting operations to happen in memory.
mem_threshold_on_create = "64M"
## File system path to store intermediate files for external sorting (default `{data_home}/index_intermediate`).
intermediate_path = ""
[region_engine.mito.memtable]
## Memtable type.
## - `time_series`: time-series memtable
## - `partition_tree`: partition tree memtable (experimental)
type = "time_series"
## The max number of keys in one shard.
## Only available for `partition_tree` memtable.
index_max_keys_per_shard = 8192
## The max rows of data inside the actively writing buffer in one shard.
## Only available for `partition_tree` memtable.
data_freeze_threshold = 32768
## Max dictionary bytes.
## Only available for `partition_tree` memtable.
fork_dictionary_bytes = "1GiB"
## The logging options.
[logging]
## The directory to store the log files.
dir = "/tmp/greptimedb/logs"
## The log level. Can be `info`/`debug`/`warn`/`error`.
## +toml2docs:none-default
level = "info"
## Enable OTLP tracing.
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
## Whether to append logs to stdout.
append_stdout = true
## The percentage of tracing that will be sampled and exported.
## Valid range is `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.
## Ratios > 1 are treated as 1 and ratios < 0 are treated as 0.
[logging.tracing_sample_ratio]
default_ratio = 1.0
## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally; it is different from Prometheus scraping.
[export_metrics]
## Whether to enable exporting metrics.
enable = false
## The interval at which metrics are exported.
write_interval = "30s"
## For `standalone` mode, `self_import` is recommended for collecting the metrics generated by the process itself.
[export_metrics.self_import]
## +toml2docs:none-default
db = "information_schema"
[export_metrics.remote_write]
## The URL to send the metrics to, for example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`.
url = ""
## HTTP headers carried by the Prometheus remote-write requests.
headers = { }

View File

@@ -1,5 +1,11 @@
FROM ubuntu:22.04
# The root path under which contains all the dependencies to build this Dockerfile.
ARG DOCKER_BUILD_ROOT=.
# The binary name of GreptimeDB executable.
# Defaults to "greptime", but sometimes in other projects it might be different.
ARG TARGET_BIN=greptime
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates \
python3.10 \
@@ -7,14 +13,16 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
python3-pip \
curl
COPY ./docker/python/requirements.txt /etc/greptime/requirements.txt
COPY $DOCKER_BUILD_ROOT/docker/python/requirements.txt /etc/greptime/requirements.txt
RUN python3 -m pip install -r /etc/greptime/requirements.txt
ARG TARGETARCH
ADD $TARGETARCH/greptime /greptime/bin/
ADD $TARGETARCH/$TARGET_BIN /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENTRYPOINT ["greptime"]
ENV TARGET_BIN=$TARGET_BIN
ENTRYPOINT ["sh", "-c", "exec $TARGET_BIN \"$@\"", "--"]

View File

@@ -26,4 +26,5 @@ ARG RUST_TOOLCHAIN
RUN rustup toolchain install ${RUST_TOOLCHAIN}
# Install nextest.
RUN cargo install cargo-nextest --locked
RUN cargo install cargo-binstall --locked
RUN cargo binstall cargo-nextest --no-confirm

View File

@@ -1,5 +1,8 @@
FROM ubuntu:20.04
# The root path under which contains all the dependencies to build this Dockerfile.
ARG DOCKER_BUILD_ROOT=.
ENV LANG en_US.utf8
WORKDIR /greptimedb
@@ -27,10 +30,20 @@ RUN apt-get -y purge python3.8 && \
ln -s /usr/bin/python3.10 /usr/bin/python3 && \
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
RUN git config --global --add safe.directory /greptimedb
# Silence all `safe.directory` warnings, to avoid the "detect dubious repository" error when building with submodules.
# Disabling the safe directory check here won't pose extra security issues, because in our usage for this dev build
# image, we use it solely on our own environment (that github action's VM, or ECS created dynamically by ourselves),
# and the repositories are pulled from trusted sources (still us, of course). Doing so does not violate the intention
# of the Git's addition to the "safe.directory" at the first place (see the commit message here:
# https://github.com/git/git/commit/8959555cee7ec045958f9b6dd62e541affb7e7d9).
# There's also another solution to this, that we add the desired submodules to the safe directory, instead of using
# wildcard here. However, that requires the git's config files and the submodules all owned by the very same user.
# It's troublesome to do this since the dev build runs in Docker, which is under user "root"; while outside the Docker,
# it can be a different user that have prepared the submodules.
RUN git config --global --add safe.directory *
# Install Python dependencies.
COPY ./docker/python/requirements.txt /etc/greptime/requirements.txt
COPY $DOCKER_BUILD_ROOT/docker/python/requirements.txt /etc/greptime/requirements.txt
RUN python3 -m pip install -r /etc/greptime/requirements.txt
# Install Rust.
@@ -43,4 +56,5 @@ ARG RUST_TOOLCHAIN
RUN rustup toolchain install ${RUST_TOOLCHAIN}
# Install nextest.
RUN cargo install cargo-nextest --locked
RUN cargo install cargo-binstall --locked
RUN cargo binstall cargo-nextest --no-confirm

View File

@@ -44,4 +44,5 @@ ARG RUST_TOOLCHAIN
RUN rustup toolchain install ${RUST_TOOLCHAIN}
# Install nextest.
RUN cargo install cargo-nextest --locked
RUN cargo install cargo-binstall --locked
RUN cargo binstall cargo-nextest --no-confirm

View File

@@ -0,0 +1,50 @@
# TSBS benchmark - v0.7.0
## Environment
### Local
| | |
| ------ | ---------------------------------- |
| CPU | AMD Ryzen 7 7735HS (8 core 3.2GHz) |
| Memory | 32GB |
| Disk | SOLIDIGM SSDPFKNU010TZ |
| OS | Ubuntu 22.04.2 LTS |
### Amazon EC2
| | |
| ------- | -------------- |
| Machine | c5d.2xlarge |
| CPU | 8 core |
| Memory | 16GB |
| Disk | 50GB (GP3) |
| OS | Ubuntu 22.04.1 |
## Write performance
| Environment | Ingest rate (rows/s) |
| ------------------ | --------------------- |
| Local | 3695814.64 |
| EC2 c5d.2xlarge | 2987166.64 |
## Query performance
| Query type | Local (ms) | EC2 c5d.2xlarge (ms) |
| --------------------- | ---------- | ---------------------- |
| cpu-max-all-1 | 30.56 | 54.74 |
| cpu-max-all-8 | 52.69 | 70.50 |
| double-groupby-1 | 664.30 | 1366.63 |
| double-groupby-5 | 1391.26 | 2141.71 |
| double-groupby-all | 2828.94 | 3389.59 |
| groupby-orderby-limit | 718.92 | 1213.90 |
| high-cpu-1 | 29.21 | 52.98 |
| high-cpu-all | 5514.12 | 7194.91 |
| lastpoint | 7571.40 | 9423.41 |
| single-groupby-1-1-1 | 19.09 | 7.77 |
| single-groupby-1-1-12 | 27.28 | 51.64 |
| single-groupby-1-8-1 | 31.85 | 11.64 |
| single-groupby-5-1-1 | 16.14 | 9.67 |
| single-groupby-5-1-12 | 27.21 | 53.62 |
| single-groupby-5-8-1 | 39.62 | 14.96 |

View File

@@ -79,7 +79,7 @@ This RFC proposes to add a new expression node `MergeScan` to merge result from
│ │ │ │
└─Frontend──────┘ └─Remote-Sources──────────────┘
```
This merge operation simply chains all the the underlying remote data sources and return `RecordBatch`, just like a coalesce op. And each remote sources is a gRPC query to datanode via the substrait logical plan interface. The plan is transformed and divided from the original query that comes to frontend.
This merge operation simply chains all the underlying remote data sources and returns `RecordBatch`es, just like a coalesce op. Each remote source is a gRPC query to a datanode via the substrait logical plan interface. The plan is transformed and split out of the original query that arrives at the frontend.
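A minimal, self-contained Rust sketch of this coalesce-like behaviour, with stand-in types rather than the real Arrow/DataFusion ones: each remote source yields batches, and `MergeScan` simply chains them in order.
```rust
/// Stand-in for a batch of rows; in the real plan this is an Arrow `RecordBatch`.
#[derive(Debug)]
struct Batch(Vec<i64>);

/// Stand-in for one remote source, i.e. a gRPC stream of batches from a datanode.
struct RemoteSource {
    batches: Vec<Batch>,
}

/// MergeScan as described above: chain every remote source and yield their
/// batches one after another, like a coalesce operator.
fn merge_scan(sources: Vec<RemoteSource>) -> impl Iterator<Item = Batch> {
    sources.into_iter().flat_map(|s| s.batches.into_iter())
}

fn main() {
    let sources = vec![
        RemoteSource { batches: vec![Batch(vec![1, 2]), Batch(vec![3])] },
        RemoteSource { batches: vec![Batch(vec![4, 5])] },
    ];
    for batch in merge_scan(sources) {
        println!("{batch:?}");
    }
}
```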
## Commutativity of MergeScan

View File

@@ -27,8 +27,8 @@ subgraph Frontend["Frontend"]
end
end
MyTable --> MetaSrv
MetaSrv --> ETCD
MyTable --> Metasrv
Metasrv --> ETCD
MyTable-->TableEngine0
MyTable-->TableEngine1
@@ -95,8 +95,8 @@ subgraph Frontend["Frontend"]
end
end
MyTable --> MetaSrv
MetaSrv --> ETCD
MyTable --> Metasrv
Metasrv --> ETCD
MyTable-->RegionEngine
MyTable-->RegionEngine1

View File

@@ -1,6 +1,6 @@
---
Feature Name: Inverted Index for SST File
Tracking Issue: TBD
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/2705
Date: 2023-11-03
Author: "Zhong Zhenchi <zhongzc_arch@outlook.com>"
---

View File

@@ -0,0 +1,44 @@
---
Feature Name: Enclose Column Id
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/2982
Date: 2023-12-22
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
# Summary
This RFC proposes to enclose the usage of `ColumnId` into the region engine only.
# Motivation
`ColumnId` is an identifier for columns. It's assigned by meta server, stored in `TableInfo` and `RegionMetadata` and used in region engine to distinguish columns.
At present, the Frontend, Datanode, and Metasrv are all aware of `ColumnId`, but it is only used in the region engine. Thus this RFC proposes to remove it from the Frontend (where it is mainly used in `TableInfo`) and the Metasrv.
# Details
`ColumnId` is used widely on both read and write paths. Removing it from Frontend and Metasrv implies several things:
- A column may have different column id in different regions.
- A column is identified by its name in all components.
- Column order in the region engine is not restricted, i.e., no need to be in the same order with table info.
The first point doesn't matter IMO. This concept no longer exists outside of the region server, and each region is autonomous and independent -- the only guarantee it should hold is that those columns exist. But if we consider region repartition, where SST files would be re-assigned to different regions, things become a bit more complicated. A possible solution is to store the relation between name and ColumnId in the manifest, but that's out of the scope of this RFC. We can likely work around it by introducing an indirection mapping layer over different versions of partitions.
More importantly, we can still assume columns have the same column ids across regions. We have procedures to maintain consistency between regions, and the region engine should ensure alterations are idempotent. So it is possible that region repartition won't need to consider column ids or other region metadata in the future.
Users write and query columns by their names, not by ColumnId or anything else. The second point also means changing the column reference in ScanRequest from index to name. This change can hugely alleviate the misuse of the column index, which has given us many surprises.
As for the last point, column order only matters in the table info. This order is used in user-facing table structure operations, like adding a column, describing a column, or as the default order of an INSERT clause. None of them is tied to the order in storage.
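A tiny sketch of the proposed change to the scan interface, using hypothetical struct names purely for illustration: column references move from schema indices to column names, so only the region engine needs to resolve them to its own `ColumnId`s.
```rust
/// Hypothetical shape of today's request: columns referenced by their index
/// into the table schema, so region schema order must match the table info.
struct ScanRequestByIndex {
    projection: Vec<usize>,
}

/// Hypothetical shape after this RFC: columns referenced by name, so the
/// region engine is free to map names to its own column ids internally.
struct ScanRequestByName {
    projection: Vec<String>,
}

fn main() {
    let by_index = ScanRequestByIndex { projection: vec![0, 2] };
    let by_name = ScanRequestByName {
        projection: vec!["ts".to_string(), "cpu_usage".to_string()],
    };
    println!("by index: {:?}", by_index.projection);
    println!("by name: {:?}", by_name.projection);
}
```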
# Drawback
Firstly, this is a breaking change. Delivering this change requires a full upgrade of the cluster. Secondly, this change may introduce some performance regression. For example, we have to pass the full table name in the `ScanRequest` instead of the `ColumnId`. But this influence is very limited, since the column index is only used in the region engine.
# Alternatives
There are two alternatives from the perspective of "what can be used as the column identifier":
- Index of column to the table schema
- `ColumnId` of that column
The first one is what we are using now. Choosing this way requires keeping the column order in the region engine the same as in the table info. This is not hard to achieve, but it's a bit annoying. Things become tricky when there are internal columns or different schemas, like those stored in file formats. This is the initial purpose of this RFC, which tries to decouple the table schema from the region schema.
The second one, on the other hand, requires the `ColumnId` to be identical in all regions and the `TableInfo`. It has the same drawback as the previous alternative: the `TableInfo` and `RegionMetadata` are tied together. Another point is that the `ColumnId` is assigned by the Metasrv, which doesn't need it but has to maintain it. This also limits the functionality of `ColumnId`, by taking the ability to assign it away from the concrete region engine.

View File

@@ -0,0 +1,97 @@
---
Feature Name: Dataflow Framework
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/3187
Date: 2024-01-17
Author: "Discord9 <discord9@163.com>"
---
# Summary
This RFC proposes a Lightweight Module for executing continuous aggregation queries on a stream of data.
# Motivation
Being able to do continuous aggregation is a very powerful tool. It allows you to do things like:
1. downsample data, e.g. from 1 millisecond to 1 second resolution
2. calculate the average of a stream of data
3. Keeping a sliding window of data in memory
In order to do those things while maintaining a low memory footprint, you need to be able to manage the data in a smart way. Hence, we only store necessary data in memory, and send/recv data deltas to/from the client.
# Details
## System boundary / What it is and isn't
- GreptimeFlow provides a way to perform continuous aggregation over time-series data.
- It's not a complete stream-processing system. Only a necessary subset of functionalities is provided.
- Flow can process a configured range of fresh data. Data exceeding this range will be dropped directly. Thus it cannot handle random datasets (random on timestamp).
- Both sliding windows (e.g., latest 5m from present) and fixed windows (every 5m from some time) are supported. And these two are the major targeting scenarios.
- Flow can handle most aggregate operators within one table (i.e. sum, avg, min, max, and comparison operators). Others (join, trigger, txn, etc.) are not target features.
## Framework
- Greptime Flow is built on top of [Hydroflow](https://github.com/hydro-project/hydroflow).
- We considered three choices of dataflow/stream-processing framework for our simple continuous aggregation feature:
1. Build on the timely/differential dataflow crates that [materialize](https://github.com/MaterializeInc/materialize) is based on. This proved too obscure for simple usage, and customizing memory usage control is hard.
2. Build a simple dataflow framework from the ground up, like [arroyo](https://www.arroyo.dev/) or [risingwave](https://www.risingwave.dev/) did; for example, the core streaming logic of [arroyo](https://github.com/ArroyoSystems/arroyo/blob/master/arroyo-datastream/src/lib.rs) only takes about 2000 lines of code. However, this means maintaining another layer of dataflow framework, which might seem easy in the beginning but could become too burdensome once we need more features.
3. Build on a simple, lower-level dataflow framework that someone else maintains, like [hydroflow](https://github.com/hydro-project/hydroflow). This approach combines the best of both worlds: it is easy to comprehend and customize, and the framework offers precisely the features needed for crafting uncomplicated single-node dataflow programs while delivering decent performance.
Hence, we chose the third option and use a simple logical plan that is agnostic to the underlying dataflow framework: it only describes what the dataflow graph should do, not how it does it. We build operators in Hydroflow to execute the plan, and the resulting Hydroflow graph is wrapped in an engine that only supports data in/out and a tick event to flush and compute results. This provides a thin middle layer that is easy to maintain and allows switching to another dataflow framework if necessary.
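A rough sketch of that thin engine layer, with hypothetical names (the real Flow interface may differ): the wrapped dataflow graph only exposes data in/out plus a tick that flushes state and computes results.
```rust
/// Placeholder for the Rows format exchanged with the frontend.
struct Row;

/// Hypothetical thin wrapper around the compiled Hydroflow graph.
trait FlowEngine {
    /// Feed a batch of mirrored insert rows into the dataflow.
    fn push_rows(&mut self, rows: Vec<Row>);

    /// Advance logical time, flush pending state, and return the output
    /// deltas to be written back through the frontend.
    fn tick(&mut self, now_ms: i64) -> Vec<Row>;
}
```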
## Deploy mode and protocol
- Greptime Flow is an independent streaming compute component. It can be used either within a standalone node or as a dedicated node at the same level as the frontend in distributed mode.
- It accepts insert requests in the Rows format, the same format used between frontend and datanode.
- A new flow job is submitted as a modified SQL query, similar to what Snowflake does, e.g.: `CREATE TASK avg_over_5m WINDOW_SIZE = "5m" AS SELECT avg(value) FROM table WHERE time > now() - 5m GROUP BY time(1m)`. The flow job is then stored in Metasrv.
- It also persists results to the frontend in the Rows format.
- The query plan uses Substrait as the codec format, the same as GreptimeDB's query engine.
- Greptime Flow needs a WAL for recovery. It's possible to reuse the datanode's.
The workflow is shown in the following diagram:
```mermaid
graph TB
subgraph Flownode["Flownode"]
subgraph Dataflows
df1("Dataflow_1")
df2("Dataflow_2")
end
end
subgraph Frontend["Frontend"]
newLines["Mirror Insert
Create Task From Query
Write result from flow node"]
end
subgraph Datanode["Datanode"]
end
User --> Frontend
Frontend -->|Register Task| Metasrv
Metasrv -->|Read Task Metadata| Frontend
Frontend -->|Create Task| Flownode
Frontend -->|Mirror Insert| Flownode
Flownode -->|Write back| Frontend
Frontend --> Datanode
Datanode --> Frontend
```
## Lifecycle of data
- New data is inserted into the frontend as before. The frontend mirrors insert requests to the Flow node if there is a configured flow job.
- Depending on the timestamp of the incoming data, Flow either drops it (outdated data) or processes it (fresh data); see the sketch after this list.
- Greptime Flow periodically writes results back to the result table through the frontend.
- Those results are then written into a result table stored in the datanode.
- A small table of intermediate state is kept in memory and used to calculate the results.
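The drop-or-process decision above boils down to a freshness check against the configured window; a minimal sketch with assumed field names:
```rust
/// Hypothetical view of the materialize window kept by a flow job.
struct MaterializeWindow {
    /// Oldest timestamp (in milliseconds) still accepted; older data is dropped.
    lower_bound_ms: i64,
}

/// Returns true if the row is fresh enough to be folded into the in-memory
/// intermediate state, false if it should be discarded.
fn should_process(window: &MaterializeWindow, row_ts_ms: i64) -> bool {
    row_ts_ms >= window.lower_bound_ms
}
```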
## Supported operations
- Greptime Flow accepts a configurable "materialize window"; data points exceeding that time window are discarded.
- Data within that "materialize window" is queryable and updatable.
- Greptime Flow can handle partitioning if and only if the input query can be transformed into a fully partitioned plan according to the existing commutative rules (a simplified sketch of this check follows the list). Otherwise the corresponding flow job has to be calculated on a single node.
- Notice that Greptime Flow has to see all the data belonging to one partition.
- Deletion and duplicate insertion are not supported at this early stage.
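A much-simplified sketch of that commutativity check (illustrative only; the real rules operate on query plans rather than column name lists): an aggregation stays fully partitioned only if every partition column also appears in its GROUP BY keys, so each node already sees all the data belonging to its partition.
```rust
/// Simplified: can this aggregation be evaluated independently per partition?
fn is_fully_partitioned(group_by_cols: &[&str], partition_cols: &[&str]) -> bool {
    partition_cols
        .iter()
        .all(|partition_col| group_by_cols.contains(partition_col))
}
```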
## Miscellaneous
- Greptime Flow can translate SQL to its own plan; however, only a select few aggregate functions are supported for now, like min/max/sum/count/avg.
- Greptime Flow's operators are configurable in terms of the materialize window size, whether to allow delayed incoming data, etc., so the simplest operators can choose not to tolerate any delay in order to save memory (a hypothetical configuration sketch follows).
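A hypothetical per-operator configuration, sketched from the description above (the actual option names in Flow may differ):
```rust
use std::time::Duration;

/// Illustrative knobs for a single Flow operator.
struct OperatorConfig {
    /// How far back in time the operator materializes state.
    materialize_window: Duration,
    /// How much lateness of incoming data is tolerated; `Duration::ZERO`
    /// means no tolerance, which keeps the memory footprint minimal.
    allowed_delay: Duration,
}
```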
# Future Work
- Support UDFs that do one-to-one mapping. Preferably, we can reuse the UDF mechanism in GreptimeDB.
- Support the join operator.
- Design syntax to configure operators with different materialize windows and delay tolerances.
- Support a cross-partition merge operator, so that complex query plans that do not necessarily accord with the partitioning rule can communicate between nodes and create the final materialized result.
- Support duplicate insertion, which can be reverted easily within the current framework, so supporting it should be straightforward.
- Support deletion within the "materialize window"; this requires operators like min/max to store all inputs within the materialize window, which might require further optimization.



@@ -0,0 +1,101 @@
---
Feature Name: Multi-dimension Partition Rule
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/3351
Date: 2024-02-21
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
# Summary
A new region partition scheme that runs on multiple dimensions of the key space. The partition rule is defined by a set of simple expressions on the partition key columns.
# Motivation
The current partition rule comes from MySQL's [`RANGE Partition`](https://dev.mysql.com/doc/refman/8.0/en/partitioning-range.html), which is based on a single dimension. It is sort of a [Hilbert Curve](https://en.wikipedia.org/wiki/Hilbert_curve) approach that picks several points on the curve to divide the space. It is neither easy to understand how the data gets partitioned nor flexible enough to handle complex partitioning requirements.
Considering future requirements like region repartitioning or autonomous rebalancing, where both the workload and the partitions may change frequently, this RFC proposes a new region partition scheme that uses a set of simple expressions on the partition key columns to divide the key space.
# Details
## Partition rule
First, we define the simple expression that is used to define the partition rule. A simple expression is a binary expression on the partition key columns that evaluates to a boolean value. The binary operator is limited to comparison operators only, like `=`, `!=`, `>`, `>=`, `<`, `<=`, and the operands are limited to either a literal value or a partition column.
Examples of valid simple expressions are $`col_A = 10`$, $`col_A \gt 10 \& col_B \gt 20`$ or $`col_A \ne 10`$.
Those expressions can be used as predicates to divide the key space into different regions. The following example has two partition columns `Col A` and `Col B`, and four partitioned regions.
```math
\left\{\begin{aligned}
&col_A \le 10 &Region_1 \\
&10 \lt col_A \& col_A \le 20 &Region_2 \\
&20 \lt col_A \space \& \space col_B \lt 100 &Region_3 \\
&20 \lt col_A \space \& \space col_B \ge 100 &Region_4
\end{aligned}\right\}
```
An advantage of this scheme is that it is easy to understand how the data gets partitioned. The above example can be visualized in a 2D space (two partition columns are involved in the example).
![example](2d-example.png)
Here each expression draws a line in the 2D space. Managing data partitioning becomes a matter of drawing lines in the key space.
To make it easy to use, there is a "default region" that catches all the data that doesn't match any of the previous expressions. The default region exists by default and does not need to be specified. It is also possible to remove this default region if the DB finds it unnecessary.
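A minimal routing sketch for this scheme (illustrative types, not the actual implementation): the first expression that evaluates to true selects the region, and rows matching no expression fall into the default region. In the example above there would be four (predicate, region) pairs plus the default.
```rust
type RegionId = u64;

/// Illustrative row carrying the two partition columns from the example above.
struct Row {
    col_a: i64,
    col_b: i64,
}

struct PartitionRule {
    /// One (predicate, target region) pair per simple expression.
    exprs: Vec<(fn(&Row) -> bool, RegionId)>,
    default_region: RegionId,
}

impl PartitionRule {
    fn route(&self, row: &Row) -> RegionId {
        self.exprs
            .iter()
            .find(|(predicate, _)| predicate(row))
            .map(|(_, region)| *region)
            .unwrap_or(self.default_region)
    }
}
```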
## SQL interface
The SQL interface is responsible for two parts: specifying the partition columns and the partition rule. Though we are targeting an autonomous system, it is still allowed to give some bootstrap rules or hints when creating a table.
Partition column is specified by `PARTITION ON COLUMNS` sub-clause in `CREATE TABLE`:
```sql
CREATE TABLE t (...)
PARTITION ON COLUMNS (...) ();
```
The two following brackets are for the partition columns and the partition rule, respectively.
Columns provided here are only used as an allow-list of how the partition rule can be defined. This means (a) the order of the columns doesn't matter, and (b) the columns provided here are not necessarily used in the partition rule.
The partition rule part is a list of comma-separated simple expressions. Expressions here do not correspond one-to-one to regions, as they might be changed by the system to fit various workloads.
A full example of `CREATE TABLE` with partition rule is:
```sql
CREATE TABLE IF NOT EXISTS demo (
a STRING,
b STRING,
c STRING,
d STRING,
ts TIMESTAMP,
memory DOUBLE,
TIME INDEX (ts),
PRIMARY KEY (a, b, c, d)
)
PARTITION ON COLUMNS (c, b, a) (
a < 10,
10 <= a AND a < 20,
20 <= a AND b < 100,
20 <= a AND b >= 100
)
```
## Combine with storage
Examining columns separately suits our columnar storage very well in two aspects:
1. The simple expressions can be pushed down to the storage and file format, and are likely to hit an existing index, which makes pruning very efficient.
2. Columns in columnar storage are not tightly coupled like in traditional row storage, which means we can easily add or remove columns from the partition rule without much impact (like a global reshuffle) on data.
The data file itself can be "projected" onto the key space as a polyhedron, and it is guaranteed that each face is parallel to some coordinate plane (in a 2D scenario, this means every file can be projected onto a rectangle). Thus partitioning or repartitioning only needs to consider the related columns.
![sst-project](sst-project.png)
An additional limitation is that, considering how the index works and how we organize primary keys at present, the partition columns are limited to a subset of the primary key columns for better performance.
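As a sketch of the pruning idea (with a hypothetical stats struct, not the actual file format metadata): because each simple expression constrains a single column, a file can be skipped whenever its per-column min/max range cannot possibly satisfy the expression.
```rust
/// Illustrative per-column statistics carried by a data file.
struct ColumnStats {
    min: i64,
    max: i64,
}

/// Can a file whose column values lie within `stats` contain any row
/// satisfying `col > threshold`? If not, the whole file is pruned.
fn may_satisfy_gt(stats: &ColumnStats, threshold: i64) -> bool {
    stats.max > threshold
}

/// Same check for `col < threshold`.
fn may_satisfy_lt(stats: &ColumnStats, threshold: i64) -> bool {
    stats.min < threshold
}
```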
# Drawbacks
This is a breaking change.


docs/style-guide.md Normal file

@@ -0,0 +1,46 @@
# GreptimeDB Style Guide
This style guide is intended to help contributors to GreptimeDB write code that is consistent with the rest of the codebase. It is a living document and will be updated as the codebase evolves.
It's mainly a complement to the [Rust Style Guide](https://pingcap.github.io/style-guide/rust/).
## Table of Contents
- Formatting
- Modules
- Comments
- Error handling
## Formatting
- Place all `mod` declarations before any `use`.
- Use `unimplemented!()` instead of `todo!()` for things that aren't likely to be implemented.
- Add an empty line before and after declaration blocks.
- Place comments before attributes (`#[...]`) and derives (`#[derive]`). A short snippet illustrating these rules follows this list.
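A short illustrative snippet following the rules above (not taken from the codebase):
```rust
// All `mod` declarations come before any `use`.
mod cache {}

use std::collections::HashMap;

/// Comments, including doc comments, go before attributes and derives.
#[derive(Debug, Default)]
pub struct WriteCache {
    /// Cached payloads keyed by file id.
    entries: HashMap<String, Vec<u8>>,
}

impl WriteCache {
    /// Not expected to be implemented any time soon, hence `unimplemented!()`.
    pub fn evict_all(&mut self) {
        unimplemented!()
    }
}
```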
## Modules
- Use a file with the same name as the module instead of `mod.rs` to define a module. E.g.:
```
.
├── cache
│ ├── cache_size.rs
│ └── write_cache.rs
└── cache.rs
```
## Comments
- Add comments for public functions and structs.
- Prefer document comments (`///`) over normal comments (`//`) for structs, fields, functions, etc.
- Add links (`[]`) to structs, methods, or any other referenced items, and make sure the links work.
## Error handling
- Define a custom error type for the module if needed.
- Prefer `with_context()` over `context()` when an allocation is needed to construct an error (see the sketch after this section).
- Use `error!()` or `warn!()` macros in the `common_telemetry` crate to log errors. E.g.:
```rust
error!(e; "Failed to do something");
```
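A hedged sketch of the `with_context()` preference (the error type and context selector here are illustrative, not from the codebase): the closure defers the allocating work to the error path.
```rust
use snafu::{ResultExt, Snafu};

#[derive(Debug, Snafu)]
enum Error {
    #[snafu(display("Failed to read config file {}", path))]
    ReadConfig {
        path: String,
        source: std::io::Error,
    },
}

fn read_config(path: &str) -> Result<String, Error> {
    // `with_context` only builds the context (and allocates the `String`)
    // when an error actually occurs; `context` would do that work eagerly.
    std::fs::read_to_string(path).with_context(|_| ReadConfigSnafu { path })
}
```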

grafana/README.md Normal file

@@ -0,0 +1,10 @@
Grafana dashboard for GreptimeDB
--------------------------------
GreptimeDB's official Grafana dashboard.
Status note: we are still working on this config. It's expected to change frequently in the coming days. Please feel free to submit your feedback and/or contributions to this dashboard 🤗
# How to use
Open the Grafana dashboard page, choose `New` -> `Import`, and upload the `greptimedb.json` file.

grafana/greptimedb.json Normal file

File diff suppressed because it is too large.


@@ -19,6 +19,12 @@ includes = [
"*.py",
]
excludes = [
# copied sources
"src/common/base/src/readable_size.rs",
"src/servers/src/repeated_field.rs",
]
[properties]
inceptionYear = 2023
copyrightOwner = "Greptime Team"


@@ -1,2 +1,2 @@
[toolchain]
channel = "nightly-2023-10-21"
channel = "nightly-2024-04-18"


@@ -1,8 +1,7 @@
#!/usr/bin/env bash
# This script is used to download built dashboard assets from the "GreptimeTeam/dashboard" repository.
set -e -x
set -ex
declare -r SCRIPT_DIR=$(cd $(dirname ${0}) >/dev/null 2>&1 && pwd)
declare -r ROOT_DIR=$(dirname ${SCRIPT_DIR})
@@ -13,13 +12,34 @@ RELEASE_VERSION="$(cat $STATIC_DIR/VERSION | tr -d '\t\r\n ')"
echo "Downloading assets to dir: $OUT_DIR"
cd $OUT_DIR
if [[ -z "$GITHUB_PROXY_URL" ]]; then
GITHUB_URL="https://github.com"
else
GITHUB_URL="${GITHUB_PROXY_URL%/}"
fi
function retry_fetch() {
local url=$1
local filename=$2
curl --connect-timeout 10 --retry 3 -fsSL $url --output $filename || {
echo "Failed to download $url"
echo "You may try to set http_proxy and https_proxy environment variables."
if [[ -z "$GITHUB_PROXY_URL" ]]; then
echo "You may try to set GITHUB_PROXY_URL=http://mirror.ghproxy.com/https://github.com/"
fi
exit 1
}
}
# Download the SHA256 checksum attached to the release. To verify the integrity
# of the download, this checksum will be used to check the download tar file
# containing the built dashboard assets.
curl -Ls https://github.com/GreptimeTeam/dashboard/releases/download/$RELEASE_VERSION/sha256.txt --output sha256.txt
retry_fetch "${GITHUB_URL}/GreptimeTeam/dashboard/releases/download/${RELEASE_VERSION}/sha256.txt" sha256.txt
# Download the tar file containing the built dashboard assets.
curl -L https://github.com/GreptimeTeam/dashboard/releases/download/$RELEASE_VERSION/build.tar.gz --output build.tar.gz
retry_fetch "${GITHUB_URL}/GreptimeTeam/dashboard/releases/download/${RELEASE_VERSION}/build.tar.gz" build.tar.gz
# Verify the checksums match; exit if they don't.
case "$(uname -s)" in


@@ -4,6 +4,9 @@ version.workspace = true
edition.workspace = true
license.workspace = true
[lints]
workspace = true
[dependencies]
common-base.workspace = true
common-decimal.workspace = true
@@ -15,7 +18,6 @@ greptime-proto.workspace = true
paste = "1.0"
prost.workspace = true
snafu.workspace = true
tonic.workspace = true
[build-dependencies]
tonic-build = "0.9"


@@ -535,11 +535,8 @@ pub fn convert_i128_to_interval(v: i128) -> v1::IntervalMonthDayNano {
/// Convert common decimal128 to grpc decimal128 without precision and scale.
pub fn convert_to_pb_decimal128(v: Decimal128) -> v1::Decimal128 {
let value = v.val();
v1::Decimal128 {
hi: (value >> 64) as i64,
lo: value as i64,
}
let (hi, lo) = v.split_value();
v1::Decimal128 { hi, lo }
}
pub fn pb_value_to_value_ref<'a>(
@@ -580,9 +577,9 @@ pub fn pb_value_to_value_ref<'a>(
ValueData::TimeMillisecondValue(t) => ValueRef::Time(Time::new_millisecond(*t)),
ValueData::TimeMicrosecondValue(t) => ValueRef::Time(Time::new_microsecond(*t)),
ValueData::TimeNanosecondValue(t) => ValueRef::Time(Time::new_nanosecond(*t)),
ValueData::IntervalYearMonthValues(v) => ValueRef::Interval(Interval::from_i32(*v)),
ValueData::IntervalDayTimeValues(v) => ValueRef::Interval(Interval::from_i64(*v)),
ValueData::IntervalMonthDayNanoValues(v) => {
ValueData::IntervalYearMonthValue(v) => ValueRef::Interval(Interval::from_i32(*v)),
ValueData::IntervalDayTimeValue(v) => ValueRef::Interval(Interval::from_i64(*v)),
ValueData::IntervalMonthDayNanoValue(v) => {
let interval = Interval::from_month_day_nano(v.months, v.days, v.nanoseconds);
ValueRef::Interval(interval)
}
@@ -710,7 +707,6 @@ pub fn pb_values_to_vector_ref(data_type: &ConcreteDataType, values: Values) ->
}
pub fn pb_values_to_values(data_type: &ConcreteDataType, values: Values) -> Vec<Value> {
// TODO(fys): use macros to optimize code
match data_type {
ConcreteDataType::Int64(_) => values
.i64_values
@@ -986,13 +982,13 @@ pub fn to_proto_value(value: Value) -> Option<v1::Value> {
},
Value::Interval(v) => match v.unit() {
IntervalUnit::YearMonth => v1::Value {
value_data: Some(ValueData::IntervalYearMonthValues(v.to_i32())),
value_data: Some(ValueData::IntervalYearMonthValue(v.to_i32())),
},
IntervalUnit::DayTime => v1::Value {
value_data: Some(ValueData::IntervalDayTimeValues(v.to_i64())),
value_data: Some(ValueData::IntervalDayTimeValue(v.to_i64())),
},
IntervalUnit::MonthDayNano => v1::Value {
value_data: Some(ValueData::IntervalMonthDayNanoValues(
value_data: Some(ValueData::IntervalMonthDayNanoValue(
convert_i128_to_interval(v.to_i128()),
)),
},
@@ -1011,12 +1007,9 @@ pub fn to_proto_value(value: Value) -> Option<v1::Value> {
value_data: Some(ValueData::DurationNanosecondValue(v.value())),
},
},
Value::Decimal128(v) => {
let (hi, lo) = v.split_value();
v1::Value {
value_data: Some(ValueData::Decimal128Value(v1::Decimal128 { hi, lo })),
}
}
Value::Decimal128(v) => v1::Value {
value_data: Some(ValueData::Decimal128Value(convert_to_pb_decimal128(v))),
},
Value::List(_) => return None,
};
@@ -1051,9 +1044,9 @@ pub fn proto_value_type(value: &v1::Value) -> Option<ColumnDataType> {
ValueData::TimeMillisecondValue(_) => ColumnDataType::TimeMillisecond,
ValueData::TimeMicrosecondValue(_) => ColumnDataType::TimeMicrosecond,
ValueData::TimeNanosecondValue(_) => ColumnDataType::TimeNanosecond,
ValueData::IntervalYearMonthValues(_) => ColumnDataType::IntervalYearMonth,
ValueData::IntervalDayTimeValues(_) => ColumnDataType::IntervalDayTime,
ValueData::IntervalMonthDayNanoValues(_) => ColumnDataType::IntervalMonthDayNano,
ValueData::IntervalYearMonthValue(_) => ColumnDataType::IntervalYearMonth,
ValueData::IntervalDayTimeValue(_) => ColumnDataType::IntervalDayTime,
ValueData::IntervalMonthDayNanoValue(_) => ColumnDataType::IntervalMonthDayNano,
ValueData::DurationSecondValue(_) => ColumnDataType::DurationSecond,
ValueData::DurationMillisecondValue(_) => ColumnDataType::DurationMillisecond,
ValueData::DurationMicrosecondValue(_) => ColumnDataType::DurationMicrosecond,
@@ -1109,10 +1102,10 @@ pub fn value_to_grpc_value(value: Value) -> GrpcValue {
TimeUnit::Nanosecond => ValueData::TimeNanosecondValue(v.value()),
}),
Value::Interval(v) => Some(match v.unit() {
IntervalUnit::YearMonth => ValueData::IntervalYearMonthValues(v.to_i32()),
IntervalUnit::DayTime => ValueData::IntervalDayTimeValues(v.to_i64()),
IntervalUnit::YearMonth => ValueData::IntervalYearMonthValue(v.to_i32()),
IntervalUnit::DayTime => ValueData::IntervalDayTimeValue(v.to_i64()),
IntervalUnit::MonthDayNano => {
ValueData::IntervalMonthDayNanoValues(convert_i128_to_interval(v.to_i128()))
ValueData::IntervalMonthDayNanoValue(convert_i128_to_interval(v.to_i128()))
}
}),
Value::Duration(v) => Some(match v.unit() {
@@ -1121,10 +1114,7 @@ pub fn value_to_grpc_value(value: Value) -> GrpcValue {
TimeUnit::Microsecond => ValueData::DurationMicrosecondValue(v.value()),
TimeUnit::Nanosecond => ValueData::DurationNanosecondValue(v.value()),
}),
Value::Decimal128(v) => {
let (hi, lo) = v.split_value();
Some(ValueData::Decimal128Value(v1::Decimal128 { hi, lo }))
}
Value::Decimal128(v) => Some(ValueData::Decimal128Value(convert_to_pb_decimal128(v))),
Value::List(_) => unreachable!(),
},
}


@@ -21,6 +21,7 @@ pub mod prom_store {
}
}
pub mod region;
pub mod v1;
pub use greptime_proto;

src/api/src/region.rs Normal file

@@ -0,0 +1,42 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use common_base::AffectedRows;
use greptime_proto::v1::region::RegionResponse as RegionResponseV1;
/// This result struct is derived from [RegionResponseV1]
#[derive(Debug)]
pub struct RegionResponse {
pub affected_rows: AffectedRows,
pub extension: HashMap<String, Vec<u8>>,
}
impl RegionResponse {
pub fn from_region_response(region_response: RegionResponseV1) -> Self {
Self {
affected_rows: region_response.affected_rows as _,
extension: region_response.extension,
}
}
/// Creates one response without extension
pub fn new(affected_rows: AffectedRows) -> Self {
Self {
affected_rows,
extension: Default::default(),
}
}
}


@@ -8,13 +8,17 @@ license.workspace = true
default = []
testing = []
[lints]
workspace = true
[dependencies]
api.workspace = true
async-trait.workspace = true
common-error.workspace = true
common-macro.workspace = true
common-telemetry.workspace = true
digest = "0.10"
hex = { version = "0.4" }
notify.workspace = true
secrecy = { version = "0.8", features = ["serde", "alloc"] }
sha1 = "0.10"
snafu.workspace = true


@@ -22,6 +22,9 @@ use snafu::{ensure, OptionExt};
use crate::error::{IllegalParamSnafu, InvalidConfigSnafu, Result, UserPasswordMismatchSnafu};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::static_user_provider::{StaticUserProvider, STATIC_USER_PROVIDER};
use crate::user_provider::watch_file_user_provider::{
WatchFileUserProvider, WATCH_FILE_USER_PROVIDER,
};
use crate::{UserInfoRef, UserProviderRef};
pub(crate) const DEFAULT_USERNAME: &str = "greptime";
@@ -40,9 +43,12 @@ pub fn user_provider_from_option(opt: &String) -> Result<UserProviderRef> {
match name {
STATIC_USER_PROVIDER => {
let provider =
StaticUserProvider::try_from(content).map(|p| Arc::new(p) as UserProviderRef)?;
StaticUserProvider::new(content).map(|p| Arc::new(p) as UserProviderRef)?;
Ok(provider)
}
WATCH_FILE_USER_PROVIDER => {
WatchFileUserProvider::new(content).map(|p| Arc::new(p) as UserProviderRef)
}
_ => InvalidConfigSnafu {
value: name.to_string(),
msg: "Invalid UserProviderOption",


@@ -64,6 +64,13 @@ pub enum Error {
username: String,
},
#[snafu(display("Failed to initialize a watcher for file {}", path))]
FileWatch {
path: String,
#[snafu(source)]
error: notify::Error,
},
#[snafu(display("User is not authorized to perform this action"))]
PermissionDenied { location: Location },
}
@@ -73,6 +80,7 @@ impl ErrorExt for Error {
match self {
Error::InvalidConfig { .. } => StatusCode::InvalidArguments,
Error::IllegalParam { .. } => StatusCode::InvalidArguments,
Error::FileWatch { .. } => StatusCode::InvalidArguments,
Error::InternalState { .. } => StatusCode::Unexpected,
Error::Io { .. } => StatusCode::Internal,
Error::AuthBackend { .. } => StatusCode::Internal,


@@ -45,9 +45,9 @@ impl Default for MockUserProvider {
impl MockUserProvider {
pub fn set_authorization_info(&mut self, info: DatabaseAuthInfo) {
self.catalog = info.catalog.to_owned();
self.schema = info.schema.to_owned();
self.username = info.username.to_owned();
info.catalog.clone_into(&mut self.catalog);
info.schema.clone_into(&mut self.schema);
info.username.clone_into(&mut self.username);
}
}


@@ -13,10 +13,24 @@
// limitations under the License.
pub(crate) mod static_user_provider;
pub(crate) mod watch_file_user_provider;
use std::collections::HashMap;
use std::fs::File;
use std::io;
use std::io::BufRead;
use std::path::Path;
use secrecy::ExposeSecret;
use snafu::{ensure, OptionExt, ResultExt};
use crate::common::{Identity, Password};
use crate::error::Result;
use crate::UserInfoRef;
use crate::error::{
IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Result, UnsupportedPasswordTypeSnafu,
UserNotFoundSnafu, UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, UserInfoRef};
#[async_trait::async_trait]
pub trait UserProvider: Send + Sync {
@@ -44,3 +58,88 @@ pub trait UserProvider: Send + Sync {
Ok(user_info)
}
}
fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Vec<u8>>>> {
// check valid path
let path = Path::new(filepath);
if !path.exists() {
return Ok(None);
}
ensure!(
path.is_file(),
InvalidConfigSnafu {
value: filepath,
msg: "UserProvider file must be a file",
}
);
let file = File::open(path).context(IoSnafu)?;
let credential = io::BufReader::new(file)
.lines()
.map_while(std::result::Result::ok)
.filter_map(|line| {
if let Some((k, v)) = line.split_once('=') {
Some((k.to_string(), v.as_bytes().to_vec()))
} else {
None
}
})
.collect::<HashMap<String, Vec<u8>>>();
ensure!(
!credential.is_empty(),
InvalidConfigSnafu {
value: filepath,
msg: "UserProvider's file must contains at least one valid credential",
}
);
Ok(Some(credential))
}
fn authenticate_with_credential(
users: &HashMap<String, Vec<u8>>,
input_id: Identity<'_>,
input_pwd: Password<'_>,
) -> Result<UserInfoRef> {
match input_id {
Identity::UserId(username, _) => {
ensure!(
!username.is_empty(),
IllegalParamSnafu {
msg: "blank username"
}
);
let save_pwd = users.get(username).context(UserNotFoundSnafu {
username: username.to_string(),
})?;
match input_pwd {
Password::PlainText(pwd) => {
ensure!(
!pwd.expose_secret().is_empty(),
IllegalParamSnafu {
msg: "blank password"
}
);
if save_pwd == pwd.expose_secret().as_bytes() {
Ok(DefaultUserInfo::with_name(username))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
}
.fail()
}
}
Password::MysqlNativePassword(auth_data, salt) => {
auth_mysql(auth_data, salt, username, save_pwd)
.map(|_| DefaultUserInfo::with_name(username))
}
Password::PgMD5(_, _) => UnsupportedPasswordTypeSnafu {
password_type: "pg_md5",
}
.fail(),
}
}
}
}


@@ -13,60 +13,34 @@
// limitations under the License.
use std::collections::HashMap;
use std::fs::File;
use std::io;
use std::io::BufRead;
use std::path::Path;
use async_trait::async_trait;
use secrecy::ExposeSecret;
use snafu::{ensure, OptionExt, ResultExt};
use snafu::OptionExt;
use crate::error::{
Error, IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Result, UnsupportedPasswordTypeSnafu,
UserNotFoundSnafu, UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, Identity, Password, UserInfoRef, UserProvider};
use crate::error::{InvalidConfigSnafu, Result};
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const STATIC_USER_PROVIDER: &str = "static_user_provider";
impl TryFrom<&str> for StaticUserProvider {
type Error = Error;
pub(crate) struct StaticUserProvider {
users: HashMap<String, Vec<u8>>,
}
fn try_from(value: &str) -> Result<Self> {
impl StaticUserProvider {
pub(crate) fn new(value: &str) -> Result<Self> {
let (mode, content) = value.split_once(':').context(InvalidConfigSnafu {
value: value.to_string(),
msg: "StaticUserProviderOption must be in format `<option>:<value>`",
})?;
return match mode {
"file" => {
// check valid path
let path = Path::new(content);
ensure!(path.exists() && path.is_file(), InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticUserProviderOption file must be a valid file path",
});
let file = File::open(path).context(IoSnafu)?;
let credential = io::BufReader::new(file)
.lines()
.map_while(std::result::Result::ok)
.filter_map(|line| {
if let Some((k, v)) = line.split_once('=') {
Some((k.to_string(), v.as_bytes().to_vec()))
} else {
None
}
})
.collect::<HashMap<String, Vec<u8>>>();
ensure!(!credential.is_empty(), InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticUserProviderOption file must contains at least one valid credential",
});
Ok(StaticUserProvider { users: credential, })
let users = load_credential_from_file(content)?
.context(InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticFileUserProvider must be a valid file path",
})?;
Ok(StaticUserProvider { users })
}
"cmd" => content
.split(',')
@@ -83,66 +57,19 @@ impl TryFrom<&str> for StaticUserProvider {
value: mode.to_string(),
msg: "StaticUserProviderOption must be in format `file:<path>` or `cmd:<values>`",
}
.fail(),
.fail(),
};
}
}
pub(crate) struct StaticUserProvider {
users: HashMap<String, Vec<u8>>,
}
#[async_trait]
impl UserProvider for StaticUserProvider {
fn name(&self) -> &str {
STATIC_USER_PROVIDER
}
async fn authenticate(
&self,
input_id: Identity<'_>,
input_pwd: Password<'_>,
) -> Result<UserInfoRef> {
match input_id {
Identity::UserId(username, _) => {
ensure!(
!username.is_empty(),
IllegalParamSnafu {
msg: "blank username"
}
);
let save_pwd = self.users.get(username).context(UserNotFoundSnafu {
username: username.to_string(),
})?;
match input_pwd {
Password::PlainText(pwd) => {
ensure!(
!pwd.expose_secret().is_empty(),
IllegalParamSnafu {
msg: "blank password"
}
);
return if save_pwd == pwd.expose_secret().as_bytes() {
Ok(DefaultUserInfo::with_name(username))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
}
.fail()
};
}
Password::MysqlNativePassword(auth_data, salt) => {
auth_mysql(auth_data, salt, username, save_pwd)
.map(|_| DefaultUserInfo::with_name(username))
}
Password::PgMD5(_, _) => UnsupportedPasswordTypeSnafu {
password_type: "pg_md5",
}
.fail(),
}
}
}
async fn authenticate(&self, id: Identity<'_>, pwd: Password<'_>) -> Result<UserInfoRef> {
authenticate_with_credential(&self.users, id, pwd)
}
async fn authorize(
@@ -181,7 +108,7 @@ pub mod test {
#[tokio::test]
async fn test_authorize() {
let user_info = DefaultUserInfo::with_name("root");
let provider = StaticUserProvider::try_from("cmd:root=123456,admin=654321").unwrap();
let provider = StaticUserProvider::new("cmd:root=123456,admin=654321").unwrap();
provider
.authorize("catalog", "schema", &user_info)
.await
@@ -190,7 +117,7 @@ pub mod test {
#[tokio::test]
async fn test_inline_provider() {
let provider = StaticUserProvider::try_from("cmd:root=123456,admin=654321").unwrap();
let provider = StaticUserProvider::new("cmd:root=123456,admin=654321").unwrap();
test_authenticate(&provider, "root", "123456").await;
test_authenticate(&provider, "admin", "654321").await;
}
@@ -214,7 +141,7 @@ admin=654321",
}
let param = format!("file:{file_path}");
let provider = StaticUserProvider::try_from(param.as_str()).unwrap();
let provider = StaticUserProvider::new(param.as_str()).unwrap();
test_authenticate(&provider, "root", "123456").await;
test_authenticate(&provider, "admin", "654321").await;
}


@@ -0,0 +1,215 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::path::Path;
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
use async_trait::async_trait;
use common_telemetry::{info, warn};
use notify::{EventKind, RecursiveMode, Watcher};
use snafu::{ensure, ResultExt};
use crate::error::{FileWatchSnafu, InvalidConfigSnafu, Result};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const WATCH_FILE_USER_PROVIDER: &str = "watch_file_user_provider";
type WatchedCredentialRef = Arc<Mutex<Option<HashMap<String, Vec<u8>>>>>;
/// A user provider that reads user credential from a file and watches the file for changes.
///
/// Empty file is invalid; but file not exist means every user can be authenticated.
pub(crate) struct WatchFileUserProvider {
users: WatchedCredentialRef,
}
impl WatchFileUserProvider {
pub fn new(filepath: &str) -> Result<Self> {
let credential = load_credential_from_file(filepath)?;
let users = Arc::new(Mutex::new(credential));
let this = WatchFileUserProvider {
users: users.clone(),
};
let (tx, rx) = channel::<notify::Result<notify::Event>>();
let mut debouncer =
notify::recommended_watcher(tx).context(FileWatchSnafu { path: "<none>" })?;
let mut dir = Path::new(filepath).to_path_buf();
ensure!(
dir.pop(),
InvalidConfigSnafu {
value: filepath,
msg: "UserProvider path must be a file path",
}
);
debouncer
.watch(&dir, RecursiveMode::NonRecursive)
.context(FileWatchSnafu { path: filepath })?;
let filepath = filepath.to_string();
std::thread::spawn(move || {
let filename = Path::new(&filepath).file_name();
let _hold = debouncer;
while let Ok(res) = rx.recv() {
if let Ok(event) = res {
let is_this_file = event.paths.iter().any(|p| p.file_name() == filename);
let is_relevant_event = matches!(
event.kind,
EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)
);
if is_this_file && is_relevant_event {
info!(?event.kind, "User provider file {} changed", &filepath);
match load_credential_from_file(&filepath) {
Ok(credential) => {
let mut users =
users.lock().expect("users credential must be valid");
#[cfg(not(test))]
info!("User provider file {filepath} reloaded");
#[cfg(test)]
info!("User provider file {filepath} reloaded: {credential:?}");
*users = credential;
}
Err(err) => {
warn!(
?err,
"Fail to load credential from file {filepath}; keep the old one",
)
}
}
}
}
}
});
Ok(this)
}
}
#[async_trait]
impl UserProvider for WatchFileUserProvider {
fn name(&self) -> &str {
WATCH_FILE_USER_PROVIDER
}
async fn authenticate(&self, id: Identity<'_>, password: Password<'_>) -> Result<UserInfoRef> {
let users = self.users.lock().expect("users credential must be valid");
if let Some(users) = users.as_ref() {
authenticate_with_credential(users, id, password)
} else {
match id {
Identity::UserId(id, _) => {
warn!(id, "User provider file not exist, allow all users");
Ok(DefaultUserInfo::with_name(id))
}
}
}
}
async fn authorize(&self, _: &str, _: &str, _: &UserInfoRef) -> Result<()> {
// default allow all
Ok(())
}
}
#[cfg(test)]
pub mod test {
use std::time::{Duration, Instant};
use common_test_util::temp_dir::create_temp_dir;
use tokio::time::sleep;
use crate::user_provider::watch_file_user_provider::WatchFileUserProvider;
use crate::user_provider::{Identity, Password};
use crate::UserProvider;
async fn test_authenticate(
provider: &dyn UserProvider,
username: &str,
password: &str,
ok: bool,
timeout: Option<Duration>,
) {
if let Some(timeout) = timeout {
let deadline = Instant::now().checked_add(timeout).unwrap();
loop {
let re = provider
.authenticate(
Identity::UserId(username, None),
Password::PlainText(password.to_string().into()),
)
.await;
if re.is_ok() == ok {
break;
} else if Instant::now() < deadline {
sleep(Duration::from_millis(100)).await;
} else {
panic!("timeout (username: {username}, password: {password}, expected: {ok})");
}
}
} else {
let re = provider
.authenticate(
Identity::UserId(username, None),
Password::PlainText(password.to_string().into()),
)
.await;
assert_eq!(
re.is_ok(),
ok,
"username: {}, password: {}",
username,
password
);
}
}
#[tokio::test]
async fn test_file_provider() {
common_telemetry::init_default_ut_logging();
let dir = create_temp_dir("test_file_provider");
let file_path = format!("{}/test_file_provider", dir.path().to_str().unwrap());
// write a tmp file
assert!(std::fs::write(&file_path, "root=123456\nadmin=654321\n").is_ok());
let provider = WatchFileUserProvider::new(file_path.as_str()).unwrap();
let timeout = Duration::from_secs(60);
test_authenticate(&provider, "root", "123456", true, None).await;
test_authenticate(&provider, "admin", "654321", true, None).await;
test_authenticate(&provider, "root", "654321", false, None).await;
// update the tmp file
assert!(std::fs::write(&file_path, "root=654321\n").is_ok());
test_authenticate(&provider, "root", "123456", false, Some(timeout)).await;
test_authenticate(&provider, "root", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", false, Some(timeout)).await;
// remove the tmp file
assert!(std::fs::remove_file(&file_path).is_ok());
test_authenticate(&provider, "root", "123456", true, Some(timeout)).await;
test_authenticate(&provider, "root", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", true, Some(timeout)).await;
// recreate the tmp file
assert!(std::fs::write(&file_path, "root=123456\n").is_ok());
test_authenticate(&provider, "root", "123456", true, Some(timeout)).await;
test_authenticate(&provider, "root", "654321", false, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", false, Some(timeout)).await;
}
}


@@ -7,38 +7,40 @@ license.workspace = true
[features]
testing = []
[lints]
workspace = true
[dependencies]
api.workspace = true
arc-swap = "1.0"
arrow.workspace = true
arrow-schema.workspace = true
async-stream.workspace = true
async-trait = "0.1"
common-catalog.workspace = true
common-error.workspace = true
common-grpc.workspace = true
common-macro.workspace = true
common-meta.workspace = true
common-query.workspace = true
common-recordbatch.workspace = true
common-runtime.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
dashmap = "5.4"
common-version.workspace = true
dashmap.workspace = true
datafusion.workspace = true
datatypes.workspace = true
futures = "0.3"
futures-util.workspace = true
itertools.workspace = true
lazy_static.workspace = true
meta-client.workspace = true
moka = { workspace = true, features = ["future"] }
parking_lot = "0.12"
moka = { workspace = true, features = ["future", "sync"] }
partition.workspace = true
paste = "1.0"
prometheus.workspace = true
regex.workspace = true
serde.workspace = true
serde_json = "1.0"
serde_json.workspace = true
session.workspace = true
snafu.workspace = true
sql.workspace = true
store-api.workspace = true
table.workspace = true
tokio.workspace = true


@@ -41,6 +41,14 @@ pub enum Error {
source: BoxedError,
},
#[snafu(display("Failed to list {}.{}'s tables", catalog, schema))]
ListTables {
location: Location,
catalog: String,
schema: String,
source: BoxedError,
},
#[snafu(display("Failed to re-compile script due to internal error"))]
CompileScriptInternal {
location: Location,
@@ -156,6 +164,12 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to find table partitions"))]
FindPartitions { source: partition::error::Error },
#[snafu(display("Failed to find region routes"))]
FindRegionRoutes { source: partition::error::Error },
#[snafu(display("Failed to read system catalog table records"))]
ReadSystemCatalog {
location: Location,
@@ -202,7 +216,7 @@ pub enum Error {
},
#[snafu(display("Failed to perform metasrv operation"))]
MetaSrv {
Metasrv {
location: Location,
source: meta_client::error::Error,
},
@@ -237,6 +251,12 @@ pub enum Error {
source: common_meta::error::Error,
location: Location,
},
#[snafu(display("Get null from table cache, key: {}", key))]
TableCacheNotGet { key: String, location: Location },
#[snafu(display("Failed to get table cache, err: {}", err_msg))]
GetTableCache { err_msg: String },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -246,11 +266,14 @@ impl ErrorExt for Error {
match self {
Error::InvalidKey { .. }
| Error::SchemaNotFound { .. }
| Error::TableNotFound { .. }
| Error::CatalogNotFound { .. }
| Error::FindPartitions { .. }
| Error::FindRegionRoutes { .. }
| Error::InvalidEntryType { .. }
| Error::ParallelOpenTable { .. } => StatusCode::Unexpected,
Error::TableNotFound { .. } => StatusCode::TableNotFound,
Error::SystemCatalog { .. }
| Error::EmptyValue { .. }
| Error::ValueDeserialize { .. } => StatusCode::StorageUnavailable,
@@ -270,9 +293,9 @@ impl ErrorExt for Error {
StatusCode::InvalidArguments
}
Error::ListCatalogs { source, .. } | Error::ListSchemas { source, .. } => {
source.status_code()
}
Error::ListCatalogs { source, .. }
| Error::ListSchemas { source, .. }
| Error::ListTables { source, .. } => source.status_code(),
Error::OpenSystemCatalog { source, .. }
| Error::CreateSystemCatalog { source, .. }
@@ -281,7 +304,7 @@ impl ErrorExt for Error {
| Error::CreateTable { source, .. }
| Error::TableSchemaMismatch { source, .. } => source.status_code(),
Error::MetaSrv { source, .. } => source.status_code(),
Error::Metasrv { source, .. } => source.status_code(),
Error::SystemCatalogTableScan { source, .. } => source.status_code(),
Error::SystemCatalogTableScanExec { source, .. } => source.status_code(),
Error::InvalidTableInfoInCatalog { source, .. } => source.status_code(),
@@ -294,6 +317,7 @@ impl ErrorExt for Error {
Error::QueryAccessDenied { .. } => StatusCode::AccessDenied,
Error::Datafusion { .. } => StatusCode::EngineExecuteQuery,
Error::TableMetadataManager { source, .. } => source.status_code(),
Error::TableCacheNotGet { .. } | Error::GetTableCache { .. } => StatusCode::Internal,
}
}
@@ -333,7 +357,7 @@ mod tests {
assert_eq!(
StatusCode::StorageUnavailable,
Error::SystemCatalog {
msg: "".to_string(),
msg: String::default(),
location: Location::generate(),
}
.status_code()


@@ -12,17 +12,29 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod columns;
mod tables;
pub mod columns;
pub mod key_column_usage;
mod memory_table;
mod partitions;
mod predicate;
mod region_peers;
mod runtime_metrics;
pub mod schemata;
mod table_constraints;
mod table_names;
pub mod tables;
use std::collections::HashMap;
use std::sync::{Arc, Weak};
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_catalog::consts::{self, DEFAULT_CATALOG_NAME, INFORMATION_SCHEMA_NAME};
use common_error::ext::BoxedError;
use common_recordbatch::{RecordBatchStreamAdaptor, SendableRecordBatchStream};
use common_recordbatch::{RecordBatchStreamWrapper, SendableRecordBatchStream};
use datatypes::schema::SchemaRef;
use futures_util::StreamExt;
use lazy_static::lazy_static;
use paste::paste;
pub(crate) use predicate::Predicates;
use snafu::ResultExt;
use store_api::data_source::DataSource;
use store_api::storage::{ScanRequest, TableId};
@@ -30,52 +42,159 @@ use table::error::{SchemaConversionSnafu, TablesRecordBatchSnafu};
use table::metadata::{
FilterPushDownType, TableInfoBuilder, TableInfoRef, TableMetaBuilder, TableType,
};
use table::thin_table::{ThinTable, ThinTableAdapter};
use table::TableRef;
use table::{Table, TableRef};
pub use table_names::*;
use self::columns::InformationSchemaColumns;
use crate::error::Result;
use crate::information_schema::key_column_usage::InformationSchemaKeyColumnUsage;
use crate::information_schema::memory_table::{get_schema_columns, MemoryTable};
use crate::information_schema::partitions::InformationSchemaPartitions;
use crate::information_schema::region_peers::InformationSchemaRegionPeers;
use crate::information_schema::runtime_metrics::InformationSchemaMetrics;
use crate::information_schema::schemata::InformationSchemaSchemata;
use crate::information_schema::table_constraints::InformationSchemaTableConstraints;
use crate::information_schema::tables::InformationSchemaTables;
use crate::CatalogManager;
pub const TABLES: &str = "tables";
pub const COLUMNS: &str = "columns";
lazy_static! {
// Memory tables in `information_schema`.
static ref MEMORY_TABLES: &'static [&'static str] = &[
ENGINES,
COLUMN_PRIVILEGES,
COLUMN_STATISTICS,
CHARACTER_SETS,
COLLATIONS,
COLLATION_CHARACTER_SET_APPLICABILITY,
CHECK_CONSTRAINTS,
EVENTS,
FILES,
OPTIMIZER_TRACE,
PARAMETERS,
PROFILING,
REFERENTIAL_CONSTRAINTS,
ROUTINES,
SCHEMA_PRIVILEGES,
TABLE_PRIVILEGES,
TRIGGERS,
GLOBAL_STATUS,
SESSION_STATUS,
PARTITIONS,
];
}
macro_rules! setup_memory_table {
($name: expr) => {
paste! {
{
let (schema, columns) = get_schema_columns($name);
Some(Arc::new(MemoryTable::new(
consts::[<INFORMATION_SCHEMA_ $name _TABLE_ID>],
$name,
schema,
columns
)) as _)
}
}
};
}
/// The `information_schema` tables info provider.
pub struct InformationSchemaProvider {
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
tables: HashMap<String, TableRef>,
}
impl InformationSchemaProvider {
pub fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
let mut provider = Self {
catalog_name,
catalog_manager,
}
tables: HashMap::new(),
};
provider.build_tables();
provider
}
/// Build a map of [TableRef] in information schema.
/// Including `tables` and `columns`.
pub fn build(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> HashMap<String, TableRef> {
let provider = Self::new(catalog_name, catalog_manager);
/// Returns table names in the order of table id.
pub fn table_names(&self) -> Vec<String> {
let mut tables = self.tables.values().clone().collect::<Vec<_>>();
let mut schema = HashMap::new();
schema.insert(TABLES.to_owned(), provider.table(TABLES).unwrap());
schema.insert(COLUMNS.to_owned(), provider.table(COLUMNS).unwrap());
schema
tables.sort_by(|t1, t2| {
t1.table_info()
.table_id()
.partial_cmp(&t2.table_info().table_id())
.unwrap()
});
tables
.into_iter()
.map(|t| t.table_info().name.clone())
.collect()
}
/// Returns a map of [TableRef] in information schema.
pub fn tables(&self) -> &HashMap<String, TableRef> {
assert!(!self.tables.is_empty());
&self.tables
}
/// Returns the [TableRef] by table name.
pub fn table(&self, name: &str) -> Option<TableRef> {
self.tables.get(name).cloned()
}
fn build_tables(&mut self) {
let mut tables = HashMap::new();
// Carefully consider the tables that may expose sensitive cluster configurations,
// authentication details, and other critical information.
// Only put these tables under `greptime` catalog to prevent info leak.
if self.catalog_name == DEFAULT_CATALOG_NAME {
tables.insert(
RUNTIME_METRICS.to_string(),
self.build_table(RUNTIME_METRICS).unwrap(),
);
tables.insert(
BUILD_INFO.to_string(),
self.build_table(BUILD_INFO).unwrap(),
);
tables.insert(
REGION_PEERS.to_string(),
self.build_table(REGION_PEERS).unwrap(),
);
}
tables.insert(TABLES.to_string(), self.build_table(TABLES).unwrap());
tables.insert(SCHEMATA.to_string(), self.build_table(SCHEMATA).unwrap());
tables.insert(COLUMNS.to_string(), self.build_table(COLUMNS).unwrap());
tables.insert(
KEY_COLUMN_USAGE.to_string(),
self.build_table(KEY_COLUMN_USAGE).unwrap(),
);
tables.insert(
TABLE_CONSTRAINTS.to_string(),
self.build_table(TABLE_CONSTRAINTS).unwrap(),
);
// Add memory tables
for name in MEMORY_TABLES.iter() {
tables.insert((*name).to_string(), self.build_table(name).expect(name));
}
self.tables = tables;
}
fn build_table(&self, name: &str) -> Option<TableRef> {
self.information_table(name).map(|table| {
let table_info = Self::table_info(self.catalog_name.clone(), &table);
let filter_pushdown = FilterPushDownType::Unsupported;
let thin_table = ThinTable::new(table_info, filter_pushdown);
let filter_pushdown = FilterPushDownType::Inexact;
let data_source = Arc::new(InformationTableDataSource::new(table));
Arc::new(ThinTableAdapter::new(thin_table, data_source)) as _
let table = Table::new(table_info, filter_pushdown, data_source);
Arc::new(table)
})
}
@@ -89,6 +208,49 @@ impl InformationSchemaProvider {
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
ENGINES => setup_memory_table!(ENGINES),
COLUMN_PRIVILEGES => setup_memory_table!(COLUMN_PRIVILEGES),
COLUMN_STATISTICS => setup_memory_table!(COLUMN_STATISTICS),
BUILD_INFO => setup_memory_table!(BUILD_INFO),
CHARACTER_SETS => setup_memory_table!(CHARACTER_SETS),
COLLATIONS => setup_memory_table!(COLLATIONS),
COLLATION_CHARACTER_SET_APPLICABILITY => {
setup_memory_table!(COLLATION_CHARACTER_SET_APPLICABILITY)
}
CHECK_CONSTRAINTS => setup_memory_table!(CHECK_CONSTRAINTS),
EVENTS => setup_memory_table!(EVENTS),
FILES => setup_memory_table!(FILES),
OPTIMIZER_TRACE => setup_memory_table!(OPTIMIZER_TRACE),
PARAMETERS => setup_memory_table!(PARAMETERS),
PROFILING => setup_memory_table!(PROFILING),
REFERENTIAL_CONSTRAINTS => setup_memory_table!(REFERENTIAL_CONSTRAINTS),
ROUTINES => setup_memory_table!(ROUTINES),
SCHEMA_PRIVILEGES => setup_memory_table!(SCHEMA_PRIVILEGES),
TABLE_PRIVILEGES => setup_memory_table!(TABLE_PRIVILEGES),
TRIGGERS => setup_memory_table!(TRIGGERS),
GLOBAL_STATUS => setup_memory_table!(GLOBAL_STATUS),
SESSION_STATUS => setup_memory_table!(SESSION_STATUS),
KEY_COLUMN_USAGE => Some(Arc::new(InformationSchemaKeyColumnUsage::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
SCHEMATA => Some(Arc::new(InformationSchemaSchemata::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
RUNTIME_METRICS => Some(Arc::new(InformationSchemaMetrics::new())),
PARTITIONS => Some(Arc::new(InformationSchemaPartitions::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
REGION_PEERS => Some(Arc::new(InformationSchemaRegionPeers::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
TABLE_CONSTRAINTS => Some(Arc::new(InformationSchemaTableConstraints::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
_ => None,
}
}
@@ -102,9 +264,9 @@ impl InformationSchemaProvider {
.unwrap();
let table_info = TableInfoBuilder::default()
.table_id(table.table_id())
.name(table.table_name().to_owned())
.name(table.table_name().to_string())
.catalog_name(catalog_name)
.schema_name(INFORMATION_SCHEMA_NAME.to_owned())
.schema_name(INFORMATION_SCHEMA_NAME.to_string())
.meta(table_meta)
.table_type(table.table_type())
.build()
@@ -120,7 +282,7 @@ trait InformationTable {
fn schema(&self) -> SchemaRef;
fn to_stream(&self) -> Result<SendableRecordBatchStream>;
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream>;
fn table_type(&self) -> TableType {
TableType::Temporary
@@ -154,7 +316,7 @@ impl DataSource for InformationTableDataSource {
&self,
request: ScanRequest,
) -> std::result::Result<SendableRecordBatchStream, BoxedError> {
let projection = request.projection;
let projection = request.projection.clone();
let projected_schema = match &projection {
Some(projection) => self.try_project(projection)?,
None => self.table.schema(),
@@ -162,7 +324,7 @@ impl DataSource for InformationTableDataSource {
let stream = self
.table
.to_stream()
.to_stream(request)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)
.map_err(BoxedError::new)?
@@ -171,11 +333,13 @@ impl DataSource for InformationTableDataSource {
None => batch,
});
let stream = RecordBatchStreamAdaptor {
let stream = RecordBatchStreamWrapper {
schema: projected_schema,
stream: Box::pin(stream),
output_ordering: None,
metrics: Default::default(),
};
Ok(Box::pin(stream))
}
}


@@ -16,8 +16,8 @@ use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::{
INFORMATION_SCHEMA_COLUMNS_TABLE_ID, INFORMATION_SCHEMA_NAME, SEMANTIC_TYPE_FIELD,
SEMANTIC_TYPE_PRIMARY_KEY, SEMANTIC_TYPE_TIME_INDEX,
INFORMATION_SCHEMA_COLUMNS_TABLE_ID, SEMANTIC_TYPE_FIELD, SEMANTIC_TYPE_PRIMARY_KEY,
SEMANTIC_TYPE_TIME_INDEX,
};
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
@@ -26,18 +26,23 @@ use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, DataType};
use datatypes::prelude::{ConcreteDataType, DataType, MutableVector};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{StringVectorBuilder, VectorRef};
use datatypes::value::Value;
use datatypes::vectors::{
ConstantVector, Int64Vector, Int64VectorBuilder, StringVector, StringVectorBuilder, VectorRef,
};
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::TableId;
use sql::statements;
use store_api::storage::{ScanRequest, TableId};
use super::tables::InformationSchemaTables;
use super::{InformationTable, COLUMNS, TABLES};
use super::{InformationTable, COLUMNS};
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::CatalogManager;
pub(super) struct InformationSchemaColumns {
@@ -46,12 +51,41 @@ pub(super) struct InformationSchemaColumns {
catalog_manager: Weak<dyn CatalogManager>,
}
const TABLE_CATALOG: &str = "table_catalog";
const TABLE_SCHEMA: &str = "table_schema";
const TABLE_NAME: &str = "table_name";
const COLUMN_NAME: &str = "column_name";
const DATA_TYPE: &str = "data_type";
const SEMANTIC_TYPE: &str = "semantic_type";
pub const TABLE_CATALOG: &str = "table_catalog";
pub const TABLE_SCHEMA: &str = "table_schema";
pub const TABLE_NAME: &str = "table_name";
pub const COLUMN_NAME: &str = "column_name";
const ORDINAL_POSITION: &str = "ordinal_position";
const CHARACTER_MAXIMUM_LENGTH: &str = "character_maximum_length";
const CHARACTER_OCTET_LENGTH: &str = "character_octet_length";
const NUMERIC_PRECISION: &str = "numeric_precision";
const NUMERIC_SCALE: &str = "numeric_scale";
const DATETIME_PRECISION: &str = "datetime_precision";
const CHARACTER_SET_NAME: &str = "character_set_name";
pub const COLLATION_NAME: &str = "collation_name";
pub const COLUMN_KEY: &str = "column_key";
pub const EXTRA: &str = "extra";
pub const PRIVILEGES: &str = "privileges";
const GENERATION_EXPRESSION: &str = "generation_expression";
// Extension field to keep greptime data type name
pub const GREPTIME_DATA_TYPE: &str = "greptime_data_type";
pub const DATA_TYPE: &str = "data_type";
pub const SEMANTIC_TYPE: &str = "semantic_type";
pub const COLUMN_DEFAULT: &str = "column_default";
pub const IS_NULLABLE: &str = "is_nullable";
const COLUMN_TYPE: &str = "column_type";
pub const COLUMN_COMMENT: &str = "column_comment";
const SRS_ID: &str = "srs_id";
const INIT_CAPACITY: usize = 42;
// The maximum length of string type
const MAX_STRING_LENGTH: i64 = 2147483647;
const UTF8_CHARSET_NAME: &str = "utf8";
const UTF8_COLLATE_NAME: &str = "utf8_bin";
const PRI_COLUMN_KEY: &str = "PRI";
const TIME_INDEX_COLUMN_KEY: &str = "TIME INDEX";
const DEFAULT_PRIVILEGES: &str = "select,insert";
const EMPTY_STR: &str = "";
impl InformationSchemaColumns {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
@@ -68,8 +102,46 @@ impl InformationSchemaColumns {
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(ORDINAL_POSITION, ConcreteDataType::int64_datatype(), false),
ColumnSchema::new(
CHARACTER_MAXIMUM_LENGTH,
ConcreteDataType::int64_datatype(),
true,
),
ColumnSchema::new(
CHARACTER_OCTET_LENGTH,
ConcreteDataType::int64_datatype(),
true,
),
ColumnSchema::new(NUMERIC_PRECISION, ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(NUMERIC_SCALE, ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(DATETIME_PRECISION, ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(
CHARACTER_SET_NAME,
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(COLLATION_NAME, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(COLUMN_KEY, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(EXTRA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(PRIVILEGES, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(
GENERATION_EXPRESSION,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
GREPTIME_DATA_TYPE,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(DATA_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(SEMANTIC_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_DEFAULT, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(IS_NULLABLE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_COMMENT, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(SRS_ID, ConcreteDataType::int64_datatype(), true),
]))
}
@@ -95,14 +167,14 @@ impl InformationTable for InformationSchemaColumns {
self.schema.clone()
}
fn to_stream(&self) -> Result<SendableRecordBatchStream> {
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.make_columns(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
@@ -125,8 +197,22 @@ struct InformationSchemaColumnsBuilder {
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
column_names: StringVectorBuilder,
ordinal_positions: Int64VectorBuilder,
character_maximum_lengths: Int64VectorBuilder,
character_octet_lengths: Int64VectorBuilder,
numeric_precisions: Int64VectorBuilder,
numeric_scales: Int64VectorBuilder,
datetime_precisions: Int64VectorBuilder,
character_set_names: StringVectorBuilder,
collation_names: StringVectorBuilder,
column_keys: StringVectorBuilder,
greptime_data_types: StringVectorBuilder,
data_types: StringVectorBuilder,
semantic_types: StringVectorBuilder,
column_defaults: StringVectorBuilder,
is_nullables: StringVectorBuilder,
column_types: StringVectorBuilder,
column_comments: StringVectorBuilder,
}
impl InformationSchemaColumnsBuilder {
@@ -139,55 +225,44 @@ impl InformationSchemaColumnsBuilder {
schema,
catalog_name,
catalog_manager,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
column_names: StringVectorBuilder::with_capacity(42),
data_types: StringVectorBuilder::with_capacity(42),
semantic_types: StringVectorBuilder::with_capacity(42),
catalog_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
schema_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
ordinal_positions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
character_maximum_lengths: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
character_octet_lengths: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
numeric_precisions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
numeric_scales: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
datetime_precisions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
character_set_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
collation_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_keys: StringVectorBuilder::with_capacity(INIT_CAPACITY),
greptime_data_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
data_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
semantic_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_defaults: StringVectorBuilder::with_capacity(INIT_CAPACITY),
is_nullables: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_comments: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `information_schema.tables` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
/// Construct the `information_schema.columns` virtual table
async fn make_columns(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
if !catalog_manager
.schema_exists(&catalog_name, &schema_name)
.await?
{
continue;
}
for table_name in catalog_manager
.table_names(&catalog_name, &schema_name)
.await?
{
let (keys, schema) = if let Some(table) = catalog_manager
.table(&catalog_name, &schema_name, &table_name)
.await?
{
let keys = &table.table_info().meta.primary_key_indices;
let schema = table.schema();
(keys.clone(), schema)
} else {
// TODO: this specific branch is only a workaround for FrontendCatalogManager.
if schema_name == INFORMATION_SCHEMA_NAME {
if table_name == COLUMNS {
(vec![], InformationSchemaColumns::schema())
} else if table_name == TABLES {
(vec![], InformationSchemaTables::schema())
} else {
continue;
}
} else {
continue;
}
};
let mut stream = catalog_manager.tables(&catalog_name, &schema_name).await;
while let Some(table) = stream.try_next().await? {
let keys = &table.table_info().meta.primary_key_indices;
let schema = table.schema();
for (idx, column) in schema.column_schemas().iter().enumerate() {
let semantic_type = if column.is_time_index() {
@@ -197,13 +272,15 @@ impl InformationSchemaColumnsBuilder {
} else {
SEMANTIC_TYPE_FIELD
};
self.add_column(
&predicates,
idx,
&catalog_name,
&schema_name,
&table_name,
&column.name,
&column.data_type.name(),
&table.table_info().name,
semantic_type,
column,
);
}
}
@@ -212,32 +289,164 @@ impl InformationSchemaColumnsBuilder {
self.finish()
}
#[allow(clippy::too_many_arguments)]
fn add_column(
&mut self,
predicates: &Predicates,
index: usize,
catalog_name: &str,
schema_name: &str,
table_name: &str,
column_name: &str,
data_type: &str,
semantic_type: &str,
column_schema: &ColumnSchema,
) {
// Use the SQL data type name
let data_type = statements::concrete_data_type_to_sql_data_type(&column_schema.data_type)
.map(|dt| dt.to_string().to_lowercase())
.unwrap_or_else(|_| column_schema.data_type.name());
let column_key = match semantic_type {
SEMANTIC_TYPE_PRIMARY_KEY => PRI_COLUMN_KEY,
SEMANTIC_TYPE_TIME_INDEX => TIME_INDEX_COLUMN_KEY,
_ => EMPTY_STR,
};
let row = [
(TABLE_CATALOG, &Value::from(catalog_name)),
(TABLE_SCHEMA, &Value::from(schema_name)),
(TABLE_NAME, &Value::from(table_name)),
(COLUMN_NAME, &Value::from(column_schema.name.as_str())),
(DATA_TYPE, &Value::from(data_type.as_str())),
(SEMANTIC_TYPE, &Value::from(semantic_type)),
(ORDINAL_POSITION, &Value::from((index + 1) as i64)),
(COLUMN_KEY, &Value::from(column_key)),
];
if !predicates.eval(&row) {
return;
}
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.column_names.push(Some(column_name));
self.data_types.push(Some(data_type));
self.column_names.push(Some(&column_schema.name));
// Starts from 1
self.ordinal_positions.push(Some((index + 1) as i64));
if column_schema.data_type.is_string() {
self.character_maximum_lengths.push(Some(MAX_STRING_LENGTH));
self.character_octet_lengths.push(Some(MAX_STRING_LENGTH));
self.numeric_precisions.push(None);
self.numeric_scales.push(None);
self.datetime_precisions.push(None);
self.character_set_names.push(Some(UTF8_CHARSET_NAME));
self.collation_names.push(Some(UTF8_COLLATE_NAME));
} else if column_schema.data_type.is_numeric() || column_schema.data_type.is_decimal() {
self.character_maximum_lengths.push(None);
self.character_octet_lengths.push(None);
self.numeric_precisions.push(
column_schema
.data_type
.numeric_precision()
.map(|x| x as i64),
);
self.numeric_scales
.push(column_schema.data_type.numeric_scale().map(|x| x as i64));
self.datetime_precisions.push(None);
self.character_set_names.push(None);
self.collation_names.push(None);
} else {
self.character_maximum_lengths.push(None);
self.character_octet_lengths.push(None);
self.numeric_precisions.push(None);
self.numeric_scales.push(None);
match &column_schema.data_type {
ConcreteDataType::DateTime(datetime_type) => {
self.datetime_precisions
.push(Some(datetime_type.precision() as i64));
}
ConcreteDataType::Timestamp(ts_type) => {
self.datetime_precisions
.push(Some(ts_type.precision() as i64));
}
ConcreteDataType::Time(time_type) => {
self.datetime_precisions
.push(Some(time_type.precision() as i64));
}
_ => self.datetime_precisions.push(None),
}
self.character_set_names.push(None);
self.collation_names.push(None);
}
self.column_keys.push(Some(column_key));
self.greptime_data_types
.push(Some(&column_schema.data_type.name()));
self.data_types.push(Some(&data_type));
self.semantic_types.push(Some(semantic_type));
self.column_defaults.push(
column_schema
.default_constraint()
.map(|s| format!("{}", s))
.as_deref(),
);
if column_schema.is_nullable() {
self.is_nullables.push(Some("Yes"));
} else {
self.is_nullables.push(Some("No"));
}
self.column_types.push(Some(&data_type));
self.column_comments
.push(column_schema.column_comment().map(|x| x.as_ref()));
}
fn finish(&mut self) -> Result<RecordBatch> {
let rows_num = self.collation_names.len();
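// PRIVILEGES, EXTRA, GENERATION_EXPRESSION and SRS_ID carry the same value for
// every row, so constant vectors are used instead of per-row builders.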
let privileges = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec![DEFAULT_PRIVILEGES])),
rows_num,
));
let empty_string = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec![EMPTY_STR])),
rows_num,
));
let srs_ids = Arc::new(ConstantVector::new(
Arc::new(Int64Vector::from(vec![None])),
rows_num,
));
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.column_names.finish()),
Arc::new(self.ordinal_positions.finish()),
Arc::new(self.character_maximum_lengths.finish()),
Arc::new(self.character_octet_lengths.finish()),
Arc::new(self.numeric_precisions.finish()),
Arc::new(self.numeric_scales.finish()),
Arc::new(self.datetime_precisions.finish()),
Arc::new(self.character_set_names.finish()),
Arc::new(self.collation_names.finish()),
Arc::new(self.column_keys.finish()),
empty_string.clone(),
privileges,
empty_string,
Arc::new(self.greptime_data_types.finish()),
Arc::new(self.data_types.finish()),
Arc::new(self.semantic_types.finish()),
Arc::new(self.column_defaults.finish()),
Arc::new(self.is_nullables.finish()),
Arc::new(self.column_types.finish()),
Arc::new(self.column_comments.finish()),
srs_ids,
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
@@ -254,7 +463,7 @@ impl DfPartitionStream for InformationSchemaColumns {
schema,
futures::stream::once(async move {
builder
.make_tables()
.make_columns(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
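Taken together, these hunks add predicate pushdown to `information_schema.columns`: `to_stream` forwards the `ScanRequest`, `make_columns` converts its filters into `Predicates`, and `add_column` skips a row before touching any vector builder when the filters cannot match. A minimal sketch of that guard, assuming the `Predicates` helper introduced later in this change and purely illustrative catalog, schema and table values:

// Hedged sketch, not part of the diff: how a candidate row is screened.
fn keep_row(predicates: &Predicates) -> bool {
    let catalog = Value::from("greptime");
    let schema = Value::from("public");
    let table = Value::from("monitor");
    let row = [
        (TABLE_CATALOG, &catalog),
        (TABLE_SCHEMA, &schema),
        (TABLE_NAME, &table),
    ];
    // Rows that cannot satisfy the pushed-down filters never reach the
    // vector builders, so unrelated tables cost nothing beyond this check.
    predicates.eval(&row)
}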

View File

@@ -0,0 +1,367 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_KEY_COLUMN_USAGE_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, MutableVector, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{ConstantVector, StringVector, StringVectorBuilder, UInt32VectorBuilder};
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use super::KEY_COLUMN_USAGE;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::{InformationTable, Predicates};
use crate::CatalogManager;
pub const CONSTRAINT_SCHEMA: &str = "constraint_schema";
pub const CONSTRAINT_NAME: &str = "constraint_name";
// It's always `def` in MySQL
pub const TABLE_CATALOG: &str = "table_catalog";
// The real catalog name for this key column.
pub const REAL_TABLE_CATALOG: &str = "real_table_catalog";
pub const TABLE_SCHEMA: &str = "table_schema";
pub const TABLE_NAME: &str = "table_name";
pub const COLUMN_NAME: &str = "column_name";
pub const ORDINAL_POSITION: &str = "ordinal_position";
const INIT_CAPACITY: usize = 42;
/// Primary key constraint name
pub(crate) const PRI_CONSTRAINT_NAME: &str = "PRIMARY";
/// Time index constraint name
pub(crate) const TIME_INDEX_CONSTRAINT_NAME: &str = "TIME INDEX";
/// The virtual table implementation for `information_schema.KEY_COLUMN_USAGE`.
pub(super) struct InformationSchemaKeyColumnUsage {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaKeyColumnUsage {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
}
}
pub(crate) fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new(
"constraint_catalog",
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
CONSTRAINT_SCHEMA,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(CONSTRAINT_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_CATALOG, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(
REAL_TABLE_CATALOG,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(ORDINAL_POSITION, ConcreteDataType::uint32_datatype(), false),
ColumnSchema::new(
"position_in_unique_constraint",
ConcreteDataType::uint32_datatype(),
true,
),
ColumnSchema::new(
"referenced_table_schema",
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(
"referenced_table_name",
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(
"referenced_column_name",
ConcreteDataType::string_datatype(),
true,
),
]))
}
fn builder(&self) -> InformationSchemaKeyColumnUsageBuilder {
InformationSchemaKeyColumnUsageBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
)
}
}
impl InformationTable for InformationSchemaKeyColumnUsage {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_KEY_COLUMN_USAGE_TABLE_ID
}
fn table_name(&self) -> &'static str {
KEY_COLUMN_USAGE
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_key_column_usage(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
/// Builds the `information_schema.KEY_COLUMN_USAGE` table row by row
///
/// Columns are based on <https://dev.mysql.com/doc/refman/8.2/en/information-schema-key-column-usage-table.html>
struct InformationSchemaKeyColumnUsageBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
constraint_catalog: StringVectorBuilder,
constraint_schema: StringVectorBuilder,
constraint_name: StringVectorBuilder,
table_catalog: StringVectorBuilder,
real_table_catalog: StringVectorBuilder,
table_schema: StringVectorBuilder,
table_name: StringVectorBuilder,
column_name: StringVectorBuilder,
ordinal_position: UInt32VectorBuilder,
position_in_unique_constraint: UInt32VectorBuilder,
}
impl InformationSchemaKeyColumnUsageBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
constraint_catalog: StringVectorBuilder::with_capacity(INIT_CAPACITY),
constraint_schema: StringVectorBuilder::with_capacity(INIT_CAPACITY),
constraint_name: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_catalog: StringVectorBuilder::with_capacity(INIT_CAPACITY),
real_table_catalog: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_schema: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_name: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_name: StringVectorBuilder::with_capacity(INIT_CAPACITY),
ordinal_position: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
position_in_unique_constraint: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `information_schema.KEY_COLUMN_USAGE` virtual table
async fn make_key_column_usage(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
let mut primary_constraints = vec![];
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
if !catalog_manager
.schema_exists(&catalog_name, &schema_name)
.await?
{
continue;
}
for table_name in catalog_manager
.table_names(&catalog_name, &schema_name)
.await?
{
if let Some(table) = catalog_manager
.table(&catalog_name, &schema_name, &table_name)
.await?
{
let keys = &table.table_info().meta.primary_key_indices;
let schema = table.schema();
for (idx, column) in schema.column_schemas().iter().enumerate() {
if column.is_time_index() {
self.add_key_column_usage(
&predicates,
&schema_name,
TIME_INDEX_CONSTRAINT_NAME,
&catalog_name,
&schema_name,
&table_name,
&column.name,
1, // always 1 for time index
);
}
if keys.contains(&idx) {
primary_constraints.push((
catalog_name.clone(),
schema_name.clone(),
table_name.clone(),
column.name.clone(),
));
}
// TODO(dimbtp): foreign key constraint not supported yet
}
} else {
unreachable!();
}
}
}
for (i, (catalog_name, schema_name, table_name, column_name)) in
primary_constraints.into_iter().enumerate()
{
self.add_key_column_usage(
&predicates,
&schema_name,
PRI_CONSTRAINT_NAME,
&catalog_name,
&schema_name,
&table_name,
&column_name,
i as u32 + 1,
);
}
self.finish()
}
// TODO(dimbtp): foreign key constraints would fill the last 4 fields with
// non-`None` values, but they are not supported yet.
#[allow(clippy::too_many_arguments)]
fn add_key_column_usage(
&mut self,
predicates: &Predicates,
constraint_schema: &str,
constraint_name: &str,
table_catalog: &str,
table_schema: &str,
table_name: &str,
column_name: &str,
ordinal_position: u32,
) {
let row = [
(CONSTRAINT_SCHEMA, &Value::from(constraint_schema)),
(CONSTRAINT_NAME, &Value::from(constraint_name)),
(REAL_TABLE_CATALOG, &Value::from(table_catalog)),
(TABLE_SCHEMA, &Value::from(table_schema)),
(TABLE_NAME, &Value::from(table_name)),
(COLUMN_NAME, &Value::from(column_name)),
(ORDINAL_POSITION, &Value::from(ordinal_position)),
];
if !predicates.eval(&row) {
return;
}
self.constraint_catalog.push(Some("def"));
self.constraint_schema.push(Some(constraint_schema));
self.constraint_name.push(Some(constraint_name));
self.table_catalog.push(Some("def"));
self.real_table_catalog.push(Some(table_catalog));
self.table_schema.push(Some(table_schema));
self.table_name.push(Some(table_name));
self.column_name.push(Some(column_name));
self.ordinal_position.push(Some(ordinal_position));
self.position_in_unique_constraint.push(None);
}
fn finish(&mut self) -> Result<RecordBatch> {
let rows_num = self.table_catalog.len();
let null_string_vector = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec![None as Option<&str>])),
rows_num,
));
let columns: Vec<VectorRef> = vec![
Arc::new(self.constraint_catalog.finish()),
Arc::new(self.constraint_schema.finish()),
Arc::new(self.constraint_name.finish()),
Arc::new(self.table_catalog.finish()),
Arc::new(self.real_table_catalog.finish()),
Arc::new(self.table_schema.finish()),
Arc::new(self.table_name.finish()),
Arc::new(self.column_name.finish()),
Arc::new(self.ordinal_position.finish()),
Arc::new(self.position_in_unique_constraint.finish()),
null_string_vector.clone(),
null_string_vector.clone(),
null_string_vector,
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaKeyColumnUsage {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_key_column_usage(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
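The builder above emits one `TIME INDEX` row for the time index column of each table and one `PRIMARY` row per primary-key column. A hedged sketch of the calls produced for a hypothetical table `monitor(host PRIMARY KEY, ts TIME INDEX)` in schema `public`; the names and the `builder`/`predicates` bindings are illustrative only:

// Hedged sketch, not part of the diff.
fn sketch_monitor_rows(
    builder: &mut InformationSchemaKeyColumnUsageBuilder,
    predicates: &Predicates,
) {
    // Time index constraint: the ordinal position is always 1.
    builder.add_key_column_usage(
        predicates, "public", TIME_INDEX_CONSTRAINT_NAME,
        "greptime", "public", "monitor", "ts", 1,
    );
    // Primary key constraint: the position counts the collected key columns.
    builder.add_key_column_usage(
        predicates, "public", PRI_CONSTRAINT_NAME,
        "greptime", "public", "monitor", "host", 1,
    );
}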

View File

@@ -0,0 +1,214 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod tables;
use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::schema::SchemaRef;
use datatypes::vectors::VectorRef;
use snafu::ResultExt;
use store_api::storage::{ScanRequest, TableId};
pub use tables::get_schema_columns;
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::information_schema::InformationTable;
/// A memory table with specified schema and columns.
pub(super) struct MemoryTable {
table_id: TableId,
table_name: &'static str,
schema: SchemaRef,
columns: Vec<VectorRef>,
}
impl MemoryTable {
/// Creates a memory table with table id, name, schema and columns.
pub(super) fn new(
table_id: TableId,
table_name: &'static str,
schema: SchemaRef,
columns: Vec<VectorRef>,
) -> Self {
Self {
table_id,
table_name,
schema,
columns,
}
}
fn builder(&self) -> MemoryTableBuilder {
MemoryTableBuilder::new(self.schema.clone(), self.columns.clone())
}
}
impl InformationTable for MemoryTable {
fn table_id(&self) -> TableId {
self.table_id
}
fn table_name(&self) -> &'static str {
self.table_name
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, _request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.memory_records()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
struct MemoryTableBuilder {
schema: SchemaRef,
columns: Vec<VectorRef>,
}
impl MemoryTableBuilder {
fn new(schema: SchemaRef, columns: Vec<VectorRef>) -> Self {
Self { schema, columns }
}
/// Construct the `information_schema.{table_name}` virtual table
async fn memory_records(&mut self) -> Result<RecordBatch> {
if self.columns.is_empty() {
RecordBatch::new_empty(self.schema.clone()).context(CreateRecordBatchSnafu)
} else {
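// `std::mem::take` moves the columns out of the builder, so a second
// call would produce an empty record batch.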
RecordBatch::new(self.schema.clone(), std::mem::take(&mut self.columns))
.context(CreateRecordBatchSnafu)
}
}
}
impl DfPartitionStream for MemoryTable {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.memory_records()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use common_recordbatch::RecordBatches;
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnSchema, Schema};
use datatypes::vectors::StringVector;
use super::*;
#[tokio::test]
async fn test_memory_table() {
let schema = Arc::new(Schema::new(vec![
ColumnSchema::new("a", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("b", ConcreteDataType::string_datatype(), false),
]));
let table = MemoryTable::new(
42,
"test",
schema.clone(),
vec![
Arc::new(StringVector::from(vec!["a1", "a2"])),
Arc::new(StringVector::from(vec!["b1", "b2"])),
],
);
assert_eq!(42, table.table_id());
assert_eq!("test", table.table_name());
assert_eq!(schema, InformationTable::schema(&table));
let stream = table.to_stream(ScanRequest::default()).unwrap();
let batches = RecordBatches::try_collect(stream).await.unwrap();
assert_eq!(
"\
+----+----+
| a | b |
+----+----+
| a1 | b1 |
| a2 | b2 |
+----+----+",
batches.pretty_print().unwrap()
);
}
#[tokio::test]
async fn test_empty_memory_table() {
let schema = Arc::new(Schema::new(vec![
ColumnSchema::new("a", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("b", ConcreteDataType::string_datatype(), false),
]));
let table = MemoryTable::new(42, "test", schema.clone(), vec![]);
assert_eq!(42, table.table_id());
assert_eq!("test", table.table_name());
assert_eq!(schema, InformationTable::schema(&table));
let stream = table.to_stream(ScanRequest::default()).unwrap();
let batches = RecordBatches::try_collect(stream).await.unwrap();
assert_eq!(
"\
+---+---+
| a | b |
+---+---+
+---+---+",
batches.pretty_print().unwrap()
);
}
}
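`MemoryTable` is typically paired with `get_schema_columns` from tables.rs (shown next) to materialize the static `information_schema` tables. A minimal sketch, assuming `ENGINES` is imported from `table_names` and using a placeholder table id rather than the real constant:

// Hedged sketch, not part of the diff: building the ENGINES memory table.
fn sketch_engines_table() -> MemoryTable {
    let (schema, columns) = get_schema_columns(ENGINES);
    MemoryTable::new(99, ENGINES, schema, columns)
}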

View File

@@ -0,0 +1,463 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use common_catalog::consts::{METRIC_ENGINE, MITO_ENGINE};
use datatypes::prelude::{ConcreteDataType, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{Int64Vector, StringVector};
use crate::information_schema::table_names::*;
const NO_VALUE: &str = "NO";
/// Finds the schema and columns by the table_name; only valid for memory tables.
/// Safety: the caller MUST ensure the table schema exists, otherwise it panics.
pub fn get_schema_columns(table_name: &str) -> (SchemaRef, Vec<VectorRef>) {
let (column_schemas, columns): (_, Vec<VectorRef>) = match table_name {
COLUMN_PRIVILEGES => (
string_columns(&[
"GRANTEE",
"TABLE_CATALOG",
"TABLE_SCHEMA",
"TABLE_NAME",
"COLUMN_NAME",
"PRIVILEGE_TYPE",
"IS_GRANTABLE",
]),
vec![],
),
COLUMN_STATISTICS => (
string_columns(&[
"SCHEMA_NAME",
"TABLE_NAME",
"COLUMN_NAME",
// TODO(dennis): It must be a JSON type, but we don't support it yet
"HISTOGRAM",
]),
vec![],
),
ENGINES => (
string_columns(&[
"ENGINE",
"SUPPORT",
"COMMENT",
"TRANSACTIONS",
"XA",
"SAVEPOINTS",
]),
vec![
Arc::new(StringVector::from(vec![MITO_ENGINE, METRIC_ENGINE])),
Arc::new(StringVector::from(vec!["DEFAULT", "YES"])),
Arc::new(StringVector::from(vec![
"Storage engine for time-series data",
"Storage engine for observability scenarios, which is adept at handling a large number of small tables, making it particularly suitable for cloud-native monitoring",
])),
Arc::new(StringVector::from(vec![NO_VALUE, NO_VALUE])),
Arc::new(StringVector::from(vec![NO_VALUE, NO_VALUE])),
Arc::new(StringVector::from(vec![NO_VALUE, NO_VALUE])),
],
),
BUILD_INFO => {
let build_info = common_version::build_info();
(
string_columns(&[
"GIT_BRANCH",
"GIT_COMMIT",
"GIT_COMMIT_SHORT",
"GIT_DIRTY",
"PKG_VERSION",
]),
vec![
Arc::new(StringVector::from(vec![build_info.branch.to_string()])),
Arc::new(StringVector::from(vec![build_info.commit.to_string()])),
Arc::new(StringVector::from(vec![build_info
.commit_short
.to_string()])),
Arc::new(StringVector::from(vec![build_info.dirty.to_string()])),
Arc::new(StringVector::from(vec![build_info.version.to_string()])),
],
)
}
CHARACTER_SETS => (
vec![
string_column("CHARACTER_SET_NAME"),
string_column("DEFAULT_COLLATE_NAME"),
string_column("DESCRIPTION"),
bigint_column("MAXLEN"),
],
vec![
Arc::new(StringVector::from(vec!["utf8"])),
Arc::new(StringVector::from(vec!["utf8_bin"])),
Arc::new(StringVector::from(vec!["UTF-8 Unicode"])),
Arc::new(Int64Vector::from_slice([4])),
],
),
COLLATIONS => (
vec![
string_column("COLLATION_NAME"),
string_column("CHARACTER_SET_NAME"),
bigint_column("ID"),
string_column("IS_DEFAULT"),
string_column("IS_COMPILED"),
bigint_column("SORTLEN"),
],
vec![
Arc::new(StringVector::from(vec!["utf8_bin"])),
Arc::new(StringVector::from(vec!["utf8"])),
Arc::new(Int64Vector::from_slice([1])),
Arc::new(StringVector::from(vec!["Yes"])),
Arc::new(StringVector::from(vec!["Yes"])),
Arc::new(Int64Vector::from_slice([1])),
],
),
COLLATION_CHARACTER_SET_APPLICABILITY => (
vec![
string_column("COLLATION_NAME"),
string_column("CHARACTER_SET_NAME"),
],
vec![
Arc::new(StringVector::from(vec!["utf8_bin"])),
Arc::new(StringVector::from(vec!["utf8"])),
],
),
CHECK_CONSTRAINTS => (
string_columns(&[
"CONSTRAINT_CATALOG",
"CONSTRAINT_SCHEMA",
"CONSTRAINT_NAME",
"CHECK_CLAUSE",
]),
// Check constraints are not supported yet
vec![],
),
EVENTS => (
vec![
string_column("EVENT_CATALOG"),
string_column("EVENT_SCHEMA"),
string_column("EVENT_NAME"),
string_column("DEFINER"),
string_column("TIME_ZONE"),
string_column("EVENT_BODY"),
string_column("EVENT_DEFINITION"),
string_column("EVENT_TYPE"),
datetime_column("EXECUTE_AT"),
bigint_column("INTERVAL_VALUE"),
string_column("INTERVAL_FIELD"),
string_column("SQL_MODE"),
datetime_column("STARTS"),
datetime_column("ENDS"),
string_column("STATUS"),
string_column("ON_COMPLETION"),
datetime_column("CREATED"),
datetime_column("LAST_ALTERED"),
datetime_column("LAST_EXECUTED"),
string_column("EVENT_COMMENT"),
bigint_column("ORIGINATOR"),
string_column("CHARACTER_SET_CLIENT"),
string_column("COLLATION_CONNECTION"),
string_column("DATABASE_COLLATION"),
],
vec![],
),
FILES => (
vec![
bigint_column("FILE_ID"),
string_column("FILE_NAME"),
string_column("FILE_TYPE"),
string_column("TABLESPACE_NAME"),
string_column("TABLE_CATALOG"),
string_column("TABLE_SCHEMA"),
string_column("TABLE_NAME"),
string_column("LOGFILE_GROUP_NAME"),
bigint_column("LOGFILE_GROUP_NUMBER"),
string_column("ENGINE"),
string_column("FULLTEXT_KEYS"),
bigint_column("DELETED_ROWS"),
bigint_column("UPDATE_COUNT"),
bigint_column("FREE_EXTENTS"),
bigint_column("TOTAL_EXTENTS"),
bigint_column("EXTENT_SIZE"),
bigint_column("INITIAL_SIZE"),
bigint_column("MAXIMUM_SIZE"),
bigint_column("AUTOEXTEND_SIZE"),
datetime_column("CREATION_TIME"),
datetime_column("LAST_UPDATE_TIME"),
datetime_column("LAST_ACCESS_TIME"),
datetime_column("RECOVER_TIME"),
bigint_column("TRANSACTION_COUNTER"),
string_column("VERSION"),
string_column("ROW_FORMAT"),
bigint_column("TABLE_ROWS"),
bigint_column("AVG_ROW_LENGTH"),
bigint_column("DATA_LENGTH"),
bigint_column("MAX_DATA_LENGTH"),
bigint_column("INDEX_LENGTH"),
bigint_column("DATA_FREE"),
datetime_column("CREATE_TIME"),
datetime_column("UPDATE_TIME"),
datetime_column("CHECK_TIME"),
string_column("CHECKSUM"),
string_column("STATUS"),
string_column("EXTRA"),
],
vec![],
),
OPTIMIZER_TRACE => (
vec![
string_column("QUERY"),
string_column("TRACE"),
bigint_column("MISSING_BYTES_BEYOND_MAX_MEM_SIZE"),
bigint_column("INSUFFICIENT_PRIVILEGES"),
],
vec![],
),
// MySQL(https://dev.mysql.com/doc/refman/8.2/en/information-schema-parameters-table.html)
// has the spec that is different from
// PostgreSQL(https://www.postgresql.org/docs/current/infoschema-parameters.html).
// Follow `MySQL` spec here.
PARAMETERS => (
vec![
string_column("SPECIFIC_CATALOG"),
string_column("SPECIFIC_SCHEMA"),
string_column("SPECIFIC_NAME"),
bigint_column("ORDINAL_POSITION"),
string_column("PARAMETER_MODE"),
string_column("PARAMETER_NAME"),
string_column("DATA_TYPE"),
bigint_column("CHARACTER_MAXIMUM_LENGTH"),
bigint_column("CHARACTER_OCTET_LENGTH"),
bigint_column("NUMERIC_PRECISION"),
bigint_column("NUMERIC_SCALE"),
bigint_column("DATETIME_PRECISION"),
string_column("CHARACTER_SET_NAME"),
string_column("COLLATION_NAME"),
string_column("DTD_IDENTIFIER"),
string_column("ROUTINE_TYPE"),
],
vec![],
),
PROFILING => (
vec![
bigint_column("QUERY_ID"),
bigint_column("SEQ"),
string_column("STATE"),
bigint_column("DURATION"),
bigint_column("CPU_USER"),
bigint_column("CPU_SYSTEM"),
bigint_column("CONTEXT_VOLUNTARY"),
bigint_column("CONTEXT_INVOLUNTARY"),
bigint_column("BLOCK_OPS_IN"),
bigint_column("BLOCK_OPS_OUT"),
bigint_column("MESSAGES_SENT"),
bigint_column("MESSAGES_RECEIVED"),
bigint_column("PAGE_FAULTS_MAJOR"),
bigint_column("PAGE_FAULTS_MINOR"),
bigint_column("SWAPS"),
string_column("SOURCE_FUNCTION"),
string_column("SOURCE_FILE"),
bigint_column("SOURCE_LINE"),
],
vec![],
),
// TODO: _Must_ reimplement this table when foreign key constraints are supported.
REFERENTIAL_CONSTRAINTS => (
vec![
string_column("CONSTRAINT_CATALOG"),
string_column("CONSTRAINT_SCHEMA"),
string_column("CONSTRAINT_NAME"),
string_column("UNIQUE_CONSTRAINT_CATALOG"),
string_column("UNIQUE_CONSTRAINT_SCHEMA"),
string_column("UNIQUE_CONSTRAINT_NAME"),
string_column("MATCH_OPTION"),
string_column("UPDATE_RULE"),
string_column("DELETE_RULE"),
string_column("TABLE_NAME"),
string_column("REFERENCED_TABLE_NAME"),
],
vec![],
),
ROUTINES => (
vec![
string_column("SPECIFIC_NAME"),
string_column("ROUTINE_CATALOG"),
string_column("ROUTINE_SCHEMA"),
string_column("ROUTINE_NAME"),
string_column("ROUTINE_TYPE"),
string_column("DATA_TYPE"),
bigint_column("CHARACTER_MAXIMUM_LENGTH"),
bigint_column("CHARACTER_OCTET_LENGTH"),
bigint_column("NUMERIC_PRECISION"),
bigint_column("NUMERIC_SCALE"),
bigint_column("DATETIME_PRECISION"),
string_column("CHARACTER_SET_NAME"),
string_column("COLLATION_NAME"),
string_column("DTD_IDENTIFIER"),
string_column("ROUTINE_BODY"),
string_column("ROUTINE_DEFINITION"),
string_column("EXTERNAL_NAME"),
string_column("EXTERNAL_LANGUAGE"),
string_column("PARAMETER_STYLE"),
string_column("IS_DETERMINISTIC"),
string_column("SQL_DATA_ACCESS"),
string_column("SQL_PATH"),
string_column("SECURITY_TYPE"),
datetime_column("CREATED"),
datetime_column("LAST_ALTERED"),
string_column("SQL_MODE"),
string_column("ROUTINE_COMMENT"),
string_column("DEFINER"),
string_column("CHARACTER_SET_CLIENT"),
string_column("COLLATION_CONNECTION"),
string_column("DATABASE_COLLATION"),
],
vec![],
),
SCHEMA_PRIVILEGES => (
vec![
string_column("GRANTEE"),
string_column("TABLE_CATALOG"),
string_column("TABLE_SCHEMA"),
string_column("PRIVILEGE_TYPE"),
string_column("IS_GRANTABLE"),
],
vec![],
),
TABLE_PRIVILEGES => (
vec![
string_column("GRANTEE"),
string_column("TABLE_CATALOG"),
string_column("TABLE_SCHEMA"),
string_column("TABLE_NAME"),
string_column("PRIVILEGE_TYPE"),
string_column("IS_GRANTABLE"),
],
vec![],
),
TRIGGERS => (
vec![
string_column("TRIGGER_CATALOG"),
string_column("TRIGGER_SCHEMA"),
string_column("TRIGGER_NAME"),
string_column("EVENT_MANIPULATION"),
string_column("EVENT_OBJECT_CATALOG"),
string_column("EVENT_OBJECT_SCHEMA"),
string_column("EVENT_OBJECT_TABLE"),
bigint_column("ACTION_ORDER"),
string_column("ACTION_CONDITION"),
string_column("ACTION_STATEMENT"),
string_column("ACTION_ORIENTATION"),
string_column("ACTION_TIMING"),
string_column("ACTION_REFERENCE_OLD_TABLE"),
string_column("ACTION_REFERENCE_NEW_TABLE"),
string_column("ACTION_REFERENCE_OLD_ROW"),
string_column("ACTION_REFERENCE_NEW_ROW"),
datetime_column("CREATED"),
string_column("SQL_MODE"),
string_column("DEFINER"),
string_column("CHARACTER_SET_CLIENT"),
string_column("COLLATION_CONNECTION"),
string_column("DATABASE_COLLATION"),
],
vec![],
),
// TODO: Consider storing internal metrics in the `global_status` and
// `session_status` tables.
GLOBAL_STATUS => (
vec![
string_column("VARIABLE_NAME"),
string_column("VARIABLE_VALUE"),
],
vec![],
),
SESSION_STATUS => (
vec![
string_column("VARIABLE_NAME"),
string_column("VARIABLE_VALUE"),
],
vec![],
),
_ => unreachable!("Unknown table in information_schema: {}", table_name),
};
(Arc::new(Schema::new(column_schemas)), columns)
}
fn string_columns(names: &[&'static str]) -> Vec<ColumnSchema> {
names.iter().map(|name| string_column(name)).collect()
}
fn string_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),
ConcreteDataType::string_datatype(),
false,
)
}
fn bigint_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),
ConcreteDataType::int64_datatype(),
false,
)
}
fn datetime_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),
ConcreteDataType::datetime_datatype(),
false,
)
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_string_columns() {
let columns = ["a", "b", "c"];
let column_schemas = string_columns(&columns);
assert_eq!(3, column_schemas.len());
for (i, name) in columns.iter().enumerate() {
let cs = column_schemas.get(i).unwrap();
assert_eq!(*name, cs.name);
assert_eq!(ConcreteDataType::string_datatype(), cs.data_type);
}
}
}
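Like `string_column`, the other two helpers lowercase the name they are given; a hedged, test-style sketch mirroring `test_string_columns` (not part of the diff):

#[test]
fn sketch_bigint_and_datetime_columns() {
    let c = bigint_column("MAXLEN");
    assert_eq!("maxlen", c.name);
    assert_eq!(ConcreteDataType::int64_datatype(), c.data_type);

    let c = datetime_column("CREATED");
    assert_eq!("created", c.name);
    assert_eq!(ConcreteDataType::datetime_datatype(), c.data_type);
}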

View File

@@ -0,0 +1,424 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use core::pin::pin;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_PARTITIONS_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_time::datetime::DateTime;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{
ConstantVector, DateTimeVector, DateTimeVectorBuilder, Int64Vector, Int64VectorBuilder,
MutableVector, StringVector, StringVectorBuilder, UInt64VectorBuilder,
};
use futures::{StreamExt, TryStreamExt};
use partition::manager::PartitionInfo;
use partition::partition::PartitionDef;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{RegionId, ScanRequest, TableId};
use table::metadata::{TableInfo, TableType};
use super::PARTITIONS;
use crate::error::{
CreateRecordBatchSnafu, FindPartitionsSnafu, InternalSnafu, Result,
UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::{InformationTable, Predicates};
use crate::kvbackend::KvBackendCatalogManager;
use crate::CatalogManager;
const TABLE_CATALOG: &str = "table_catalog";
const TABLE_SCHEMA: &str = "table_schema";
const TABLE_NAME: &str = "table_name";
const PARTITION_NAME: &str = "partition_name";
const PARTITION_EXPRESSION: &str = "partition_expression";
/// The region id
const GREPTIME_PARTITION_ID: &str = "greptime_partition_id";
const INIT_CAPACITY: usize = 42;
/// The `PARTITIONS` table provides information about partitioned tables.
/// See <https://dev.mysql.com/doc/refman/8.0/en/information-schema-partitions-table.html>
/// We provide an extra column `greptime_partition_id` for the GreptimeDB region id.
pub(super) struct InformationSchemaPartitions {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaPartitions {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
}
}
pub(crate) fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new(TABLE_CATALOG, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(PARTITION_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(
"subpartition_name",
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(
"partition_ordinal_position",
ConcreteDataType::int64_datatype(),
true,
),
ColumnSchema::new(
"subpartition_ordinal_position",
ConcreteDataType::int64_datatype(),
true,
),
ColumnSchema::new(
"partition_method",
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(
"subpartition_method",
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(
PARTITION_EXPRESSION,
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(
"subpartition_expression",
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(
"partition_description",
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new("table_rows", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new("avg_row_length", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new("data_length", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new("max_data_length", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new("index_length", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new("data_free", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new("create_time", ConcreteDataType::datetime_datatype(), true),
ColumnSchema::new("update_time", ConcreteDataType::datetime_datatype(), true),
ColumnSchema::new("check_time", ConcreteDataType::datetime_datatype(), true),
ColumnSchema::new("checksum", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(
"partition_comment",
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new("nodegroup", ConcreteDataType::string_datatype(), true),
ColumnSchema::new("tablespace_name", ConcreteDataType::string_datatype(), true),
ColumnSchema::new(
GREPTIME_PARTITION_ID,
ConcreteDataType::uint64_datatype(),
true,
),
]))
}
fn builder(&self) -> InformationSchemaPartitionsBuilder {
InformationSchemaPartitionsBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
)
}
}
impl InformationTable for InformationSchemaPartitions {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_PARTITIONS_TABLE_ID
}
fn table_name(&self) -> &'static str {
PARTITIONS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_partitions(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
struct InformationSchemaPartitionsBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
partition_names: StringVectorBuilder,
partition_ordinal_positions: Int64VectorBuilder,
partition_expressions: StringVectorBuilder,
create_times: DateTimeVectorBuilder,
partition_ids: UInt64VectorBuilder,
}
impl InformationSchemaPartitionsBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
catalog_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
schema_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
partition_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
partition_ordinal_positions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
partition_expressions: StringVectorBuilder::with_capacity(INIT_CAPACITY),
create_times: DateTimeVectorBuilder::with_capacity(INIT_CAPACITY),
partition_ids: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `information_schema.partitions` virtual table
async fn make_partitions(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let partition_manager = catalog_manager
.as_any()
.downcast_ref::<KvBackendCatalogManager>()
.map(|catalog_manager| catalog_manager.partition_manager());
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
let table_info_stream = catalog_manager
.tables(&catalog_name, &schema_name)
.await
.try_filter_map(|t| async move {
let table_info = t.table_info();
if table_info.table_type == TableType::Temporary {
Ok(None)
} else {
Ok(Some(table_info))
}
});
const BATCH_SIZE: usize = 128;
// Split table infos into chunks
let mut table_info_chunks = pin!(table_info_stream.ready_chunks(BATCH_SIZE));
while let Some(table_infos) = table_info_chunks.next().await {
let table_infos = table_infos.into_iter().collect::<Result<Vec<_>>>()?;
let table_ids: Vec<TableId> =
table_infos.iter().map(|info| info.ident.table_id).collect();
let mut table_partitions = if let Some(partition_manager) = &partition_manager {
partition_manager
.batch_find_table_partitions(&table_ids)
.await
.context(FindPartitionsSnafu)?
} else {
// The current node must be a standalone instance, which contains only one partition by default.
// TODO(dennis): change this when we support multiple regions for standalone mode.
table_ids
.into_iter()
.map(|table_id| {
(
table_id,
vec![PartitionInfo {
id: RegionId::new(table_id, 0),
partition: PartitionDef::new(vec![], vec![]),
}],
)
})
.collect()
};
for table_info in table_infos {
let partitions = table_partitions
.remove(&table_info.ident.table_id)
.unwrap_or(vec![]);
self.add_partitions(
&predicates,
&table_info,
&catalog_name,
&schema_name,
&table_info.name,
&partitions,
);
}
}
}
self.finish()
}
#[allow(clippy::too_many_arguments)]
fn add_partitions(
&mut self,
predicates: &Predicates,
table_info: &TableInfo,
catalog_name: &str,
schema_name: &str,
table_name: &str,
partitions: &[PartitionInfo],
) {
let row = [
(TABLE_CATALOG, &Value::from(catalog_name)),
(TABLE_SCHEMA, &Value::from(schema_name)),
(TABLE_NAME, &Value::from(table_name)),
];
if !predicates.eval(&row) {
return;
}
for (index, partition) in partitions.iter().enumerate() {
let partition_name = format!("p{index}");
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.partition_names.push(Some(&partition_name));
self.partition_ordinal_positions
.push(Some((index + 1) as i64));
let expressions = if partition.partition.partition_columns().is_empty() {
None
} else {
Some(partition.partition.to_string())
};
self.partition_expressions.push(expressions.as_deref());
self.create_times.push(Some(DateTime::from(
table_info.meta.created_on.timestamp_millis(),
)));
self.partition_ids.push(Some(partition.id.as_u64()));
}
}
fn finish(&mut self) -> Result<RecordBatch> {
let rows_num = self.catalog_names.len();
let null_string_vector = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec![None as Option<&str>])),
rows_num,
));
let null_i64_vector = Arc::new(ConstantVector::new(
Arc::new(Int64Vector::from(vec![None])),
rows_num,
));
let null_datetime_vector = Arc::new(ConstantVector::new(
Arc::new(DateTimeVector::from(vec![None])),
rows_num,
));
let partition_methods = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec![Some("RANGE")])),
rows_num,
));
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.partition_names.finish()),
null_string_vector.clone(),
Arc::new(self.partition_ordinal_positions.finish()),
null_i64_vector.clone(),
partition_methods,
null_string_vector.clone(),
Arc::new(self.partition_expressions.finish()),
null_string_vector.clone(),
null_string_vector.clone(),
// TODO(dennis): rows and index statistics info
null_i64_vector.clone(),
null_i64_vector.clone(),
null_i64_vector.clone(),
null_i64_vector.clone(),
null_i64_vector.clone(),
null_i64_vector.clone(),
Arc::new(self.create_times.finish()),
// TODO(dennis): supports update_time
null_datetime_vector.clone(),
null_datetime_vector,
null_i64_vector,
null_string_vector.clone(),
null_string_vector.clone(),
null_string_vector,
Arc::new(self.partition_ids.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaPartitions {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_partitions(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
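Partitions are named positionally as `p{index}`, and when no `KvBackendCatalogManager` is available (a standalone node) every table falls back to a single default partition whose region id reuses the table id. A minimal sketch of that fallback for one table, with an illustrative table id:

// Hedged sketch, not part of the diff: the standalone fallback for one table.
fn sketch_standalone_partitions(table_id: TableId) -> Vec<PartitionInfo> {
    vec![PartitionInfo {
        // Region 0 stands in for the single default partition "p0".
        id: RegionId::new(table_id, 0),
        partition: PartitionDef::new(vec![], vec![]),
    }]
}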

View File

@@ -0,0 +1,589 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use arrow::array::StringArray;
use arrow::compute::kernels::comparison;
use common_query::logical_plan::DfExpr;
use datafusion::common::ScalarValue;
use datafusion::logical_expr::expr::Like;
use datafusion::logical_expr::Operator;
use datatypes::value::Value;
use store_api::storage::ScanRequest;
type ColumnName = String;
/// A predicate to filter the `information_schema` table streams;
/// only these simple predicates are supported currently.
/// TODO(dennis): support more predicate types.
#[derive(Clone, PartialEq, Eq, Debug)]
enum Predicate {
Eq(ColumnName, Value),
Like(ColumnName, String, bool),
NotEq(ColumnName, Value),
InList(ColumnName, Vec<Value>),
And(Box<Predicate>, Box<Predicate>),
Or(Box<Predicate>, Box<Predicate>),
Not(Box<Predicate>),
}
impl Predicate {
/// Evaluates the predicate against the row, returning:
/// - `None` when the predicate can't be evaluated against the row,
/// - `Some(true)` when the predicate is satisfied,
/// - `Some(false)` when the predicate is not satisfied.
fn eval(&self, row: &[(&str, &Value)]) -> Option<bool> {
match self {
Predicate::Eq(c, v) => {
for (column, value) in row {
if c != column {
continue;
}
return Some(v == *value);
}
}
Predicate::Like(c, pattern, case_insensitive) => {
for (column, value) in row {
if c != column {
continue;
}
let Value::String(bs) = value else {
continue;
};
return like_utf8(bs.as_utf8(), pattern, case_insensitive);
}
}
Predicate::NotEq(c, v) => {
for (column, value) in row {
if c != column {
continue;
}
return Some(v != *value);
}
}
Predicate::InList(c, values) => {
for (column, value) in row {
if c != column {
continue;
}
return Some(values.iter().any(|v| v == *value));
}
}
Predicate::And(left, right) => {
let left = left.eval(row);
// short-circuit
if matches!(left, Some(false)) {
return Some(false);
}
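// If one side is undecidable, a decided `false` on the other side is
// still enough to make the conjunction `false`.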
return match (left, right.eval(row)) {
(Some(left), Some(right)) => Some(left && right),
(None, Some(false)) => Some(false),
_ => None,
};
}
Predicate::Or(left, right) => {
let left = left.eval(row);
// short-circuit
if matches!(left, Some(true)) {
return Some(true);
}
return match (left, right.eval(row)) {
(Some(left), Some(right)) => Some(left || right),
(None, Some(true)) => Some(true),
_ => None,
};
}
Predicate::Not(p) => {
return Some(!p.eval(row)?);
}
}
// The predicate can't be evaluated against this row
None
}
/// Tries to create a predicate from a datafusion [`Expr`]; returns `None` if it fails.
fn from_expr(expr: DfExpr) -> Option<Predicate> {
match expr {
// NOT expr
DfExpr::Not(expr) => Some(Predicate::Not(Box::new(Self::from_expr(*expr)?))),
// expr LIKE pattern
DfExpr::Like(Like {
negated,
expr,
pattern,
case_insensitive,
..
}) if is_column(&expr) && is_string_literal(&pattern) => {
// Safety: ensured by the match guard
let DfExpr::Column(c) = *expr else {
unreachable!();
};
let DfExpr::Literal(ScalarValue::Utf8(Some(pattern))) = *pattern else {
unreachable!();
};
let p = Predicate::Like(c.name, pattern, case_insensitive);
if negated {
Some(Predicate::Not(Box::new(p)))
} else {
Some(p)
}
}
// left OP right
DfExpr::BinaryExpr(bin) => match (*bin.left, bin.op, *bin.right) {
// left == right
(DfExpr::Literal(scalar), Operator::Eq, DfExpr::Column(c))
| (DfExpr::Column(c), Operator::Eq, DfExpr::Literal(scalar)) => {
let Ok(v) = Value::try_from(scalar) else {
return None;
};
Some(Predicate::Eq(c.name, v))
}
// left != right
(DfExpr::Literal(scalar), Operator::NotEq, DfExpr::Column(c))
| (DfExpr::Column(c), Operator::NotEq, DfExpr::Literal(scalar)) => {
let Ok(v) = Value::try_from(scalar) else {
return None;
};
Some(Predicate::NotEq(c.name, v))
}
// left AND right
(left, Operator::And, right) => {
let left = Self::from_expr(left)?;
let right = Self::from_expr(right)?;
Some(Predicate::And(Box::new(left), Box::new(right)))
}
// left OR right
(left, Operator::Or, right) => {
let left = Self::from_expr(left)?;
let right = Self::from_expr(right)?;
Some(Predicate::Or(Box::new(left), Box::new(right)))
}
_ => None,
},
// [NOT] IN (LIST)
DfExpr::InList(list) => {
match (*list.expr, list.list, list.negated) {
// column [NOT] IN (v1, v2, v3, ...)
(DfExpr::Column(c), list, negated) if is_all_scalars(&list) => {
let mut values = Vec::with_capacity(list.len());
for scalar in list {
// Safety: checked by `is_all_scalars`
let DfExpr::Literal(scalar) = scalar else {
unreachable!();
};
let Ok(value) = Value::try_from(scalar) else {
return None;
};
values.push(value);
}
let predicate = Predicate::InList(c.name, values);
if negated {
Some(Predicate::Not(Box::new(predicate)))
} else {
Some(predicate)
}
}
_ => None,
}
}
_ => None,
}
}
}
/// Performs SQL `left LIKE right`, returning `None` if the evaluation fails.
/// - `s`: the target string
/// - `pattern`: the pattern, e.g. '%abc'
/// - `case_insensitive`: whether to perform a case-insensitive LIKE or not.
fn like_utf8(s: &str, pattern: &str, case_insensitive: &bool) -> Option<bool> {
let array = StringArray::from(vec![s]);
let patterns = StringArray::new_scalar(pattern);
let Ok(booleans) = (if *case_insensitive {
comparison::ilike(&array, &patterns)
} else {
comparison::like(&array, &patterns)
}) else {
return None;
};
// Safety: at least one value in result
Some(booleans.value(0))
}
fn is_string_literal(expr: &DfExpr) -> bool {
matches!(expr, DfExpr::Literal(ScalarValue::Utf8(Some(_))))
}
fn is_column(expr: &DfExpr) -> bool {
matches!(expr, DfExpr::Column(_))
}
/// A list of predicates
pub struct Predicates {
predicates: Vec<Predicate>,
}
impl Predicates {
/// Tries its best to create predicates from the [`ScanRequest`].
pub fn from_scan_request(request: &Option<ScanRequest>) -> Predicates {
if let Some(request) = request {
let mut predicates = Vec::with_capacity(request.filters.len());
for filter in &request.filters {
if let Some(predicate) = Predicate::from_expr(filter.df_expr().clone()) {
predicates.push(predicate);
}
}
Self { predicates }
} else {
Self {
predicates: Vec::new(),
}
}
}
/// Evaluates the predicates against the row.
/// Returns `true` when all the predicates are satisfied or can't be evaluated.
pub fn eval(&self, row: &[(&str, &Value)]) -> bool {
// fast path
if self.predicates.is_empty() {
return true;
}
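// Predicates that cannot be evaluated against this row (`eval` returns `None`,
// e.g. a referenced column is missing) are skipped, so they never reject the row.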
self.predicates
.iter()
.filter_map(|p| p.eval(row))
.all(|b| b)
}
}
/// Returns true when the values are all [`DfExpr::Literal`].
fn is_all_scalars(list: &[DfExpr]) -> bool {
list.iter().all(|v| matches!(v, DfExpr::Literal(_)))
}
#[cfg(test)]
mod tests {
use datafusion::common::{Column, ScalarValue};
use datafusion::logical_expr::expr::InList;
use datafusion::logical_expr::BinaryExpr;
use super::*;
#[test]
fn test_predicate_eval() {
let a_col = "a".to_string();
let b_col = "b".to_string();
let a_value = Value::from("a_value");
let b_value = Value::from("b_value");
let wrong_value = Value::from("wrong_value");
let a_row = [(a_col.as_str(), &a_value)];
let b_row = [("b", &wrong_value)];
let wrong_row = [(a_col.as_str(), &wrong_value)];
// Predicate::Eq
let p = Predicate::Eq(a_col.clone(), a_value.clone());
assert!(p.eval(&a_row).unwrap());
assert!(p.eval(&b_row).is_none());
assert!(!p.eval(&wrong_row).unwrap());
// Predicate::NotEq
let p = Predicate::NotEq(a_col.clone(), a_value.clone());
assert!(!p.eval(&a_row).unwrap());
assert!(p.eval(&b_row).is_none());
assert!(p.eval(&wrong_row).unwrap());
// Predicate::InList
let p = Predicate::InList(a_col.clone(), vec![a_value.clone(), b_value.clone()]);
assert!(p.eval(&a_row).unwrap());
assert!(p.eval(&b_row).is_none());
assert!(!p.eval(&wrong_row).unwrap());
assert!(p.eval(&[(&a_col, &b_value)]).unwrap());
let p1 = Predicate::Eq(a_col.clone(), a_value.clone());
let p2 = Predicate::Eq(b_col.clone(), b_value.clone());
let row = [(a_col.as_str(), &a_value), (b_col.as_str(), &b_value)];
let wrong_row = [(a_col.as_str(), &a_value), (b_col.as_str(), &wrong_value)];
// Predicate::And
let p = Predicate::And(Box::new(p1.clone()), Box::new(p2.clone()));
assert!(p.eval(&row).unwrap());
assert!(!p.eval(&wrong_row).unwrap());
assert!(p.eval(&[]).is_none());
assert!(p.eval(&[("c", &a_value)]).is_none());
assert!(!p
.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &a_value)])
.unwrap());
assert!(!p
.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &b_value)])
.unwrap());
assert!(p
.eval(&[(a_col.as_ref(), &a_value), ("c", &a_value)])
.is_none());
assert!(!p
.eval(&[(a_col.as_ref(), &b_value), ("c", &a_value)])
.unwrap());
// Predicate::Or
let p = Predicate::Or(Box::new(p1), Box::new(p2));
assert!(p.eval(&row).unwrap());
assert!(p.eval(&wrong_row).unwrap());
assert!(p.eval(&[]).is_none());
assert!(p.eval(&[("c", &a_value)]).is_none());
assert!(!p
.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &a_value)])
.unwrap());
assert!(p
.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &b_value)])
.unwrap());
assert!(p
.eval(&[(a_col.as_ref(), &a_value), ("c", &a_value)])
.unwrap());
assert!(p
.eval(&[(a_col.as_ref(), &b_value), ("c", &a_value)])
.is_none());
}
#[test]
fn test_predicate_like() {
// case insensitive
let expr = DfExpr::Like(Like {
negated: false,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
case_insensitive: true,
escape_char: None,
});
let p = Predicate::from_expr(expr).unwrap();
assert!(
matches!(&p, Predicate::Like(c, pattern, case_insensitive) if
c == "a"
&& pattern == "%abc"
&& *case_insensitive)
);
let match_row = [
("a", &Value::from("hello AbC")),
("b", &Value::from("b value")),
];
let unmatch_row = [("a", &Value::from("bca")), ("b", &Value::from("b value"))];
assert!(p.eval(&match_row).unwrap());
assert!(!p.eval(&unmatch_row).unwrap());
assert!(p.eval(&[]).is_none());
// case sensitive
let expr = DfExpr::Like(Like {
negated: false,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
case_insensitive: false,
escape_char: None,
});
let p = Predicate::from_expr(expr).unwrap();
assert!(
matches!(&p, Predicate::Like(c, pattern, case_insensitive) if
c == "a"
&& pattern == "%abc"
&& !*case_insensitive)
);
assert!(!p.eval(&match_row).unwrap());
assert!(!p.eval(&unmatch_row).unwrap());
assert!(p.eval(&[]).is_none());
// not like
let expr = DfExpr::Like(Like {
negated: true,
expr: Box::new(column("a")),
pattern: Box::new(string_literal("%abc")),
case_insensitive: true,
escape_char: None,
});
let p = Predicate::from_expr(expr).unwrap();
assert!(!p.eval(&match_row).unwrap());
assert!(p.eval(&unmatch_row).unwrap());
assert!(p.eval(&[]).is_none());
}
fn column(name: &str) -> DfExpr {
DfExpr::Column(Column {
relation: None,
name: name.to_string(),
})
}
fn string_literal(v: &str) -> DfExpr {
DfExpr::Literal(ScalarValue::Utf8(Some(v.to_string())))
}
fn match_string_value(v: &Value, expected: &str) -> bool {
matches!(v, Value::String(bs) if bs.as_utf8() == expected)
}
fn match_string_values(vs: &[Value], expected: &[&str]) -> bool {
assert_eq!(vs.len(), expected.len());
let mut result = true;
for (i, v) in vs.iter().enumerate() {
result = result && match_string_value(v, expected[i]);
}
result
}
fn mock_exprs() -> (DfExpr, DfExpr) {
let expr1 = DfExpr::BinaryExpr(BinaryExpr {
left: Box::new(column("a")),
op: Operator::Eq,
right: Box::new(string_literal("a_value")),
});
let expr2 = DfExpr::BinaryExpr(BinaryExpr {
left: Box::new(column("b")),
op: Operator::NotEq,
right: Box::new(string_literal("b_value")),
});
(expr1, expr2)
}
#[test]
fn test_predicate_from_expr() {
let (expr1, expr2) = mock_exprs();
let p1 = Predicate::from_expr(expr1.clone()).unwrap();
assert!(matches!(&p1, Predicate::Eq(column, v) if column == "a"
&& match_string_value(v, "a_value")));
let p2 = Predicate::from_expr(expr2.clone()).unwrap();
assert!(matches!(&p2, Predicate::NotEq(column, v) if column == "b"
&& match_string_value(v, "b_value")));
let and_expr = DfExpr::BinaryExpr(BinaryExpr {
left: Box::new(expr1.clone()),
op: Operator::And,
right: Box::new(expr2.clone()),
});
let or_expr = DfExpr::BinaryExpr(BinaryExpr {
left: Box::new(expr1.clone()),
op: Operator::Or,
right: Box::new(expr2.clone()),
});
let not_expr = DfExpr::Not(Box::new(expr1.clone()));
let and_p = Predicate::from_expr(and_expr).unwrap();
assert!(matches!(and_p, Predicate::And(left, right) if *left == p1 && *right == p2));
let or_p = Predicate::from_expr(or_expr).unwrap();
assert!(matches!(or_p, Predicate::Or(left, right) if *left == p1 && *right == p2));
let not_p = Predicate::from_expr(not_expr).unwrap();
assert!(matches!(not_p, Predicate::Not(p) if *p == p1));
let inlist_expr = DfExpr::InList(InList {
expr: Box::new(column("a")),
list: vec![string_literal("a1"), string_literal("a2")],
negated: false,
});
let inlist_p = Predicate::from_expr(inlist_expr).unwrap();
assert!(matches!(&inlist_p, Predicate::InList(c, values) if c == "a"
&& match_string_values(values, &["a1", "a2"])));
let inlist_expr = DfExpr::InList(InList {
expr: Box::new(column("a")),
list: vec![string_literal("a1"), string_literal("a2")],
negated: true,
});
let inlist_p = Predicate::from_expr(inlist_expr).unwrap();
assert!(matches!(inlist_p, Predicate::Not(p) if
matches!(&*p,
Predicate::InList(c, values) if c == "a"
&& match_string_values(values, &["a1", "a2"]))));
}
#[test]
fn test_predicates_from_scan_request() {
let predicates = Predicates::from_scan_request(&None);
assert!(predicates.predicates.is_empty());
let (expr1, expr2) = mock_exprs();
let request = ScanRequest {
filters: vec![expr1.into(), expr2.into()],
..Default::default()
};
let predicates = Predicates::from_scan_request(&Some(request));
assert_eq!(2, predicates.predicates.len());
assert!(
matches!(&predicates.predicates[0], Predicate::Eq(column, v) if column == "a"
&& match_string_value(v, "a_value"))
);
assert!(
matches!(&predicates.predicates[1], Predicate::NotEq(column, v) if column == "b"
&& match_string_value(v, "b_value"))
);
}
#[test]
fn test_predicates_eval_row() {
let wrong_row = [
("a", &Value::from("a_value")),
("b", &Value::from("b_value")),
("c", &Value::from("c_value")),
];
let row = [
("a", &Value::from("a_value")),
("b", &Value::from("not_b_value")),
("c", &Value::from("c_value")),
];
let c_row = [("c", &Value::from("c_value"))];
// test empty predicates, always returns true
let predicates = Predicates::from_scan_request(&None);
assert!(predicates.eval(&row));
assert!(predicates.eval(&wrong_row));
assert!(predicates.eval(&c_row));
let (expr1, expr2) = mock_exprs();
let request = ScanRequest {
filters: vec![expr1.into(), expr2.into()],
..Default::default()
};
let predicates = Predicates::from_scan_request(&Some(request));
assert!(predicates.eval(&row));
assert!(!predicates.eval(&wrong_row));
assert!(predicates.eval(&c_row));
}
}
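
A minimal sketch of how the information_schema table builders below consume these predicates. The function name, filter, column name, and values here are illustrative, and the same imports as the tests above are assumed:

fn predicate_pushdown_sketch() {
    use datafusion::common::{Column, ScalarValue};
    use datafusion::logical_expr::BinaryExpr;

    // filter: a = 'a_value'
    let filter = DfExpr::BinaryExpr(BinaryExpr {
        left: Box::new(DfExpr::Column(Column {
            relation: None,
            name: "a".to_string(),
        })),
        op: Operator::Eq,
        right: Box::new(DfExpr::Literal(ScalarValue::Utf8(Some(
            "a_value".to_string(),
        )))),
    });
    let request = ScanRequest {
        filters: vec![filter.into()],
        ..Default::default()
    };

    // Built once per scan from the pushed-down filters...
    let predicates = Predicates::from_scan_request(&Some(request));

    // ...then evaluated per candidate row: matching rows are kept,
    assert!(predicates.eval(&[("a", &Value::from("a_value"))]));
    // non-matching rows are dropped,
    assert!(!predicates.eval(&[("a", &Value::from("other"))]));
    // and rows the predicates cannot be evaluated against are kept conservatively.
    assert!(predicates.eval(&[("b", &Value::from("b_value"))]));
}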


@@ -0,0 +1,279 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use core::pin::pin;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_REGION_PEERS_TABLE_ID;
use common_error::ext::BoxedError;
use common_meta::rpc::router::RegionRoute;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{Int64VectorBuilder, StringVectorBuilder, UInt64VectorBuilder};
use futures::{StreamExt, TryStreamExt};
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use table::metadata::TableType;
use super::REGION_PEERS;
use crate::error::{
CreateRecordBatchSnafu, FindRegionRoutesSnafu, InternalSnafu, Result,
UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::{InformationTable, Predicates};
use crate::kvbackend::KvBackendCatalogManager;
use crate::CatalogManager;
const REGION_ID: &str = "region_id";
const PEER_ID: &str = "peer_id";
const PEER_ADDR: &str = "peer_addr";
const IS_LEADER: &str = "is_leader";
const STATUS: &str = "status";
const DOWN_SECONDS: &str = "down_seconds";
const INIT_CAPACITY: usize = 42;
/// The `REGION_PEERS` table provides information about region distribution and routes, including the following fields:
///
/// - `region_id`: the region id
/// - `peer_id`: the region storage datanode peer id
/// - `peer_addr`: the region storage datanode peer address
/// - `is_leader`: whether the peer is the leader
/// - `status`: the region status, `ALIVE` or `DOWNGRADED`.
/// - `down_seconds`: the duration of being offline, in seconds.
///
pub(super) struct InformationSchemaRegionPeers {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaRegionPeers {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
}
}
pub(crate) fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new(REGION_ID, ConcreteDataType::uint64_datatype(), false),
ColumnSchema::new(PEER_ID, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(PEER_ADDR, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(IS_LEADER, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(STATUS, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(DOWN_SECONDS, ConcreteDataType::int64_datatype(), true),
]))
}
fn builder(&self) -> InformationSchemaRegionPeersBuilder {
InformationSchemaRegionPeersBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
)
}
}
impl InformationTable for InformationSchemaRegionPeers {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_REGION_PEERS_TABLE_ID
}
fn table_name(&self) -> &'static str {
REGION_PEERS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
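// Build the table lazily: the async builder produces a single record batch, exposed as a one-element stream.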
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_region_peers(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
struct InformationSchemaRegionPeersBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
region_ids: UInt64VectorBuilder,
peer_ids: UInt64VectorBuilder,
peer_addrs: StringVectorBuilder,
is_leaders: StringVectorBuilder,
statuses: StringVectorBuilder,
down_seconds: Int64VectorBuilder,
}
impl InformationSchemaRegionPeersBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
region_ids: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
peer_ids: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
peer_addrs: StringVectorBuilder::with_capacity(INIT_CAPACITY),
is_leaders: StringVectorBuilder::with_capacity(INIT_CAPACITY),
statuses: StringVectorBuilder::with_capacity(INIT_CAPACITY),
down_seconds: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `information_schema.region_peers` virtual table
async fn make_region_peers(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let partition_manager = catalog_manager
.as_any()
.downcast_ref::<KvBackendCatalogManager>()
.map(|catalog_manager| catalog_manager.partition_manager());
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
let table_id_stream = catalog_manager
.tables(&catalog_name, &schema_name)
.await
.try_filter_map(|t| async move {
let table_info = t.table_info();
if table_info.table_type == TableType::Temporary {
Ok(None)
} else {
Ok(Some(table_info.ident.table_id))
}
});
const BATCH_SIZE: usize = 128;
// Split table ids into chunks
let mut table_id_chunks = pin!(table_id_stream.ready_chunks(BATCH_SIZE));
while let Some(table_ids) = table_id_chunks.next().await {
let table_ids = table_ids.into_iter().collect::<Result<Vec<_>>>()?;
let table_routes = if let Some(partition_manager) = &partition_manager {
partition_manager
.batch_find_region_routes(&table_ids)
.await
.context(FindRegionRoutesSnafu)?
} else {
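// No partition manager (the catalog manager is not a `KvBackendCatalogManager`): fall back to empty region routes for every table.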
table_ids.into_iter().map(|id| (id, vec![])).collect()
};
for routes in table_routes.values() {
self.add_region_peers(&predicates, routes);
}
}
}
self.finish()
}
fn add_region_peers(&mut self, predicates: &Predicates, routes: &[RegionRoute]) {
for route in routes {
let region_id = route.region.id.as_u64();
let peer_id = route.leader_peer.clone().map(|p| p.id);
let peer_addr = route.leader_peer.clone().map(|p| p.addr);
let status = if let Some(status) = route.leader_status {
Some(status.as_ref().to_string())
} else {
// Alive by default
Some("ALIVE".to_string())
};
let row = [(REGION_ID, &Value::from(region_id))];
if !predicates.eval(&row) {
continue;
}
// TODO(dennis): add followers.
self.region_ids.push(Some(region_id));
self.peer_ids.push(peer_id);
self.peer_addrs.push(peer_addr.as_deref());
self.is_leaders.push(Some("Yes"));
self.statuses.push(status.as_deref());
self.down_seconds
.push(route.leader_down_millis().map(|m| m / 1000));
}
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.region_ids.finish()),
Arc::new(self.peer_ids.finish()),
Arc::new(self.peer_addrs.finish()),
Arc::new(self.is_leaders.finish()),
Arc::new(self.statuses.finish()),
Arc::new(self.down_seconds.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaRegionPeers {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_region_peers(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}


@@ -0,0 +1,250 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_RUNTIME_METRICS_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_time::util::current_time_millis;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, MutableVector};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{
ConstantVector, Float64VectorBuilder, StringVector, StringVectorBuilder,
TimestampMillisecondVector, VectorRef,
};
use itertools::Itertools;
use snafu::ResultExt;
use store_api::storage::{ScanRequest, TableId};
use super::{InformationTable, RUNTIME_METRICS};
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
pub(super) struct InformationSchemaMetrics {
schema: SchemaRef,
}
const METRIC_NAME: &str = "metric_name";
const METRIC_VALUE: &str = "value";
const METRIC_LABELS: &str = "labels";
const NODE: &str = "node";
const NODE_TYPE: &str = "node_type";
const TIMESTAMP: &str = "timestamp";
/// The `information_schema.runtime_metrics` virtual table.
/// It exposes the GreptimeDB runtime metrics to users via SQL.
impl InformationSchemaMetrics {
pub(super) fn new() -> Self {
Self {
schema: Self::schema(),
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new(METRIC_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(METRIC_VALUE, ConcreteDataType::float64_datatype(), false),
ColumnSchema::new(METRIC_LABELS, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(NODE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(NODE_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(
TIMESTAMP,
ConcreteDataType::timestamp_millisecond_datatype(),
false,
),
]))
}
fn builder(&self) -> InformationSchemaMetricsBuilder {
InformationSchemaMetricsBuilder::new(self.schema.clone())
}
}
impl InformationTable for InformationSchemaMetrics {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_RUNTIME_METRICS_TABLE_ID
}
fn table_name(&self) -> &'static str {
RUNTIME_METRICS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_metrics(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
struct InformationSchemaMetricsBuilder {
schema: SchemaRef,
metric_names: StringVectorBuilder,
metric_values: Float64VectorBuilder,
metric_labels: StringVectorBuilder,
}
impl InformationSchemaMetricsBuilder {
fn new(schema: SchemaRef) -> Self {
Self {
schema,
metric_names: StringVectorBuilder::with_capacity(42),
metric_values: Float64VectorBuilder::with_capacity(42),
metric_labels: StringVectorBuilder::with_capacity(42),
}
}
fn add_metric(&mut self, metric_name: &str, labels: String, metric_value: f64) {
self.metric_names.push(Some(metric_name));
self.metric_values.push(Some(metric_value));
self.metric_labels.push(Some(&labels));
}
async fn make_metrics(&mut self, _request: Option<ScanRequest>) -> Result<RecordBatch> {
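// The scan request is currently ignored: this table does not evaluate pushed-down predicates.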
let metric_families = prometheus::gather();
let write_request =
common_telemetry::metric::convert_metric_to_write_request(metric_families, None, 0);
for ts in write_request.timeseries {
// Safety: always has the `__name__` label
let metric_name = ts
.labels
.iter()
.find_map(|label| {
if label.name == "__name__" {
Some(label.value.clone())
} else {
None
}
})
.unwrap();
self.add_metric(
&metric_name,
ts.labels
.into_iter()
.filter_map(|label| {
if label.name == "__name__" {
None
} else {
Some(format!("{}={}", label.name, label.value))
}
})
.join(", "),
// Safety: always has a sample
ts.samples[0].value,
);
}
self.finish()
}
fn finish(&mut self) -> Result<RecordBatch> {
let rows_num = self.metric_names.len();
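// `node` and `node_type` are filled with the constant "unknown" and `timestamp` with the current time, repeated for every row.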
let unknowns = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec!["unknown"])),
rows_num,
));
let timestamps = Arc::new(ConstantVector::new(
Arc::new(TimestampMillisecondVector::from_slice([
current_time_millis(),
])),
rows_num,
));
let columns: Vec<VectorRef> = vec![
Arc::new(self.metric_names.finish()),
Arc::new(self.metric_values.finish()),
Arc::new(self.metric_labels.finish()),
// TODO(dennis): support node and node_type in cluster mode
unknowns.clone(),
unknowns,
timestamps,
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaMetrics {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_metrics(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
#[cfg(test)]
mod tests {
use common_recordbatch::RecordBatches;
use super::*;
#[tokio::test]
async fn test_make_metrics() {
let metrics = InformationSchemaMetrics::new();
let stream = metrics.to_stream(ScanRequest::default()).unwrap();
let batches = RecordBatches::try_collect(stream).await.unwrap();
let result_literal = batches.pretty_print().unwrap();
assert!(result_literal.contains(METRIC_NAME));
assert!(result_literal.contains(METRIC_VALUE));
assert!(result_literal.contains(METRIC_LABELS));
assert!(result_literal.contains(NODE));
assert!(result_literal.contains(NODE_TYPE));
assert!(result_literal.contains(TIMESTAMP));
}
}


@@ -0,0 +1,222 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_SCHEMATA_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::StringVectorBuilder;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use super::SCHEMATA;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::{InformationTable, Predicates};
use crate::CatalogManager;
pub const CATALOG_NAME: &str = "catalog_name";
pub const SCHEMA_NAME: &str = "schema_name";
const DEFAULT_CHARACTER_SET_NAME: &str = "default_character_set_name";
const DEFAULT_COLLATION_NAME: &str = "default_collation_name";
const INIT_CAPACITY: usize = 42;
/// The `information_schema.schemata` table implementation.
pub(super) struct InformationSchemaSchemata {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaSchemata {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
}
}
pub(crate) fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new(CATALOG_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(SCHEMA_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(
DEFAULT_CHARACTER_SET_NAME,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
DEFAULT_COLLATION_NAME,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new("sql_path", ConcreteDataType::string_datatype(), true),
]))
}
fn builder(&self) -> InformationSchemaSchemataBuilder {
InformationSchemaSchemataBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
)
}
}
impl InformationTable for InformationSchemaSchemata {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_SCHEMATA_TABLE_ID
}
fn table_name(&self) -> &'static str {
SCHEMATA
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_schemata(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
/// Builds the `information_schema.schemata` table row by row
///
/// Columns are based on <https://docs.pingcap.com/tidb/stable/information-schema-schemata>
struct InformationSchemaSchemataBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
charset_names: StringVectorBuilder,
collation_names: StringVectorBuilder,
sql_paths: StringVectorBuilder,
}
impl InformationSchemaSchemataBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
catalog_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
schema_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
charset_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
collation_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
sql_paths: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `information_schema.schemata` virtual table
async fn make_schemata(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
self.add_schema(&predicates, &catalog_name, &schema_name);
}
self.finish()
}
fn add_schema(&mut self, predicates: &Predicates, catalog_name: &str, schema_name: &str) {
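// The character set and collation are currently hard-coded to `utf8`/`utf8_bin`; they are part of the row so predicates on those columns can still be evaluated.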
let row = [
(CATALOG_NAME, &Value::from(catalog_name)),
(SCHEMA_NAME, &Value::from(schema_name)),
(DEFAULT_CHARACTER_SET_NAME, &Value::from("utf8")),
(DEFAULT_COLLATION_NAME, &Value::from("utf8_bin")),
];
if !predicates.eval(&row) {
return;
}
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.charset_names.push(Some("utf8"));
self.collation_names.push(Some("utf8_bin"));
self.sql_paths.push(None);
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.charset_names.finish()),
Arc::new(self.collation_names.finish()),
Arc::new(self.sql_paths.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaSchemata {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_schemata(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}


@@ -0,0 +1,286 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_TABLE_CONSTRAINTS_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, MutableVector};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{ConstantVector, StringVector, StringVectorBuilder, VectorRef};
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use super::{InformationTable, TABLE_CONSTRAINTS};
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::key_column_usage::{
PRI_CONSTRAINT_NAME, TIME_INDEX_CONSTRAINT_NAME,
};
use crate::information_schema::Predicates;
use crate::CatalogManager;
/// The `TABLE_CONSTRAINTS` table describes which tables have constraints.
pub(super) struct InformationSchemaTableConstraints {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
}
const CONSTRAINT_CATALOG: &str = "constraint_catalog";
const CONSTRAINT_SCHEMA: &str = "constraint_schema";
const CONSTRAINT_NAME: &str = "constraint_name";
const TABLE_SCHEMA: &str = "table_schema";
const TABLE_NAME: &str = "table_name";
const CONSTRAINT_TYPE: &str = "constraint_type";
const ENFORCED: &str = "enforced";
const INIT_CAPACITY: usize = 42;
const TIME_INDEX_CONSTRAINT_TYPE: &str = "TIME INDEX";
const PRI_KEY_CONSTRAINT_TYPE: &str = "PRIMARY KEY";
impl InformationSchemaTableConstraints {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new(
CONSTRAINT_CATALOG,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
CONSTRAINT_SCHEMA,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(CONSTRAINT_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(CONSTRAINT_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(ENFORCED, ConcreteDataType::string_datatype(), false),
]))
}
fn builder(&self) -> InformationSchemaTableConstraintsBuilder {
InformationSchemaTableConstraintsBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
)
}
}
impl InformationTable for InformationSchemaTableConstraints {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_TABLE_CONSTRAINTS_TABLE_ID
}
fn table_name(&self) -> &'static str {
TABLE_CONSTRAINTS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_table_constraints(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
struct InformationSchemaTableConstraintsBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
constraint_schemas: StringVectorBuilder,
constraint_names: StringVectorBuilder,
table_schemas: StringVectorBuilder,
table_names: StringVectorBuilder,
constraint_types: StringVectorBuilder,
}
impl InformationSchemaTableConstraintsBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
constraint_schemas: StringVectorBuilder::with_capacity(INIT_CAPACITY),
constraint_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_schemas: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
constraint_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `information_schema.table_constraints` virtual table
async fn make_table_constraints(
&mut self,
request: Option<ScanRequest>,
) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
let mut stream = catalog_manager.tables(&catalog_name, &schema_name).await;
while let Some(table) = stream.try_next().await? {
let keys = &table.table_info().meta.primary_key_indices;
let schema = table.schema();
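// A table with a time index yields a TIME INDEX constraint row; a table with primary key columns yields a PRIMARY KEY constraint row.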
if schema.timestamp_index().is_some() {
self.add_table_constraint(
&predicates,
&schema_name,
TIME_INDEX_CONSTRAINT_NAME,
&schema_name,
&table.table_info().name,
TIME_INDEX_CONSTRAINT_TYPE,
);
}
if !keys.is_empty() {
self.add_table_constraint(
&predicates,
&schema_name,
PRI_CONSTRAINT_NAME,
&schema_name,
&table.table_info().name,
PRI_KEY_CONSTRAINT_TYPE,
);
}
}
}
self.finish()
}
fn add_table_constraint(
&mut self,
predicates: &Predicates,
constraint_schema: &str,
constraint_name: &str,
table_schema: &str,
table_name: &str,
constraint_type: &str,
) {
let row = [
(CONSTRAINT_SCHEMA, &Value::from(constraint_schema)),
(CONSTRAINT_NAME, &Value::from(constraint_name)),
(TABLE_SCHEMA, &Value::from(table_schema)),
(TABLE_NAME, &Value::from(table_name)),
(CONSTRAINT_TYPE, &Value::from(constraint_type)),
];
if !predicates.eval(&row) {
return;
}
self.constraint_schemas.push(Some(constraint_schema));
self.constraint_names.push(Some(constraint_name));
self.table_schemas.push(Some(table_schema));
self.table_names.push(Some(table_name));
self.constraint_types.push(Some(constraint_type));
}
fn finish(&mut self) -> Result<RecordBatch> {
let rows_num = self.constraint_names.len();
let constraint_catalogs = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec!["def"])),
rows_num,
));
let enforceds = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec!["YES"])),
rows_num,
));
let columns: Vec<VectorRef> = vec![
constraint_catalogs,
Arc::new(self.constraint_schemas.finish()),
Arc::new(self.constraint_names.finish()),
Arc::new(self.table_schemas.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.constraint_types.finish()),
enforceds,
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaTableConstraints {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_table_constraints(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}


@@ -0,0 +1,44 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! All table names in `information_schema`.
pub const TABLES: &str = "tables";
pub const COLUMNS: &str = "columns";
pub const ENGINES: &str = "engines";
pub const COLUMN_PRIVILEGES: &str = "column_privileges";
pub const COLUMN_STATISTICS: &str = "column_statistics";
pub const BUILD_INFO: &str = "build_info";
pub const CHARACTER_SETS: &str = "character_sets";
pub const COLLATIONS: &str = "collations";
pub const COLLATION_CHARACTER_SET_APPLICABILITY: &str = "collation_character_set_applicability";
pub const CHECK_CONSTRAINTS: &str = "check_constraints";
pub const EVENTS: &str = "events";
pub const FILES: &str = "files";
pub const SCHEMATA: &str = "schemata";
pub const KEY_COLUMN_USAGE: &str = "key_column_usage";
pub const OPTIMIZER_TRACE: &str = "optimizer_trace";
pub const PARAMETERS: &str = "parameters";
pub const PROFILING: &str = "profiling";
pub const REFERENTIAL_CONSTRAINTS: &str = "referential_constraints";
pub const ROUTINES: &str = "routines";
pub const SCHEMA_PRIVILEGES: &str = "schema_privileges";
pub const TABLE_PRIVILEGES: &str = "table_privileges";
pub const TRIGGERS: &str = "triggers";
pub const GLOBAL_STATUS: &str = "global_status";
pub const SESSION_STATUS: &str = "session_status";
pub const RUNTIME_METRICS: &str = "runtime_metrics";
pub const PARTITIONS: &str = "partitions";
pub const REGION_PEERS: &str = "greptime_region_peers";
pub const TABLE_CONSTRAINTS: &str = "table_constraints";


@@ -15,10 +15,7 @@
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::{
INFORMATION_SCHEMA_COLUMNS_TABLE_ID, INFORMATION_SCHEMA_NAME,
INFORMATION_SCHEMA_TABLES_TABLE_ID,
};
use common_catalog::consts::INFORMATION_SCHEMA_TABLES_TABLE_ID;
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
@@ -28,18 +25,28 @@ use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder};
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::TableId;
use store_api::storage::{ScanRequest, TableId};
use table::metadata::TableType;
use super::{COLUMNS, TABLES};
use super::TABLES;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::InformationTable;
use crate::information_schema::{InformationTable, Predicates};
use crate::CatalogManager;
pub const TABLE_CATALOG: &str = "table_catalog";
pub const TABLE_SCHEMA: &str = "table_schema";
pub const TABLE_NAME: &str = "table_name";
pub const TABLE_TYPE: &str = "table_type";
const TABLE_ID: &str = "table_id";
const ENGINE: &str = "engine";
const INIT_CAPACITY: usize = 42;
pub(super) struct InformationSchemaTables {
schema: SchemaRef,
catalog_name: String,
@@ -57,12 +64,12 @@ impl InformationSchemaTables {
pub(crate) fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new("table_catalog", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_schema", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_name", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_type", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_id", ConcreteDataType::uint32_datatype(), true),
ColumnSchema::new("engine", ConcreteDataType::string_datatype(), true),
ColumnSchema::new(TABLE_CATALOG, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_ID, ConcreteDataType::uint32_datatype(), true),
ColumnSchema::new(ENGINE, ConcreteDataType::string_datatype(), true),
]))
}
@@ -88,14 +95,14 @@ impl InformationTable for InformationSchemaTables {
self.schema.clone()
}
fn to_stream(&self) -> Result<SendableRecordBatchStream> {
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.make_tables(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
@@ -135,80 +142,48 @@ impl InformationSchemaTablesBuilder {
schema,
catalog_name,
catalog_manager,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
table_types: StringVectorBuilder::with_capacity(42),
table_ids: UInt32VectorBuilder::with_capacity(42),
engines: StringVectorBuilder::with_capacity(42),
catalog_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
schema_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_ids: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
engines: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `information_schema.tables` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
async fn make_tables(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
if !catalog_manager
.schema_exists(&catalog_name, &schema_name)
.await?
{
continue;
}
let mut stream = catalog_manager.tables(&catalog_name, &schema_name).await;
for table_name in catalog_manager
.table_names(&catalog_name, &schema_name)
.await?
{
if let Some(table) = catalog_manager
.table(&catalog_name, &schema_name, &table_name)
.await?
{
let table_info = table.table_info();
self.add_table(
&catalog_name,
&schema_name,
&table_name,
table.table_type(),
Some(table_info.ident.table_id),
Some(&table_info.meta.engine),
);
} else {
// TODO: this specific branch is only a workaround for FrontendCatalogManager.
if schema_name == INFORMATION_SCHEMA_NAME {
if table_name == COLUMNS {
self.add_table(
&catalog_name,
&schema_name,
&table_name,
TableType::Temporary,
Some(INFORMATION_SCHEMA_COLUMNS_TABLE_ID),
None,
);
} else if table_name == TABLES {
self.add_table(
&catalog_name,
&schema_name,
&table_name,
TableType::Temporary,
Some(INFORMATION_SCHEMA_TABLES_TABLE_ID),
None,
);
}
}
};
while let Some(table) = stream.try_next().await? {
let table_info = table.table_info();
self.add_table(
&predicates,
&catalog_name,
&schema_name,
&table_info.name,
table.table_type(),
Some(table_info.ident.table_id),
Some(&table_info.meta.engine),
);
}
}
self.finish()
}
#[allow(clippy::too_many_arguments)]
fn add_table(
&mut self,
predicates: &Predicates,
catalog_name: &str,
schema_name: &str,
table_name: &str,
@@ -216,14 +191,27 @@ impl InformationSchemaTablesBuilder {
table_id: Option<u32>,
engine: Option<&str>,
) {
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.table_types.push(Some(match table_type {
let table_type = match table_type {
TableType::Base => "BASE TABLE",
TableType::View => "VIEW",
TableType::Temporary => "LOCAL TEMPORARY",
}));
};
let row = [
(TABLE_CATALOG, &Value::from(catalog_name)),
(TABLE_SCHEMA, &Value::from(schema_name)),
(TABLE_NAME, &Value::from(table_name)),
(TABLE_TYPE, &Value::from(table_type)),
];
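// Skip rows that do not satisfy the pushed-down predicates.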
if !predicates.eval(&row) {
return;
}
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.table_types.push(Some(table_type));
self.table_ids.push(table_id);
self.engines.push(engine);
}
@@ -253,7 +241,7 @@ impl DfPartitionStream for InformationSchemaTables {
schema,
futures::stream::once(async move {
builder
.make_tables()
.make_tables(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)

Some files were not shown because too many files have changed in this diff.