Compare commits

...

62 Commits

Author SHA1 Message Date
ZonaHe
9038e1b769 feat: update dashboard to v0.4.10 (#3663)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2024-04-08 16:35:41 +08:00
JeremyHi
12286f07ac feat: cluster information (#3631)
* chore: keep the same method order in KvBackend

* feat: make meta client can get all node info of cluster

* feat: cluster info data model

* feat: frontend and datanode info

* feat: list node info

* chore: remove the method: is_started

* fix: scan key prefix

* chore: impl From for NodeInfoKey

* chore: doc for trait and struct

* chore: reuse the error

* chore: refactor two collect cluster info handlers

* chore: remove inline

* chore: refactor two collect cluster info handlers
2024-04-08 07:48:36 +00:00
tison
e920f95902 refactor: drop Table trait (#3654)
* refactor: drop Table trait

Signed-off-by: tison <wander4096@gmail.com>

* finish rename

Signed-off-by: tison <wander4096@gmail.com>

* Apply suggestions from code review

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>

* Update time_range_filter_test.rs

* Update src/query/src/tests/time_range_filter_test.rs

* apply comments

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2024-04-08 07:28:55 +00:00
Yohan Wal
c4798d1913 refactor: move create database to procedure (#3626)
* refactor: move create database to procedure

* feat: enable database creation of rpc

* chore: update the commit hash of greptime-proto
2024-04-08 07:05:55 +00:00
Ruihang Xia
2ede968c2b chore: bump version to 0.7.2 (#3658)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-08 06:33:29 +00:00
Yingwen
89db8c18c8 feat: Add timers to more mito methods (#3659)
* feat: add timers for more mito methods

* refactor: combine methods to get type name
2024-04-08 05:53:34 +00:00
LFC
aa0af6135d chore: add manifest related metrics (#3634)
* chore: add two manifest related metrics

* Update src/mito2/src/manifest/manager.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/mito2/src/metrics.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: resolve PR comments

* update cargo lock

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-04-08 05:53:08 +00:00
dennis zhuang
87e0189e58 fix!: columns table in information_schema misses some columns (#3639)
* fix: columns table in information_schema misses some columns

* fix: test_information_schema_dot_columns

* fix: fuzz test

* feat: adds srs_id and refactor some columns with constant vector

* fix: test_information_schema_dot_columns

* chore: update comment

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* build(deps): bump h2 from 0.3.24 to 0.3.26 (#3642)

Bumps [h2](https://github.com/hyperium/h2) from 0.3.24 to 0.3.26.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.26/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.24...v0.3.26)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* build(deps): bump whoami from 1.4.1 to 1.5.1 (#3643)

Bumps [whoami](https://github.com/ardaku/whoami) from 1.4.1 to 1.5.1.
- [Changelog](https://github.com/ardaku/whoami/blob/v1/CHANGELOG.md)
- [Commits](https://github.com/ardaku/whoami/compare/v1.4.1...v1.5.1)

---
updated-dependencies:
- dependency-name: whoami
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* feat: adding victoriametrics remote write (#3641)

* feat: adding victoria metrics remote write

* test: add e2e tests for prom and vm remote writes

* fix: construct correct pk list with pre-existing pk (#3614)

* fix: construct correct pk list with pre-existing pk

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update UT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test(sqlness): release databases after tests (#3648)

* refactor: rename Greptime_Type to Greptime_type

---------

Signed-off-by: dependabot[bot] <support@github.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Ning Sun <sunng@protonmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-04-08 03:20:49 +00:00
tison
7e8e9aba9d chore: generate release notes with git-cliff (#3650)
* chore: generate release notes with git-cliff

Signed-off-by: tison <wander4096@gmail.com>

* chore: newlines

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-04-08 03:09:35 +00:00
tison
c93b76ae5f ci: bump license header checker action version (#3655) 2024-04-08 10:38:03 +08:00
Weny Xu
097a0371dc test(sqlness): release databases after tests (#3648) 2024-04-07 09:35:34 +00:00
Ruihang Xia
b9890ab870 fix: construct correct pk list with pre-existing pk (#3614)
* fix: construct correct pk list with pre-existing pk

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update UT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-04-07 08:11:52 +00:00
Ning Sun
b32e0bba9c feat: adding victoriametrics remote write (#3641)
* feat: adding victoria metrics remote write

* test: add e2e tests for prom and vm remote writes
2024-04-07 07:09:21 +00:00
dependabot[bot]
fe1a0109d8 build(deps): bump whoami from 1.4.1 to 1.5.1 (#3643)
Bumps [whoami](https://github.com/ardaku/whoami) from 1.4.1 to 1.5.1.
- [Changelog](https://github.com/ardaku/whoami/blob/v1/CHANGELOG.md)
- [Commits](https://github.com/ardaku/whoami/compare/v1.4.1...v1.5.1)

---
updated-dependencies:
- dependency-name: whoami
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-05 19:02:36 -07:00
dependabot[bot]
11995eb52e build(deps): bump h2 from 0.3.24 to 0.3.26 (#3642)
Bumps [h2](https://github.com/hyperium/h2) from 0.3.24 to 0.3.26.
- [Release notes](https://github.com/hyperium/h2/releases)
- [Changelog](https://github.com/hyperium/h2/blob/v0.3.26/CHANGELOG.md)
- [Commits](https://github.com/hyperium/h2/compare/v0.3.24...v0.3.26)

---
updated-dependencies:
- dependency-name: h2
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
2024-04-05 19:02:19 -07:00
dimbtp
86d377d028 fix: move object store read/write timer into inner (#3627)
* fix: move object store read/write timer into inner

* add Drop for PrometheusMetricWrapper

* call await on async read/write

* apply review comments

* git rid of option on timer
2024-04-03 12:24:34 +00:00
Lei, HUANG
ddeb73fbb7 fix: mistakenly removes compaction inputs on failure (#3635)
* fix: mistakenly removes compaction inputs on failure

* test: add test for compaction failure

---------

Co-authored-by: evenyag <realevenyag@gmail.com>
2024-04-03 11:54:20 +00:00
niebayes
d33435fa84 feat: introduce wal benchmarker (#3446)
* feat: introduce wal benchmarker

* chore: add log store metrics

* chore: add some comments to wal benchmarker

* fix: ci

* chore: add more metrics for kafka logstore

* chore: add more timers for kafka logstore

* chore: add more configs

* chore: move humantime to common dependencies

* refactor: refactor wal benchmarker

* fix: apply suggestions from code review

* doc: add a simple README for wal benchmarker

* fix: Cargo.toml

* fix: clippy

* chore: rename wal.rs to wal_bench.rs

* fix: compile
2024-04-03 03:16:05 +00:00
Weny Xu
a0f243c128 feat(procedure): enable auto split large value (#3628)
* chore: add comments

* chore: remove `pub`

* chore: rename to `merge_multiple_values`

* chore: fix typo

* feat(procedure): enable auto split large value

* chore: apply suggestions from CR

* chore: rename to `max_metadata_value_size`

* chore: remove the NoneAsEmptyString

* chore: set default max_metadata_value_size to 1500KiB
2024-04-02 12:13:59 +00:00
JeremyHi
a61fb98e4a refactor: alter logical tables (#3618)
* refactor: on prepare

* refactor: on create regions

* refactor: update metadata
2024-04-02 06:21:34 +00:00
Weny Xu
6c316d268f feat(procedure): auto split large value to multiple values (#3605)
* feat: implement MultipleValuesStream

* refactor: move KeySet to common-procedure

* refactor: move MultipleValuesStream to common-procedure

* refactor: refactor String to KeySet

* fix: fix dropping `collecting` unexpectedly

* fix: fix typo

* refactor: add the fast path of put

* refactor: remove `single_value_collector`

* refactor: use `extend` instead of `push`

* test: add more tests for `KvStateStore`

* test(etcd_store): add more tests for `KvStateStore`

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor: refactor with async_stream

* Update src/common/procedure/src/store/util.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-04-01 12:04:29 +00:00
Lei, HUANG
5e24448b96 feat: reject invalid timestamp ranges in copy statement (#3623)
* chore: reject invalid timestamp ranges in copy statement

* tests: add unit tests
2024-04-01 08:25:31 +00:00
JohnsonLee
d6b2d1dfb8 feat: Support outputting various date styles for postgresql (#3602)
* test: add integration_test for datetime style

* feat: support various datestyle for postgres

* doc: rewrite the comment about merge_datestyle_value

* test: add more test to illustrate valid datestyle input
2024-04-01 07:31:36 +00:00
Yingwen
bfd32571d9 fix: run purge jobs in another scheduler (#3621) 2024-04-01 03:18:14 +00:00
JeremyHi
0eb023bb23 feat: group requests by peer (#3619) 2024-04-01 03:10:22 +00:00
dennis zhuang
4a5bb698a9 feat: impl show index and show columns (#3577)
* feat: impl show index and show columns

* fix: show index from database

* fix: canonicalize table name

* refactor: show parsers
2024-03-29 18:34:52 +00:00
Eugene Tolbakov
18d676802a feat(function): add timestamp epoch integer support for to_timezone (#3620)
* feat(function): add timestamp epoch integer support for to_timezone

* chore: fmt
2024-03-29 18:33:24 +00:00
JeremyHi
93da45f678 feat: let alter table procedure can only alter physical table (#3613)
* feat: let alter table procedure can only alter physical table

* chore: rm unnecessary todo
2024-03-29 09:50:33 +00:00
Ruihang Xia
7a19f66be0 ci: ignore type in sqlness sql and result files (#3616)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-29 09:39:37 +00:00
Ning Sun
500f9f10fc feat: allow cross-schema query in promql (#3545)
* feat: add __schema__ tag for promql parser

* feat: disable matcher op other than equals

* test: add more test to ensure context getting reset

* test: add integration test

* test: refactor tests

* refactor: remove duplicated test code

* refactor: update according to review comments

* test: add sqlness test for cross schema scenario

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-29 07:41:01 +00:00
JeremyHi
f49cd0ca18 refactor: cache invalidator (#3611)
* chore: remove some alias

* refactor: cache invalidator
2024-03-29 07:33:51 +00:00
Yingwen
ffbb132f27 feat: Implement an unordered scanner for append mode (#3598)
* feat: ScanInput

* refactor: seq scan use scan input

* chore: implement unordered scan

* feat: use unordered scan for append table

* fix: unordered scan panic

* docs: update mermaid

* chore: address comment

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2024-03-29 07:25:35 +00:00
Eugene Tolbakov
14267c2aed feat(tql): add initial support for start,stop,step as sql functions (#3507)
* feat(tql): add initial support for start,stop,step as sql functions

* fix(tql): remove unwraps, adjust fmt

* fix(tql): address taplo issue

* feat(tql): update parse_tql_query logic

* fix(tql): change query parsing logic to use parser instead of delimiter

* fix(tql): add timestamp function support, add sqlness tests

* fix(tql): add lookback optional param for tql eval

* fix(tql): adjust tests for now() function

* fix(tql): introduce the tqlerror to differentiate failures on parsing, evaluation and simplification stages

* fix(tql): add tests for explain/analyze

* feat(tql): add lookback support for explain/analyze, update tests

* feat(tql): add more sqlness tests

* chore(tql): extract common logic for eval, analyze and explain into a single function

* feat(tql): address CR points

* feat(tql): use snafu for tql errors, add more docs

* feat(tql): address CR points
2024-03-29 06:37:25 +00:00
Ruihang Xia
77cc7216af feat: support 2+2 and /status/buildinfo (#3604)
* feat: implement buildinfo endpoint

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor prom result struct

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format toml file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/servers/src/http/prometheus_resp.rs

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-29 06:31:39 +00:00
Zhenchi
63681f0e4d refactor(table): remove unused table requests (#3603)
* refactor(table): remove unused requests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: clippy

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: compile

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-03-28 11:31:14 +00:00
Ruihang Xia
06a90527a3 fix: adjust status code to http error code map (#3601)
* fix: adjust status code to http error code map

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-28 08:54:51 +00:00
niebayes
d5ba2fcf9d test: add more integration test for kafka wal (#3190)
* test: add integration tests for kafka wal

* chore: rebase main

* chore: unify naming convention for wal config

* chore: add register loaders switch

* chore: alter tables by adding a new column

* chore: move rand to dev-dependencies

* chore: update Cargo.lock
2024-03-28 06:55:18 +00:00
dennis zhuang
e3b37ee2c9 fix: canonicalize catalog and schema names (#3600) 2024-03-28 06:40:15 +00:00
dennis zhuang
5d7ce08358 feat: adds metric engine to information_schema engines table (#3599)
* feat: adds metric engine to information_schema engines table

* fix: support value for metric engine
2024-03-28 06:37:34 +00:00
JeremyHi
92a8e863de chore: do not reply for broadcast msg (#3595) 2024-03-27 11:39:23 +00:00
JeremyHi
9428cb8e7c feat: remove support for logical tables in the create table procedure (#3592)
* feat: Remove support for logical tables in the create table procedure

* chore: remove the redundant table ids alloc

* chore: minor fix
2024-03-27 10:03:42 +00:00
Weny Xu
5addb7d75a test: add tests for drop databases (#3594)
* refactor: minimize visibility of drop database steps

* feat: implement as_any

* refactor: move common functions to test_util

* test: add tests for drop databases

* fix: fix deleting physical table route unexpectedly
2024-03-27 09:18:37 +00:00
Weny Xu
623c930736 refactor: refactor drop table executor (#3589)
* refactor: refactor drop table executor

* chore: apply suggestions from CR
2024-03-27 06:29:54 +00:00
JeremyHi
5fa01e7a96 feat: create regions persist true (#3590)
* feat: change open-region-step's status persist as true

* feat: avoid cloning

* fix: fix unit test
2024-03-27 06:26:58 +00:00
Yingwen
922b1a9b66 feat: Implement append mode for a region (#3558)
* feat: add dedup option to merge reader

* test: test merger

* feat: append mode option

* feat: implement append mode for regions

* feat: only allow put under append mode

* feat: always create builder

* test: test append mode

* style: fix clippy

* test: trigger compaction

* chore: fix compiler errors
2024-03-27 03:21:22 +00:00
shuiyisong
653697f1d5 chore: add back core dependency (#3588) 2024-03-26 19:53:22 -07:00
JohnsonLee
83643eb195 feat: Support printing postgresql's bytea data type in its "hex" and "escape" format (#3567)
* feat: support set variable statement of session

* feat: support printing postgresql's bytea data type in its "hex" and "escape" format in ugly way

* refactor: add 'SessionConfigValue' type and unify the name

* doc: add license header

* refactor: confine coupling with 'sql::ast::Value' in SessionConfigValue

* refactor: move all bytea wrapper into bytea.rs

* fix: remove unused import in context.rs and postgres.rs

* refactor: rename 'set_configuration_parameter' to 'set_session_config'

rename 'set_configuration_parameter' in statement_.rs to 'set_session_config'

* refactor: use mod to organize options via macro

* refactor: re-model the session config value with static type

* test: add integration test

* refactor: move the encode bytea by format type logic into encoder

refactor: use Arc<DashMap> instead of DashMap in QueryContext

refactor: use Arc<DashMap> instead of DashMap in QueryContext

    Avoid expensive clone

refactor: use unreachable!() instead of unimplemented!()

refactor: move the encode bytea by format type logic into encoder

test: add binary format integration test case

* test: add ut for byte related type

* doc: remove TODO of bytea_output

* refactor: simplify the implementation with simple struct instead of complex typing

* fix: typo of 'Available'

* fix compile

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: tison <wander4096@gmail.com>
2024-03-27 01:54:41 +00:00
tison
d83279567b feat(auth): watch file user provider (#3566)
* feat(auth): watch file user provider

Signed-off-by: tison <wander4096@gmail.com>

* impl

Signed-off-by: tison <wander4096@gmail.com>

* use debouncer

Signed-off-by: tison <wander4096@gmail.com>

* add test

Signed-off-by: tison <wander4096@gmail.com>

* clippy

Signed-off-by: tison <wander4096@gmail.com>

* add path for FileWatch snafu

Signed-off-by: tison <wander4096@gmail.com>

* Apply comments

Signed-off-by: tison <wander4096@gmail.com>

* fix compile

Signed-off-by: tison <wander4096@gmail.com>

* drop notify-debouncer-full dep

Signed-off-by: tison <wander4096@gmail.com>

* empty to allow all

Signed-off-by: tison <wander4096@gmail.com>

* more test and log

Signed-off-by: tison <wander4096@gmail.com>

* relax the wait period

Signed-off-by: tison <wander4096@gmail.com>

* avoid sleep

Signed-off-by: tison <wander4096@gmail.com>

* Revert "avoid sleep"

This reverts commit d7a0be1dea.

* avoid sleep

Signed-off-by: tison <wander4096@gmail.com>

* cargo fmt

Signed-off-by: tison <wander4096@gmail.com>

* tidy dep

Signed-off-by: tison <wander4096@gmail.com>

* adjust

Signed-off-by: tison <wander4096@gmail.com>

* try be stable on CI

Signed-off-by: tison <wander4096@gmail.com>

* debugging

Signed-off-by: tison <wander4096@gmail.com>

* debugging

Signed-off-by: tison <wander4096@gmail.com>

* watch on the dir

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-27 01:19:18 +00:00
tison
150454b1fd chore: Delete CODE_OF_CONDUCT.md (#3578)
Leverage GitHub's feature to reuse `GreptimeTeam/.github` content.

This depends on https://github.com/GreptimeTeam/.github/pull/5.
2024-03-26 09:43:05 -07:00
JeremyHi
58c7858cd4 feat: update physical table schema on alter logical tables (#3585)
* feat: update physical table schema on alter

* feat: alter logical table in sql path

* feat: invalidate cache step1

* feat: invalidate cache step2

* feat: invalidate cache step3

* feat: invalidate cache step4

* fix: failed ut

* fix: standalone cache invalidator

* feat: log the count of already finished

* feat: re-invalidate cache

* chore: by comment

* chore: Update src/common/meta/src/ddl/create_logical_tables.rs

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-03-26 14:29:53 +00:00
dimbtp
dd18d8c97b build(deps): remove some unused dependencies (#3582)
* build(deps): remove some unused dependencies

* add `arc-swap` dependency back
2024-03-26 12:48:28 +00:00
Lei, HUANG
175929426a feat: support time range in copy table (#3583)
* feat: support specifying time range in copy table statement

* chore: update sqlness results

* fix: sqlness
2024-03-26 11:24:28 +00:00
Ruihang Xia
8f9676aad2 fix: incorrect version info in (#3586)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-26 09:31:01 +00:00
Ruihang Xia
74565151e9 fix: update pk_cache in compat reader (#3576)
* fix: update pk_cache in compat reader

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update document

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* avoid mysterious bug

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-26 08:31:00 +00:00
Ruihang Xia
83c1b485ea chore: limit OpenDAL's feature gates (#3584)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-26 07:54:06 +00:00
JeremyHi
c2dd1136fe feat: batch alter logical tables (#3569)
* feat: add unit test for alter logical tables

* Update src/common/meta/src/ddl/alter_table.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* feat: add some comments

* chore: add debug_assert_eq

* chore: fix some nits

* chore: remove the method batch_get_table_routes

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-26 07:07:23 +00:00
tison
7c1c6e8b8c refactor: try upgrade regex-automata (#3575)
* refactor: try upgrade regex-automata

Signed-off-by: tison <wander4096@gmail.com>

* try fix

Signed-off-by: tison <wander4096@gmail.com>

* always check match with next_eoi_state

Signed-off-by: tison <wander4096@gmail.com>

* add a guard to prevent over moving the state

Signed-off-by: tison <wander4096@gmail.com>

* tidy

Signed-off-by: tison <wander4096@gmail.com>

---------

Signed-off-by: tison <wander4096@gmail.com>
2024-03-26 04:28:14 +00:00
Yingwen
62d8bbb10c ci: use single commit on the deployment branch (#3580) 2024-03-25 21:04:57 -07:00
Weny Xu
bf14d33962 feat: implement the drop database procedure (#3541)
* refactor: remove Sync trait of Procedure

* refactor: remove unnecessary async

* feat: implement the drop database procedure

* refactor: refactor DdlManager register_loaders

* feat: register the DropDatabaseProcedureLoader

* chore: fmt toml

* feat: support to submit DropDatabaseTask

* feat: support drop database stmt

* fix: empty the tables stream

* fix: ensure the factory always exists

* test: update sqlness results

* chore: correct comments

* test: update sqlness results

* test: update sqlness results

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2024-03-25 06:12:47 +00:00
tison
0f1747b80d chore: retain original headers (#3572)
Signed-off-by: tison <wander4096@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-03-25 03:53:51 +00:00
Ruihang Xia
992c7ec71b feat: update physical table's schema on creating logical table (#3570)
* feat: update physical table's schema on creating logical table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove debug code

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak ut const

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* invalid physical table cache

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-03-25 03:19:30 +00:00
x³u³
2ad0b24efa fix: set http response charset to utf-8 when using table format (#3571) 2024-03-25 03:13:01 +00:00
319 changed files with 14315 additions and 3994 deletions


@@ -40,3 +40,4 @@ jobs:
uses: JamesIves/github-pages-deploy-action@v4
with:
folder: target/doc
single-commit: true


@@ -13,4 +13,4 @@ jobs:
steps:
- uses: actions/checkout@v4
- name: Check License Header
- uses: korandoru/hawkeye@v4
+ uses: korandoru/hawkeye@v5


@@ -1,132 +0,0 @@
# Contributor Covenant Code of Conduct
## Our Pledge
We as members, contributors, and leaders pledge to make participation in our
community a harassment-free experience for everyone, regardless of age, body
size, visible or invisible disability, ethnicity, sex characteristics, gender
identity and expression, level of experience, education, socio-economic status,
nationality, personal appearance, race, caste, color, religion, or sexual
identity and orientation.
We pledge to act and interact in ways that contribute to an open, welcoming,
diverse, inclusive, and healthy community.
## Our Standards
Examples of behavior that contributes to a positive environment for our
community include:
* Demonstrating empathy and kindness toward other people
* Being respectful of differing opinions, viewpoints, and experiences
* Giving and gracefully accepting constructive feedback
* Accepting responsibility and apologizing to those affected by our mistakes,
and learning from the experience
* Focusing on what is best not just for us as individuals, but for the overall
community
Examples of unacceptable behavior include:
* The use of sexualized language or imagery, and sexual attention or advances of
any kind
* Trolling, insulting or derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or email address,
without their explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Enforcement Responsibilities
Community leaders are responsible for clarifying and enforcing our standards of
acceptable behavior and will take appropriate and fair corrective action in
response to any behavior that they deem inappropriate, threatening, offensive,
or harmful.
Community leaders have the right and responsibility to remove, edit, or reject
comments, commits, code, wiki edits, issues, and other contributions that are
not aligned to this Code of Conduct, and will communicate reasons for moderation
decisions when appropriate.
## Scope
This Code of Conduct applies within all community spaces, and also applies when
an individual is officially representing the community in public spaces.
Examples of representing our community include using an official e-mail address,
posting via an official social media account, or acting as an appointed
representative at an online or offline event.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported to the community leaders responsible for enforcement at
info@greptime.com.
All complaints will be reviewed and investigated promptly and fairly.
All community leaders are obligated to respect the privacy and security of the
reporter of any incident.
## Enforcement Guidelines
Community leaders will follow these Community Impact Guidelines in determining
the consequences for any action they deem in violation of this Code of Conduct:
### 1. Correction
**Community Impact**: Use of inappropriate language or other behavior deemed
unprofessional or unwelcome in the community.
**Consequence**: A private, written warning from community leaders, providing
clarity around the nature of the violation and an explanation of why the
behavior was inappropriate. A public apology may be requested.
### 2. Warning
**Community Impact**: A violation through a single incident or series of
actions.
**Consequence**: A warning with consequences for continued behavior. No
interaction with the people involved, including unsolicited interaction with
those enforcing the Code of Conduct, for a specified period of time. This
includes avoiding interactions in community spaces as well as external channels
like social media. Violating these terms may lead to a temporary or permanent
ban.
### 3. Temporary Ban
**Community Impact**: A serious violation of community standards, including
sustained inappropriate behavior.
**Consequence**: A temporary ban from any sort of interaction or public
communication with the community for a specified period of time. No public or
private interaction with the people involved, including unsolicited interaction
with those enforcing the Code of Conduct, is allowed during this period.
Violating these terms may lead to a permanent ban.
### 4. Permanent Ban
**Community Impact**: Demonstrating a pattern of violation of community
standards, including sustained inappropriate behavior, harassment of an
individual, or aggression toward or disparagement of classes of individuals.
**Consequence**: A permanent ban from any sort of public interaction within the
community.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage],
version 2.1, available at
[https://www.contributor-covenant.org/version/2/1/code_of_conduct.html][v2.1].
Community Impact Guidelines were inspired by
[Mozilla's code of conduct enforcement ladder][Mozilla CoC].
For answers to common questions about this code of conduct, see the FAQ at
[https://www.contributor-covenant.org/faq][FAQ]. Translations are available at
[https://www.contributor-covenant.org/translations][translations].
[homepage]: https://www.contributor-covenant.org
[v2.1]: https://www.contributor-covenant.org/version/2/1/code_of_conduct.html
[Mozilla CoC]: https://github.com/mozilla/diversity
[FAQ]: https://www.contributor-covenant.org/faq
[translations]: https://www.contributor-covenant.org/translations

Cargo.lock (generated, 385 changed lines): file diff suppressed because it is too large.


@@ -62,7 +62,7 @@ members = [
resolver = "2"
[workspace.package]
- version = "0.7.1"
+ version = "0.7.2"
edition = "2021"
license = "Apache-2.0"
@@ -99,17 +99,20 @@ datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.g
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
datafusion-substrait = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
derive_builder = "0.12"
dotenv = "0.15"
etcd-client = "0.12"
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
- greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "349cb385583697f41010dabeb3c106d58f9599b4" }
+ greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "1bd2398b686e5ac6c1eef6daf615867ce27f75c1" }
humantime = "2.1"
humantime-serde = "1.1"
itertools = "0.10"
lazy_static = "1.4"
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "80b72716dcde47ec4161478416a5c6c21343364d" }
mockall = "0.11.4"
moka = "0.12"
notify = "6.1"
num_cpus = "1.16"
once_cell = "1.18"
opentelemetry-proto = { git = "https://github.com/waynexia/opentelemetry-rust.git", rev = "33841b38dda79b15f2024952be5f32533325ca02", features = [
@@ -125,7 +128,7 @@ prost = "0.12"
raft-engine = { version = "0.4.1", default-features = false }
rand = "0.8"
regex = "1.8"
- regex-automata = { version = "0.2", features = ["transducer"] }
+ regex-automata = { version = "0.4" }
reqwest = { version = "0.11", default-features = false, features = [
"json",
"rustls-tls-native-roots",
@@ -133,6 +136,7 @@ reqwest = { version = "0.11", default-features = false, features = [
] }
rskafka = "0.5"
rust_decimal = "1.33"
schemars = "0.8"
serde = { version = "1.0", features = ["derive"] }
serde_json = { version = "1.0", features = ["float_roundtrip"] }
serde_with = "3"
@@ -151,6 +155,7 @@ tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.8.8"
tonic = { version = "0.10", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
zstd = "0.13"
## workspaces members
api = { path = "src/api" }


@@ -8,12 +8,31 @@ license.workspace = true
workspace = true
[dependencies]
api.workspace = true
arrow.workspace = true
chrono.workspace = true
clap.workspace = true
client.workspace = true
common-base.workspace = true
common-telemetry.workspace = true
common-wal.workspace = true
dotenv.workspace = true
futures.workspace = true
futures-util.workspace = true
humantime.workspace = true
humantime-serde.workspace = true
indicatif = "0.17.1"
itertools.workspace = true
lazy_static.workspace = true
log-store.workspace = true
mito2.workspace = true
num_cpus.workspace = true
parquet.workspace = true
prometheus.workspace = true
rand.workspace = true
rskafka.workspace = true
serde.workspace = true
store-api.workspace = true
tokio.workspace = true
toml.workspace = true
uuid.workspace = true

benchmarks/README.md (new file, 11 lines)

@@ -0,0 +1,11 @@
Benchmarkers for GreptimeDB
--------------------------------
## Wal Benchmarker
The WAL benchmarker evaluates the performance of GreptimeDB's Write-Ahead Log (WAL) component, measuring its read/write performance under the diverse workloads the benchmarker generates.
### How to use
To compile the benchmarker, navigate to the `greptimedb/benchmarks` directory and run `cargo build --release`. The compiled binary is then located at `greptimedb/target/release/wal_bench`.
Running `./wal_bench -h` lists the arguments the binary accepts. A notable one is the `cfg-file` argument: by supplying a configuration file in TOML format, you avoid repeatedly specifying long argument lists (see the example invocation below).
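For illustration only: the invocation below is not part of the README added in this changeset. It is a minimal sketch that combines the build steps above with the example configuration referenced in the `Args` docs (`benchmarks/config/wal_bench.example.toml`), and it assumes a Kafka broker is already reachable at the `bootstrap_brokers` address listed in that config.

```shell
# Build the benchmarker (assumes the repository is checked out at ./greptimedb).
cd greptimedb/benchmarks
cargo build --release

# Run it against the example TOML config instead of passing every flag by hand.
# `-c` is the short form of `--cfg-file`.
../target/release/wal_bench --cfg-file config/wal_bench.example.toml
```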


@@ -0,0 +1,21 @@
# Refers to the documentation of `Args` in `benchmarks/src/wal_bench.rs`.
wal_provider = "kafka"
bootstrap_brokers = ["localhost:9092"]
num_workers = 10
num_topics = 32
num_regions = 1000
num_scrapes = 1000
num_rows = 5
col_types = "ifs"
max_batch_size = "512KB"
linger = "1ms"
backoff_init = "10ms"
backoff_max = "1ms"
backoff_base = 2
backoff_deadline = "3s"
compression = "zstd"
rng_seed = 42
skip_read = false
skip_write = false
random_topics = true
report_metrics = false


@@ -0,0 +1,326 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(int_roundings)]
use std::fs;
use std::sync::Arc;
use std::time::Instant;
use api::v1::{ColumnDataType, ColumnSchema, SemanticType};
use benchmarks::metrics;
use benchmarks::wal_bench::{Args, Config, Region, WalProvider};
use clap::Parser;
use common_telemetry::info;
use common_wal::config::kafka::common::BackoffConfig;
use common_wal::config::kafka::DatanodeKafkaConfig as KafkaConfig;
use common_wal::config::raft_engine::RaftEngineConfig;
use common_wal::options::{KafkaWalOptions, WalOptions};
use itertools::Itertools;
use log_store::kafka::log_store::KafkaLogStore;
use log_store::raft_engine::log_store::RaftEngineLogStore;
use mito2::wal::Wal;
use prometheus::{Encoder, TextEncoder};
use rand::distributions::{Alphanumeric, DistString};
use rand::rngs::SmallRng;
use rand::SeedableRng;
use rskafka::client::partition::Compression;
use rskafka::client::ClientBuilder;
use store_api::logstore::LogStore;
use store_api::storage::RegionId;
async fn run_benchmarker<S: LogStore>(cfg: &Config, topics: &[String], wal: Arc<Wal<S>>) {
let chunk_size = cfg.num_regions.div_ceil(cfg.num_workers);
let region_chunks = (0..cfg.num_regions)
.map(|id| {
build_region(
id as u64,
topics,
&mut SmallRng::seed_from_u64(cfg.rng_seed),
cfg,
)
})
.chunks(chunk_size as usize)
.into_iter()
.map(|chunk| Arc::new(chunk.collect::<Vec<_>>()))
.collect::<Vec<_>>();
let mut write_elapsed = 0;
let mut read_elapsed = 0;
if !cfg.skip_write {
info!("Benchmarking write ...");
let num_scrapes = cfg.num_scrapes;
let timer = Instant::now();
futures::future::join_all((0..cfg.num_workers).map(|i| {
let wal = wal.clone();
let regions = region_chunks[i as usize].clone();
tokio::spawn(async move {
for _ in 0..num_scrapes {
let mut wal_writer = wal.writer();
regions
.iter()
.for_each(|region| region.add_wal_entry(&mut wal_writer));
wal_writer.write_to_wal().await.unwrap();
}
})
}))
.await;
write_elapsed += timer.elapsed().as_millis();
}
if !cfg.skip_read {
info!("Benchmarking read ...");
let timer = Instant::now();
futures::future::join_all((0..cfg.num_workers).map(|i| {
let wal = wal.clone();
let regions = region_chunks[i as usize].clone();
tokio::spawn(async move {
for region in regions.iter() {
region.replay(&wal).await;
}
})
}))
.await;
read_elapsed = timer.elapsed().as_millis();
}
dump_report(cfg, write_elapsed, read_elapsed);
}
fn build_region(id: u64, topics: &[String], rng: &mut SmallRng, cfg: &Config) -> Region {
let wal_options = match cfg.wal_provider {
WalProvider::Kafka => {
assert!(!topics.is_empty());
WalOptions::Kafka(KafkaWalOptions {
topic: topics.get(id as usize % topics.len()).cloned().unwrap(),
})
}
WalProvider::RaftEngine => WalOptions::RaftEngine,
};
Region::new(
RegionId::from_u64(id),
build_schema(&parse_col_types(&cfg.col_types), rng),
wal_options,
cfg.num_rows,
cfg.rng_seed,
)
}
fn build_schema(col_types: &[ColumnDataType], mut rng: &mut SmallRng) -> Vec<ColumnSchema> {
col_types
.iter()
.map(|col_type| ColumnSchema {
column_name: Alphanumeric.sample_string(&mut rng, 5),
datatype: *col_type as i32,
semantic_type: SemanticType::Field as i32,
datatype_extension: None,
})
.chain(vec![ColumnSchema {
column_name: "ts".to_string(),
datatype: ColumnDataType::TimestampMillisecond as i32,
semantic_type: SemanticType::Tag as i32,
datatype_extension: None,
}])
.collect()
}
fn dump_report(cfg: &Config, write_elapsed: u128, read_elapsed: u128) {
let cost_report = format!(
"write costs: {} ms, read costs: {} ms",
write_elapsed, read_elapsed,
);
let total_written_bytes = metrics::METRIC_WAL_WRITE_BYTES_TOTAL.get() as u128;
let write_throughput = if write_elapsed > 0 {
(total_written_bytes * 1000).div_floor(write_elapsed)
} else {
0
};
let total_read_bytes = metrics::METRIC_WAL_READ_BYTES_TOTAL.get() as u128;
let read_throughput = if read_elapsed > 0 {
(total_read_bytes * 1000).div_floor(read_elapsed)
} else {
0
};
let throughput_report = format!(
"total written bytes: {} bytes, total read bytes: {} bytes, write throughput: {} bytes/s ({} mb/s), read throughput: {} bytes/s ({} mb/s)",
total_written_bytes,
total_read_bytes,
write_throughput,
write_throughput.div_floor(1 << 20),
read_throughput,
read_throughput.div_floor(1 << 20),
);
let metrics_report = if cfg.report_metrics {
let mut buffer = Vec::new();
let encoder = TextEncoder::new();
let metrics = prometheus::gather();
encoder.encode(&metrics, &mut buffer).unwrap();
String::from_utf8(buffer).unwrap()
} else {
String::new()
};
info!(
r#"
Benchmark config:
{cfg:?}
Benchmark report:
{cost_report}
{throughput_report}
{metrics_report}"#
);
}
async fn create_topics(cfg: &Config) -> Vec<String> {
// Creates topics.
let client = ClientBuilder::new(cfg.bootstrap_brokers.clone())
.build()
.await
.unwrap();
let ctrl_client = client.controller_client().unwrap();
let (topics, tasks): (Vec<_>, Vec<_>) = (0..cfg.num_topics)
.map(|i| {
let topic = if cfg.random_topics {
format!(
"greptime_wal_bench_topic_{}_{}",
uuid::Uuid::new_v4().as_u128(),
i
)
} else {
format!("greptime_wal_bench_topic_{}", i)
};
let task = ctrl_client.create_topic(
topic.clone(),
1,
cfg.bootstrap_brokers.len() as i16,
2000,
);
(topic, task)
})
.unzip();
// Must ignore errors since we allow topics being created more than once.
let _ = futures::future::try_join_all(tasks).await;
topics
}
fn parse_compression(comp: &str) -> Compression {
match comp {
"no" => Compression::NoCompression,
"gzip" => Compression::Gzip,
"lz4" => Compression::Lz4,
"snappy" => Compression::Snappy,
"zstd" => Compression::Zstd,
other => unreachable!("Unrecognized compression {other}"),
}
}
fn parse_col_types(col_types: &str) -> Vec<ColumnDataType> {
let parts = col_types.split('x').collect::<Vec<_>>();
assert!(parts.len() <= 2);
let pattern = parts[0];
let repeat = parts
.get(1)
.map(|r| r.parse::<usize>().unwrap())
.unwrap_or(1);
pattern
.chars()
.map(|c| match c {
'i' | 'I' => ColumnDataType::Int64,
'f' | 'F' => ColumnDataType::Float64,
's' | 'S' => ColumnDataType::String,
other => unreachable!("Cannot parse {other} as a column data type"),
})
.cycle()
.take(pattern.len() * repeat)
.collect()
}
fn main() {
// Sets the global log level to INFO and suppresses rskafka logs below ERROR.
std::env::set_var("UNITTEST_LOG_LEVEL", "info,rskafka=error");
common_telemetry::init_default_ut_logging();
let args = Args::parse();
let cfg = if !args.cfg_file.is_empty() {
toml::from_str(&fs::read_to_string(&args.cfg_file).unwrap()).unwrap()
} else {
Config::from(args)
};
// Validates arguments.
if cfg.num_regions < cfg.num_workers {
panic!("num_regions must be greater than or equal to num_workers");
}
if cfg
.num_workers
.min(cfg.num_topics)
.min(cfg.num_regions)
.min(cfg.num_scrapes)
.min(cfg.max_batch_size.as_bytes() as u32)
.min(cfg.bootstrap_brokers.len() as u32)
== 0
{
panic!("Invalid arguments");
}
tokio::runtime::Builder::new_multi_thread()
.enable_all()
.build()
.unwrap()
.block_on(async {
match cfg.wal_provider {
WalProvider::Kafka => {
let topics = create_topics(&cfg).await;
let kafka_cfg = KafkaConfig {
broker_endpoints: cfg.bootstrap_brokers.clone(),
max_batch_size: cfg.max_batch_size,
linger: cfg.linger,
backoff: BackoffConfig {
init: cfg.backoff_init,
max: cfg.backoff_max,
base: cfg.backoff_base,
deadline: Some(cfg.backoff_deadline),
},
compression: parse_compression(&cfg.compression),
..Default::default()
};
let store = Arc::new(KafkaLogStore::try_new(&kafka_cfg).await.unwrap());
let wal = Arc::new(Wal::new(store));
run_benchmarker(&cfg, &topics, wal).await;
}
WalProvider::RaftEngine => {
// The benchmarker assumes the raft engine directory exists.
let store = RaftEngineLogStore::try_new(
"/tmp/greptimedb/raft-engine-wal".to_string(),
RaftEngineConfig::default(),
)
.await
.map(Arc::new)
.unwrap();
let wal = Arc::new(Wal::new(store));
run_benchmarker(&cfg, &[], wal).await;
}
}
});
}


@@ -11,3 +11,6 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod metrics;
pub mod wal_bench;

benchmarks/src/metrics.rs (new file, 39 lines)

@@ -0,0 +1,39 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use lazy_static::lazy_static;
use prometheus::*;
/// Logstore label.
pub const LOGSTORE_LABEL: &str = "logstore";
/// Operation type label.
pub const OPTYPE_LABEL: &str = "optype";
lazy_static! {
/// Counters of bytes of each operation on a logstore.
pub static ref METRIC_WAL_OP_BYTES_TOTAL: IntCounterVec = register_int_counter_vec!(
"greptime_bench_wal_op_bytes_total",
"wal operation bytes total",
&[OPTYPE_LABEL],
)
.unwrap();
/// Counter of bytes of the append_batch operation.
pub static ref METRIC_WAL_WRITE_BYTES_TOTAL: IntCounter = METRIC_WAL_OP_BYTES_TOTAL.with_label_values(
&["write"],
);
/// Counter of bytes of the read operation.
pub static ref METRIC_WAL_READ_BYTES_TOTAL: IntCounter = METRIC_WAL_OP_BYTES_TOTAL.with_label_values(
&["read"],
);
}

benchmarks/src/wal_bench.rs (new file, 361 lines)

@@ -0,0 +1,361 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::mem::size_of;
use std::sync::atomic::{AtomicI64, AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::time::Duration;
use api::v1::value::ValueData;
use api::v1::{ColumnDataType, ColumnSchema, Mutation, OpType, Row, Rows, Value, WalEntry};
use clap::{Parser, ValueEnum};
use common_base::readable_size::ReadableSize;
use common_wal::options::WalOptions;
use futures::StreamExt;
use mito2::wal::{Wal, WalWriter};
use rand::distributions::{Alphanumeric, DistString, Uniform};
use rand::rngs::SmallRng;
use rand::{Rng, SeedableRng};
use serde::{Deserialize, Serialize};
use store_api::logstore::LogStore;
use store_api::storage::RegionId;
use crate::metrics;
/// The wal provider.
#[derive(Clone, ValueEnum, Default, Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum WalProvider {
#[default]
RaftEngine,
Kafka,
}
#[derive(Parser)]
pub struct Args {
/// The provided configuration file.
/// The example configuration file can be found at `greptimedb/benchmarks/config/wal_bench.example.toml`.
#[clap(long, short = 'c')]
pub cfg_file: String,
/// The wal provider.
#[clap(long, value_enum, default_value_t = WalProvider::default())]
pub wal_provider: WalProvider,
/// The advertised addresses of the kafka brokers.
/// If there are multiple bootstrap brokers, their addresses should be separated by commas, e.g. "localhost:9092,localhost:9093".
#[clap(long, short = 'b', default_value = "localhost:9092")]
pub bootstrap_brokers: String,
/// The number of workers each running in a dedicated thread.
#[clap(long, default_value_t = num_cpus::get() as u32)]
pub num_workers: u32,
/// The number of kafka topics to be created.
#[clap(long, default_value_t = 32)]
pub num_topics: u32,
/// The number of regions.
#[clap(long, default_value_t = 1000)]
pub num_regions: u32,
/// The number of times each region is scraped.
#[clap(long, default_value_t = 1000)]
pub num_scrapes: u32,
/// The number of rows in each wal entry.
/// Each time a region is scraped, a wal entry containing the specified number of rows will be produced.
#[clap(long, default_value_t = 5)]
pub num_rows: u32,
/// The column types of the schema for each region.
/// Currently, three column types are supported:
/// - i = ColumnDataType::Int64
/// - f = ColumnDataType::Float64
/// - s = ColumnDataType::String
/// For example, "ifs" will be parsed as three columns: i64, f64, and string.
///
/// Additionally, an "x" sign can be provided to repeat the column types a given number of times.
/// For example, "iix2" will be parsed as 4 columns: i64, i64, i64, and i64.
/// This feature is useful if you want to specify many columns.
#[clap(long, default_value = "ifs")]
pub col_types: String,
/// The maximum size of a batch of kafka records.
/// The default value is 1mb.
#[clap(long, default_value = "512KB")]
pub max_batch_size: ReadableSize,
/// The minimum latency before the kafka client issues a batch of kafka records.
/// However, a batch of kafka records is issued immediately if a record cannot fit into the batch.
#[clap(long, default_value = "1ms")]
pub linger: String,
/// The initial backoff delay of the kafka consumer.
#[clap(long, default_value = "10ms")]
pub backoff_init: String,
/// The maximum backoff delay of the kafka consumer.
#[clap(long, default_value = "1s")]
pub backoff_max: String,
/// The exponential backoff rate of the kafka consumer. The next back off = base * the current backoff.
#[clap(long, default_value_t = 2)]
pub backoff_base: u32,
/// The deadline of backoff. The backoff ends if the total backoff delay reaches the deadline.
#[clap(long, default_value = "3s")]
pub backoff_deadline: String,
/// The client-side compression algorithm for kafka records.
#[clap(long, default_value = "zstd")]
pub compression: String,
/// The seed of random number generators.
#[clap(long, default_value_t = 42)]
pub rng_seed: u64,
/// Skips the read phase, aka. region replay, if set to true.
#[clap(long, default_value_t = false)]
pub skip_read: bool,
/// Skips the write phase if set to true.
#[clap(long, default_value_t = false)]
pub skip_write: bool,
/// Randomly generates topic names if set to true.
/// Useful when you want to run the benchmarker without worrying about the topics created before.
#[clap(long, default_value_t = false)]
pub random_topics: bool,
/// Logs out the gathered prometheus metrics when the benchmarker ends.
#[clap(long, default_value_t = false)]
pub report_metrics: bool,
}
/// Benchmarker config.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
pub wal_provider: WalProvider,
pub bootstrap_brokers: Vec<String>,
pub num_workers: u32,
pub num_topics: u32,
pub num_regions: u32,
pub num_scrapes: u32,
pub num_rows: u32,
pub col_types: String,
pub max_batch_size: ReadableSize,
#[serde(with = "humantime_serde")]
pub linger: Duration,
#[serde(with = "humantime_serde")]
pub backoff_init: Duration,
#[serde(with = "humantime_serde")]
pub backoff_max: Duration,
pub backoff_base: u32,
#[serde(with = "humantime_serde")]
pub backoff_deadline: Duration,
pub compression: String,
pub rng_seed: u64,
pub skip_read: bool,
pub skip_write: bool,
pub random_topics: bool,
pub report_metrics: bool,
}
impl From<Args> for Config {
fn from(args: Args) -> Self {
let cfg = Self {
wal_provider: args.wal_provider,
bootstrap_brokers: args
.bootstrap_brokers
.split(',')
.map(ToString::to_string)
.collect::<Vec<_>>(),
num_workers: args.num_workers.min(num_cpus::get() as u32),
num_topics: args.num_topics,
num_regions: args.num_regions,
num_scrapes: args.num_scrapes,
num_rows: args.num_rows,
col_types: args.col_types,
max_batch_size: args.max_batch_size,
linger: humantime::parse_duration(&args.linger).unwrap(),
backoff_init: humantime::parse_duration(&args.backoff_init).unwrap(),
backoff_max: humantime::parse_duration(&args.backoff_max).unwrap(),
backoff_base: args.backoff_base,
backoff_deadline: humantime::parse_duration(&args.backoff_deadline).unwrap(),
compression: args.compression,
rng_seed: args.rng_seed,
skip_read: args.skip_read,
skip_write: args.skip_write,
random_topics: args.random_topics,
report_metrics: args.report_metrics,
};
cfg
}
}
/// The region used for wal benchmarker.
pub struct Region {
id: RegionId,
schema: Vec<ColumnSchema>,
wal_options: WalOptions,
next_sequence: AtomicU64,
next_entry_id: AtomicU64,
next_timestamp: AtomicI64,
rng: Mutex<Option<SmallRng>>,
num_rows: u32,
}
impl Region {
/// Creates a new region.
pub fn new(
id: RegionId,
schema: Vec<ColumnSchema>,
wal_options: WalOptions,
num_rows: u32,
rng_seed: u64,
) -> Self {
Self {
id,
schema,
wal_options,
next_sequence: AtomicU64::new(1),
next_entry_id: AtomicU64::new(1),
next_timestamp: AtomicI64::new(1655276557000),
rng: Mutex::new(Some(SmallRng::seed_from_u64(rng_seed))),
num_rows,
}
}
/// Scrapes the region and adds the generated entry to wal.
pub fn add_wal_entry<S: LogStore>(&self, wal_writer: &mut WalWriter<S>) {
let mutation = Mutation {
op_type: OpType::Put as i32,
sequence: self
.next_sequence
.fetch_add(self.num_rows as u64, Ordering::Relaxed),
rows: Some(self.build_rows()),
};
let entry = WalEntry {
mutations: vec![mutation],
};
metrics::METRIC_WAL_WRITE_BYTES_TOTAL.inc_by(Self::entry_estimated_size(&entry) as u64);
wal_writer
.add_entry(
self.id,
self.next_entry_id.fetch_add(1, Ordering::Relaxed),
&entry,
&self.wal_options,
)
.unwrap();
}
/// Replays the region.
pub async fn replay<S: LogStore>(&self, wal: &Arc<Wal<S>>) {
let mut wal_stream = wal.scan(self.id, 0, &self.wal_options).unwrap();
while let Some(res) = wal_stream.next().await {
let (_, entry) = res.unwrap();
metrics::METRIC_WAL_READ_BYTES_TOTAL.inc_by(Self::entry_estimated_size(&entry) as u64);
}
}
/// Computes the estimated size in bytes of the entry.
pub fn entry_estimated_size(entry: &WalEntry) -> usize {
let wrapper_size = size_of::<WalEntry>()
+ entry.mutations.capacity() * size_of::<Mutation>()
+ size_of::<Rows>();
let rows = entry.mutations[0].rows.as_ref().unwrap();
let schema_size = rows.schema.capacity() * size_of::<ColumnSchema>()
+ rows
.schema
.iter()
.map(|s| s.column_name.capacity())
.sum::<usize>();
let values_size = (rows.rows.capacity() * size_of::<Row>())
+ rows
.rows
.iter()
.map(|r| r.values.capacity() * size_of::<Value>())
.sum::<usize>();
wrapper_size + schema_size + values_size
}
fn build_rows(&self) -> Rows {
let cols = self
.schema
.iter()
.map(|col_schema| {
let col_data_type = ColumnDataType::try_from(col_schema.datatype).unwrap();
self.build_col(&col_data_type, self.num_rows)
})
.collect::<Vec<_>>();
let rows = (0..self.num_rows)
.map(|i| {
let values = cols.iter().map(|col| col[i as usize].clone()).collect();
Row { values }
})
.collect();
Rows {
schema: self.schema.clone(),
rows,
}
}
fn build_col(&self, col_data_type: &ColumnDataType, num_rows: u32) -> Vec<Value> {
let mut rng_guard = self.rng.lock().unwrap();
let rng = rng_guard.as_mut().unwrap();
match col_data_type {
ColumnDataType::TimestampMillisecond => (0..num_rows)
.map(|_| {
let ts = self.next_timestamp.fetch_add(1000, Ordering::Relaxed);
Value {
value_data: Some(ValueData::TimestampMillisecondValue(ts)),
}
})
.collect(),
ColumnDataType::Int64 => (0..num_rows)
.map(|_| {
let v = rng.sample(Uniform::new(0, 10_000));
Value {
value_data: Some(ValueData::I64Value(v)),
}
})
.collect(),
ColumnDataType::Float64 => (0..num_rows)
.map(|_| {
let v = rng.sample(Uniform::new(0.0, 5000.0));
Value {
value_data: Some(ValueData::F64Value(v)),
}
})
.collect(),
ColumnDataType::String => (0..num_rows)
.map(|_| {
let v = Alphanumeric.sample_string(rng, 10);
Value {
value_data: Some(ValueData::StringValue(v)),
}
})
.collect(),
_ => unreachable!(),
}
}
}

cliff.toml (new file, 117 lines)

@@ -0,0 +1,117 @@
# https://git-cliff.org/docs/configuration
[remote.github]
owner = "GreptimeTeam"
repo = "greptimedb"
[changelog]
header = ""
footer = ""
# template for the changelog body
# https://keats.github.io/tera/docs/#introduction
body = """
# {{ version }}
Release date: {{ timestamp | date(format="%B %d, %Y") }}
{%- set breakings = commits | filter(attribute="breaking", value=true) -%}
{%- if breakings | length > 0 %}
## Breaking changes
{% for commit in breakings %}
* {{ commit.github.pr_title }}\
{% if commit.github.username %} by \
{% set author = commit.github.username -%}
[@{{ author }}](https://github.com/{{ author }})
{%- endif -%}
{% if commit.github.pr_number %} in \
{% set number = commit.github.pr_number -%}
[#{{ number }}]({{ self::remote_url() }}/pull/{{ number }})
{%- endif %}
{%- endfor %}
{%- endif -%}
{%- set grouped_commits = commits | filter(attribute="breaking", value=false) | group_by(attribute="group") -%}
{% for group, commits in grouped_commits %}
### {{ group | striptags | trim | upper_first }}
{% for commit in commits %}
* {{ commit.github.pr_title }}\
{% if commit.github.username %} by \
{% set author = commit.github.username -%}
[@{{ author }}](https://github.com/{{ author }})
{%- endif -%}
{% if commit.github.pr_number %} in \
{% set number = commit.github.pr_number -%}
[#{{ number }}]({{ self::remote_url() }}/pull/{{ number }})
{%- endif %}
{%- endfor -%}
{% endfor %}
{%- if github.contributors | filter(attribute="is_first_time", value=true) | length != 0 %}
{% raw %}\n{% endraw -%}
## New Contributors
{% endif -%}
{% for contributor in github.contributors | filter(attribute="is_first_time", value=true) %}
* @{{ contributor.username }} made their first contribution
{%- if contributor.pr_number %} in \
[#{{ contributor.pr_number }}]({{ self::remote_url() }}/pull/{{ contributor.pr_number }}) \
{%- endif %}
{%- endfor -%}
{% if github.contributors | length != 0 %}
{% raw %}\n{% endraw -%}
## All Contributors
We would like to thank the following contributors from the GreptimeDB community:
{{ github.contributors | map(attribute="username") | join(sep=", ") }}
{%- endif %}
{% raw %}\n{% endraw %}
{%- macro remote_url() -%}
https://github.com/{{ remote.github.owner }}/{{ remote.github.repo }}
{%- endmacro -%}
"""
trim = true
[git]
# parse the commits based on https://www.conventionalcommits.org
conventional_commits = true
# filter out the commits that are not conventional
filter_unconventional = true
# process each line of a commit as an individual commit
split_commits = false
# regex for parsing and grouping commits
commit_parsers = [
{ message = "^feat", group = "<!-- 0 -->🚀 Features" },
{ message = "^fix", group = "<!-- 1 -->🐛 Bug Fixes" },
{ message = "^doc", group = "<!-- 3 -->📚 Documentation" },
{ message = "^perf", group = "<!-- 4 -->⚡ Performance" },
{ message = "^refactor", group = "<!-- 2 -->🚜 Refactor" },
{ message = "^style", group = "<!-- 5 -->🎨 Styling" },
{ message = "^test", group = "<!-- 6 -->🧪 Testing" },
{ message = "^chore\\(release\\): prepare for", skip = true },
{ message = "^chore\\(deps.*\\)", skip = true },
{ message = "^chore\\(pr\\)", skip = true },
{ message = "^chore\\(pull\\)", skip = true },
{ message = "^chore|^ci", group = "<!-- 7 -->⚙️ Miscellaneous Tasks" },
{ body = ".*security", group = "<!-- 8 -->🛡️ Security" },
{ message = "^revert", group = "<!-- 9 -->◀️ Revert" },
]
# protect breaking changes from being skipped due to matching a skipping commit_parser
protect_breaking_commits = false
# filter out the commits that are not matched by commit parsers
filter_commits = false
# regex for matching git tags
# tag_pattern = "v[0-9].*"
# regex for skipping tags
# skip_tags = ""
# regex for ignoring tags
ignore_tags = ".*-nightly-.*"
# sort the tags topologically
topo_order = false
# sort the commits inside sections by oldest/newest order
sort_commits = "oldest"
# limit the number of commits included in the changelog.
# limit_commits = 42
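
The commit_parsers table above maps conventional-commit prefixes to changelog groups with first-match-wins regexes. The following standalone Rust sketch mirrors that matching order for illustration only; it uses the regex crate, omits the body-based security rule and the pr/pull skip rules, and is not git-cliff's actual implementation.

// Sketch: map a conventional-commit message to a changelog group, following
// the order of the commit_parsers entries above (first match wins).
use regex::Regex;

fn changelog_group(message: &str) -> Option<&'static str> {
    let rules: &[(&str, Option<&'static str>)] = &[
        (r"^feat", Some("🚀 Features")),
        (r"^fix", Some("🐛 Bug Fixes")),
        (r"^doc", Some("📚 Documentation")),
        (r"^perf", Some("⚡ Performance")),
        (r"^refactor", Some("🚜 Refactor")),
        (r"^style", Some("🎨 Styling")),
        (r"^test", Some("🧪 Testing")),
        (r"^chore\(release\): prepare for", None), // skipped in the config
        (r"^chore\(deps.*\)", None),               // skipped in the config
        (r"^chore|^ci", Some("⚙️ Miscellaneous Tasks")),
        (r"^revert", Some("◀️ Revert")),
    ];
    for (pattern, group) in rules {
        if Regex::new(pattern).unwrap().is_match(message) {
            return *group;
        }
    }
    None
}

fn main() {
    assert_eq!(changelog_group("feat: cluster information"), Some("🚀 Features"));
    assert_eq!(changelog_group("chore(deps): bump h2"), None);
}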

View File

@@ -29,6 +29,12 @@ store_key_prefix = ""
max_retry_times = 12
# Initial retry delay of procedures, increases exponentially
retry_delay = "500ms"
# Automatically split large values.
# GreptimeDB procedures use etcd as the default metadata storage backend.
# The maximum size of a single etcd request is 1.5 MiB.
# 1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved for the key)
# Comment out `max_metadata_value_size` to disable splitting of large values (no limit).
max_metadata_value_size = "1500KiB"
# Failure detectors options.
[failure_detector]
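
For context on `max_metadata_value_size` above: the comments indicate that values larger than this limit are split before being written to etcd. A minimal sketch of such a size-capped split over a byte slice, purely illustrative and not the actual GreptimeDB splitting code:

// Sketch: split a value into chunks no larger than the configured limit.
fn split_value(value: &[u8], max_size: usize) -> Vec<Vec<u8>> {
    value.chunks(max_size).map(|c| c.to_vec()).collect()
}

fn main() {
    let limit = 1500 * 1024; // 1500KiB, matching the default above
    let value = vec![0u8; 4 * 1024 * 1024]; // a 4 MiB value
    let parts = split_value(&value, limit);
    assert_eq!(parts.len(), 3); // 4 MiB split at 1500KiB yields 3 chunks
}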

View File

@@ -19,6 +19,12 @@ includes = [
"*.py",
]
excludes = [
# copied sources
"src/common/base/src/readable_size.rs",
"src/servers/src/repeated_field.rs",
]
[properties]
inceptionYear = 2023
copyrightOwner = "Greptime Team"

View File

@@ -18,7 +18,6 @@ greptime-proto.workspace = true
paste = "1.0"
prost.workspace = true
snafu.workspace = true
tonic.workspace = true
[build-dependencies]
tonic-build = "0.9"

View File

@@ -16,8 +16,9 @@ api.workspace = true
async-trait.workspace = true
common-error.workspace = true
common-macro.workspace = true
common-telemetry.workspace = true
digest = "0.10"
hex = { version = "0.4" }
notify.workspace = true
secrecy = { version = "0.8", features = ["serde", "alloc"] }
sha1 = "0.10"
snafu.workspace = true

View File

@@ -22,6 +22,9 @@ use snafu::{ensure, OptionExt};
use crate::error::{IllegalParamSnafu, InvalidConfigSnafu, Result, UserPasswordMismatchSnafu};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::static_user_provider::{StaticUserProvider, STATIC_USER_PROVIDER};
use crate::user_provider::watch_file_user_provider::{
WatchFileUserProvider, WATCH_FILE_USER_PROVIDER,
};
use crate::{UserInfoRef, UserProviderRef};
pub(crate) const DEFAULT_USERNAME: &str = "greptime";
@@ -43,6 +46,9 @@ pub fn user_provider_from_option(opt: &String) -> Result<UserProviderRef> {
StaticUserProvider::new(content).map(|p| Arc::new(p) as UserProviderRef)?;
Ok(provider)
}
WATCH_FILE_USER_PROVIDER => {
WatchFileUserProvider::new(content).map(|p| Arc::new(p) as UserProviderRef)
}
_ => InvalidConfigSnafu {
value: name.to_string(),
msg: "Invalid UserProviderOption",

View File

@@ -64,6 +64,13 @@ pub enum Error {
username: String,
},
#[snafu(display("Failed to initialize a watcher for file {}", path))]
FileWatch {
path: String,
#[snafu(source)]
error: notify::Error,
},
#[snafu(display("User is not authorized to perform this action"))]
PermissionDenied { location: Location },
}
@@ -73,6 +80,7 @@ impl ErrorExt for Error {
match self {
Error::InvalidConfig { .. } => StatusCode::InvalidArguments,
Error::IllegalParam { .. } => StatusCode::InvalidArguments,
Error::FileWatch { .. } => StatusCode::InvalidArguments,
Error::InternalState { .. } => StatusCode::Unexpected,
Error::Io { .. } => StatusCode::Internal,
Error::AuthBackend { .. } => StatusCode::Internal,

View File

@@ -13,10 +13,24 @@
// limitations under the License.
pub(crate) mod static_user_provider;
pub(crate) mod watch_file_user_provider;
use std::collections::HashMap;
use std::fs::File;
use std::io;
use std::io::BufRead;
use std::path::Path;
use secrecy::ExposeSecret;
use snafu::{ensure, OptionExt, ResultExt};
use crate::common::{Identity, Password};
use crate::error::Result;
use crate::UserInfoRef;
use crate::error::{
IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Result, UnsupportedPasswordTypeSnafu,
UserNotFoundSnafu, UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, UserInfoRef};
#[async_trait::async_trait]
pub trait UserProvider: Send + Sync {
@@ -44,3 +58,88 @@ pub trait UserProvider: Send + Sync {
Ok(user_info)
}
}
fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Vec<u8>>>> {
// check valid path
let path = Path::new(filepath);
if !path.exists() {
return Ok(None);
}
ensure!(
path.is_file(),
InvalidConfigSnafu {
value: filepath,
msg: "UserProvider file must be a file",
}
);
let file = File::open(path).context(IoSnafu)?;
let credential = io::BufReader::new(file)
.lines()
.map_while(std::result::Result::ok)
.filter_map(|line| {
if let Some((k, v)) = line.split_once('=') {
Some((k.to_string(), v.as_bytes().to_vec()))
} else {
None
}
})
.collect::<HashMap<String, Vec<u8>>>();
ensure!(
!credential.is_empty(),
InvalidConfigSnafu {
value: filepath,
msg: "UserProvider's file must contain at least one valid credential",
}
);
Ok(Some(credential))
}
fn authenticate_with_credential(
users: &HashMap<String, Vec<u8>>,
input_id: Identity<'_>,
input_pwd: Password<'_>,
) -> Result<UserInfoRef> {
match input_id {
Identity::UserId(username, _) => {
ensure!(
!username.is_empty(),
IllegalParamSnafu {
msg: "blank username"
}
);
let save_pwd = users.get(username).context(UserNotFoundSnafu {
username: username.to_string(),
})?;
match input_pwd {
Password::PlainText(pwd) => {
ensure!(
!pwd.expose_secret().is_empty(),
IllegalParamSnafu {
msg: "blank password"
}
);
if save_pwd == pwd.expose_secret().as_bytes() {
Ok(DefaultUserInfo::with_name(username))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
}
.fail()
}
}
Password::MysqlNativePassword(auth_data, salt) => {
auth_mysql(auth_data, salt, username, save_pwd)
.map(|_| DefaultUserInfo::with_name(username))
}
Password::PgMD5(_, _) => UnsupportedPasswordTypeSnafu {
password_type: "pg_md5",
}
.fail(),
}
}
}
}
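
The helpers above read a plain `user=password` file into a `HashMap<String, Vec<u8>>` and compare plaintext passwords byte for byte. Below is a stripped-down sketch of that flow using only std; `load_credentials` and `check_plaintext` are hypothetical names, and the snafu error plumbing and the MySQL/PG password paths are omitted.

// Sketch: parse "user=password" lines and verify a plaintext password,
// mirroring load_credential_from_file / authenticate_with_credential above.
use std::collections::HashMap;

fn load_credentials(content: &str) -> HashMap<String, Vec<u8>> {
    content
        .lines()
        .filter_map(|line| line.split_once('='))
        .map(|(user, pwd)| (user.to_string(), pwd.as_bytes().to_vec()))
        .collect()
}

fn check_plaintext(users: &HashMap<String, Vec<u8>>, user: &str, pwd: &str) -> bool {
    users.get(user).map(|saved| saved.as_slice() == pwd.as_bytes()).unwrap_or(false)
}

fn main() {
    let users = load_credentials("root=123456\nadmin=654321\n");
    assert!(check_plaintext(&users, "root", "123456"));
    assert!(!check_plaintext(&users, "root", "wrong"));
}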

View File

@@ -13,21 +13,13 @@
// limitations under the License.
use std::collections::HashMap;
use std::fs::File;
use std::io;
use std::io::BufRead;
use std::path::Path;
use async_trait::async_trait;
use secrecy::ExposeSecret;
use snafu::{ensure, OptionExt, ResultExt};
use snafu::OptionExt;
use crate::error::{
IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Result, UnsupportedPasswordTypeSnafu,
UserNotFoundSnafu, UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, Identity, Password, UserInfoRef, UserProvider};
use crate::error::{InvalidConfigSnafu, Result};
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const STATIC_USER_PROVIDER: &str = "static_user_provider";
@@ -43,32 +35,12 @@ impl StaticUserProvider {
})?;
return match mode {
"file" => {
// check valid path
let path = Path::new(content);
ensure!(path.exists() && path.is_file(), InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticUserProviderOption file must be a valid file path",
});
let file = File::open(path).context(IoSnafu)?;
let credential = io::BufReader::new(file)
.lines()
.map_while(std::result::Result::ok)
.filter_map(|line| {
if let Some((k, v)) = line.split_once('=') {
Some((k.to_string(), v.as_bytes().to_vec()))
} else {
None
}
})
.collect::<HashMap<String, Vec<u8>>>();
ensure!(!credential.is_empty(), InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticUserProviderOption file must contains at least one valid credential",
});
Ok(StaticUserProvider { users: credential, })
let users = load_credential_from_file(content)?
.context(InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticFileUserProvider must be a valid file path",
})?;
Ok(StaticUserProvider { users })
}
"cmd" => content
.split(',')
@@ -96,51 +68,8 @@ impl UserProvider for StaticUserProvider {
STATIC_USER_PROVIDER
}
async fn authenticate(
&self,
input_id: Identity<'_>,
input_pwd: Password<'_>,
) -> Result<UserInfoRef> {
match input_id {
Identity::UserId(username, _) => {
ensure!(
!username.is_empty(),
IllegalParamSnafu {
msg: "blank username"
}
);
let save_pwd = self.users.get(username).context(UserNotFoundSnafu {
username: username.to_string(),
})?;
match input_pwd {
Password::PlainText(pwd) => {
ensure!(
!pwd.expose_secret().is_empty(),
IllegalParamSnafu {
msg: "blank password"
}
);
return if save_pwd == pwd.expose_secret().as_bytes() {
Ok(DefaultUserInfo::with_name(username))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
}
.fail()
};
}
Password::MysqlNativePassword(auth_data, salt) => {
auth_mysql(auth_data, salt, username, save_pwd)
.map(|_| DefaultUserInfo::with_name(username))
}
Password::PgMD5(_, _) => UnsupportedPasswordTypeSnafu {
password_type: "pg_md5",
}
.fail(),
}
}
}
async fn authenticate(&self, id: Identity<'_>, pwd: Password<'_>) -> Result<UserInfoRef> {
authenticate_with_credential(&self.users, id, pwd)
}
async fn authorize(

View File

@@ -0,0 +1,215 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::path::Path;
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
use async_trait::async_trait;
use common_telemetry::{info, warn};
use notify::{EventKind, RecursiveMode, Watcher};
use snafu::{ensure, ResultExt};
use crate::error::{FileWatchSnafu, InvalidConfigSnafu, Result};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const WATCH_FILE_USER_PROVIDER: &str = "watch_file_user_provider";
type WatchedCredentialRef = Arc<Mutex<Option<HashMap<String, Vec<u8>>>>>;
/// A user provider that reads user credentials from a file and watches the file for changes.
///
/// An empty file is invalid, but a missing file means every user can be authenticated.
pub(crate) struct WatchFileUserProvider {
users: WatchedCredentialRef,
}
impl WatchFileUserProvider {
pub fn new(filepath: &str) -> Result<Self> {
let credential = load_credential_from_file(filepath)?;
let users = Arc::new(Mutex::new(credential));
let this = WatchFileUserProvider {
users: users.clone(),
};
let (tx, rx) = channel::<notify::Result<notify::Event>>();
let mut debouncer =
notify::recommended_watcher(tx).context(FileWatchSnafu { path: "<none>" })?;
let mut dir = Path::new(filepath).to_path_buf();
ensure!(
dir.pop(),
InvalidConfigSnafu {
value: filepath,
msg: "UserProvider path must be a file path",
}
);
debouncer
.watch(&dir, RecursiveMode::NonRecursive)
.context(FileWatchSnafu { path: filepath })?;
let filepath = filepath.to_string();
std::thread::spawn(move || {
let filename = Path::new(&filepath).file_name();
let _hold = debouncer;
while let Ok(res) = rx.recv() {
if let Ok(event) = res {
let is_this_file = event.paths.iter().any(|p| p.file_name() == filename);
let is_relevant_event = matches!(
event.kind,
EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)
);
if is_this_file && is_relevant_event {
info!(?event.kind, "User provider file {} changed", &filepath);
match load_credential_from_file(&filepath) {
Ok(credential) => {
let mut users =
users.lock().expect("users credential must be valid");
#[cfg(not(test))]
info!("User provider file {filepath} reloaded");
#[cfg(test)]
info!("User provider file {filepath} reloaded: {credential:?}");
*users = credential;
}
Err(err) => {
warn!(
?err,
"Failed to load credential from file {filepath}; keeping the old one",
)
}
}
}
}
}
});
Ok(this)
}
}
#[async_trait]
impl UserProvider for WatchFileUserProvider {
fn name(&self) -> &str {
WATCH_FILE_USER_PROVIDER
}
async fn authenticate(&self, id: Identity<'_>, password: Password<'_>) -> Result<UserInfoRef> {
let users = self.users.lock().expect("users credential must be valid");
if let Some(users) = users.as_ref() {
authenticate_with_credential(users, id, password)
} else {
match id {
Identity::UserId(id, _) => {
warn!(id, "User provider file does not exist, allowing all users");
Ok(DefaultUserInfo::with_name(id))
}
}
}
}
async fn authorize(&self, _: &str, _: &str, _: &UserInfoRef) -> Result<()> {
// default allow all
Ok(())
}
}
#[cfg(test)]
pub mod test {
use std::time::{Duration, Instant};
use common_test_util::temp_dir::create_temp_dir;
use tokio::time::sleep;
use crate::user_provider::watch_file_user_provider::WatchFileUserProvider;
use crate::user_provider::{Identity, Password};
use crate::UserProvider;
async fn test_authenticate(
provider: &dyn UserProvider,
username: &str,
password: &str,
ok: bool,
timeout: Option<Duration>,
) {
if let Some(timeout) = timeout {
let deadline = Instant::now().checked_add(timeout).unwrap();
loop {
let re = provider
.authenticate(
Identity::UserId(username, None),
Password::PlainText(password.to_string().into()),
)
.await;
if re.is_ok() == ok {
break;
} else if Instant::now() < deadline {
sleep(Duration::from_millis(100)).await;
} else {
panic!("timeout (username: {username}, password: {password}, expected: {ok})");
}
}
} else {
let re = provider
.authenticate(
Identity::UserId(username, None),
Password::PlainText(password.to_string().into()),
)
.await;
assert_eq!(
re.is_ok(),
ok,
"username: {}, password: {}",
username,
password
);
}
}
#[tokio::test]
async fn test_file_provider() {
common_telemetry::init_default_ut_logging();
let dir = create_temp_dir("test_file_provider");
let file_path = format!("{}/test_file_provider", dir.path().to_str().unwrap());
// write a tmp file
assert!(std::fs::write(&file_path, "root=123456\nadmin=654321\n").is_ok());
let provider = WatchFileUserProvider::new(file_path.as_str()).unwrap();
let timeout = Duration::from_secs(60);
test_authenticate(&provider, "root", "123456", true, None).await;
test_authenticate(&provider, "admin", "654321", true, None).await;
test_authenticate(&provider, "root", "654321", false, None).await;
// update the tmp file
assert!(std::fs::write(&file_path, "root=654321\n").is_ok());
test_authenticate(&provider, "root", "123456", false, Some(timeout)).await;
test_authenticate(&provider, "root", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", false, Some(timeout)).await;
// remove the tmp file
assert!(std::fs::remove_file(&file_path).is_ok());
test_authenticate(&provider, "root", "123456", true, Some(timeout)).await;
test_authenticate(&provider, "root", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", true, Some(timeout)).await;
// recreate the tmp file
assert!(std::fs::write(&file_path, "root=123456\n").is_ok());
test_authenticate(&provider, "root", "123456", true, Some(timeout)).await;
test_authenticate(&provider, "root", "654321", false, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", false, Some(timeout)).await;
}
}

View File

@@ -12,19 +12,16 @@ workspace = true
[dependencies]
api.workspace = true
arc-swap = "1.0"
arrow.workspace = true
arrow-schema.workspace = true
async-stream.workspace = true
async-trait = "0.1"
common-catalog.workspace = true
common-error.workspace = true
common-grpc.workspace = true
common-macro.workspace = true
common-meta.workspace = true
common-query.workspace = true
common-recordbatch.workspace = true
common-runtime.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
common-version.workspace = true
@@ -37,15 +34,13 @@ itertools.workspace = true
lazy_static.workspace = true
meta-client.workspace = true
moka = { workspace = true, features = ["future", "sync"] }
parking_lot = "0.12"
partition.workspace = true
paste = "1.0"
prometheus.workspace = true
regex.workspace = true
serde.workspace = true
serde_json.workspace = true
session.workspace = true
snafu.workspace = true
sql.workspace = true
store-api.workspace = true
table.workspace = true
tokio.workspace = true

View File

@@ -12,8 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod columns;
mod key_column_usage;
pub mod columns;
pub mod key_column_usage;
mod memory_table;
mod partitions;
mod predicate;
@@ -41,8 +41,7 @@ use table::error::{SchemaConversionSnafu, TablesRecordBatchSnafu};
use table::metadata::{
FilterPushDownType, TableInfoBuilder, TableInfoRef, TableMetaBuilder, TableType,
};
use table::thin_table::{ThinTable, ThinTableAdapter};
use table::TableRef;
use table::{Table, TableRef};
pub use table_names::*;
use self::columns::InformationSchemaColumns;
@@ -187,10 +186,9 @@ impl InformationSchemaProvider {
self.information_table(name).map(|table| {
let table_info = Self::table_info(self.catalog_name.clone(), &table);
let filter_pushdown = FilterPushDownType::Inexact;
let thin_table = ThinTable::new(table_info, filter_pushdown);
let data_source = Arc::new(InformationTableDataSource::new(table));
Arc::new(ThinTableAdapter::new(thin_table, data_source)) as _
let table = Table::new(table_info, filter_pushdown, data_source);
Arc::new(table)
})
}

View File

@@ -26,13 +26,16 @@ use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, DataType};
use datatypes::prelude::{ConcreteDataType, DataType, MutableVector};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, VectorRef};
use datatypes::vectors::{
ConstantVector, Int64Vector, Int64VectorBuilder, StringVector, StringVectorBuilder, VectorRef,
};
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use sql::statements;
use store_api::storage::{ScanRequest, TableId};
use super::{InformationTable, COLUMNS};
@@ -48,18 +51,42 @@ pub(super) struct InformationSchemaColumns {
catalog_manager: Weak<dyn CatalogManager>,
}
const TABLE_CATALOG: &str = "table_catalog";
const TABLE_SCHEMA: &str = "table_schema";
const TABLE_NAME: &str = "table_name";
const COLUMN_NAME: &str = "column_name";
const DATA_TYPE: &str = "data_type";
const SEMANTIC_TYPE: &str = "semantic_type";
const COLUMN_DEFAULT: &str = "column_default";
const IS_NULLABLE: &str = "is_nullable";
pub const TABLE_CATALOG: &str = "table_catalog";
pub const TABLE_SCHEMA: &str = "table_schema";
pub const TABLE_NAME: &str = "table_name";
pub const COLUMN_NAME: &str = "column_name";
const ORDINAL_POSITION: &str = "ordinal_position";
const CHARACTER_MAXIMUM_LENGTH: &str = "character_maximum_length";
const CHARACTER_OCTET_LENGTH: &str = "character_octet_length";
const NUMERIC_PRECISION: &str = "numeric_precision";
const NUMERIC_SCALE: &str = "numeric_scale";
const DATETIME_PRECISION: &str = "datetime_precision";
const CHARACTER_SET_NAME: &str = "character_set_name";
pub const COLLATION_NAME: &str = "collation_name";
pub const COLUMN_KEY: &str = "column_key";
pub const EXTRA: &str = "extra";
pub const PRIVILEGES: &str = "privileges";
const GENERATION_EXPRESSION: &str = "generation_expression";
// Extension field to keep greptime data type name
pub const GREPTIME_DATA_TYPE: &str = "greptime_data_type";
pub const DATA_TYPE: &str = "data_type";
pub const SEMANTIC_TYPE: &str = "semantic_type";
pub const COLUMN_DEFAULT: &str = "column_default";
pub const IS_NULLABLE: &str = "is_nullable";
const COLUMN_TYPE: &str = "column_type";
const COLUMN_COMMENT: &str = "column_comment";
pub const COLUMN_COMMENT: &str = "column_comment";
const SRS_ID: &str = "srs_id";
const INIT_CAPACITY: usize = 42;
// The maximum length of string type
const MAX_STRING_LENGTH: i64 = 2147483647;
const UTF8_CHARSET_NAME: &str = "utf8";
const UTF8_COLLATE_NAME: &str = "utf8_bin";
const PRI_COLUMN_KEY: &str = "PRI";
const TIME_INDEX_COLUMN_KEY: &str = "TIME INDEX";
const DEFAULT_PRIVILEGES: &str = "select,insert";
const EMPTY_STR: &str = "";
impl InformationSchemaColumns {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
@@ -75,12 +102,46 @@ impl InformationSchemaColumns {
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(ORDINAL_POSITION, ConcreteDataType::int64_datatype(), false),
ColumnSchema::new(
CHARACTER_MAXIMUM_LENGTH,
ConcreteDataType::int64_datatype(),
true,
),
ColumnSchema::new(
CHARACTER_OCTET_LENGTH,
ConcreteDataType::int64_datatype(),
true,
),
ColumnSchema::new(NUMERIC_PRECISION, ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(NUMERIC_SCALE, ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(DATETIME_PRECISION, ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(
CHARACTER_SET_NAME,
ConcreteDataType::string_datatype(),
true,
),
ColumnSchema::new(COLLATION_NAME, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(COLUMN_KEY, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(EXTRA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(PRIVILEGES, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(
GENERATION_EXPRESSION,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(
GREPTIME_DATA_TYPE,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(DATA_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(SEMANTIC_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_DEFAULT, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(IS_NULLABLE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_COMMENT, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(SRS_ID, ConcreteDataType::int64_datatype(), true),
]))
}
@@ -136,9 +197,18 @@ struct InformationSchemaColumnsBuilder {
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
column_names: StringVectorBuilder,
ordinal_positions: Int64VectorBuilder,
character_maximum_lengths: Int64VectorBuilder,
character_octet_lengths: Int64VectorBuilder,
numeric_precisions: Int64VectorBuilder,
numeric_scales: Int64VectorBuilder,
datetime_precisions: Int64VectorBuilder,
character_set_names: StringVectorBuilder,
collation_names: StringVectorBuilder,
column_keys: StringVectorBuilder,
greptime_data_types: StringVectorBuilder,
data_types: StringVectorBuilder,
semantic_types: StringVectorBuilder,
column_defaults: StringVectorBuilder,
is_nullables: StringVectorBuilder,
column_types: StringVectorBuilder,
@@ -159,6 +229,16 @@ impl InformationSchemaColumnsBuilder {
schema_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
ordinal_positions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
character_maximum_lengths: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
character_octet_lengths: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
numeric_precisions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
numeric_scales: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
datetime_precisions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
character_set_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
collation_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_keys: StringVectorBuilder::with_capacity(INIT_CAPACITY),
greptime_data_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
data_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
semantic_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_defaults: StringVectorBuilder::with_capacity(INIT_CAPACITY),
@@ -194,6 +274,7 @@ impl InformationSchemaColumnsBuilder {
};
self.add_column(
idx,
&predicates,
&catalog_name,
&schema_name,
@@ -208,8 +289,10 @@ impl InformationSchemaColumnsBuilder {
self.finish()
}
#[allow(clippy::too_many_arguments)]
fn add_column(
&mut self,
index: usize,
predicates: &Predicates,
catalog_name: &str,
schema_name: &str,
@@ -217,7 +300,16 @@ impl InformationSchemaColumnsBuilder {
semantic_type: &str,
column_schema: &ColumnSchema,
) {
let data_type = &column_schema.data_type.name();
// Use the SQL data type name
let data_type = statements::concrete_data_type_to_sql_data_type(&column_schema.data_type)
.map(|dt| dt.to_string().to_lowercase())
.unwrap_or_else(|_| column_schema.data_type.name());
let column_key = match semantic_type {
SEMANTIC_TYPE_PRIMARY_KEY => PRI_COLUMN_KEY,
SEMANTIC_TYPE_TIME_INDEX => TIME_INDEX_COLUMN_KEY,
_ => EMPTY_STR,
};
let row = [
(TABLE_CATALOG, &Value::from(catalog_name)),
@@ -226,6 +318,8 @@ impl InformationSchemaColumnsBuilder {
(COLUMN_NAME, &Value::from(column_schema.name.as_str())),
(DATA_TYPE, &Value::from(data_type.as_str())),
(SEMANTIC_TYPE, &Value::from(semantic_type)),
(ORDINAL_POSITION, &Value::from((index + 1) as i64)),
(COLUMN_KEY, &Value::from(column_key)),
];
if !predicates.eval(&row) {
@@ -236,7 +330,63 @@ impl InformationSchemaColumnsBuilder {
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.column_names.push(Some(&column_schema.name));
self.data_types.push(Some(data_type));
// Starts from 1
self.ordinal_positions.push(Some((index + 1) as i64));
if column_schema.data_type.is_string() {
self.character_maximum_lengths.push(Some(MAX_STRING_LENGTH));
self.character_octet_lengths.push(Some(MAX_STRING_LENGTH));
self.numeric_precisions.push(None);
self.numeric_scales.push(None);
self.datetime_precisions.push(None);
self.character_set_names.push(Some(UTF8_CHARSET_NAME));
self.collation_names.push(Some(UTF8_COLLATE_NAME));
} else if column_schema.data_type.is_numeric() || column_schema.data_type.is_decimal() {
self.character_maximum_lengths.push(None);
self.character_octet_lengths.push(None);
self.numeric_precisions.push(
column_schema
.data_type
.numeric_precision()
.map(|x| x as i64),
);
self.numeric_scales
.push(column_schema.data_type.numeric_scale().map(|x| x as i64));
self.datetime_precisions.push(None);
self.character_set_names.push(None);
self.collation_names.push(None);
} else {
self.character_maximum_lengths.push(None);
self.character_octet_lengths.push(None);
self.numeric_precisions.push(None);
self.numeric_scales.push(None);
match &column_schema.data_type {
ConcreteDataType::DateTime(datetime_type) => {
self.datetime_precisions
.push(Some(datetime_type.precision() as i64));
}
ConcreteDataType::Timestamp(ts_type) => {
self.datetime_precisions
.push(Some(ts_type.precision() as i64));
}
ConcreteDataType::Time(time_type) => {
self.datetime_precisions
.push(Some(time_type.precision() as i64));
}
_ => self.datetime_precisions.push(None),
}
self.character_set_names.push(None);
self.collation_names.push(None);
}
self.column_keys.push(Some(column_key));
self.greptime_data_types
.push(Some(&column_schema.data_type.name()));
self.data_types.push(Some(&data_type));
self.semantic_types.push(Some(semantic_type));
self.column_defaults.push(
column_schema
@@ -249,23 +399,52 @@ impl InformationSchemaColumnsBuilder {
} else {
self.is_nullables.push(Some("No"));
}
self.column_types.push(Some(data_type));
self.column_types.push(Some(&data_type));
self.column_comments
.push(column_schema.column_comment().map(|x| x.as_ref()));
}
fn finish(&mut self) -> Result<RecordBatch> {
let rows_num = self.collation_names.len();
let privileges = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec![DEFAULT_PRIVILEGES])),
rows_num,
));
let empty_string = Arc::new(ConstantVector::new(
Arc::new(StringVector::from(vec![EMPTY_STR])),
rows_num,
));
let srs_ids = Arc::new(ConstantVector::new(
Arc::new(Int64Vector::from(vec![None])),
rows_num,
));
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.column_names.finish()),
Arc::new(self.ordinal_positions.finish()),
Arc::new(self.character_maximum_lengths.finish()),
Arc::new(self.character_octet_lengths.finish()),
Arc::new(self.numeric_precisions.finish()),
Arc::new(self.numeric_scales.finish()),
Arc::new(self.datetime_precisions.finish()),
Arc::new(self.character_set_names.finish()),
Arc::new(self.collation_names.finish()),
Arc::new(self.column_keys.finish()),
empty_string.clone(),
privileges,
empty_string,
Arc::new(self.greptime_data_types.finish()),
Arc::new(self.data_types.finish()),
Arc::new(self.semantic_types.finish()),
Arc::new(self.column_defaults.finish()),
Arc::new(self.is_nullables.finish()),
Arc::new(self.column_types.finish()),
Arc::new(self.column_comments.finish()),
srs_ids,
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
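
ConstantVector above lets the builder store single-valued columns such as `privileges` and `srs_id` once and logically repeat them for every row. A toy standalone illustration of that idea follows; ConstantColumn is a made-up type, not the datatypes crate API.

// Toy sketch of a "constant column": store the value once, report it for every row.
struct ConstantColumn<T: Clone> {
    value: T,
    len: usize,
}

impl<T: Clone> ConstantColumn<T> {
    fn new(value: T, len: usize) -> Self {
        Self { value, len }
    }
    fn get(&self, _row: usize) -> T {
        self.value.clone()
    }
    fn len(&self) -> usize {
        self.len
    }
}

fn main() {
    let privileges = ConstantColumn::new("select,insert", 1_000);
    assert_eq!(privileges.get(42), "select,insert");
    assert_eq!(privileges.len(), 1_000);
}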

View File

@@ -37,13 +37,16 @@ use crate::error::{
use crate::information_schema::{InformationTable, Predicates};
use crate::CatalogManager;
const CONSTRAINT_SCHEMA: &str = "constraint_schema";
const CONSTRAINT_NAME: &str = "constraint_name";
const TABLE_CATALOG: &str = "table_catalog";
const TABLE_SCHEMA: &str = "table_schema";
const TABLE_NAME: &str = "table_name";
const COLUMN_NAME: &str = "column_name";
const ORDINAL_POSITION: &str = "ordinal_position";
pub const CONSTRAINT_SCHEMA: &str = "constraint_schema";
pub const CONSTRAINT_NAME: &str = "constraint_name";
// It's always `def` in MySQL
pub const TABLE_CATALOG: &str = "table_catalog";
// The real catalog name for this key column.
pub const REAL_TABLE_CATALOG: &str = "real_table_catalog";
pub const TABLE_SCHEMA: &str = "table_schema";
pub const TABLE_NAME: &str = "table_name";
pub const COLUMN_NAME: &str = "column_name";
pub const ORDINAL_POSITION: &str = "ordinal_position";
const INIT_CAPACITY: usize = 42;
/// The virtual table implementation for `information_schema.KEY_COLUMN_USAGE`.
@@ -76,6 +79,11 @@ impl InformationSchemaKeyColumnUsage {
),
ColumnSchema::new(CONSTRAINT_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_CATALOG, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(
REAL_TABLE_CATALOG,
ConcreteDataType::string_datatype(),
false,
),
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_NAME, ConcreteDataType::string_datatype(), false),
@@ -158,6 +166,7 @@ struct InformationSchemaKeyColumnUsageBuilder {
constraint_schema: StringVectorBuilder,
constraint_name: StringVectorBuilder,
table_catalog: StringVectorBuilder,
real_table_catalog: StringVectorBuilder,
table_schema: StringVectorBuilder,
table_name: StringVectorBuilder,
column_name: StringVectorBuilder,
@@ -179,6 +188,7 @@ impl InformationSchemaKeyColumnUsageBuilder {
constraint_schema: StringVectorBuilder::with_capacity(INIT_CAPACITY),
constraint_name: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_catalog: StringVectorBuilder::with_capacity(INIT_CAPACITY),
real_table_catalog: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_schema: StringVectorBuilder::with_capacity(INIT_CAPACITY),
table_name: StringVectorBuilder::with_capacity(INIT_CAPACITY),
column_name: StringVectorBuilder::with_capacity(INIT_CAPACITY),
@@ -223,6 +233,7 @@ impl InformationSchemaKeyColumnUsageBuilder {
&predicates,
&schema_name,
"TIME INDEX",
&catalog_name,
&schema_name,
&table_name,
&column.name,
@@ -231,6 +242,7 @@ impl InformationSchemaKeyColumnUsageBuilder {
}
if keys.contains(&idx) {
primary_constraints.push((
catalog_name.clone(),
schema_name.clone(),
table_name.clone(),
column.name.clone(),
@@ -244,13 +256,14 @@ impl InformationSchemaKeyColumnUsageBuilder {
}
}
for (i, (schema_name, table_name, column_name)) in
for (i, (catalog_name, schema_name, table_name, column_name)) in
primary_constraints.into_iter().enumerate()
{
self.add_key_column_usage(
&predicates,
&schema_name,
"PRIMARY",
&catalog_name,
&schema_name,
&table_name,
&column_name,
@@ -269,6 +282,7 @@ impl InformationSchemaKeyColumnUsageBuilder {
predicates: &Predicates,
constraint_schema: &str,
constraint_name: &str,
table_catalog: &str,
table_schema: &str,
table_name: &str,
column_name: &str,
@@ -277,6 +291,7 @@ impl InformationSchemaKeyColumnUsageBuilder {
let row = [
(CONSTRAINT_SCHEMA, &Value::from(constraint_schema)),
(CONSTRAINT_NAME, &Value::from(constraint_name)),
(REAL_TABLE_CATALOG, &Value::from(table_catalog)),
(TABLE_SCHEMA, &Value::from(table_schema)),
(TABLE_NAME, &Value::from(table_name)),
(COLUMN_NAME, &Value::from(column_name)),
@@ -291,6 +306,7 @@ impl InformationSchemaKeyColumnUsageBuilder {
self.constraint_schema.push(Some(constraint_schema));
self.constraint_name.push(Some(constraint_name));
self.table_catalog.push(Some("def"));
self.real_table_catalog.push(Some(table_catalog));
self.table_schema.push(Some(table_schema));
self.table_name.push(Some(table_name));
self.column_name.push(Some(column_name));
@@ -310,6 +326,7 @@ impl InformationSchemaKeyColumnUsageBuilder {
Arc::new(self.constraint_schema.finish()),
Arc::new(self.constraint_name.finish()),
Arc::new(self.table_catalog.finish()),
Arc::new(self.real_table_catalog.finish()),
Arc::new(self.table_schema.finish()),
Arc::new(self.table_name.finish()),
Arc::new(self.column_name.finish()),

View File

@@ -14,13 +14,15 @@
use std::sync::Arc;
use common_catalog::consts::MITO_ENGINE;
use common_catalog::consts::{METRIC_ENGINE, MITO_ENGINE};
use datatypes::prelude::{ConcreteDataType, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{Int64Vector, StringVector};
use crate::information_schema::table_names::*;
const NO_VALUE: &str = "NO";
/// Find the schema and columns by the table_name; only valid for memory tables.
/// Safety: the caller MUST ensure the table schema exists; panics otherwise.
pub fn get_schema_columns(table_name: &str) -> (SchemaRef, Vec<VectorRef>) {
@@ -59,14 +61,15 @@ pub fn get_schema_columns(table_name: &str) -> (SchemaRef, Vec<VectorRef>) {
"SAVEPOINTS",
]),
vec![
Arc::new(StringVector::from(vec![MITO_ENGINE])),
Arc::new(StringVector::from(vec!["DEFAULT"])),
Arc::new(StringVector::from(vec![MITO_ENGINE, METRIC_ENGINE])),
Arc::new(StringVector::from(vec!["DEFAULT", "YES"])),
Arc::new(StringVector::from(vec![
"Storage engine for time-series data",
"Storage engine for observability scenarios, which is adept at handling a large number of small tables, making it particularly suitable for cloud-native monitoring",
])),
Arc::new(StringVector::from(vec!["NO"])),
Arc::new(StringVector::from(vec!["NO"])),
Arc::new(StringVector::from(vec!["NO"])),
Arc::new(StringVector::from(vec![NO_VALUE, NO_VALUE])),
Arc::new(StringVector::from(vec![NO_VALUE, NO_VALUE])),
Arc::new(StringVector::from(vec![NO_VALUE, NO_VALUE])),
],
),

View File

@@ -364,6 +364,10 @@ impl KvBackend for MetaKvBackend {
"MetaKvBackend"
}
fn as_any(&self) -> &dyn Any {
self
}
async fn range(&self, req: RangeRequest) -> Result<RangeResponse> {
self.client
.range(req)
@@ -372,27 +376,6 @@ impl KvBackend for MetaKvBackend {
.context(ExternalSnafu)
}
async fn get(&self, key: &[u8]) -> Result<Option<KeyValue>> {
let mut response = self
.client
.range(RangeRequest::new().with_key(key))
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)?;
Ok(response.take_kvs().get_mut(0).map(|kv| KeyValue {
key: kv.take_key(),
value: kv.take_value(),
}))
}
async fn batch_put(&self, req: BatchPutRequest) -> Result<BatchPutResponse> {
self.client
.batch_put(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn put(&self, req: PutRequest) -> Result<PutResponse> {
self.client
.put(req)
@@ -401,17 +384,9 @@ impl KvBackend for MetaKvBackend {
.context(ExternalSnafu)
}
async fn delete_range(&self, req: DeleteRangeRequest) -> Result<DeleteRangeResponse> {
async fn batch_put(&self, req: BatchPutRequest) -> Result<BatchPutResponse> {
self.client
.delete_range(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn batch_delete(&self, req: BatchDeleteRequest) -> Result<BatchDeleteResponse> {
self.client
.batch_delete(req)
.batch_put(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
@@ -436,8 +411,33 @@ impl KvBackend for MetaKvBackend {
.context(ExternalSnafu)
}
fn as_any(&self) -> &dyn Any {
self
async fn delete_range(&self, req: DeleteRangeRequest) -> Result<DeleteRangeResponse> {
self.client
.delete_range(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn batch_delete(&self, req: BatchDeleteRequest) -> Result<BatchDeleteResponse> {
self.client
.batch_delete(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn get(&self, key: &[u8]) -> Result<Option<KeyValue>> {
let mut response = self
.client
.range(RangeRequest::new().with_key(key))
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)?;
Ok(response.take_kvs().get_mut(0).map(|kv| KeyValue {
key: kv.take_key(),
value: kv.take_value(),
}))
}
}

View File

@@ -23,8 +23,7 @@ use common_catalog::consts::{
};
use common_catalog::format_full_table_name;
use common_error::ext::BoxedError;
use common_meta::cache_invalidator::{CacheInvalidator, CacheInvalidatorRef, Context};
use common_meta::error::Result as MetaResult;
use common_meta::cache_invalidator::{CacheInvalidator, Context, MultiCacheInvalidator};
use common_meta::instruction::CacheIdent;
use common_meta::key::catalog_name::CatalogNameKey;
use common_meta::key::schema_name::SchemaNameKey;
@@ -44,8 +43,8 @@ use table::TableRef;
use crate::error::Error::{GetTableCache, TableCacheNotGet};
use crate::error::{
self as catalog_err, ListCatalogsSnafu, ListSchemasSnafu, ListTablesSnafu,
Result as CatalogResult, TableCacheNotGetSnafu, TableMetadataManagerSnafu,
InvalidTableInfoInCatalogSnafu, ListCatalogsSnafu, ListSchemasSnafu, ListTablesSnafu, Result,
TableCacheNotGetSnafu, TableMetadataManagerSnafu,
};
use crate::information_schema::InformationSchemaProvider;
use crate::CatalogManager;
@@ -57,10 +56,6 @@ use crate::CatalogManager;
/// comes from `SystemCatalog`, which is static and read-only.
#[derive(Clone)]
pub struct KvBackendCatalogManager {
// TODO(LFC): Maybe use a real implementation for Standalone mode.
// Now we use `NoopKvCacheInvalidator` for Standalone mode. In Standalone mode, the KV backend
// is implemented by RaftEngine. Maybe we need a cache for it?
cache_invalidator: CacheInvalidatorRef,
partition_manager: PartitionRuleManagerRef,
table_metadata_manager: TableMetadataManagerRef,
/// A sub-CatalogManager that handles system tables
@@ -68,18 +63,24 @@ pub struct KvBackendCatalogManager {
table_cache: AsyncCache<String, TableRef>,
}
fn make_table(table_info_value: TableInfoValue) -> CatalogResult<TableRef> {
let table_info = table_info_value
.table_info
.try_into()
.context(catalog_err::InvalidTableInfoInCatalogSnafu)?;
Ok(DistTable::table(Arc::new(table_info)))
struct TableCacheInvalidator {
table_cache: AsyncCache<String, TableRef>,
}
impl TableCacheInvalidator {
pub fn new(table_cache: AsyncCache<String, TableRef>) -> Self {
Self { table_cache }
}
}
#[async_trait::async_trait]
impl CacheInvalidator for KvBackendCatalogManager {
async fn invalidate(&self, ctx: &Context, caches: Vec<CacheIdent>) -> MetaResult<()> {
for cache in &caches {
impl CacheInvalidator for TableCacheInvalidator {
async fn invalidate(
&self,
_ctx: &Context,
caches: Vec<CacheIdent>,
) -> common_meta::error::Result<()> {
for cache in caches {
if let CacheIdent::TableName(table_name) = cache {
let table_cache_key = format_full_table_name(
&table_name.catalog_name,
@@ -89,7 +90,7 @@ impl CacheInvalidator for KvBackendCatalogManager {
self.table_cache.invalidate(&table_cache_key).await;
}
}
self.cache_invalidator.invalidate(ctx, caches).await
Ok(())
}
}
@@ -99,11 +100,21 @@ const TABLE_CACHE_TTL: Duration = Duration::from_secs(10 * 60);
const TABLE_CACHE_TTI: Duration = Duration::from_secs(5 * 60);
impl KvBackendCatalogManager {
pub fn new(backend: KvBackendRef, cache_invalidator: CacheInvalidatorRef) -> Arc<Self> {
pub async fn new(
backend: KvBackendRef,
multi_cache_invalidator: Arc<MultiCacheInvalidator>,
) -> Arc<Self> {
let table_cache: AsyncCache<String, TableRef> = CacheBuilder::new(TABLE_CACHE_MAX_CAPACITY)
.time_to_live(TABLE_CACHE_TTL)
.time_to_idle(TABLE_CACHE_TTI)
.build();
multi_cache_invalidator
.add_invalidator(Arc::new(TableCacheInvalidator::new(table_cache.clone())))
.await;
Arc::new_cyclic(|me| Self {
partition_manager: Arc::new(PartitionRuleManager::new(backend.clone())),
table_metadata_manager: Arc::new(TableMetadataManager::new(backend)),
cache_invalidator,
system_catalog: SystemCatalog {
catalog_manager: me.clone(),
catalog_cache: Cache::new(CATALOG_CACHE_MAX_CAPACITY),
@@ -112,10 +123,7 @@ impl KvBackendCatalogManager {
me.clone(),
)),
},
table_cache: CacheBuilder::new(TABLE_CACHE_MAX_CAPACITY)
.time_to_live(TABLE_CACHE_TTL)
.time_to_idle(TABLE_CACHE_TTI)
.build(),
table_cache,
})
}
@@ -134,12 +142,11 @@ impl CatalogManager for KvBackendCatalogManager {
self
}
async fn catalog_names(&self) -> CatalogResult<Vec<String>> {
async fn catalog_names(&self) -> Result<Vec<String>> {
let stream = self
.table_metadata_manager
.catalog_manager()
.catalog_names()
.await;
.catalog_names();
let keys = stream
.try_collect::<Vec<_>>()
@@ -150,12 +157,11 @@ impl CatalogManager for KvBackendCatalogManager {
Ok(keys)
}
async fn schema_names(&self, catalog: &str) -> CatalogResult<Vec<String>> {
async fn schema_names(&self, catalog: &str) -> Result<Vec<String>> {
let stream = self
.table_metadata_manager
.schema_manager()
.schema_names(catalog)
.await;
.schema_names(catalog);
let mut keys = stream
.try_collect::<BTreeSet<_>>()
.await
@@ -167,12 +173,11 @@ impl CatalogManager for KvBackendCatalogManager {
Ok(keys.into_iter().collect())
}
async fn table_names(&self, catalog: &str, schema: &str) -> CatalogResult<Vec<String>> {
async fn table_names(&self, catalog: &str, schema: &str) -> Result<Vec<String>> {
let stream = self
.table_metadata_manager
.table_name_manager()
.tables(catalog, schema)
.await;
.tables(catalog, schema);
let mut tables = stream
.try_collect::<Vec<_>>()
.await
@@ -186,7 +191,7 @@ impl CatalogManager for KvBackendCatalogManager {
Ok(tables.into_iter().collect())
}
async fn catalog_exists(&self, catalog: &str) -> CatalogResult<bool> {
async fn catalog_exists(&self, catalog: &str) -> Result<bool> {
self.table_metadata_manager
.catalog_manager()
.exists(CatalogNameKey::new(catalog))
@@ -194,7 +199,7 @@ impl CatalogManager for KvBackendCatalogManager {
.context(TableMetadataManagerSnafu)
}
async fn schema_exists(&self, catalog: &str, schema: &str) -> CatalogResult<bool> {
async fn schema_exists(&self, catalog: &str, schema: &str) -> Result<bool> {
if self.system_catalog.schema_exist(schema) {
return Ok(true);
}
@@ -206,7 +211,7 @@ impl CatalogManager for KvBackendCatalogManager {
.context(TableMetadataManagerSnafu)
}
async fn table_exists(&self, catalog: &str, schema: &str, table: &str) -> CatalogResult<bool> {
async fn table_exists(&self, catalog: &str, schema: &str, table: &str) -> Result<bool> {
if self.system_catalog.table_exist(schema, table) {
return Ok(true);
}
@@ -225,7 +230,7 @@ impl CatalogManager for KvBackendCatalogManager {
catalog: &str,
schema: &str,
table_name: &str,
) -> CatalogResult<Option<TableRef>> {
) -> Result<Option<TableRef>> {
if let Some(table) = self.system_catalog.table(catalog, schema, table_name) {
return Ok(Some(table));
}
@@ -259,7 +264,7 @@ impl CatalogManager for KvBackendCatalogManager {
}
.fail();
};
make_table(table_info_value)
build_table(table_info_value)
};
match self
@@ -282,7 +287,7 @@ impl CatalogManager for KvBackendCatalogManager {
&'a self,
catalog: &'a str,
schema: &'a str,
) -> BoxStream<'a, CatalogResult<TableRef>> {
) -> BoxStream<'a, Result<TableRef>> {
let sys_tables = try_stream!({
// System tables
let sys_table_names = self.system_catalog.table_names(schema);
@@ -297,7 +302,6 @@ impl CatalogManager for KvBackendCatalogManager {
.table_metadata_manager
.table_name_manager()
.tables(catalog, schema)
.await
.map_ok(|(_, v)| v.table_id());
const BATCH_SIZE: usize = 128;
let user_tables = try_stream!({
@@ -307,7 +311,7 @@ impl CatalogManager for KvBackendCatalogManager {
while let Some(table_ids) = table_id_chunks.next().await {
let table_ids = table_ids
.into_iter()
.collect::<Result<Vec<_>, _>>()
.collect::<std::result::Result<Vec<_>, _>>()
.map_err(BoxedError::new)
.context(ListTablesSnafu { catalog, schema })?;
@@ -319,7 +323,7 @@ impl CatalogManager for KvBackendCatalogManager {
.context(TableMetadataManagerSnafu)?;
for table_info_value in table_info_values.into_values() {
yield make_table(table_info_value)?;
yield build_table(table_info_value)?;
}
}
});
@@ -328,6 +332,14 @@ impl CatalogManager for KvBackendCatalogManager {
}
}
fn build_table(table_info_value: TableInfoValue) -> Result<TableRef> {
let table_info = table_info_value
.table_info
.try_into()
.context(InvalidTableInfoInCatalogSnafu)?;
Ok(DistTable::table(Arc::new(table_info)))
}
// TODO: This struct can hold a static map of all system tables when
// the upper layer (e.g., procedure) can inform the catalog manager
// a new catalog is created.
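
The refactor above swaps the catalog manager's single `cache_invalidator` field for a `MultiCacheInvalidator` that the new `TableCacheInvalidator` registers itself into. Here is a simplified, synchronous sketch of that fan-out pattern; the real trait in common-meta is async and takes a `Context` and a list of `CacheIdent`s rather than a string key.

// Simplified, synchronous sketch of the fan-out pattern: a composite invalidator
// forwards every invalidation to all registered invalidators.
use std::sync::{Arc, Mutex};

trait CacheInvalidator: Send + Sync {
    fn invalidate(&self, key: &str);
}

#[derive(Default)]
struct MultiCacheInvalidator {
    invalidators: Mutex<Vec<Arc<dyn CacheInvalidator>>>,
}

impl MultiCacheInvalidator {
    fn add_invalidator(&self, invalidator: Arc<dyn CacheInvalidator>) {
        self.invalidators.lock().unwrap().push(invalidator);
    }
}

impl CacheInvalidator for MultiCacheInvalidator {
    fn invalidate(&self, key: &str) {
        // Fan the invalidation out to every registered invalidator.
        for invalidator in self.invalidators.lock().unwrap().iter() {
            invalidator.invalidate(key);
        }
    }
}

struct TableCacheInvalidator;
impl CacheInvalidator for TableCacheInvalidator {
    fn invalidate(&self, key: &str) {
        println!("evicting table cache entry {key}");
    }
}

fn main() {
    let multi = MultiCacheInvalidator::default();
    multi.add_invalidator(Arc::new(TableCacheInvalidator));
    multi.invalidate("greptime.public.t");
}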

View File

@@ -19,10 +19,10 @@ use std::any::Any;
use std::fmt::{Debug, Formatter};
use std::sync::Arc;
use api::v1::CreateTableExpr;
use futures::future::BoxFuture;
use futures_util::stream::BoxStream;
use table::metadata::TableId;
use table::requests::CreateTableRequest;
use table::TableRef;
use crate::error::Result;
@@ -75,9 +75,9 @@ pub type OpenSystemTableHook =
/// Register system table request:
/// - When system table is already created and registered, the hook will be called
/// with table ref after opening the system table
/// - When system table is not exists, create and register the table by create_table_request and calls open_hook with the created table.
/// - When the system table does not exist, create and register the table by `create_table_expr` and call `open_hook` with the created table.
pub struct RegisterSystemTableRequest {
pub create_table_request: CreateTableRequest,
pub create_table_expr: CreateTableExpr,
pub open_hook: Option<OpenSystemTableHook>,
}

View File

@@ -16,7 +16,6 @@ arc-swap = "1.6"
arrow-flight.workspace = true
async-stream.workspace = true
async-trait.workspace = true
common-base.workspace = true
common-catalog.workspace = true
common-error.workspace = true
common-grpc.workspace = true
@@ -25,10 +24,6 @@ common-meta.workspace = true
common-query.workspace = true
common-recordbatch.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
datafusion.workspace = true
datatypes.workspace = true
derive_builder.workspace = true
enum_dispatch = "0.3"
futures-util.workspace = true
lazy_static.workspace = true
@@ -37,9 +32,7 @@ parking_lot = "0.12"
prometheus.workspace = true
prost.workspace = true
rand.workspace = true
serde.workspace = true
serde_json.workspace = true
session.workspace = true
snafu.workspace = true
tokio.workspace = true
tokio-stream = { workspace = true, features = ["net"] }

View File

@@ -16,7 +16,6 @@ tokio-console = ["common-telemetry/tokio-console"]
workspace = true
[dependencies]
anymap = "1.0.0-beta.2"
async-trait.workspace = true
auth.workspace = true
catalog.workspace = true
@@ -52,7 +51,6 @@ meta-client.workspace = true
meta-srv.workspace = true
mito2.workspace = true
nu-ansi-term = "0.46"
partition.workspace = true
plugins.workspace = true
prometheus.workspace = true
prost.workspace = true

View File

@@ -13,5 +13,8 @@
// limitations under the License.
fn main() {
// Trigger this script if the git branch/commit changes
println!("cargo:rerun-if-changed=.git/refs/heads");
common_version::setup_build_info();
}

View File

@@ -106,9 +106,15 @@ impl TableMetadataBencher {
.await
.unwrap();
let start = Instant::now();
let table_info = table_info.unwrap();
let table_id = table_info.table_info.ident.table_id;
let _ = self
.table_metadata_manager
.delete_table_metadata(&table_info.unwrap(), &table_route.unwrap())
.delete_table_metadata(
table_id,
&table_info.table_name(),
table_route.unwrap().region_routes().unwrap(),
)
.await;
start.elapsed()
},

View File

@@ -22,6 +22,7 @@ use catalog::kvbackend::{
use client::{Client, Database, OutputData, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_base::Plugins;
use common_error::ext::ErrorExt;
use common_meta::cache_invalidator::MultiCacheInvalidator;
use common_query::Output;
use common_recordbatch::RecordBatches;
use common_telemetry::logging;
@@ -252,9 +253,11 @@ async fn create_query_engine(meta_addr: &str) -> Result<DatafusionQueryEngine> {
let cached_meta_backend =
Arc::new(CachedMetaKvBackendBuilder::new(meta_client.clone()).build());
let multi_cache_invalidator = Arc::new(MultiCacheInvalidator::with_invalidators(vec![
cached_meta_backend.clone(),
]));
let catalog_list =
KvBackendCatalogManager::new(cached_meta_backend.clone(), cached_meta_backend);
KvBackendCatalogManager::new(cached_meta_backend.clone(), multi_cache_invalidator).await;
let plugins: Plugins = Default::default();
let state = Arc::new(QueryEngineState::new(
catalog_list,

View File

@@ -16,9 +16,10 @@ use std::sync::Arc;
use std::time::Duration;
use async_trait::async_trait;
use catalog::kvbackend::CachedMetaKvBackendBuilder;
use catalog::kvbackend::{CachedMetaKvBackendBuilder, KvBackendCatalogManager};
use clap::Parser;
use client::client_manager::DatanodeClients;
use common_meta::cache_invalidator::MultiCacheInvalidator;
use common_meta::heartbeat::handler::parse_mailbox_message::ParseMailboxMessageHandler;
use common_meta::heartbeat::handler::HandlerGroupExecutor;
use common_telemetry::logging;
@@ -247,11 +248,19 @@ impl StartCommand {
.cache_tti(cache_tti)
.build();
let cached_meta_backend = Arc::new(cached_meta_backend);
let multi_cache_invalidator = Arc::new(MultiCacheInvalidator::with_invalidators(vec![
cached_meta_backend.clone(),
]));
let catalog_manager = KvBackendCatalogManager::new(
cached_meta_backend.clone(),
multi_cache_invalidator.clone(),
)
.await;
let executor = HandlerGroupExecutor::new(vec![
Arc::new(ParseMailboxMessageHandler),
Arc::new(InvalidateTableCacheHandler::new(
cached_meta_backend.clone(),
multi_cache_invalidator.clone(),
)),
]);
@@ -263,11 +272,12 @@ impl StartCommand {
let mut instance = FrontendBuilder::new(
cached_meta_backend.clone(),
catalog_manager,
Arc::new(DatanodeClients::default()),
meta_client,
)
.with_cache_invalidator(cached_meta_backend)
.with_plugin(plugins.clone())
.with_cache_invalidator(multi_cache_invalidator)
.with_heartbeat_task(heartbeat_task)
.try_build()
.await

View File

@@ -218,6 +218,7 @@ impl StartCommand {
mod tests {
use std::io::Write;
use common_base::readable_size::ReadableSize;
use common_test_util::temp_dir::create_named_temp_file;
use meta_srv::selector::SelectorType;
@@ -297,6 +298,10 @@ mod tests {
.first_heartbeat_estimate
.as_millis()
);
assert_eq!(
options.procedure.max_metadata_value_size,
Some(ReadableSize::kb(1500))
);
}
#[test]

View File

@@ -16,10 +16,11 @@ use std::sync::Arc;
use std::{fs, path};
use async_trait::async_trait;
use catalog::kvbackend::KvBackendCatalogManager;
use clap::Parser;
use common_catalog::consts::MIN_USER_TABLE_ID;
use common_config::{metadata_store_dir, KvBackendConfig};
use common_meta::cache_invalidator::DummyCacheInvalidator;
use common_meta::cache_invalidator::{CacheInvalidatorRef, MultiCacheInvalidator};
use common_meta::datanode_manager::DatanodeManagerRef;
use common_meta::ddl::table_meta::{TableMetadataAllocator, TableMetadataAllocatorRef};
use common_meta::ddl::ProcedureExecutorRef;
@@ -399,6 +400,10 @@ impl StartCommand {
.await
.context(StartFrontendSnafu)?;
let multi_cache_invalidator = Arc::new(MultiCacheInvalidator::default());
let catalog_manager =
KvBackendCatalogManager::new(kv_backend.clone(), multi_cache_invalidator.clone()).await;
let builder =
DatanodeBuilder::new(dn_opts, fe_plugins.clone()).with_kv_backend(kv_backend.clone());
let datanode = builder.build().await.context(StartDatanodeSnafu)?;
@@ -422,22 +427,27 @@ impl StartCommand {
let table_meta_allocator = Arc::new(TableMetadataAllocator::new(
table_id_sequence,
wal_options_allocator.clone(),
table_metadata_manager.table_name_manager().clone(),
));
let ddl_task_executor = Self::create_ddl_task_executor(
table_metadata_manager,
procedure_manager.clone(),
datanode_manager.clone(),
multi_cache_invalidator,
table_meta_allocator,
)
.await?;
let mut frontend = FrontendBuilder::new(kv_backend, datanode_manager, ddl_task_executor)
.with_plugin(fe_plugins.clone())
.try_build()
.await
.context(StartFrontendSnafu)?;
let mut frontend = FrontendBuilder::new(
kv_backend,
catalog_manager,
datanode_manager,
ddl_task_executor,
)
.with_plugin(fe_plugins.clone())
.try_build()
.await
.context(StartFrontendSnafu)?;
let servers = Services::new(fe_opts.clone(), Arc::new(frontend.clone()), fe_plugins)
.build()
@@ -459,16 +469,18 @@ impl StartCommand {
table_metadata_manager: TableMetadataManagerRef,
procedure_manager: ProcedureManagerRef,
datanode_manager: DatanodeManagerRef,
cache_invalidator: CacheInvalidatorRef,
table_meta_allocator: TableMetadataAllocatorRef,
) -> Result<ProcedureExecutorRef> {
let procedure_executor: ProcedureExecutorRef = Arc::new(
DdlManager::try_new(
procedure_manager,
datanode_manager,
Arc::new(DummyCacheInvalidator),
cache_invalidator,
table_metadata_manager,
table_meta_allocator,
Arc::new(MemoryRegionKeeper::default()),
true,
)
.context(InitDdlManagerSnafu)?,
);

View File

@@ -1,20 +1,6 @@
// Copyright (c) 2017-present, PingCAP, Inc. Licensed under Apache-2.0.
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// This file is copied from https://github.com/tikv/raft-engine/blob/8dd2a39f359ff16f5295f35343f626e0c10132fa/src/util.rs
// This file is copied from https://github.com/tikv/raft-engine/blob/0.3.0/src/util.rs
use std::fmt::{self, Debug, Display, Write};
use std::ops::{Div, Mul};

View File

@@ -55,10 +55,10 @@ pub fn build_db_string(catalog: &str, schema: &str) -> String {
/// schema name
/// - if `[<catalog>-]` is provided, we split database name with `-` and use
/// `<catalog>` and `<schema>`.
pub fn parse_catalog_and_schema_from_db_string(db: &str) -> (&str, &str) {
pub fn parse_catalog_and_schema_from_db_string(db: &str) -> (String, String) {
match parse_optional_catalog_and_schema_from_db_string(db) {
(Some(catalog), schema) => (catalog, schema),
(None, schema) => (DEFAULT_CATALOG_NAME, schema),
(None, schema) => (DEFAULT_CATALOG_NAME.to_string(), schema),
}
}
@@ -66,12 +66,12 @@ pub fn parse_catalog_and_schema_from_db_string(db: &str) -> (&str, &str) {
///
/// Similar to [`parse_catalog_and_schema_from_db_string`] but returns an optional
/// catalog if it's not provided in the database name.
pub fn parse_optional_catalog_and_schema_from_db_string(db: &str) -> (Option<&str>, &str) {
pub fn parse_optional_catalog_and_schema_from_db_string(db: &str) -> (Option<String>, String) {
let parts = db.splitn(2, '-').collect::<Vec<&str>>();
if parts.len() == 2 {
(Some(parts[0]), parts[1])
(Some(parts[0].to_lowercase()), parts[1].to_lowercase())
} else {
(None, db)
(None, db.to_lowercase())
}
}
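A condensed sketch of the new behaviour (inputs are lowercased and owned Strings are returned), written as it could sit in the tests below:

assert_eq!(
    ("catalog".to_string(), "schema".to_string()),
    parse_catalog_and_schema_from_db_string("CATALOG-SCHEMA")
);
assert_eq!(
    (None, "my_db".to_string()),
    parse_optional_catalog_and_schema_from_db_string("MY_DB")
);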
@@ -88,32 +88,37 @@ mod tests {
#[test]
fn test_parse_catalog_and_schema() {
assert_eq!(
(DEFAULT_CATALOG_NAME, "fullschema"),
(DEFAULT_CATALOG_NAME.to_string(), "fullschema".to_string()),
parse_catalog_and_schema_from_db_string("fullschema")
);
assert_eq!(
("catalog", "schema"),
("catalog".to_string(), "schema".to_string()),
parse_catalog_and_schema_from_db_string("catalog-schema")
);
assert_eq!(
("catalog", "schema1-schema2"),
("catalog".to_string(), "schema1-schema2".to_string()),
parse_catalog_and_schema_from_db_string("catalog-schema1-schema2")
);
assert_eq!(
(None, "fullschema"),
(None, "fullschema".to_string()),
parse_optional_catalog_and_schema_from_db_string("fullschema")
);
assert_eq!(
(Some("catalog"), "schema"),
(Some("catalog".to_string()), "schema".to_string()),
parse_optional_catalog_and_schema_from_db_string("catalog-schema")
);
assert_eq!(
(Some("catalog"), "schema1-schema2"),
(Some("catalog".to_string()), "schema".to_string()),
parse_optional_catalog_and_schema_from_db_string("CATALOG-SCHEMA")
);
assert_eq!(
(Some("catalog".to_string()), "schema1-schema2".to_string()),
parse_optional_catalog_and_schema_from_db_string("catalog-schema1-schema2")
);
}

View File

@@ -9,7 +9,6 @@ workspace = true
[dependencies]
common-base.workspace = true
humantime-serde.workspace = true
num_cpus.workspace = true
serde.workspace = true
sysinfo.workspace = true

View File

@@ -8,7 +8,6 @@ license.workspace = true
workspace = true
[dependencies]
arrow.workspace = true
bigdecimal.workspace = true
common-error.workspace = true
common-macro.workspace = true

View File

@@ -11,7 +11,6 @@ workspace = true
api.workspace = true
arc-swap = "1.0"
async-trait.workspace = true
chrono-tz = "0.6"
common-base.workspace = true
common-catalog.workspace = true
common-error.workspace = true
@@ -24,7 +23,6 @@ common-time.workspace = true
common-version.workspace = true
datafusion.workspace = true
datatypes.workspace = true
libc = "0.2"
num = "0.4"
num-traits = "0.2"
once_cell.workspace = true

View File

@@ -23,7 +23,7 @@ use datatypes::prelude::VectorRef;
use datatypes::types::TimestampType;
use datatypes::value::Value;
use datatypes::vectors::{
StringVector, TimestampMicrosecondVector, TimestampMillisecondVector,
Int64Vector, StringVector, TimestampMicrosecondVector, TimestampMillisecondVector,
TimestampNanosecondVector, TimestampSecondVector, Vector,
};
use snafu::{ensure, OptionExt};
@@ -43,6 +43,7 @@ fn convert_to_timezone(arg: &str) -> Option<Timezone> {
fn convert_to_timestamp(arg: &Value) -> Option<Timestamp> {
match arg {
Value::Timestamp(ts) => Some(*ts),
Value::Int64(i) => Some(Timestamp::new_millisecond(*i)),
_ => None,
}
}
@@ -66,6 +67,8 @@ impl Function for ToTimezoneFunction {
fn signature(&self) -> Signature {
helper::one_of_sigs2(
vec![
ConcreteDataType::int32_datatype(),
ConcreteDataType::int64_datatype(),
ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::timestamp_millisecond_datatype(),
ConcreteDataType::timestamp_microsecond_datatype(),
@@ -86,39 +89,45 @@ impl Function for ToTimezoneFunction {
}
);
// TODO: maybe support epoch timestamp? https://github.com/GreptimeTeam/greptimedb/issues/3477
let ts = columns[0].data_type().as_timestamp().with_context(|| {
UnsupportedInputDataTypeSnafu {
let array = columns[0].to_arrow_array();
let times = match columns[0].data_type() {
ConcreteDataType::Int64(_) | ConcreteDataType::Int32(_) => {
let vector = Int64Vector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
ConcreteDataType::Timestamp(ts) => match ts {
TimestampType::Second(_) => {
let vector = TimestampSecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Millisecond(_) => {
let vector = TimestampMillisecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Microsecond(_) => {
let vector = TimestampMicrosecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Nanosecond(_) => {
let vector = TimestampNanosecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
},
_ => UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
}
})?;
let array = columns[0].to_arrow_array();
let times = match ts {
TimestampType::Second(_) => {
let vector = TimestampSecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Millisecond(_) => {
let vector = TimestampMillisecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Microsecond(_) => {
let vector = TimestampMicrosecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Nanosecond(_) => {
let vector = TimestampNanosecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
.fail()?,
};
let tzs = {
@@ -153,7 +162,7 @@ mod tests {
use datatypes::timestamp::{
TimestampMicrosecond, TimestampMillisecond, TimestampNanosecond, TimestampSecond,
};
use datatypes::vectors::StringVector;
use datatypes::vectors::{Int64Vector, StringVector};
use super::*;
@@ -257,4 +266,48 @@ mod tests {
let expect_times: VectorRef = Arc::new(StringVector::from(results));
assert_eq!(expect_times, vector);
}
#[test]
fn test_numerical_to_timezone() {
let f = ToTimezoneFunction;
let results = vec![
Some("1969-12-31 19:00:00.001"),
None,
Some("1970-01-01 03:00:00.001"),
None,
Some("2024-03-26 23:01:50"),
None,
Some("2024-03-27 06:02:00"),
None,
];
let times: Vec<Option<i64>> = vec![
Some(1),
None,
Some(1),
None,
Some(1711508510000),
None,
Some(1711508520000),
None,
];
let ts_vector: Int64Vector = Int64Vector::from_owned_iterator(times.into_iter());
let tzs = vec![
Some("America/New_York"),
None,
Some("Europe/Moscow"),
None,
Some("America/New_York"),
None,
Some("Europe/Moscow"),
None,
];
let args: Vec<VectorRef> = vec![
Arc::new(ts_vector),
Arc::new(StringVector::from(tzs.clone())),
];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(8, vector.len());
let expect_times: VectorRef = Arc::new(StringVector::from(results));
assert_eq!(expect_times, vector);
}
}
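A condensed sketch of the integer path exercised above, assuming the same imports as this test module (an i64 input is interpreted as epoch milliseconds):

let f = ToTimezoneFunction;
let args: Vec<VectorRef> = vec![
    // 1711508510000 ms since the epoch, converted to America/New_York local time.
    Arc::new(Int64Vector::from_owned_iterator(vec![Some(1711508510000i64)].into_iter())),
    Arc::new(StringVector::from(vec![Some("America/New_York")])),
];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
let expected: VectorRef = Arc::new(StringVector::from(vec![Some("2024-03-26 23:01:50")]));
assert_eq!(expected, vector);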

View File

@@ -9,12 +9,10 @@ workspace = true
[dependencies]
async-trait.workspace = true
common-error.workspace = true
common-runtime.workspace = true
common-telemetry.workspace = true
reqwest.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
uuid.workspace = true

View File

@@ -9,13 +9,11 @@ workspace = true
[dependencies]
api.workspace = true
async-trait.workspace = true
common-base.workspace = true
common-catalog.workspace = true
common-error.workspace = true
common-macro.workspace = true
common-query.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
datatypes.workspace = true
snafu.workspace = true

View File

@@ -10,8 +10,6 @@ workspace = true
[dependencies]
api.workspace = true
arrow-flight.workspace = true
async-trait = "0.1"
backtrace = "0.3"
common-base.workspace = true
common-error.workspace = true
common-macro.workspace = true
@@ -20,10 +18,8 @@ common-runtime.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
dashmap.workspace = true
datafusion.workspace = true
datatypes.workspace = true
flatbuffers = "23.1"
futures = "0.3"
lazy_static.workspace = true
prost.workspace = true
snafu.workspace = true

View File

@@ -13,7 +13,6 @@ workspace = true
[dependencies]
api.workspace = true
async-recursion = "1.0"
async-stream.workspace = true
async-trait.workspace = true
base64.workspace = true
bytes.workspace = true
@@ -26,7 +25,6 @@ common-macro.workspace = true
common-procedure.workspace = true
common-procedure-test.workspace = true
common-recordbatch.workspace = true
common-runtime.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
common-wal.workspace = true
@@ -53,6 +51,7 @@ strum.workspace = true
table.workspace = true
tokio.workspace = true
tonic.workspace = true
typetag = "0.2"
[dev-dependencies]
chrono.workspace = true

View File

@@ -14,6 +14,8 @@
use std::sync::Arc;
use tokio::sync::RwLock;
use crate::error::Result;
use crate::instruction::CacheIdent;
use crate::key::table_info::TableInfoKey;
@@ -58,6 +60,34 @@ impl CacheInvalidator for DummyCacheInvalidator {
}
}
#[derive(Default)]
pub struct MultiCacheInvalidator {
invalidators: RwLock<Vec<CacheInvalidatorRef>>,
}
impl MultiCacheInvalidator {
pub fn with_invalidators(invalidators: Vec<CacheInvalidatorRef>) -> Self {
Self {
invalidators: RwLock::new(invalidators),
}
}
pub async fn add_invalidator(&self, invalidator: CacheInvalidatorRef) {
self.invalidators.write().await.push(invalidator);
}
}
#[async_trait::async_trait]
impl CacheInvalidator for MultiCacheInvalidator {
async fn invalidate(&self, ctx: &Context, caches: Vec<CacheIdent>) -> Result<()> {
let invalidators = self.invalidators.read().await;
for invalidator in invalidators.iter() {
invalidator.invalidate(ctx, caches.clone()).await?;
}
Ok(())
}
}
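A minimal usage sketch, assuming CacheInvalidatorRef is an Arc over a dyn CacheInvalidator as used by the standalone start-up code above (paths written from inside this crate):

use std::sync::Arc;

use crate::cache_invalidator::{
    CacheInvalidator, Context, DummyCacheInvalidator, MultiCacheInvalidator,
};
use crate::error::Result;
use crate::instruction::CacheIdent;

async fn broadcast_invalidation() -> Result<()> {
    // Start empty; concrete invalidators can be registered after construction,
    // which is what the standalone start-up relies on.
    let multi = MultiCacheInvalidator::default();
    multi.add_invalidator(Arc::new(DummyCacheInvalidator)).await;

    // Fans the invalidation out to every registered invalidator, in order.
    multi
        .invalidate(&Context::default(), vec![CacheIdent::TableId(1024)])
        .await
}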
#[async_trait::async_trait]
impl<T> CacheInvalidator for T
where

View File

@@ -0,0 +1,300 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::str::FromStr;
use common_error::ext::ErrorExt;
use lazy_static::lazy_static;
use regex::Regex;
use serde::{Deserialize, Serialize};
use snafu::{ensure, OptionExt, ResultExt};
use crate::error::{
DecodeJsonSnafu, EncodeJsonSnafu, Error, FromUtf8Snafu, InvalidNodeInfoKeySnafu,
InvalidRoleSnafu, ParseNumSnafu, Result,
};
use crate::peer::Peer;
const CLUSTER_NODE_INFO_PREFIX: &str = "__meta_cluster_node_info";
lazy_static! {
static ref CLUSTER_NODE_INFO_PREFIX_PATTERN: Regex = Regex::new(&format!(
"^{CLUSTER_NODE_INFO_PREFIX}-([0-9]+)-([0-9]+)-([0-9]+)$"
))
.unwrap();
}
/// [ClusterInfo] provides information about the cluster.
#[async_trait::async_trait]
pub trait ClusterInfo {
type Error: ErrorExt;
/// List all nodes by role in the cluster. If `role` is `None`, list all nodes.
async fn list_nodes(
&self,
role: Option<Role>,
) -> std::result::Result<Vec<NodeInfo>, Self::Error>;
// TODO(jeremy): Other info, like region status, etc.
}
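A hedged sketch of a caller of this trait; the concrete implementor (e.g. the meta client) is assumed and not shown here:

use crate::cluster::{ClusterInfo, NodeStatus, Role};

async fn list_datanodes<C: ClusterInfo>(cluster: &C) -> std::result::Result<(), C::Error> {
    // Passing `None` instead would list nodes of every role.
    for node in cluster.list_nodes(Some(Role::Datanode)).await? {
        if let NodeStatus::Datanode(status) = &node.status {
            println!(
                "datanode {} at {}: {} leader / {} follower regions",
                node.peer.id, node.peer.addr, status.leader_regions, status.follower_regions
            );
        }
    }
    Ok(())
}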
/// The key of [NodeInfo] in the storage. The format is `__meta_cluster_node_info-{cluster_id}-{role}-{node_id}`.
#[derive(Debug, Clone, Eq, Hash, PartialEq, Serialize, Deserialize)]
pub struct NodeInfoKey {
/// The cluster id.
pub cluster_id: u64,
/// The role of the node. It can be [Role::Datanode], [Role::Frontend], or [Role::Metasrv].
pub role: Role,
/// The node id.
pub node_id: u64,
}
impl NodeInfoKey {
pub fn key_prefix_with_cluster_id(cluster_id: u64) -> String {
format!("{}-{}-", CLUSTER_NODE_INFO_PREFIX, cluster_id)
}
pub fn key_prefix_with_role(cluster_id: u64, role: Role) -> String {
format!(
"{}-{}-{}-",
CLUSTER_NODE_INFO_PREFIX,
cluster_id,
i32::from(role)
)
}
}
/// The information of a node in the cluster.
#[derive(Debug, Serialize, Deserialize)]
pub struct NodeInfo {
/// The peer information. [node_id, address]
pub peer: Peer,
/// Last activity time in milliseconds.
pub last_activity_ts: i64,
/// The status of the node. Different roles have different node status.
pub status: NodeStatus,
}
#[derive(Debug, Clone, Eq, Hash, PartialEq, Serialize, Deserialize)]
pub enum Role {
Datanode,
Frontend,
Metasrv,
}
#[derive(Debug, Serialize, Deserialize)]
pub enum NodeStatus {
Datanode(DatanodeStatus),
Frontend(FrontendStatus),
Metasrv(MetasrvStatus),
}
/// The status of a datanode.
#[derive(Debug, Serialize, Deserialize)]
pub struct DatanodeStatus {
/// The read capacity units during this period.
pub rcus: i64,
/// The write capacity units during this period.
pub wcus: i64,
/// How many leader regions on this node.
pub leader_regions: usize,
/// How many follower regions on this node.
pub follower_regions: usize,
}
/// The status of a frontend.
#[derive(Debug, Serialize, Deserialize)]
pub struct FrontendStatus {}
/// The status of a metasrv.
#[derive(Debug, Serialize, Deserialize)]
pub struct MetasrvStatus {
pub is_leader: bool,
}
impl FromStr for NodeInfoKey {
type Err = Error;
fn from_str(key: &str) -> Result<Self> {
let caps = CLUSTER_NODE_INFO_PREFIX_PATTERN
.captures(key)
.context(InvalidNodeInfoKeySnafu { key })?;
ensure!(caps.len() == 4, InvalidNodeInfoKeySnafu { key });
let cluster_id = caps[1].to_string();
let role = caps[2].to_string();
let node_id = caps[3].to_string();
let cluster_id: u64 = cluster_id.parse().context(ParseNumSnafu {
err_msg: format!("invalid cluster_id: {cluster_id}"),
})?;
let role: i32 = role.parse().context(ParseNumSnafu {
err_msg: format!("invalid role {role}"),
})?;
let role = Role::try_from(role)?;
let node_id: u64 = node_id.parse().context(ParseNumSnafu {
err_msg: format!("invalid node_id: {node_id}"),
})?;
Ok(Self {
cluster_id,
role,
node_id,
})
}
}
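For illustration, decoding a stored key through the FromStr impl above (role 0 maps to Role::Datanode):

let key: NodeInfoKey = "__meta_cluster_node_info-1-0-2".parse().unwrap();
assert_eq!(key.cluster_id, 1);
assert_eq!(key.role, Role::Datanode);
assert_eq!(key.node_id, 2);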
impl TryFrom<Vec<u8>> for NodeInfoKey {
type Error = Error;
fn try_from(bytes: Vec<u8>) -> Result<Self> {
String::from_utf8(bytes)
.context(FromUtf8Snafu {
name: "NodeInfoKey",
})
.map(|x| x.parse())?
}
}
impl From<NodeInfoKey> for Vec<u8> {
fn from(key: NodeInfoKey) -> Self {
format!(
"{}-{}-{}-{}",
CLUSTER_NODE_INFO_PREFIX,
key.cluster_id,
i32::from(key.role),
key.node_id
)
.into_bytes()
}
}
impl FromStr for NodeInfo {
type Err = Error;
fn from_str(value: &str) -> Result<Self> {
serde_json::from_str(value).context(DecodeJsonSnafu)
}
}
impl TryFrom<Vec<u8>> for NodeInfo {
type Error = Error;
fn try_from(bytes: Vec<u8>) -> Result<Self> {
String::from_utf8(bytes)
.context(FromUtf8Snafu { name: "NodeInfo" })
.map(|x| x.parse())?
}
}
impl TryFrom<NodeInfo> for Vec<u8> {
type Error = Error;
fn try_from(info: NodeInfo) -> Result<Self> {
Ok(serde_json::to_string(&info)
.context(EncodeJsonSnafu)?
.into_bytes())
}
}
impl From<Role> for i32 {
fn from(role: Role) -> Self {
match role {
Role::Datanode => 0,
Role::Frontend => 1,
Role::Metasrv => 2,
}
}
}
impl TryFrom<i32> for Role {
type Error = Error;
fn try_from(role: i32) -> Result<Self> {
match role {
0 => Ok(Self::Datanode),
1 => Ok(Self::Frontend),
2 => Ok(Self::Metasrv),
_ => InvalidRoleSnafu { role }.fail(),
}
}
}
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use crate::cluster::Role::{Datanode, Frontend};
use crate::cluster::{DatanodeStatus, NodeInfo, NodeInfoKey, NodeStatus};
use crate::peer::Peer;
#[test]
fn test_node_info_key_round_trip() {
let key = NodeInfoKey {
cluster_id: 1,
role: Datanode,
node_id: 2,
};
let key_bytes: Vec<u8> = key.into();
let new_key: NodeInfoKey = key_bytes.try_into().unwrap();
assert_eq!(1, new_key.cluster_id);
assert_eq!(Datanode, new_key.role);
assert_eq!(2, new_key.node_id);
}
#[test]
fn test_node_info_round_trip() {
let node_info = NodeInfo {
peer: Peer {
id: 1,
addr: "127.0.0.1".to_string(),
},
last_activity_ts: 123,
status: NodeStatus::Datanode(DatanodeStatus {
rcus: 1,
wcus: 2,
leader_regions: 3,
follower_regions: 4,
}),
};
let node_info_bytes: Vec<u8> = node_info.try_into().unwrap();
let new_node_info: NodeInfo = node_info_bytes.try_into().unwrap();
assert_matches!(
new_node_info,
NodeInfo {
peer: Peer { id: 1, .. },
last_activity_ts: 123,
status: NodeStatus::Datanode(DatanodeStatus {
rcus: 1,
wcus: 2,
leader_regions: 3,
follower_regions: 4,
}),
}
);
}
#[test]
fn test_node_info_key_prefix() {
let prefix = NodeInfoKey::key_prefix_with_cluster_id(1);
assert_eq!(prefix, "__meta_cluster_node_info-1-");
let prefix = NodeInfoKey::key_prefix_with_role(2, Frontend);
assert_eq!(prefix, "__meta_cluster_node_info-2-1-");
}
}

View File

@@ -22,17 +22,21 @@ use self::table_meta::TableMetadataAllocatorRef;
use crate::cache_invalidator::CacheInvalidatorRef;
use crate::datanode_manager::DatanodeManagerRef;
use crate::error::Result;
use crate::key::table_route::TableRouteValue;
use crate::key::table_route::PhysicalTableRouteValue;
use crate::key::TableMetadataManagerRef;
use crate::region_keeper::MemoryRegionKeeperRef;
use crate::rpc::ddl::{SubmitDdlTaskRequest, SubmitDdlTaskResponse};
use crate::rpc::procedure::{MigrateRegionRequest, MigrateRegionResponse, ProcedureStateResponse};
pub mod alter_logical_tables;
pub mod alter_table;
pub mod create_database;
pub mod create_logical_tables;
pub mod create_table;
mod create_table_template;
pub mod drop_database;
pub mod drop_table;
mod physical_table_metadata;
pub mod table_meta;
#[cfg(any(test, feature = "testing"))]
pub mod test_util;
@@ -83,7 +87,7 @@ pub struct TableMetadata {
/// Table id.
pub table_id: TableId,
/// Route information for each region of the table.
pub table_route: TableRouteValue,
pub table_route: PhysicalTableRouteValue,
/// The encoded wal options for regions of the table.
// If a region does not have an associated wal options, no key for the region would be found in the map.
pub region_wal_options: HashMap<RegionNumber, String>,

View File

@@ -0,0 +1,265 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod check;
mod metadata;
mod region_request;
mod table_cache_keys;
mod update_metadata;
use async_trait::async_trait;
use common_procedure::error::{FromJsonSnafu, Result as ProcedureResult, ToJsonSnafu};
use common_procedure::{Context, LockKey, Procedure, Status};
use common_telemetry::{info, warn};
use futures_util::future;
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::ALTER_PHYSICAL_EXTENSION_KEY;
use strum::AsRefStr;
use table::metadata::TableId;
use crate::ddl::utils::add_peer_context_if_needed;
use crate::ddl::DdlContext;
use crate::error::{DecodeJsonSnafu, Error, MetadataCorruptionSnafu, Result};
use crate::key::table_info::TableInfoValue;
use crate::key::table_route::PhysicalTableRouteValue;
use crate::lock_key::{CatalogLock, SchemaLock, TableLock};
use crate::rpc::ddl::AlterTableTask;
use crate::rpc::router::find_leaders;
use crate::{cache_invalidator, metrics, ClusterId};
pub struct AlterLogicalTablesProcedure {
pub context: DdlContext,
pub data: AlterTablesData,
}
impl AlterLogicalTablesProcedure {
pub const TYPE_NAME: &'static str = "metasrv-procedure::AlterLogicalTables";
pub fn new(
cluster_id: ClusterId,
tasks: Vec<AlterTableTask>,
physical_table_id: TableId,
context: DdlContext,
) -> Self {
Self {
context,
data: AlterTablesData {
cluster_id,
state: AlterTablesState::Prepare,
tasks,
table_info_values: vec![],
physical_table_id,
physical_table_info: None,
physical_table_route: None,
physical_columns: vec![],
},
}
}
pub fn from_json(json: &str, context: DdlContext) -> ProcedureResult<Self> {
let data = serde_json::from_str(json).context(FromJsonSnafu)?;
Ok(Self { context, data })
}
pub(crate) async fn on_prepare(&mut self) -> Result<Status> {
// Checks all the tasks
self.check_input_tasks()?;
// Fills the table info values
self.fill_table_info_values().await?;
// Checks the physical table; must run after [fill_table_info_values]
self.check_physical_table().await?;
// Fills the physical table info
self.fill_physical_table_info().await?;
// Filter the finished tasks
let finished_tasks = self.check_finished_tasks()?;
let already_finished_count = finished_tasks
.iter()
.map(|x| if *x { 1 } else { 0 })
.sum::<usize>();
let apply_tasks_count = self.data.tasks.len();
if already_finished_count == apply_tasks_count {
info!("All the alter tasks are finished, will skip the procedure.");
// Re-invalidate the table cache
self.data.state = AlterTablesState::InvalidateTableCache;
return Ok(Status::executing(true));
} else if already_finished_count > 0 {
info!(
"There are {} alter tasks, {} of them were already finished.",
apply_tasks_count, already_finished_count
);
}
self.filter_task(&finished_tasks)?;
// Next state
self.data.state = AlterTablesState::SubmitAlterRegionRequests;
Ok(Status::executing(true))
}
pub(crate) async fn on_submit_alter_region_requests(&mut self) -> Result<Status> {
// Safety: we have checked the state in on_prepare
let physical_table_route = &self.data.physical_table_route.as_ref().unwrap();
let leaders = find_leaders(&physical_table_route.region_routes);
let mut alter_region_tasks = Vec::with_capacity(leaders.len());
for peer in leaders {
let requester = self.context.datanode_manager.datanode(&peer).await;
let request = self.make_request(&peer, &physical_table_route.region_routes)?;
alter_region_tasks.push(async move {
requester
.handle(request)
.await
.map_err(add_peer_context_if_needed(peer))
});
}
// Collects responses from datanodes.
let phy_raw_schemas = future::join_all(alter_region_tasks)
.await
.into_iter()
.map(|res| res.map(|mut res| res.extension.remove(ALTER_PHYSICAL_EXTENSION_KEY)))
.collect::<Result<Vec<_>>>()?;
if phy_raw_schemas.is_empty() {
self.data.state = AlterTablesState::UpdateMetadata;
return Ok(Status::executing(true));
}
// Verify all the physical schemas are the same
// Safety: previous check ensures this vec is not empty
let first = phy_raw_schemas.first().unwrap();
ensure!(
phy_raw_schemas.iter().all(|x| x == first),
MetadataCorruptionSnafu {
err_msg: "The physical schemas from datanodes are not the same."
}
);
// Decodes the physical raw schemas
if let Some(phy_raw_schema) = first {
self.data.physical_columns =
ColumnMetadata::decode_list(phy_raw_schema).context(DecodeJsonSnafu)?;
} else {
warn!("altering logical table result doesn't contains extension key `{ALTER_PHYSICAL_EXTENSION_KEY}`,leaving the physical table's schema unchanged");
}
self.data.state = AlterTablesState::UpdateMetadata;
Ok(Status::executing(true))
}
pub(crate) async fn on_update_metadata(&mut self) -> Result<Status> {
self.update_physical_table_metadata().await?;
self.update_logical_tables_metadata().await?;
self.data.state = AlterTablesState::InvalidateTableCache;
Ok(Status::executing(true))
}
pub(crate) async fn on_invalidate_table_cache(&mut self) -> Result<Status> {
let ctx = cache_invalidator::Context::default();
let to_invalidate = self.build_table_cache_keys_to_invalidate();
self.context
.cache_invalidator
.invalidate(&ctx, to_invalidate)
.await?;
Ok(Status::done())
}
}
#[async_trait]
impl Procedure for AlterLogicalTablesProcedure {
fn type_name(&self) -> &str {
Self::TYPE_NAME
}
async fn execute(&mut self, _ctx: &Context) -> ProcedureResult<Status> {
let error_handler = |e: Error| {
if e.is_retry_later() {
common_procedure::Error::retry_later(e)
} else {
common_procedure::Error::external(e)
}
};
let state = &self.data.state;
let step = state.as_ref();
let _timer = metrics::METRIC_META_PROCEDURE_ALTER_TABLE
.with_label_values(&[step])
.start_timer();
match state {
AlterTablesState::Prepare => self.on_prepare().await,
AlterTablesState::SubmitAlterRegionRequests => {
self.on_submit_alter_region_requests().await
}
AlterTablesState::UpdateMetadata => self.on_update_metadata().await,
AlterTablesState::InvalidateTableCache => self.on_invalidate_table_cache().await,
}
.map_err(error_handler)
}
fn dump(&self) -> ProcedureResult<String> {
serde_json::to_string(&self.data).context(ToJsonSnafu)
}
fn lock_key(&self) -> LockKey {
// CatalogLock, SchemaLock,
// TableLock
// TableNameLock(s)
let mut lock_key = Vec::with_capacity(2 + 1 + self.data.tasks.len());
let table_ref = self.data.tasks[0].table_ref();
lock_key.push(CatalogLock::Read(table_ref.catalog).into());
lock_key.push(SchemaLock::read(table_ref.catalog, table_ref.schema).into());
lock_key.push(TableLock::Write(self.data.physical_table_id).into());
lock_key.extend(
self.data
.table_info_values
.iter()
.map(|table| TableLock::Write(table.table_info.ident.table_id).into()),
);
LockKey::new(lock_key)
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct AlterTablesData {
cluster_id: ClusterId,
state: AlterTablesState,
tasks: Vec<AlterTableTask>,
/// Table info values before the alter operation.
/// Corresponding one-to-one with the AlterTableTask in tasks.
table_info_values: Vec<TableInfoValue>,
/// Physical table info
physical_table_id: TableId,
physical_table_info: Option<TableInfoValue>,
physical_table_route: Option<PhysicalTableRouteValue>,
physical_columns: Vec<ColumnMetadata>,
}
#[derive(Debug, Serialize, Deserialize, AsRefStr)]
enum AlterTablesState {
/// Prepares to alter the table
Prepare,
SubmitAlterRegionRequests,
/// Updates table metadata.
UpdateMetadata,
/// Broadcasts the invalidating table cache instruction.
InvalidateTableCache,
}
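A hedged construction sketch (the DDL manager is expected to be the real caller; building the tasks and the DdlContext is elided), assuming the crate is exposed as common_meta:

use common_meta::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use common_meta::ddl::DdlContext;
use common_meta::rpc::ddl::AlterTableTask;
use common_meta::ClusterId;
use table::metadata::TableId;

fn alter_logical_tables_procedure(
    cluster_id: ClusterId,
    tasks: Vec<AlterTableTask>,
    physical_table_id: TableId,
    context: DdlContext,
) -> AlterLogicalTablesProcedure {
    // Starts in AlterTablesState::Prepare; the procedure framework then drives
    // Prepare -> SubmitAlterRegionRequests -> UpdateMetadata -> InvalidateTableCache.
    AlterLogicalTablesProcedure::new(cluster_id, tasks, physical_table_id, context)
}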

View File

@@ -0,0 +1,136 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashSet;
use api::v1::alter_expr::Kind;
use snafu::{ensure, OptionExt};
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::error::{AlterLogicalTablesInvalidArgumentsSnafu, Result};
use crate::key::table_info::TableInfoValue;
use crate::key::table_route::TableRouteValue;
use crate::rpc::ddl::AlterTableTask;
impl AlterLogicalTablesProcedure {
pub(crate) fn check_input_tasks(&self) -> Result<()> {
self.check_schema()?;
self.check_alter_kind()?;
Ok(())
}
pub(crate) async fn check_physical_table(&self) -> Result<()> {
let table_route_manager = self.context.table_metadata_manager.table_route_manager();
let table_ids = self
.data
.table_info_values
.iter()
.map(|v| v.table_info.ident.table_id)
.collect::<Vec<_>>();
let table_routes = table_route_manager
.table_route_storage()
.batch_get(&table_ids)
.await?;
let physical_table_id = self.data.physical_table_id;
let is_same_physical_table = table_routes.iter().all(|r| {
if let Some(TableRouteValue::Logical(r)) = r {
r.physical_table_id() == physical_table_id
} else {
false
}
});
ensure!(
is_same_physical_table,
AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "All the tasks should have the same physical table id"
}
);
Ok(())
}
pub(crate) fn check_finished_tasks(&self) -> Result<Vec<bool>> {
let task = &self.data.tasks;
let table_info_values = &self.data.table_info_values;
Ok(task
.iter()
.zip(table_info_values.iter())
.map(|(task, table)| Self::check_finished_task(task, table))
.collect())
}
// Checks if the schemas of the tasks are the same
fn check_schema(&self) -> Result<()> {
let is_same_schema = self.data.tasks.windows(2).all(|pair| {
pair[0].alter_table.catalog_name == pair[1].alter_table.catalog_name
&& pair[0].alter_table.schema_name == pair[1].alter_table.schema_name
});
ensure!(
is_same_schema,
AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Schemas of the tasks are not the same"
}
);
Ok(())
}
fn check_alter_kind(&self) -> Result<()> {
for task in &self.data.tasks {
let kind = task.alter_table.kind.as_ref().context(
AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Alter kind is missing",
},
)?;
let Kind::AddColumns(_) = kind else {
return AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Only support add columns operation",
}
.fail();
};
}
Ok(())
}
fn check_finished_task(task: &AlterTableTask, table: &TableInfoValue) -> bool {
let columns = table
.table_info
.meta
.schema
.column_schemas
.iter()
.map(|c| &c.name)
.collect::<HashSet<_>>();
let Some(kind) = task.alter_table.kind.as_ref() else {
return true; // Never get here since we have checked it in `check_alter_kind`
};
let Kind::AddColumns(add_columns) = kind else {
return true; // Never get here since we have checked it in `check_alter_kind`
};
// A task counts as finished only when every column it requests already exists;
// if some of its columns exist but others do not, the task is still treated as
// unfinished.
add_columns
.add_columns
.iter()
.map(|add_column| add_column.column_def.as_ref().map(|c| &c.name))
.all(|column| column.map(|c| columns.contains(c)).unwrap_or(false))
}
}
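A toy illustration of the rule implemented by check_finished_task, using plain Rust instead of the proto types:

use std::collections::HashSet;

fn is_task_finished(existing: &HashSet<&str>, requested: &[&str]) -> bool {
    // Finished only if every requested column already exists in the table schema.
    requested.iter().all(|c| existing.contains(c))
}

fn main() {
    let existing: HashSet<&str> = ["host", "ts", "cpu"].into_iter().collect();
    // "memory" is still missing, so this alter task must still run.
    assert!(!is_task_finished(&existing, &["cpu", "memory"]));
    assert!(is_task_finished(&existing, &["cpu", "host"]));
}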

View File

@@ -0,0 +1,159 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_catalog::format_full_table_name;
use snafu::OptionExt;
use table::metadata::TableId;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::error::{
AlterLogicalTablesInvalidArgumentsSnafu, Result, TableInfoNotFoundSnafu, TableNotFoundSnafu,
TableRouteNotFoundSnafu,
};
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteValue;
use crate::rpc::ddl::AlterTableTask;
impl AlterLogicalTablesProcedure {
pub(crate) fn filter_task(&mut self, finished_tasks: &[bool]) -> Result<()> {
debug_assert_eq!(finished_tasks.len(), self.data.tasks.len());
debug_assert_eq!(finished_tasks.len(), self.data.table_info_values.len());
self.data.tasks = self
.data
.tasks
.drain(..)
.zip(finished_tasks.iter())
.filter_map(|(task, finished)| if *finished { None } else { Some(task) })
.collect();
self.data.table_info_values = self
.data
.table_info_values
.drain(..)
.zip(finished_tasks.iter())
.filter_map(|(table_info_value, finished)| {
if *finished {
None
} else {
Some(table_info_value)
}
})
.collect();
Ok(())
}
pub(crate) async fn fill_physical_table_info(&mut self) -> Result<()> {
let (physical_table_info, physical_table_route) = self
.context
.table_metadata_manager
.get_full_table_info(self.data.physical_table_id)
.await?;
let physical_table_info = physical_table_info
.with_context(|| TableInfoNotFoundSnafu {
table: format!("table id - {}", self.data.physical_table_id),
})?
.into_inner();
let physical_table_route = physical_table_route
.context(TableRouteNotFoundSnafu {
table_id: self.data.physical_table_id,
})?
.into_inner();
self.data.physical_table_info = Some(physical_table_info);
let TableRouteValue::Physical(physical_table_route) = physical_table_route else {
return AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: format!(
"expected a physical table but got a logical table: {:?}",
self.data.physical_table_id
),
}
.fail();
};
self.data.physical_table_route = Some(physical_table_route);
Ok(())
}
pub(crate) async fn fill_table_info_values(&mut self) -> Result<()> {
let table_ids = self.get_all_table_ids().await?;
let table_info_values = self.get_all_table_info_values(&table_ids).await?;
debug_assert_eq!(table_info_values.len(), self.data.tasks.len());
self.data.table_info_values = table_info_values;
Ok(())
}
async fn get_all_table_info_values(
&self,
table_ids: &[TableId],
) -> Result<Vec<TableInfoValue>> {
let table_info_manager = self.context.table_metadata_manager.table_info_manager();
let mut table_info_map = table_info_manager.batch_get(table_ids).await?;
let mut table_info_values = Vec::with_capacity(table_ids.len());
for (table_id, task) in table_ids.iter().zip(self.data.tasks.iter()) {
let table_info_value =
table_info_map
.remove(table_id)
.with_context(|| TableInfoNotFoundSnafu {
table: extract_table_name(task),
})?;
table_info_values.push(table_info_value);
}
Ok(table_info_values)
}
async fn get_all_table_ids(&self) -> Result<Vec<TableId>> {
let table_name_manager = self.context.table_metadata_manager.table_name_manager();
let table_name_keys = self
.data
.tasks
.iter()
.map(|task| extract_table_name_key(task))
.collect();
let table_name_values = table_name_manager.batch_get(table_name_keys).await?;
let mut table_ids = Vec::with_capacity(table_name_values.len());
for (value, task) in table_name_values.into_iter().zip(self.data.tasks.iter()) {
let table_id = value
.with_context(|| TableNotFoundSnafu {
table_name: extract_table_name(task),
})?
.table_id();
table_ids.push(table_id);
}
Ok(table_ids)
}
}
#[inline]
fn extract_table_name(task: &AlterTableTask) -> String {
format_full_table_name(
&task.alter_table.catalog_name,
&task.alter_table.schema_name,
&task.alter_table.table_name,
)
}
#[inline]
fn extract_table_name_key(task: &AlterTableTask) -> TableNameKey {
TableNameKey::new(
&task.alter_table.catalog_name,
&task.alter_table.schema_name,
&task.alter_table.table_name,
)
}

View File

@@ -0,0 +1,112 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1;
use api::v1::alter_expr::Kind;
use api::v1::region::{
alter_request, region_request, AddColumn, AddColumns, AlterRequest, AlterRequests,
RegionColumnDef, RegionRequest, RegionRequestHeader,
};
use common_telemetry::tracing_context::TracingContext;
use store_api::storage::RegionId;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::error::Result;
use crate::key::table_info::TableInfoValue;
use crate::peer::Peer;
use crate::rpc::ddl::AlterTableTask;
use crate::rpc::router::{find_leader_regions, RegionRoute};
impl AlterLogicalTablesProcedure {
pub(crate) fn make_request(
&self,
peer: &Peer,
region_routes: &[RegionRoute],
) -> Result<RegionRequest> {
let alter_requests = self.make_alter_region_requests(peer, region_routes)?;
let request = RegionRequest {
header: Some(RegionRequestHeader {
tracing_context: TracingContext::from_current_span().to_w3c(),
..Default::default()
}),
body: Some(region_request::Body::Alters(alter_requests)),
};
Ok(request)
}
fn make_alter_region_requests(
&self,
peer: &Peer,
region_routes: &[RegionRoute],
) -> Result<AlterRequests> {
let tasks = &self.data.tasks;
let regions_on_this_peer = find_leader_regions(region_routes, peer);
let mut requests = Vec::with_capacity(tasks.len() * regions_on_this_peer.len());
for (task, table) in self
.data
.tasks
.iter()
.zip(self.data.table_info_values.iter())
{
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(table.table_info.ident.table_id, *region_number);
let request = self.make_alter_region_request(region_id, task, table)?;
requests.push(request);
}
}
Ok(AlterRequests { requests })
}
fn make_alter_region_request(
&self,
region_id: RegionId,
task: &AlterTableTask,
table: &TableInfoValue,
) -> Result<AlterRequest> {
let region_id = region_id.as_u64();
let schema_version = table.table_info.ident.version;
let kind = match &task.alter_table.kind {
Some(Kind::AddColumns(add_columns)) => Some(alter_request::Kind::AddColumns(
to_region_add_columns(add_columns),
)),
_ => unreachable!(), // Safety: we have checked the kind in check_input_tasks
};
Ok(AlterRequest {
region_id,
schema_version,
kind,
})
}
}
fn to_region_add_columns(add_columns: &v1::AddColumns) -> AddColumns {
let add_columns = add_columns
.add_columns
.iter()
.map(|add_column| {
let region_column_def = RegionColumnDef {
column_def: add_column.column_def.clone(),
..Default::default() // other fields are not used in alter logical table
};
AddColumn {
column_def: Some(region_column_def),
..Default::default() // other fields are not used in alter logical table
}
})
.collect();
AddColumns { add_columns }
}

View File

@@ -0,0 +1,51 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use table::metadata::RawTableInfo;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::instruction::CacheIdent;
use crate::table_name::TableName;
impl AlterLogicalTablesProcedure {
pub(crate) fn build_table_cache_keys_to_invalidate(&self) -> Vec<CacheIdent> {
let mut cache_keys = self
.data
.table_info_values
.iter()
.flat_map(|table| {
vec![
CacheIdent::TableId(table.table_info.ident.table_id),
CacheIdent::TableName(extract_table_name(&table.table_info)),
]
})
.collect::<Vec<_>>();
cache_keys.push(CacheIdent::TableId(self.data.physical_table_id));
// Safety: physical_table_info already filled in previous steps
let physical_table_info = &self.data.physical_table_info.as_ref().unwrap().table_info;
cache_keys.push(CacheIdent::TableName(extract_table_name(
physical_table_info,
)));
cache_keys
}
}
fn extract_table_name(table_info: &RawTableInfo) -> TableName {
TableName::new(
&table_info.catalog_name,
&table_info.schema_name,
&table_info.name,
)
}

View File

@@ -0,0 +1,124 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_grpc_expr::alter_expr_to_request;
use common_telemetry::warn;
use itertools::Itertools;
use snafu::ResultExt;
use table::metadata::{RawTableInfo, TableInfo};
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::ddl::physical_table_metadata;
use crate::error;
use crate::error::{ConvertAlterTableRequestSnafu, Result};
use crate::key::table_info::TableInfoValue;
use crate::key::DeserializedValueWithBytes;
use crate::rpc::ddl::AlterTableTask;
impl AlterLogicalTablesProcedure {
pub(crate) async fn update_physical_table_metadata(&mut self) -> Result<()> {
if self.data.physical_columns.is_empty() {
warn!("No physical columns found, leaving the physical table's schema unchanged when altering logical tables");
return Ok(());
}
let physical_table_info = self.data.physical_table_info.as_ref().unwrap();
// Generates new table info
let old_raw_table_info = physical_table_info.table_info.clone();
let new_raw_table_info = physical_table_metadata::build_new_physical_table_info(
old_raw_table_info,
&self.data.physical_columns,
);
// Updates physical table's metadata
self.context
.table_metadata_manager
.update_table_info(
DeserializedValueWithBytes::from_inner(physical_table_info.clone()),
new_raw_table_info,
)
.await?;
Ok(())
}
pub(crate) async fn update_logical_tables_metadata(&mut self) -> Result<()> {
let table_info_values = self.build_update_metadata()?;
let manager = &self.context.table_metadata_manager;
let chunk_size = manager.batch_update_table_info_value_chunk_size();
if table_info_values.len() > chunk_size {
let chunks = table_info_values
.into_iter()
.chunks(chunk_size)
.into_iter()
.map(|chunk| chunk.collect::<Vec<_>>())
.collect::<Vec<_>>();
for chunk in chunks {
manager.batch_update_table_info_values(chunk).await?;
}
} else {
manager
.batch_update_table_info_values(table_info_values)
.await?;
}
Ok(())
}
pub(crate) fn build_update_metadata(&self) -> Result<Vec<(TableInfoValue, RawTableInfo)>> {
let mut table_info_values_to_update = Vec::with_capacity(self.data.tasks.len());
for (task, table) in self
.data
.tasks
.iter()
.zip(self.data.table_info_values.iter())
{
table_info_values_to_update.push(self.build_new_table_info(task, table)?);
}
Ok(table_info_values_to_update)
}
fn build_new_table_info(
&self,
task: &AlterTableTask,
table: &TableInfoValue,
) -> Result<(TableInfoValue, RawTableInfo)> {
// Builds new_meta
let table_info = TableInfo::try_from(table.table_info.clone())
.context(error::ConvertRawTableInfoSnafu)?;
let table_ref = task.table_ref();
let request =
alter_expr_to_request(table.table_info.ident.table_id, task.alter_table.clone())
.context(ConvertAlterTableRequestSnafu)?;
let new_meta = table_info
.meta
.builder_with_alter_kind(table_ref.table, &request.alter_kind, true)
.context(error::TableSnafu)?
.build()
.with_context(|_| error::BuildTableMetaSnafu {
table_name: table_ref.table,
})?;
let version = table_info.ident.version + 1;
let mut new_table = table_info;
new_table.meta = new_meta;
new_table.ident.version = version;
let mut raw_table_info = RawTableInfo::from(new_table);
raw_table_info.sort_columns();
Ok((table.clone(), raw_table_info))
}
}
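A small standalone sketch of the chunking strategy used above, with hypothetical sizes:

use itertools::Itertools;

fn main() {
    let updates: Vec<u32> = (0..10).collect();
    let chunk_size = 4;
    // Split the pending updates into batches of at most `chunk_size` entries.
    let batches: Vec<Vec<u32>> = updates
        .into_iter()
        .chunks(chunk_size)
        .into_iter()
        .map(|chunk| chunk.collect())
        .collect();
    assert_eq!(batches.len(), 3); // 4 + 4 + 2 items
}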

View File

@@ -67,7 +67,6 @@ impl AlterTableProcedure {
cluster_id: u64,
task: AlterTableTask,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
physical_table_info: Option<(TableId, TableName)>,
context: DdlContext,
) -> Result<Self> {
let alter_kind = task
@@ -87,13 +86,7 @@ impl AlterTableProcedure {
Ok(Self {
context,
data: AlterTableData::new(
task,
table_info_value,
physical_table_info,
cluster_id,
next_column_id,
),
data: AlterTableData::new(task, table_info_value, cluster_id, next_column_id),
kind,
})
}
@@ -281,7 +274,7 @@ impl AlterTableProcedure {
let new_meta = table_info
.meta
.builder_with_alter_kind(table_ref.table, &request.alter_kind)
.builder_with_alter_kind(table_ref.table, &request.alter_kind, false)
.context(error::TableSnafu)?
.build()
.with_context(|_| error::BuildTableMetaSnafu {
@@ -331,41 +324,24 @@ impl AlterTableProcedure {
async fn on_broadcast(&mut self) -> Result<Status> {
let alter_kind = self.alter_kind()?;
let cache_invalidator = &self.context.cache_invalidator;
if matches!(alter_kind, Kind::RenameTable { .. }) {
cache_invalidator
.invalidate(
&Context::default(),
vec![CacheIdent::TableName(self.data.table_ref().into())],
)
.await?;
let cache_keys = if matches!(alter_kind, Kind::RenameTable { .. }) {
vec![CacheIdent::TableName(self.data.table_ref().into())]
} else {
cache_invalidator
.invalidate(
&Context::default(),
vec![CacheIdent::TableId(self.data.table_id())],
)
.await?;
vec![
CacheIdent::TableId(self.data.table_id()),
CacheIdent::TableName(self.data.table_ref().into()),
]
};
cache_invalidator
.invalidate(&Context::default(), cache_keys)
.await?;
Ok(Status::done())
}
fn lock_key_inner(&self) -> Vec<StringKey> {
let mut lock_key = vec![];
if let Some((physical_table_id, physical_table_name)) = self.data.physical_table_info() {
lock_key.push(CatalogLock::Read(&physical_table_name.catalog_name).into());
lock_key.push(
SchemaLock::read(
&physical_table_name.catalog_name,
&physical_table_name.schema_name,
)
.into(),
);
lock_key.push(TableLock::Read(*physical_table_id).into())
}
let table_ref = self.data.table_ref();
let table_id = self.data.table_id();
lock_key.push(CatalogLock::Read(table_ref.catalog).into());
@@ -443,8 +419,6 @@ pub struct AlterTableData {
task: AlterTableTask,
/// Table info value before alteration.
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
/// Physical table name, if the table to alter is a logical table.
physical_table_info: Option<(TableId, TableName)>,
/// Next column id of the table if the task adds columns to the table.
next_column_id: Option<ColumnId>,
}
@@ -453,7 +427,6 @@ impl AlterTableData {
pub fn new(
task: AlterTableTask,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
physical_table_info: Option<(TableId, TableName)>,
cluster_id: u64,
next_column_id: Option<ColumnId>,
) -> Self {
@@ -461,7 +434,6 @@ impl AlterTableData {
state: AlterTableState::Prepare,
task,
table_info_value,
physical_table_info,
cluster_id,
next_column_id,
}
@@ -478,10 +450,6 @@ impl AlterTableData {
fn table_info(&self) -> &RawTableInfo {
&self.table_info_value.table_info
}
fn physical_table_info(&self) -> Option<&(TableId, TableName)> {
self.physical_table_info.as_ref()
}
}
/// Creates region proto alter kind from `table_info` and `alter_kind`.

View File

@@ -0,0 +1,152 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use async_trait::async_trait;
use common_procedure::error::{FromJsonSnafu, Result as ProcedureResult, ToJsonSnafu};
use common_procedure::{Context as ProcedureContext, LockKey, Procedure, Status};
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use strum::AsRefStr;
use crate::ddl::utils::handle_retry_error;
use crate::ddl::DdlContext;
use crate::error::{self, Result};
use crate::key::schema_name::{SchemaNameKey, SchemaNameValue};
use crate::lock_key::{CatalogLock, SchemaLock};
pub struct CreateDatabaseProcedure {
pub context: DdlContext,
pub data: CreateDatabaseData,
}
impl CreateDatabaseProcedure {
pub const TYPE_NAME: &'static str = "metasrv-procedure::CreateDatabase";
pub fn new(
catalog: String,
schema: String,
create_if_not_exists: bool,
options: Option<HashMap<String, String>>,
context: DdlContext,
) -> Self {
Self {
context,
data: CreateDatabaseData {
state: CreateDatabaseState::Prepare,
catalog,
schema,
create_if_not_exists,
options,
},
}
}
pub fn from_json(json: &str, context: DdlContext) -> ProcedureResult<Self> {
let data = serde_json::from_str(json).context(FromJsonSnafu)?;
Ok(Self { context, data })
}
pub async fn on_prepare(&mut self) -> Result<Status> {
let exists = self
.context
.table_metadata_manager
.schema_manager()
.exists(SchemaNameKey::new(&self.data.catalog, &self.data.schema))
.await?;
if exists && self.data.create_if_not_exists {
return Ok(Status::done());
}
ensure!(
!exists,
error::SchemaAlreadyExistsSnafu {
catalog: &self.data.catalog,
schema: &self.data.schema,
}
);
self.data.state = CreateDatabaseState::CreateMetadata;
Ok(Status::executing(true))
}
pub async fn on_create_metadata(&mut self) -> Result<Status> {
let value: Option<SchemaNameValue> = self
.data
.options
.as_ref()
.map(|hash_map_ref| hash_map_ref.try_into())
.transpose()?;
self.context
.table_metadata_manager
.schema_manager()
.create(
SchemaNameKey::new(&self.data.catalog, &self.data.schema),
value,
self.data.create_if_not_exists,
)
.await?;
Ok(Status::done())
}
}
#[async_trait]
impl Procedure for CreateDatabaseProcedure {
fn type_name(&self) -> &str {
Self::TYPE_NAME
}
async fn execute(&mut self, _ctx: &ProcedureContext) -> ProcedureResult<Status> {
let state = &self.data.state;
match state {
CreateDatabaseState::Prepare => self.on_prepare().await,
CreateDatabaseState::CreateMetadata => self.on_create_metadata().await,
}
.map_err(handle_retry_error)
}
fn dump(&self) -> ProcedureResult<String> {
serde_json::to_string(&self.data).context(ToJsonSnafu)
}
fn lock_key(&self) -> LockKey {
let lock_key = vec![
CatalogLock::Read(&self.data.catalog).into(),
SchemaLock::write(&self.data.catalog, &self.data.schema).into(),
];
LockKey::new(lock_key)
}
}
#[derive(Debug, Clone, Serialize, Deserialize, AsRefStr)]
pub enum CreateDatabaseState {
Prepare,
CreateMetadata,
}
#[derive(Debug, Serialize, Deserialize)]
pub struct CreateDatabaseData {
pub state: CreateDatabaseState,
pub catalog: String,
pub schema: String,
pub create_if_not_exists: bool,
pub options: Option<HashMap<String, String>>,
}
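A hedged sketch of constructing the procedure (obtaining a DdlContext is elided; in practice the DDL manager builds it), assuming the module path common_meta::ddl::create_database:

use std::collections::HashMap;

use common_meta::ddl::create_database::CreateDatabaseProcedure;
use common_meta::ddl::DdlContext;

fn create_database_procedure(context: DdlContext) -> CreateDatabaseProcedure {
    // Equivalent to CREATE DATABASE IF NOT EXISTS my_db with no extra options.
    let options: Option<HashMap<String, String>> = None;
    CreateDatabaseProcedure::new(
        "greptime".to_string(), // default catalog name
        "my_db".to_string(),    // schema to create
        true,                   // create_if_not_exists
        options,
        context,
    )
}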

View File

@@ -12,39 +12,37 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
mod check;
mod metadata;
mod region_request;
mod update_metadata;
use api::v1::region::region_request::Body as PbRegionRequest;
use api::v1::region::{CreateRequests, RegionRequest, RegionRequestHeader};
use api::v1::CreateTableExpr;
use async_trait::async_trait;
use common_procedure::error::{FromJsonSnafu, Result as ProcedureResult, ToJsonSnafu};
use common_procedure::{Context as ProcedureContext, LockKey, Procedure, Status};
use common_telemetry::info;
use common_telemetry::tracing_context::TracingContext;
use common_telemetry::warn;
use futures_util::future::join_all;
use itertools::Itertools;
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::ALTER_PHYSICAL_EXTENSION_KEY;
use store_api::storage::{RegionId, RegionNumber};
use strum::AsRefStr;
use table::metadata::{RawTableInfo, TableId};
use crate::ddl::create_table_template::{build_template, CreateRequestBuilder};
use crate::ddl::utils::{add_peer_context_if_needed, handle_retry_error, region_storage_path};
use crate::ddl::utils::{add_peer_context_if_needed, handle_retry_error};
use crate::ddl::DdlContext;
use crate::error::{Result, TableAlreadyExistsSnafu};
use crate::key::table_name::TableNameKey;
use crate::error::{DecodeJsonSnafu, MetadataCorruptionSnafu, Result};
use crate::key::table_route::TableRouteValue;
use crate::lock_key::{CatalogLock, SchemaLock, TableLock, TableNameLock};
use crate::peer::Peer;
use crate::rpc::ddl::CreateTableTask;
use crate::rpc::router::{find_leader_regions, find_leaders, RegionRoute};
use crate::rpc::router::{find_leaders, RegionRoute};
use crate::{metrics, ClusterId};
pub struct CreateLogicalTablesProcedure {
pub context: DdlContext,
pub creator: TablesCreator,
pub data: CreateTablesData,
}
impl CreateLogicalTablesProcedure {
@@ -56,14 +54,23 @@ impl CreateLogicalTablesProcedure {
physical_table_id: TableId,
context: DdlContext,
) -> Self {
let creator = TablesCreator::new(cluster_id, tasks, physical_table_id);
Self { context, creator }
Self {
context,
data: CreateTablesData {
cluster_id,
state: CreateTablesState::Prepare,
tasks,
table_ids_already_exists: vec![],
physical_table_id,
physical_region_numbers: vec![],
physical_columns: vec![],
},
}
}
pub fn from_json(json: &str, context: DdlContext) -> ProcedureResult<Self> {
let data = serde_json::from_str(json).context(FromJsonSnafu)?;
let creator = TablesCreator { data };
Ok(Self { context, creator })
Ok(Self { context, data })
}
/// On the prepare step, it performs:
@@ -77,213 +84,108 @@ impl CreateLogicalTablesProcedure {
/// - Failed to check whether tables exist.
/// - One of the logical tables already exists and its creation task does not set `create_if_not_exists`.
pub(crate) async fn on_prepare(&mut self) -> Result<Status> {
let manager = &self.context.table_metadata_manager;
self.check_input_tasks()?;
// Sets physical region numbers
let physical_table_id = self.creator.data.physical_table_id();
let physical_region_numbers = manager
.table_route_manager()
.get_physical_table_route(physical_table_id)
.await
.map(|(_, route)| TableRouteValue::Physical(route).region_numbers())?;
self.creator
.data
.set_physical_region_numbers(physical_region_numbers);
self.fill_physical_table_info().await?;
// Checks if the tables exist
let table_name_keys = self
.creator
.data
.all_create_table_exprs()
.iter()
.map(|expr| TableNameKey::new(&expr.catalog_name, &expr.schema_name, &expr.table_name))
.collect::<Vec<_>>();
let already_exists_tables_ids = manager
.table_name_manager()
.batch_get(table_name_keys)
.await?
.iter()
.map(|x| x.map(|x| x.table_id()))
.collect::<Vec<_>>();
// Validates the tasks
let tasks = &mut self.creator.data.tasks;
for (task, table_id) in tasks.iter().zip(already_exists_tables_ids.iter()) {
if table_id.is_some() {
// If a table already exists, we just ignore it.
ensure!(
task.create_table.create_if_not_exists,
TableAlreadyExistsSnafu {
table_name: task.create_table.table_name.to_string(),
}
);
continue;
}
}
self.check_tables_already_exist().await?;
// If all tables already exist, returns the table_ids.
if already_exists_tables_ids.iter().all(Option::is_some) {
if self
.data
.table_ids_already_exists
.iter()
.all(Option::is_some)
{
return Ok(Status::done_with_output(
already_exists_tables_ids
.into_iter()
self.data
.table_ids_already_exists
.drain(..)
.flatten()
.collect::<Vec<_>>(),
));
}
// Allocates table ids and sort columns on their names.
for (task, table_id) in tasks.iter_mut().zip(already_exists_tables_ids.iter()) {
let table_id = if let Some(table_id) = table_id {
*table_id
} else {
self.context
.table_metadata_allocator
.allocate_table_id(task)
.await?
};
task.set_table_id(table_id);
self.allocate_table_ids().await?;
// sort columns in task
task.sort_columns();
common_telemetry::info!("[DEBUG] sorted task {:?}", task);
}
self.creator
.data
.set_table_ids_already_exists(already_exists_tables_ids);
self.creator.data.state = CreateTablesState::DatanodeCreateRegions;
self.data.state = CreateTablesState::DatanodeCreateRegions;
Ok(Status::executing(true))
}
pub async fn on_datanode_create_regions(&mut self) -> Result<Status> {
let physical_table_id = self.creator.data.physical_table_id();
let (_, physical_table_route) = self
.context
.table_metadata_manager
.table_route_manager()
.get_physical_table_route(physical_table_id)
.get_physical_table_route(self.data.physical_table_id)
.await?;
let region_routes = &physical_table_route.region_routes;
self.create_regions(region_routes).await
self.create_regions(&physical_table_route.region_routes)
.await
}
/// Creates table metadata
/// Creates table metadata for the logical tables and updates the corresponding
/// physical table's metadata.
///
/// Abort(not-retry):
/// - Failed to create table metadata.
pub async fn on_create_metadata(&self) -> Result<Status> {
let manager = &self.context.table_metadata_manager;
let physical_table_id = self.creator.data.physical_table_id();
let remaining_tasks = self.creator.data.remaining_tasks();
let num_tables = remaining_tasks.len();
if num_tables > 0 {
let chunk_size = manager.max_logical_tables_per_batch();
if num_tables > chunk_size {
let chunks = remaining_tasks
.into_iter()
.chunks(chunk_size)
.into_iter()
.map(|chunk| chunk.collect::<Vec<_>>())
.collect::<Vec<_>>();
for chunk in chunks {
manager.create_logical_tables_metadata(chunk).await?;
}
} else {
manager
.create_logical_tables_metadata(remaining_tasks)
.await?;
}
}
// The `table_id` MUST be collected after the [Prepare::Prepare],
// ensures the all `table_id`s have been allocated.
let table_ids = self
.creator
.data
.tasks
.iter()
.map(|task| task.table_info.ident.table_id)
.collect::<Vec<_>>();
info!("Created {num_tables} tables {table_ids:?} metadata for physical table {physical_table_id}");
pub async fn on_create_metadata(&mut self) -> Result<Status> {
self.update_physical_table_metadata().await?;
let table_ids = self.create_logical_tables_metadata().await?;
Ok(Status::done_with_output(table_ids))
}
fn create_region_request_builder(
&self,
physical_table_id: TableId,
task: &CreateTableTask,
) -> Result<CreateRequestBuilder> {
let create_expr = &task.create_table;
let template = build_template(create_expr)?;
Ok(CreateRequestBuilder::new(template, Some(physical_table_id)))
}
fn one_datanode_region_requests(
&self,
datanode: &Peer,
region_routes: &[RegionRoute],
) -> Result<CreateRequests> {
let create_tables_data = &self.creator.data;
let tasks = &create_tables_data.tasks;
let physical_table_id = create_tables_data.physical_table_id();
let regions = find_leader_regions(region_routes, datanode);
let mut requests = Vec::with_capacity(tasks.len() * regions.len());
for task in tasks {
let create_table_expr = &task.create_table;
let catalog = &create_table_expr.catalog_name;
let schema = &create_table_expr.schema_name;
let logical_table_id = task.table_info.ident.table_id;
let storage_path = region_storage_path(catalog, schema);
let request_builder = self.create_region_request_builder(physical_table_id, task)?;
for region_number in &regions {
let region_id = RegionId::new(logical_table_id, *region_number);
let create_region_request =
request_builder.build_one(region_id, storage_path.clone(), &HashMap::new())?;
requests.push(create_region_request);
}
}
Ok(CreateRequests { requests })
}
async fn create_regions(&mut self, region_routes: &[RegionRoute]) -> Result<Status> {
let leaders = find_leaders(region_routes);
let mut create_region_tasks = Vec::with_capacity(leaders.len());
for peer in leaders {
let requester = self.context.datanode_manager.datanode(&peer).await;
let request = self.make_request(&peer, region_routes)?;
create_region_tasks.push(async move {
requester
.handle(request)
.await
.map_err(add_peer_context_if_needed(peer))
});
}
// Collects responses from the datanodes.
let phy_raw_schemas = join_all(create_region_tasks)
.await
.into_iter()
.map(|res| res.map(|mut res| res.extension.remove(ALTER_PHYSICAL_EXTENSION_KEY)))
.collect::<Result<Vec<_>>>()?;
if phy_raw_schemas.is_empty() {
self.data.state = CreateTablesState::CreateMetadata;
return Ok(Status::executing(false));
}
// Verify all the physical schemas are the same
// Safety: previous check ensures this vec is not empty
let first = phy_raw_schemas.first().unwrap();
ensure!(
phy_raw_schemas.iter().all(|x| x == first),
MetadataCorruptionSnafu {
err_msg: "The physical schemas from datanodes are not the same."
}
);
// Decodes the physical raw schemas
if let Some(phy_raw_schemas) = first {
self.data.physical_columns =
ColumnMetadata::decode_list(phy_raw_schemas).context(DecodeJsonSnafu)?;
} else {
warn!("The result of creating logical tables doesn't contain the extension key `{ALTER_PHYSICAL_EXTENSION_KEY}`, leaving the physical table's schema unchanged");
}
self.data.state = CreateTablesState::CreateMetadata;
Ok(Status::executing(true))
}
}
@@ -294,7 +196,7 @@ impl Procedure for CreateLogicalTablesProcedure {
}
async fn execute(&mut self, _ctx: &ProcedureContext) -> ProcedureResult<Status> {
let state = &self.data.state;
let _timer = metrics::METRIC_META_PROCEDURE_CREATE_TABLES
.with_label_values(&[state.as_ref()])
@@ -309,20 +211,20 @@ impl Procedure for CreateLogicalTablesProcedure {
}
fn dump(&self) -> ProcedureResult<String> {
serde_json::to_string(&self.data).context(ToJsonSnafu)
}
fn lock_key(&self) -> LockKey {
// CatalogLock, SchemaLock,
// TableLock
// TableNameLock(s)
let mut lock_key = Vec::with_capacity(2 + 1 + self.data.tasks.len());
let table_ref = self.data.tasks[0].table_ref();
lock_key.push(CatalogLock::Read(table_ref.catalog).into());
lock_key.push(SchemaLock::read(table_ref.catalog, table_ref.schema).into());
lock_key.push(TableLock::Write(self.data.physical_table_id).into());
for task in &self.data.tasks {
lock_key.push(
TableNameLock::new(
&task.create_table.catalog_name,
@@ -336,32 +238,6 @@ impl Procedure for CreateLogicalTablesProcedure {
}
}
pub struct TablesCreator {
/// The serializable data.
pub data: CreateTablesData,
}
impl TablesCreator {
pub fn new(
cluster_id: ClusterId,
tasks: Vec<CreateTableTask>,
physical_table_id: TableId,
) -> Self {
let len = tasks.len();
Self {
data: CreateTablesData {
cluster_id,
state: CreateTablesState::Prepare,
tasks,
table_ids_already_exists: vec![None; len],
physical_table_id,
physical_region_numbers: vec![],
},
}
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct CreateTablesData {
cluster_id: ClusterId,
@@ -370,6 +246,7 @@ pub struct CreateTablesData {
table_ids_already_exists: Vec<Option<TableId>>,
physical_table_id: TableId,
physical_region_numbers: Vec<RegionNumber>,
physical_columns: Vec<ColumnMetadata>,
}
impl CreateTablesData {
@@ -377,18 +254,6 @@ impl CreateTablesData {
&self.state
}
fn physical_table_id(&self) -> TableId {
self.physical_table_id
}
fn set_physical_region_numbers(&mut self, physical_region_numbers: Vec<RegionNumber>) {
self.physical_region_numbers = physical_region_numbers;
}
fn set_table_ids_already_exists(&mut self, table_ids_already_exists: Vec<Option<TableId>>) {
self.table_ids_already_exists = table_ids_already_exists;
}
fn all_create_table_exprs(&self) -> Vec<&CreateTableExpr> {
self.tasks
.iter()


@@ -0,0 +1,81 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use snafu::ensure;
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::error::{CreateLogicalTablesInvalidArgumentsSnafu, Result, TableAlreadyExistsSnafu};
use crate::key::table_name::TableNameKey;
impl CreateLogicalTablesProcedure {
pub(crate) fn check_input_tasks(&self) -> Result<()> {
self.check_schema()?;
Ok(())
}
pub(crate) async fn check_tables_already_exist(&mut self) -> Result<()> {
let table_name_keys = self
.data
.all_create_table_exprs()
.iter()
.map(|expr| TableNameKey::new(&expr.catalog_name, &expr.schema_name, &expr.table_name))
.collect::<Vec<_>>();
let table_ids_already_exists = self
.context
.table_metadata_manager
.table_name_manager()
.batch_get(table_name_keys)
.await?
.iter()
.map(|x| x.map(|x| x.table_id()))
.collect::<Vec<_>>();
self.data.table_ids_already_exists = table_ids_already_exists;
// Validates the tasks
let tasks = &mut self.data.tasks;
for (task, table_id) in tasks.iter().zip(self.data.table_ids_already_exists.iter()) {
if table_id.is_some() {
// The table already exists; it is skipped, which is only allowed when `create_if_not_exists` is set.
ensure!(
task.create_table.create_if_not_exists,
TableAlreadyExistsSnafu {
table_name: task.create_table.table_name.to_string(),
}
);
continue;
}
}
Ok(())
}
// Checks if the schemas of the tasks are the same
fn check_schema(&self) -> Result<()> {
let is_same_schema = self.data.tasks.windows(2).all(|pair| {
pair[0].create_table.catalog_name == pair[1].create_table.catalog_name
&& pair[0].create_table.schema_name == pair[1].create_table.schema_name
});
ensure!(
is_same_schema,
CreateLogicalTablesInvalidArgumentsSnafu {
err_msg: "Schemas of the tasks are not the same"
}
);
Ok(())
}
}
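The `windows(2)` check above asserts that every task agrees on a single (catalog, schema) pair before the procedure goes any further. A minimal, self-contained sketch of the same pairwise pattern, using plain tuples in place of the real `CreateTableTask` type (a simplification for illustration, not the actual API):

// Standalone sketch: verify that all tasks share one (catalog, schema) pair.
fn all_same_schema(schemas: &[(&str, &str)]) -> bool {
    schemas.windows(2).all(|pair| pair[0] == pair[1])
}

fn main() {
    assert!(all_same_schema(&[("greptime", "public"), ("greptime", "public")]));
    assert!(!all_same_schema(&[("greptime", "public"), ("greptime", "other")]));
    // Zero or one task trivially passes, matching `windows(2)` semantics.
    assert!(all_same_schema(&[]));
}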


@@ -0,0 +1,57 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::error::Result;
use crate::key::table_route::TableRouteValue;
impl CreateLogicalTablesProcedure {
pub(crate) async fn fill_physical_table_info(&mut self) -> Result<()> {
let physical_region_numbers = self
.context
.table_metadata_manager
.table_route_manager()
.get_physical_table_route(self.data.physical_table_id)
.await
.map(|(_, route)| TableRouteValue::Physical(route).region_numbers())?;
self.data.physical_region_numbers = physical_region_numbers;
Ok(())
}
pub(crate) async fn allocate_table_ids(&mut self) -> Result<()> {
for (task, table_id) in self
.data
.tasks
.iter_mut()
.zip(self.data.table_ids_already_exists.iter())
{
let table_id = if let Some(table_id) = table_id {
*table_id
} else {
self.context
.table_metadata_allocator
.allocate_table_id(task)
.await?
};
task.set_table_id(table_id);
// sort columns in task
task.sort_columns();
}
Ok(())
}
}
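The loop above keeps the existing id for tables that already exist (the `create_if_not_exists` case) and asks the allocator only for the remaining tasks, so re-submitting the same batch yields stable ids. A standalone sketch of that reuse-or-allocate shape, with a plain counter standing in for the sequence-backed allocator (simplified types, not the real metadata API):

// Standalone sketch of the reuse-or-allocate loop over per-table lookups.
fn assign_ids(existing: &[Option<u32>], next_id: &mut u32) -> Vec<u32> {
    existing
        .iter()
        .map(|maybe_id| match maybe_id {
            // The table already exists: keep its id.
            Some(id) => *id,
            // Fresh table: take the next id from the (mocked) sequence.
            None => {
                let id = *next_id;
                *next_id += 1;
                id
            }
        })
        .collect()
}

fn main() {
    let mut next = 1000;
    assert_eq!(assign_ids(&[Some(42), None, None], &mut next), vec![42, 1000, 1001]);
}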


@@ -0,0 +1,74 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use api::v1::region::{region_request, CreateRequests, RegionRequest, RegionRequestHeader};
use common_telemetry::tracing_context::TracingContext;
use store_api::storage::RegionId;
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::ddl::create_table_template::{build_template, CreateRequestBuilder};
use crate::ddl::utils::region_storage_path;
use crate::error::Result;
use crate::peer::Peer;
use crate::rpc::ddl::CreateTableTask;
use crate::rpc::router::{find_leader_regions, RegionRoute};
impl CreateLogicalTablesProcedure {
pub(crate) fn make_request(
&self,
peer: &Peer,
region_routes: &[RegionRoute],
) -> Result<RegionRequest> {
let tasks = &self.data.tasks;
let regions_on_this_peer = find_leader_regions(region_routes, peer);
let mut requests = Vec::with_capacity(tasks.len() * regions_on_this_peer.len());
for task in tasks {
let create_table_expr = &task.create_table;
let catalog = &create_table_expr.catalog_name;
let schema = &create_table_expr.schema_name;
let logical_table_id = task.table_info.ident.table_id;
let storage_path = region_storage_path(catalog, schema);
let request_builder = self.create_region_request_builder(task)?;
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(logical_table_id, *region_number);
let one_region_request =
request_builder.build_one(region_id, storage_path.clone(), &HashMap::new())?;
requests.push(one_region_request);
}
}
Ok(RegionRequest {
header: Some(RegionRequestHeader {
tracing_context: TracingContext::from_current_span().to_w3c(),
..Default::default()
}),
body: Some(region_request::Body::Creates(CreateRequests { requests })),
})
}
fn create_region_request_builder(
&self,
task: &CreateTableTask,
) -> Result<CreateRequestBuilder> {
let create_expr = &task.create_table;
let template = build_template(create_expr)?;
Ok(CreateRequestBuilder::new(
template,
Some(self.data.physical_table_id),
))
}
}
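Each create request above targets a `RegionId` built from the logical table id and a region number. The sketch below shows one plausible way such a composite id can be packed into a single `u64`; the actual layout of `store_api::storage::RegionId` is not shown in this diff, so treat the 32/32 split as an assumption made purely for illustration:

// Standalone sketch: pack a (table_id, region_number) pair into one u64 id.
// The exact bit layout is an assumption, not the real RegionId implementation.
fn pack_region_id(table_id: u32, region_number: u32) -> u64 {
    ((table_id as u64) << 32) | region_number as u64
}

fn unpack_region_id(region_id: u64) -> (u32, u32) {
    ((region_id >> 32) as u32, region_id as u32)
}

fn main() {
    let id = pack_region_id(1024, 3);
    assert_eq!(unpack_region_id(id), (1024, 3));
}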


@@ -0,0 +1,128 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::ops::Deref;
use common_telemetry::{info, warn};
use itertools::Itertools;
use snafu::OptionExt;
use table::metadata::TableId;
use crate::cache_invalidator::Context;
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::ddl::physical_table_metadata;
use crate::error::{Result, TableInfoNotFoundSnafu};
use crate::instruction::CacheIdent;
use crate::table_name::TableName;
impl CreateLogicalTablesProcedure {
pub(crate) async fn update_physical_table_metadata(&mut self) -> Result<()> {
if self.data.physical_columns.is_empty() {
warn!("No physical columns found, leaving the physical table's schema unchanged when creating logical tables");
return Ok(());
}
// Fetches old physical table's info
let physical_table_info = self
.context
.table_metadata_manager
.table_info_manager()
.get(self.data.physical_table_id)
.await?
.with_context(|| TableInfoNotFoundSnafu {
table: format!("table id - {}", self.data.physical_table_id),
})?;
// Generates new table info
let raw_table_info = physical_table_info.deref().table_info.clone();
let new_table_info = physical_table_metadata::build_new_physical_table_info(
raw_table_info,
&self.data.physical_columns,
);
let physical_table_name = TableName::new(
&new_table_info.catalog_name,
&new_table_info.schema_name,
&new_table_info.name,
);
// Update physical table's metadata
self.context
.table_metadata_manager
.update_table_info(physical_table_info, new_table_info)
.await?;
// Invalidates the physical table cache
self.context
.cache_invalidator
.invalidate(
&Context::default(),
vec![
CacheIdent::TableId(self.data.physical_table_id),
CacheIdent::TableName(physical_table_name),
],
)
.await?;
Ok(())
}
pub(crate) async fn create_logical_tables_metadata(&mut self) -> Result<Vec<TableId>> {
let remaining_tasks = self.data.remaining_tasks();
let num_tables = remaining_tasks.len();
if num_tables > 0 {
let chunk_size = self
.context
.table_metadata_manager
.create_logical_tables_metadata_chunk_size();
if num_tables > chunk_size {
let chunks = remaining_tasks
.into_iter()
.chunks(chunk_size)
.into_iter()
.map(|chunk| chunk.collect::<Vec<_>>())
.collect::<Vec<_>>();
for chunk in chunks {
self.context
.table_metadata_manager
.create_logical_tables_metadata(chunk)
.await?;
}
} else {
self.context
.table_metadata_manager
.create_logical_tables_metadata(remaining_tasks)
.await?;
}
}
// The `table_id`s MUST be collected after the [Prepare::Prepare] step,
// which ensures that all `table_id`s have been allocated.
let table_ids = self
.data
.tasks
.iter()
.map(|task| task.table_info.ident.table_id)
.collect::<Vec<_>>();
info!(
"Created {num_tables} tables {table_ids:?} metadata for physical table {}",
self.data.physical_table_id
);
Ok(table_ids)
}
}
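The batching above uses `itertools::Itertools::chunks` to cap how many logical tables go into a single metadata write. A small, self-contained sketch of the same chunking pattern, assuming the `itertools` crate that the file above already imports (the integers and the batch size of 3 are placeholders for the real tasks and `create_logical_tables_metadata_chunk_size`):

use itertools::Itertools;

fn main() {
    let tasks: Vec<u32> = (0..8).collect();
    let chunk_size = 3;
    // Materialize each chunk so batches can be handed off one at a time,
    // mirroring one metadata write per chunk in the procedure above.
    let batches: Vec<Vec<u32>> = tasks
        .into_iter()
        .chunks(chunk_size)
        .into_iter()
        .map(|chunk| chunk.collect())
        .collect();
    assert_eq!(batches, vec![vec![0, 1, 2], vec![3, 4, 5], vec![6, 7]]);
}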


@@ -35,9 +35,9 @@ use table::table_reference::TableReference;
use crate::ddl::create_table_template::{build_template, CreateRequestBuilder};
use crate::ddl::utils::{add_peer_context_if_needed, handle_retry_error, region_storage_path};
use crate::ddl::{DdlContext, TableMetadata, TableMetadataAllocatorContext};
use crate::error::{self, Result};
use crate::key::table_name::TableNameKey;
use crate::key::table_route::{PhysicalTableRouteValue, TableRouteValue};
use crate::lock_key::{CatalogLock, SchemaLock, TableNameLock};
use crate::region_keeper::OperatingRegionGuard;
use crate::rpc::ddl::CreateTableTask;
@@ -69,7 +69,7 @@ impl CreateTableProcedure {
};
// Only registers regions if the table route is allocated.
if let Some(x) = &creator.data.table_route {
creator.opening_regions = creator
.register_opening_regions(&context, &x.region_routes)
.map_err(BoxedError::new)
@@ -97,7 +97,7 @@ impl CreateTableProcedure {
})
}
fn table_route(&self) -> Result<&PhysicalTableRouteValue> {
self.creator
.data
.table_route
@@ -111,7 +111,7 @@ impl CreateTableProcedure {
pub fn set_allocated_metadata(
&mut self,
table_id: TableId,
table_route: PhysicalTableRouteValue,
region_wal_options: HashMap<RegionNumber, String>,
) {
self.creator
@@ -192,32 +192,10 @@ impl CreateTableProcedure {
/// - [Code::DeadlineExceeded](tonic::status::Code::DeadlineExceeded)
/// - [Code::Unavailable](tonic::status::Code::Unavailable)
pub async fn on_datanode_create_regions(&mut self) -> Result<Status> {
// Safety: the table route must be allocated.
let table_route = self.table_route()?.clone();
let request_builder = self.new_region_request_builder(None)?;
self.create_regions(&table_route.region_routes, request_builder)
.await
}
async fn create_regions(
@@ -225,15 +203,12 @@ impl CreateTableProcedure {
region_routes: &[RegionRoute],
request_builder: CreateRequestBuilder,
) -> Result<Status> {
// Registers opening regions
let guards = self
.creator
.register_opening_regions(&self.context, region_routes)?;
if !guards.is_empty() {
self.creator.opening_regions = guards;
}
let create_table_data = &self.creator.data;
@@ -288,9 +263,8 @@ impl CreateTableProcedure {
self.creator.data.state = CreateTableState::CreateMetadata;
// TODO(weny): Add more tests.
Ok(Status::executing(true))
}
/// Creates table metadata
@@ -305,7 +279,7 @@ impl CreateTableProcedure {
// Safety: the region_wal_options must be allocated.
let region_wal_options = self.region_wal_options()?.clone();
// Safety: the table_route must be allocated.
let table_route = TableRouteValue::Physical(self.table_route()?.clone());
manager
.create_table_metadata(raw_table_info, table_route, region_wal_options)
.await?;
@@ -402,7 +376,7 @@ impl TableCreator {
fn set_allocated_metadata(
&mut self,
table_id: TableId,
table_route: PhysicalTableRouteValue,
region_wal_options: HashMap<RegionNumber, String>,
) {
self.data.task.table_info.ident.table_id = table_id;
@@ -426,7 +400,7 @@ pub struct CreateTableData {
pub state: CreateTableState,
pub task: CreateTableTask,
/// None stands for not allocated yet.
table_route: Option<PhysicalTableRouteValue>,
/// None stands for not allocated yet.
pub region_wal_options: Option<HashMap<RegionNumber, String>>,
pub cluster_id: ClusterId,


@@ -0,0 +1,175 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod cursor;
pub mod end;
pub mod executor;
pub mod metadata;
pub mod start;
use std::any::Any;
use std::fmt::Debug;
use common_procedure::error::{Error as ProcedureError, FromJsonSnafu, ToJsonSnafu};
use common_procedure::{
Context as ProcedureContext, LockKey, Procedure, Result as ProcedureResult, Status,
};
use futures::stream::BoxStream;
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use tonic::async_trait;
use self::start::DropDatabaseStart;
use crate::ddl::DdlContext;
use crate::error::Result;
use crate::key::table_name::TableNameValue;
use crate::lock_key::{CatalogLock, SchemaLock};
pub struct DropDatabaseProcedure {
/// The context of procedure runtime.
runtime_context: DdlContext,
context: DropDatabaseContext,
state: Box<dyn State>,
}
/// Target of dropping tables.
#[derive(Debug, Clone, Copy, Serialize, Deserialize, PartialEq, Eq)]
pub(crate) enum DropTableTarget {
Logical,
Physical,
}
/// Context of [DropDatabaseProcedure] execution.
pub(crate) struct DropDatabaseContext {
catalog: String,
schema: String,
drop_if_exists: bool,
tables: Option<BoxStream<'static, Result<(String, TableNameValue)>>>,
}
#[async_trait::async_trait]
#[typetag::serde(tag = "drop_database_state")]
pub(crate) trait State: Send + Debug {
/// Yields the next [State] and [Status].
async fn next(
&mut self,
ddl_ctx: &DdlContext,
ctx: &mut DropDatabaseContext,
) -> Result<(Box<dyn State>, Status)>;
/// Returns as [Any](std::any::Any).
fn as_any(&self) -> &dyn Any;
}
impl DropDatabaseProcedure {
pub const TYPE_NAME: &'static str = "metasrv-procedure::DropDatabase";
pub fn new(catalog: String, schema: String, drop_if_exists: bool, context: DdlContext) -> Self {
Self {
runtime_context: context,
context: DropDatabaseContext {
catalog,
schema,
drop_if_exists,
tables: None,
},
state: Box::new(DropDatabaseStart),
}
}
pub fn from_json(json: &str, runtime_context: DdlContext) -> ProcedureResult<Self> {
let DropDatabaseOwnedData {
catalog,
schema,
drop_if_exists,
state,
} = serde_json::from_str(json).context(FromJsonSnafu)?;
Ok(Self {
runtime_context,
context: DropDatabaseContext {
catalog,
schema,
drop_if_exists,
tables: None,
},
state,
})
}
}
#[async_trait]
impl Procedure for DropDatabaseProcedure {
fn type_name(&self) -> &str {
Self::TYPE_NAME
}
async fn execute(&mut self, _ctx: &ProcedureContext) -> ProcedureResult<Status> {
let state = &mut self.state;
let (next, status) = state
.next(&self.runtime_context, &mut self.context)
.await
.map_err(|e| {
if e.is_retry_later() {
ProcedureError::retry_later(e)
} else {
ProcedureError::external(e)
}
})?;
*state = next;
Ok(status)
}
fn dump(&self) -> ProcedureResult<String> {
let data = DropDatabaseData {
catalog: &self.context.catalog,
schema: &self.context.schema,
drop_if_exists: self.context.drop_if_exists,
state: self.state.as_ref(),
};
serde_json::to_string(&data).context(ToJsonSnafu)
}
fn lock_key(&self) -> LockKey {
let lock_key = vec![
CatalogLock::Read(&self.context.catalog).into(),
SchemaLock::write(&self.context.catalog, &self.context.schema).into(),
];
LockKey::new(lock_key)
}
}
#[derive(Debug, Serialize)]
struct DropDatabaseData<'a> {
// The catalog name
catalog: &'a str,
// The schema name
schema: &'a str,
drop_if_exists: bool,
state: &'a dyn State,
}
#[derive(Debug, Deserialize)]
struct DropDatabaseOwnedData {
// The catalog name
catalog: String,
// The schema name
schema: String,
drop_if_exists: bool,
state: Box<dyn State>,
}
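The `State` trait above, driven by `Procedure::execute`, turns dropping a database into a small state machine: each state returns its successor plus a `Status`, and the runtime keeps ticking until a terminal status. A self-contained sketch of that driver shape, with synchronous, non-serializable stand-ins for the real async states (purely illustrative, not the GreptimeDB API):

// Standalone sketch of the "each state yields (next state, done?)" driver.
trait Step {
    fn next(self: Box<Self>) -> (Box<dyn Step>, bool);
}

struct Start;
struct End;

impl Step for Start {
    fn next(self: Box<Self>) -> (Box<dyn Step>, bool) {
        // Hand over to the terminal state; the procedure is not done yet.
        (Box::new(End), false)
    }
}

impl Step for End {
    fn next(self: Box<Self>) -> (Box<dyn Step>, bool) {
        // Terminal state: report completion.
        (self, true)
    }
}

fn main() {
    let mut state: Box<dyn Step> = Box::new(Start);
    let mut ticks = 0;
    loop {
        let (next, done) = state.next();
        state = next;
        ticks += 1;
        if done {
            break;
        }
    }
    assert_eq!(ticks, 2);
}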


@@ -0,0 +1,247 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use common_procedure::Status;
use futures::TryStreamExt;
use serde::{Deserialize, Serialize};
use table::metadata::TableId;
use super::executor::DropDatabaseExecutor;
use super::metadata::DropDatabaseRemoveMetadata;
use super::DropTableTarget;
use crate::ddl::drop_database::{DropDatabaseContext, State};
use crate::ddl::DdlContext;
use crate::error::Result;
use crate::key::table_route::TableRouteValue;
use crate::table_name::TableName;
#[derive(Debug, Serialize, Deserialize)]
pub(crate) struct DropDatabaseCursor {
pub(crate) target: DropTableTarget,
}
impl DropDatabaseCursor {
/// Returns a new [DropDatabaseCursor].
pub fn new(target: DropTableTarget) -> Self {
Self { target }
}
fn handle_reach_end(
&mut self,
ctx: &mut DropDatabaseContext,
) -> Result<(Box<dyn State>, Status)> {
// Consumes the tables stream.
ctx.tables.take();
match self.target {
DropTableTarget::Logical => Ok((
Box::new(DropDatabaseCursor::new(DropTableTarget::Physical)),
Status::executing(true),
)),
DropTableTarget::Physical => Ok((
Box::new(DropDatabaseRemoveMetadata),
Status::executing(true),
)),
}
}
async fn handle_table(
&mut self,
ddl_ctx: &DdlContext,
ctx: &mut DropDatabaseContext,
table_name: String,
table_id: TableId,
table_route_value: TableRouteValue,
) -> Result<(Box<dyn State>, Status)> {
match (self.target, table_route_value) {
(DropTableTarget::Logical, TableRouteValue::Logical(route)) => {
let physical_table_id = route.physical_table_id();
let (_, table_route) = ddl_ctx
.table_metadata_manager
.table_route_manager()
.get_physical_table_route(physical_table_id)
.await?;
Ok((
Box::new(DropDatabaseExecutor::new(
table_id,
TableName::new(&ctx.catalog, &ctx.schema, &table_name),
table_route.region_routes,
self.target,
)),
Status::executing(true),
))
}
(DropTableTarget::Physical, TableRouteValue::Physical(table_route)) => Ok((
Box::new(DropDatabaseExecutor::new(
table_id,
TableName::new(&ctx.catalog, &ctx.schema, &table_name),
table_route.region_routes,
self.target,
)),
Status::executing(true),
)),
_ => Ok((
Box::new(DropDatabaseCursor::new(self.target)),
Status::executing(false),
)),
}
}
}
#[async_trait::async_trait]
#[typetag::serde]
impl State for DropDatabaseCursor {
async fn next(
&mut self,
ddl_ctx: &DdlContext,
ctx: &mut DropDatabaseContext,
) -> Result<(Box<dyn State>, Status)> {
if ctx.tables.as_deref().is_none() {
let tables = ddl_ctx
.table_metadata_manager
.table_name_manager()
.tables(&ctx.catalog, &ctx.schema);
ctx.tables = Some(tables);
}
// Safety: must exist
match ctx.tables.as_mut().unwrap().try_next().await? {
Some((table_name, table_name_value)) => {
let table_id = table_name_value.table_id();
match ddl_ctx
.table_metadata_manager
.table_route_manager()
.table_route_storage()
.get(table_id)
.await?
{
Some(table_route_value) => {
self.handle_table(ddl_ctx, ctx, table_name, table_id, table_route_value)
.await
}
None => Ok((
Box::new(DropDatabaseCursor::new(self.target)),
Status::executing(false),
)),
}
}
None => self.handle_reach_end(ctx),
}
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use crate::ddl::drop_database::cursor::DropDatabaseCursor;
use crate::ddl::drop_database::executor::DropDatabaseExecutor;
use crate::ddl::drop_database::metadata::DropDatabaseRemoveMetadata;
use crate::ddl::drop_database::{DropDatabaseContext, DropTableTarget, State};
use crate::ddl::test_util::{create_logical_table, create_physical_table};
use crate::test_util::{new_ddl_context, MockDatanodeManager};
#[tokio::test]
async fn test_next_without_logical_tables() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
create_physical_table(ddl_context.clone(), 0, "phy").await;
// It always starts from Logical
let mut state = DropDatabaseCursor::new(DropTableTarget::Logical);
let mut ctx = DropDatabaseContext {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
drop_if_exists: false,
tables: None,
};
// Ticks
let (mut state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(!status.need_persist());
let cursor = state.as_any().downcast_ref::<DropDatabaseCursor>().unwrap();
assert_eq!(cursor.target, DropTableTarget::Logical);
// Ticks
let (mut state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(status.need_persist());
assert!(ctx.tables.is_none());
let cursor = state.as_any().downcast_ref::<DropDatabaseCursor>().unwrap();
assert_eq!(cursor.target, DropTableTarget::Physical);
// Ticks
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(status.need_persist());
let executor = state
.as_any()
.downcast_ref::<DropDatabaseExecutor>()
.unwrap();
assert_eq!(executor.target, DropTableTarget::Physical);
}
#[tokio::test]
async fn test_next_with_logical_tables() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
let physical_table_id = create_physical_table(ddl_context.clone(), 0, "phy").await;
create_logical_table(ddl_context.clone(), 0, physical_table_id, "metric_0").await;
// It always starts from Logical
let mut state = DropDatabaseCursor::new(DropTableTarget::Logical);
let mut ctx = DropDatabaseContext {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
drop_if_exists: false,
tables: None,
};
// Ticks
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(status.need_persist());
let executor = state
.as_any()
.downcast_ref::<DropDatabaseExecutor>()
.unwrap();
let (_, table_route) = ddl_context
.table_metadata_manager
.table_route_manager()
.get_physical_table_route(physical_table_id)
.await
.unwrap();
assert_eq!(table_route.region_routes, executor.region_routes);
assert_eq!(executor.target, DropTableTarget::Logical);
}
#[tokio::test]
async fn test_reach_the_end() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
let mut state = DropDatabaseCursor::new(DropTableTarget::Physical);
let mut ctx = DropDatabaseContext {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
drop_if_exists: false,
tables: None,
};
// Ticks
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(status.need_persist());
state
.as_any()
.downcast_ref::<DropDatabaseRemoveMetadata>()
.unwrap();
assert!(ctx.tables.is_none());
}
}


@@ -0,0 +1,41 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use common_procedure::Status;
use serde::{Deserialize, Serialize};
use crate::ddl::drop_database::{DropDatabaseContext, State};
use crate::ddl::DdlContext;
use crate::error::Result;
#[derive(Debug, Serialize, Deserialize)]
pub(crate) struct DropDatabaseEnd;
#[async_trait::async_trait]
#[typetag::serde]
impl State for DropDatabaseEnd {
async fn next(
&mut self,
_: &DdlContext,
_: &mut DropDatabaseContext,
) -> Result<(Box<dyn State>, Status)> {
Ok((Box::new(DropDatabaseEnd), Status::done()))
}
fn as_any(&self) -> &dyn Any {
self
}
}


@@ -0,0 +1,296 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use common_procedure::Status;
use common_telemetry::info;
use serde::{Deserialize, Serialize};
use snafu::OptionExt;
use table::metadata::TableId;
use super::cursor::DropDatabaseCursor;
use super::{DropDatabaseContext, DropTableTarget};
use crate::ddl::drop_database::State;
use crate::ddl::drop_table::executor::DropTableExecutor;
use crate::ddl::DdlContext;
use crate::error::{self, Result};
use crate::region_keeper::OperatingRegionGuard;
use crate::rpc::router::{operating_leader_regions, RegionRoute};
use crate::table_name::TableName;
#[derive(Debug, Serialize, Deserialize)]
pub(crate) struct DropDatabaseExecutor {
table_id: TableId,
table_name: TableName,
pub(crate) region_routes: Vec<RegionRoute>,
pub(crate) target: DropTableTarget,
#[serde(skip)]
dropping_regions: Vec<OperatingRegionGuard>,
}
impl DropDatabaseExecutor {
/// Returns a new [DropDatabaseExecutor].
pub fn new(
table_id: TableId,
table_name: TableName,
region_routes: Vec<RegionRoute>,
target: DropTableTarget,
) -> Self {
Self {
table_name,
table_id,
region_routes,
target,
dropping_regions: vec![],
}
}
}
impl DropDatabaseExecutor {
fn register_dropping_regions(&mut self, ddl_ctx: &DdlContext) -> Result<()> {
let dropping_regions = operating_leader_regions(&self.region_routes);
let mut dropping_region_guards = Vec::with_capacity(dropping_regions.len());
for (region_id, datanode_id) in dropping_regions {
let guard = ddl_ctx
.memory_region_keeper
.register(datanode_id, region_id)
.context(error::RegionOperatingRaceSnafu {
region_id,
peer_id: datanode_id,
})?;
dropping_region_guards.push(guard);
}
self.dropping_regions = dropping_region_guards;
Ok(())
}
}
#[async_trait::async_trait]
#[typetag::serde]
impl State for DropDatabaseExecutor {
async fn next(
&mut self,
ddl_ctx: &DdlContext,
_ctx: &mut DropDatabaseContext,
) -> Result<(Box<dyn State>, Status)> {
self.register_dropping_regions(ddl_ctx)?;
let executor = DropTableExecutor::new(self.table_name.clone(), self.table_id, true);
executor
.on_remove_metadata(ddl_ctx, &self.region_routes)
.await?;
executor.invalidate_table_cache(ddl_ctx).await?;
executor
.on_drop_regions(ddl_ctx, &self.region_routes)
.await?;
info!("Table: {}({}) is dropped", self.table_name, self.table_id);
Ok((
Box::new(DropDatabaseCursor::new(self.target)),
Status::executing(false),
))
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use api::v1::region::{QueryRequest, RegionRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::ext::BoxedError;
use common_recordbatch::SendableRecordBatchStream;
use crate::datanode_manager::HandleResponse;
use crate::ddl::drop_database::cursor::DropDatabaseCursor;
use crate::ddl::drop_database::executor::DropDatabaseExecutor;
use crate::ddl::drop_database::{DropDatabaseContext, DropTableTarget, State};
use crate::ddl::test_util::{create_logical_table, create_physical_table};
use crate::error::{self, Error, Result};
use crate::peer::Peer;
use crate::table_name::TableName;
use crate::test_util::{new_ddl_context, MockDatanodeHandler, MockDatanodeManager};
#[derive(Clone)]
pub struct NaiveDatanodeHandler;
#[async_trait::async_trait]
impl MockDatanodeHandler for NaiveDatanodeHandler {
async fn handle(&self, _peer: &Peer, _request: RegionRequest) -> Result<HandleResponse> {
Ok(HandleResponse::new(0))
}
async fn handle_query(
&self,
_peer: &Peer,
_request: QueryRequest,
) -> Result<SendableRecordBatchStream> {
unreachable!()
}
}
#[tokio::test]
async fn test_next_with_physical_table() {
let datanode_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
let ddl_context = new_ddl_context(datanode_manager);
let physical_table_id = create_physical_table(ddl_context.clone(), 0, "phy").await;
let (_, table_route) = ddl_context
.table_metadata_manager
.table_route_manager()
.get_physical_table_route(physical_table_id)
.await
.unwrap();
{
let mut state = DropDatabaseExecutor::new(
physical_table_id,
TableName::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "phy"),
table_route.region_routes.clone(),
DropTableTarget::Physical,
);
let mut ctx = DropDatabaseContext {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
drop_if_exists: false,
tables: None,
};
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(!status.need_persist());
let cursor = state.as_any().downcast_ref::<DropDatabaseCursor>().unwrap();
assert_eq!(cursor.target, DropTableTarget::Physical);
}
// Execute again
let mut ctx = DropDatabaseContext {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
drop_if_exists: false,
tables: None,
};
let mut state = DropDatabaseExecutor::new(
physical_table_id,
TableName::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "phy"),
table_route.region_routes,
DropTableTarget::Physical,
);
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(!status.need_persist());
let cursor = state.as_any().downcast_ref::<DropDatabaseCursor>().unwrap();
assert_eq!(cursor.target, DropTableTarget::Physical);
}
#[tokio::test]
async fn test_next_logical_table() {
let datanode_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
let ddl_context = new_ddl_context(datanode_manager);
let physical_table_id = create_physical_table(ddl_context.clone(), 0, "phy").await;
create_logical_table(ddl_context.clone(), 0, physical_table_id, "metric").await;
let logical_table_id = physical_table_id + 1;
let (_, table_route) = ddl_context
.table_metadata_manager
.table_route_manager()
.get_physical_table_route(logical_table_id)
.await
.unwrap();
{
let mut state = DropDatabaseExecutor::new(
physical_table_id,
TableName::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "metric"),
table_route.region_routes.clone(),
DropTableTarget::Logical,
);
let mut ctx = DropDatabaseContext {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
drop_if_exists: false,
tables: None,
};
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(!status.need_persist());
let cursor = state.as_any().downcast_ref::<DropDatabaseCursor>().unwrap();
assert_eq!(cursor.target, DropTableTarget::Logical);
}
// Execute again
let mut ctx = DropDatabaseContext {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
drop_if_exists: false,
tables: None,
};
let mut state = DropDatabaseExecutor::new(
physical_table_id,
TableName::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "phy"),
table_route.region_routes,
DropTableTarget::Logical,
);
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
assert!(!status.need_persist());
let cursor = state.as_any().downcast_ref::<DropDatabaseCursor>().unwrap();
assert_eq!(cursor.target, DropTableTarget::Logical);
}
#[derive(Clone)]
pub struct RetryErrorDatanodeHandler;
#[async_trait::async_trait]
impl MockDatanodeHandler for RetryErrorDatanodeHandler {
async fn handle(&self, _peer: &Peer, _request: RegionRequest) -> Result<HandleResponse> {
Err(Error::RetryLater {
source: BoxedError::new(
error::UnexpectedSnafu {
err_msg: "retry later",
}
.build(),
),
})
}
async fn handle_query(
&self,
_peer: &Peer,
_request: QueryRequest,
) -> Result<SendableRecordBatchStream> {
unreachable!()
}
}
#[tokio::test]
async fn test_next_retryable_err() {
let datanode_manager = Arc::new(MockDatanodeManager::new(RetryErrorDatanodeHandler));
let ddl_context = new_ddl_context(datanode_manager);
let physical_table_id = create_physical_table(ddl_context.clone(), 0, "phy").await;
let (_, table_route) = ddl_context
.table_metadata_manager
.table_route_manager()
.get_physical_table_route(physical_table_id)
.await
.unwrap();
let mut state = DropDatabaseExecutor::new(
physical_table_id,
TableName::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "phy"),
table_route.region_routes,
DropTableTarget::Physical,
);
let mut ctx = DropDatabaseContext {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
drop_if_exists: false,
tables: None,
};
let err = state.next(&ddl_context, &mut ctx).await.unwrap_err();
assert!(err.is_retry_later());
}
}


@@ -0,0 +1,99 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use common_procedure::Status;
use serde::{Deserialize, Serialize};
use super::end::DropDatabaseEnd;
use crate::ddl::drop_database::{DropDatabaseContext, State};
use crate::ddl::DdlContext;
use crate::error::Result;
use crate::key::schema_name::SchemaNameKey;
#[derive(Debug, Serialize, Deserialize)]
pub(crate) struct DropDatabaseRemoveMetadata;
#[async_trait::async_trait]
#[typetag::serde]
impl State for DropDatabaseRemoveMetadata {
async fn next(
&mut self,
ddl_ctx: &DdlContext,
ctx: &mut DropDatabaseContext,
) -> Result<(Box<dyn State>, Status)> {
ddl_ctx
.table_metadata_manager
.schema_manager()
.delete(SchemaNameKey::new(&ctx.catalog, &ctx.schema))
.await?;
return Ok((Box::new(DropDatabaseEnd), Status::done()));
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use crate::ddl::drop_database::end::DropDatabaseEnd;
use crate::ddl::drop_database::metadata::DropDatabaseRemoveMetadata;
use crate::ddl::drop_database::{DropDatabaseContext, State};
use crate::key::schema_name::SchemaNameKey;
use crate::test_util::{new_ddl_context, MockDatanodeManager};
#[tokio::test]
async fn test_next() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
ddl_context
.table_metadata_manager
.schema_manager()
.create(SchemaNameKey::new("foo", "bar"), None, true)
.await
.unwrap();
let mut state = DropDatabaseRemoveMetadata;
let mut ctx = DropDatabaseContext {
catalog: "foo".to_string(),
schema: "bar".to_string(),
drop_if_exists: true,
tables: None,
};
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
state.as_any().downcast_ref::<DropDatabaseEnd>().unwrap();
assert!(status.is_done());
assert!(!ddl_context
.table_metadata_manager
.schema_manager()
.exists(SchemaNameKey::new("foo", "bar"))
.await
.unwrap());
// Schema not exists
let mut state = DropDatabaseRemoveMetadata;
let mut ctx = DropDatabaseContext {
catalog: "foo".to_string(),
schema: "bar".to_string(),
drop_if_exists: true,
tables: None,
};
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
state.as_any().downcast_ref::<DropDatabaseEnd>().unwrap();
assert!(status.is_done());
}
}


@@ -0,0 +1,138 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use common_procedure::Status;
use serde::{Deserialize, Serialize};
use snafu::ensure;
use crate::ddl::drop_database::cursor::DropDatabaseCursor;
use crate::ddl::drop_database::end::DropDatabaseEnd;
use crate::ddl::drop_database::{DropDatabaseContext, DropTableTarget, State};
use crate::ddl::DdlContext;
use crate::error::{self, Result};
use crate::key::schema_name::SchemaNameKey;
#[derive(Debug, Serialize, Deserialize)]
pub(crate) struct DropDatabaseStart;
#[async_trait::async_trait]
#[typetag::serde]
impl State for DropDatabaseStart {
/// Checks whether the schema exists.
/// - Returns early if the schema does not exist and `drop_if_exists` is `true`.
/// - Returns an error if the schema does not exist and `drop_if_exists` is `false`.
async fn next(
&mut self,
ddl_ctx: &DdlContext,
ctx: &mut DropDatabaseContext,
) -> Result<(Box<dyn State>, Status)> {
let exists = ddl_ctx
.table_metadata_manager
.schema_manager()
.exists(SchemaNameKey {
catalog: &ctx.catalog,
schema: &ctx.schema,
})
.await?;
if !exists && ctx.drop_if_exists {
return Ok((Box::new(DropDatabaseEnd), Status::done()));
}
ensure!(
exists,
error::SchemaNotFoundSnafu {
table_schema: &ctx.schema,
}
);
Ok((
Box::new(DropDatabaseCursor::new(DropTableTarget::Logical)),
Status::executing(true),
))
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use std::sync::Arc;
use crate::ddl::drop_database::cursor::DropDatabaseCursor;
use crate::ddl::drop_database::end::DropDatabaseEnd;
use crate::ddl::drop_database::start::DropDatabaseStart;
use crate::ddl::drop_database::{DropDatabaseContext, State};
use crate::error;
use crate::key::schema_name::SchemaNameKey;
use crate::test_util::{new_ddl_context, MockDatanodeManager};
#[tokio::test]
async fn test_schema_not_exists_err() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
let mut step = DropDatabaseStart;
let mut ctx = DropDatabaseContext {
catalog: "foo".to_string(),
schema: "bar".to_string(),
drop_if_exists: false,
tables: None,
};
let err = step.next(&ddl_context, &mut ctx).await.unwrap_err();
assert_matches!(err, error::Error::SchemaNotFound { .. });
}
#[tokio::test]
async fn test_schema_not_exists() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
let mut state = DropDatabaseStart;
let mut ctx = DropDatabaseContext {
catalog: "foo".to_string(),
schema: "bar".to_string(),
drop_if_exists: true,
tables: None,
};
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
state.as_any().downcast_ref::<DropDatabaseEnd>().unwrap();
assert!(status.is_done());
}
#[tokio::test]
async fn test_next() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
ddl_context
.table_metadata_manager
.schema_manager()
.create(SchemaNameKey::new("foo", "bar"), None, true)
.await
.unwrap();
let mut state = DropDatabaseStart;
let mut ctx = DropDatabaseContext {
catalog: "foo".to_string(),
schema: "bar".to_string(),
drop_if_exists: false,
tables: None,
};
let (state, status) = state.next(&ddl_context, &mut ctx).await.unwrap();
state.as_any().downcast_ref::<DropDatabaseCursor>().unwrap();
assert!(status.need_persist());
}
}


@@ -27,7 +27,7 @@ use table::metadata::{RawTableInfo, TableId};
use table::table_reference::TableReference;
use self::executor::DropTableExecutor;
use crate::ddl::utils::handle_retry_error;
use crate::ddl::DdlContext;
use crate::error::{self, Result};
use crate::key::table_info::TableInfoValue;
@@ -121,11 +121,7 @@ impl DropTableProcedure {
// TODO(weny): Considers introducing a RegionStatus to indicate the region is dropping.
let table_id = self.data.table_id();
executor
.on_remove_metadata(&self.context, self.data.region_routes()?)
.await?;
info!("Deleted table metadata for table {table_id}");
self.data.state = DropTableState::InvalidateTableCache;
@@ -142,7 +138,7 @@ impl DropTableProcedure {
pub async fn on_datanode_drop_regions(&self, executor: &DropTableExecutor) -> Result<Status> {
executor
.on_drop_regions(&self.context, self.data.region_routes()?)
.await?;
Ok(Status::done())
}
@@ -192,6 +188,7 @@ impl Procedure for DropTableProcedure {
}
#[derive(Debug, Serialize, Deserialize)]
/// TODO(weny): simplify the table data.
pub struct DropTableData {
pub state: DropTableState,
pub cluster_id: u64,


@@ -29,11 +29,8 @@ use crate::ddl::utils::add_peer_context_if_needed;
use crate::ddl::DdlContext;
use crate::error::{self, Result};
use crate::instruction::CacheIdent;
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteValue;
use crate::key::DeserializedValueWithBytes;
use crate::rpc::router::{find_leader_regions, find_leaders, RegionRoute};
use crate::table_name::TableName;
/// [Control] indicated to the caller whether to go to the next step.
@@ -106,11 +103,10 @@ impl DropTableExecutor {
pub async fn on_remove_metadata(
&self,
ctx: &DdlContext,
region_routes: &[RegionRoute],
) -> Result<()> {
ctx.table_metadata_manager
.delete_table_metadata(self.table_id, &self.table, region_routes)
.await
}
@@ -138,10 +134,8 @@ impl DropTableExecutor {
pub async fn on_drop_regions(
&self,
ctx: &DdlContext,
region_routes: &[RegionRoute],
) -> Result<()> {
let leaders = find_leaders(region_routes);
let mut drop_region_tasks = Vec::with_capacity(leaders.len());
let table_id = self.table_id;
@@ -198,8 +192,11 @@ mod tests {
use table::metadata::RawTableInfo;
use super::*;
use crate::ddl::test_util::columns::TestColumnDefBuilder;
use crate::ddl::test_util::create_table::{
build_raw_table_info_from_expr, TestCreateTableExprBuilder,
};
use crate::key::table_route::TableRouteValue;
use crate::table_name::TableName;
use crate::test_util::{new_ddl_context, MockDatanodeManager};


@@ -0,0 +1,56 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashSet;
use api::v1::SemanticType;
use store_api::metadata::ColumnMetadata;
use table::metadata::RawTableInfo;
/// Generate the new physical table info.
pub(crate) fn build_new_physical_table_info(
mut raw_table_info: RawTableInfo,
physical_columns: &[ColumnMetadata],
) -> RawTableInfo {
let existing_columns = raw_table_info
.meta
.schema
.column_schemas
.iter()
.map(|col| col.name.clone())
.collect::<HashSet<_>>();
let primary_key_indices = &mut raw_table_info.meta.primary_key_indices;
let value_indices = &mut raw_table_info.meta.value_indices;
value_indices.clear();
let time_index = &mut raw_table_info.meta.schema.timestamp_index;
let columns = &mut raw_table_info.meta.schema.column_schemas;
columns.clear();
for (idx, col) in physical_columns.iter().enumerate() {
match col.semantic_type {
SemanticType::Tag => {
// push new primary key to the end.
if !existing_columns.contains(&col.column_schema.name) {
primary_key_indices.push(idx);
}
}
SemanticType::Field => value_indices.push(idx),
SemanticType::Timestamp => *time_index = Some(idx),
}
columns.push(col.column_schema.clone());
}
raw_table_info
}
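The rebuild above classifies each physical column by its semantic type: tags extend the primary key (only when not already present), fields populate `value_indices`, and the timestamp column sets the time index. A self-contained sketch of that classification with simplified stand-in types (the real `ColumnMetadata` and `RawTableInfo` carry far more information):

// Standalone sketch of splitting columns into key/value/time indices.
enum Semantic {
    Tag,
    Field,
    Timestamp,
}

fn classify(columns: &[(&str, Semantic)]) -> (Vec<usize>, Vec<usize>, Option<usize>) {
    let mut primary_key = Vec::new();
    let mut values = Vec::new();
    let mut time_index = None;
    for (idx, (_name, semantic)) in columns.iter().enumerate() {
        match semantic {
            Semantic::Tag => primary_key.push(idx),
            Semantic::Field => values.push(idx),
            Semantic::Timestamp => time_index = Some(idx),
        }
    }
    (primary_key, values, time_index)
}

fn main() {
    let columns = [
        ("host", Semantic::Tag),
        ("cpu", Semantic::Field),
        ("ts", Semantic::Timestamp),
    ];
    assert_eq!(classify(&columns), (vec![0], vec![1], Some(2)));
}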


@@ -16,16 +16,13 @@ use std::collections::HashMap;
use std::sync::Arc;
use async_trait::async_trait;
use common_catalog::consts::METRIC_ENGINE;
use common_telemetry::{debug, info};
use snafu::ensure;
use store_api::metric_engine_consts::LOGICAL_TABLE_METADATA_KEY;
use store_api::storage::{RegionId, RegionNumber, TableId};
use crate::ddl::{TableMetadata, TableMetadataAllocatorContext};
use crate::error::{self, Result, UnsupportedSnafu};
use crate::key::table_name::{TableNameKey, TableNameManager};
use crate::key::table_route::PhysicalTableRouteValue;
use crate::peer::Peer;
use crate::rpc::ddl::CreateTableTask;
use crate::rpc::router::{Region, RegionRoute};
@@ -38,7 +35,6 @@ pub type TableMetadataAllocatorRef = Arc<TableMetadataAllocator>;
pub struct TableMetadataAllocator {
table_id_sequence: SequenceRef,
wal_options_allocator: WalOptionsAllocatorRef,
table_name_manager: TableNameManager,
peer_allocator: PeerAllocatorRef,
}
@@ -46,12 +42,10 @@ impl TableMetadataAllocator {
pub fn new(
table_id_sequence: SequenceRef,
wal_options_allocator: WalOptionsAllocatorRef,
table_name_manager: TableNameManager,
) -> Self {
Self::with_peer_allocator(
table_id_sequence,
wal_options_allocator,
table_name_manager,
Arc::new(NoopPeerAllocator),
)
}
@@ -59,13 +53,11 @@ impl TableMetadataAllocator {
pub fn with_peer_allocator(
table_id_sequence: SequenceRef,
wal_options_allocator: WalOptionsAllocatorRef,
table_name_manager: TableNameManager,
peer_allocator: PeerAllocatorRef,
) -> Self {
Self {
table_id_sequence,
wal_options_allocator,
table_name_manager,
peer_allocator,
}
}
@@ -102,19 +94,14 @@ impl TableMetadataAllocator {
fn create_wal_options(
&self,
table_route: &PhysicalTableRouteValue,
) -> Result<HashMap<RegionNumber, String>> {
let region_numbers = table_route
.region_routes
.iter()
.map(|route| route.region.id.region_number())
.collect();
allocate_region_wal_options(region_numbers, &self.wal_options_allocator)
}
async fn create_table_route(
@@ -122,7 +109,7 @@ impl TableMetadataAllocator {
ctx: &TableMetadataAllocatorContext,
table_id: TableId,
task: &CreateTableTask,
) -> Result<TableRouteValue> {
) -> Result<PhysicalTableRouteValue> {
let regions = task.partitions.len();
ensure!(
regions > 0,
@@ -131,56 +118,29 @@ impl TableMetadataAllocator {
}
);
let table_route = if task.create_table.engine == METRIC_ENGINE
&& let Some(physical_table_name) = task
.create_table
.table_options
.get(LOGICAL_TABLE_METADATA_KEY)
{
let physical_table_id = self
.table_name_manager
.get(TableNameKey::new(
&task.create_table.catalog_name,
&task.create_table.schema_name,
physical_table_name,
))
.await?
.context(TableNotFoundSnafu {
table_name: physical_table_name,
})?
.table_id();
let peers = self.peer_allocator.alloc(ctx, regions).await?;
let region_routes = task
.partitions
.iter()
.enumerate()
.map(|(i, partition)| {
let region = Region {
id: RegionId::new(table_id, i as u32),
partition: Some(partition.clone().into()),
..Default::default()
};
let region_ids = (0..regions)
.map(|i| RegionId::new(table_id, i as RegionNumber))
.collect();
let peer = peers[i % peers.len()].clone();
TableRouteValue::Logical(LogicalTableRouteValue::new(physical_table_id, region_ids))
} else {
let peers = self.peer_allocator.alloc(ctx, regions).await?;
RegionRoute {
region,
leader_peer: Some(peer),
..Default::default()
}
})
.collect::<Vec<_>>();
let region_routes = task
.partitions
.iter()
.enumerate()
.map(|(i, partition)| {
let region = Region {
id: RegionId::new(table_id, i as u32),
partition: Some(partition.clone().into()),
..Default::default()
};
let peer = peers[i % peers.len()].clone();
RegionRoute {
region,
leader_peer: Some(peer),
..Default::default()
}
})
.collect::<Vec<_>>();
TableRouteValue::Physical(PhysicalTableRouteValue::new(region_routes))
};
Ok(table_route)
Ok(PhysicalTableRouteValue::new(region_routes))
}
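
Note that the route construction above assigns region leaders round-robin over the allocated peers via peers[i % peers.len()]. A tiny standalone sketch of that assignment, with made-up peer names:

// With 2 peers and 5 regions, leaders alternate: p0, p1, p0, p1, p0.
fn assign_leaders(peers: &[&str], regions: usize) -> Vec<String> {
    (0..regions)
        .map(|i| peers[i % peers.len()].to_string())
        .collect()
}

fn main() {
    assert_eq!(
        assign_leaders(&["p0", "p1"], 5),
        vec!["p0", "p1", "p0", "p1", "p0"]
    );
}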
pub async fn create(
@@ -203,15 +163,6 @@ impl TableMetadataAllocator {
region_wal_options,
})
}
/// Sets table ids with all tasks.
pub async fn set_table_ids_on_logic_create(&self, tasks: &mut [CreateTableTask]) -> Result<()> {
for task in tasks {
let table_id = self.allocate_table_id(task).await?;
task.table_info.ident.table_id = table_id;
}
Ok(())
}
}
pub type PeerAllocatorRef = Arc<dyn PeerAllocator>;

View File

@@ -12,8 +12,161 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod alter_table;
pub mod columns;
pub mod create_table;
pub use create_table::{
TestColumnDef, TestColumnDefBuilder, TestCreateTableExpr, TestCreateTableExprBuilder,
use std::collections::HashMap;
use api::v1::meta::Partition;
use api::v1::{ColumnDataType, SemanticType};
use common_procedure::Status;
use table::metadata::{RawTableInfo, TableId};
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::ddl::test_util::columns::TestColumnDefBuilder;
use crate::ddl::test_util::create_table::{
build_raw_table_info_from_expr, TestCreateTableExprBuilder,
};
use crate::ddl::{DdlContext, TableMetadata, TableMetadataAllocatorContext};
use crate::key::table_route::TableRouteValue;
use crate::rpc::ddl::CreateTableTask;
use crate::ClusterId;
pub async fn create_physical_table_metadata(
ddl_context: &DdlContext,
table_info: RawTableInfo,
table_route: TableRouteValue,
) {
ddl_context
.table_metadata_manager
.create_table_metadata(table_info, table_route, HashMap::default())
.await
.unwrap();
}
pub async fn create_physical_table(
ddl_context: DdlContext,
cluster_id: ClusterId,
name: &str,
) -> TableId {
// Prepares physical table metadata.
let mut create_physical_table_task = test_create_physical_table_task(name);
let TableMetadata {
table_id,
table_route,
..
} = ddl_context
.table_metadata_allocator
.create(
&TableMetadataAllocatorContext { cluster_id },
&create_physical_table_task,
)
.await
.unwrap();
create_physical_table_task.set_table_id(table_id);
create_physical_table_metadata(
&ddl_context,
create_physical_table_task.table_info.clone(),
TableRouteValue::Physical(table_route),
)
.await;
table_id
}
pub async fn create_logical_table(
ddl_context: DdlContext,
cluster_id: ClusterId,
physical_table_id: TableId,
table_name: &str,
) {
use std::assert_matches::assert_matches;
let tasks = vec![test_create_logical_table_task(table_name)];
let mut procedure =
CreateLogicalTablesProcedure::new(cluster_id, tasks, physical_table_id, ddl_context);
let status = procedure.on_prepare().await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
let status = procedure.on_create_metadata().await.unwrap();
assert_matches!(status, Status::Done { .. });
}
pub fn test_create_logical_table_task(name: &str) -> CreateTableTask {
let create_table = TestCreateTableExprBuilder::default()
.column_defs([
TestColumnDefBuilder::default()
.name("ts")
.data_type(ColumnDataType::TimestampMillisecond)
.semantic_type(SemanticType::Timestamp)
.build()
.unwrap()
.into(),
TestColumnDefBuilder::default()
.name("host")
.data_type(ColumnDataType::String)
.semantic_type(SemanticType::Tag)
.build()
.unwrap()
.into(),
TestColumnDefBuilder::default()
.name("cpu")
.data_type(ColumnDataType::Float64)
.semantic_type(SemanticType::Field)
.build()
.unwrap()
.into(),
])
.time_index("ts")
.primary_keys(["host".into()])
.table_name(name)
.build()
.unwrap()
.into();
let table_info = build_raw_table_info_from_expr(&create_table);
CreateTableTask {
create_table,
// Single region
partitions: vec![Partition {
column_list: vec![],
value_list: vec![],
}],
table_info,
}
}
pub fn test_create_physical_table_task(name: &str) -> CreateTableTask {
let create_table = TestCreateTableExprBuilder::default()
.column_defs([
TestColumnDefBuilder::default()
.name("ts")
.data_type(ColumnDataType::TimestampMillisecond)
.semantic_type(SemanticType::Timestamp)
.build()
.unwrap()
.into(),
TestColumnDefBuilder::default()
.name("value")
.data_type(ColumnDataType::Float64)
.semantic_type(SemanticType::Field)
.build()
.unwrap()
.into(),
])
.time_index("ts")
.primary_keys(["value".into()])
.table_name(name)
.build()
.unwrap()
.into();
let table_info = build_raw_table_info_from_expr(&create_table);
CreateTableTask {
create_table,
// Single region
partitions: vec![Partition {
column_list: vec![],
value_list: vec![],
}],
table_info,
}
}

View File

@@ -0,0 +1,62 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::alter_expr::Kind;
use api::v1::{AddColumn, AddColumns, AlterExpr, ColumnDef, RenameTable};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use derive_builder::Builder;
#[derive(Default, Builder)]
#[builder(default)]
pub struct TestAlterTableExpr {
#[builder(setter(into), default = "DEFAULT_CATALOG_NAME.to_string()")]
catalog_name: String,
#[builder(setter(into), default = "DEFAULT_SCHEMA_NAME.to_string()")]
schema_name: String,
#[builder(setter(into))]
table_name: String,
#[builder(setter(into))]
add_columns: Vec<ColumnDef>,
#[builder(setter(into))]
new_table_name: Option<String>,
}
impl From<TestAlterTableExpr> for AlterExpr {
fn from(value: TestAlterTableExpr) -> Self {
if let Some(new_table_name) = value.new_table_name {
Self {
catalog_name: value.catalog_name,
schema_name: value.schema_name,
table_name: value.table_name,
kind: Some(Kind::RenameTable(RenameTable { new_table_name })),
}
} else {
Self {
catalog_name: value.catalog_name,
schema_name: value.schema_name,
table_name: value.table_name,
kind: Some(Kind::AddColumns(AddColumns {
add_columns: value
.add_columns
.into_iter()
.map(|col| AddColumn {
column_def: Some(col),
location: None,
})
.collect(),
})),
}
}
}
}

View File

@@ -0,0 +1,50 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::{ColumnDataType, ColumnDef, SemanticType};
use derive_builder::Builder;
#[derive(Default, Builder)]
pub struct TestColumnDef {
#[builder(setter(into), default)]
name: String,
data_type: ColumnDataType,
#[builder(default)]
is_nullable: bool,
semantic_type: SemanticType,
#[builder(setter(into), default)]
comment: String,
}
impl From<TestColumnDef> for ColumnDef {
fn from(
TestColumnDef {
name,
data_type,
is_nullable,
semantic_type,
comment,
}: TestColumnDef,
) -> Self {
Self {
name,
data_type: data_type as i32,
is_nullable,
default_constraint: vec![],
semantic_type: semantic_type as i32,
comment,
datatype_extension: None,
}
}
}

View File

@@ -15,7 +15,7 @@
use std::collections::HashMap;
use api::v1::column_def::try_as_column_schema;
use api::v1::{ColumnDataType, ColumnDef, CreateTableExpr, SemanticType};
use api::v1::{ColumnDef, CreateTableExpr, SemanticType};
use chrono::DateTime;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO2_ENGINE};
use datatypes::schema::RawSchema;
@@ -24,40 +24,6 @@ use store_api::storage::TableId;
use table::metadata::{RawTableInfo, RawTableMeta, TableIdent, TableType};
use table::requests::TableOptions;
#[derive(Default, Builder)]
pub struct TestColumnDef {
#[builder(setter(into), default)]
name: String,
data_type: ColumnDataType,
#[builder(default)]
is_nullable: bool,
semantic_type: SemanticType,
#[builder(setter(into), default)]
comment: String,
}
impl From<TestColumnDef> for ColumnDef {
fn from(
TestColumnDef {
name,
data_type,
is_nullable,
semantic_type,
comment,
}: TestColumnDef,
) -> Self {
Self {
name,
data_type: data_type as i32,
is_nullable,
default_constraint: vec![],
semantic_type: semantic_type as i32,
comment,
datatype_extension: None,
}
}
}
#[derive(Default, Builder)]
#[builder(default)]
pub struct TestCreateTableExpr {

View File

@@ -12,5 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod alter_logical_tables;
mod create_logical_tables;
mod create_table;
mod drop_database;

View File

@@ -0,0 +1,359 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::assert_matches::assert_matches;
use std::sync::Arc;
use api::v1::{ColumnDataType, SemanticType};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_procedure::{Procedure, ProcedureId, Status};
use common_procedure_test::MockContextProvider;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::ddl::test_util::alter_table::TestAlterTableExprBuilder;
use crate::ddl::test_util::columns::TestColumnDefBuilder;
use crate::ddl::test_util::{create_logical_table, create_physical_table};
use crate::ddl::tests::create_logical_tables::NaiveDatanodeHandler;
use crate::error::Error::{AlterLogicalTablesInvalidArguments, TableNotFound};
use crate::key::table_name::TableNameKey;
use crate::rpc::ddl::AlterTableTask;
use crate::test_util::{new_ddl_context, MockDatanodeManager};
fn make_alter_logical_table_add_column_task(
schema: Option<&str>,
table: &str,
add_columns: Vec<String>,
) -> AlterTableTask {
let add_columns = add_columns
.into_iter()
.map(|name| {
TestColumnDefBuilder::default()
.name(name)
.data_type(ColumnDataType::String)
.is_nullable(true)
.semantic_type(SemanticType::Tag)
.comment("new column".to_string())
.build()
.unwrap()
.into()
})
.collect::<Vec<_>>();
let mut alter_table = TestAlterTableExprBuilder::default();
if let Some(schema) = schema {
alter_table.schema_name(schema.to_string());
}
let alter_table = alter_table
.table_name(table.to_string())
.add_columns(add_columns)
.build()
.unwrap();
AlterTableTask {
alter_table: alter_table.into(),
}
}
fn make_alter_logical_table_rename_task(
schema: &str,
table: &str,
new_table_name: &str,
) -> AlterTableTask {
let alter_table = TestAlterTableExprBuilder::default()
.schema_name(schema.to_string())
.table_name(table.to_string())
.new_table_name(new_table_name.to_string())
.build()
.unwrap();
AlterTableTask {
alter_table: alter_table.into(),
}
}
#[tokio::test]
async fn test_on_prepare_check_schema() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
let cluster_id = 1;
let tasks = vec![
make_alter_logical_table_add_column_task(
Some("schema1"),
"table1",
vec!["column1".to_string()],
),
make_alter_logical_table_add_column_task(
Some("schema2"),
"table2",
vec!["column2".to_string()],
),
];
let physical_table_id = 1024u32;
let mut procedure =
AlterLogicalTablesProcedure::new(cluster_id, tasks, physical_table_id, ddl_context);
let err = procedure.on_prepare().await.unwrap_err();
assert_matches!(err, AlterLogicalTablesInvalidArguments { .. });
}
#[tokio::test]
async fn test_on_prepare_check_alter_kind() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
let cluster_id = 1;
let tasks = vec![make_alter_logical_table_rename_task(
"schema1",
"table1",
"new_table1",
)];
let physical_table_id = 1024u32;
let mut procedure =
AlterLogicalTablesProcedure::new(cluster_id, tasks, physical_table_id, ddl_context);
let err = procedure.on_prepare().await.unwrap_err();
assert_matches!(err, AlterLogicalTablesInvalidArguments { .. });
}
#[tokio::test]
async fn test_on_prepare_different_physical_table() {
let cluster_id = 1;
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
let phy1_id = create_physical_table(ddl_context.clone(), cluster_id, "phy1").await;
create_logical_table(ddl_context.clone(), cluster_id, phy1_id, "table1").await;
let phy2_id = create_physical_table(ddl_context.clone(), cluster_id, "phy2").await;
create_logical_table(ddl_context.clone(), cluster_id, phy2_id, "table2").await;
let tasks = vec![
make_alter_logical_table_add_column_task(None, "table1", vec!["column1".to_string()]),
make_alter_logical_table_add_column_task(None, "table2", vec!["column2".to_string()]),
];
let mut procedure = AlterLogicalTablesProcedure::new(cluster_id, tasks, phy1_id, ddl_context);
let err = procedure.on_prepare().await.unwrap_err();
assert_matches!(err, AlterLogicalTablesInvalidArguments { .. });
}
#[tokio::test]
async fn test_on_prepare_logical_table_not_exists() {
let cluster_id = 1;
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
// Creates physical table
let phy_id = create_physical_table(ddl_context.clone(), cluster_id, "phy").await;
// Creates a logical table
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table1").await;
let tasks = vec![
make_alter_logical_table_add_column_task(None, "table1", vec!["column1".to_string()]),
// table2 not exists
make_alter_logical_table_add_column_task(None, "table2", vec!["column2".to_string()]),
];
let mut procedure = AlterLogicalTablesProcedure::new(cluster_id, tasks, phy_id, ddl_context);
let err = procedure.on_prepare().await.unwrap_err();
assert_matches!(err, TableNotFound { .. });
}
#[tokio::test]
async fn test_on_prepare() {
let cluster_id = 1;
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(datanode_manager);
// Creates physical table
let phy_id = create_physical_table(ddl_context.clone(), cluster_id, "phy").await;
// Creates 3 logical tables
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table1").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table2").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table3").await;
let tasks = vec![
make_alter_logical_table_add_column_task(None, "table1", vec!["column1".to_string()]),
make_alter_logical_table_add_column_task(None, "table2", vec!["column2".to_string()]),
make_alter_logical_table_add_column_task(None, "table3", vec!["column3".to_string()]),
];
let mut procedure = AlterLogicalTablesProcedure::new(cluster_id, tasks, phy_id, ddl_context);
let result = procedure.on_prepare().await;
assert_matches!(result, Ok(Status::Executing { persist: true }));
}
#[tokio::test]
async fn test_on_update_metadata() {
let cluster_id = 1;
let datanode_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
let ddl_context = new_ddl_context(datanode_manager);
// Creates physical table
let phy_id = create_physical_table(ddl_context.clone(), cluster_id, "phy").await;
// Creates 5 logical tables
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table1").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table2").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table3").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table4").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table5").await;
let tasks = vec![
make_alter_logical_table_add_column_task(None, "table1", vec!["new_col".to_string()]),
make_alter_logical_table_add_column_task(None, "table2", vec!["new_col".to_string()]),
make_alter_logical_table_add_column_task(None, "table3", vec!["new_col".to_string()]),
];
let mut procedure = AlterLogicalTablesProcedure::new(cluster_id, tasks, phy_id, ddl_context);
let mut status = procedure.on_prepare().await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
let ctx = common_procedure::Context {
procedure_id: ProcedureId::random(),
provider: Arc::new(MockContextProvider::default()),
};
// on_submit_alter_region_requests
status = procedure.execute(&ctx).await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
// on_update_metadata
status = procedure.execute(&ctx).await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
}
#[tokio::test]
async fn test_on_part_duplicate_alter_request() {
let cluster_id = 1;
let datanode_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
let ddl_context = new_ddl_context(datanode_manager);
// Creates physical table
let phy_id = create_physical_table(ddl_context.clone(), cluster_id, "phy").await;
// Creates 2 logical tables
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table1").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table2").await;
let tasks = vec![
make_alter_logical_table_add_column_task(None, "table1", vec!["col_0".to_string()]),
make_alter_logical_table_add_column_task(None, "table2", vec!["col_0".to_string()]),
];
let mut procedure =
AlterLogicalTablesProcedure::new(cluster_id, tasks, phy_id, ddl_context.clone());
let mut status = procedure.on_prepare().await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
let ctx = common_procedure::Context {
procedure_id: ProcedureId::random(),
provider: Arc::new(MockContextProvider::default()),
};
// on_submit_alter_region_requests
status = procedure.execute(&ctx).await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
// on_update_metadata
status = procedure.execute(&ctx).await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
// re-alter
let tasks = vec![
make_alter_logical_table_add_column_task(
None,
"table1",
vec!["col_0".to_string(), "new_col_1".to_string()],
),
make_alter_logical_table_add_column_task(
None,
"table2",
vec![
"col_0".to_string(),
"new_col_2".to_string(),
"new_col_1".to_string(),
],
),
];
let mut procedure =
AlterLogicalTablesProcedure::new(cluster_id, tasks, phy_id, ddl_context.clone());
let mut status = procedure.on_prepare().await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
let ctx = common_procedure::Context {
procedure_id: ProcedureId::random(),
provider: Arc::new(MockContextProvider::default()),
};
// on_submit_alter_region_requests
status = procedure.execute(&ctx).await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
// on_update_metadata
status = procedure.execute(&ctx).await.unwrap();
assert_matches!(status, Status::Executing { persist: true });
let table_name_keys = vec![
TableNameKey::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "table1"),
TableNameKey::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "table2"),
];
let table_ids = ddl_context
.table_metadata_manager
.table_name_manager()
.batch_get(table_name_keys)
.await
.unwrap()
.into_iter()
.map(|x| x.unwrap().table_id())
.collect::<Vec<_>>();
let tables = ddl_context
.table_metadata_manager
.table_info_manager()
.batch_get(&table_ids)
.await
.unwrap();
let table1 = tables.get(&table_ids[0]).unwrap();
let table2 = tables.get(&table_ids[1]).unwrap();
assert_eq!(table1.table_info.name, "table1");
assert_eq!(table2.table_info.name, "table2");
let table1_cols = table1
.table_info
.meta
.schema
.column_schemas
.iter()
.map(|x| x.name.clone())
.collect::<Vec<_>>();
assert_eq!(
table1_cols,
vec![
"col_0".to_string(),
"cpu".to_string(),
"host".to_string(),
"new_col_1".to_string(),
"ts".to_string()
]
);
let table2_cols = table2
.table_info
.meta
.schema
.column_schemas
.iter()
.map(|x| x.name.clone())
.collect::<Vec<_>>();
assert_eq!(
table2_cols,
vec![
"col_0".to_string(),
"cpu".to_string(),
"host".to_string(),
"new_col_1".to_string(),
"new_col_2".to_string(),
"ts".to_string()
]
);
}

View File

@@ -13,12 +13,9 @@
// limitations under the License.
use std::assert_matches::assert_matches;
use std::collections::HashMap;
use std::sync::Arc;
use api::v1::meta::Partition;
use api::v1::region::{QueryRequest, RegionRequest};
use api::v1::{ColumnDataType, SemanticType};
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_procedure::{Context as ProcedureContext, Procedure, ProcedureId, Status};
@@ -26,102 +23,18 @@ use common_procedure_test::MockContextProvider;
use common_recordbatch::SendableRecordBatchStream;
use common_telemetry::debug;
use store_api::storage::RegionId;
use table::metadata::RawTableInfo;
use crate::datanode_manager::HandleResponse;
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::ddl::test_util::create_table::build_raw_table_info_from_expr;
use crate::ddl::test_util::{TestColumnDefBuilder, TestCreateTableExprBuilder};
use crate::ddl::{DdlContext, TableMetadata, TableMetadataAllocatorContext};
use crate::ddl::test_util::{
create_physical_table_metadata, test_create_logical_table_task, test_create_physical_table_task,
};
use crate::ddl::{TableMetadata, TableMetadataAllocatorContext};
use crate::error::{Error, Result};
use crate::key::table_route::TableRouteValue;
use crate::peer::Peer;
use crate::rpc::ddl::CreateTableTask;
use crate::test_util::{new_ddl_context, MockDatanodeHandler, MockDatanodeManager};
// Note: this code may be duplicated with others.
// However, it's by design; it keeps the tests easy to modify or extend.
fn test_create_logical_table_task(name: &str) -> CreateTableTask {
let create_table = TestCreateTableExprBuilder::default()
.column_defs([
TestColumnDefBuilder::default()
.name("ts")
.data_type(ColumnDataType::TimestampMillisecond)
.semantic_type(SemanticType::Timestamp)
.build()
.unwrap()
.into(),
TestColumnDefBuilder::default()
.name("host")
.data_type(ColumnDataType::String)
.semantic_type(SemanticType::Tag)
.build()
.unwrap()
.into(),
TestColumnDefBuilder::default()
.name("cpu")
.data_type(ColumnDataType::Float64)
.semantic_type(SemanticType::Field)
.build()
.unwrap()
.into(),
])
.time_index("ts")
.primary_keys(["host".into()])
.table_name(name)
.build()
.unwrap()
.into();
let table_info = build_raw_table_info_from_expr(&create_table);
CreateTableTask {
create_table,
// Single region
partitions: vec![Partition {
column_list: vec![],
value_list: vec![],
}],
table_info,
}
}
// Note: this code may be duplicated with others.
// However, it's by design; it keeps the tests easy to modify or extend.
fn test_create_physical_table_task(name: &str) -> CreateTableTask {
let create_table = TestCreateTableExprBuilder::default()
.column_defs([
TestColumnDefBuilder::default()
.name("ts")
.data_type(ColumnDataType::TimestampMillisecond)
.semantic_type(SemanticType::Timestamp)
.build()
.unwrap()
.into(),
TestColumnDefBuilder::default()
.name("value")
.data_type(ColumnDataType::Float64)
.semantic_type(SemanticType::Field)
.build()
.unwrap()
.into(),
])
.time_index("ts")
.primary_keys(["value".into()])
.table_name(name)
.build()
.unwrap()
.into();
let table_info = build_raw_table_info_from_expr(&create_table);
CreateTableTask {
create_table,
// Single region
partitions: vec![Partition {
column_list: vec![],
value_list: vec![],
}],
table_info,
}
}
#[tokio::test]
async fn test_on_prepare_physical_table_not_found() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
@@ -135,18 +48,6 @@ async fn test_on_prepare_physical_table_not_found() {
assert_matches!(err, Error::TableRouteNotFound { .. });
}
async fn create_physical_table_metadata(
ddl_context: &DdlContext,
table_info: RawTableInfo,
table_route: TableRouteValue,
) {
ddl_context
.table_metadata_manager
.create_table_metadata(table_info, table_route, HashMap::default())
.await
.unwrap();
}
#[tokio::test]
async fn test_on_prepare() {
let datanode_manager = Arc::new(MockDatanodeManager::new(()));
@@ -170,7 +71,7 @@ async fn test_on_prepare() {
create_physical_table_metadata(
&ddl_context,
create_physical_table_task.table_info.clone(),
table_route,
TableRouteValue::Physical(table_route),
)
.await;
// The create logical table procedure.
@@ -205,7 +106,7 @@ async fn test_on_prepare_logical_table_exists_err() {
create_physical_table_metadata(
&ddl_context,
create_physical_table_task.table_info.clone(),
table_route,
TableRouteValue::Physical(table_route),
)
.await;
// Creates the logical table metadata.
@@ -251,7 +152,7 @@ async fn test_on_prepare_with_create_if_table_exists() {
create_physical_table_metadata(
&ddl_context,
create_physical_table_task.table_info.clone(),
table_route,
TableRouteValue::Physical(table_route),
)
.await;
// Creates the logical table metadata.
@@ -299,7 +200,7 @@ async fn test_on_prepare_part_logical_tables_exist() {
create_physical_table_metadata(
&ddl_context,
create_physical_table_task.table_info.clone(),
table_route,
TableRouteValue::Physical(table_route),
)
.await;
// Creates the logical table metadata.
@@ -370,7 +271,7 @@ async fn test_on_create_metadata() {
create_physical_table_metadata(
&ddl_context,
create_physical_table_task.table_info.clone(),
table_route,
TableRouteValue::Physical(table_route),
)
.await;
// The create logical table procedure.
@@ -420,7 +321,7 @@ async fn test_on_create_metadata_part_logical_tables_exist() {
create_physical_table_metadata(
&ddl_context,
create_physical_table_task.table_info.clone(),
table_route,
TableRouteValue::Physical(table_route),
)
.await;
// Creates the logical table metadata.
@@ -481,7 +382,7 @@ async fn test_on_create_metadata_err() {
create_physical_table_metadata(
&ddl_context,
create_physical_table_task.table_info.clone(),
table_route,
TableRouteValue::Physical(table_route),
)
.await;
// The create logical table procedure.

View File

@@ -28,8 +28,10 @@ use common_telemetry::debug;
use crate::datanode_manager::HandleResponse;
use crate::ddl::create_table::CreateTableProcedure;
use crate::ddl::test_util::create_table::build_raw_table_info_from_expr;
use crate::ddl::test_util::{TestColumnDefBuilder, TestCreateTableExprBuilder};
use crate::ddl::test_util::columns::TestColumnDefBuilder;
use crate::ddl::test_util::create_table::{
build_raw_table_info_from_expr, TestCreateTableExprBuilder,
};
use crate::error;
use crate::error::{Error, Result};
use crate::key::table_route::TableRouteValue;

View File

@@ -0,0 +1,123 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_procedure::{Context as ProcedureContext, Procedure, ProcedureId};
use common_procedure_test::MockContextProvider;
use futures::TryStreamExt;
use crate::ddl::drop_database::DropDatabaseProcedure;
use crate::ddl::test_util::{create_logical_table, create_physical_table};
use crate::ddl::tests::create_table::{NaiveDatanodeHandler, RetryErrorDatanodeHandler};
use crate::key::schema_name::SchemaNameKey;
use crate::test_util::{new_ddl_context, MockDatanodeManager};
#[tokio::test]
async fn test_drop_database_with_logical_tables() {
common_telemetry::init_default_ut_logging();
let cluster_id = 1;
let datanode_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
let ddl_context = new_ddl_context(datanode_manager);
ddl_context
.table_metadata_manager
.schema_manager()
.create(
SchemaNameKey::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME),
None,
false,
)
.await
.unwrap();
// Creates physical table
let phy_id = create_physical_table(ddl_context.clone(), cluster_id, "phy").await;
// Creates 3 logical tables
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table1").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table2").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table3").await;
let mut procedure = DropDatabaseProcedure::new(
DEFAULT_CATALOG_NAME.to_string(),
DEFAULT_SCHEMA_NAME.to_string(),
false,
ddl_context.clone(),
);
let ctx = ProcedureContext {
procedure_id: ProcedureId::random(),
provider: Arc::new(MockContextProvider::default()),
};
while !procedure.execute(&ctx).await.unwrap().is_done() {
procedure.execute(&ctx).await.unwrap();
}
let tables = ddl_context
.table_metadata_manager
.table_name_manager()
.tables(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.try_collect::<Vec<_>>()
.await
.unwrap();
assert!(tables.is_empty());
}
#[tokio::test]
async fn test_drop_database_retryable_error() {
common_telemetry::init_default_ut_logging();
let cluster_id = 1;
let datanode_manager = Arc::new(MockDatanodeManager::new(RetryErrorDatanodeHandler));
let ddl_context = new_ddl_context(datanode_manager);
ddl_context
.table_metadata_manager
.schema_manager()
.create(
SchemaNameKey::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME),
None,
false,
)
.await
.unwrap();
// Creates physical table
let phy_id = create_physical_table(ddl_context.clone(), cluster_id, "phy").await;
// Creates 3 logical tables
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table1").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table2").await;
create_logical_table(ddl_context.clone(), cluster_id, phy_id, "table3").await;
let mut procedure = DropDatabaseProcedure::new(
DEFAULT_CATALOG_NAME.to_string(),
DEFAULT_SCHEMA_NAME.to_string(),
false,
ddl_context.clone(),
);
let ctx = ProcedureContext {
procedure_id: ProcedureId::random(),
provider: Arc::new(MockContextProvider::default()),
};
loop {
match procedure.execute(&ctx).await {
Ok(_) => {
// go next
}
Err(err) => {
assert!(err.is_retry_later());
break;
}
}
}
}

View File

@@ -19,9 +19,7 @@ use snafu::{ensure, location, Location, OptionExt};
use store_api::metric_engine_consts::LOGICAL_TABLE_METADATA_KEY;
use table::metadata::TableId;
use crate::error::{
EmptyCreateTableTasksSnafu, Error, Result, TableNotFoundSnafu, UnsupportedSnafu,
};
use crate::error::{Error, Result, TableNotFoundSnafu, UnsupportedSnafu};
use crate::key::table_name::TableNameKey;
use crate::key::TableMetadataManagerRef;
use crate::peer::Peer;
@@ -98,7 +96,8 @@ pub async fn check_and_get_physical_table_id(
None => Some(current_physical_table_name),
};
}
let physical_table_name = physical_table_name.context(EmptyCreateTableTasksSnafu)?;
// Safety: `physical_table_name` is `Some` here
let physical_table_name = physical_table_name.unwrap();
table_metadata_manager
.table_name_manager()
.get(physical_table_name)
@@ -108,3 +107,22 @@ pub async fn check_and_get_physical_table_id(
})
.map(|table| table.table_id())
}
pub async fn get_physical_table_id(
table_metadata_manager: &TableMetadataManagerRef,
logical_table_name: TableNameKey<'_>,
) -> Result<TableId> {
let logical_table_id = table_metadata_manager
.table_name_manager()
.get(logical_table_name)
.await?
.context(TableNotFoundSnafu {
table_name: logical_table_name.to_string(),
})
.map(|table| table.table_id())?;
table_metadata_manager
.table_route_manager()
.get_physical_table_id(logical_table_id)
.await
}
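
get_physical_table_id resolves in two hops: logical table name to logical table id via the name manager, then logical table id to physical table id via the route manager. A toy, in-memory sketch of that lookup chain (hypothetical maps and ids in place of the real managers):

use std::collections::HashMap;

fn resolve_physical_id(
    name_to_id: &HashMap<&str, u32>,         // stands in for the table name manager
    logical_to_physical: &HashMap<u32, u32>, // stands in for the table route manager
    logical_name: &str,
) -> Result<u32, String> {
    let logical_id = *name_to_id
        .get(logical_name)
        .ok_or_else(|| format!("table not found: {logical_name}"))?;
    logical_to_physical
        .get(&logical_id)
        .copied()
        .ok_or_else(|| format!("table route not found for table id {logical_id}"))
}

fn main() {
    let names = HashMap::from([("metrics_cpu", 1026u32)]);
    let routes = HashMap::from([(1026u32, 1024u32)]);
    assert_eq!(resolve_physical_id(&names, &routes, "metrics_cpu"), Ok(1024));
}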

View File

@@ -14,7 +14,9 @@
use std::sync::Arc;
use common_procedure::{watcher, Output, ProcedureId, ProcedureManagerRef, ProcedureWithId};
use common_procedure::{
watcher, BoxedProcedureLoader, Output, ProcedureId, ProcedureManagerRef, ProcedureWithId,
};
use common_telemetry::tracing_context::{FutureExt, TracingContext};
use common_telemetry::{debug, info, tracing};
use snafu::{ensure, OptionExt, ResultExt};
@@ -22,16 +24,21 @@ use store_api::storage::TableId;
use crate::cache_invalidator::CacheInvalidatorRef;
use crate::datanode_manager::DatanodeManagerRef;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::ddl::alter_table::AlterTableProcedure;
use crate::ddl::create_database::CreateDatabaseProcedure;
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::ddl::create_table::CreateTableProcedure;
use crate::ddl::drop_database::DropDatabaseProcedure;
use crate::ddl::drop_table::DropTableProcedure;
use crate::ddl::table_meta::TableMetadataAllocatorRef;
use crate::ddl::truncate_table::TruncateTableProcedure;
use crate::ddl::{utils, DdlContext, ExecutorContext, ProcedureExecutor};
use crate::error::{
self, EmptyCreateTableTasksSnafu, ProcedureOutputSnafu, RegisterProcedureLoaderSnafu, Result,
SubmitProcedureSnafu, TableNotFoundSnafu, UnsupportedSnafu, WaitProcedureSnafu,
EmptyDdlTasksSnafu, ParseProcedureIdSnafu, ProcedureNotFoundSnafu, ProcedureOutputSnafu,
QueryProcedureSnafu, RegisterProcedureLoaderSnafu, Result, SubmitProcedureSnafu,
TableInfoNotFoundSnafu, TableNotFoundSnafu, TableRouteNotFoundSnafu,
UnexpectedLogicalRouteTableSnafu, UnsupportedSnafu, WaitProcedureSnafu,
};
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
@@ -39,21 +46,22 @@ use crate::key::table_route::TableRouteValue;
use crate::key::{DeserializedValueWithBytes, TableMetadataManagerRef};
use crate::region_keeper::MemoryRegionKeeperRef;
use crate::rpc::ddl::DdlTask::{
AlterLogicalTables, AlterTable, CreateLogicalTables, CreateTable, DropLogicalTables, DropTable,
TruncateTable,
AlterLogicalTables, AlterTable, CreateDatabase, CreateLogicalTables, CreateTable, DropDatabase,
DropLogicalTables, DropTable, TruncateTable,
};
use crate::rpc::ddl::{
AlterTableTask, CreateTableTask, DropTableTask, SubmitDdlTaskRequest, SubmitDdlTaskResponse,
TruncateTableTask,
AlterTableTask, CreateDatabaseTask, CreateTableTask, DropDatabaseTask, DropTableTask,
SubmitDdlTaskRequest, SubmitDdlTaskResponse, TruncateTableTask,
};
use crate::rpc::procedure;
use crate::rpc::procedure::{MigrateRegionRequest, MigrateRegionResponse, ProcedureStateResponse};
use crate::rpc::router::RegionRoute;
use crate::table_name::TableName;
use crate::ClusterId;
pub type DdlManagerRef = Arc<DdlManager>;
pub type BoxedProcedureLoaderFactory = dyn Fn(DdlContext) -> BoxedProcedureLoader;
/// The [DdlManager] provides the ability to execute Ddl.
pub struct DdlManager {
procedure_manager: ProcedureManagerRef,
@@ -64,8 +72,8 @@ pub struct DdlManager {
memory_region_keeper: MemoryRegionKeeperRef,
}
/// Returns a new [DdlManager] with all Ddl [BoxedProcedureLoader](common_procedure::procedure::BoxedProcedureLoader)s registered.
impl DdlManager {
/// Returns a new [DdlManager] with all Ddl [BoxedProcedureLoader](common_procedure::procedure::BoxedProcedureLoader)s registered.
pub fn try_new(
procedure_manager: ProcedureManagerRef,
datanode_clients: DatanodeManagerRef,
@@ -73,6 +81,7 @@ impl DdlManager {
table_metadata_manager: TableMetadataManagerRef,
table_metadata_allocator: TableMetadataAllocatorRef,
memory_region_keeper: MemoryRegionKeeperRef,
register_loaders: bool,
) -> Result<Self> {
let manager = Self {
procedure_manager,
@@ -82,7 +91,9 @@ impl DdlManager {
table_metadata_allocator,
memory_region_keeper,
};
manager.register_loaders()?;
if register_loaders {
manager.register_loaders()?;
}
Ok(manager)
}
@@ -103,75 +114,91 @@ impl DdlManager {
}
fn register_loaders(&self) -> Result<()> {
let context = self.create_context();
self.procedure_manager
.register_loader(
let loaders: Vec<(&str, &BoxedProcedureLoaderFactory)> = vec![
(
CreateTableProcedure::TYPE_NAME,
Box::new(move |json| {
let context = context.clone();
CreateTableProcedure::from_json(json, context).map(|p| Box::new(p) as _)
}),
)
.context(RegisterProcedureLoaderSnafu {
type_name: CreateTableProcedure::TYPE_NAME,
})?;
let context = self.create_context();
self.procedure_manager
.register_loader(
&|context: DdlContext| -> BoxedProcedureLoader {
Box::new(move |json: &str| {
let context = context.clone();
CreateTableProcedure::from_json(json, context).map(|p| Box::new(p) as _)
})
},
),
(
CreateLogicalTablesProcedure::TYPE_NAME,
Box::new(move |json| {
let context = context.clone();
CreateLogicalTablesProcedure::from_json(json, context).map(|p| Box::new(p) as _)
}),
)
.context(RegisterProcedureLoaderSnafu {
type_name: CreateLogicalTablesProcedure::TYPE_NAME,
})?;
let context = self.create_context();
self.procedure_manager
.register_loader(
DropTableProcedure::TYPE_NAME,
Box::new(move |json| {
let context = context.clone();
DropTableProcedure::from_json(json, context).map(|p| Box::new(p) as _)
}),
)
.context(RegisterProcedureLoaderSnafu {
type_name: DropTableProcedure::TYPE_NAME,
})?;
let context = self.create_context();
self.procedure_manager
.register_loader(
&|context: DdlContext| -> BoxedProcedureLoader {
Box::new(move |json: &str| {
let context = context.clone();
CreateLogicalTablesProcedure::from_json(json, context)
.map(|p| Box::new(p) as _)
})
},
),
(
AlterTableProcedure::TYPE_NAME,
Box::new(move |json| {
let context = context.clone();
AlterTableProcedure::from_json(json, context).map(|p| Box::new(p) as _)
}),
)
.context(RegisterProcedureLoaderSnafu {
type_name: AlterTableProcedure::TYPE_NAME,
})?;
let context = self.create_context();
self.procedure_manager
.register_loader(
&|context: DdlContext| -> BoxedProcedureLoader {
Box::new(move |json: &str| {
let context = context.clone();
AlterTableProcedure::from_json(json, context).map(|p| Box::new(p) as _)
})
},
),
(
AlterLogicalTablesProcedure::TYPE_NAME,
&|context: DdlContext| -> BoxedProcedureLoader {
Box::new(move |json: &str| {
let context = context.clone();
AlterLogicalTablesProcedure::from_json(json, context)
.map(|p| Box::new(p) as _)
})
},
),
(
DropTableProcedure::TYPE_NAME,
&|context: DdlContext| -> BoxedProcedureLoader {
Box::new(move |json: &str| {
let context = context.clone();
DropTableProcedure::from_json(json, context).map(|p| Box::new(p) as _)
})
},
),
(
TruncateTableProcedure::TYPE_NAME,
Box::new(move |json| {
let context = context.clone();
TruncateTableProcedure::from_json(json, context).map(|p| Box::new(p) as _)
}),
)
.context(RegisterProcedureLoaderSnafu {
type_name: TruncateTableProcedure::TYPE_NAME,
})
&|context: DdlContext| -> BoxedProcedureLoader {
Box::new(move |json: &str| {
let context = context.clone();
TruncateTableProcedure::from_json(json, context).map(|p| Box::new(p) as _)
})
},
),
(
CreateDatabaseProcedure::TYPE_NAME,
&|context: DdlContext| -> BoxedProcedureLoader {
Box::new(move |json: &str| {
let context = context.clone();
CreateDatabaseProcedure::from_json(json, context).map(|p| Box::new(p) as _)
})
},
),
(
DropDatabaseProcedure::TYPE_NAME,
&|context: DdlContext| -> BoxedProcedureLoader {
Box::new(move |json: &str| {
let context = context.clone();
DropDatabaseProcedure::from_json(json, context).map(|p| Box::new(p) as _)
})
},
),
];
for (type_name, loader_factory) in loaders {
let context = self.create_context();
self.procedure_manager
.register_loader(type_name, loader_factory(context))
.context(RegisterProcedureLoaderSnafu { type_name })?;
}
Ok(())
}
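
The refactor above replaces one hand-written register_loader call per procedure with a table of (type name, loader factory) pairs that is iterated once, creating a fresh context per entry. A minimal standalone sketch of the same pattern, with toy Ctx and Loader types standing in for DdlContext and BoxedProcedureLoader (all names hypothetical):

use std::collections::HashMap;

#[derive(Clone)]
struct Ctx {
    tag: &'static str,
}

type Loader = Box<dyn Fn(&str) -> String>;
type LoaderFactory = dyn Fn(Ctx) -> Loader;

fn main() {
    // Each entry pairs a procedure type name with a factory that, given its
    // own clone of the context, produces the loader closure.
    let loaders: Vec<(&str, &LoaderFactory)> = vec![
        ("CreateTable", &|ctx: Ctx| -> Loader {
            Box::new(move |json: &str| format!("{}/CreateTable/{}", ctx.tag, json))
        }),
        ("DropTable", &|ctx: Ctx| -> Loader {
            Box::new(move |json: &str| format!("{}/DropTable/{}", ctx.tag, json))
        }),
    ];

    let base_ctx = Ctx { tag: "ddl" };
    let mut registry: HashMap<&str, Loader> = HashMap::new();
    for (type_name, factory) in loaders {
        // One context clone per loader, mirroring create_context() above.
        registry.insert(type_name, factory(base_ctx.clone()));
    }

    let loader = &registry["DropTable"];
    assert_eq!(loader("{}"), "ddl/DropTable/{}");
}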
#[tracing::instrument(skip_all)]
@@ -181,17 +208,11 @@ impl DdlManager {
cluster_id: ClusterId,
alter_table_task: AlterTableTask,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
physical_table_info: Option<(TableId, TableName)>,
) -> Result<(ProcedureId, Option<Output>)> {
let context = self.create_context();
let procedure = AlterTableProcedure::new(
cluster_id,
alter_table_task,
table_info_value,
physical_table_info,
context,
)?;
let procedure =
AlterTableProcedure::new(cluster_id, alter_table_task, table_info_value, context)?;
let procedure_with_id = ProcedureWithId::with_random_id(Box::new(procedure));
@@ -215,7 +236,7 @@ impl DdlManager {
}
#[tracing::instrument(skip_all)]
/// Submits and executes a create table task.
/// Submits and executes multiple create logical table tasks.
pub async fn submit_create_logical_table_tasks(
&self,
cluster_id: ClusterId,
@@ -236,6 +257,28 @@ impl DdlManager {
self.submit_procedure(procedure_with_id).await
}
#[tracing::instrument(skip_all)]
/// Submits and executes multiple alter logical table tasks.
pub async fn submit_alter_logical_table_tasks(
&self,
cluster_id: ClusterId,
alter_table_tasks: Vec<AlterTableTask>,
physical_table_id: TableId,
) -> Result<(ProcedureId, Option<Output>)> {
let context = self.create_context();
let procedure = AlterLogicalTablesProcedure::new(
cluster_id,
alter_table_tasks,
physical_table_id,
context,
);
let procedure_with_id = ProcedureWithId::with_random_id(Box::new(procedure));
self.submit_procedure(procedure_with_id).await
}
#[tracing::instrument(skip_all)]
/// Submits and executes a drop table task.
pub async fn submit_drop_table_task(
@@ -260,6 +303,44 @@ impl DdlManager {
self.submit_procedure(procedure_with_id).await
}
#[tracing::instrument(skip_all)]
/// Submits and executes a create database task.
pub async fn submit_create_database(
&self,
_cluster_id: ClusterId,
CreateDatabaseTask {
catalog,
schema,
create_if_not_exists,
options,
}: CreateDatabaseTask,
) -> Result<(ProcedureId, Option<Output>)> {
let context = self.create_context();
let procedure =
CreateDatabaseProcedure::new(catalog, schema, create_if_not_exists, options, context);
let procedure_with_id = ProcedureWithId::with_random_id(Box::new(procedure));
self.submit_procedure(procedure_with_id).await
}
#[tracing::instrument(skip_all)]
/// Submits and executes a drop database task.
pub async fn submit_drop_database(
&self,
_cluster_id: ClusterId,
DropDatabaseTask {
catalog,
schema,
drop_if_exists,
}: DropDatabaseTask,
) -> Result<(ProcedureId, Option<Output>)> {
let context = self.create_context();
let procedure = DropDatabaseProcedure::new(catalog, schema, drop_if_exists, context);
let procedure_with_id = ProcedureWithId::with_random_id(Box::new(procedure));
self.submit_procedure(procedure_with_id).await
}
#[tracing::instrument(skip_all)]
/// Submits and executes a truncate table task.
pub async fn submit_truncate_table_task(
@@ -315,12 +396,11 @@ async fn handle_truncate_table_task(
let (table_info_value, table_route_value) =
table_metadata_manager.get_full_table_info(table_id).await?;
let table_info_value = table_info_value.with_context(|| error::TableInfoNotFoundSnafu {
table_name: table_ref.to_string(),
let table_info_value = table_info_value.with_context(|| TableInfoNotFoundSnafu {
table: table_ref.to_string(),
})?;
let table_route_value =
table_route_value.context(error::TableRouteNotFoundSnafu { table_id })?;
let table_route_value = table_route_value.context(TableRouteNotFoundSnafu { table_id })?;
let table_route = table_route_value.into_inner().region_routes()?.clone();
@@ -362,50 +442,28 @@ async fn handle_alter_table_task(
})?
.table_id();
let table_info_value = ddl_manager
let (table_info_value, table_route_value) = ddl_manager
.table_metadata_manager()
.table_info_manager()
.get(table_id)
.await?
.with_context(|| error::TableInfoNotFoundSnafu {
table_name: table_ref.to_string(),
})?;
let physical_table_id = ddl_manager
.table_metadata_manager()
.table_route_manager()
.get_physical_table_id(table_id)
.get_full_table_info(table_id)
.await?;
let physical_table_info = if physical_table_id == table_id {
None
} else {
let physical_table_info = &ddl_manager
.table_metadata_manager()
.table_info_manager()
.get(physical_table_id)
.await?
.with_context(|| error::TableInfoNotFoundSnafu {
table_name: table_ref.to_string(),
})?
.table_info;
Some((
physical_table_id,
TableName {
catalog_name: physical_table_info.catalog_name.clone(),
schema_name: physical_table_info.schema_name.clone(),
table_name: physical_table_info.name.clone(),
},
))
};
let table_route_value = table_route_value
.context(TableRouteNotFoundSnafu { table_id })?
.into_inner();
ensure!(
table_route_value.is_physical(),
UnexpectedLogicalRouteTableSnafu {
err_msg: format!("{:?} is a non-physical TableRouteValue.", table_ref),
}
);
let table_info_value = table_info_value.with_context(|| TableInfoNotFoundSnafu {
table: table_ref.to_string(),
})?;
let (id, _) = ddl_manager
.submit_alter_table_task(
cluster_id,
alter_table_task,
table_info_value,
physical_table_info,
)
.submit_alter_table_task(cluster_id, alter_table_task, table_info_value)
.await?;
info!("Table: {table_id} is altered via procedure_id {id:?}");
@@ -434,8 +492,8 @@ async fn handle_drop_table_task(
.get_physical_table_route(table_id)
.await?;
let table_info_value = table_info_value.with_context(|| error::TableInfoNotFoundSnafu {
table_name: table_ref.to_string(),
let table_info_value = table_info_value.with_context(|| TableInfoNotFoundSnafu {
table: table_ref.to_string(),
})?;
let table_route_value =
@@ -488,19 +546,19 @@ async fn handle_create_table_task(
async fn handle_create_logical_table_tasks(
ddl_manager: &DdlManager,
cluster_id: ClusterId,
mut create_table_tasks: Vec<CreateTableTask>,
create_table_tasks: Vec<CreateTableTask>,
) -> Result<SubmitDdlTaskResponse> {
ensure!(!create_table_tasks.is_empty(), EmptyCreateTableTasksSnafu);
ensure!(
!create_table_tasks.is_empty(),
EmptyDdlTasksSnafu {
name: "create logical tables"
}
);
let physical_table_id = utils::check_and_get_physical_table_id(
&ddl_manager.table_metadata_manager,
&create_table_tasks,
)
.await?;
// Sets table_ids on create_table_tasks
ddl_manager
.table_metadata_allocator
.set_table_ids_on_logic_create(&mut create_table_tasks)
.await?;
let num_logical_tables = create_table_tasks.len();
let (id, output) = ddl_manager
@@ -529,6 +587,84 @@ async fn handle_create_logical_table_tasks(
})
}
async fn handle_create_database_task(
ddl_manager: &DdlManager,
cluster_id: ClusterId,
create_database_task: CreateDatabaseTask,
) -> Result<SubmitDdlTaskResponse> {
let (id, _) = ddl_manager
.submit_create_database(cluster_id, create_database_task.clone())
.await?;
let procedure_id = id.to_string();
info!(
"Database {}.{} is created via procedure_id {id:?}",
create_database_task.catalog, create_database_task.schema
);
Ok(SubmitDdlTaskResponse {
key: procedure_id.into(),
..Default::default()
})
}
async fn handle_drop_database_task(
ddl_manager: &DdlManager,
cluster_id: ClusterId,
drop_database_task: DropDatabaseTask,
) -> Result<SubmitDdlTaskResponse> {
let (id, _) = ddl_manager
.submit_drop_database(cluster_id, drop_database_task.clone())
.await?;
let procedure_id = id.to_string();
info!(
"Database {}.{} is dropped via procedure_id {id:?}",
drop_database_task.catalog, drop_database_task.schema
);
Ok(SubmitDdlTaskResponse {
key: procedure_id.into(),
..Default::default()
})
}
async fn handle_alter_logical_table_tasks(
ddl_manager: &DdlManager,
cluster_id: ClusterId,
alter_table_tasks: Vec<AlterTableTask>,
) -> Result<SubmitDdlTaskResponse> {
ensure!(
!alter_table_tasks.is_empty(),
EmptyDdlTasksSnafu {
name: "alter logical tables"
}
);
// Use the physical table id from the first logical table; it will be checked in the procedure.
let first_table = TableNameKey {
catalog: &alter_table_tasks[0].alter_table.catalog_name,
schema: &alter_table_tasks[0].alter_table.schema_name,
table: &alter_table_tasks[0].alter_table.table_name,
};
let physical_table_id =
utils::get_physical_table_id(&ddl_manager.table_metadata_manager, first_table).await?;
let num_logical_tables = alter_table_tasks.len();
let (id, _) = ddl_manager
.submit_alter_logical_table_tasks(cluster_id, alter_table_tasks, physical_table_id)
.await?;
info!("{num_logical_tables} logical tables on physical table: {physical_table_id:?} is altered via procedure_id {id:?}");
let procedure_id = id.to_string();
Ok(SubmitDdlTaskResponse {
key: procedure_id.into(),
..Default::default()
})
}
/// TODO(dennis): having [`DdlManager`] implement [`ProcedureExecutor`] looks weird; find some way to refactor it.
#[async_trait::async_trait]
impl ProcedureExecutor for DdlManager {
@@ -562,8 +698,16 @@ impl ProcedureExecutor for DdlManager {
CreateLogicalTables(create_table_tasks) => {
handle_create_logical_table_tasks(self, cluster_id, create_table_tasks).await
}
AlterLogicalTables(alter_table_tasks) => {
handle_alter_logical_table_tasks(self, cluster_id, alter_table_tasks).await
}
DropLogicalTables(_) => todo!(),
AlterLogicalTables(_) => todo!(),
CreateDatabase(create_database_task) => {
handle_create_database_task(self, cluster_id, create_database_task).await
}
DropDatabase(drop_database_task) => {
handle_drop_database_task(self, cluster_id, drop_database_task).await
}
}
}
.trace(span)
@@ -586,15 +730,15 @@ impl ProcedureExecutor for DdlManager {
_ctx: &ExecutorContext,
pid: &str,
) -> Result<ProcedureStateResponse> {
let pid = ProcedureId::parse_str(pid)
.with_context(|_| error::ParseProcedureIdSnafu { key: pid })?;
let pid =
ProcedureId::parse_str(pid).with_context(|_| ParseProcedureIdSnafu { key: pid })?;
let state = self
.procedure_manager
.procedure_state(pid)
.await
.context(error::QueryProcedureSnafu)?
.context(error::ProcedureNotFoundSnafu {
.context(QueryProcedureSnafu)?
.context(ProcedureNotFoundSnafu {
pid: pid.to_string(),
})?;
@@ -650,9 +794,9 @@ mod tests {
Arc::new(TableMetadataAllocator::new(
Arc::new(SequenceBuilder::new("test", kv_backend.clone()).build()),
Arc::new(WalOptionsAllocator::default()),
table_metadata_manager.table_name_manager().clone(),
)),
Arc::new(MemoryRegionKeeper::default()),
true,
);
let expected_loaders = vec![

View File

@@ -89,11 +89,8 @@ pub enum Error {
#[snafu(display("Unexpected sequence value: {}", err_msg))]
UnexpectedSequenceValue { err_msg: String, location: Location },
#[snafu(display("Table info not found: {}", table_name))]
TableInfoNotFound {
table_name: String,
location: Location,
},
#[snafu(display("Table info not found: {}", table))]
TableInfoNotFound { table: String, location: Location },
#[snafu(display("Failed to register procedure loader, type name: {}", type_name))]
RegisterProcedureLoader {
@@ -267,6 +264,12 @@ pub enum Error {
location: Location,
},
#[snafu(display("Schema nod found, schema: {}", table_schema))]
SchemaNotFound {
table_schema: String,
location: Location,
},
#[snafu(display("Failed to rename table, reason: {}", reason))]
RenameTable { reason: String, location: Location },
@@ -392,11 +395,39 @@ pub enum Error {
#[snafu(display("Unexpected table route type: {}", err_msg))]
UnexpectedLogicalRouteTable { location: Location, err_msg: String },
#[snafu(display("The tasks of create tables cannot be empty"))]
EmptyCreateTableTasks { location: Location },
#[snafu(display("The tasks of {} cannot be empty", name))]
EmptyDdlTasks { name: String, location: Location },
#[snafu(display("Metadata corruption: {}", err_msg))]
MetadataCorruption { err_msg: String, location: Location },
#[snafu(display("Alter logical tables invalid arguments: {}", err_msg))]
AlterLogicalTablesInvalidArguments { err_msg: String, location: Location },
#[snafu(display("Create logical tables invalid arguments: {}", err_msg))]
CreateLogicalTablesInvalidArguments { err_msg: String, location: Location },
#[snafu(display("Invalid node info key: {}", key))]
InvalidNodeInfoKey { key: String, location: Location },
#[snafu(display("Failed to parse number: {}", err_msg))]
ParseNum {
err_msg: String,
#[snafu(source)]
error: std::num::ParseIntError,
location: Location,
},
#[snafu(display("Invalid role: {}", role))]
InvalidRole { role: i32, location: Location },
#[snafu(display("Failed to parse {} from utf8", name))]
FromUtf8 {
name: String,
#[snafu(source)]
error: std::string::FromUtf8Error,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -443,6 +474,7 @@ impl ErrorExt for Error {
| EmptyTopicPool { .. }
| UnexpectedLogicalRouteTable { .. }
| ProcedureOutput { .. }
| FromUtf8 { .. }
| MetadataCorruption { .. } => StatusCode::Unexpected,
SendMessage { .. }
@@ -456,7 +488,9 @@ impl ErrorExt for Error {
ProcedureNotFound { .. }
| PrimaryKeyNotFound { .. }
| EmptyKey { .. }
| InvalidEngineType { .. } => StatusCode::InvalidArguments,
| InvalidEngineType { .. }
| AlterLogicalTablesInvalidArguments { .. }
| CreateLogicalTablesInvalidArguments { .. } => StatusCode::InvalidArguments,
TableNotFound { .. } => StatusCode::TableNotFound,
TableAlreadyExists { .. } => StatusCode::TableAlreadyExists,
@@ -472,9 +506,13 @@ impl ErrorExt for Error {
InvalidCatalogValue { source, .. } => source.status_code(),
ConvertAlterTableRequest { source, .. } => source.status_code(),
ParseProcedureId { .. } | InvalidNumTopics { .. } | EmptyCreateTableTasks { .. } => {
StatusCode::InvalidArguments
}
ParseProcedureId { .. }
| InvalidNumTopics { .. }
| SchemaNotFound { .. }
| InvalidNodeInfoKey { .. }
| ParseNum { .. }
| InvalidRole { .. }
| EmptyDdlTasks { .. } => StatusCode::InvalidArguments,
}
}

View File

@@ -203,7 +203,6 @@ pub enum InstructionReply {
OpenRegion(SimpleReply),
CloseRegion(SimpleReply),
UpgradeRegion(UpgradeRegionReply),
InvalidateTableCache(SimpleReply),
DowngradeRegion(DowngradeRegionReply),
}
@@ -213,9 +212,6 @@ impl Display for InstructionReply {
Self::OpenRegion(reply) => write!(f, "InstructionReply::OpenRegion({})", reply),
Self::CloseRegion(reply) => write!(f, "InstructionReply::CloseRegion({})", reply),
Self::UpgradeRegion(reply) => write!(f, "InstructionReply::UpgradeRegion({})", reply),
Self::InvalidateTableCache(reply) => {
write!(f, "InstructionReply::Invalidate({})", reply)
}
Self::DowngradeRegion(reply) => {
write!(f, "InstructionReply::DowngradeRegion({})", reply)
}

View File

@@ -88,6 +88,7 @@ use crate::error::{self, Result, SerdeJsonSnafu};
use crate::kv_backend::txn::{Txn, TxnOpResponse};
use crate::kv_backend::KvBackendRef;
use crate::rpc::router::{region_distribution, RegionRoute, RegionStatus};
use crate::table_name::TableName;
use crate::DatanodeId;
pub const NAME_PATTERN: &str = r"[a-zA-Z_:-][a-zA-Z0-9_:\-\.]*";
@@ -273,6 +274,10 @@ impl<T: Serialize + DeserializeOwned + TableMetaValue> DeserializedValueWithByte
self.inner
}
pub fn get_inner_ref(&self) -> &T {
&self.inner
}
/// Returns original `bytes`
pub fn get_raw_bytes(&self) -> Vec<u8> {
self.bytes.to_vec()
@@ -351,7 +356,6 @@ impl TableMetadataManager {
&self.kv_backend
}
// TODO(ruihang): deprecate this
pub async fn get_full_table_info(
&self,
table_id: TableId,
@@ -363,17 +367,14 @@ impl TableMetadataManager {
.table_route_manager
.table_route_storage()
.build_get_txn(table_id);
let (get_table_info_txn, table_info_decoder) =
self.table_info_manager.build_get_txn(table_id);
let txn = Txn::merge_all(vec![get_table_route_txn, get_table_info_txn]);
let res = self.kv_backend.txn(txn).await?;
let r = self.kv_backend.txn(txn).await?;
let table_info_value = table_info_decoder(&r.responses)?;
let table_route_value = table_route_decoder(&r.responses)?;
let table_info_value = table_info_decoder(&res.responses)?;
let table_route_value = table_route_decoder(&res.responses)?;
Ok((table_info_value, table_route_value))
}
@@ -457,7 +458,7 @@ impl TableMetadataManager {
Ok(())
}
pub fn max_logical_tables_per_batch(&self) -> usize {
pub fn create_logical_tables_metadata_chunk_size(&self) -> usize {
// The chunk size is `max_txn_ops() / 3` because each table's metadata in `tables_data`
// expands into roughly three transaction operations.
self.kv_backend.max_txn_ops() / 3
@@ -548,17 +549,15 @@ impl TableMetadataManager {
/// The caller MUST ensure it has the exclusive access to `TableNameKey`.
pub async fn delete_table_metadata(
&self,
table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
table_route_value: &DeserializedValueWithBytes<TableRouteValue>,
table_id: TableId,
table_name: &TableName,
region_routes: &[RegionRoute],
) -> Result<()> {
let table_info = &table_info_value.table_info;
let table_id = table_info.ident.table_id;
// Deletes table name.
let table_name = TableNameKey::new(
&table_info.catalog_name,
&table_info.schema_name,
&table_info.name,
&table_name.catalog_name,
&table_name.schema_name,
&table_name.table_name,
);
let delete_table_name_txn = self.table_name_manager().build_delete_txn(&table_name)?;
@@ -567,7 +566,7 @@ impl TableMetadataManager {
let delete_table_info_txn = self.table_info_manager().build_delete_txn(table_id)?;
// Deletes datanode table key value pairs.
let distribution = region_distribution(table_route_value.region_routes()?);
let distribution = region_distribution(region_routes);
let delete_datanode_txn = self
.datanode_table_manager()
.build_delete_txn(table_id, distribution)?;
@@ -682,6 +681,64 @@ impl TableMetadataManager {
Ok(())
}
pub fn batch_update_table_info_value_chunk_size(&self) -> usize {
self.kv_backend.max_txn_ops()
}
pub async fn batch_update_table_info_values(
&self,
table_info_value_pairs: Vec<(TableInfoValue, RawTableInfo)>,
) -> Result<()> {
let len = table_info_value_pairs.len();
let mut txns = Vec::with_capacity(len);
struct OnFailure<F, R>
where
F: FnOnce(&Vec<TxnOpResponse>) -> R,
{
table_info_value: TableInfoValue,
on_update_table_info_failure: F,
}
let mut on_failures = Vec::with_capacity(len);
for (table_info_value, new_table_info) in table_info_value_pairs {
let table_id = table_info_value.table_info.ident.table_id;
let new_table_info_value = table_info_value.update(new_table_info);
let (update_table_info_txn, on_update_table_info_failure) =
self.table_info_manager().build_update_txn(
table_id,
&DeserializedValueWithBytes::from_inner(table_info_value),
&new_table_info_value,
)?;
txns.push(update_table_info_txn);
on_failures.push(OnFailure {
table_info_value: new_table_info_value,
on_update_table_info_failure,
});
}
let txn = Txn::merge_all(txns);
let r = self.kv_backend.txn(txn).await?;
if !r.succeeded {
for on_failure in on_failures {
let remote_table_info = (on_failure.on_update_table_info_failure)(&r.responses)?
.context(error::UnexpectedSnafu {
err_msg: "Reads the empty table info during the updating table info",
})?
.into_inner();
let op_name = "the batch updating table info";
ensure_values!(remote_table_info, on_failure.table_info_value, op_name);
}
}
Ok(())
}
pub async fn update_table_route(
&self,
table_id: TableId,
@@ -867,6 +924,7 @@ mod tests {
use crate::kv_backend::memory::MemoryKvBackend;
use crate::peer::Peer;
use crate::rpc::router::{region_distribution, Region, RegionRoute, RegionStatus};
use crate::table_name::TableName;
#[test]
fn test_deserialized_value_with_bytes() {
@@ -1082,9 +1140,6 @@ mod tests {
new_test_table_info(region_routes.iter().map(|r| r.region.id.region_number())).into();
let table_id = table_info.ident.table_id;
let datanode_id = 2;
let table_route_value = DeserializedValueWithBytes::from_inner(TableRouteValue::physical(
region_routes.clone(),
));
// creates metadata.
create_physical_table_metadata(
@@ -1095,18 +1150,20 @@ mod tests {
.await
.unwrap();
let table_info_value =
DeserializedValueWithBytes::from_inner(TableInfoValue::new(table_info.clone()));
let table_name = TableName::new(
table_info.catalog_name,
table_info.schema_name,
table_info.name,
);
// deletes metadata.
table_metadata_manager
.delete_table_metadata(&table_info_value, &table_route_value)
.delete_table_metadata(table_id, &table_name, region_routes)
.await
.unwrap();
// if metadata was already deleted, it should be ok.
table_metadata_manager
.delete_table_metadata(&table_info_value, &table_route_value)
.delete_table_metadata(table_id, &table_name, region_routes)
.await
.unwrap();
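A usage sketch for the new batch update above, assuming a caller that already holds `manager: &TableMetadataManager` and `pairs: Vec<(TableInfoValue, RawTableInfo)>` (and that both pair types are `Clone`); the chunking mirrors the companion `batch_update_table_info_value_chunk_size` method:

let chunk_size = manager.batch_update_table_info_value_chunk_size();
for chunk in pairs.chunks(chunk_size) {
    // each pair becomes one compare-and-update operation, so a chunk of
    // max_txn_ops() pairs fits into a single merged transaction
    manager
        .batch_update_table_info_values(chunk.to_vec())
        .await?;
}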

View File

@@ -123,7 +123,7 @@ impl CatalogManager {
self.kv_backend.exists(&raw_key).await
}
pub async fn catalog_names(&self) -> BoxStream<'static, Result<String>> {
pub fn catalog_names(&self) -> BoxStream<'static, Result<String>> {
let start_key = CatalogNameKey::range_start_key();
let req = RangeRequest::new().with_prefix(start_key.as_bytes());

View File

@@ -173,8 +173,16 @@ impl SchemaManager {
.transpose()
}
/// Deletes a [SchemaNameKey].
pub async fn delete(&self, schema: SchemaNameKey<'_>) -> Result<()> {
let raw_key = schema.as_raw_key();
self.kv_backend.delete(&raw_key, false).await?;
Ok(())
}
/// Returns a schema stream that lists all schemas belonging to the target `catalog`.
pub async fn schema_names(&self, catalog: &str) -> BoxStream<'static, Result<String>> {
pub fn schema_names(&self, catalog: &str) -> BoxStream<'static, Result<String>> {
let start_key = SchemaNameKey::range_start_key(catalog);
let req = RangeRequest::new().with_prefix(start_key.as_bytes());
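Since `schema_names` (like `catalog_names` and `tables` in the neighboring files) is no longer `async` and returns the stream directly, a caller collects it with `TryStreamExt`; a minimal sketch, assuming `schema_manager: &SchemaManager` and the `greptime` default catalog:

use futures::TryStreamExt;

let mut schemas = schema_manager
    .schema_names("greptime")
    .try_collect::<Vec<_>>()
    .await?;
schemas.sort();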

View File

@@ -241,7 +241,7 @@ impl TableNameManager {
self.kv_backend.exists(&raw_key).await
}
pub async fn tables(
pub fn tables(
&self,
catalog: &str,
schema: &str,

View File

@@ -147,7 +147,7 @@ impl TableRouteValue {
///
/// # Panics
/// Panics if `self` is not the [`PhysicalTableRouteValue`].
fn into_physical_table_route(self) -> PhysicalTableRouteValue {
pub fn into_physical_table_route(self) -> PhysicalTableRouteValue {
match self {
TableRouteValue::Physical(x) => x,
_ => unreachable!("Mistakenly been treated as a Physical TableRoute: {self:?}"),
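Because the now-public `into_physical_table_route` panics on a non-physical route (the `unreachable!` above), an external caller that cannot guarantee the variant may prefer to match explicitly; a hedged sketch that reuses the crate's `UnexpectedSnafu` seen earlier in this diff:

let physical_route = match table_route_value {
    TableRouteValue::Physical(route) => route,
    other => {
        return error::UnexpectedSnafu {
            err_msg: format!("expected a physical table route, got: {other:?}"),
        }
        .fail()
    }
};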

View File

@@ -18,6 +18,7 @@
#![feature(let_chains)]
pub mod cache_invalidator;
pub mod cluster;
pub mod datanode_manager;
pub mod ddl;
pub mod ddl_manager;

View File

@@ -236,6 +236,8 @@ impl<K, V> Stream for PaginationStream<K, V> {
PaginationStreamState::Init => {
let factory = self.factory.take().expect("lost factory");
if !factory.more {
// Ensures the factory always exists.
self.factory = Some(factory);
return Poll::Ready(None);
}
let fut = factory.read_next().boxed();
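The two added lines above restore the taken factory before returning `Poll::Ready(None)`, so polling the exhausted stream again no longer hits the expect("lost factory") panic. A standalone, simplified sketch of the same take-check-restore pattern (not the actual `PaginationStream` code):

struct StreamState<F> {
    factory: Option<F>,
}

impl<F> StreamState<F> {
    fn next_factory(&mut self, more: bool) -> Option<F> {
        let factory = self.factory.take().expect("lost factory");
        if !more {
            // put the factory back so a later call still finds it
            self.factory = Some(factory);
            return None;
        }
        Some(factory)
    }
}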

View File

@@ -17,7 +17,6 @@ pub mod lock;
pub mod procedure;
pub mod router;
pub mod store;
pub mod util;
use std::fmt::{Display, Formatter};

View File

@@ -12,17 +12,22 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::result;
use api::v1::meta::ddl_task_request::Task;
use api::v1::meta::{
AlterTableTask as PbAlterTableTask, AlterTableTasks as PbAlterTableTasks,
CreateTableTask as PbCreateTableTask, CreateTableTasks as PbCreateTableTasks,
DdlTaskRequest as PbDdlTaskRequest, DdlTaskResponse as PbDdlTaskResponse,
CreateDatabaseTask as PbCreateDatabaseTask, CreateTableTask as PbCreateTableTask,
CreateTableTasks as PbCreateTableTasks, DdlTaskRequest as PbDdlTaskRequest,
DdlTaskResponse as PbDdlTaskResponse, DropDatabaseTask as PbDropDatabaseTask,
DropTableTask as PbDropTableTask, DropTableTasks as PbDropTableTasks, Partition, ProcedureId,
TruncateTableTask as PbTruncateTableTask,
};
use api::v1::{AlterExpr, CreateTableExpr, DropTableExpr, SemanticType, TruncateTableExpr};
use api::v1::{
AlterExpr, CreateDatabaseExpr, CreateTableExpr, DropDatabaseExpr, DropTableExpr,
TruncateTableExpr,
};
use base64::engine::general_purpose;
use base64::Engine as _;
use prost::Message;
@@ -43,6 +48,8 @@ pub enum DdlTask {
CreateLogicalTables(Vec<CreateTableTask>),
DropLogicalTables(Vec<DropTableTask>),
AlterLogicalTables(Vec<AlterTableTask>),
CreateDatabase(CreateDatabaseTask),
DropDatabase(DropDatabaseTask),
}
impl DdlTask {
@@ -63,6 +70,15 @@ impl DdlTask {
)
}
pub fn new_alter_logical_tables(table_data: Vec<AlterExpr>) -> Self {
DdlTask::AlterLogicalTables(
table_data
.into_iter()
.map(|alter_table| AlterTableTask { alter_table })
.collect(),
)
}
pub fn new_drop_table(
catalog: String,
schema: String,
@@ -79,6 +95,28 @@ impl DdlTask {
})
}
pub fn new_create_database(
catalog: String,
schema: String,
create_if_not_exists: bool,
options: Option<HashMap<String, String>>,
) -> Self {
DdlTask::CreateDatabase(CreateDatabaseTask {
catalog,
schema,
create_if_not_exists,
options,
})
}
pub fn new_drop_database(catalog: String, schema: String, drop_if_exists: bool) -> Self {
DdlTask::DropDatabase(DropDatabaseTask {
catalog,
schema,
drop_if_exists,
})
}
pub fn new_alter_table(alter_table: AlterExpr) -> Self {
DdlTask::AlterTable(AlterTableTask { alter_table })
}
@@ -137,6 +175,12 @@ impl TryFrom<Task> for DdlTask {
Ok(DdlTask::AlterLogicalTables(tasks))
}
Task::CreateDatabaseTask(create_database) => {
Ok(DdlTask::CreateDatabase(create_database.try_into()?))
}
Task::DropDatabaseTask(drop_database) => {
Ok(DdlTask::DropDatabase(drop_database.try_into()?))
}
}
}
}
@@ -179,6 +223,8 @@ impl TryFrom<SubmitDdlTaskRequest> for PbDdlTaskRequest {
Task::AlterTableTasks(PbAlterTableTasks { tasks })
}
DdlTask::CreateDatabase(task) => Task::CreateDatabaseTask(task.try_into()?),
DdlTask::DropDatabase(task) => Task::DropDatabaseTask(task.try_into()?),
};
Ok(Self {
@@ -380,31 +426,7 @@ impl CreateTableTask {
.column_defs
.sort_unstable_by(|a, b| a.name.cmp(&b.name));
// compute new indices of sorted columns
// this part won't do any check or verification.
let mut primary_key_indices = Vec::with_capacity(self.create_table.primary_keys.len());
let mut value_indices =
Vec::with_capacity(self.create_table.column_defs.len() - primary_key_indices.len() - 1);
let mut timestamp_index = None;
for (index, col) in self.create_table.column_defs.iter().enumerate() {
if self.create_table.primary_keys.contains(&col.name) {
primary_key_indices.push(index);
} else if col.semantic_type == SemanticType::Timestamp as i32 {
timestamp_index = Some(index);
} else {
value_indices.push(index);
}
}
// overwrite table info
self.table_info
.meta
.schema
.column_schemas
.sort_unstable_by(|a, b| a.name.cmp(&b.name));
self.table_info.meta.schema.timestamp_index = timestamp_index;
self.table_info.meta.primary_key_indices = primary_key_indices;
self.table_info.meta.value_indices = value_indices;
self.table_info.sort_columns();
}
}
@@ -557,7 +579,7 @@ impl TryFrom<PbTruncateTableTask> for TruncateTableTask {
fn try_from(pb: PbTruncateTableTask) -> Result<Self> {
let truncate_table = pb.truncate_table.context(error::InvalidProtoMsgSnafu {
err_msg: "expected drop table",
err_msg: "expected truncate table",
})?;
Ok(Self {
@@ -589,6 +611,105 @@ impl TryFrom<TruncateTableTask> for PbTruncateTableTask {
}
}
#[derive(Debug, PartialEq, Serialize, Deserialize, Clone)]
pub struct CreateDatabaseTask {
pub catalog: String,
pub schema: String,
pub create_if_not_exists: bool,
pub options: Option<HashMap<String, String>>,
}
impl TryFrom<PbCreateDatabaseTask> for CreateDatabaseTask {
type Error = error::Error;
fn try_from(pb: PbCreateDatabaseTask) -> Result<Self> {
let CreateDatabaseExpr {
catalog_name,
database_name,
create_if_not_exists,
options,
} = pb.create_database.context(error::InvalidProtoMsgSnafu {
err_msg: "expected create database",
})?;
Ok(CreateDatabaseTask {
catalog: catalog_name,
schema: database_name,
create_if_not_exists,
options: Some(options),
})
}
}
impl TryFrom<CreateDatabaseTask> for PbCreateDatabaseTask {
type Error = error::Error;
fn try_from(
CreateDatabaseTask {
catalog,
schema,
create_if_not_exists,
options,
}: CreateDatabaseTask,
) -> Result<Self> {
Ok(PbCreateDatabaseTask {
create_database: Some(CreateDatabaseExpr {
catalog_name: catalog,
database_name: schema,
create_if_not_exists,
options: options.unwrap_or_default(),
}),
})
}
}
#[derive(Debug, PartialEq, Serialize, Deserialize, Clone)]
pub struct DropDatabaseTask {
pub catalog: String,
pub schema: String,
pub drop_if_exists: bool,
}
impl TryFrom<PbDropDatabaseTask> for DropDatabaseTask {
type Error = error::Error;
fn try_from(pb: PbDropDatabaseTask) -> Result<Self> {
let DropDatabaseExpr {
catalog_name,
schema_name,
drop_if_exists,
} = pb.drop_database.context(error::InvalidProtoMsgSnafu {
err_msg: "expected drop database",
})?;
Ok(DropDatabaseTask {
catalog: catalog_name,
schema: schema_name,
drop_if_exists,
})
}
}
impl TryFrom<DropDatabaseTask> for PbDropDatabaseTask {
type Error = error::Error;
fn try_from(
DropDatabaseTask {
catalog,
schema,
drop_if_exists,
}: DropDatabaseTask,
) -> Result<Self> {
Ok(PbDropDatabaseTask {
drop_database: Some(DropDatabaseExpr {
catalog_name: catalog,
schema_name: schema,
drop_if_exists,
}),
})
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
@@ -644,7 +765,8 @@ mod tests {
"column1".to_string(),
ConcreteDataType::timestamp_millisecond_datatype(),
false,
),
)
.with_time_index(true),
ColumnSchema::new(
"column2".to_string(),
ConcreteDataType::float64_datatype(),
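A minimal sketch of how the new database DDL tasks above might be built and round-tripped through the protobuf types; the catalog/schema names are illustrative and error handling is elided (assume a function returning this crate's Result):

use std::collections::HashMap;

let _create = DdlTask::new_create_database(
    "greptime".to_string(),
    "my_db".to_string(),
    true, // create_if_not_exists
    Some(HashMap::new()),
);
let _drop = DdlTask::new_drop_database("greptime".to_string(), "my_db".to_string(), true);

// CreateDatabaseTask <-> PbCreateDatabaseTask, using the conversions above.
let pb: PbCreateDatabaseTask = CreateDatabaseTask {
    catalog: "greptime".to_string(),
    schema: "my_db".to_string(),
    create_if_not_exists: true,
    options: None,
}
.try_into()?;
let roundtrip: CreateDatabaseTask = pb.try_into()?;
// `None` options normalize to `Some` of an empty map after the round trip.
assert_eq!(roundtrip.options, Some(HashMap::new()));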

View File

@@ -26,9 +26,9 @@ use api::v1::meta::{
ResponseHeader as PbResponseHeader,
};
use crate::error;
use crate::error::Result;
use crate::rpc::{util, KeyValue};
use crate::rpc::KeyValue;
use crate::{error, util};
pub fn to_range(key: Vec<u8>, range_end: Vec<u8>) -> (Bound<Vec<u8>>, Bound<Vec<u8>>) {
match (&key[..], &range_end[..]) {

View File

@@ -1,46 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::meta::ResponseHeader;
use crate::error;
use crate::error::Result;
#[inline]
pub fn check_response_header(header: Option<&ResponseHeader>) -> Result<()> {
if let Some(header) = header {
if let Some(error) = &header.error {
let code = error.code;
let err_msg = &error.err_msg;
return error::IllegalServerStateSnafu { code, err_msg }.fail();
}
}
Ok(())
}
/// Get prefix end key of `key`.
#[inline]
pub fn get_prefix_end_key(key: &[u8]) -> Vec<u8> {
for (i, v) in key.iter().enumerate().rev() {
if *v < 0xFF {
let mut end = Vec::from(&key[..=i]);
end[i] = *v + 1;
return end;
}
}
// next prefix does not exist (e.g., 0xffff);
vec![0]
}
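Although this file is deleted here (the same helpers are now reached via `crate::util`, per the import change in the previous file), the prefix-end computation deserves a concrete illustration; assuming the moved function keeps this behavior:

// "a/" = [0x61, 0x2F]: the last byte below 0xFF is incremented -> [0x61, 0x30] = "a0"
assert_eq!(get_prefix_end_key(b"a/"), vec![0x61u8, 0x30]);
// every byte is 0xFF, so no next prefix exists; the sentinel [0] is returned
assert_eq!(get_prefix_end_key(&[0xFF, 0xFF]), vec![0u8]);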

View File

@@ -17,8 +17,10 @@ use std::sync::Arc;
use async_trait::async_trait;
use common_error::ext::BoxedError;
use common_procedure::error::{DeleteStatesSnafu, ListStateSnafu, PutStateSnafu};
use common_procedure::store::state_store::{KeyValueStream, StateStore};
use common_procedure::store::state_store::{KeySet, KeyValueStream, StateStore};
use common_procedure::store::util::multiple_value_stream;
use common_procedure::Result as ProcedureResult;
use futures::future::try_join_all;
use futures::StreamExt;
use snafu::ResultExt;
@@ -42,18 +44,31 @@ fn strip_prefix(key: &str) -> String {
pub struct KvStateStore {
kv_backend: KvBackendRef,
// limit is set to 0, it is treated as no limit.
max_size_per_range: usize,
// The max num of keys to be returned in a range scan request
// `None` stands for no limit.
max_num_per_range_request: Option<usize>,
// The max bytes of value.
// `None` stands for no limit.
max_value_size: Option<usize>,
}
impl KvStateStore {
// `max_size_per_range` is set to 0, it is treated as no limit.
pub fn new(kv_backend: KvBackendRef) -> Self {
Self {
kv_backend,
max_size_per_range: 0,
max_num_per_range_request: None,
max_value_size: None,
}
}
/// Sets the `max_value_size`. `None` stands for no limit.
///
/// If a value is larger than the `max_value_size`,
/// the [`KvStateStore`] will automatically split the large value into multiple values.
pub fn with_max_value_size(mut self, max_value_size: Option<usize>) -> Self {
self.max_value_size = max_value_size;
self
}
}
fn decode_kv(kv: KeyValue) -> Result<(String, Vec<u8>)> {
@@ -64,20 +79,80 @@ fn decode_kv(kv: KeyValue) -> Result<(String, Vec<u8>)> {
Ok((key, value))
}
enum SplitValue<'a> {
Single(&'a [u8]),
Multiple(Vec<&'a [u8]>),
}
fn split_value(value: &[u8], max_value_size: Option<usize>) -> SplitValue<'_> {
if let Some(max_value_size) = max_value_size {
if value.len() <= max_value_size {
SplitValue::Single(value)
} else {
SplitValue::Multiple(value.chunks(max_value_size).collect::<Vec<_>>())
}
} else {
SplitValue::Single(value)
}
}
#[async_trait]
impl StateStore for KvStateStore {
async fn put(&self, key: &str, value: Vec<u8>) -> ProcedureResult<()> {
let _ = self
.kv_backend
.put(PutRequest {
key: with_prefix(key).into_bytes(),
value,
..Default::default()
})
.await
.map_err(BoxedError::new)
.context(PutStateSnafu { key })?;
Ok(())
let split = split_value(&value, self.max_value_size);
let key = with_prefix(key);
match split {
SplitValue::Single(_) => {
self.kv_backend
.put(
PutRequest::new()
.with_key(key.to_string().into_bytes())
.with_value(value),
)
.await
.map_err(BoxedError::new)
.context(PutStateSnafu { key })?;
Ok(())
}
SplitValue::Multiple(values) => {
// Note:
// The length of `values` is bounded by usize::MAX in theory.
// The KeySet::with_segment_suffix method uses a 10-digit, zero-padded number to store the
// segment index, which is far more than any realistic number of segments.
// The first segment key: "0b00001111"
// The 2nd segment key: "0b00001111/0000000001"
// The 3rd segment key: "0b00001111/0000000002"
let operations = values
.into_iter()
.enumerate()
.map(|(idx, value)| {
let key = if idx > 0 {
KeySet::with_segment_suffix(&key, idx)
} else {
key.to_string()
};
let kv_backend = self.kv_backend.clone();
async move {
kv_backend
.put(
PutRequest::new()
.with_key(key.into_bytes())
.with_value(value),
)
.await
}
})
.collect::<Vec<_>>();
try_join_all(operations)
.await
.map_err(BoxedError::new)
.context(PutStateSnafu { key })?;
Ok(())
}
}
}
async fn walk_top_down(&self, path: &str) -> ProcedureResult<KeyValueStream> {
@@ -90,7 +165,7 @@ impl StateStore for KvStateStore {
let stream = PaginationStream::new(
self.kv_backend.clone(),
req,
self.max_size_per_range,
self.max_num_per_range_request.unwrap_or_default(),
Arc::new(decode_kv),
);
@@ -100,6 +175,8 @@ impl StateStore for KvStateStore {
.with_context(|_| ListStateSnafu { path })
});
let stream = multiple_value_stream(Box::pin(stream));
Ok(Box::pin(stream))
}
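For reference, a standalone sketch of the value segmentation described in the comments above; the 10-digit, zero-padded suffix format follows the `KeySet::with_segment_suffix` note, but this helper is illustrative, not the crate's API:

fn segment_keys(key: &str, value_len: usize, max_value_size: usize) -> Vec<String> {
    let num_segments = value_len.div_ceil(max_value_size).max(1);
    (0..num_segments)
        .map(|idx| {
            if idx == 0 {
                key.to_string()
            } else {
                // mirrors the documented "<key>/0000000001" style suffix
                format!("{key}/{idx:010}")
            }
        })
        .collect()
}

// e.g. a 2500-byte value with a 1024-byte limit yields:
// ["0b00001111", "0b00001111/0000000001", "0b00001111/0000000002"]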
@@ -128,19 +205,26 @@ impl StateStore for KvStateStore {
#[cfg(test)]
mod tests {
use std::env;
use std::sync::Arc;
use common_procedure::store::state_store::KeyValue;
use common_telemetry::info;
use futures::TryStreamExt;
use rand::{Rng, RngCore};
use uuid::Uuid;
use super::*;
use crate::kv_backend::chroot::ChrootKvBackend;
use crate::kv_backend::etcd::EtcdStore;
use crate::kv_backend::memory::MemoryKvBackend;
#[tokio::test]
async fn test_meta_state_store() {
let store = &KvStateStore {
kv_backend: Arc::new(MemoryKvBackend::new()),
max_size_per_range: 1, // for testing "more" in range
max_num_per_range_request: Some(1), // for testing "more" in range
max_value_size: None,
};
let walk_top_down = async move |path: &str| -> Vec<KeyValue> {
@@ -165,9 +249,9 @@ mod tests {
let data = walk_top_down("/").await;
assert_eq!(
vec![
("a/1".to_string(), b"v1".to_vec()),
("a/2".to_string(), b"v2".to_vec()),
("b/1".to_string(), b"v3".to_vec())
("a/1".into(), b"v1".to_vec()),
("a/2".into(), b"v2".to_vec()),
("b/1".into(), b"v3".to_vec())
],
data
);
@@ -175,8 +259,8 @@ mod tests {
let data = walk_top_down("a/").await;
assert_eq!(
vec![
("a/1".to_string(), b"v1".to_vec()),
("a/2".to_string(), b"v2".to_vec()),
("a/1".into(), b"v1".to_vec()),
("a/2".into(), b"v2".to_vec()),
],
data
);
@@ -187,6 +271,122 @@ mod tests {
.unwrap();
let data = walk_top_down("a/").await;
assert_eq!(vec![("a/1".to_string(), b"v1".to_vec()),], data);
assert_eq!(vec![("a/1".into(), b"v1".to_vec()),], data);
}
struct TestCase {
prefix: String,
key: String,
value: Vec<u8>,
}
async fn test_meta_state_store_split_value_with_size_limit(
kv_backend: KvBackendRef,
size_limit: u32,
num_per_range: u32,
max_bytes: u32,
) {
let num_cases = rand::thread_rng().gen_range(1..=26);
let mut cases = Vec::with_capacity(num_cases);
for i in 0..num_cases {
let size = rand::thread_rng().gen_range(size_limit..=max_bytes);
let mut large_value = vec![0u8; size as usize];
rand::thread_rng().fill_bytes(&mut large_value);
// Starts from `a`.
let prefix = format!("{}/", std::char::from_u32(97 + i as u32).unwrap());
cases.push(TestCase {
key: format!("{}{}.commit", prefix, Uuid::new_v4()),
prefix,
value: large_value,
})
}
let store = &KvStateStore {
kv_backend: kv_backend.clone(),
max_num_per_range_request: Some(num_per_range as usize), // for testing "more" in range
max_value_size: Some(size_limit as usize),
};
let walk_top_down = async move |path: &str| -> Vec<KeyValue> {
let mut data = store
.walk_top_down(path)
.await
.unwrap()
.try_collect::<Vec<_>>()
.await
.unwrap();
data.sort_unstable_by(|a, b| a.0.cmp(&b.0));
data
};
// Puts the values
for TestCase { key, value, .. } in &cases {
store.put(key, value.clone()).await.unwrap();
}
// Validates the values
for TestCase { prefix, key, value } in &cases {
let data = walk_top_down(prefix).await;
assert_eq!(data.len(), 1);
let (keyset, got) = data.into_iter().next().unwrap();
let num_expected_keys = value.len().div_ceil(size_limit as usize);
assert_eq!(&got, value);
assert_eq!(keyset.key(), key);
assert_eq!(keyset.keys().len(), num_expected_keys);
}
// Deletes the values
for TestCase { prefix, .. } in &cases {
let data = walk_top_down(prefix).await;
let (keyset, _) = data.into_iter().next().unwrap();
// Deletes values
store.batch_delete(keyset.keys().as_slice()).await.unwrap();
let data = walk_top_down(prefix).await;
assert_eq!(data.len(), 0);
}
}
#[tokio::test]
async fn test_meta_state_store_split_value() {
let size_limit = rand::thread_rng().gen_range(128..=512);
let page_size = rand::thread_rng().gen_range(1..10);
let kv_backend = Arc::new(MemoryKvBackend::new());
test_meta_state_store_split_value_with_size_limit(kv_backend, size_limit, page_size, 8192)
.await;
}
#[tokio::test]
async fn test_etcd_store_split_value() {
common_telemetry::init_default_ut_logging();
let prefix = "test_etcd_store_split_value/";
let endpoints = env::var("GT_ETCD_ENDPOINTS").unwrap_or_default();
let kv_backend: KvBackendRef = if endpoints.is_empty() {
Arc::new(MemoryKvBackend::new())
} else {
let endpoints = endpoints
.split(',')
.map(|s| s.to_string())
.collect::<Vec<String>>();
let backend = EtcdStore::with_endpoints(endpoints, 128)
.await
.expect("malformed endpoints");
// Each retry requires a new isolation namespace.
let chroot = format!("{}{}", prefix, Uuid::new_v4());
info!("chroot length: {}", chroot.len());
Arc::new(ChrootKvBackend::new(chroot.into(), backend))
};
let key_size = 1024;
// The default etcd size limit for any request is 1.5 MiB.
// However, some KvBackends, such as the `ChrootKvBackend`, prepend a prefix to the `key`,
// so we don't know the exact size of the key in advance.
let size_limit = 1536 * 1024 - key_size;
let page_size = rand::thread_rng().gen_range(1..10);
test_meta_state_store_split_value_with_size_limit(
kv_backend,
size_limit,
page_size,
size_limit * 10,
)
.await;
}
}
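A back-of-the-envelope check on the etcd size limit used above: with etcd's 1.5 MiB request cap and 1 KiB reserved for the (possibly chroot-prefixed) key, each segment carries at most 1536 * 1024 - 1024 bytes, so the largest value generated by the test splits into ten put requests; a hedged sketch of that arithmetic:

let key_size: usize = 1024;
let size_limit = 1536 * 1024 - key_size; // 1_571_840 bytes per segment
let max_value_len = size_limit * 10;     // upper bound used by the test
assert_eq!(max_value_len.div_ceil(size_limit), 10);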

Some files were not shown because too many files have changed in this diff