Compare commits

...

48 Commits

Author SHA1 Message Date
liyang
4b580f4037 feat: release binary to aws s3 (#1881) 2023-07-04 22:33:35 +08:00
Weny Xu
ee16262b45 feat: add create table procedure (#1845)
* feat: add create table procedure

* feat: change table_info type from vec u8 to RawTableInfo

* feat: return create table status

* fix: fix uncaught error

* refactor: use a notifier to respond to callers

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* chore: add comment

* chore: apply suggestions from CR

* refactor: make CreateMetadata step after DatanodeCreateTable step
2023-07-04 22:24:43 +08:00
Yingwen
f37b394f1a fix: check table existence in create table procedure (#1880)
* fix: check table existence in table procedures

* fix: use correct error variant

* chore: address review comments

* chore: address comments

* test: change error code
2023-07-04 22:01:27 +08:00
Eugene Tolbakov
ccee60f37d feat(http_body_limit): add initial support for DefaultBodyLimit (#1860)
* feat(http_body_limit): add initial support for DefaultBodyLimit

* fix: address CR suggestions

* fix: adjust the const for default http body limit

* fix: adjust the toml_str for the test

* fix: address CR suggestions

* fix: body_limit units in example config toml files

* fix: address clippy suggestions
2023-07-04 20:56:56 +08:00
Ruihang Xia
bee8323bae chore: bump sqlness to 0.5.0 (#1877)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-04 19:49:12 +08:00
Weny Xu
000df8cf1e feat: add ddl client (#1856)
* feat: add ddl client

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2023-07-04 19:32:02 +08:00
Yingwen
884731a2c8 chore: initialize mito2 crate (#1875) 2023-07-04 17:55:00 +08:00
shuiyisong
2922c25a16 chore: stop caching None in CachedMetaKvBackend (#1871)
* chore: don't cache None

* fix: test case

* chore: add comment

* chore: minor rewrite
2023-07-04 17:17:48 +08:00
Lei, HUANG
4dec06ec86 chore: bump version 0.3.2 (#1876)
bump version 0.3.2
2023-07-04 17:04:27 +08:00
Lei, HUANG
3b6f70cde3 feat: initial twcs impl (#1851)
* feat: initial twcs impl

* chore: rename SimplePicker to LeveledPicker

* rename some structs

* Remove Compaction strategy

* make compaction picker a trait object

* make compaction picker configurable for every region

* chore: add some test for ttl

* add some tests

* fix: some style issues in cr

* feat: enable twcs when creating tables

* feat: allow config time window when creating tables

* fix: some cr comments
2023-07-04 16:42:27 +08:00
Yingwen
b8e92292d2 feat: Implement a new scan mode using a chain reader (#1857)
* feat: add log

* feat: print more info

* feat: use chain reader

* fix: panic on getting first range

* fix: prev not updated

* fix: reverse readers and iter backward

* chore: don't print windows in log

* feat: consider memtable range

Also fix the issue of using an incorrect comparison method to sort time
ranges.

* fix: merge memtable window with sst's

* feat: add use_chain_reader option

* feat: skip empty memtables

* chore: change log level

* fix: memtable range not ordered

* style: fix clippy

* chore: address review comments

* chore: print region id in log
2023-07-04 16:01:34 +08:00
Ruihang Xia
746fe8b4fe fix: use mark-deletion for system catalog (#1874)
* fix: use mark-deletion for system catalog

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix the default value

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean tables

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-04 16:00:39 +08:00
JeremyHi
20f2fc4a2a feat: add leader kv store cache for metadata (#1853)
* feat: add leader kv store cache for metadata

* refactor: create cache internal

* fix: race condition

* fix: race condition on read
2023-07-04 15:49:42 +08:00
Yingwen
2ef84f64f1 feat(servers): enlarge default body limit to 64M (#1873) 2023-07-04 07:13:14 +00:00
fys
451cc02d8d chore: add feature for metrics-process, default enable (#1870)
chore: add feature for metrics process, default enable
2023-07-04 13:28:33 +08:00
Lei, HUANG
b466ef6cb6 fix: libz dependency (#1867) 2023-07-03 10:08:53 +00:00
LFC
5b42e15105 refactor: add TableInfoKey and TableRegionKey (#1865)
* refactor: add TableInfoKey and TableRegionKey

* refactor: move KvBackend to common-meta

* fix: resolve PR comments
2023-07-03 18:01:20 +08:00
shuiyisong
e1bb7acfe5 fix: return err msg if use wrong database in MySQL (#1866) 2023-07-03 17:31:09 +08:00
Lei, HUANG
2c0c4672b4 feat: support building binary for centos7 (#1863)
feat: support building binary for centos7
2023-07-03 14:13:55 +08:00
Cao Zhengjia
e54415e723 feat: Make heartbeat intervals configurable in Frontend and Datanode (#1864)
* update frontend options and config

* fix format
2023-07-03 12:08:47 +08:00
Ruihang Xia
783a794060 fix: break CI again 🥲 (#1859)
* fix information schema case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable -Wunused_result lint

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-30 20:01:14 +08:00
Vanish
563f6e05e2 feat: remove all the manifests in drop_region. (#1834)
* feat: drop_region delete manifest file

* chore: remove redundant code

* chore: fmt

* chore: clippy

* chore: clippy

* feat: support delete_all in manifest.

chore: CR

* test: test_drop_basic, test_drop_reopen

* chore: cr

* fix: typo

* chore: cr
2023-06-30 17:42:11 +08:00
Ruihang Xia
25cb667470 fix: sort unstable sqlness result (#1858)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-30 09:25:24 +00:00
Ruihang Xia
c77b94650c refactor: remove Table::scan method (#1855)
* remove scan method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-30 12:13:14 +08:00
Ruihang Xia
605776f49c feat: support bool operator with other computation (#1844)
* add some cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl atan2 and power

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix instant manipulator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-29 19:23:54 +08:00
Ruihang Xia
d45e7b7480 refactor: build parquet file stream from ParquetExec (#1852)
* refactor: build parquet file stream from ParquetExec

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-29 19:19:31 +08:00
JeremyHi
2b3ca1309a feat: table_routes util (#1849) 2023-06-29 16:47:56 +08:00
Weny Xu
acfa229641 chore: bump orc-rust to 0319acd (#1847) 2023-06-29 10:45:05 +08:00
JeremyHi
7e23dd7714 feat: http api for node-lease (#1843)
* feat: add node-lease http api

* revert: show_create.result
2023-06-29 09:34:54 +08:00
Lei, HUANG
559d1f73a2 feat: push all possible filters down to parquet exec (#1839)
* feat: push all possible filters down to parquet exec

* fix: project

* test: add ut for DatafusionArrowPredicate

* fix: according to CR comments
2023-06-28 20:14:37 +08:00
JeremyHi
bc33fdc8ef feat: save node lease into memory (#1841)
* feat: lease secs = 5

* feat: set lease data into memory of leader

* fix: ignore stale heartbeat

* Update src/meta-srv/src/election.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-06-28 11:54:06 +08:00
Lei, HUANG
f287d3115b chore: replace result assertions (#1840)
* s/assert!\((.*)\.is_ok\(\)\);/\1.unwrap\(\);/g

* s/assert!\((.*)\.is_some\(\)\);/\1.unwrap\(\);/g
2023-06-27 19:14:48 +08:00
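The two commit messages above are themselves the substitution patterns. Applied with sed's extended-regex mode, the rewrite would look roughly like this (a sketch; the input line is illustrative, not taken from the repo):

```shell
# Rewrite `assert!(x.is_ok());` into `x.unwrap();`, as the commit's pattern describes.
echo 'assert!(result.is_ok());' \
  | sed -E 's/assert!\((.*)\.is_ok\(\)\);/\1.unwrap();/g'
# prints: result.unwrap();

# The companion pattern does the same for `is_some()`.
echo 'assert!(value.is_some());' \
  | sed -E 's/assert!\((.*)\.is_some\(\)\);/\1.unwrap();/g'
# prints: value.unwrap();
```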
Ruihang Xia
b737a240de fix: add sqlness tests for some promql function (#1838)
* correct range manipulate exec fmt text

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix partition requirement

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix udf signature

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* finalise

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore unstable ordered result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add nan value test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-27 19:05:26 +08:00
fys
99f0479bd2 feat: improve influxdb v2 api compatibility (#1831)
* feat: support influxdb v2 api

* cr
2023-06-27 18:21:51 +08:00
fys
313121f2ae fix: block when stream insert (#1835)
* fix: stream insert blocking

* fix: example link

* chore: Increase the default channel size "1024" -> "65536"
2023-06-27 16:57:03 +08:00
LFC
fcff66e039 chore: deny unused results (#1825)
* chore: deny unused results

* rebase
2023-06-27 15:33:53 +08:00
shuiyisong
03057cab6c feat: physical plan wrapper (#1837)
* test: add physical plan wrapper trait

* test: add plugins to datanode initialization

* test: add plugins to datanode initialization

* chore: add metrics method

* chore: update meter-core version

* chore: remove unused code

* chore: impl metrics method on df execution plan adapter

* chore: minor comment fix

* chore: add retry in create table

* chore: shrink keep lease handler buffer

* chore: add etcd batch size warn

* chore: try shrink

* Revert "chore: try shrink"

This reverts commit 0361b51670.

* chore: add create table backup time

* add metrics in some interfaces

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* calc elapsed time and rows

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: remove timer in scan batch

* chore: add back stream metrics wrapper

* chore: add timer to ready poll

* chore: minor update

* chore: try using df_plan.metrics()

* chore: remove table scan timer

* chore: remove scan timer

* chore: add debug log

* Revert "chore: add debug log"

This reverts commit 672a0138fd.

* chore: use batch size as row count

* chore: use batch size as row count

* chore: tune code for pr

* chore: rename to physical plan wrapper

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-27 14:04:04 +08:00
Weny Xu
dcfce49cff refactor(datanode): move Instance heartbeat task to Datanode struct (#1832)
* refactor(datanode): move Instance heartbeat to Datanode struct

* chore: apply suggestions from CR

* fix: start heartbeat task after instance starts
2023-06-27 12:32:20 +08:00
JeremyHi
78b07996b1 feat: txn for meta (#1828)
* feat: txn for meta kvstore

* feat: txn

* chore: add unit test

* chore: more test

* chore: more test

* Update src/meta-srv/src/service/store/memory.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: by cr

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-06-26 17:12:48 +08:00
dennis zhuang
034564fd27 feat: make blob(binary) type working (#1818)
* feat: test blob type

* feat: make blob type working

* chore: comment

* Update src/sql/src/statements/insert.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* chore: by CR comments

* fix: comment

* Update src/sql/src/statements/insert.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/sql/src/statements/insert.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* fix: test

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-26 08:49:04 +00:00
Ruihang Xia
a95f8767a8 refactor: merge catalog provider & schema provider into catalog manager (#1803)
* move  to expr_factory

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move configs into service_config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move GrpcQueryHandler into distributed.rs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile and test in catalog sub-crate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix table-procedure compile and test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix query compile and tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix datanode compile and tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix catalog/query/script/servers compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix frontend compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix nextest except information_schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* support information_schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix merge errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove other structs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change deregister_table's return type to empty tuple

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-26 15:08:59 +08:00
Eugene Tolbakov
964d26e415 fix: docker build for aarch64 (#1826) 2023-06-25 18:29:00 +09:00
Yingwen
fd412b7b07 refactor!: Uses table id to locate tables in table engines (#1817)
* refactor: add table_id to get_table()/table_exists()

* refactor: Add table_id to alter table request

* refactor: Add table id to DropTableRequest

* refactor: add table id to DropTableRequest

* refactor: Use table id as key for the tables map

* refactor: use table id as file engine's map key

* refactor: Remove table reference from engine's get_table/table_exists

* style: remove unused imports

* feat!: Add table id to TableRegionalValue

* style: fix clippy

* chore: add comments and logs
2023-06-25 15:05:20 +08:00
Weny Xu
223cf31409 feat: support to copy from orc format (#1814)
* feat: support to copy from orc format

* test: add copy from orc test

* chore: add license header

* refactor: remove unimplemented macro

* chore: apply suggestions from CR

* chore: bump orc-rust to 0.2.3
2023-06-25 14:07:16 +08:00
Ruihang Xia
62f660e439 feat: implement metrics for Scan plan (#1812)
* add metrics in some interfaces

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* calc elapsed time and rows

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-25 14:06:50 +08:00
Lei, HUANG
0fb18245b8 fix: docker build (#1822) 2023-06-25 11:05:46 +08:00
Weny Xu
caed6879e6 refactor: remove redundant code (#1821) 2023-06-25 10:56:31 +08:00
Yingwen
5ab0747092 test(storage): wait task before checking scheduled task num (#1811) 2023-06-21 18:04:34 +08:00
459 changed files with 11994 additions and 6396 deletions


@@ -20,6 +20,3 @@ out/
# Rust
target/
# Git
.git


@@ -127,6 +127,21 @@ jobs:
name: ${{ matrix.file }}.sha256sum
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.sha256sum
- name: Configure tag
shell: bash
if: github.event_name == 'push'
run: |
VERSION=${{ github.ref_name }}
echo "TAG=${VERSION:1}" >> $GITHUB_ENV
- name: Upload to S3
run: |
aws s3 sync target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }} s3://${{ secrets.GREPTIMEDB_RELEASE_BUCKET_NAME }}/releases/${TAG}
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.AWS_CN_REGION }}
build-linux:
name: Build linux binary
strategy:
@@ -288,6 +303,21 @@ jobs:
name: ${{ matrix.file }}.sha256sum
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.sha256sum
- name: Configure tag
shell: bash
if: github.event_name == 'push'
run: |
VERSION=${{ github.ref_name }}
echo "TAG=${VERSION:1}" >> $GITHUB_ENV
- name: Upload to S3
run: |
aws s3 sync target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }} s3://${{ secrets.GREPTIMEDB_RELEASE_BUCKET_NAME }}/releases/${TAG}
env:
AWS_ACCESS_KEY_ID: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
AWS_DEFAULT_REGION: ${{ secrets.AWS_CN_REGION }}
docker:
name: Build docker image
needs: [build-linux, build-macos]
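The `Configure tag` step added in the workflow above derives the S3 release prefix by dropping the leading `v` from the pushed tag. A minimal sketch of that derivation (the example value stands in for `${{ github.ref_name }}`):

```shell
# Mimic the workflow's "Configure tag" step: strip the leading "v" from the ref name.
VERSION="v0.3.2"     # stand-in for the pushed tag from ${{ github.ref_name }}
TAG="${VERSION#v}"   # POSIX equivalent of the workflow's bash "${VERSION:1}" here
echo "$TAG"          # prints: 0.3.2
```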

.gitignore (vendored, 2 lines changed)

@@ -44,3 +44,5 @@ benchmarks/data
# Vscode workspace
*.code-workspace
venv/

Cargo.lock (generated, 163 lines changed)

@@ -199,7 +199,7 @@ checksum = "8f1f8f5a6f3d50d89e3797d7593a50f96bb2aaa20ca0cc7be1fb673232c91d72"
[[package]]
name = "api"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arrow-flight",
"common-base",
@@ -841,7 +841,7 @@ dependencies = [
[[package]]
name = "benchmarks"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arrow",
"clap 4.3.2",
@@ -1224,7 +1224,7 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]]
name = "catalog"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"arc-swap",
@@ -1509,7 +1509,7 @@ checksum = "2da6da31387c7e4ef160ffab6d5e7f00c42626fe39aea70a7b0f1773f7dd6c1b"
[[package]]
name = "client"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"arrow-flight",
@@ -1527,6 +1527,7 @@ dependencies = [
"datafusion",
"datanode",
"datatypes",
"derive-new",
"enum_dispatch",
"futures-util",
"moka 0.9.7",
@@ -1534,7 +1535,7 @@ dependencies = [
"prost",
"rand",
"snafu",
"substrait 0.4.0",
"substrait 0.3.2",
"substrait 0.7.5",
"tokio",
"tokio-stream",
@@ -1571,7 +1572,7 @@ dependencies = [
[[package]]
name = "cmd"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"anymap",
"build-data",
@@ -1601,7 +1602,7 @@ dependencies = [
"servers",
"session",
"snafu",
"substrait 0.4.0",
"substrait 0.3.2",
"temp-env",
"tikv-jemallocator",
"tokio",
@@ -1633,7 +1634,7 @@ checksum = "55b672471b4e9f9e95499ea597ff64941a309b2cdbffcc46f2cc5e2d971fd335"
[[package]]
name = "common-base"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"anymap",
"bitvec",
@@ -1647,7 +1648,7 @@ dependencies = [
[[package]]
name = "common-catalog"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-trait",
"chrono",
@@ -1664,7 +1665,7 @@ dependencies = [
[[package]]
name = "common-datasource"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arrow",
"arrow-schema",
@@ -1679,6 +1680,7 @@ dependencies = [
"derive_builder 0.12.0",
"futures",
"object-store",
"orc-rust",
"paste",
"regex",
"snafu",
@@ -1689,7 +1691,7 @@ dependencies = [
[[package]]
name = "common-error"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"snafu",
"strum",
@@ -1697,7 +1699,7 @@ dependencies = [
[[package]]
name = "common-function"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arc-swap",
"chrono-tz 0.6.3",
@@ -1720,7 +1722,7 @@ dependencies = [
[[package]]
name = "common-function-macro"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arc-swap",
"backtrace",
@@ -1736,7 +1738,7 @@ dependencies = [
[[package]]
name = "common-grpc"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"arrow-flight",
@@ -1766,7 +1768,7 @@ dependencies = [
[[package]]
name = "common-grpc-expr"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"async-trait",
@@ -1785,7 +1787,7 @@ dependencies = [
[[package]]
name = "common-mem-prof"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"common-error",
"snafu",
@@ -1798,9 +1800,10 @@ dependencies = [
[[package]]
name = "common-meta"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"async-stream",
"async-trait",
"chrono",
"common-catalog",
@@ -1809,6 +1812,8 @@ dependencies = [
"common-telemetry",
"common-time",
"datatypes",
"futures",
"prost",
"serde",
"serde_json",
"snafu",
@@ -1819,7 +1824,7 @@ dependencies = [
[[package]]
name = "common-pprof"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"common-error",
"pprof",
@@ -1830,7 +1835,7 @@ dependencies = [
[[package]]
name = "common-procedure"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-stream",
"async-trait",
@@ -1852,7 +1857,7 @@ dependencies = [
[[package]]
name = "common-procedure-test"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-trait",
"common-procedure",
@@ -1860,7 +1865,7 @@ dependencies = [
[[package]]
name = "common-query"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"async-trait",
@@ -1880,7 +1885,7 @@ dependencies = [
[[package]]
name = "common-recordbatch"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"common-error",
"datafusion",
@@ -1896,7 +1901,7 @@ dependencies = [
[[package]]
name = "common-runtime"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-trait",
"common-error",
@@ -1912,7 +1917,7 @@ dependencies = [
[[package]]
name = "common-telemetry"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"backtrace",
"common-error",
@@ -1937,7 +1942,7 @@ dependencies = [
[[package]]
name = "common-test-util"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"once_cell",
"rand",
@@ -1946,7 +1951,7 @@ dependencies = [
[[package]]
name = "common-time"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"chrono",
"chrono-tz 0.8.2",
@@ -2586,7 +2591,7 @@ dependencies = [
[[package]]
name = "datanode"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"async-compat",
@@ -2642,7 +2647,7 @@ dependencies = [
"sql",
"storage",
"store-api",
"substrait 0.4.0",
"substrait 0.3.2",
"table",
"table-procedure",
"tokio",
@@ -2656,7 +2661,7 @@ dependencies = [
[[package]]
name = "datatypes"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arrow",
"arrow-array",
@@ -3069,6 +3074,12 @@ version = "0.2.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4443176a9f2c162692bd3d352d745ef9413eec5782a80d8fd6f8a1ac692a07f7"
[[package]]
name = "fallible-streaming-iterator"
version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7360491ce676a36bf9bb3c56c1aa791658183a54d2744120f27285738d90465a"
[[package]]
name = "fastrand"
version = "1.9.0"
@@ -3091,7 +3102,7 @@ dependencies = [
[[package]]
name = "file-table-engine"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-trait",
"common-catalog",
@@ -3200,7 +3211,7 @@ dependencies = [
[[package]]
name = "frontend"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"async-compat",
@@ -3255,7 +3266,7 @@ dependencies = [
"storage",
"store-api",
"strfmt",
"substrait 0.4.0",
"substrait 0.3.2",
"table",
"tokio",
"toml",
@@ -4098,7 +4109,7 @@ checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b"
[[package]]
name = "greptime-proto"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=7aeaeaba1e0ca6a5c736b6ab2eb63144ae3d284b#7aeaeaba1e0ca6a5c736b6ab2eb63144ae3d284b"
source = "git+https://github.com/WenyXu/greptime-proto.git?rev=1eda4691a5d2c8ffc463d48ca2317905ba7e4b2d#1eda4691a5d2c8ffc463d48ca2317905ba7e4b2d"
dependencies = [
"prost",
"serde",
@@ -4861,7 +4872,7 @@ checksum = "518ef76f2f87365916b142844c16d8fefd85039bc5699050210a7778ee1cd1de"
[[package]]
name = "log-store"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arc-swap",
"async-stream",
@@ -5123,7 +5134,7 @@ dependencies = [
[[package]]
name = "meta-client"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"async-trait",
@@ -5151,7 +5162,7 @@ dependencies = [
[[package]]
name = "meta-srv"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"anymap",
"api",
@@ -5159,6 +5170,7 @@ dependencies = [
"async-trait",
"catalog",
"chrono",
"client",
"common-base",
"common-catalog",
"common-error",
@@ -5202,7 +5214,7 @@ dependencies = [
[[package]]
name = "meter-core"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-meter.git?rev=f0798c4c648d89f51abe63e870919c75dd463199#f0798c4c648d89f51abe63e870919c75dd463199"
source = "git+https://github.com/GreptimeTeam/greptime-meter.git?rev=abbd357c1e193cd270ea65ee7652334a150b628f#abbd357c1e193cd270ea65ee7652334a150b628f"
dependencies = [
"anymap",
"once_cell",
@@ -5212,7 +5224,7 @@ dependencies = [
[[package]]
name = "meter-macros"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-meter.git?rev=f0798c4c648d89f51abe63e870919c75dd463199#f0798c4c648d89f51abe63e870919c75dd463199"
source = "git+https://github.com/GreptimeTeam/greptime-meter.git?rev=abbd357c1e193cd270ea65ee7652334a150b628f#abbd357c1e193cd270ea65ee7652334a150b628f"
dependencies = [
"meter-core",
]
@@ -5344,7 +5356,7 @@ dependencies = [
[[package]]
name = "mito"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"anymap",
"arc-swap",
@@ -5379,6 +5391,10 @@ dependencies = [
"tokio",
]
[[package]]
name = "mito2"
version = "0.3.2"
[[package]]
name = "moka"
version = "0.9.7"
@@ -5815,7 +5831,7 @@ dependencies = [
[[package]]
name = "object-store"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"anyhow",
"async-trait",
@@ -5983,6 +5999,27 @@ version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "978aa494585d3ca4ad74929863093e87cac9790d81fe7aba2b3dc2890643a0fc"
[[package]]
name = "orc-rust"
version = "0.2.3"
source = "git+https://github.com/WenyXu/orc-rs.git?rev=0319acd32456e403c20f135cc012441a76852605#0319acd32456e403c20f135cc012441a76852605"
dependencies = [
"arrow",
"bytes",
"chrono",
"fallible-streaming-iterator",
"flate2",
"futures",
"futures-util",
"lazy_static",
"paste",
"prost",
"snafu",
"tokio",
"zigzag",
"zstd 0.12.3+zstd.1.5.2",
]
[[package]]
name = "ordered-float"
version = "1.1.1"
@@ -6188,7 +6225,7 @@ dependencies = [
[[package]]
name = "partition"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"async-trait",
@@ -6775,7 +6812,7 @@ dependencies = [
[[package]]
name = "promql"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-recursion",
"async-trait",
@@ -7025,7 +7062,7 @@ dependencies = [
[[package]]
name = "query"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"ahash 0.8.3",
"approx_eq",
@@ -7079,7 +7116,7 @@ dependencies = [
"stats-cli",
"store-api",
"streaming-stats",
"substrait 0.4.0",
"substrait 0.3.2",
"table",
"tokio",
"tokio-stream",
@@ -8255,7 +8292,7 @@ checksum = "d29ab0c6d3fc0ee92fe66e2d99f700eab17a8d57d1c1d3b748380fb20baa78cd"
[[package]]
name = "script"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arrow",
"async-trait",
@@ -8510,7 +8547,7 @@ dependencies = [
[[package]]
name = "servers"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"aide",
"api",
@@ -8598,7 +8635,7 @@ dependencies = [
[[package]]
name = "session"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arc-swap",
"common-catalog",
@@ -8873,7 +8910,7 @@ dependencies = [
[[package]]
name = "sql"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"common-base",
@@ -8905,8 +8942,9 @@ dependencies = [
[[package]]
name = "sqlness"
version = "0.4.3"
source = "git+https://github.com/CeresDB/sqlness.git?rev=dde4b19d7e4a41319d05a0c5bfae5c4422fde14f#dde4b19d7e4a41319d05a0c5bfae5c4422fde14f"
version = "0.5.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0860f149718809371602b42573693e1ed2b1d0aed35fe69e04e4e4e9918d81f7"
dependencies = [
"async-trait",
"derive_builder 0.11.2",
@@ -8919,7 +8957,7 @@ dependencies = [
[[package]]
name = "sqlness-runner"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-trait",
"client",
@@ -9101,7 +9139,7 @@ dependencies = [
[[package]]
name = "storage"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"arc-swap",
"arrow",
@@ -9121,8 +9159,10 @@ dependencies = [
"common-test-util",
"common-time",
"criterion 0.3.6",
"datafusion",
"datafusion-common",
"datafusion-expr",
"datafusion-physical-expr",
"datatypes",
"futures",
"futures-util",
@@ -9152,7 +9192,7 @@ dependencies = [
[[package]]
name = "store-api"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-stream",
"async-trait",
@@ -9267,7 +9307,7 @@ dependencies = [
[[package]]
name = "substrait"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-recursion",
"async-trait",
@@ -9422,7 +9462,7 @@ dependencies = [
[[package]]
name = "table"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"anymap",
"async-trait",
@@ -9458,7 +9498,7 @@ dependencies = [
[[package]]
name = "table-procedure"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"async-trait",
"catalog",
@@ -9551,7 +9591,7 @@ dependencies = [
[[package]]
name = "tests-integration"
version = "0.4.0"
version = "0.3.2"
dependencies = [
"api",
"async-trait",
@@ -11214,6 +11254,15 @@ version = "1.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "2a0956f1ba7c7909bfb66c2e9e4124ab6f6482560f6628b5aaeba39207c9aad9"
[[package]]
name = "zigzag"
version = "0.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "70b40401a28d86ce16a330b863b86fd7dbee4d7c940587ab09ab8c019f9e3fdf"
dependencies = [
"num-traits",
]
[[package]]
name = "zstd"
version = "0.11.2+zstd.1.5.2"


@@ -33,6 +33,7 @@ members = [
"src/meta-client",
"src/meta-srv",
"src/mito",
"src/mito2",
"src/object-store",
"src/partition",
"src/promql",
@@ -50,7 +51,7 @@ members = [
]
[workspace.package]
version = "0.4.0"
version = "0.3.2"
edition = "2021"
license = "Apache-2.0"
@@ -72,7 +73,7 @@ datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "63e52dde9e44cac4b1f6c6e6b6bf6368ba3bd323" }
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "7aeaeaba1e0ca6a5c736b6ab2eb63144ae3d284b" }
greptime-proto = { git = "https://github.com/WenyXu/greptime-proto.git", rev = "1eda4691a5d2c8ffc463d48ca2317905ba7e4b2d" }
itertools = "0.10"
parquet = "40.0"
paste = "1.0"
@@ -88,11 +89,11 @@ tokio-util = { version = "0.7", features = ["io-util", "compat"] }
tonic = { version = "0.9", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
metrics = "0.20"
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "f0798c4c648d89f51abe63e870919c75dd463199" }
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "abbd357c1e193cd270ea65ee7652334a150b628f" }
[workspace.dependencies.meter-macros]
git = "https://github.com/GreptimeTeam/greptime-meter.git"
rev = "f0798c4c648d89f51abe63e870919c75dd463199"
rev = "abbd357c1e193cd270ea65ee7652334a150b628f"
[profile.release]
debug = true


@@ -1,7 +1,7 @@
[build]
pre-build = [
"dpkg --add-architecture $CROSS_DEB_ARCH",
"apt update && apt install -y unzip zlib1g-dev:$CROSS_DEB_ARCH",
"apt update && apt install -y unzip zlib1g-dev zlib1g-dev:$CROSS_DEB_ARCH",
"curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v3.15.8/protoc-3.15.8-linux-x86_64.zip && unzip protoc-3.15.8-linux-x86_64.zip -d /usr/",
"chmod a+x /usr/bin/protoc && chmod -R a+rx /usr/include/google",
]


@@ -114,7 +114,7 @@ async fn write_data(
};
let now = Instant::now();
db.insert(requests).await.unwrap();
let _ = db.insert(requests).await.unwrap();
let elapsed = now.elapsed();
total_rpc_elapsed_ms += elapsed.as_millis();
progress_bar.inc(row_count as _);
@@ -377,19 +377,16 @@ fn create_table_expr() -> CreateTableExpr {
}
fn query_set() -> HashMap<String, String> {
let mut ret = HashMap::new();
ret.insert(
"count_all".to_string(),
format!("SELECT COUNT(*) FROM {TABLE_NAME};"),
);
ret.insert(
"fare_amt_by_passenger".to_string(),
format!("SELECT passenger_count, MIN(fare_amount), MAX(fare_amount), SUM(fare_amount) FROM {TABLE_NAME} GROUP BY passenger_count")
);
ret
HashMap::from([
(
"count_all".to_string(),
format!("SELECT COUNT(*) FROM {TABLE_NAME};"),
),
(
"fare_amt_by_passenger".to_string(),
format!("SELECT passenger_count, MIN(fare_amount), MAX(fare_amount), SUM(fare_amount) FROM {TABLE_NAME} GROUP BY passenger_count"),
)
])
}
async fn do_write(args: &Args, db: &Database) {
@@ -414,7 +411,8 @@ async fn do_write(args: &Args, db: &Database) {
let db = db.clone();
let mpb = multi_progress_bar.clone();
let pb_style = progress_bar_style.clone();
write_jobs.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
let _ = write_jobs
.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
}
}
while write_jobs.join_next().await.is_some() {
@@ -423,7 +421,8 @@ async fn do_write(args: &Args, db: &Database) {
let db = db.clone();
let mpb = multi_progress_bar.clone();
let pb_style = progress_bar_style.clone();
write_jobs.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
let _ = write_jobs
.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
}
}
}

@@ -10,6 +10,8 @@ rpc_addr = "127.0.0.1:3001"
rpc_hostname = "127.0.0.1"
# The number of gRPC server worker threads, 8 by default.
rpc_runtime_size = 8
# Interval for sending heartbeat messages to the Metasrv in milliseconds, 5000 by default.
heartbeat_interval_millis = 5000
# Metasrv client options.
[meta_client_options]

@@ -1,10 +1,15 @@
# Node running mode, see `standalone.example.toml`.
mode = "distributed"
# Interval for sending heartbeat messages to the Metasrv in milliseconds, 5000 by default.
heartbeat_interval_millis = 5000
# Retry interval for sending heartbeat messages in milliseconds, 5000 by default.
retry_interval_millis = 5000
# HTTP server options, see `standalone.example.toml`.
[http_options]
addr = "127.0.0.1:4000"
timeout = "30s"
body_limit = "64MB"
# gRPC server options, see `standalone.example.toml`.
[grpc_options]

@@ -9,6 +9,9 @@ enable_memory_catalog = false
addr = "127.0.0.1:4000"
# HTTP request timeout, 30s by default.
timeout = "30s"
# HTTP request body limit, 64MB by default.
# The following units are supported: B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, PiB.
body_limit = "64MB"
# gRPC server options.
[grpc_options]

@@ -8,6 +8,7 @@ RUN apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
git \
build-essential \
pkg-config \
python3 \

@@ -0,0 +1,29 @@
FROM centos:7
ENV LANG en_US.utf8
WORKDIR /greptimedb
RUN sed -e 's|^mirrorlist=|#mirrorlist=|g' \
-e 's|^#baseurl=http://mirror.centos.org/centos|baseurl=http://mirrors.tuna.tsinghua.edu.cn/centos|g' \
-i.bak \
/etc/yum.repos.d/CentOS-*.repo
# Install dependencies
RUN ulimit -n 1024000 && yum groupinstall -y 'Development Tools'
RUN yum install -y epel-release \
openssl \
openssl-devel \
centos-release-scl \
rh-python38 \
rh-python38-python-devel
# Install protoc
RUN curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v3.15.8/protoc-3.15.8-linux-x86_64.zip
RUN unzip protoc-3.15.8-linux-x86_64.zip -d /usr/local/
# Install Rust
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /opt/rh/rh-python38/root/usr/bin:/usr/local/bin:/root/.cargo/bin/:$PATH
CMD ["cargo", "build", "--release"]

@@ -8,6 +8,7 @@ RUN apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
git \
build-essential \
pkg-config \
wget

@@ -192,11 +192,18 @@ pub enum Error {
source: BoxedError,
},
#[snafu(display(
"Failed to upgrade weak catalog manager reference. location: {}",
location
))]
UpgradeWeakCatalogManagerRef { location: Location },
#[snafu(display("Failed to execute system catalog table scan, source: {}", source))]
SystemCatalogTableScanExec {
location: Location,
source: common_query::error::Error,
},
#[snafu(display("Cannot parse catalog value, source: {}", source))]
InvalidCatalogValue {
location: Location,
@@ -236,6 +243,12 @@ pub enum Error {
#[snafu(display("A generic error has occurred, msg: {}", msg))]
Generic { msg: String, location: Location },
#[snafu(display("Table metadata manager error: {}", source))]
TableMetadataManager {
source: common_meta::error::Error,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -256,7 +269,9 @@ impl ErrorExt for Error {
| Error::EmptyValue { .. }
| Error::ValueDeserialize { .. } => StatusCode::StorageUnavailable,
Error::Generic { .. } | Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::Generic { .. }
| Error::SystemCatalogTypeMismatch { .. }
| Error::UpgradeWeakCatalogManagerRef { .. } => StatusCode::Internal,
Error::ReadSystemCatalog { source, .. } | Error::CreateRecordBatch { source, .. } => {
source.status_code()
@@ -289,6 +304,7 @@ impl ErrorExt for Error {
Error::Unimplemented { .. } | Error::NotSupported { .. } => StatusCode::Unsupported,
Error::QueryAccessDenied { .. } => StatusCode::AccessDenied,
Error::Datafusion { .. } => StatusCode::EngineExecuteQuery,
Error::TableMetadataManager { source, .. } => source.status_code(),
}
}

@@ -67,6 +67,7 @@ pub fn build_schema_prefix(catalog_name: impl AsRef<str>) -> String {
format!("{SCHEMA_KEY_PREFIX}-{}-", catalog_name.as_ref())
}
/// Global table info has only one key across all datanodes, so it does not have a `node_id` field.
pub fn build_table_global_prefix(
catalog_name: impl AsRef<str>,
schema_name: impl AsRef<str>,
@@ -78,6 +79,7 @@ pub fn build_table_global_prefix(
)
}
/// Regional table info varies between datanodes, so it contains a `node_id` field.
pub fn build_table_regional_prefix(
catalog_name: impl AsRef<str>,
schema_name: impl AsRef<str>,
@@ -201,6 +203,9 @@ impl TableRegionalKey {
/// region ids allocated by metasrv.
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct TableRegionalValue {
// We can remove the `Option` from the table id once all regional values
// stored in meta have table ids.
pub table_id: Option<TableId>,
pub version: TableVersion,
pub regions_ids: Vec<u32>,
pub engine_name: Option<String>,
@@ -387,6 +392,6 @@ mod tests {
#[test]
fn test_table_global_value_compatibility() {
let s = r#"{"node_id":1,"regions_id_map":{"1":[0]},"table_info":{"ident":{"table_id":1098,"version":1},"name":"container_cpu_limit","desc":"Created on insertion","catalog_name":"greptime","schema_name":"dd","meta":{"schema":{"column_schemas":[{"name":"container_id","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"container_name","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"docker_image","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"host","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"image_name","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"image_tag","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"interval","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"runtime","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"short_image","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"type","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"dd_value","data_type":{"Float64":{}},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"ts","data_type":{"Timestamp":{"Millisecond":null}},"is_nullable":false,"is_time_index":true,"default_constraint":null,"metadata":{"greptime:time_index":"true"}},{"name":"git.repository_url","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}}],"timestamp_index":
11,"version":1},"primary_key_indices":[0,1,2,3,4,5,6,7,8,9,12],"value_indices":[10,11],"engine":"mito","next_column_id":12,"region_numbers":[],"engine_options":{},"options":{},"created_on":"1970-01-01T00:00:00Z"},"table_type":"Base"}}"#;
TableGlobalValue::parse(s).unwrap();
let _ = TableGlobalValue::parse(s).unwrap();
}
}

@@ -16,12 +16,10 @@ mod columns;
mod tables;
use std::any::Any;
use std::sync::Arc;
use std::sync::{Arc, Weak};
use async_trait::async_trait;
use common_error::prelude::BoxedError;
use common_query::physical_plan::PhysicalPlanRef;
use common_query::prelude::Expr;
use common_recordbatch::{RecordBatchStreamAdaptor, SendableRecordBatchStream};
use datatypes::schema::SchemaRef;
use futures_util::StreamExt;
@@ -33,46 +31,35 @@ use table::{Result as TableResult, Table, TableRef};
use self::columns::InformationSchemaColumns;
use crate::error::Result;
use crate::information_schema::tables::InformationSchemaTables;
use crate::{CatalogProviderRef, SchemaProvider};
use crate::CatalogManager;
const TABLES: &str = "tables";
const COLUMNS: &str = "columns";
pub(crate) struct InformationSchemaProvider {
pub struct InformationSchemaProvider {
catalog_name: String,
catalog_provider: CatalogProviderRef,
tables: Vec<String>,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaProvider {
pub(crate) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
pub fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
catalog_name,
catalog_provider,
tables: vec![TABLES.to_string(), COLUMNS.to_string()],
catalog_manager,
}
}
}
#[async_trait]
impl SchemaProvider for InformationSchemaProvider {
fn as_any(&self) -> &dyn Any {
self
}
async fn table_names(&self) -> Result<Vec<String>> {
Ok(self.tables.clone())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
impl InformationSchemaProvider {
pub fn table(&self, name: &str) -> Result<Option<TableRef>> {
let stream_builder = match name.to_ascii_lowercase().as_ref() {
TABLES => Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
self.catalog_manager.clone(),
)) as _,
COLUMNS => Arc::new(InformationSchemaColumns::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
self.catalog_manager.clone(),
)) as _,
_ => {
return Ok(None);
@@ -81,11 +68,6 @@ impl SchemaProvider for InformationSchemaProvider {
Ok(Some(Arc::new(InformationTable::new(stream_builder))))
}
async fn table_exist(&self, name: &str) -> Result<bool> {
let normalized_name = name.to_ascii_lowercase();
Ok(self.tables.contains(&normalized_name))
}
}
// TODO(ruihang): make it a more generic trait:
@@ -120,20 +102,6 @@ impl Table for InformationTable {
unreachable!("Should not call table_info() of InformationTable directly")
}
/// Scan the table and returns a SendableRecordBatchStream.
async fn scan(
&self,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
// limit can be used to reduce the amount scanned
// from the datasource as a performance optimization.
// If set, it contains the amount of rows needed by the `LogicalPlan`,
// The datasource should return *at least* this number of rows if available.
_limit: Option<usize>,
) -> TableResult<PhysicalPlanRef> {
unimplemented!()
}
async fn scan_to_stream(&self, request: ScanRequest) -> TableResult<SendableRecordBatchStream> {
let projection = request.projection;
let projected_schema = if let Some(projection) = &projection {

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::{
@@ -29,16 +29,18 @@ use datatypes::prelude::{ConcreteDataType, DataType};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{StringVectorBuilder, VectorRef};
use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
use super::InformationStreamBuilder;
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::CatalogProviderRef;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::CatalogManager;
pub(super) struct InformationSchemaColumns {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_manager: Weak<dyn CatalogManager>,
}
const TABLE_CATALOG: &str = "table_catalog";
@@ -49,7 +51,7 @@ const DATA_TYPE: &str = "data_type";
const SEMANTIC_TYPE: &str = "semantic_type";
impl InformationSchemaColumns {
pub(super) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
let schema = Arc::new(Schema::new(vec![
ColumnSchema::new(TABLE_CATALOG, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
@@ -61,7 +63,7 @@ impl InformationSchemaColumns {
Self {
schema,
catalog_name,
catalog_provider,
catalog_manager,
}
}
@@ -69,7 +71,7 @@ impl InformationSchemaColumns {
InformationSchemaColumnsBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_provider.clone(),
self.catalog_manager.clone(),
)
}
}
@@ -103,7 +105,7 @@ impl InformationStreamBuilder for InformationSchemaColumns {
struct InformationSchemaColumnsBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_manager: Weak<dyn CatalogManager>,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
@@ -114,11 +116,15 @@ struct InformationSchemaColumnsBuilder {
}
impl InformationSchemaColumnsBuilder {
fn new(schema: SchemaRef, catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_provider,
catalog_manager,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
@@ -131,11 +137,23 @@ impl InformationSchemaColumnsBuilder {
/// Construct the `information_schema.columns` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
for schema_name in self.catalog_provider.schema_names().await? {
let Some(schema) = self.catalog_provider.schema(&schema_name).await? else { continue };
for table_name in schema.table_names().await? {
let Some(table) = schema.table(&table_name).await? else { continue };
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
if !catalog_manager
.schema_exist(&catalog_name, &schema_name)
.await?
{
continue;
}
for table_name in catalog_manager
.table_names(&catalog_name, &schema_name)
.await?
{
let Some(table) = catalog_manager.table(&catalog_name, &schema_name, &table_name).await? else { continue };
let keys = &table.table_info().meta.primary_key_indices;
let schema = table.schema();
for (idx, column) in schema.column_schemas().iter().enumerate() {

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
@@ -26,21 +26,23 @@ use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatc
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder};
use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
use table::metadata::TableType;
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::InformationStreamBuilder;
use crate::CatalogProviderRef;
use crate::CatalogManager;
pub(super) struct InformationSchemaTables {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaTables {
pub(super) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
let schema = Arc::new(Schema::new(vec![
ColumnSchema::new("table_catalog", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_schema", ConcreteDataType::string_datatype(), false),
@@ -52,7 +54,7 @@ impl InformationSchemaTables {
Self {
schema,
catalog_name,
catalog_provider,
catalog_manager,
}
}
@@ -60,7 +62,7 @@ impl InformationSchemaTables {
InformationSchemaTablesBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_provider.clone(),
self.catalog_manager.clone(),
)
}
}
@@ -97,7 +99,7 @@ impl InformationStreamBuilder for InformationSchemaTables {
struct InformationSchemaTablesBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_manager: Weak<dyn CatalogManager>,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
@@ -108,11 +110,15 @@ struct InformationSchemaTablesBuilder {
}
impl InformationSchemaTablesBuilder {
fn new(schema: SchemaRef, catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_provider,
catalog_manager,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
@@ -125,15 +131,27 @@ impl InformationSchemaTablesBuilder {
/// Construct the `information_schema.tables` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
for schema_name in self.catalog_provider.schema_names().await? {
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
if schema_name == INFORMATION_SCHEMA_NAME {
continue;
}
if !catalog_manager
.schema_exist(&catalog_name, &schema_name)
.await?
{
continue;
}
let Some(schema) = self.catalog_provider.schema(&schema_name).await? else { continue };
for table_name in schema.table_names().await? {
let Some(table) = schema.table(&table_name).await? else { continue };
for table_name in catalog_manager
.table_names(&catalog_name, &schema_name)
.await?
{
let Some(table) = catalog_manager.table(&catalog_name, &schema_name, &table_name).await? else { continue };
let table_info = table.table_info();
self.add_table(
&catalog_name,

@@ -14,6 +14,7 @@
#![feature(trait_upcasting)]
#![feature(assert_matches)]
#![feature(try_blocks)]
use std::any::Any;
use std::collections::HashMap;
@@ -29,65 +30,46 @@ use table::requests::CreateTableRequest;
use table::TableRef;
use crate::error::{CreateTableSnafu, Result};
pub use crate::schema::{SchemaProvider, SchemaProviderRef};
pub mod error;
pub mod helper;
pub(crate) mod information_schema;
pub mod information_schema;
pub mod local;
mod metrics;
pub mod remote;
pub mod schema;
pub mod system;
pub mod table_source;
pub mod tables;
/// Represents a catalog, comprising a number of named schemas.
#[async_trait::async_trait]
pub trait CatalogProvider: Sync + Send {
/// Returns the catalog provider as [`Any`](std::any::Any)
/// so that it can be downcast to a specific implementation.
fn as_any(&self) -> &dyn Any;
/// Retrieves the list of available schema names in this catalog.
async fn schema_names(&self) -> Result<Vec<String>>;
/// Registers schema to this catalog.
async fn register_schema(
&self,
name: String,
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>>;
/// Retrieves a specific schema from the catalog by name, provided it exists.
async fn schema(&self, name: &str) -> Result<Option<SchemaProviderRef>>;
}
pub type CatalogProviderRef = Arc<dyn CatalogProvider>;
#[async_trait::async_trait]
pub trait CatalogManager: Send + Sync {
fn as_any(&self) -> &dyn Any;
/// Starts a catalog manager.
async fn start(&self) -> Result<()>;
async fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>>;
/// Registers a table within given catalog/schema to catalog manager,
/// returns whether the table registered.
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool>;
/// Deregisters a table within given catalog/schema to catalog manager,
/// returns whether the table deregistered.
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool>;
/// Registers a catalog to catalog manager, returns whether the catalog exist before.
async fn register_catalog(&self, name: String) -> Result<bool>;
/// Register a schema with catalog name and schema name. Returns whether the
/// schema is registered.
///
/// # Errors
///
/// This method will/should fail if the catalog does not exist
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool>;
/// Registers a table within the given catalog/schema to the catalog manager,
/// returns whether the table is registered.
///
/// # Errors
///
/// This method will/should fail if the catalog or schema does not exist
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool>;
/// Deregisters a table within given catalog/schema to catalog manager
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<()>;
/// Rename a table to [RenameTableRequest::new_table_name], returns whether the table is renamed.
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool>;
@@ -97,9 +79,15 @@ pub trait CatalogManager: Send + Sync {
async fn catalog_names(&self) -> Result<Vec<String>>;
async fn catalog(&self, catalog: &str) -> Result<Option<CatalogProviderRef>>;
async fn schema_names(&self, catalog: &str) -> Result<Vec<String>>;
async fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>>;
async fn table_names(&self, catalog: &str, schema: &str) -> Result<Vec<String>>;
async fn catalog_exist(&self, catalog: &str) -> Result<bool>;
async fn schema_exist(&self, catalog: &str, schema: &str) -> Result<bool>;
async fn table_exist(&self, catalog: &str, schema: &str, table: &str) -> Result<bool>;
/// Returns the table by catalog, schema and table name.
async fn table(
@@ -108,8 +96,6 @@ pub trait CatalogManager: Send + Sync {
schema: &str,
table_name: &str,
) -> Result<Option<TableRef>>;
fn as_any(&self) -> &dyn Any;
}
pub type CatalogManagerRef = Arc<dyn CatalogManager>;
@@ -169,14 +155,6 @@ pub struct RegisterSchemaRequest {
pub schema: String,
}
pub trait CatalogProviderFactory {
fn create(&self, catalog_name: String) -> CatalogProviderRef;
}
pub trait SchemaProviderFactory {
fn create(&self, catalog_name: String, schema_name: String) -> SchemaProviderRef;
}
pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
manager: &'a M,
engine: TableEngineRef,
@@ -202,7 +180,7 @@ pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
table_name,
),
})?;
manager
let _ = manager
.register_table(RegisterTableRequest {
catalog: catalog_name.clone(),
schema: schema_name.clone(),
@@ -233,15 +211,11 @@ pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> (u64, Vec<Reg
let Ok(catalog_names) = catalog_manager.catalog_names().await else { return (region_number, region_stats) };
for catalog_name in catalog_names {
let Ok(Some(catalog)) = catalog_manager.catalog(&catalog_name).await else { continue };
let Ok(schema_names) = catalog.schema_names().await else { continue };
let Ok(schema_names) = catalog_manager.schema_names(&catalog_name).await else { continue };
for schema_name in schema_names {
let Ok(Some(schema)) = catalog.schema(&schema_name).await else { continue };
let Ok(table_names) = schema.table_names().await else { continue };
let Ok(table_names) = catalog_manager.table_names(&catalog_name, &schema_name).await else { continue };
for table_name in table_names {
let Ok(Some(table)) = schema.table(&table_name).await else { continue };
let Ok(Some(table)) = catalog_manager.table(&catalog_name, &schema_name, &table_name).await else { continue };
let region_numbers = &table.table_info().meta.region_numbers;
region_number += region_numbers.len() as u64;

@@ -16,6 +16,4 @@ pub mod manager;
pub mod memory;
pub use manager::LocalCatalogManager;
pub use memory::{
new_memory_catalog_list, MemoryCatalogManager, MemoryCatalogProvider, MemorySchemaProvider,
};
pub use memory::{new_memory_catalog_manager, MemoryCatalogManager};

@@ -18,7 +18,8 @@ use std::sync::Arc;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MIN_USER_TABLE_ID,
MITO_ENGINE, SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
MITO_ENGINE, NUMBERS_TABLE_ID, SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_ID,
SYSTEM_CATALOG_TABLE_NAME,
};
use common_catalog::format_full_table_name;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
@@ -32,7 +33,7 @@ use table::engine::manager::TableEngineManagerRef;
use table::engine::EngineContext;
use table::metadata::TableId;
use table::requests::OpenTableRequest;
use table::table::numbers::NumbersTable;
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
use table::table::TableIdProvider;
use table::TableRef;
@@ -42,16 +43,16 @@ use crate::error::{
SystemCatalogTypeMismatchSnafu, TableEngineNotFoundSnafu, TableExistsSnafu, TableNotExistSnafu,
TableNotFoundSnafu,
};
use crate::local::memory::{MemoryCatalogManager, MemoryCatalogProvider, MemorySchemaProvider};
use crate::information_schema::InformationSchemaProvider;
use crate::local::memory::MemoryCatalogManager;
use crate::system::{
decode_system_catalog, Entry, SystemCatalogTable, TableEntry, ENTRY_TYPE_INDEX, KEY_INDEX,
VALUE_INDEX,
};
use crate::tables::SystemCatalog;
use crate::{
handle_system_table_request, CatalogManager, CatalogProviderRef, DeregisterTableRequest,
handle_system_table_request, CatalogManager, CatalogManagerRef, DeregisterTableRequest,
RegisterSchemaRequest, RegisterSystemTableRequest, RegisterTableRequest, RenameTableRequest,
SchemaProviderRef,
};
/// A `CatalogManager` consists of a system catalog and a set of user catalogs.
@@ -74,11 +75,11 @@ impl LocalCatalogManager {
engine_name: MITO_ENGINE,
})?;
let table = SystemCatalogTable::new(engine.clone()).await?;
let memory_catalog_list = crate::local::memory::new_memory_catalog_list()?;
let memory_catalog_manager = crate::local::memory::new_memory_catalog_manager()?;
let system_catalog = Arc::new(SystemCatalog::new(table));
Ok(Self {
system: system_catalog,
catalogs: memory_catalog_list,
catalogs: memory_catalog_manager,
engine_manager,
next_table_id: AtomicU32::new(MIN_USER_TABLE_ID),
init_lock: Mutex::new(false),
@@ -116,26 +117,47 @@ impl LocalCatalogManager {
}
async fn init_system_catalog(&self) -> Result<()> {
let system_schema = Arc::new(MemorySchemaProvider::new());
system_schema.register_table_sync(
SYSTEM_CATALOG_TABLE_NAME.to_string(),
self.system.information_schema.system.clone(),
)?;
let system_catalog = Arc::new(MemoryCatalogProvider::new());
system_catalog.register_schema_sync(INFORMATION_SCHEMA_NAME.to_string(), system_schema)?;
self.catalogs
.register_catalog_sync(SYSTEM_CATALOG_NAME.to_string(), system_catalog)?;
// register SystemCatalogTable
let _ = self
.catalogs
.register_catalog_sync(SYSTEM_CATALOG_NAME.to_string())?;
let _ = self.catalogs.register_schema_sync(RegisterSchemaRequest {
catalog: SYSTEM_CATALOG_NAME.to_string(),
schema: INFORMATION_SCHEMA_NAME.to_string(),
})?;
let register_table_req = RegisterTableRequest {
catalog: SYSTEM_CATALOG_NAME.to_string(),
schema: INFORMATION_SCHEMA_NAME.to_string(),
table_name: SYSTEM_CATALOG_TABLE_NAME.to_string(),
table_id: SYSTEM_CATALOG_TABLE_ID,
table: self.system.information_schema.system.clone(),
};
let _ = self.catalogs.register_table(register_table_req).await?;
let default_catalog = Arc::new(MemoryCatalogProvider::new());
let default_schema = Arc::new(MemorySchemaProvider::new());
// register default catalog and default schema
let _ = self
.catalogs
.register_catalog_sync(DEFAULT_CATALOG_NAME.to_string())?;
let _ = self.catalogs.register_schema_sync(RegisterSchemaRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
})?;
// Add numbers table for test
let table = Arc::new(NumbersTable::default());
default_schema.register_table_sync("numbers".to_string(), table)?;
let numbers_table = Arc::new(NumbersTable::default());
let register_number_table_req = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: NUMBERS_TABLE_NAME.to_string(),
table_id: NUMBERS_TABLE_ID,
table: numbers_table,
};
let _ = self
.catalogs
.register_table(register_number_table_req)
.await?;
default_catalog.register_schema_sync(DEFAULT_SCHEMA_NAME.to_string(), default_schema)?;
self.catalogs
.register_catalog_sync(DEFAULT_CATALOG_NAME.to_string(), default_catalog)?;
Ok(())
}
@@ -207,30 +229,26 @@ impl LocalCatalogManager {
for entry in entries {
match entry {
Entry::Catalog(c) => {
self.catalogs.register_catalog_if_absent(
c.catalog_name.clone(),
Arc::new(MemoryCatalogProvider::new()),
);
let _ = self
.catalogs
.register_catalog_if_absent(c.catalog_name.clone());
info!("Register catalog: {}", c.catalog_name);
}
Entry::Schema(s) => {
self.catalogs
.catalog(&s.catalog_name)
.await?
.context(CatalogNotFoundSnafu {
catalog_name: &s.catalog_name,
})?
.register_schema(
s.schema_name.clone(),
Arc::new(MemorySchemaProvider::new()),
)
.await?;
let req = RegisterSchemaRequest {
catalog: s.catalog_name.clone(),
schema: s.schema_name.clone(),
};
let _ = self.catalogs.register_schema_sync(req)?;
info!("Registered schema: {:?}", s);
}
Entry::Table(t) => {
max_table_id = max_table_id.max(t.table_id);
if t.is_deleted {
continue;
}
self.open_and_register_table(&t).await?;
info!("Registered table: {:?}", t);
max_table_id = max_table_id.max(t.table_id);
}
}
}
@@ -245,23 +263,11 @@ impl LocalCatalogManager {
}
async fn open_and_register_table(&self, t: &TableEntry) -> Result<()> {
let catalog =
self.catalogs
.catalog(&t.catalog_name)
.await?
.context(CatalogNotFoundSnafu {
catalog_name: &t.catalog_name,
})?;
let schema = catalog
.schema(&t.schema_name)
.await?
.context(SchemaNotFoundSnafu {
catalog: &t.catalog_name,
schema: &t.schema_name,
})?;
self.check_catalog_schema_exist(&t.catalog_name, &t.schema_name)
.await?;
let context = EngineContext {};
let request = OpenTableRequest {
let open_request = OpenTableRequest {
catalog_name: t.catalog_name.clone(),
schema_name: t.schema_name.clone(),
table_name: t.table_name.clone(),
@@ -275,8 +281,8 @@ impl LocalCatalogManager {
engine_name: &t.engine,
})?;
let option = engine
.open_table(&context, request)
let table_ref = engine
.open_table(&context, open_request)
.await
.with_context(|_| OpenTableSnafu {
table_info: format!(
@@ -291,7 +297,48 @@ impl LocalCatalogManager {
),
})?;
schema.register_table(t.table_name.clone(), option).await?;
let register_request = RegisterTableRequest {
catalog: t.catalog_name.clone(),
schema: t.schema_name.clone(),
table_name: t.table_name.clone(),
table_id: t.table_id,
table: table_ref,
};
let _ = self.catalogs.register_table(register_request).await?;
Ok(())
}
async fn check_state(&self) -> Result<()> {
let started = self.init_lock.lock().await;
ensure!(
*started,
IllegalManagerStateSnafu {
msg: "Catalog manager not started",
}
);
Ok(())
}
async fn check_catalog_schema_exist(
&self,
catalog_name: &str,
schema_name: &str,
) -> Result<()> {
if !self.catalogs.catalog_exist(catalog_name).await? {
return CatalogNotFoundSnafu { catalog_name }.fail()?;
}
if !self
.catalogs
.schema_exist(catalog_name, schema_name)
.await?
{
return SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
}
.fail()?;
}
Ok(())
}
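The extracted `check_catalog_schema_exist` helper replaces two lookup-then-context chains with fail-fast existence checks. A minimal stand-alone sketch of the same pattern over plain maps (the `Catalogs` struct and `CatalogError` enum here are illustrative stand-ins, not the crate's actual types):

```rust
use std::collections::{HashMap, HashSet};

#[derive(Debug, PartialEq)]
enum CatalogError {
    CatalogNotFound(String),
    SchemaNotFound(String, String),
}

// Hypothetical stand-in for the manager's catalog/schema state.
struct Catalogs {
    schemas: HashMap<String, HashSet<String>>,
}

impl Catalogs {
    /// Fail fast with a specific error instead of threading `Option`
    /// contexts through two provider lookups.
    fn check_catalog_schema_exist(&self, catalog: &str, schema: &str) -> Result<(), CatalogError> {
        let Some(schemas) = self.schemas.get(catalog) else {
            return Err(CatalogError::CatalogNotFound(catalog.to_string()));
        };
        if !schemas.contains(schema) {
            return Err(CatalogError::SchemaNotFound(
                catalog.to_string(),
                schema.to_string(),
            ));
        }
        Ok(())
    }
}
```

Centralizing the check lets `register_table`, `rename_table`, and `register_schema` share one code path for the "catalog or schema missing" errors.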
}
@@ -312,34 +359,21 @@ impl CatalogManager for LocalCatalogManager {
}
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool> {
let started = self.init_lock.lock().await;
self.check_state().await?;
ensure!(
*started,
IllegalManagerStateSnafu {
msg: "Catalog manager not started",
}
);
let catalog_name = request.catalog.clone();
let schema_name = request.schema.clone();
let catalog_name = &request.catalog;
let schema_name = &request.schema;
let catalog = self
.catalogs
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
self.check_catalog_schema_exist(&catalog_name, &schema_name)
.await?;
{
let _lock = self.register_lock.lock().await;
if let Some(existing) = schema.table(&request.table_name).await? {
if let Some(existing) = self
.catalogs
.table(&request.catalog, &request.schema, &request.table_name)
.await?
{
if existing.table_info().ident.table_id != request.table_id {
error!(
"Unexpected table register request: {:?}, existing: {:?}",
@@ -348,8 +382,8 @@ impl CatalogManager for LocalCatalogManager {
);
return TableExistsSnafu {
table: format_full_table_name(
catalog_name,
schema_name,
&catalog_name,
&schema_name,
&request.table_name,
),
}
@@ -358,24 +392,25 @@ impl CatalogManager for LocalCatalogManager {
// Try to register table with same table id, just ignore.
Ok(false)
} else {
let engine = request.table.table_info().meta.engine.to_string();
// table does not exist
self.system
let engine = request.table.table_info().meta.engine.to_string();
let table_name = request.table_name.clone();
let table_id = request.table_id;
let _ = self.catalogs.register_table(request).await?;
let _ = self
.system
.register_table(
catalog_name.clone(),
schema_name.clone(),
request.table_name.clone(),
request.table_id,
table_name,
table_id,
engine,
)
.await?;
schema
.register_table(request.table_name, request.table)
.await?;
increment_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(catalog_name, schema_name)],
&[crate::metrics::db_label(&catalog_name, &schema_name)],
);
Ok(true)
}
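The branch above runs entirely under `register_lock`: a retry with the same table id is treated as idempotent (`Ok(false)`), a different id for the same name is an error, and only a genuinely new name is registered. A minimal sketch of that check-then-insert discipline with a std `Mutex` (the `Registry` type and `u32` table-id stand-in are illustrative):

```rust
use std::collections::HashMap;
use std::sync::Mutex;

struct Registry {
    // The lock serializes check-then-insert, so two concurrent registrations
    // of the same name cannot both observe "absent" and both insert.
    tables: Mutex<HashMap<String, u32>>,
}

impl Registry {
    /// Ok(true) if newly registered, Ok(false) on an identical retry,
    /// Err(()) on a conflicting table id for an existing name.
    fn register(&self, name: &str, table_id: u32) -> Result<bool, ()> {
        let mut tables = self.tables.lock().unwrap();
        match tables.get(name).copied() {
            Some(existing) if existing == table_id => Ok(false), // idempotent retry
            Some(_) => Err(()),                                  // id conflict
            None => {
                tables.insert(name.to_string(), table_id);
                Ok(true)
            }
        }
    }
}
```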
@@ -383,41 +418,27 @@ impl CatalogManager for LocalCatalogManager {
}
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool> {
let started = self.init_lock.lock().await;
ensure!(
*started,
IllegalManagerStateSnafu {
msg: "Catalog manager not started",
}
);
self.check_state().await?;
let catalog_name = &request.catalog;
let schema_name = &request.schema;
let catalog = self
.catalogs
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
let _lock = self.register_lock.lock().await;
self.check_catalog_schema_exist(catalog_name, schema_name)
.await?;
ensure!(
!schema.table_exist(&request.new_table_name).await?,
self.catalogs
.table(catalog_name, schema_name, &request.new_table_name)
.await?
.is_none(),
TableExistsSnafu {
table: &request.new_table_name
}
);
let old_table = schema
.table(&request.table_name)
let _lock = self.register_lock.lock().await;
let old_table = self
.catalogs
.table(catalog_name, schema_name, &request.table_name)
.await?
.context(TableNotExistSnafu {
table: &request.table_name,
@@ -425,7 +446,8 @@ impl CatalogManager for LocalCatalogManager {
let engine = old_table.table_info().meta.engine.to_string();
// rename table in system catalog
self.system
let _ = self
.system
.register_table(
catalog_name.clone(),
schema_name.clone(),
@@ -435,18 +457,11 @@ impl CatalogManager for LocalCatalogManager {
)
.await?;
let renamed = schema
.rename_table(&request.table_name, request.new_table_name.clone())
.await
.is_ok();
Ok(renamed)
self.catalogs.rename_table(request).await
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
{
let started = *self.init_lock.lock().await;
ensure!(started, IllegalManagerStateSnafu { msg: "not started" });
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<()> {
self.check_state().await?;
{
let _ = self.register_lock.lock().await;
@@ -473,52 +488,40 @@ impl CatalogManager for LocalCatalogManager {
}
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool> {
let started = self.init_lock.lock().await;
ensure!(
*started,
IllegalManagerStateSnafu {
msg: "Catalog manager not started",
}
);
self.check_state().await?;
let catalog_name = &request.catalog;
let schema_name = &request.schema;
let catalog = self
.catalogs
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
if !self.catalogs.catalog_exist(catalog_name).await? {
return CatalogNotFoundSnafu { catalog_name }.fail()?;
}
{
let _lock = self.register_lock.lock().await;
ensure!(
catalog.schema(schema_name).await?.is_none(),
!self
.catalogs
.schema_exist(catalog_name, schema_name)
.await?,
SchemaExistsSnafu {
schema: schema_name,
}
);
self.system
.register_schema(request.catalog, schema_name.clone())
let _ = self
.system
.register_schema(request.catalog.clone(), schema_name.clone())
.await?;
catalog
.register_schema(request.schema, Arc::new(MemorySchemaProvider::new()))
.await?;
Ok(true)
self.catalogs.register_schema_sync(request)
}
}
async fn register_system_table(&self, request: RegisterSystemTableRequest) -> Result<()> {
self.check_state().await?;
let catalog_name = request.create_table_request.catalog_name.clone();
let schema_name = request.create_table_request.schema_name.clone();
ensure!(
!*self.init_lock.lock().await,
IllegalManagerStateSnafu {
msg: "Catalog manager already started",
}
);
let mut sys_table_requests = self.system_table_requests.lock().await;
sys_table_requests.push(request);
increment_gauge!(
@@ -529,15 +532,8 @@ impl CatalogManager for LocalCatalogManager {
Ok(())
}
async fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>> {
self.catalogs
.catalog(catalog)
.await?
.context(CatalogNotFoundSnafu {
catalog_name: catalog,
})?
.schema(schema)
.await
async fn schema_exist(&self, catalog: &str, schema: &str) -> Result<bool> {
self.catalogs.schema_exist(catalog, schema).await
}
async fn table(
@@ -546,39 +542,44 @@ impl CatalogManager for LocalCatalogManager {
schema_name: &str,
table_name: &str,
) -> Result<Option<TableRef>> {
let catalog = self
.catalogs
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
schema.table(table_name).await
if schema_name == INFORMATION_SCHEMA_NAME {
let manager: CatalogManagerRef = self.catalogs.clone() as _;
let provider =
InformationSchemaProvider::new(catalog_name.to_string(), Arc::downgrade(&manager));
return provider.table(table_name);
}
self.catalogs
.table(catalog_name, schema_name, table_name)
.await
}
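The `information_schema` branch hands the provider `Arc::downgrade(&manager)` rather than a strong `Arc`, so the provider does not keep the manager alive in a reference cycle. A minimal sketch of that ownership shape (the `Manager`/`Provider` names are illustrative):

```rust
use std::sync::{Arc, Weak};

struct Manager {
    name: String,
}

// Holds only a weak handle back to the manager, mirroring how
// InformationSchemaProvider takes Arc::downgrade(&manager).
struct Provider {
    manager: Weak<Manager>,
}

impl Provider {
    fn manager_name(&self) -> Option<String> {
        // upgrade() returns None once the manager has been dropped,
        // so the provider can never outlive-and-resurrect it.
        self.manager.upgrade().map(|m| m.name.clone())
    }
}
```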
async fn catalog(&self, catalog: &str) -> Result<Option<CatalogProviderRef>> {
async fn catalog_exist(&self, catalog: &str) -> Result<bool> {
if catalog.eq_ignore_ascii_case(SYSTEM_CATALOG_NAME) {
Ok(Some(self.system.clone()))
Ok(true)
} else {
self.catalogs.catalog(catalog).await
self.catalogs.catalog_exist(catalog).await
}
}
async fn table_exist(&self, catalog: &str, schema: &str, table: &str) -> Result<bool> {
self.catalogs.table_exist(catalog, schema, table).await
}
async fn catalog_names(&self) -> Result<Vec<String>> {
self.catalogs.catalog_names().await
}
async fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>> {
self.catalogs.register_catalog(name, catalog).await
async fn schema_names(&self, catalog_name: &str) -> Result<Vec<String>> {
self.catalogs.schema_names(catalog_name).await
}
async fn table_names(&self, catalog_name: &str, schema_name: &str) -> Result<Vec<String>> {
self.catalogs.table_names(catalog_name, schema_name).await
}
async fn register_catalog(&self, name: String) -> Result<bool> {
self.catalogs.register_catalog(name).await
}
fn as_any(&self) -> &dyn Any {
@@ -604,6 +605,7 @@ mod tests {
table_name: "T1".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
is_deleted: false,
}),
Entry::Catalog(CatalogEntry {
catalog_name: "C2".to_string(),
@@ -625,6 +627,7 @@ mod tests {
table_name: "T2".to_string(),
table_id: 2,
engine: MITO_ENGINE.to_string(),
is_deleted: false,
}),
];
let res = LocalCatalogManager::sort_entries(vec);


@@ -18,29 +18,27 @@ use std::collections::HashMap;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, RwLock};
use async_trait::async_trait;
use common_catalog::consts::MIN_USER_TABLE_ID;
use common_telemetry::error;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MIN_USER_TABLE_ID};
use metrics::{decrement_gauge, increment_gauge};
use snafu::{ensure, OptionExt};
use snafu::OptionExt;
use table::metadata::TableId;
use table::table::TableIdProvider;
use table::TableRef;
use crate::error::{
self, CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu, TableNotFoundSnafu,
CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu, TableNotFoundSnafu,
};
use crate::schema::SchemaProvider;
use crate::{
CatalogManager, CatalogProvider, CatalogProviderRef, DeregisterTableRequest,
RegisterSchemaRequest, RegisterSystemTableRequest, RegisterTableRequest, RenameTableRequest,
SchemaProviderRef,
CatalogManager, DeregisterTableRequest, RegisterSchemaRequest, RegisterSystemTableRequest,
RegisterTableRequest, RenameTableRequest,
};
type SchemaEntries = HashMap<String, HashMap<String, TableRef>>;
/// Simple in-memory list of catalogs
pub struct MemoryCatalogManager {
/// Collection of catalogs containing schemas and ultimately Tables
pub catalogs: RwLock<HashMap<String, CatalogProviderRef>>,
pub catalogs: RwLock<HashMap<String, SchemaEntries>>,
pub table_id: AtomicU32,
}
@@ -50,13 +48,14 @@ impl Default for MemoryCatalogManager {
table_id: AtomicU32::new(MIN_USER_TABLE_ID),
catalogs: Default::default(),
};
let default_catalog = Arc::new(MemoryCatalogProvider::new());
manager
.register_catalog_sync("greptime".to_string(), default_catalog.clone())
.unwrap();
default_catalog
.register_schema_sync("public".to_string(), Arc::new(MemorySchemaProvider::new()))
.unwrap();
let catalog = HashMap::from([(DEFAULT_SCHEMA_NAME.to_string(), HashMap::new())]);
let _ = manager
.catalogs
.write()
.unwrap()
.insert(DEFAULT_CATALOG_NAME.to_string(), catalog);
manager
}
}
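The reworked `Default` impl replaces the old provider trait objects with plain nested maps (`catalog name → schema name → table`), pre-seeded with the default catalog and schema. A sketch of that shape, using a `String` stand-in for `TableRef`:

```rust
use std::collections::HashMap;

const DEFAULT_CATALOG_NAME: &str = "greptime";
const DEFAULT_SCHEMA_NAME: &str = "public";

// schema name -> table name -> table (a String stand-in for TableRef here).
type SchemaEntries = HashMap<String, HashMap<String, String>>;

fn default_catalogs() -> HashMap<String, SchemaEntries> {
    // Pre-seed the default catalog with an empty default schema, as the
    // reworked MemoryCatalogManager::default does.
    let catalog = HashMap::from([(DEFAULT_SCHEMA_NAME.to_string(), HashMap::new())]);
    HashMap::from([(DEFAULT_CATALOG_NAME.to_string(), catalog)])
}
```

With the provider indirection gone, every existence check and lookup becomes a direct map traversal under one `RwLock`.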
@@ -76,82 +75,79 @@ impl CatalogManager for MemoryCatalogManager {
}
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool> {
let schema = self
.catalog(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.schema(&request.schema)
.await?
.context(SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
let catalog = request.catalog.clone();
let schema = request.schema.clone();
let result = self.register_table_sync(request);
increment_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(&request.catalog, &request.schema)],
&[crate::metrics::db_label(&catalog, &schema)],
);
schema
.register_table(request.table_name, request.table)
.await
.map(|v| v.is_none())
result
}
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool> {
let catalog = self
.catalog(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?;
let schema =
catalog
.schema(&request.schema)
.await?
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
Ok(schema
.rename_table(&request.table_name, request.new_table_name)
.await
.is_ok())
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
let schema = self
.catalog(&request.catalog)
.context(CatalogNotFoundSnafu {
let mut catalogs = self.catalogs.write().unwrap();
let schema = catalogs
.get_mut(&request.catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.schema(&request.schema)
.await?
.get_mut(&request.schema)
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
decrement_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(&request.catalog, &request.schema)],
);
schema
.deregister_table(&request.table_name)
.await
.map(|v| v.is_some())
// check old and new table names
if !schema.contains_key(&request.table_name) {
return TableNotFoundSnafu {
table_info: request.table_name.to_string(),
}
.fail()?;
}
if schema.contains_key(&request.new_table_name) {
return TableExistsSnafu {
table: &request.new_table_name,
}
.fail();
}
let table = schema.remove(&request.table_name).unwrap();
let _ = schema.insert(request.new_table_name, table);
Ok(true)
}
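The rename path validates both names before mutating, so a failed rename leaves the map untouched; only then does it remove and re-insert. A self-contained sketch of that order of operations (error enum and `u32` table stand-in are illustrative):

```rust
use std::collections::HashMap;

#[derive(Debug, PartialEq)]
enum RenameError {
    NotFound,
    Exists,
}

/// Validate old-name-present and new-name-absent before touching the map,
/// so an error return implies no state change.
fn rename(tables: &mut HashMap<String, u32>, old: &str, new: &str) -> Result<(), RenameError> {
    if !tables.contains_key(old) {
        return Err(RenameError::NotFound);
    }
    if tables.contains_key(new) {
        return Err(RenameError::Exists);
    }
    let table = tables.remove(old).expect("checked above");
    tables.insert(new.to_string(), table);
    Ok(())
}
```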
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<()> {
let mut catalogs = self.catalogs.write().unwrap();
let schema = catalogs
.get_mut(&request.catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.get_mut(&request.schema)
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
let result = schema.remove(&request.table_name);
if result.is_some() {
decrement_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(&request.catalog, &request.schema)],
);
}
Ok(())
}
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool> {
let catalog = self
.catalog(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?;
catalog
.register_schema(request.schema, Arc::new(MemorySchemaProvider::new()))
.await?;
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_SCHEMA_COUNT, 1.0);
Ok(true)
let registered = self.register_schema_sync(request)?;
if registered {
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_SCHEMA_COUNT, 1.0);
}
Ok(registered)
}
async fn register_system_table(&self, _request: RegisterSystemTableRequest) -> Result<()> {
@@ -159,12 +155,16 @@ impl CatalogManager for MemoryCatalogManager {
Ok(())
}
async fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>> {
if let Some(c) = self.catalog(catalog) {
c.schema(schema).await
} else {
Ok(None)
}
async fn schema_exist(&self, catalog: &str, schema: &str) -> Result<bool> {
Ok(self
.catalogs
.read()
.unwrap()
.get(catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: catalog,
})?
.contains_key(schema))
}
async fn table(
@@ -173,27 +173,73 @@ impl CatalogManager for MemoryCatalogManager {
schema: &str,
table_name: &str,
) -> Result<Option<TableRef>> {
let Some(catalog) = self
.catalog(catalog) else { return Ok(None)};
let Some(s) = catalog.schema(schema).await? else { return Ok(None) };
s.table(table_name).await
let result = try {
self.catalogs
.read()
.unwrap()
.get(catalog)?
.get(schema)?
.get(table_name)
.cloned()?
};
Ok(result)
}
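Note that the `try { ... }` block above relies on the nightly `try_blocks` feature; each `?` inside it short-circuits the whole block to `None`. On stable Rust the same lookup can be written with `and_then` hops, one per `?` (types are a simplified stand-in for the real `TableRef` maps):

```rust
use std::collections::HashMap;

// catalog name -> schema name -> table name -> table (String stand-in).
type Catalogs = HashMap<String, HashMap<String, HashMap<String, String>>>;

// Stable-Rust equivalent of the `try { ... }` lookup: each `and_then`
// corresponds to one `?` in the try block.
fn lookup_table(catalogs: &Catalogs, catalog: &str, schema: &str, table: &str) -> Option<String> {
    catalogs
        .get(catalog)
        .and_then(|schemas| schemas.get(schema))
        .and_then(|tables| tables.get(table))
        .cloned()
}
```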
async fn catalog(&self, catalog: &str) -> Result<Option<CatalogProviderRef>> {
Ok(self.catalogs.read().unwrap().get(catalog).cloned())
async fn catalog_exist(&self, catalog: &str) -> Result<bool> {
Ok(self.catalogs.read().unwrap().get(catalog).is_some())
}
async fn table_exist(&self, catalog: &str, schema: &str, table: &str) -> Result<bool> {
let catalogs = self.catalogs.read().unwrap();
Ok(catalogs
.get(catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: catalog,
})?
.get(schema)
.with_context(|| SchemaNotFoundSnafu { catalog, schema })?
.contains_key(table))
}
async fn catalog_names(&self) -> Result<Vec<String>> {
Ok(self.catalogs.read().unwrap().keys().cloned().collect())
}
async fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>> {
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_CATALOG_COUNT, 1.0);
self.register_catalog_sync(name, catalog)
async fn schema_names(&self, catalog_name: &str) -> Result<Vec<String>> {
Ok(self
.catalogs
.read()
.unwrap()
.get(catalog_name)
.with_context(|| CatalogNotFoundSnafu { catalog_name })?
.keys()
.cloned()
.collect())
}
async fn table_names(&self, catalog_name: &str, schema_name: &str) -> Result<Vec<String>> {
Ok(self
.catalogs
.read()
.unwrap()
.get(catalog_name)
.with_context(|| CatalogNotFoundSnafu { catalog_name })?
.get(schema_name)
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?
.keys()
.cloned()
.collect())
}
async fn register_catalog(&self, name: String) -> Result<bool> {
let registered = self.register_catalog_sync(name)?;
if registered {
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_CATALOG_COUNT, 1.0);
}
Ok(registered)
}
fn as_any(&self) -> &dyn Any {
@@ -202,206 +248,78 @@ impl CatalogManager for MemoryCatalogManager {
}
impl MemoryCatalogManager {
/// Registers a catalog and return `None` if no catalog with the same name was already
/// registered, or `Some` with the previously registered catalog.
pub fn register_catalog_if_absent(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Option<CatalogProviderRef> {
/// Registers a catalog; returns whether a catalog with the same name already exists.
pub fn register_catalog_if_absent(&self, name: String) -> bool {
let mut catalogs = self.catalogs.write().unwrap();
let entry = catalogs.entry(name);
match entry {
Entry::Occupied(v) => Some(v.get().clone()),
Entry::Occupied(_) => true,
Entry::Vacant(v) => {
v.insert(catalog);
None
let _ = v.insert(HashMap::new());
false
}
}
}
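The `Entry` match above does the occupancy check and the insert with a single map lookup, avoiding a separate `contains_key` followed by `insert`. A stand-alone sketch of the same register-if-absent shape (unit-valued schemas stand in for the real entries):

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

/// Returns true if a catalog with this name already existed; otherwise
/// inserts an empty catalog and returns false, in one map lookup.
fn register_if_absent(catalogs: &mut HashMap<String, HashMap<String, ()>>, name: String) -> bool {
    match catalogs.entry(name) {
        Entry::Occupied(_) => true,
        Entry::Vacant(v) => {
            v.insert(HashMap::new());
            false
        }
    }
}
```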
pub fn register_catalog_sync(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>> {
pub fn register_catalog_sync(&self, name: String) -> Result<bool> {
let mut catalogs = self.catalogs.write().unwrap();
Ok(catalogs.insert(name, catalog))
Ok(catalogs.insert(name, HashMap::new()).is_some())
}
fn catalog(&self, catalog_name: &str) -> Option<CatalogProviderRef> {
self.catalogs.read().unwrap().get(catalog_name).cloned()
}
}
impl Default for MemoryCatalogProvider {
fn default() -> Self {
Self::new()
}
}
/// Simple in-memory implementation of a catalog.
pub struct MemoryCatalogProvider {
schemas: RwLock<HashMap<String, Arc<dyn SchemaProvider>>>,
}
impl MemoryCatalogProvider {
/// Instantiates a new MemoryCatalogProvider with an empty collection of schemas.
pub fn new() -> Self {
Self {
schemas: RwLock::new(HashMap::new()),
pub fn register_schema_sync(&self, request: RegisterSchemaRequest) -> Result<bool> {
let mut catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get_mut(&request.catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?;
if catalog.contains_key(&request.schema) {
return Ok(false);
}
let _ = catalog.insert(request.schema, HashMap::new());
Ok(true)
}
pub fn schema_names_sync(&self) -> Result<Vec<String>> {
let schemas = self.schemas.read().unwrap();
Ok(schemas.keys().cloned().collect())
}
pub fn register_table_sync(&self, request: RegisterTableRequest) -> Result<bool> {
let mut catalogs = self.catalogs.write().unwrap();
let schema = catalogs
.get_mut(&request.catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.get_mut(&request.schema)
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
pub fn register_schema_sync(
&self,
name: String,
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>> {
let mut schemas = self.schemas.write().unwrap();
ensure!(
!schemas.contains_key(&name),
error::SchemaExistsSnafu { schema: &name }
);
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_SCHEMA_COUNT, 1.0);
Ok(schemas.insert(name, schema))
}
pub fn schema_sync(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
let schemas = self.schemas.read().unwrap();
Ok(schemas.get(name).cloned())
}
}
#[async_trait::async_trait]
impl CatalogProvider for MemoryCatalogProvider {
fn as_any(&self) -> &dyn Any {
self
}
async fn schema_names(&self) -> Result<Vec<String>> {
self.schema_names_sync()
}
async fn register_schema(
&self,
name: String,
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>> {
self.register_schema_sync(name, schema)
}
async fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
self.schema_sync(name)
}
}
/// Simple in-memory implementation of a schema.
pub struct MemorySchemaProvider {
tables: RwLock<HashMap<String, TableRef>>,
}
impl MemorySchemaProvider {
/// Instantiates a new MemorySchemaProvider with an empty collection of tables.
pub fn new() -> Self {
Self {
tables: RwLock::new(HashMap::new()),
}
}
pub fn register_table_sync(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
let mut tables = self.tables.write().unwrap();
if let Some(existing) = tables.get(name.as_str()) {
// if table with the same name but different table id exists, then it's a fatal bug
if existing.table_info().ident.table_id != table.table_info().ident.table_id {
error!(
"Unexpected table register: {:?}, existing: {:?}",
table.table_info(),
existing.table_info()
);
return TableExistsSnafu { table: name }.fail()?;
if schema.contains_key(&request.table_name) {
return TableExistsSnafu {
table: &request.table_name,
}
Ok(Some(existing.clone()))
} else {
Ok(tables.insert(name, table))
.fail();
}
Ok(schema.insert(request.table_name, request.table).is_none())
}
pub fn rename_table_sync(&self, name: &str, new_name: String) -> Result<TableRef> {
let mut tables = self.tables.write().unwrap();
let Some(table) = tables.remove(name) else {
return TableNotFoundSnafu {
table_info: name.to_string(),
}
.fail()?;
#[cfg(any(test, feature = "testing"))]
pub fn new_with_table(table: TableRef) -> Self {
let manager = Self::default();
let request = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table.table_info().name.clone(),
table_id: table.table_info().ident.table_id,
table,
};
let e = match tables.entry(new_name) {
Entry::Vacant(e) => e,
Entry::Occupied(e) => {
return TableExistsSnafu { table: e.key() }.fail();
}
};
e.insert(table.clone());
Ok(table)
}
pub fn table_exist_sync(&self, name: &str) -> Result<bool> {
let tables = self.tables.read().unwrap();
Ok(tables.contains_key(name))
}
pub fn deregister_table_sync(&self, name: &str) -> Result<Option<TableRef>> {
let mut tables = self.tables.write().unwrap();
Ok(tables.remove(name))
}
}
impl Default for MemorySchemaProvider {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl SchemaProvider for MemorySchemaProvider {
fn as_any(&self) -> &dyn Any {
self
}
async fn table_names(&self) -> Result<Vec<String>> {
let tables = self.tables.read().unwrap();
Ok(tables.keys().cloned().collect())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let tables = self.tables.read().unwrap();
Ok(tables.get(name).cloned())
}
async fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
self.register_table_sync(name, table)
}
async fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
self.rename_table_sync(name, new_name)
}
async fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
self.deregister_table_sync(name)
}
async fn table_exist(&self, name: &str) -> Result<bool> {
self.table_exist_sync(name)
let _ = manager.register_table_sync(request).unwrap();
manager
}
}
/// Creates a memory catalog list that contains a numbers table for test
pub fn new_memory_catalog_list() -> Result<Arc<MemoryCatalogManager>> {
pub fn new_memory_catalog_manager() -> Result<Arc<MemoryCatalogManager>> {
Ok(Arc::new(MemoryCatalogManager::default()))
}
@@ -410,88 +328,99 @@ mod tests {
use common_catalog::consts::*;
use common_error::ext::ErrorExt;
use common_error::prelude::StatusCode;
use table::table::numbers::NumbersTable;
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
use super::*;
#[tokio::test]
async fn test_new_memory_catalog_list() {
let catalog_list = new_memory_catalog_list().unwrap();
let default_catalog = CatalogManager::catalog(&*catalog_list, DEFAULT_CATALOG_NAME)
.await
.unwrap()
.unwrap();
let catalog_list = new_memory_catalog_manager().unwrap();
let default_schema = default_catalog
.schema(DEFAULT_SCHEMA_NAME)
.await
.unwrap()
.unwrap();
let register_request = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: NUMBERS_TABLE_NAME.to_string(),
table_id: NUMBERS_TABLE_ID,
table: Arc::new(NumbersTable::default()),
};
default_schema
.register_table("numbers".to_string(), Arc::new(NumbersTable::default()))
let _ = catalog_list.register_table(register_request).await.unwrap();
let table = catalog_list
.table(
DEFAULT_CATALOG_NAME,
DEFAULT_SCHEMA_NAME,
NUMBERS_TABLE_NAME,
)
.await
.unwrap();
let table = default_schema.table("numbers").await.unwrap();
assert!(table.is_some());
assert!(default_schema.table("not_exists").await.unwrap().is_none());
}
#[tokio::test]
async fn test_mem_provider() {
let provider = MemorySchemaProvider::new();
let table_name = "numbers";
assert!(!provider.table_exist_sync(table_name).unwrap());
provider.deregister_table_sync(table_name).unwrap();
let test_table = NumbersTable::default();
// register table successfully
assert!(provider
.register_table_sync(table_name.to_string(), Arc::new(test_table))
let _ = table.unwrap();
assert!(catalog_list
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "not_exists")
.await
.unwrap()
.is_none());
assert!(provider.table_exist_sync(table_name).unwrap());
let other_table = NumbersTable::new(12);
let result = provider.register_table_sync(table_name.to_string(), Arc::new(other_table));
let err = result.err().unwrap();
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
#[tokio::test]
async fn test_mem_provider_rename_table() {
let provider = MemorySchemaProvider::new();
let table_name = "num";
assert!(!provider.table_exist_sync(table_name).unwrap());
let test_table: TableRef = Arc::new(NumbersTable::default());
async fn test_mem_manager_rename_table() {
let catalog = MemoryCatalogManager::default();
let table_name = "test_table";
assert!(!catalog
.table_exist(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap());
// register test table
assert!(provider
.register_table_sync(table_name.to_string(), test_table.clone())
.unwrap()
.is_none());
assert!(provider.table_exist_sync(table_name).unwrap());
let table_id = 2333;
let register_request = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
table_id,
table: Arc::new(NumbersTable::new(table_id)),
};
assert!(catalog.register_table(register_request).await.unwrap());
assert!(catalog
.table_exist(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap());
// rename test table
let new_table_name = "numbers";
provider
.rename_table_sync(table_name, new_table_name.to_string())
.unwrap();
let new_table_name = "test_table_renamed";
let rename_request = RenameTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
new_table_name: new_table_name.to_string(),
table_id,
};
let _ = catalog.rename_table(rename_request).await.unwrap();
// test old table name not exist
assert!(!provider.table_exist_sync(table_name).unwrap());
provider.deregister_table_sync(table_name).unwrap();
assert!(!catalog
.table_exist(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap());
// test new table name exists
assert!(provider.table_exist_sync(new_table_name).unwrap());
let registered_table = provider.table(new_table_name).await.unwrap().unwrap();
assert_eq!(
registered_table.table_info().ident.table_id,
test_table.table_info().ident.table_id
);
assert!(catalog
.table_exist(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
.await
.unwrap());
let registered_table = catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
.await
.unwrap()
.unwrap();
assert_eq!(registered_table.table_info().ident.table_id, table_id);
let other_table = Arc::new(NumbersTable::new(2));
let result = provider
.register_table(new_table_name.to_string(), other_table)
.await;
let dup_register_request = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: new_table_name.to_string(),
table_id: table_id + 1,
table: Arc::new(NumbersTable::new(table_id + 1)),
};
let result = catalog.register_table(dup_register_request).await;
let err = result.err().unwrap();
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
@@ -499,16 +428,11 @@ mod tests {
#[tokio::test]
async fn test_catalog_rename_table() {
let catalog = MemoryCatalogManager::default();
let schema = catalog
.schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.await
.unwrap()
.unwrap();
// register table
let table_name = "num";
let table_id = 2333;
let table: TableRef = Arc::new(NumbersTable::new(table_id));
// register table
let register_table_req = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
@@ -517,7 +441,11 @@ mod tests {
table,
};
assert!(catalog.register_table(register_table_req).await.unwrap());
assert!(schema.table_exist(table_name).await.unwrap());
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap()
.is_some());
// rename table
let new_table_name = "numbers_new";
@@ -529,8 +457,16 @@ mod tests {
table_id,
};
assert!(catalog.rename_table(rename_table_req).await.unwrap());
assert!(!schema.table_exist(table_name).await.unwrap());
assert!(schema.table_exist(new_table_name).await.unwrap());
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap()
.is_none());
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
.await
.unwrap()
.is_some());
let registered_table = catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
@@ -543,50 +479,42 @@ mod tests {
#[test]
pub fn test_register_if_absent() {
let list = MemoryCatalogManager::default();
assert!(list
.register_catalog_if_absent(
"test_catalog".to_string(),
Arc::new(MemoryCatalogProvider::new())
)
.is_none());
list.register_catalog_if_absent(
"test_catalog".to_string(),
Arc::new(MemoryCatalogProvider::new()),
)
.unwrap();
list.as_any()
.downcast_ref::<MemoryCatalogManager>()
.unwrap();
assert!(!list.register_catalog_if_absent("test_catalog".to_string(),));
assert!(list.register_catalog_if_absent("test_catalog".to_string()));
}
#[tokio::test]
pub async fn test_catalog_deregister_table() {
let catalog = MemoryCatalogManager::default();
let schema = catalog
.schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.await
.unwrap()
.unwrap();
let table_name = "foo_table";
let register_table_req = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "numbers".to_string(),
table_name: table_name.to_string(),
table_id: 2333,
table: Arc::new(NumbersTable::default()),
};
catalog.register_table(register_table_req).await.unwrap();
assert!(schema.table_exist("numbers").await.unwrap());
let _ = catalog.register_table(register_table_req).await.unwrap();
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap()
.is_some());
let deregister_table_req = DeregisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "numbers".to_string(),
table_name: table_name.to_string(),
};
catalog
.deregister_table(deregister_table_req)
.await
.unwrap();
assert!(!schema.table_exist("numbers").await.unwrap());
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap()
.is_none());
}
}


@@ -12,17 +12,10 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::fmt::Debug;
use std::pin::Pin;
use std::sync::Arc;
pub use client::{CachedMetaKvBackend, MetaKvBackend};
use futures::Stream;
use futures_util::StreamExt;
pub use manager::{RemoteCatalogManager, RemoteCatalogProvider, RemoteSchemaProvider};
use crate::error::Error;
pub use manager::RemoteCatalogManager;
mod client;
mod manager;
@@ -31,59 +24,6 @@ mod manager;
pub mod mock;
pub mod region_alive_keeper;
#[derive(Debug, Clone)]
pub struct Kv(pub Vec<u8>, pub Vec<u8>);
pub type ValueIter<'a, E> = Pin<Box<dyn Stream<Item = Result<Kv, E>> + Send + 'a>>;
#[async_trait::async_trait]
pub trait KvBackend: Send + Sync {
fn range<'a, 'b>(&'a self, key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b;
async fn set(&self, key: &[u8], val: &[u8]) -> Result<(), Error>;
/// Compares and sets the value of a key. `expect` is the expected current value: if the backend's
/// value associated with the key equals `expect`, it is updated to `val`.
///
/// - If the compare-and-set operation successfully updated the value, this method returns `Ok(Ok(()))`.
/// - If the associated value does not equal `expect`, nothing is updated and an `Ok(Err(Option<Vec<u8>>))`
/// is returned; the inner `Option<Vec<u8>>` holds the key's current value (`None` if absent).
/// - If any error happens during the operation, an `Err(Error)` is returned.
async fn compare_and_set(
&self,
key: &[u8],
expect: &[u8],
val: &[u8],
) -> Result<Result<(), Option<Vec<u8>>>, Error>;
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<(), Error>;
async fn delete(&self, key: &[u8]) -> Result<(), Error> {
self.delete_range(key, &[]).await
}
/// Default `get` implementation, built on top of the `range` method.
async fn get(&self, key: &[u8]) -> Result<Option<Kv>, Error> {
let mut iter = self.range(key);
while let Some(r) = iter.next().await {
let kv = r?;
if kv.0 == key {
return Ok(Some(kv));
}
}
Ok(None)
}
/// Atomically renames `from_key` to `to_key`.
async fn move_value(&self, from_key: &[u8], to_key: &[u8]) -> Result<(), Error>;
fn as_any(&self) -> &dyn Any;
}
pub type KvBackendRef = Arc<dyn KvBackend>;
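The `compare_and_set` contract documented above can be exercised against a plain in-memory map. The sketch below is a minimal, non-async illustration of that return shape (the function and names here are hypothetical, not part of the crate):

```rust
use std::collections::HashMap;

// Hypothetical sketch of the documented compare-and-set contract:
// Ok(()) on a successful update, Err(current) on mismatch, where
// `current` is the key's present value (None if the key is absent).
fn compare_and_set(
    map: &mut HashMap<Vec<u8>, Vec<u8>>,
    key: &[u8],
    expect: &[u8],
    val: &[u8],
) -> Result<(), Option<Vec<u8>>> {
    let current = map.get(key).cloned();
    match current {
        // Vacant key with an empty `expect` counts as a successful insert.
        None if expect.is_empty() => {
            map.insert(key.to_vec(), val.to_vec());
            Ok(())
        }
        None => Err(None),
        Some(cur) if cur == expect => {
            map.insert(key.to_vec(), val.to_vec());
            Ok(())
        }
        Some(cur) => Err(Some(cur)),
    }
}

fn main() {
    let mut map = HashMap::new();
    // Vacant key + empty expect: insert succeeds.
    assert_eq!(compare_and_set(&mut map, b"k", b"", b"v1"), Ok(()));
    // Wrong expect: nothing is updated, current value is returned.
    assert_eq!(
        compare_and_set(&mut map, b"k", b"x", b"v2"),
        Err(Some(b"v1".to_vec()))
    );
    // Matching expect: update succeeds.
    assert_eq!(compare_and_set(&mut map, b"k", b"v1", b"v2"), Ok(()));
}
```

The real trait wraps this inner result in a `Result<_, Error>` so transport failures stay distinct from a lost compare.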
#[async_trait::async_trait]
pub trait KvCacheInvalidator: Send + Sync {
async fn invalidate_key(&self, key: &[u8]);
@@ -93,14 +33,19 @@ pub type KvCacheInvalidatorRef = Arc<dyn KvCacheInvalidator>;
#[cfg(test)]
mod tests {
use async_stream::stream;
use std::any::Any;
use super::*;
use async_stream::stream;
use common_meta::kv_backend::{Kv, KvBackend, ValueIter};
use crate::error::Error;
struct MockKvBackend {}
#[async_trait::async_trait]
impl KvBackend for MockKvBackend {
type Error = Error;
fn range<'a, 'b>(&'a self, _key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b,


@@ -18,24 +18,26 @@ use std::sync::Arc;
use std::time::Duration;
use async_stream::stream;
use common_error::prelude::BoxedError;
use common_meta::error::Error::{CacheNotGet, GetKvCache};
use common_meta::error::{CacheNotGetSnafu, Error, MetaSrvSnafu, Result};
use common_meta::kv_backend::{Kv, KvBackend, KvBackendRef, ValueIter};
use common_meta::rpc::store::{
CompareAndPutRequest, DeleteRangeRequest, MoveValueRequest, PutRequest, RangeRequest,
};
use common_telemetry::{info, timer};
use meta_client::client::MetaClient;
use moka::future::{Cache, CacheBuilder};
use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
use super::KvCacheInvalidator;
use crate::error::{Error, GenericSnafu, MetaSrvSnafu, Result};
use crate::metrics::{METRIC_CATALOG_KV_GET, METRIC_CATALOG_KV_REMOTE_GET};
use crate::remote::{Kv, KvBackend, KvBackendRef, ValueIter};
const CACHE_MAX_CAPACITY: u64 = 10000;
const CACHE_TTL_SECOND: u64 = 10 * 60;
const CACHE_TTI_SECOND: u64 = 5 * 60;
pub type CacheBackendRef = Arc<Cache<Vec<u8>, Option<Kv>>>;
pub type CacheBackendRef = Arc<Cache<Vec<u8>, Kv>>;
pub struct CachedMetaKvBackend {
kv_backend: KvBackendRef,
cache: CacheBackendRef,
@@ -43,6 +45,8 @@ pub struct CachedMetaKvBackend {
#[async_trait::async_trait]
impl KvBackend for CachedMetaKvBackend {
type Error = Error;
fn range<'a, 'b>(&'a self, key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b,
@@ -55,12 +59,26 @@ impl KvBackend for CachedMetaKvBackend {
let init = async {
let _timer = timer!(METRIC_CATALOG_KV_REMOTE_GET);
self.kv_backend.get(key).await
self.kv_backend.get(key).await.map(|val| {
val.with_context(|| CacheNotGetSnafu {
key: String::from_utf8_lossy(key),
})
})?
};
let schema_provider = self.cache.try_get_with_by_ref(key, init).await;
schema_provider.map_err(|e| GenericSnafu { msg: e.to_string() }.build())
// currently moka doesn't have `optionally_try_get_with_by_ref`
// TODO(fys): change to moka method when available
// https://github.com/moka-rs/moka/issues/254
match self.cache.try_get_with_by_ref(key, init).await {
Ok(val) => Ok(Some(val)),
Err(e) => match e.as_ref() {
CacheNotGet { .. } => Ok(None),
_ => Err(e),
},
}
.map_err(|e| GetKvCache {
err_msg: e.to_string(),
})
}
async fn set(&self, key: &[u8], val: &[u8]) -> Result<()> {
@@ -165,6 +183,8 @@ pub struct MetaKvBackend {
/// comparing to `Accessor`'s list and get method.
#[async_trait::async_trait]
impl KvBackend for MetaKvBackend {
type Error = Error;
fn range<'a, 'b>(&'a self, key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b,
@@ -175,6 +195,7 @@ impl KvBackend for MetaKvBackend {
.client
.range(RangeRequest::new().with_prefix(key))
.await
.map_err(BoxedError::new)
.context(MetaSrvSnafu)?;
let kvs = resp.take_kvs();
for mut kv in kvs.into_iter() {
@@ -188,6 +209,7 @@ impl KvBackend for MetaKvBackend {
.client
.range(RangeRequest::new().with_key(key))
.await
.map_err(BoxedError::new)
.context(MetaSrvSnafu)?;
Ok(response
.take_kvs()
@@ -199,13 +221,23 @@ impl KvBackend for MetaKvBackend {
let req = PutRequest::new()
.with_key(key.to_vec())
.with_value(val.to_vec());
let _ = self.client.put(req).await.context(MetaSrvSnafu)?;
let _ = self
.client
.put(req)
.await
.map_err(BoxedError::new)
.context(MetaSrvSnafu)?;
Ok(())
}
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<()> {
let req = DeleteRangeRequest::new().with_range(key.to_vec(), end.to_vec());
let resp = self.client.delete_range(req).await.context(MetaSrvSnafu)?;
let resp = self
.client
.delete_range(req)
.await
.map_err(BoxedError::new)
.context(MetaSrvSnafu)?;
info!(
"Delete range, key: {}, end: {}, deleted: {}",
String::from_utf8_lossy(key),
@@ -230,6 +262,7 @@ impl KvBackend for MetaKvBackend {
.client
.compare_and_put(request)
.await
.map_err(BoxedError::new)
.context(MetaSrvSnafu)?;
if response.is_success() {
Ok(Ok(()))
@@ -240,7 +273,12 @@ impl KvBackend for MetaKvBackend {
async fn move_value(&self, from_key: &[u8], to_key: &[u8]) -> Result<()> {
let req = MoveValueRequest::new(from_key, to_key);
self.client.move_value(req).await.context(MetaSrvSnafu)?;
let _ = self
.client
.move_value(req)
.await
.map_err(BoxedError::new)
.context(MetaSrvSnafu)?;
Ok(())
}

File diff suppressed because it is too large

@@ -12,162 +12,24 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::collections::btree_map::Entry;
use std::collections::{BTreeMap, HashMap};
use std::fmt::{Display, Formatter};
use std::str::FromStr;
use std::sync::Arc;
use std::collections::HashMap;
use std::sync::{Arc, RwLock as StdRwLock};
use async_stream::stream;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_recordbatch::RecordBatch;
use common_telemetry::logging::info;
use datatypes::data_type::ConcreteDataType;
use datatypes::schema::{ColumnSchema, Schema};
use datatypes::vectors::StringVector;
use serde::Serializer;
use table::engine::{CloseTableResult, EngineContext, TableEngine, TableReference};
use table::engine::{CloseTableResult, EngineContext, TableEngine};
use table::metadata::TableId;
use table::requests::{
AlterTableRequest, CloseTableRequest, CreateTableRequest, DropTableRequest, OpenTableRequest,
};
use table::test_util::MemTable;
use table::TableRef;
use tokio::sync::RwLock;
use crate::error::Error;
use crate::helper::{CatalogKey, CatalogValue, SchemaKey, SchemaValue};
use crate::remote::{Kv, KvBackend, ValueIter};
pub struct MockKvBackend {
map: RwLock<BTreeMap<Vec<u8>, Vec<u8>>>,
}
impl Default for MockKvBackend {
fn default() -> Self {
let mut map = BTreeMap::default();
let catalog_value = CatalogValue {}.as_bytes().unwrap();
let schema_value = SchemaValue {}.as_bytes().unwrap();
let default_catalog_key = CatalogKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
}
.to_string();
let default_schema_key = SchemaKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
}
.to_string();
// create default catalog and schema
map.insert(default_catalog_key.into(), catalog_value);
map.insert(default_schema_key.into(), schema_value);
let map = RwLock::new(map);
Self { map }
}
}
impl Display for MockKvBackend {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
futures::executor::block_on(async {
let map = self.map.read().await;
for (k, v) in map.iter() {
f.serialize_str(&String::from_utf8_lossy(k))?;
f.serialize_str(" -> ")?;
f.serialize_str(&String::from_utf8_lossy(v))?;
f.serialize_str("\n")?;
}
Ok(())
})
}
}
#[async_trait::async_trait]
impl KvBackend for MockKvBackend {
fn range<'a, 'b>(&'a self, key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b,
{
let prefix = key.to_vec();
let prefix_string = String::from_utf8_lossy(&prefix).to_string();
Box::pin(stream!({
let maps = self.map.read().await.clone();
for (k, v) in maps.range(prefix.clone()..) {
let key_string = String::from_utf8_lossy(k).to_string();
let matches = key_string.starts_with(&prefix_string);
if matches {
yield Ok(Kv(k.clone(), v.clone()))
} else {
info!("Stream finished");
return;
}
}
}))
}
async fn set(&self, key: &[u8], val: &[u8]) -> Result<(), Error> {
let mut map = self.map.write().await;
map.insert(key.to_vec(), val.to_vec());
Ok(())
}
async fn compare_and_set(
&self,
key: &[u8],
expect: &[u8],
val: &[u8],
) -> Result<Result<(), Option<Vec<u8>>>, Error> {
let mut map = self.map.write().await;
let existing = map.entry(key.to_vec());
match existing {
Entry::Vacant(e) => {
if expect.is_empty() {
e.insert(val.to_vec());
Ok(Ok(()))
} else {
Ok(Err(None))
}
}
Entry::Occupied(mut existing) => {
if existing.get() == expect {
existing.insert(val.to_vec());
Ok(Ok(()))
} else {
Ok(Err(Some(existing.get().clone())))
}
}
}
}
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<(), Error> {
let mut map = self.map.write().await;
if end.is_empty() {
let _ = map.remove(key);
} else {
let start = key.to_vec();
let end = end.to_vec();
let range = start..end;
map.retain(|k, _| !range.contains(k));
}
Ok(())
}
async fn move_value(&self, _from_key: &[u8], _to_key: &[u8]) -> Result<(), Error> {
unimplemented!()
}
fn as_any(&self) -> &dyn Any {
self
}
}
#[derive(Default)]
pub struct MockTableEngine {
tables: RwLock<HashMap<String, TableRef>>,
tables: StdRwLock<HashMap<TableId, TableRef>>,
}
#[async_trait::async_trait]
@@ -182,21 +44,8 @@ impl TableEngine for MockTableEngine {
_ctx: &EngineContext,
request: CreateTableRequest,
) -> table::Result<TableRef> {
let table_name = request.table_name.clone();
let catalog_name = request.catalog_name.clone();
let schema_name = request.schema_name.clone();
let table_full_name =
TableReference::full(&catalog_name, &schema_name, &table_name).to_string();
let table_id = request.id;
let default_table_id = "0".to_owned();
let table_id = TableId::from_str(
request
.table_options
.extra_options
.get("table_id")
.unwrap_or(&default_table_id),
)
.unwrap();
let schema = Arc::new(Schema::new(vec![ColumnSchema::new(
"name",
ConcreteDataType::string_datatype(),
@@ -206,16 +55,16 @@ impl TableEngine for MockTableEngine {
let data = vec![Arc::new(StringVector::from(vec!["a", "b", "c"])) as _];
let record_batch = RecordBatch::new(schema, data).unwrap();
let table: TableRef = Arc::new(MemTable::new_with_catalog(
&table_name,
&request.table_name,
record_batch,
table_id,
catalog_name,
schema_name,
request.catalog_name,
request.schema_name,
vec![0],
)) as Arc<_>;
let mut tables = self.tables.write().await;
tables.insert(table_full_name, table.clone() as TableRef);
let mut tables = self.tables.write().unwrap();
let _ = tables.insert(table_id, table.clone() as TableRef);
Ok(table)
}
@@ -224,7 +73,7 @@ impl TableEngine for MockTableEngine {
_ctx: &EngineContext,
request: OpenTableRequest,
) -> table::Result<Option<TableRef>> {
Ok(self.tables.read().await.get(&request.table_name).cloned())
Ok(self.tables.read().unwrap().get(&request.table_id).cloned())
}
async fn alter_table(
@@ -238,25 +87,13 @@ impl TableEngine for MockTableEngine {
fn get_table(
&self,
_ctx: &EngineContext,
table_ref: &TableReference,
table_id: TableId,
) -> table::Result<Option<TableRef>> {
futures::executor::block_on(async {
Ok(self
.tables
.read()
.await
.get(&table_ref.to_string())
.cloned())
})
Ok(self.tables.read().unwrap().get(&table_id).cloned())
}
fn table_exists(&self, _ctx: &EngineContext, table_ref: &TableReference) -> bool {
futures::executor::block_on(async {
self.tables
.read()
.await
.contains_key(&table_ref.to_string())
})
fn table_exists(&self, _ctx: &EngineContext, table_id: TableId) -> bool {
self.tables.read().unwrap().contains_key(&table_id)
}
async fn drop_table(
@@ -272,11 +109,7 @@ impl TableEngine for MockTableEngine {
_ctx: &EngineContext,
request: CloseTableRequest,
) -> table::Result<CloseTableResult> {
let _ = self
.tables
.write()
.await
.remove(&request.table_ref().to_string());
let _ = self.tables.write().unwrap().remove(&request.table_id);
Ok(CloseTableResult::Released(vec![]))
}


@@ -92,7 +92,7 @@ impl RegionAliveKeepers {
}
let mut keepers = self.keepers.lock().await;
keepers.insert(table_ident.clone(), keeper.clone());
let _ = keepers.insert(table_ident.clone(), keeper.clone());
if self.started.load(Ordering::Relaxed) {
keeper.start().await;
@@ -237,7 +237,7 @@ impl RegionAliveKeeper {
let countdown_task_handles = Arc::downgrade(&self.countdown_task_handles);
let on_task_finished = async move {
if let Some(x) = countdown_task_handles.upgrade() {
x.lock().await.remove(&region);
let _ = x.lock().await.remove(&region);
} // Else the countdown task handles map could be dropped because the keeper is dropped.
};
let handle = Arc::new(CountdownTaskHandle::new(
@@ -248,7 +248,7 @@ impl RegionAliveKeeper {
));
let mut handles = self.countdown_task_handles.lock().await;
handles.insert(region, handle.clone());
let _ = handles.insert(region, handle.clone());
if self.started.load(Ordering::Relaxed) {
handle.start(self.heartbeat_interval_millis).await;
@@ -475,6 +475,7 @@ impl CountdownTask {
catalog_name: table_ident.catalog.clone(),
schema_name: table_ident.schema.clone(),
table_name: table_ident.table.clone(),
table_id: table_ident.table_id,
region_numbers: vec![region],
flush: true,
};
@@ -499,7 +500,7 @@ mod test {
use common_meta::heartbeat::mailbox::HeartbeatMailbox;
use datatypes::schema::RawSchema;
use table::engine::manager::MemoryTableEngineManager;
use table::engine::{TableEngine, TableReference};
use table::engine::TableEngine;
use table::requests::{CreateTableRequest, TableOptions};
use table::test_util::EmptyTable;
@@ -676,7 +677,7 @@ mod test {
let region = 1;
assert!(keeper.find_handle(&region).await.is_none());
keeper.register_region(region).await;
assert!(keeper.find_handle(&region).await.is_some());
let _ = keeper.find_handle(&region).await.unwrap();
let ten_seconds_later = || Instant::now() + Duration::from_secs(10);
@@ -719,7 +720,7 @@ mod test {
let tx = handle.tx.clone();
// assert countdown task is running
assert!(tx.send(CountdownCommand::Start(5000)).await.is_ok());
tx.send(CountdownCommand::Start(5000)).await.unwrap();
assert!(!finished.load(Ordering::Relaxed));
drop(handle);
@@ -751,8 +752,9 @@ mod test {
let catalog = "my_catalog";
let schema = "my_schema";
let table = "my_table";
let table_id = 1;
let request = CreateTableRequest {
id: 1,
id: table_id,
catalog_name: catalog.to_string(),
schema_name: schema.to_string(),
table_name: table.to_string(),
@@ -768,16 +770,15 @@ mod test {
table_options: TableOptions::default(),
engine: "mito".to_string(),
};
let table_ref = TableReference::full(catalog, schema, table);
let table_engine = Arc::new(MockTableEngine::default());
table_engine.create_table(ctx, request).await.unwrap();
let _ = table_engine.create_table(ctx, request).await.unwrap();
let table_ident = TableIdent {
catalog: catalog.to_string(),
schema: schema.to_string(),
table: table.to_string(),
table_id: 1024,
table_id,
engine: "mito".to_string(),
};
let (tx, rx) = mpsc::channel(10);
@@ -787,7 +788,7 @@ mod test {
region: 1,
rx,
};
common_runtime::spawn_bg(async move {
let _handle = common_runtime::spawn_bg(async move {
task.run().await;
});
@@ -813,9 +814,9 @@ mod test {
.unwrap();
// assert the table is closed after deadline is reached
assert!(table_engine.table_exists(ctx, &table_ref));
assert!(table_engine.table_exists(ctx, table_id));
// spare 500ms for the task to close the table
tokio::time::sleep(Duration::from_millis(2000)).await;
assert!(!table_engine.table_exists(ctx, &table_ref));
assert!(!table_engine.table_exists(ctx, table_id));
}
}


@@ -1,69 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use table::TableRef;
use crate::error::{NotSupportedSnafu, Result};
/// Represents a schema, comprising a number of named tables.
#[async_trait]
pub trait SchemaProvider: Sync + Send {
/// Returns the schema provider as [`Any`](std::any::Any)
/// so that it can be downcast to a specific implementation.
fn as_any(&self) -> &dyn Any;
/// Retrieves the list of available table names in this schema.
async fn table_names(&self) -> Result<Vec<String>>;
/// Retrieves a specific table from the schema by name, provided it exists.
async fn table(&self, name: &str) -> Result<Option<TableRef>>;
/// If supported by the implementation, adds a new table to this schema.
/// If a table of the same name already exists, it returns a "Table already exists" error.
async fn register_table(&self, name: String, _table: TableRef) -> Result<Option<TableRef>> {
NotSupportedSnafu {
op: format!("register_table({name}, <table>)"),
}
.fail()
}
/// If supported by the implementation, renames an existing table from this schema and returns it.
/// If no table of that name exists, returns a "Table not found" error.
async fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
NotSupportedSnafu {
op: format!("rename_table({name}, {new_name})"),
}
.fail()
}
/// If supported by the implementation, removes an existing table from this schema and returns it.
/// If no table of that name exists, returns Ok(None).
async fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
NotSupportedSnafu {
op: format!("deregister_table({name})"),
}
.fail()
}
/// If supported by the implementation, checks whether the table exists in the schema provider.
/// Returns `false` if no matching table is found, `true` otherwise.
async fn table_exist(&self, name: &str) -> Result<bool>;
}
pub type SchemaProviderRef = Arc<dyn SchemaProvider>;


@@ -20,8 +20,6 @@ use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MITO_ENGINE,
SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_ID, SYSTEM_CATALOG_TABLE_NAME,
};
use common_query::logical_plan::Expr;
use common_query::physical_plan::{PhysicalPlanRef, SessionContext};
use common_recordbatch::SendableRecordBatchStream;
use common_telemetry::debug;
use common_time::util;
@@ -60,15 +58,6 @@ impl Table for SystemCatalogTable {
self.0.schema()
}
async fn scan(
&self,
projection: Option<&Vec<usize>>,
filters: &[Expr],
limit: Option<usize>,
) -> table::Result<PhysicalPlanRef> {
self.0.scan(projection, filters, limit).await
}
async fn scan_to_stream(&self, request: ScanRequest) -> TableResult<SendableRecordBatchStream> {
self.0.scan_to_stream(request).await
}
@@ -136,14 +125,17 @@ impl SystemCatalogTable {
/// Creates a stream of all entries inside the system catalog table.
pub async fn records(&self) -> Result<SendableRecordBatchStream> {
let full_projection = None;
let ctx = SessionContext::new();
let scan = self
.scan(full_projection, &[], None)
let scan_req = ScanRequest {
sequence: None,
projection: full_projection,
filters: vec![],
output_ordering: None,
limit: None,
};
let stream = self
.scan_to_stream(scan_req)
.await
.context(error::SystemCatalogTableScanSnafu)?;
let stream = scan
.execute(0, ctx.task_ctx())
.context(error::SystemCatalogTableScanExecSnafu)?;
Ok(stream)
}
}
@@ -211,38 +203,50 @@ pub fn build_table_insert_request(
build_insert_request(
EntryType::Table,
entry_key.as_bytes(),
serde_json::to_string(&TableEntryValue { table_name, engine })
.unwrap()
.as_bytes(),
serde_json::to_string(&TableEntryValue {
table_name,
engine,
is_deleted: false,
})
.unwrap()
.as_bytes(),
)
}
pub(crate) fn build_table_deletion_request(
request: &DeregisterTableRequest,
table_id: TableId,
) -> DeleteRequest {
let table_key = format_table_entry_key(&request.catalog, &request.schema, table_id);
DeleteRequest {
key_column_values: build_primary_key_columns(EntryType::Table, table_key.as_bytes()),
}
) -> InsertRequest {
let entry_key = format_table_entry_key(&request.catalog, &request.schema, table_id);
build_insert_request(
EntryType::Table,
entry_key.as_bytes(),
serde_json::to_string(&TableEntryValue {
table_name: "".to_string(),
engine: "".to_string(),
is_deleted: true,
})
.unwrap()
.as_bytes(),
)
}
fn build_primary_key_columns(entry_type: EntryType, key: &[u8]) -> HashMap<String, VectorRef> {
let mut m = HashMap::with_capacity(3);
m.insert(
"entry_type".to_string(),
Arc::new(UInt8Vector::from_slice([entry_type as u8])) as _,
);
m.insert(
"key".to_string(),
Arc::new(BinaryVector::from_slice(&[key])) as _,
);
// Timestamp in key part is intentionally left as 0
m.insert(
"timestamp".to_string(),
Arc::new(TimestampMillisecondVector::from_slice([0])) as _,
);
m
HashMap::from([
(
"entry_type".to_string(),
Arc::new(UInt8Vector::from_slice([entry_type as u8])) as VectorRef,
),
(
"key".to_string(),
Arc::new(BinaryVector::from_slice(&[key])) as VectorRef,
),
(
"timestamp".to_string(),
// Timestamp in key part is intentionally left as 0
Arc::new(TimestampMillisecondVector::from_slice([0])) as VectorRef,
),
])
}
pub fn build_schema_insert_request(catalog_name: String, schema_name: String) -> InsertRequest {
@@ -262,18 +266,18 @@ pub fn build_insert_request(entry_type: EntryType, key: &[u8], value: &[u8]) ->
let mut columns_values = HashMap::with_capacity(6);
columns_values.extend(primary_key_columns.into_iter());
columns_values.insert(
let _ = columns_values.insert(
"value".to_string(),
Arc::new(BinaryVector::from_slice(&[value])) as _,
);
let now = util::current_time_millis();
columns_values.insert(
let _ = columns_values.insert(
"gmt_created".to_string(),
Arc::new(TimestampMillisecondVector::from_slice([now])) as _,
);
columns_values.insert(
let _ = columns_values.insert(
"gmt_modified".to_string(),
Arc::new(TimestampMillisecondVector::from_slice([now])) as _,
);
@@ -343,6 +347,7 @@ pub fn decode_system_catalog(
table_name: table_meta.table_name,
table_id,
engine: table_meta.engine,
is_deleted: table_meta.is_deleted,
}))
}
}
@@ -399,6 +404,7 @@ pub struct TableEntry {
pub table_name: String,
pub table_id: TableId,
pub engine: String,
pub is_deleted: bool,
}
#[derive(Debug, Serialize, Deserialize, PartialEq, Eq)]
@@ -407,12 +413,19 @@ pub struct TableEntryValue {
#[serde(default = "mito_engine")]
pub engine: String,
#[serde(default = "not_deleted")]
pub is_deleted: bool,
}
fn mito_engine() -> String {
MITO_ENGINE.to_string()
}
fn not_deleted() -> bool {
false
}
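The hunks above replace physical deletion of system-catalog table entries with tombstone inserts (`is_deleted: true`), which is why the delete test now expects one batch instead of zero. On the read side this implies folding entries per key, with the latest entry winning and tombstoned keys hidden. A minimal sketch of that fold (the types and names here are hypothetical, not the crate's actual decode path):

```rust
use std::collections::HashMap;

#[derive(Clone)]
struct Entry {
    key: String,
    is_deleted: bool,
}

// Fold an append-only log of entries: the latest entry per key wins,
// and a trailing tombstone (`is_deleted: true`) hides the key entirely.
fn live_keys(log: &[Entry]) -> Vec<String> {
    let mut latest: HashMap<&str, bool> = HashMap::new();
    for e in log {
        latest.insert(e.key.as_str(), e.is_deleted);
    }
    let mut keys: Vec<String> = latest
        .into_iter()
        .filter(|&(_, deleted)| !deleted)
        .map(|(k, _)| k.to_string())
        .collect();
    keys.sort();
    keys
}

fn main() {
    let log = vec![
        Entry { key: "t1".into(), is_deleted: false },
        Entry { key: "t2".into(), is_deleted: false },
        Entry { key: "t1".into(), is_deleted: true }, // tombstone for t1
    ];
    assert_eq!(live_keys(&log), vec!["t2".to_string()]);
}
```

The `#[serde(default = "not_deleted")]` attribute keeps this backward compatible: entries written before the field existed deserialize as live.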
#[cfg(test)]
mod tests {
use common_recordbatch::RecordBatches;
@@ -482,14 +495,13 @@ mod tests {
}
#[test]
#[should_panic]
pub fn test_decode_mismatch() {
decode_system_catalog(
assert!(decode_system_catalog(
Some(EntryType::Table as u8),
Some("some_catalog.some_schema.42".as_bytes()),
None,
)
.unwrap();
.is_err());
}
#[test]
@@ -504,7 +516,7 @@ mod tests {
let dir = create_temp_dir("system-table-test");
let store_dir = dir.path().to_string_lossy();
let mut builder = object_store::services::Fs::default();
builder.root(&store_dir);
let _ = builder.root(&store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
let noop_compaction_scheduler = Arc::new(NoopCompactionScheduler::default());
let table_engine = Arc::new(MitoEngine::new(
@@ -572,6 +584,7 @@ mod tests {
table_name: "my_table".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
is_deleted: false,
});
assert_eq!(entry, expected);
@@ -583,11 +596,11 @@ mod tests {
},
1,
);
let result = catalog_table.delete(table_deletion).await.unwrap();
let result = catalog_table.insert(table_deletion).await.unwrap();
assert_eq!(result, 1);
let records = catalog_table.records().await.unwrap();
let batches = RecordBatches::try_collect(records).await.unwrap().take();
assert_eq!(batches.len(), 0);
assert_eq!(batches.len(), 1);
}
}


@@ -24,10 +24,7 @@ use session::context::QueryContext;
use snafu::{ensure, OptionExt};
use table::table::adapter::DfTableProviderAdapter;
use crate::error::{
CatalogNotFoundSnafu, QueryAccessDeniedSnafu, Result, SchemaNotFoundSnafu, TableNotExistSnafu,
};
use crate::information_schema::InformationSchemaProvider;
use crate::error::{QueryAccessDeniedSnafu, Result, TableNotExistSnafu};
use crate::CatalogManagerRef;
pub struct DfTableSourceProvider {
@@ -104,41 +101,18 @@ impl DfTableSourceProvider {
let schema_name = table_ref.schema.as_ref();
let table_name = table_ref.table.as_ref();
let schema = if schema_name != INFORMATION_SCHEMA_NAME {
let catalog = self
.catalog_manager
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
catalog
.schema(schema_name)
.await?
.context(SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?
} else {
let catalog_provider = self
.catalog_manager
.catalog(catalog_name)
.await?
.context(CatalogNotFoundSnafu { catalog_name })?;
Arc::new(InformationSchemaProvider::new(
catalog_name.to_string(),
catalog_provider,
))
};
let table = schema
.table(table_name)
let table = self
.catalog_manager
.table(catalog_name, schema_name, table_name)
.await?
.with_context(|| TableNotExistSnafu {
table: format_full_table_name(catalog_name, schema_name, table_name),
})?;
let table = DfTableProviderAdapter::new(table);
let table = provider_as_source(Arc::new(table));
self.resolved_tables.insert(resolved_name, table.clone());
Ok(table)
let provider = DfTableProviderAdapter::new(table);
let source = provider_as_source(Arc::new(provider));
let _ = self.resolved_tables.insert(resolved_name, source.clone());
Ok(source)
}
}
@@ -162,14 +136,14 @@ mod tests {
table: Cow::Borrowed("table_name"),
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_ok());
let _ = result.unwrap();
let table_ref = TableReference::Partial {
schema: Cow::Borrowed("public"),
table: Cow::Borrowed("table_name"),
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_ok());
let _ = result.unwrap();
let table_ref = TableReference::Partial {
schema: Cow::Borrowed("wrong_schema"),
@@ -184,7 +158,7 @@ mod tests {
table: Cow::Borrowed("table_name"),
};
let result = table_provider.resolve_table_ref(table_ref);
assert!(result.is_ok());
let _ = result.unwrap();
let table_ref = TableReference::Full {
catalog: Cow::Borrowed("wrong_catalog"),
@@ -198,14 +172,14 @@ mod tests {
schema: Cow::Borrowed("information_schema"),
table: Cow::Borrowed("columns"),
};
assert!(table_provider.resolve_table_ref(table_ref).is_ok());
let _ = table_provider.resolve_table_ref(table_ref).unwrap();
let table_ref = TableReference::Full {
catalog: Cow::Borrowed("greptime"),
schema: Cow::Borrowed("information_schema"),
table: Cow::Borrowed("columns"),
};
assert!(table_provider.resolve_table_ref(table_ref).is_ok());
let _ = table_provider.resolve_table_ref(table_ref).unwrap();
let table_ref = TableReference::Full {
catalog: Cow::Borrowed("dummy"),


@@ -14,50 +14,24 @@
// The `tables` table in the system catalog keeps a record of all tables created by users.
use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use common_catalog::consts::{INFORMATION_SCHEMA_NAME, SYSTEM_CATALOG_TABLE_NAME};
use common_telemetry::logging;
use snafu::ResultExt;
use table::metadata::TableId;
use table::{Table, TableRef};
use table::Table;
use crate::error::{self, Error, InsertCatalogRecordSnafu, Result as CatalogResult};
use crate::error::{self, InsertCatalogRecordSnafu, Result as CatalogResult};
use crate::system::{
build_schema_insert_request, build_table_deletion_request, build_table_insert_request,
SystemCatalogTable,
};
use crate::{CatalogProvider, DeregisterTableRequest, SchemaProvider, SchemaProviderRef};
use crate::DeregisterTableRequest;
pub struct InformationSchema {
pub system: Arc<SystemCatalogTable>,
}
#[async_trait]
impl SchemaProvider for InformationSchema {
fn as_any(&self) -> &dyn Any {
self
}
async fn table_names(&self) -> Result<Vec<String>, Error> {
Ok(vec![SYSTEM_CATALOG_TABLE_NAME.to_string()])
}
async fn table(&self, name: &str) -> Result<Option<TableRef>, Error> {
if name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME) {
Ok(Some(self.system.clone()))
} else {
Ok(None)
}
}
async fn table_exist(&self, name: &str) -> Result<bool, Error> {
Ok(name.eq_ignore_ascii_case(SYSTEM_CATALOG_TABLE_NAME))
}
}
pub struct SystemCatalog {
pub information_schema: Arc<InformationSchema>,
}
@@ -95,7 +69,7 @@ impl SystemCatalog {
) -> CatalogResult<()> {
self.information_schema
.system
.delete(build_table_deletion_request(request, table_id))
.insert(build_table_deletion_request(request, table_id))
.await
.map(|x| {
if x != 1 {
@@ -125,30 +99,3 @@ impl SystemCatalog {
.context(InsertCatalogRecordSnafu)
}
}
#[async_trait::async_trait]
impl CatalogProvider for SystemCatalog {
fn as_any(&self) -> &dyn Any {
self
}
async fn schema_names(&self) -> Result<Vec<String>, Error> {
Ok(vec![INFORMATION_SCHEMA_NAME.to_string()])
}
async fn register_schema(
&self,
_name: String,
_schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>, Error> {
panic!("System catalog does not support registering schema!")
}
async fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>, Error> {
if name.eq_ignore_ascii_case(INFORMATION_SCHEMA_NAME) {
Ok(Some(self.information_schema.clone()))
} else {
Ok(None)
}
}
}


@@ -22,15 +22,14 @@ mod tests {
use std::time::Duration;
use catalog::helper::{CatalogKey, CatalogValue, SchemaKey, SchemaValue};
use catalog::remote::mock::{MockKvBackend, MockTableEngine};
use catalog::remote::mock::MockTableEngine;
use catalog::remote::region_alive_keeper::RegionAliveKeepers;
use catalog::remote::{
CachedMetaKvBackend, KvBackend, KvBackendRef, RemoteCatalogManager, RemoteCatalogProvider,
RemoteSchemaProvider,
};
use catalog::{CatalogManager, RegisterTableRequest};
use catalog::remote::{CachedMetaKvBackend, RemoteCatalogManager};
use catalog::{CatalogManager, RegisterSchemaRequest, RegisterTableRequest};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, MITO_ENGINE};
use common_meta::ident::TableIdent;
use common_meta::kv_backend::memory::MemoryKvBackend;
use common_meta::kv_backend::KvBackend;
use datatypes::schema::RawSchema;
use futures_util::StreamExt;
use table::engine::manager::{MemoryTableEngineManager, TableEngineManagerRef};
@@ -40,7 +39,6 @@ mod tests {
use tokio::time::Instant;
struct TestingComponents {
kv_backend: KvBackendRef,
catalog_manager: Arc<RemoteCatalogManager>,
table_engine_manager: TableEngineManagerRef,
region_alive_keepers: Arc<RegionAliveKeepers>,
@@ -55,7 +53,7 @@ mod tests {
#[tokio::test]
async fn test_backend() {
common_telemetry::init_default_ut_logging();
let backend = MockKvBackend::default();
let backend = MemoryKvBackend::default();
let default_catalog_key = CatalogKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
@@ -84,7 +82,7 @@ mod tests {
let mut res = HashSet::new();
while let Some(r) = iter.next().await {
let kv = r.unwrap();
res.insert(String::from_utf8_lossy(&kv.0).to_string());
let _ = res.insert(String::from_utf8_lossy(&kv.0).to_string());
}
assert_eq!(
vec!["__c-greptime".to_string()],
@@ -94,8 +92,7 @@ mod tests {
#[tokio::test]
async fn test_cached_backend() {
common_telemetry::init_default_ut_logging();
let backend = CachedMetaKvBackend::wrap(Arc::new(MockKvBackend::default()));
let backend = CachedMetaKvBackend::wrap(Arc::new(MemoryKvBackend::default()));
let default_catalog_key = CatalogKey {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
@@ -111,7 +108,7 @@ mod tests {
.unwrap();
let ret = backend.get(b"__c-greptime").await.unwrap();
assert!(ret.is_some());
let _ = ret.unwrap();
let _ = backend
.compare_and_set(
@@ -123,13 +120,11 @@ mod tests {
.unwrap();
let ret = backend.get(b"__c-greptime").await.unwrap();
assert!(ret.is_some());
assert_eq!(&b"123"[..], &(ret.as_ref().unwrap().1));
let _ = backend.set(b"__c-greptime", b"1234").await;
let ret = backend.get(b"__c-greptime").await.unwrap();
assert!(ret.is_some());
assert_eq!(&b"1234"[..], &(ret.as_ref().unwrap().1));
backend.delete(b"__c-greptime").await.unwrap();
@@ -139,9 +134,11 @@ mod tests {
}
async fn prepare_components(node_id: u64) -> TestingComponents {
let cached_backend = Arc::new(CachedMetaKvBackend::wrap(
Arc::new(MockKvBackend::default()),
));
let backend = Arc::new(MemoryKvBackend::default());
backend.set(b"__c-greptime", b"").await.unwrap();
backend.set(b"__s-greptime-public", b"").await.unwrap();
let cached_backend = Arc::new(CachedMetaKvBackend::wrap(backend));
let table_engine = Arc::new(MockTableEngine::default());
let engine_manager = Arc::new(MemoryTableEngineManager::alias(
@@ -160,7 +157,6 @@ mod tests {
catalog_manager.start().await.unwrap();
TestingComponents {
kv_backend: cached_backend,
catalog_manager: Arc::new(catalog_manager),
table_engine_manager: engine_manager,
region_alive_keepers,
@@ -179,14 +175,12 @@ mod tests {
catalog_manager.catalog_names().await.unwrap()
);
let default_catalog = catalog_manager
.catalog(DEFAULT_CATALOG_NAME)
.await
.unwrap()
.unwrap();
assert_eq!(
vec![DEFAULT_SCHEMA_NAME.to_string()],
default_catalog.schema_names().await.unwrap()
catalog_manager
.schema_names(DEFAULT_CATALOG_NAME)
.await
.unwrap()
);
}
@@ -242,23 +236,15 @@ mod tests {
async fn test_register_table() {
let node_id = 42;
let components = prepare_components(node_id).await;
let catalog_manager = &components.catalog_manager;
let default_catalog = catalog_manager
.catalog(DEFAULT_CATALOG_NAME)
.await
.unwrap()
.unwrap();
assert_eq!(
vec![DEFAULT_SCHEMA_NAME.to_string()],
default_catalog.schema_names().await.unwrap()
components
.catalog_manager
.schema_names(DEFAULT_CATALOG_NAME)
.await
.unwrap()
);
let default_schema = default_catalog
.schema(DEFAULT_SCHEMA_NAME)
.await
.unwrap()
.unwrap();
// register a new table with a nonexistent catalog
let catalog_name = DEFAULT_CATALOG_NAME.to_string();
let schema_name = DEFAULT_SCHEMA_NAME.to_string();
@@ -293,10 +279,18 @@ mod tests {
table_id,
table,
};
assert!(catalog_manager.register_table(reg_req).await.unwrap());
assert!(components
.catalog_manager
.register_table(reg_req)
.await
.unwrap());
assert_eq!(
vec![table_name],
default_schema.table_names().await.unwrap()
components
.catalog_manager
.table_names(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.await
.unwrap()
);
}
@@ -304,29 +298,28 @@ mod tests {
async fn test_register_catalog_schema_table() {
let node_id = 42;
let components = prepare_components(node_id).await;
let backend = &components.kv_backend;
let catalog_manager = components.catalog_manager.clone();
let engine_manager = components.table_engine_manager.clone();
let catalog_name = "test_catalog".to_string();
let schema_name = "nonexistent_schema".to_string();
let catalog = Arc::new(RemoteCatalogProvider::new(
catalog_name.clone(),
backend.clone(),
engine_manager.clone(),
node_id,
components.region_alive_keepers.clone(),
));
// register catalog to catalog manager
CatalogManager::register_catalog(&*catalog_manager, catalog_name.clone(), catalog)
assert!(components
.catalog_manager
.register_catalog(catalog_name.clone())
.await
.unwrap();
.is_ok());
assert_eq!(
HashSet::<String>::from_iter(
vec![DEFAULT_CATALOG_NAME.to_string(), catalog_name.clone()].into_iter()
),
HashSet::from_iter(catalog_manager.catalog_names().await.unwrap().into_iter())
HashSet::from_iter(
components
.catalog_manager
.catalog_names()
.await
.unwrap()
.into_iter()
)
);
let table_to_register = components
@@ -359,38 +352,34 @@ mod tests {
};
// this register will fail since schema does not exist yet
assert_matches!(
catalog_manager
components
.catalog_manager
.register_table(reg_req.clone())
.await
.unwrap_err(),
catalog::error::Error::SchemaNotFound { .. }
);
let new_catalog = catalog_manager
.catalog(&catalog_name)
let register_schema_request = RegisterSchemaRequest {
catalog: catalog_name.to_string(),
schema: schema_name.to_string(),
};
assert!(components
.catalog_manager
.register_schema(register_schema_request)
.await
.unwrap()
.expect("catalog should exist since it's already registered");
let schema = Arc::new(RemoteSchemaProvider::new(
catalog_name.clone(),
schema_name.clone(),
node_id,
engine_manager,
backend.clone(),
components.region_alive_keepers.clone(),
));
let prev = new_catalog
.register_schema(schema_name.clone(), schema.clone())
.expect("Register schema should not fail"));
assert!(components
.catalog_manager
.register_table(reg_req)
.await
.expect("Register schema should not fail");
assert!(prev.is_none());
assert!(catalog_manager.register_table(reg_req).await.unwrap());
.unwrap());
assert_eq!(
HashSet::from([schema_name.clone()]),
new_catalog
.schema_names()
components
.catalog_manager
.schema_names(&catalog_name)
.await
.unwrap()
.into_iter()

View File

@@ -36,6 +36,7 @@ tonic.workspace = true
[dev-dependencies]
datanode = { path = "../datanode" }
derive-new = "0.5"
substrait = { path = "../common/substrait" }
tracing = "0.1"
tracing-subscriber = { version = "0.3", features = ["env-filter"] }

View File

@@ -0,0 +1,183 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::column::*;
use api::v1::*;
use client::{Client, Database, DEFAULT_SCHEMA_NAME};
use derive_new::new;
use tracing::{error, info};
fn main() {
tracing::subscriber::set_global_default(tracing_subscriber::FmtSubscriber::builder().finish())
.unwrap();
run();
}
#[tokio::main]
async fn run() {
let greptimedb_endpoint =
std::env::var("GREPTIMEDB_ENDPOINT").unwrap_or_else(|_| "localhost:4001".to_owned());
let greptimedb_dbname =
std::env::var("GREPTIMEDB_DBNAME").unwrap_or_else(|_| DEFAULT_SCHEMA_NAME.to_owned());
let grpc_client = Client::with_urls(vec![&greptimedb_endpoint]);
let client = Database::new_with_dbname(greptimedb_dbname, grpc_client);
let stream_inserter = client.streaming_inserter().unwrap();
if let Err(e) = stream_inserter
.insert(vec![to_insert_request(weather_records_1())])
.await
{
error!("Error: {e}");
}
if let Err(e) = stream_inserter
.insert(vec![to_insert_request(weather_records_2())])
.await
{
error!("Error: {e}");
}
let result = stream_inserter.finish().await;
match result {
Ok(rows) => {
info!("Rows written: {rows}");
}
Err(e) => {
error!("Error: {e}");
}
};
}
#[derive(new)]
struct WeatherRecord {
timestamp_millis: i64,
collector: String,
temperature: f32,
humidity: i32,
}
fn weather_records_1() -> Vec<WeatherRecord> {
vec![
WeatherRecord::new(1686109527000, "c1".to_owned(), 26.4, 15),
WeatherRecord::new(1686023127000, "c1".to_owned(), 29.3, 20),
WeatherRecord::new(1685936727000, "c1".to_owned(), 31.8, 13),
WeatherRecord::new(1686109527000, "c2".to_owned(), 20.4, 67),
WeatherRecord::new(1686023127000, "c2".to_owned(), 18.0, 74),
WeatherRecord::new(1685936727000, "c2".to_owned(), 19.2, 81),
]
}
fn weather_records_2() -> Vec<WeatherRecord> {
vec![
WeatherRecord::new(1686109527001, "c3".to_owned(), 26.4, 15),
WeatherRecord::new(1686023127002, "c3".to_owned(), 29.3, 20),
WeatherRecord::new(1685936727003, "c3".to_owned(), 31.8, 13),
WeatherRecord::new(1686109527004, "c4".to_owned(), 20.4, 67),
WeatherRecord::new(1686023127005, "c4".to_owned(), 18.0, 74),
WeatherRecord::new(1685936727006, "c4".to_owned(), 19.2, 81),
]
}
/// This function bundles the given weather records into an
/// `InsertRequest` by transposing them into columns.
///
/// Data structure:
///
/// - `ts`: a timestamp column
/// - `collector`: a tag column
/// - `temperature`: a value field of f32
/// - `humidity`: a value field of i32
///
fn to_insert_request(records: Vec<WeatherRecord>) -> InsertRequest {
// convert records into columns
let rows = records.len();
// transpose records into columns
let (timestamp_millis, collectors, temp, humidity) = records.into_iter().fold(
(
Vec::with_capacity(rows),
Vec::with_capacity(rows),
Vec::with_capacity(rows),
Vec::with_capacity(rows),
),
|mut acc, rec| {
acc.0.push(rec.timestamp_millis);
acc.1.push(rec.collector);
acc.2.push(rec.temperature);
acc.3.push(rec.humidity);
acc
},
);
let columns = vec![
// timestamp column: `ts`
Column {
column_name: "ts".to_owned(),
values: Some(column::Values {
ts_millisecond_values: timestamp_millis,
..Default::default()
}),
semantic_type: SemanticType::Timestamp as i32,
datatype: ColumnDataType::TimestampMillisecond as i32,
..Default::default()
},
// tag column: collectors
Column {
column_name: "collector".to_owned(),
values: Some(column::Values {
string_values: collectors.into_iter().collect(),
..Default::default()
}),
semantic_type: SemanticType::Tag as i32,
datatype: ColumnDataType::String as i32,
..Default::default()
},
// field column: temperature
Column {
column_name: "temperature".to_owned(),
values: Some(column::Values {
f32_values: temp,
..Default::default()
}),
semantic_type: SemanticType::Field as i32,
datatype: ColumnDataType::Float32 as i32,
..Default::default()
},
// field column: humidity
Column {
column_name: "humidity".to_owned(),
values: Some(column::Values {
i32_values: humidity,
..Default::default()
}),
semantic_type: SemanticType::Field as i32,
datatype: ColumnDataType::Int32 as i32,
..Default::default()
},
];
InsertRequest {
table_name: "weather_demo".to_owned(),
columns,
row_count: rows as u32,
..Default::default()
}
}
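The fold above transposes row-oriented records into per-column vectors before they are wrapped in `Column` values. A minimal, self-contained sketch of the same pattern, using a simplified `Rec` tuple struct standing in for `WeatherRecord`:

```rust
// Simplified stand-in for WeatherRecord: (timestamp, collector, temperature).
struct Rec(i64, String, f32);

// Transpose rows into three column vectors, mirroring the fold in
// to_insert_request: each accumulator slot collects one field.
fn transpose(recs: Vec<Rec>) -> (Vec<i64>, Vec<String>, Vec<f32>) {
    let n = recs.len();
    recs.into_iter().fold(
        (
            Vec::with_capacity(n),
            Vec::with_capacity(n),
            Vec::with_capacity(n),
        ),
        |mut acc, r| {
            acc.0.push(r.0);
            acc.1.push(r.1);
            acc.2.push(r.2);
            acc
        },
    )
}

fn main() {
    let (ts, collectors, temps) = transpose(vec![
        Rec(1686109527000, "c1".to_owned(), 26.4),
        Rec(1686109527000, "c2".to_owned(), 20.4),
    ]);
    assert_eq!(ts.len(), 2);
    assert_eq!(collectors, vec!["c1".to_string(), "c2".to_string()]);
    assert_eq!(temps, vec![26.4, 20.4]);
}
```

Pre-sizing each vector with `Vec::with_capacity(rows)` avoids reallocations during the fold, which matters when batches are large.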

View File

@@ -165,7 +165,7 @@ impl Client {
pub async fn health_check(&self) -> Result<()> {
let (_, channel) = self.find_channel()?;
let mut client = HealthCheckClient::new(channel);
client.health_check(HealthCheckRequest {}).await?;
let _ = client.health_check(HealthCheckRequest {}).await?;
Ok(())
}
}

View File

@@ -29,14 +29,11 @@ use common_telemetry::{logging, timer};
use futures_util::{TryFutureExt, TryStreamExt};
use prost::Message;
use snafu::{ensure, ResultExt};
use tokio::sync::mpsc::Sender;
use tokio::sync::{mpsc, OnceCell};
use tokio_stream::wrappers::ReceiverStream;
use crate::error::{
ConvertFlightDataSnafu, IllegalDatabaseResponseSnafu, IllegalFlightMessagesSnafu,
};
use crate::{error, metrics, Client, Result};
use crate::{error, metrics, Client, Result, StreamInserter};
#[derive(Clone, Debug, Default)]
pub struct Database {
@@ -50,7 +47,6 @@ pub struct Database {
dbname: String,
client: Client,
streaming_client: OnceCell<Sender<GreptimeRequest>>,
ctx: FlightContext,
}
@@ -62,7 +58,6 @@ impl Database {
schema: schema.into(),
dbname: "".to_string(),
client,
streaming_client: OnceCell::new(),
ctx: FlightContext::default(),
}
}
@@ -80,7 +75,6 @@ impl Database {
schema: "".to_string(),
dbname: dbname.into(),
client,
streaming_client: OnceCell::new(),
ctx: FlightContext::default(),
}
}
@@ -120,20 +114,24 @@ impl Database {
self.handle(Request::Inserts(requests)).await
}
pub async fn insert_to_stream(&self, requests: InsertRequests) -> Result<()> {
let streaming_client = self
.streaming_client
.get_or_try_init(|| self.client_stream())
.await?;
pub fn streaming_inserter(&self) -> Result<StreamInserter> {
self.streaming_inserter_with_channel_size(65536)
}
let request = self.to_rpc_request(Request::Inserts(requests));
pub fn streaming_inserter_with_channel_size(
&self,
channel_size: usize,
) -> Result<StreamInserter> {
let client = self.client.make_database_client()?.inner;
streaming_client.send(request).await.map_err(|e| {
error::ClientStreamingSnafu {
err_msg: e.to_string(),
}
.build()
})
let stream_inserter = StreamInserter::new(
client,
self.dbname().to_string(),
self.ctx.auth_header.clone(),
channel_size,
);
Ok(stream_inserter)
}
pub async fn delete(&self, request: DeleteRequest) -> Result<u32> {
@@ -169,14 +167,6 @@ impl Database {
}
}
async fn client_stream(&self) -> Result<Sender<GreptimeRequest>> {
let mut client = self.client.make_database_client()?.inner;
let (sender, receiver) = mpsc::channel::<GreptimeRequest>(65536);
let receiver = ReceiverStream::new(receiver);
client.handle_requests(receiver).await?;
Ok(sender)
}
pub async fn sql(&self, sql: &str) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_SQL);
self.do_get(Request::Query(QueryRequest {

View File

@@ -18,6 +18,7 @@ mod database;
mod error;
pub mod load_balance;
mod metrics;
mod stream_insert;
pub use api;
pub use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
@@ -25,3 +26,4 @@ pub use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
pub use self::client::Client;
pub use self::database::Database;
pub use self::error::{Error, Result};
pub use self::stream_insert::StreamInserter;

View File

@@ -60,7 +60,7 @@ mod tests {
let random = Random;
for _ in 0..100 {
let peer = random.get_peer(&peers).unwrap();
all.contains(peer);
assert!(all.contains(peer));
}
}
}

View File

@@ -0,0 +1,115 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::greptime_database_client::GreptimeDatabaseClient;
use api::v1::greptime_request::Request;
use api::v1::{
greptime_response, AffectedRows, AuthHeader, GreptimeRequest, GreptimeResponse, InsertRequest,
InsertRequests, RequestHeader,
};
use snafu::OptionExt;
use tokio::sync::mpsc;
use tokio::task::JoinHandle;
use tokio_stream::wrappers::ReceiverStream;
use tonic::transport::Channel;
use tonic::{Response, Status};
use crate::error::{self, IllegalDatabaseResponseSnafu, Result};
/// Provides methods for streaming data insertion.
///
/// A [`StreamInserter`] cannot be constructed directly via `StreamInserter::new`
/// (which is crate-private). Obtain one like this:
///
/// ```ignore
/// let grpc_client = Client::with_urls(vec!["127.0.0.1:4002"]);
/// let client = Database::new_with_dbname("db_name", grpc_client);
/// let stream_inserter = client.streaming_inserter().unwrap();
/// ```
///
/// For a complete usage example, see
/// [stream_ingest.rs](https://github.com/GreptimeTeam/greptimedb/blob/develop/src/client/examples/stream_ingest.rs).
pub struct StreamInserter {
sender: mpsc::Sender<GreptimeRequest>,
auth_header: Option<AuthHeader>,
dbname: String,
join: JoinHandle<std::result::Result<Response<GreptimeResponse>, Status>>,
}
impl StreamInserter {
pub(crate) fn new(
mut client: GreptimeDatabaseClient<Channel>,
dbname: String,
auth_header: Option<AuthHeader>,
channel_size: usize,
) -> StreamInserter {
let (send, recv) = tokio::sync::mpsc::channel(channel_size);
let join: JoinHandle<std::result::Result<Response<GreptimeResponse>, Status>> =
tokio::spawn(async move {
let recv_stream = ReceiverStream::new(recv);
client.handle_requests(recv_stream).await
});
StreamInserter {
sender: send,
auth_header,
dbname,
join,
}
}
pub async fn insert(&self, requests: Vec<InsertRequest>) -> Result<()> {
let inserts = InsertRequests { inserts: requests };
let request = self.to_rpc_request(Request::Inserts(inserts));
self.sender.send(request).await.map_err(|e| {
error::ClientStreamingSnafu {
err_msg: e.to_string(),
}
.build()
})
}
pub async fn finish(self) -> Result<u32> {
drop(self.sender);
let response = self.join.await.unwrap()?;
let response = response
.into_inner()
.response
.context(IllegalDatabaseResponseSnafu {
err_msg: "GreptimeResponse is empty",
})?;
let greptime_response::Response::AffectedRows(AffectedRows { value }) = response;
Ok(value)
}
fn to_rpc_request(&self, request: Request) -> GreptimeRequest {
GreptimeRequest {
header: Some(RequestHeader {
authorization: self.auth_header.clone(),
dbname: self.dbname.clone(),
..Default::default()
}),
request: Some(request),
}
}
}
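The design here — buffer requests through a bounded channel while a spawned task drains them, then close the sender and join for the aggregate result — can be sketched with std primitives. This is an illustrative analogue under simplified assumptions (a blocking thread instead of a tokio task, `Vec<i32>` batches instead of `InsertRequests`), not the tonic-backed implementation:

```rust
use std::sync::mpsc;
use std::thread;

// Analogue of StreamInserter: a bounded channel feeds a background
// worker; dropping the sender closes the stream, and joining the
// worker yields the aggregate result (here, total rows received).
struct Inserter {
    sender: mpsc::SyncSender<Vec<i32>>,
    join: thread::JoinHandle<u32>,
}

impl Inserter {
    fn new(channel_size: usize) -> Self {
        let (sender, receiver) = mpsc::sync_channel(channel_size);
        let join = thread::spawn(move || {
            // Consume batches until the channel closes.
            receiver.into_iter().map(|b: Vec<i32>| b.len() as u32).sum()
        });
        Inserter { sender, join }
    }

    fn insert(&self, batch: Vec<i32>) -> Result<(), String> {
        // Blocks when the channel is full, providing backpressure.
        self.sender.send(batch).map_err(|e| e.to_string())
    }

    fn finish(self) -> u32 {
        drop(self.sender); // end the stream, as StreamInserter::finish does
        self.join.join().expect("worker panicked")
    }
}

fn main() {
    let ins = Inserter::new(8);
    ins.insert(vec![1, 2, 3]).unwrap();
    ins.insert(vec![4, 5]).unwrap();
    assert_eq!(ins.finish(), 5);
}
```

The bounded channel is the key choice: a full channel makes `insert` wait rather than buffering unboundedly, which is why `streaming_inserter_with_channel_size` exposes the capacity.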

View File

@@ -10,7 +10,9 @@ name = "greptime"
path = "src/bin/greptime.rs"
[features]
default = ["metrics-process"]
tokio-console = ["common-telemetry/tokio-console"]
metrics-process = ["servers/metrics-process"]
[dependencies]
anymap = "1.0.0-beta.2"

View File

@@ -19,6 +19,7 @@ use std::time::Instant;
use catalog::remote::CachedMetaKvBackend;
use client::client_manager::DatanodeClients;
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_base::Plugins;
use common_error::prelude::ErrorExt;
use common_query::Output;
use common_recordbatch::RecordBatches;
@@ -107,7 +108,7 @@ impl Repl {
Ok(ref line) => {
let request = line.trim();
self.rl.add_history_entry(request.to_string());
let _ = self.rl.add_history_entry(request.to_string());
request.try_into()
}
@@ -136,7 +137,7 @@ impl Repl {
}
}
ReplCommand::Sql { sql } => {
self.execute_sql(sql).await;
let _ = self.execute_sql(sql).await;
}
ReplCommand::Exit => {
return Ok(());
@@ -266,13 +267,14 @@ async fn create_query_engine(meta_addr: &str) -> Result<DatafusionQueryEngine> {
partition_manager,
datanode_clients,
));
let plugins: Arc<Plugins> = Default::default();
let state = Arc::new(QueryEngineState::new(
catalog_list,
false,
None,
None,
Default::default(),
plugins.clone(),
));
Ok(DatafusionQueryEngine::new(state))
Ok(DatafusionQueryEngine::new(state, plugins))
}

View File

@@ -170,7 +170,9 @@ impl StartCommand {
logging::info!("Datanode start command: {:#?}", self);
logging::info!("Datanode options: {:#?}", opts);
let datanode = Datanode::new(opts).await.context(StartDatanodeSnafu)?;
let datanode = Datanode::new(opts, Default::default())
.await
.context(StartDatanodeSnafu)?;
Ok(Instance { datanode })
}
@@ -324,12 +326,12 @@ mod tests {
.is_err());
// Providing node_id but leave metasrv_addr absent is ok since metasrv_addr has default value
(StartCommand {
assert!((StartCommand {
node_id: Some(42),
..Default::default()
})
.load_options(TopLevelOptions::default())
.unwrap();
.is_ok());
}
#[test]

View File

@@ -236,6 +236,7 @@ mod tests {
use std::io::Write;
use std::time::Duration;
use common_base::readable_size::ReadableSize;
use common_test_util::temp_dir::create_named_temp_file;
use frontend::service_config::GrpcOptions;
use servers::auth::{Identity, Password, UserProviderRef};
@@ -260,6 +261,10 @@ mod tests {
command.load_options(TopLevelOptions::default()).unwrap() else { unreachable!() };
assert_eq!(opts.http_options.as_ref().unwrap().addr, "127.0.0.1:1234");
assert_eq!(
ReadableSize::mb(64),
opts.http_options.as_ref().unwrap().body_limit
);
assert_eq!(opts.mysql_options.as_ref().unwrap().addr, "127.0.0.1:5678");
assert_eq!(
opts.postgres_options.as_ref().unwrap().addr,
@@ -301,6 +306,7 @@ mod tests {
[http_options]
addr = "127.0.0.1:4000"
timeout = "30s"
body_limit = "2GB"
[logging]
level = "debug"
@@ -326,6 +332,11 @@ mod tests {
fe_opts.http_options.as_ref().unwrap().timeout
);
assert_eq!(
ReadableSize::gb(2),
fe_opts.http_options.as_ref().unwrap().body_limit
);
assert_eq!("debug", fe_opts.logging.level.as_ref().unwrap());
assert_eq!("/tmp/greptimedb/test/logs".to_string(), fe_opts.logging.dir);
}
@@ -339,19 +350,15 @@ mod tests {
};
let plugins = load_frontend_plugins(&command.user_provider);
assert!(plugins.is_ok());
let plugins = plugins.unwrap();
let provider = plugins.get::<UserProviderRef>();
assert!(provider.is_some());
let provider = provider.unwrap();
let provider = plugins.get::<UserProviderRef>().unwrap();
let result = provider
.authenticate(
Identity::UserId("test", None),
Password::PlainText("test".to_string().into()),
)
.await;
assert!(result.is_ok());
let _ = result.unwrap();
}
#[test]

View File

@@ -132,6 +132,7 @@ impl StandaloneOptions {
prom_options: self.prom_options,
meta_client_options: None,
logging: self.logging,
..Default::default()
}
}
@@ -308,7 +309,7 @@ impl StartCommand {
fe_opts, dn_opts
);
let datanode = Datanode::new(dn_opts.clone())
let datanode = Datanode::new(dn_opts.clone(), Default::default())
.await
.context(StartDatanodeSnafu)?;
@@ -341,6 +342,7 @@ mod tests {
use std::io::Write;
use std::time::Duration;
use common_base::readable_size::ReadableSize;
use common_test_util::temp_dir::create_named_temp_file;
use servers::auth::{Identity, Password, UserProviderRef};
use servers::Mode;
@@ -356,18 +358,15 @@ mod tests {
};
let plugins = load_frontend_plugins(&command.user_provider);
assert!(plugins.is_ok());
let plugins = plugins.unwrap();
let provider = plugins.get::<UserProviderRef>();
assert!(provider.is_some());
let provider = provider.unwrap();
let provider = plugins.get::<UserProviderRef>().unwrap();
let result = provider
.authenticate(
Identity::UserId("test", None),
Password::PlainText("test".to_string().into()),
)
.await;
assert!(result.is_ok());
let _ = result.unwrap();
}
#[test]
@@ -411,6 +410,7 @@ mod tests {
[http_options]
addr = "127.0.0.1:4000"
timeout = "30s"
body_limit = "128MB"
[logging]
level = "debug"
@@ -436,6 +436,10 @@ mod tests {
Duration::from_secs(30),
fe_opts.http_options.as_ref().unwrap().timeout
);
assert_eq!(
ReadableSize::mb(128),
fe_opts.http_options.as_ref().unwrap().body_limit
);
assert_eq!(
"127.0.0.1:4001".to_string(),
fe_opts.grpc_options.unwrap().addr
@@ -562,6 +566,10 @@ mod tests {
opts.fe_opts.http_options.as_ref().unwrap().addr,
"127.0.0.1:14000"
);
assert_eq!(
ReadableSize::mb(64),
opts.fe_opts.http_options.as_ref().unwrap().body_limit
);
// Should be default value.
assert_eq!(

View File

@@ -27,10 +27,10 @@ mod tests {
impl Repl {
fn send_line(&mut self, line: &str) {
self.repl.send_line(line).unwrap();
let _ = self.repl.send_line(line).unwrap();
// read a line to consume the prompt
self.read_line();
let _ = self.read_line();
}
fn read_line(&mut self) -> String {
@@ -76,7 +76,7 @@ mod tests {
std::thread::sleep(Duration::from_secs(3));
let mut repl_cmd = Command::new("./greptime");
repl_cmd.current_dir(bin_path).args([
let _ = repl_cmd.current_dir(bin_path).args([
"--log-level=off",
"cli",
"attach",
@@ -105,7 +105,7 @@ mod tests {
test_select(repl);
datanode.kill().unwrap();
datanode.wait().unwrap();
let _ = datanode.wait().unwrap();
}
fn test_create_database(repl: &mut Repl) {

View File

@@ -41,7 +41,7 @@ impl Plugins {
}
pub fn insert<T: 'static + Send + Sync>(&self, value: T) {
self.lock().insert(value);
let _ = self.lock().insert(value);
}
pub fn get<T: 'static + Send + Sync + Clone>(&self) -> Option<T> {

View File

@@ -119,7 +119,7 @@ mod tests {
#[test]
fn [<test_read_write_ $num_ty _from_vec_buffer>]() {
let mut buf = vec![];
assert!(buf.[<write_ $num_ty _le>]($num_ty::MAX).is_ok());
let _ = buf.[<write_ $num_ty _le>]($num_ty::MAX).unwrap();
assert_eq!($num_ty::MAX, buf.as_slice().[<read_ $num_ty _le>]().unwrap());
}
}
@@ -132,7 +132,7 @@ mod tests {
#[test]
pub fn test_peek_write_from_vec_buffer() {
let mut buf: Vec<u8> = vec![];
assert!(buf.write_from_slice("hello".as_bytes()).is_ok());
buf.write_from_slice("hello".as_bytes()).unwrap();
let mut slice = buf.as_slice();
assert_eq!(104, slice.peek_u8_le().unwrap());
slice.advance_by(1);

View File

@@ -27,6 +27,8 @@ pub const MAX_SYS_TABLE_ID: u32 = MIN_USER_TABLE_ID - 1;
pub const SYSTEM_CATALOG_TABLE_ID: u32 = 0;
/// scripts table id
pub const SCRIPTS_TABLE_ID: u32 = 1;
/// numbers table id
pub const NUMBERS_TABLE_ID: u32 = 2;
pub const MITO_ENGINE: &str = "mito";
pub const IMMUTABLE_FILE_ENGINE: &str = "file";

View File

@@ -24,6 +24,7 @@ datafusion.workspace = true
derive_builder = "0.12"
futures.workspace = true
object-store = { path = "../../object-store" }
orc-rust = { git = "https://github.com/WenyXu/orc-rs.git", rev = "0319acd32456e403c20f135cc012441a76852605" }
regex = "1.7"
snafu.workspace = true
tokio.workspace = true

View File

@@ -54,6 +54,12 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to build orc reader, source: {}", source))]
OrcReader {
location: Location,
source: orc_rust::error::Error,
},
#[snafu(display("Failed to read object from path: {}, source: {}", path, source))]
ReadObject {
path: String,
@@ -171,7 +177,8 @@ impl ErrorExt for Error {
| ReadRecordBatch { .. }
| WriteRecordBatch { .. }
| EncodeRecordBatch { .. }
| BufferedWriterClosed { .. } => StatusCode::Unexpected,
| BufferedWriterClosed { .. }
| OrcReader { .. } => StatusCode::Unexpected,
}
}
@@ -182,6 +189,7 @@ impl ErrorExt for Error {
fn location_opt(&self) -> Option<common_error::snafu::Location> {
use Error::*;
match self {
OrcReader { location, .. } => Some(*location),
BuildBackend { location, .. } => Some(*location),
ReadObject { location, .. } => Some(*location),
ListObjects { location, .. } => Some(*location),

View File

@@ -14,6 +14,7 @@
pub mod csv;
pub mod json;
pub mod orc;
pub mod parquet;
#[cfg(test)]
pub mod tests;
@@ -38,6 +39,7 @@ use snafu::ResultExt;
use self::csv::CsvFormat;
use self::json::JsonFormat;
use self::orc::OrcFormat;
use self::parquet::ParquetFormat;
use crate::buffered_writer::{DfRecordBatchEncoder, LazyBufferedWriter};
use crate::compression::CompressionType;
@@ -56,6 +58,7 @@ pub enum Format {
Csv(CsvFormat),
Json(JsonFormat),
Parquet(ParquetFormat),
Orc(OrcFormat),
}
impl Format {
@@ -64,6 +67,7 @@ impl Format {
Format::Csv(_) => ".csv",
Format::Json(_) => ".json",
Format::Parquet(_) => ".parquet",
Format::Orc(_) => ".orc",
}
}
}
@@ -81,6 +85,7 @@ impl TryFrom<&HashMap<String, String>> for Format {
"CSV" => Ok(Self::Csv(CsvFormat::try_from(options)?)),
"JSON" => Ok(Self::Json(JsonFormat::try_from(options)?)),
"PARQUET" => Ok(Self::Parquet(ParquetFormat::default())),
"ORC" => Ok(Self::Orc(OrcFormat)),
_ => error::UnsupportedFormatSnafu { format: &format }.fail(),
}
}
@@ -208,7 +213,7 @@ pub async fn stream_to_file<T: DfRecordBatchEncoder, U: Fn(SharedBuffer) -> T>(
}
// Flushes all pending writes
writer.try_flush(true).await?;
let _ = writer.try_flush(true).await?;
writer.close_inner_writer().await?;
Ok(rows)

View File

@@ -291,20 +291,20 @@ mod tests {
#[test]
fn test_try_from() {
let mut map = HashMap::new();
let map = HashMap::new();
let format: CsvFormat = CsvFormat::try_from(&map).unwrap();
assert_eq!(format, CsvFormat::default());
map.insert(
FORMAT_SCHEMA_INFER_MAX_RECORD.to_string(),
"2000".to_string(),
);
map.insert(FORMAT_COMPRESSION_TYPE.to_string(), "zstd".to_string());
map.insert(FORMAT_DELIMITER.to_string(), b'\t'.to_string());
map.insert(FORMAT_HAS_HEADER.to_string(), "false".to_string());
let map = HashMap::from([
(
FORMAT_SCHEMA_INFER_MAX_RECORD.to_string(),
"2000".to_string(),
),
(FORMAT_COMPRESSION_TYPE.to_string(), "zstd".to_string()),
(FORMAT_DELIMITER.to_string(), b'\t'.to_string()),
(FORMAT_HAS_HEADER.to_string(), "false".to_string()),
]);
let format = CsvFormat::try_from(&map).unwrap();
assert_eq!(

View File

@@ -214,18 +214,18 @@ mod tests {
#[test]
fn test_try_from() {
let mut map = HashMap::new();
let map = HashMap::new();
let format = JsonFormat::try_from(&map).unwrap();
assert_eq!(format, JsonFormat::default());
map.insert(
FORMAT_SCHEMA_INFER_MAX_RECORD.to_string(),
"2000".to_string(),
);
map.insert(FORMAT_COMPRESSION_TYPE.to_string(), "zstd".to_string());
let map = HashMap::from([
(
FORMAT_SCHEMA_INFER_MAX_RECORD.to_string(),
"2000".to_string(),
),
(FORMAT_COMPRESSION_TYPE.to_string(), "zstd".to_string()),
]);
let format = JsonFormat::try_from(&map).unwrap();
assert_eq!(

View File

@@ -0,0 +1,102 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::pin::Pin;
use std::task::{Context, Poll};
use arrow_schema::{Schema, SchemaRef};
use async_trait::async_trait;
use datafusion::arrow::record_batch::RecordBatch as DfRecordBatch;
use datafusion::error::{DataFusionError, Result as DfResult};
use datafusion::physical_plan::RecordBatchStream;
use futures::Stream;
use object_store::ObjectStore;
use orc_rust::arrow_reader::{create_arrow_schema, Cursor};
use orc_rust::async_arrow_reader::ArrowStreamReader;
pub use orc_rust::error::Error as OrcError;
use orc_rust::reader::Reader;
use snafu::ResultExt;
use tokio::io::{AsyncRead, AsyncSeek};
use crate::error::{self, Result};
use crate::file_format::FileFormat;
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub struct OrcFormat;
pub async fn new_orc_cursor<R: AsyncRead + AsyncSeek + Unpin + Send + 'static>(
reader: R,
) -> Result<Cursor<R>> {
let reader = Reader::new_async(reader)
.await
.context(error::OrcReaderSnafu)?;
let cursor = Cursor::root(reader).context(error::OrcReaderSnafu)?;
Ok(cursor)
}
pub async fn new_orc_stream_reader<R: AsyncRead + AsyncSeek + Unpin + Send + 'static>(
reader: R,
) -> Result<ArrowStreamReader<R>> {
let cursor = new_orc_cursor(reader).await?;
Ok(ArrowStreamReader::new(cursor, None))
}
pub async fn infer_orc_schema<R: AsyncRead + AsyncSeek + Unpin + Send + 'static>(
reader: R,
) -> Result<Schema> {
let cursor = new_orc_cursor(reader).await?;
Ok(create_arrow_schema(&cursor))
}
pub struct OrcArrowStreamReaderAdapter<T: AsyncRead + AsyncSeek + Unpin + Send + 'static> {
stream: ArrowStreamReader<T>,
}
impl<T: AsyncRead + AsyncSeek + Unpin + Send + 'static> OrcArrowStreamReaderAdapter<T> {
pub fn new(stream: ArrowStreamReader<T>) -> Self {
Self { stream }
}
}
impl<T: AsyncRead + AsyncSeek + Unpin + Send + 'static> RecordBatchStream
for OrcArrowStreamReaderAdapter<T>
{
fn schema(&self) -> SchemaRef {
self.stream.schema()
}
}
impl<T: AsyncRead + AsyncSeek + Unpin + Send + 'static> Stream for OrcArrowStreamReaderAdapter<T> {
type Item = DfResult<DfRecordBatch>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let batch = futures::ready!(Pin::new(&mut self.stream).poll_next(cx))
.map(|r| r.map_err(|e| DataFusionError::External(Box::new(e))));
Poll::Ready(batch)
}
}
#[async_trait]
impl FileFormat for OrcFormat {
async fn infer_schema(&self, store: &ObjectStore, path: &str) -> Result<Schema> {
let reader = store
.reader(path)
.await
.context(error::ReadObjectSnafu { path })?;
let schema = infer_orc_schema(reader).await?;
Ok(schema)
}
}
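The adapter above does one job: forward batches from the ORC stream while re-boxing each `orc_rust` error as `DataFusionError::External(Box::new(e))`. The same error-mapping shape can be sketched with std-only iterators (`Adapter` and `OrcError` here are illustrative stand-ins, not the real types):

```rust
use std::error::Error;
use std::fmt;

// Illustrative stand-in for the underlying ORC reader's error type.
#[derive(Debug)]
struct OrcError(String);

impl fmt::Display for OrcError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "orc error: {}", self.0)
    }
}

impl Error for OrcError {}

// Wraps an inner iterator of Result<T, OrcError> and re-boxes every
// error, mirroring the DataFusionError::External(Box::new(e)) mapping.
struct Adapter<I>(I);

impl<I, T> Iterator for Adapter<I>
where
    I: Iterator<Item = Result<T, OrcError>>,
{
    type Item = Result<T, Box<dyn Error>>;

    fn next(&mut self) -> Option<Self::Item> {
        self.0
            .next()
            .map(|r| r.map_err(|e| Box::new(e) as Box<dyn Error>))
    }
}

fn main() {
    let inner = vec![Ok(1), Err(OrcError("bad stripe".into())), Ok(3)].into_iter();
    let out: Vec<_> = Adapter(inner).collect();
    assert!(out[0].is_ok());
    assert!(out[1].is_err());
    assert!(out[2].is_ok());
}
```

The real `poll_next` does the same per-item `map_err`, just over an async `Stream` instead of an `Iterator`.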

@@ -20,7 +20,7 @@ use crate::error::{BuildBackendSnafu, Result};
pub fn build_fs_backend(root: &str) -> Result<ObjectStore> {
let mut builder = Fs::default();
builder.root(root);
let _ = builder.root(root);
let object_store = ObjectStore::new(builder)
.context(BuildBackendSnafu)?
.finish();

@@ -34,28 +34,26 @@ pub fn build_s3_backend(
) -> Result<ObjectStore> {
let mut builder = S3::default();
builder.root(path);
builder.bucket(host);
let _ = builder.root(path).bucket(host);
if let Some(endpoint) = connection.get(ENDPOINT_URL) {
builder.endpoint(endpoint);
let _ = builder.endpoint(endpoint);
}
if let Some(region) = connection.get(REGION) {
builder.region(region);
let _ = builder.region(region);
}
if let Some(key_id) = connection.get(ACCESS_KEY_ID) {
builder.access_key_id(key_id);
let _ = builder.access_key_id(key_id);
}
if let Some(key) = connection.get(SECRET_ACCESS_KEY) {
builder.secret_access_key(key);
let _ = builder.secret_access_key(key);
}
if let Some(session_token) = connection.get(SESSION_TOKEN) {
builder.security_token(session_token);
let _ = builder.security_token(session_token);
}
if let Some(enable_str) = connection.get(ENABLE_VIRTUAL_HOST_STYLE) {
@@ -69,7 +67,7 @@ pub fn build_s3_backend(
.build()
})?;
if enable {
builder.enable_virtual_host_style();
let _ = builder.enable_virtual_host_style();
}
}
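The recurring `let _ =` changes in these hunks silence the `unused_results` / `#[must_use]` lints: the builder methods return the builder for chaining, and that return value must now be discarded explicitly when it is not used. A minimal illustration with a hypothetical builder (not the actual OpenDAL API):

```rust
#[derive(Default, Debug)]
struct Builder {
    root: String,
}

impl Builder {
    // Hypothetical chaining setter in the style of the OpenDAL builders.
    #[must_use = "builder methods return the builder for chaining"]
    fn root(&mut self, root: &str) -> &mut Self {
        self.root = root.to_string();
        self
    }
}

fn main() {
    let mut builder = Builder::default();
    // Calling `builder.root(...)` as a bare statement would trip the
    // must_use / unused_results lints; `let _ =` discards it explicitly.
    let _ = builder.root("/tmp/data");
    assert_eq!(builder.root, "/tmp/data");
}
```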

@@ -55,7 +55,7 @@ pub fn format_schema(schema: Schema) -> Vec<String> {
pub fn test_store(root: &str) -> ObjectStore {
let mut builder = Fs::default();
builder.root(root);
let _ = builder.root(root);
ObjectStore::new(builder).unwrap().finish()
}
@@ -64,7 +64,7 @@ pub fn test_tmp_store(root: &str) -> (ObjectStore, TempDir) {
let dir = create_temp_dir(root);
let mut builder = Fs::default();
builder.root("/");
let _ = builder.root("/");
(ObjectStore::new(builder).unwrap().finish(), dir)
}
@@ -113,14 +113,14 @@ pub async fn setup_stream_to_json_test(origin_path: &str, threshold: impl Fn(usi
let output_path = format!("{}/{}", dir.path().display(), "output");
stream_to_json(
assert!(stream_to_json(
Box::pin(stream),
tmp_store.clone(),
&output_path,
threshold(size),
)
.await
.unwrap();
.is_ok());
let written = tmp_store.read(&output_path).await.unwrap();
let origin = store.read(origin_path).await.unwrap();
@@ -155,14 +155,14 @@ pub async fn setup_stream_to_csv_test(origin_path: &str, threshold: impl Fn(usiz
let output_path = format!("{}/{}", dir.path().display(), "output");
stream_to_csv(
assert!(stream_to_csv(
Box::pin(stream),
tmp_store.clone(),
&output_path,
threshold(size),
)
.await
.unwrap();
.is_ok());
let written = tmp_store.read(&output_path).await.unwrap();
let origin = store.read(origin_path).await.unwrap();

@@ -0,0 +1,11 @@
## Generate ORC data
```bash
python3 -m venv venv
venv/bin/pip install -U pip
venv/bin/pip install -U pyorc
./venv/bin/python write.py
cargo test
```

Binary file not shown.

@@ -0,0 +1,103 @@
# Copyright 2023 Greptime Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import random
import datetime
import pyorc
data = {
"double_a": [1.0, 2.0, 3.0, 4.0, 5.0],
"a": [1.0, 2.0, None, 4.0, 5.0],
"b": [True, False, None, True, False],
"str_direct": ["a", "cccccc", None, "ddd", "ee"],
"d": ["a", "bb", None, "ccc", "ddd"],
"e": ["ddd", "cc", None, "bb", "a"],
"f": ["aaaaa", "bbbbb", None, "ccccc", "ddddd"],
"int_short_repeated": [5, 5, None, 5, 5],
"int_neg_short_repeated": [-5, -5, None, -5, -5],
"int_delta": [1, 2, None, 4, 5],
"int_neg_delta": [5, 4, None, 2, 1],
"int_direct": [1, 6, None, 3, 2],
"int_neg_direct": [-1, -6, None, -3, -2],
"bigint_direct": [1, 6, None, 3, 2],
"bigint_neg_direct": [-1, -6, None, -3, -2],
"bigint_other": [5, -5, 1, 5, 5],
"utf8_increase": ["a", "bb", "ccc", "dddd", "eeeee"],
"utf8_decrease": ["eeeee", "dddd", "ccc", "bb", "a"],
"timestamp_simple": [datetime.datetime(2023, 4, 1, 20, 15, 30, 2000), datetime.datetime.fromtimestamp(int('1629617204525777000')/1000000000), datetime.datetime(2023, 1, 1), datetime.datetime(2023, 2, 1), datetime.datetime(2023, 3, 1)],
"date_simple": [datetime.date(2023, 4, 1), datetime.date(2023, 3, 1), datetime.date(2023, 1, 1), datetime.date(2023, 2, 1), datetime.date(2023, 3, 1)]
}
def infer_schema(data):
schema = "struct<"
for key, value in data.items():
dt = type(value[0])
if dt == float:
dt = "float"
elif dt == int:
dt = "int"
elif dt == bool:
dt = "boolean"
elif dt == str:
dt = "string"
elif key.startswith("timestamp"):
dt = "timestamp"
elif key.startswith("date"):
dt = "date"
else:
print(key,value,dt)
raise NotImplementedError
if key.startswith("double"):
dt = "double"
if key.startswith("bigint"):
dt = "bigint"
schema += key + ":" + dt + ","
schema = schema[:-1] + ">"
return schema
def _write(
schema: str,
data,
file_name: str,
compression=pyorc.CompressionKind.NONE,
dict_key_size_threshold=0.0,
):
output = open(file_name, "wb")
writer = pyorc.Writer(
output,
schema,
dict_key_size_threshold=dict_key_size_threshold,
# use a small number to ensure that compression crosses value boundaries
compression_block_size=32,
compression=compression,
)
num_rows = len(list(data.values())[0])
for x in range(num_rows):
row = tuple(values[x] for values in data.values())
writer.write(row)
writer.close()
with open(file_name, "rb") as f:
reader = pyorc.Reader(f)
list(reader)
_write(
infer_schema(data),
data,
"test.orc",
)
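`infer_schema` derives the ORC struct schema string from each column's first value's Python type, with key-name prefixes (`double`, `bigint`, `timestamp`, `date`) overriding the default mapping. A trimmed, self-contained illustration of that mapping (timestamp/date branches omitted):

```python
def infer_schema(data):
    # Map the first value's Python type to an ORC type name;
    # key-name prefixes override for double and bigint columns.
    schema = "struct<"
    for key, value in data.items():
        dt = type(value[0])
        if dt == float:
            dt = "float"
        elif dt == int:
            dt = "int"
        elif dt == bool:
            dt = "boolean"
        elif dt == str:
            dt = "string"
        if key.startswith("double"):
            dt = "double"
        if key.startswith("bigint"):
            dt = "bigint"
        schema += key + ":" + dt + ","
    return schema[:-1] + ">"

print(infer_schema({"double_a": [1.0], "b": [True], "int_delta": [1]}))
# struct<double_a:double,b:boolean,int_delta:int>
```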

@@ -22,7 +22,7 @@ struct Foo {}
#[test]
#[allow(clippy::extra_unused_type_parameters)]
fn test_derive() {
Foo::default();
let _ = Foo::default();
assert_fields!(Foo: input_types);
assert_impl_all!(Foo: std::fmt::Debug, Default, AggrFuncTypeStore);
}

@@ -158,19 +158,19 @@ mod test {
fn test_update_batch() {
// test update empty batch, expect not updating anything
let mut argmax = Argmax::<i32>::default();
assert!(argmax.update_batch(&[]).is_ok());
argmax.update_batch(&[]).unwrap();
assert_eq!(Value::Null, argmax.evaluate().unwrap());
// test update one not-null value
let mut argmax = Argmax::<i32>::default();
let v: Vec<VectorRef> = vec![Arc::new(Int32Vector::from(vec![Some(42)]))];
assert!(argmax.update_batch(&v).is_ok());
argmax.update_batch(&v).unwrap();
assert_eq!(Value::from(0_u64), argmax.evaluate().unwrap());
// test update one null value
let mut argmax = Argmax::<i32>::default();
let v: Vec<VectorRef> = vec![Arc::new(Int32Vector::from(vec![Option::<i32>::None]))];
assert!(argmax.update_batch(&v).is_ok());
argmax.update_batch(&v).unwrap();
assert_eq!(Value::Null, argmax.evaluate().unwrap());
// test update no null-value batch
@@ -180,7 +180,7 @@ mod test {
Some(1),
Some(3),
]))];
assert!(argmax.update_batch(&v).is_ok());
argmax.update_batch(&v).unwrap();
assert_eq!(Value::from(2_u64), argmax.evaluate().unwrap());
// test update null-value batch
@@ -190,7 +190,7 @@ mod test {
None,
Some(4),
]))];
assert!(argmax.update_batch(&v).is_ok());
argmax.update_batch(&v).unwrap();
assert_eq!(Value::from(2_u64), argmax.evaluate().unwrap());
// test update with constant vector
@@ -199,7 +199,7 @@ mod test {
Arc::new(Int32Vector::from_vec(vec![4])),
10,
))];
assert!(argmax.update_batch(&v).is_ok());
argmax.update_batch(&v).unwrap();
assert_eq!(Value::from(0_u64), argmax.evaluate().unwrap());
}
}

@@ -166,19 +166,19 @@ mod test {
fn test_update_batch() {
// test update empty batch, expect not updating anything
let mut argmin = Argmin::<i32>::default();
assert!(argmin.update_batch(&[]).is_ok());
argmin.update_batch(&[]).unwrap();
assert_eq!(Value::Null, argmin.evaluate().unwrap());
// test update one not-null value
let mut argmin = Argmin::<i32>::default();
let v: Vec<VectorRef> = vec![Arc::new(Int32Vector::from(vec![Some(42)]))];
assert!(argmin.update_batch(&v).is_ok());
argmin.update_batch(&v).unwrap();
assert_eq!(Value::from(0_u32), argmin.evaluate().unwrap());
// test update one null value
let mut argmin = Argmin::<i32>::default();
let v: Vec<VectorRef> = vec![Arc::new(Int32Vector::from(vec![Option::<i32>::None]))];
assert!(argmin.update_batch(&v).is_ok());
argmin.update_batch(&v).unwrap();
assert_eq!(Value::Null, argmin.evaluate().unwrap());
// test update no null-value batch
@@ -188,7 +188,7 @@ mod test {
Some(1),
Some(3),
]))];
assert!(argmin.update_batch(&v).is_ok());
argmin.update_batch(&v).unwrap();
assert_eq!(Value::from(0_u32), argmin.evaluate().unwrap());
// test update null-value batch
@@ -198,7 +198,7 @@ mod test {
None,
Some(4),
]))];
assert!(argmin.update_batch(&v).is_ok());
argmin.update_batch(&v).unwrap();
assert_eq!(Value::from(0_u32), argmin.evaluate().unwrap());
// test update with constant vector
@@ -207,7 +207,7 @@ mod test {
Arc::new(Int32Vector::from_vec(vec![4])),
10,
))];
assert!(argmin.update_batch(&v).is_ok());
argmin.update_batch(&v).unwrap();
assert_eq!(Value::from(0_u32), argmin.evaluate().unwrap());
}
}

@@ -192,20 +192,20 @@ mod test {
fn test_update_batch() {
// test update empty batch, expect not updating anything
let mut diff = Diff::<i32, i64>::default();
assert!(diff.update_batch(&[]).is_ok());
diff.update_batch(&[]).unwrap();
assert!(diff.values.is_empty());
assert_eq!(Value::Null, diff.evaluate().unwrap());
// test update one not-null value
let mut diff = Diff::<i32, i64>::default();
let v: Vec<VectorRef> = vec![Arc::new(Int32Vector::from(vec![Some(42)]))];
assert!(diff.update_batch(&v).is_ok());
diff.update_batch(&v).unwrap();
assert_eq!(Value::Null, diff.evaluate().unwrap());
// test update one null value
let mut diff = Diff::<i32, i64>::default();
let v: Vec<VectorRef> = vec![Arc::new(Int32Vector::from(vec![Option::<i32>::None]))];
assert!(diff.update_batch(&v).is_ok());
diff.update_batch(&v).unwrap();
assert_eq!(Value::Null, diff.evaluate().unwrap());
// test update no null-value batch
@@ -216,7 +216,7 @@ mod test {
Some(2),
]))];
let values = vec![Value::from(2_i64), Value::from(1_i64)];
assert!(diff.update_batch(&v).is_ok());
diff.update_batch(&v).unwrap();
assert_eq!(
Value::List(ListValue::new(
Some(Box::new(values)),
@@ -234,7 +234,7 @@ mod test {
Some(4),
]))];
let values = vec![Value::from(5_i64), Value::from(1_i64)];
assert!(diff.update_batch(&v).is_ok());
diff.update_batch(&v).unwrap();
assert_eq!(
Value::List(ListValue::new(
Some(Box::new(values)),
@@ -250,7 +250,7 @@ mod test {
4,
))];
let values = vec![Value::from(0_i64), Value::from(0_i64), Value::from(0_i64)];
assert!(diff.update_batch(&v).is_ok());
diff.update_batch(&v).unwrap();
assert_eq!(
Value::List(ListValue::new(
Some(Box::new(values)),

@@ -188,19 +188,19 @@ mod test {
fn test_update_batch() {
// test update empty batch, expect not updating anything
let mut mean = Mean::<i32>::default();
assert!(mean.update_batch(&[]).is_ok());
mean.update_batch(&[]).unwrap();
assert_eq!(Value::Null, mean.evaluate().unwrap());
// test update one not-null value
let mut mean = Mean::<i32>::default();
let v: Vec<VectorRef> = vec![Arc::new(Int32Vector::from(vec![Some(42)]))];
assert!(mean.update_batch(&v).is_ok());
mean.update_batch(&v).unwrap();
assert_eq!(Value::from(42.0_f64), mean.evaluate().unwrap());
// test update one null value
let mut mean = Mean::<i32>::default();
let v: Vec<VectorRef> = vec![Arc::new(Int32Vector::from(vec![Option::<i32>::None]))];
assert!(mean.update_batch(&v).is_ok());
mean.update_batch(&v).unwrap();
assert_eq!(Value::Null, mean.evaluate().unwrap());
// test update no null-value batch
@@ -210,7 +210,7 @@ mod test {
Some(1),
Some(2),
]))];
assert!(mean.update_batch(&v).is_ok());
mean.update_batch(&v).unwrap();
assert_eq!(Value::from(0.6666666666666666), mean.evaluate().unwrap());
// test update null-value batch
@@ -221,7 +221,7 @@ mod test {
Some(3),
Some(4),
]))];
assert!(mean.update_batch(&v).is_ok());
mean.update_batch(&v).unwrap();
assert_eq!(Value::from(1.6666666666666667), mean.evaluate().unwrap());
// test update with constant vector
@@ -230,7 +230,7 @@ mod test {
Arc::new(Int32Vector::from_vec(vec![4])),
10,
))];
assert!(mean.update_batch(&v).is_ok());
mean.update_batch(&v).unwrap();
assert_eq!(Value::from(4.0), mean.evaluate().unwrap());
}
}

@@ -299,7 +299,7 @@ mod test {
fn test_update_batch() {
// test update empty batch, expect not updating anything
let mut percentile = Percentile::<i32>::default();
assert!(percentile.update_batch(&[]).is_ok());
percentile.update_batch(&[]).unwrap();
assert!(percentile.not_greater.is_empty());
assert!(percentile.greater.is_empty());
assert_eq!(Value::Null, percentile.evaluate().unwrap());
@@ -310,7 +310,7 @@ mod test {
Arc::new(Int32Vector::from(vec![Some(42)])),
Arc::new(Float64Vector::from(vec![Some(100.0_f64)])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::from(42.0_f64), percentile.evaluate().unwrap());
// test update one null value
@@ -319,7 +319,7 @@ mod test {
Arc::new(Int32Vector::from(vec![Option::<i32>::None])),
Arc::new(Float64Vector::from(vec![Some(100.0_f64)])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::Null, percentile.evaluate().unwrap());
// test update no null-value batch
@@ -332,7 +332,7 @@ mod test {
Some(100.0_f64),
])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::from(2_f64), percentile.evaluate().unwrap());
// test update null-value batch
@@ -346,7 +346,7 @@ mod test {
Some(100.0_f64),
])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::from(4_f64), percentile.evaluate().unwrap());
// test update with constant vector
@@ -358,7 +358,7 @@ mod test {
)),
Arc::new(Float64Vector::from(vec![Some(100.0_f64), Some(100.0_f64)])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::from(4_f64), percentile.evaluate().unwrap());
// test left border
@@ -371,7 +371,7 @@ mod test {
Some(0.0_f64),
])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::from(-1.0_f64), percentile.evaluate().unwrap());
// test medium
@@ -384,7 +384,7 @@ mod test {
Some(50.0_f64),
])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::from(1.0_f64), percentile.evaluate().unwrap());
// test right border
@@ -397,7 +397,7 @@ mod test {
Some(100.0_f64),
])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::from(2.0_f64), percentile.evaluate().unwrap());
// the following is the result of numpy.percentile
@@ -414,7 +414,7 @@ mod test {
Some(40.0_f64),
])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(Value::from(6.400000000_f64), percentile.evaluate().unwrap());
// the following is the result of numpy.percentile
@@ -430,7 +430,7 @@ mod test {
Some(95.0_f64),
])),
];
assert!(percentile.update_batch(&v).is_ok());
percentile.update_batch(&v).unwrap();
assert_eq!(
Value::from(9.700_000_000_000_001_f64),
percentile.evaluate().unwrap()

@@ -267,7 +267,7 @@ mod test {
fn test_update_batch() {
// test update empty batch, expect not updating anything
let mut polyval = Polyval::<i32, i64>::default();
assert!(polyval.update_batch(&[]).is_ok());
polyval.update_batch(&[]).unwrap();
assert!(polyval.values.is_empty());
assert_eq!(Value::Null, polyval.evaluate().unwrap());
@@ -277,7 +277,7 @@ mod test {
Arc::new(Int32Vector::from(vec![Some(3)])),
Arc::new(Int64Vector::from(vec![Some(2_i64)])),
];
assert!(polyval.update_batch(&v).is_ok());
polyval.update_batch(&v).unwrap();
assert_eq!(Value::Int64(3), polyval.evaluate().unwrap());
// test update one null value
@@ -286,7 +286,7 @@ mod test {
Arc::new(Int32Vector::from(vec![Option::<i32>::None])),
Arc::new(Int64Vector::from(vec![Some(2_i64)])),
];
assert!(polyval.update_batch(&v).is_ok());
polyval.update_batch(&v).unwrap();
assert_eq!(Value::Null, polyval.evaluate().unwrap());
// test update no null-value batch
@@ -299,7 +299,7 @@ mod test {
Some(2_i64),
])),
];
assert!(polyval.update_batch(&v).is_ok());
polyval.update_batch(&v).unwrap();
assert_eq!(Value::Int64(13), polyval.evaluate().unwrap());
// test update null-value batch
@@ -313,7 +313,7 @@ mod test {
Some(2_i64),
])),
];
assert!(polyval.update_batch(&v).is_ok());
polyval.update_batch(&v).unwrap();
assert_eq!(Value::Int64(13), polyval.evaluate().unwrap());
// test update with constant vector
@@ -325,7 +325,7 @@ mod test {
)),
Arc::new(Int64Vector::from(vec![Some(5_i64), Some(5_i64)])),
];
assert!(polyval.update_batch(&v).is_ok());
polyval.update_batch(&v).unwrap();
assert_eq!(Value::Int64(24), polyval.evaluate().unwrap());
}
}

@@ -231,7 +231,7 @@ mod test {
fn test_update_batch() {
// test update empty batch, expect not updating anything
let mut scipy_stats_norm_cdf = ScipyStatsNormCdf::<i32>::default();
assert!(scipy_stats_norm_cdf.update_batch(&[]).is_ok());
scipy_stats_norm_cdf.update_batch(&[]).unwrap();
assert!(scipy_stats_norm_cdf.values.is_empty());
assert_eq!(Value::Null, scipy_stats_norm_cdf.evaluate().unwrap());
@@ -245,7 +245,7 @@ mod test {
Some(2.0_f64),
])),
];
assert!(scipy_stats_norm_cdf.update_batch(&v).is_ok());
scipy_stats_norm_cdf.update_batch(&v).unwrap();
assert_eq!(
Value::from(0.8086334555398362),
scipy_stats_norm_cdf.evaluate().unwrap()
@@ -262,7 +262,7 @@ mod test {
Some(2.0_f64),
])),
];
assert!(scipy_stats_norm_cdf.update_batch(&v).is_ok());
scipy_stats_norm_cdf.update_batch(&v).unwrap();
assert_eq!(
Value::from(0.5412943699039795),
scipy_stats_norm_cdf.evaluate().unwrap()

@@ -232,7 +232,7 @@ mod test {
fn test_update_batch() {
// test update empty batch, expect not updating anything
let mut scipy_stats_norm_pdf = ScipyStatsNormPdf::<i32>::default();
assert!(scipy_stats_norm_pdf.update_batch(&[]).is_ok());
scipy_stats_norm_pdf.update_batch(&[]).unwrap();
assert!(scipy_stats_norm_pdf.values.is_empty());
assert_eq!(Value::Null, scipy_stats_norm_pdf.evaluate().unwrap());
@@ -246,7 +246,7 @@ mod test {
Some(2.0_f64),
])),
];
assert!(scipy_stats_norm_pdf.update_batch(&v).is_ok());
scipy_stats_norm_pdf.update_batch(&v).unwrap();
assert_eq!(
Value::from(0.17843340219081558),
scipy_stats_norm_pdf.evaluate().unwrap()
@@ -263,7 +263,7 @@ mod test {
Some(2.0_f64),
])),
];
assert!(scipy_stats_norm_pdf.update_batch(&v).is_ok());
scipy_stats_norm_pdf.update_batch(&v).unwrap();
assert_eq!(
Value::from(0.12343972049858312),
scipy_stats_norm_pdf.evaluate().unwrap()

@@ -32,14 +32,16 @@ pub struct FunctionRegistry {
impl FunctionRegistry {
pub fn register(&self, func: FunctionRef) {
self.functions
let _ = self
.functions
.write()
.unwrap()
.insert(func.name().to_string(), func);
}
pub fn register_aggregate_function(&self, func: AggregateFunctionMetaRef) {
self.aggregate_functions
let _ = self
.aggregate_functions
.write()
.unwrap()
.insert(func.name(), func);
@@ -92,7 +94,7 @@ mod tests {
assert!(registry.get_function("test_and").is_none());
assert!(registry.functions().is_empty());
registry.register(func);
assert!(registry.get_function("test_and").is_some());
let _ = registry.get_function("test_and").unwrap();
assert_eq!(1, registry.functions().len());
}
}

@@ -34,7 +34,7 @@ const LOCATION_TYPE_FIRST: i32 = LocationType::First as i32;
const LOCATION_TYPE_AFTER: i32 = LocationType::After as i32;
/// Convert an [`AlterExpr`] to an [`AlterTableRequest`]
pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
pub fn alter_expr_to_request(table_id: TableId, expr: AlterExpr) -> Result<AlterTableRequest> {
let catalog_name = expr.catalog_name;
let schema_name = expr.schema_name;
let kind = expr.kind.context(MissingFieldSnafu { field: "kind" })?;
@@ -69,6 +69,7 @@ pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
catalog_name,
schema_name,
table_name: expr.table_name,
table_id,
alter_kind,
};
Ok(request)
@@ -82,6 +83,7 @@ pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
catalog_name,
schema_name,
table_name: expr.table_name,
table_id,
alter_kind,
};
Ok(request)
@@ -92,6 +94,7 @@ pub fn alter_expr_to_request(expr: AlterExpr) -> Result<AlterTableRequest> {
catalog_name,
schema_name,
table_name: expr.table_name,
table_id,
alter_kind,
};
Ok(request)
@@ -239,7 +242,7 @@ mod tests {
})),
};
let alter_request = alter_expr_to_request(expr).unwrap();
let alter_request = alter_expr_to_request(1, expr).unwrap();
assert_eq!(alter_request.catalog_name, "");
assert_eq!(alter_request.schema_name, "");
assert_eq!("monitor".to_string(), alter_request.table_name);
@@ -296,7 +299,7 @@ mod tests {
})),
};
let alter_request = alter_expr_to_request(expr).unwrap();
let alter_request = alter_expr_to_request(1, expr).unwrap();
assert_eq!(alter_request.catalog_name, "");
assert_eq!(alter_request.schema_name, "");
assert_eq!("monitor".to_string(), alter_request.table_name);
@@ -344,7 +347,7 @@ mod tests {
})),
};
let alter_request = alter_expr_to_request(expr).unwrap();
let alter_request = alter_expr_to_request(1, expr).unwrap();
assert_eq!(alter_request.catalog_name, "test_catalog");
assert_eq!(alter_request.schema_name, "test_schema");
assert_eq!("monitor".to_string(), alter_request.table_name);

@@ -77,7 +77,7 @@ pub fn find_new_columns(schema: &SchemaRef, columns: &[Column]) -> Result<Option
is_key: *semantic_type == TAG_SEMANTIC_TYPE,
location: None,
});
new_columns.insert(column_name.to_string());
let _ = new_columns.insert(column_name.to_string());
}
}
@@ -239,7 +239,7 @@ pub fn build_create_expr_from_insertion(
let column_def = build_column_def(column_name, *datatype, is_nullable);
column_defs.push(column_def);
new_columns.insert(column_name.to_string());
let _ = new_columns.insert(column_name.to_string());
}
}

@@ -27,7 +27,7 @@ async fn do_bench_channel_manager() {
for _ in 0..10000 {
let idx = rand::random::<usize>() % 100;
let ret = m_clone.get(format!("{idx}"));
assert!(ret.is_ok());
let _ = ret.unwrap();
}
});
joins.push(join);
@@ -39,7 +39,7 @@ async fn do_bench_channel_manager() {
}
fn bench_channel_manager(c: &mut Criterion) {
c.bench_function("bench channel manager", |b| {
let _ = c.bench_function("bench channel manager", |b| {
b.iter(do_bench_channel_manager);
});
}

@@ -66,7 +66,7 @@ impl ChannelManager {
}
let pool = self.pool.clone();
common_runtime::spawn_bg(async {
let _handle = common_runtime::spawn_bg(async {
recycle_channel_in_loop(pool, RECYCLE_CHANNEL_INTERVAL_SECS).await;
});
info!("Channel recycle is started, running in the background!");
@@ -398,7 +398,7 @@ impl Channel {
#[inline]
pub fn increase_access(&self) {
self.access.fetch_add(1, Ordering::Relaxed);
let _ = self.access.fetch_add(1, Ordering::Relaxed);
}
}
@@ -427,7 +427,7 @@ impl Pool {
}
fn put(&self, addr: &str, channel: Channel) {
self.channels.insert(addr.to_string(), channel);
let _ = self.channels.insert(addr.to_string(), channel);
}
fn retain_channel<F>(&self, f: F)
@@ -442,7 +442,7 @@ async fn recycle_channel_in_loop(pool: Arc<Pool>, interval_secs: u64) {
let mut interval = tokio::time::interval(Duration::from_secs(interval_secs));
loop {
interval.tick().await;
let _ = interval.tick().await;
pool.retain_channel(|_, c| c.access.swap(0, Ordering::Relaxed) != 0)
}
}
@@ -577,7 +577,7 @@ mod tests {
let res = mgr.build_endpoint("test_addr");
assert!(res.is_ok());
let _ = res.unwrap();
}
#[tokio::test]
@@ -586,7 +586,7 @@ mod tests {
let addr = "test_addr";
let res = mgr.get(addr);
assert!(res.is_ok());
let _ = res.unwrap();
mgr.retain_channel(|addr, channel| {
assert_eq!("test_addr", addr);
@@ -604,7 +604,7 @@ mod tests {
}),
);
assert!(res.is_ok());
let _ = res.unwrap();
mgr.retain_channel(|addr, channel| {
assert_eq!("test_addr", addr);

@@ -265,7 +265,7 @@ mod test {
let FlightMessage::Schema(decoded_schema) = message else { unreachable!() };
assert_eq!(decoded_schema, schema);
assert!(decoder.schema.is_some());
let _ = decoder.schema.as_ref().unwrap();
let message = decoder.try_decode(d2.clone()).unwrap();
assert!(matches!(message, FlightMessage::Recordbatch(_)));

@@ -217,7 +217,7 @@ impl LinesWriter {
datatype: datatype as i32,
null_mask: Vec::default(),
});
column_names.insert(column_name.to_string(), new_idx);
let _ = column_names.insert(column_name.to_string(), new_idx);
new_idx
}
};

@@ -38,9 +38,8 @@ async fn test_mtls_config() {
client_key_path: "tests/tls/corrupted".to_string(),
});
let re = ChannelManager::with_tls_config(config);
assert!(re.is_ok());
let re = re.unwrap().get("127.0.0.1:0");
let re = ChannelManager::with_tls_config(config).unwrap();
let re = re.get("127.0.0.1:0");
assert!(re.is_err());
// success
@@ -50,8 +49,7 @@ async fn test_mtls_config() {
client_key_path: "tests/tls/client.key.pem".to_string(),
});
let re = ChannelManager::with_tls_config(config);
assert!(re.is_ok());
let re = re.unwrap().get("127.0.0.1:0");
assert!(re.is_ok());
let re = ChannelManager::with_tls_config(config).unwrap();
let re = re.get("127.0.0.1:0");
let _ = re.unwrap();
}

@@ -62,7 +62,8 @@ pub async fn dump_profile() -> error::Result<Vec<u8>> {
.await
.context(OpenTempFileSnafu { path: &path })?;
let mut buf = vec![];
f.read_to_end(&mut buf)
let _ = f
.read_to_end(&mut buf)
.await
.context(OpenTempFileSnafu { path })?;
Ok(buf)

@@ -6,12 +6,15 @@ license.workspace = true
[dependencies]
api = { path = "../../api" }
async-stream.workspace = true
async-trait.workspace = true
common-catalog = { path = "../catalog" }
common-error = { path = "../error" }
common-runtime = { path = "../runtime" }
common-telemetry = { path = "../telemetry" }
common-time = { path = "../time" }
futures.workspace = true
prost.workspace = true
serde.workspace = true
serde_json.workspace = true
snafu.workspace = true

@@ -55,6 +55,21 @@ pub enum Error {
#[snafu(display("Invalid protobuf message, err: {}", err_msg))]
InvalidProtoMsg { err_msg: String, location: Location },
#[snafu(display("Invalid table metadata, err: {}", err_msg))]
InvalidTableMetadata { err_msg: String, location: Location },
#[snafu(display("Failed to get kv cache, err: {}", err_msg))]
GetKvCache { err_msg: String },
#[snafu(display("Get null from cache, key: {}", key))]
CacheNotGet { key: String, location: Location },
#[snafu(display("Failed to request MetaSrv, source: {}", source))]
MetaSrv {
source: BoxedError,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -65,15 +80,18 @@ impl ErrorExt for Error {
match self {
IllegalServerState { .. } => StatusCode::Internal,
SerdeJson { .. } | RouteInfoCorrupted { .. } | InvalidProtoMsg { .. } => {
StatusCode::Unexpected
}
SerdeJson { .. }
| RouteInfoCorrupted { .. }
| InvalidProtoMsg { .. }
| InvalidTableMetadata { .. } => StatusCode::Unexpected,
SendMessage { .. } => StatusCode::Internal,
SendMessage { .. } | GetKvCache { .. } | CacheNotGet { .. } => StatusCode::Internal,
EncodeJson { .. } | DecodeJson { .. } | PayloadNotExist { .. } => {
StatusCode::Unexpected
}
MetaSrv { source, .. } => source.status_code(),
}
}

@@ -12,16 +12,102 @@
// See the License for the specific language governing permissions and
// limitations under the License.
//! This mod defines all the keys used in the metadata store (Metasrv).
//! Specifically, there are these kinds of keys:
//!
//! 1. Table info key: `__table_info/{table_id}`
//! - The value is a [TableInfoValue] struct; it contains the whole table info (like column
//! schemas).
//! - This key is mainly used in constructing the table in Datanode and Frontend.
//!
//! 2. Table region key: `__table_region/{table_id}`
//! - The value is a [TableRegionValue] struct; it contains the region distribution of the
//! table in the Datanodes.
//!
//! All keys have related managers. The managers take care of the serialization and deserialization
//! of keys and values, and the interaction with the underlying KV store backend.
//!
//! To simplify the managers used in struct fields and function parameters, we define a unified
//! table metadata manager: [TableMetadataManager]. It contains all the managers defined above.
//! It's recommended to use this manager exclusively.
pub mod table_info;
pub mod table_region;
mod table_route;
use std::sync::Arc;
use snafu::ResultExt;
use table_info::{TableInfoManager, TableInfoValue};
use table_region::{TableRegionManager, TableRegionValue};
use crate::error::{InvalidTableMetadataSnafu, Result, SerdeJsonSnafu};
pub use crate::key::table_route::{TableRouteKey, TABLE_ROUTE_PREFIX};
use crate::kv_backend::KvBackendRef;
pub const REMOVED_PREFIX: &str = "__removed";
const TABLE_INFO_KEY_PREFIX: &str = "__table_info";
const TABLE_REGION_KEY_PREFIX: &str = "__table_region";
pub fn to_removed_key(key: &str) -> String {
format!("{REMOVED_PREFIX}-{key}")
}
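`to_removed_key` moves a deleted key under the `__removed` prefix instead of dropping it outright, which keeps stale metadata recoverable. For example:

```rust
pub const REMOVED_PREFIX: &str = "__removed";

// Same formatting as the key module above: "__removed-{key}".
pub fn to_removed_key(key: &str) -> String {
    format!("{REMOVED_PREFIX}-{key}")
}

fn main() {
    // A removed table-info key is parked aside rather than deleted.
    assert_eq!(to_removed_key("__table_info/42"), "__removed-__table_info/42");
}
```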
pub trait TableMetaKey {
fn as_raw_key(&self) -> Vec<u8>;
}
pub type TableMetadataManagerRef = Arc<TableMetadataManager>;
pub struct TableMetadataManager {
table_info_manager: TableInfoManager,
table_region_manager: TableRegionManager,
}
impl TableMetadataManager {
pub fn new(kv_backend: KvBackendRef) -> Self {
TableMetadataManager {
table_info_manager: TableInfoManager::new(kv_backend.clone()),
table_region_manager: TableRegionManager::new(kv_backend),
}
}
pub fn table_info_manager(&self) -> &TableInfoManager {
&self.table_info_manager
}
pub fn table_region_manager(&self) -> &TableRegionManager {
&self.table_region_manager
}
}
macro_rules! impl_table_meta_value {
( $($val_ty: ty), *) => {
$(
impl $val_ty {
pub fn try_from_raw_value(raw_value: Vec<u8>) -> Result<Self> {
let raw_value = String::from_utf8(raw_value).map_err(|e| {
InvalidTableMetadataSnafu { err_msg: e.to_string() }.build()
})?;
serde_json::from_str(&raw_value).context(SerdeJsonSnafu)
}
pub fn try_as_raw_value(&self) -> Result<Vec<u8>> {
serde_json::to_string(self)
.map(|x| x.into_bytes())
.context(SerdeJsonSnafu)
}
}
)*
}
}
impl_table_meta_value! {
TableInfoValue,
TableRegionValue
}
#[cfg(test)]
mod tests {
use crate::key::to_removed_key;

@@ -0,0 +1,230 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use serde::{Deserialize, Serialize};
use table::metadata::{RawTableInfo, TableId};
use super::TABLE_INFO_KEY_PREFIX;
use crate::error::Result;
use crate::key::{to_removed_key, TableMetaKey};
use crate::kv_backend::KvBackendRef;
pub struct TableInfoKey {
table_id: TableId,
}
impl TableInfoKey {
pub fn new(table_id: TableId) -> Self {
Self { table_id }
}
}
impl TableMetaKey for TableInfoKey {
fn as_raw_key(&self) -> Vec<u8> {
format!("{}/{}", TABLE_INFO_KEY_PREFIX, self.table_id).into_bytes()
}
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct TableInfoValue {
pub table_info: RawTableInfo,
version: u64,
}
pub struct TableInfoManager {
kv_backend: KvBackendRef,
}
impl TableInfoManager {
pub fn new(kv_backend: KvBackendRef) -> Self {
Self { kv_backend }
}
pub async fn get(&self, table_id: TableId) -> Result<Option<TableInfoValue>> {
let key = TableInfoKey::new(table_id);
let raw_key = key.as_raw_key();
self.kv_backend
.get(&raw_key)
.await?
.map(|x| TableInfoValue::try_from_raw_value(x.1))
.transpose()
}
pub async fn compare_and_set(
&self,
table_id: TableId,
expect: Option<TableInfoValue>,
table_info: RawTableInfo,
) -> Result<std::result::Result<(), Option<Vec<u8>>>> {
let key = TableInfoKey::new(table_id);
let raw_key = key.as_raw_key();
let (expect, version) = if let Some(x) = expect {
(x.try_as_raw_value()?, x.version + 1)
} else {
(vec![], 0)
};
let value = TableInfoValue {
table_info,
version,
};
let raw_value = value.try_as_raw_value()?;
self.kv_backend
.compare_and_set(&raw_key, &expect, &raw_value)
.await
}
pub async fn remove(&self, table_id: TableId) -> Result<()> {
let key = TableInfoKey::new(table_id);
let removed_key = to_removed_key(&String::from_utf8_lossy(key.as_raw_key().as_slice()));
self.kv_backend
.move_value(&key.as_raw_key(), removed_key.as_bytes())
.await
}
}
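Note that `remove` does not delete the table info outright: `to_removed_key` rewrites the key under a tombstone prefix and `move_value` relocates the value there, so removed metadata remains inspectable. A hypothetical standalone version of the prefixing helper, assuming the `__removed-` prefix that the tests in this file exercise:

```rust
// Hypothetical mirror of `to_removed_key` (the real helper lives in
// `crate::key`): prepend a tombstone prefix so a removed entry is kept
// under a distinct key instead of being deleted.
fn to_removed_key(key: &str) -> String {
    format!("__removed-{key}")
}
```

With this shape, removing `__table_info/4` relocates its value to `__removed-__table_info/4`, which is exactly the key the tests read back.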
#[cfg(test)]
mod tests {
use std::sync::Arc;
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnSchema, RawSchema, Schema};
use table::metadata::{RawTableMeta, TableIdent, TableType};
use super::*;
use crate::kv_backend::memory::MemoryKvBackend;
use crate::kv_backend::KvBackend;
#[tokio::test]
async fn test_table_info_manager() {
let backend = Arc::new(MemoryKvBackend::default());
for i in 1..=3 {
let key = TableInfoKey::new(i).as_raw_key();
let val = TableInfoValue {
table_info: new_table_info(i),
version: 1,
}
.try_as_raw_value()
.unwrap();
backend.set(&key, &val).await.unwrap();
}
let manager = TableInfoManager::new(backend.clone());
let val = manager.get(1).await.unwrap().unwrap();
assert_eq!(
val,
TableInfoValue {
table_info: new_table_info(1),
version: 1,
}
);
assert!(manager.get(4).await.unwrap().is_none());
let table_info = new_table_info(4);
let result = manager
.compare_and_set(4, None, table_info.clone())
.await
.unwrap();
assert!(result.is_ok());
// Test that CAS fails: the new table info must not be set.
let new_table_info = new_table_info(4);
let result = manager
.compare_and_set(4, None, new_table_info.clone())
.await
.unwrap();
let actual = TableInfoValue::try_from_raw_value(result.unwrap_err().unwrap()).unwrap();
assert_eq!(
actual,
TableInfoValue {
table_info: table_info.clone(),
version: 0,
}
);
// Test that CAS succeeds when the expected value matches.
let result = manager
.compare_and_set(4, Some(actual), new_table_info.clone())
.await
.unwrap();
assert!(result.is_ok());
assert!(manager.remove(4).await.is_ok());
let kv = backend
.get(b"__removed-__table_info/4")
.await
.unwrap()
.unwrap();
assert_eq!(b"__removed-__table_info/4", kv.0.as_slice());
let value = TableInfoValue::try_from_raw_value(kv.1).unwrap();
assert_eq!(value.table_info, new_table_info);
assert_eq!(value.version, 1);
}
#[test]
fn test_key_serde() {
let key = TableInfoKey::new(42);
let raw_key = key.as_raw_key();
assert_eq!(raw_key, b"__table_info/42");
}
#[test]
fn test_value_serde() {
let value = TableInfoValue {
table_info: new_table_info(42),
version: 1,
};
let serialized = value.try_as_raw_value().unwrap();
let deserialized = TableInfoValue::try_from_raw_value(serialized).unwrap();
assert_eq!(value, deserialized);
}
fn new_table_info(table_id: TableId) -> RawTableInfo {
let schema = Schema::new(vec![ColumnSchema::new(
"name",
ConcreteDataType::string_datatype(),
true,
)]);
let meta = RawTableMeta {
schema: RawSchema::from(&schema),
engine: "mito".to_string(),
created_on: chrono::DateTime::default(),
primary_key_indices: vec![0, 1],
next_column_id: 3,
engine_options: Default::default(),
value_indices: vec![2, 3],
options: Default::default(),
region_numbers: vec![1],
};
RawTableInfo {
ident: TableIdent {
table_id,
version: 1,
},
name: "table_1".to_string(),
desc: Some("blah".to_string()),
catalog_name: "catalog_1".to_string(),
schema_name: "schema_1".to_string(),
meta,
table_type: TableType::Base,
}
}
}


@@ -0,0 +1,190 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::BTreeMap;
use serde::{Deserialize, Serialize};
use store_api::storage::RegionNumber;
use table::metadata::TableId;
use super::TABLE_REGION_KEY_PREFIX;
use crate::error::Result;
use crate::key::{to_removed_key, TableMetaKey};
use crate::kv_backend::KvBackendRef;
use crate::DatanodeId;
pub type RegionDistribution = BTreeMap<DatanodeId, Vec<RegionNumber>>;
pub struct TableRegionKey {
table_id: TableId,
}
impl TableRegionKey {
pub fn new(table_id: TableId) -> Self {
Self { table_id }
}
}
impl TableMetaKey for TableRegionKey {
fn as_raw_key(&self) -> Vec<u8> {
format!("{}/{}", TABLE_REGION_KEY_PREFIX, self.table_id).into_bytes()
}
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq)]
pub struct TableRegionValue {
pub region_distribution: RegionDistribution,
version: u64,
}
pub struct TableRegionManager {
kv_backend: KvBackendRef,
}
impl TableRegionManager {
pub fn new(kv_backend: KvBackendRef) -> Self {
Self { kv_backend }
}
pub async fn get(&self, table_id: TableId) -> Result<Option<TableRegionValue>> {
let key = TableRegionKey::new(table_id);
let raw_key = key.as_raw_key();
self.kv_backend
.get(&raw_key)
.await?
.map(|x| TableRegionValue::try_from_raw_value(x.1))
.transpose()
}
pub async fn compare_and_set(
&self,
table_id: TableId,
expect: Option<TableRegionValue>,
region_distribution: RegionDistribution,
) -> Result<std::result::Result<(), Option<Vec<u8>>>> {
let key = TableRegionKey::new(table_id);
let raw_key = key.as_raw_key();
let (expect, version) = if let Some(x) = expect {
(x.try_as_raw_value()?, x.version + 1)
} else {
(vec![], 0)
};
let value = TableRegionValue {
region_distribution,
version,
};
let raw_value = value.try_as_raw_value()?;
self.kv_backend
.compare_and_set(&raw_key, &expect, &raw_value)
.await
}
pub async fn remove(&self, table_id: TableId) -> Result<()> {
let key = TableRegionKey::new(table_id);
let remove_key = to_removed_key(&String::from_utf8_lossy(key.as_raw_key().as_slice()));
self.kv_backend
.move_value(&key.as_raw_key(), remove_key.as_bytes())
.await
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use super::*;
use crate::kv_backend::memory::MemoryKvBackend;
use crate::kv_backend::KvBackend;
#[tokio::test]
async fn test_table_region_manager() {
let backend = Arc::new(MemoryKvBackend::default());
let manager = TableRegionManager::new(backend.clone());
let region_distribution =
RegionDistribution::from([(1, vec![1, 2, 3]), (2, vec![4, 5, 6])]);
let result = manager
.compare_and_set(1, None, region_distribution.clone())
.await
.unwrap();
assert!(result.is_ok());
let new_region_distribution =
RegionDistribution::from([(1, vec![4, 5, 6]), (2, vec![1, 2, 3])]);
let curr = manager
.compare_and_set(1, None, new_region_distribution.clone())
.await
.unwrap()
.unwrap_err()
.unwrap();
let curr = TableRegionValue::try_from_raw_value(curr).unwrap();
assert_eq!(
curr,
TableRegionValue {
region_distribution,
version: 0
}
);
assert!(manager
.compare_and_set(1, Some(curr), new_region_distribution.clone())
.await
.unwrap()
.is_ok());
let value = manager.get(1).await.unwrap().unwrap();
assert_eq!(
value,
TableRegionValue {
region_distribution: new_region_distribution.clone(),
version: 1
}
);
assert!(manager.get(2).await.unwrap().is_none());
assert!(manager.remove(1).await.is_ok());
let kv = backend
.get(b"__removed-__table_region/1")
.await
.unwrap()
.unwrap();
assert_eq!(b"__removed-__table_region/1", kv.0.as_slice());
let value = TableRegionValue::try_from_raw_value(kv.1).unwrap();
assert_eq!(value.region_distribution, new_region_distribution);
assert_eq!(value.version, 1);
}
#[test]
fn test_serde() {
let key = TableRegionKey::new(1);
let raw_key = key.as_raw_key();
assert_eq!(raw_key, b"__table_region/1");
let value = TableRegionValue {
region_distribution: RegionDistribution::from([(1, vec![1, 2, 3]), (2, vec![4, 5, 6])]),
version: 0,
};
let literal = br#"{"region_distribution":{"1":[1,2,3],"2":[4,5,6]},"version":0}"#;
assert_eq!(value.try_as_raw_value().unwrap(), literal);
assert_eq!(
TableRegionValue::try_from_raw_value(literal.to_vec()).unwrap(),
value,
);
}
}


@@ -0,0 +1,80 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod memory;
use std::any::Any;
use std::pin::Pin;
use std::sync::Arc;
use async_trait::async_trait;
use common_error::ext::ErrorExt;
use futures::{Stream, StreamExt};
use crate::error::Error;
#[derive(Debug, Clone, PartialEq)]
pub struct Kv(pub Vec<u8>, pub Vec<u8>);
pub type ValueIter<'a, E> = Pin<Box<dyn Stream<Item = Result<Kv, E>> + Send + 'a>>;
pub type KvBackendRef = Arc<dyn KvBackend<Error = Error>>;
#[async_trait]
pub trait KvBackend: Send + Sync {
type Error: ErrorExt;
fn range<'a, 'b>(&'a self, key: &[u8]) -> ValueIter<'b, Self::Error>
where
'a: 'b;
async fn set(&self, key: &[u8], val: &[u8]) -> Result<(), Self::Error>;
/// Compares and sets the value of a key: if the backend's current value associated
/// with `key` is the same as `expect`, the value is updated to `val`.
///
/// - If the compare-and-set operation successfully updates the value, this method
///   returns `Ok(Ok(()))`.
/// - If the current value differs from `expect`, nothing is updated and
///   `Ok(Err(current))` is returned, where `current: Option<Vec<u8>>` is the key's
///   current value (`None` if the key does not exist).
/// - If any error happens during the operation, an `Err(Error)` is returned.
async fn compare_and_set(
&self,
key: &[u8],
expect: &[u8],
val: &[u8],
) -> Result<Result<(), Option<Vec<u8>>>, Self::Error>;
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<(), Self::Error>;
async fn delete(&self, key: &[u8]) -> Result<(), Self::Error> {
self.delete_range(key, &[]).await
}
/// The default `get` implementation is based on the `range` method.
async fn get(&self, key: &[u8]) -> Result<Option<Kv>, Self::Error> {
let mut iter = self.range(key);
while let Some(r) = iter.next().await {
let kv = r?;
if kv.0 == key {
return Ok(Some(kv));
}
}
Ok(None)
}
/// Atomically moves the value stored at `from_key` to `to_key`.
async fn move_value(&self, from_key: &[u8], to_key: &[u8]) -> Result<(), Self::Error>;
fn as_any(&self) -> &dyn Any;
}
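The compare-and-set contract above can be sketched synchronously. This is a minimal illustration over a plain `BTreeMap` under the documented semantics (empty `expect` means "create if missing"); the real trait is async and wraps this inner result in the backend's error type:

```rust
use std::collections::btree_map::Entry;
use std::collections::BTreeMap;

// Synchronous sketch of the CAS contract; names are illustrative.
fn compare_and_set(
    kvs: &mut BTreeMap<Vec<u8>, Vec<u8>>,
    key: &[u8],
    expect: &[u8],
    val: &[u8],
) -> Result<(), Option<Vec<u8>>> {
    match kvs.entry(key.to_vec()) {
        // Key absent: an empty `expect` means "create if missing".
        Entry::Vacant(e) => {
            if expect.is_empty() {
                e.insert(val.to_vec());
                Ok(())
            } else {
                // Caller expected a value, but the key does not exist.
                Err(None)
            }
        }
        Entry::Occupied(mut e) => {
            if e.get().as_slice() == expect {
                e.insert(val.to_vec());
                Ok(())
            } else {
                // Mismatch: hand back the current value so the caller can retry.
                Err(Some(e.get().clone()))
            }
        }
    }
}
```

A failed CAS returns the current value, which is what lets callers such as `TableInfoManager` re-read state and retry with the observed value as the new `expect`.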


@@ -0,0 +1,197 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::collections::btree_map::Entry;
use std::collections::BTreeMap;
use std::fmt::{Display, Formatter};
use std::sync::RwLock;
use async_stream::stream;
use async_trait::async_trait;
use serde::Serializer;
use crate::error::Error;
use crate::kv_backend::{Kv, KvBackend, ValueIter};
pub struct MemoryKvBackend {
kvs: RwLock<BTreeMap<Vec<u8>, Vec<u8>>>,
}
impl Display for MemoryKvBackend {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
let kvs = self.kvs.read().unwrap();
for (k, v) in kvs.iter() {
f.serialize_str(&String::from_utf8_lossy(k))?;
f.serialize_str(" -> ")?;
f.serialize_str(&String::from_utf8_lossy(v))?;
f.serialize_str("\n")?;
}
Ok(())
}
}
impl Default for MemoryKvBackend {
fn default() -> Self {
Self {
kvs: RwLock::new(BTreeMap::new()),
}
}
}
#[async_trait]
impl KvBackend for MemoryKvBackend {
type Error = Error;
fn range<'a, 'b>(&'a self, prefix: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b,
{
let kvs = self.kvs.read().unwrap();
let kvs = kvs.clone();
let prefix = prefix.to_vec();
Box::pin(stream!({
for (k, v) in kvs.range(prefix.clone()..) {
if !k.starts_with(&prefix) {
break;
}
yield Ok(Kv(k.clone(), v.clone()));
}
}))
}
async fn set(&self, key: &[u8], val: &[u8]) -> Result<(), Error> {
let mut kvs = self.kvs.write().unwrap();
let _ = kvs.insert(key.to_vec(), val.to_vec());
Ok(())
}
async fn compare_and_set(
&self,
key: &[u8],
expect: &[u8],
val: &[u8],
) -> Result<Result<(), Option<Vec<u8>>>, Error> {
let key = key.to_vec();
let val = val.to_vec();
let mut kvs = self.kvs.write().unwrap();
let existed = kvs.entry(key);
Ok(match existed {
Entry::Vacant(e) => {
if expect.is_empty() {
let _ = e.insert(val);
Ok(())
} else {
Err(None)
}
}
Entry::Occupied(mut existed) => {
if existed.get() == expect {
let _ = existed.insert(val);
Ok(())
} else {
Err(Some(existed.get().clone()))
}
}
})
}
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<(), Error> {
let mut kvs = self.kvs.write().unwrap();
if end.is_empty() {
let _ = kvs.remove(key);
} else {
let start = key.to_vec();
let end = end.to_vec();
let range = start..end;
kvs.retain(|k, _| !range.contains(k));
}
Ok(())
}
async fn move_value(&self, from_key: &[u8], to_key: &[u8]) -> Result<(), Error> {
let mut kvs = self.kvs.write().unwrap();
if let Some(v) = kvs.remove(from_key) {
let _ = kvs.insert(to_key.to_vec(), v);
}
Ok(())
}
fn as_any(&self) -> &dyn Any {
self
}
}
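The `range` implementation above performs a prefix scan: because `BTreeMap` keys are ordered, every key sharing a prefix forms one contiguous run starting at the prefix itself, so the scan starts there and stops at the first key that no longer matches. The same idea in a synchronous, stdlib-only sketch (names are illustrative):

```rust
use std::collections::BTreeMap;

// Synchronous sketch of the prefix scan used by `range`: start the ordered
// walk at the prefix and cut it off once keys stop matching.
fn prefix_scan<'a>(
    kvs: &'a BTreeMap<Vec<u8>, Vec<u8>>,
    prefix: &'a [u8],
) -> impl Iterator<Item = (&'a Vec<u8>, &'a Vec<u8>)> {
    kvs.range(prefix.to_vec()..)
        .take_while(move |(k, _)| k.starts_with(prefix))
}
```

This is also why the trait's default `get` can be built on `range`: a full key is its own prefix, so the matching entry, if any, is in the scanned run.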
#[cfg(test)]
mod tests {
use futures::TryStreamExt;
use super::*;
#[tokio::test]
async fn test_memory_kv_backend() {
let backend = MemoryKvBackend::default();
for i in 1..10 {
let key = format!("key{}", i);
let val = format!("val{}", i);
assert!(backend.set(key.as_bytes(), val.as_bytes()).await.is_ok());
}
let result = backend
.compare_and_set(b"hello", b"what", b"world")
.await
.unwrap();
assert!(result.unwrap_err().is_none());
let result = backend
.compare_and_set(b"hello", b"", b"world")
.await
.unwrap();
assert!(result.is_ok());
let result = backend
.compare_and_set(b"hello", b"world", b"greptime")
.await
.unwrap();
assert!(result.is_ok());
let result = backend
.compare_and_set(b"hello", b"world", b"what")
.await
.unwrap();
assert_eq!(result.unwrap_err().unwrap(), b"greptime");
assert!(backend.delete_range(b"key1", &[]).await.is_ok());
assert!(backend.delete_range(b"key3", b"key9").await.is_ok());
assert!(backend.move_value(b"key9", b"key10").await.is_ok());
assert_eq!(
backend.to_string(),
r#"hello -> greptime
key10 -> val9
key2 -> val2
"#
);
let range = backend.range(b"key").try_collect::<Vec<_>>().await.unwrap();
assert_eq!(range.len(), 2);
assert_eq!(range[0], Kv(b"key10".to_vec(), b"val9".to_vec()));
assert_eq!(range[1], Kv(b"key2".to_vec(), b"val2".to_vec()));
}
}


@@ -17,6 +17,7 @@ pub mod heartbeat;
pub mod ident;
pub mod instruction;
pub mod key;
pub mod kv_backend;
pub mod peer;
pub mod rpc;
pub mod table_name;


@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod ddl;
pub mod lock;
pub mod router;
pub mod store;


@@ -0,0 +1,217 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::result;
use api::v1::meta::submit_ddl_task_request::Task;
use api::v1::meta::{
CreateTableTask as PbCreateTableTask, Partition,
SubmitDdlTaskRequest as PbSubmitDdlTaskRequest,
SubmitDdlTaskResponse as PbSubmitDdlTaskResponse,
};
use api::v1::CreateTableExpr;
use prost::Message;
use serde::{Deserialize, Serialize};
use snafu::{OptionExt, ResultExt};
use table::engine::TableReference;
use table::metadata::{RawTableInfo, TableId};
use crate::error::{self, Result};
use crate::table_name::TableName;
#[derive(Debug)]
pub enum DdlTask {
CreateTable(CreateTableTask),
}
impl DdlTask {
pub fn new_create_table(
expr: CreateTableExpr,
partitions: Vec<Partition>,
table_info: RawTableInfo,
) -> Self {
DdlTask::CreateTable(CreateTableTask::new(expr, partitions, table_info))
}
}
impl TryFrom<Task> for DdlTask {
type Error = error::Error;
fn try_from(task: Task) -> Result<Self> {
match task {
Task::CreateTableTask(create_table) => {
Ok(DdlTask::CreateTable(create_table.try_into()?))
}
}
}
}
pub struct SubmitDdlTaskRequest {
pub task: DdlTask,
}
impl TryFrom<SubmitDdlTaskRequest> for PbSubmitDdlTaskRequest {
type Error = error::Error;
fn try_from(request: SubmitDdlTaskRequest) -> Result<Self> {
let task = match request.task {
DdlTask::CreateTable(task) => Task::CreateTableTask(PbCreateTableTask {
table_info: serde_json::to_vec(&task.table_info).context(error::SerdeJsonSnafu)?,
create_table: Some(task.create_table),
partitions: task.partitions,
}),
};
Ok(Self {
header: None,
task: Some(task),
})
}
}
pub struct SubmitDdlTaskResponse {
pub key: Vec<u8>,
pub table_id: TableId,
}
impl TryFrom<PbSubmitDdlTaskResponse> for SubmitDdlTaskResponse {
type Error = error::Error;
fn try_from(resp: PbSubmitDdlTaskResponse) -> Result<Self> {
let table_id = resp.table_id.context(error::InvalidProtoMsgSnafu {
err_msg: "expected table_id",
})?;
Ok(Self {
key: resp.key,
table_id: table_id.id,
})
}
}
#[derive(Debug, PartialEq)]
pub struct CreateTableTask {
pub create_table: CreateTableExpr,
pub partitions: Vec<Partition>,
pub table_info: RawTableInfo,
}
impl TryFrom<PbCreateTableTask> for CreateTableTask {
type Error = error::Error;
fn try_from(pb: PbCreateTableTask) -> Result<Self> {
let table_info = serde_json::from_slice(&pb.table_info).context(error::SerdeJsonSnafu)?;
Ok(CreateTableTask::new(
pb.create_table.context(error::InvalidProtoMsgSnafu {
err_msg: "expected create table",
})?,
pb.partitions,
table_info,
))
}
}
impl CreateTableTask {
pub fn new(
expr: CreateTableExpr,
partitions: Vec<Partition>,
table_info: RawTableInfo,
) -> CreateTableTask {
CreateTableTask {
create_table: expr,
partitions,
table_info,
}
}
pub fn table_name(&self) -> TableName {
let table = &self.create_table;
TableName {
catalog_name: table.catalog_name.to_string(),
schema_name: table.schema_name.to_string(),
table_name: table.table_name.to_string(),
}
}
pub fn table_ref(&self) -> TableReference {
let table = &self.create_table;
TableReference {
catalog: &table.catalog_name,
schema: &table.schema_name,
table: &table.table_name,
}
}
}
impl Serialize for CreateTableTask {
fn serialize<S>(&self, serializer: S) -> result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
let table_info = serde_json::to_vec(&self.table_info)
.map_err(|err| serde::ser::Error::custom(err.to_string()))?;
let pb = PbCreateTableTask {
create_table: Some(self.create_table.clone()),
partitions: self.partitions.clone(),
table_info,
};
let buf = pb.encode_to_vec();
serializer.serialize_bytes(&buf)
}
}
impl<'de> Deserialize<'de> for CreateTableTask {
fn deserialize<D>(deserializer: D) -> result::Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let buf = Vec::<u8>::deserialize(deserializer)?;
let expr: PbCreateTableTask = PbCreateTableTask::decode(&*buf)
.map_err(|err| serde::de::Error::custom(err.to_string()))?;
let expr = CreateTableTask::try_from(expr)
.map_err(|err| serde::de::Error::custom(err.to_string()))?;
Ok(expr)
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use api::v1::CreateTableExpr;
use datatypes::schema::SchemaBuilder;
use table::metadata::RawTableInfo;
use table::test_util::table_info::test_table_info;
use super::CreateTableTask;
#[test]
fn test_basic_ser_de_create_table_task() {
let schema = SchemaBuilder::default().build().unwrap();
let table_info = test_table_info(1025, "foo", "bar", "baz", Arc::new(schema));
let task = CreateTableTask::new(
CreateTableExpr::default(),
Vec::new(),
RawTableInfo::from(table_info),
);
let output = serde_json::to_vec(&task).unwrap();
let de = serde_json::from_slice(&output).unwrap();
assert_eq!(task, de);
}
}


@@ -202,13 +202,13 @@ impl TableRoute {
.iter()
.filter_map(|x| x.leader_peer.as_ref())
.for_each(|p| {
peers.insert(p.clone());
let _ = peers.insert(p.clone());
});
self.region_routes
.iter()
.flat_map(|x| x.follower_peers.iter())
.for_each(|p| {
peers.insert(p.clone());
let _ = peers.insert(p.clone());
});
let mut peers = peers.into_iter().map(Into::into).collect::<Vec<PbPeer>>();
peers.sort_by_key(|x| x.id);


@@ -16,6 +16,7 @@ use std::fmt::{Display, Formatter};
use api::v1::meta::TableName as PbTableName;
use serde::{Deserialize, Serialize};
use table::engine::TableReference;
#[derive(Debug, Clone, Hash, Eq, PartialEq, Deserialize, Serialize)]
pub struct TableName {
@@ -46,6 +47,14 @@ impl TableName {
table_name: table_name.into(),
}
}
pub fn table_ref(&self) -> TableReference<'_> {
TableReference {
catalog: &self.catalog_name,
schema: &self.schema_name,
table: &self.table_name,
}
}
}
impl From<TableName> for PbTableName {


@@ -292,7 +292,7 @@ impl ManagerContext {
fn remove_messages(&self, procedure_ids: &[ProcedureId]) {
let mut messages = self.messages.lock().unwrap();
for procedure_id in procedure_ids {
messages.remove(procedure_id);
let _ = messages.remove(procedure_id);
}
}
@@ -319,7 +319,7 @@ impl ManagerContext {
while let Some((id, finish_time)) = finished_procedures.front() {
if finish_time.elapsed() > ttl {
ids_to_remove.push(*id);
finished_procedures.pop_front();
let _ = finished_procedures.pop_front();
} else {
// The rest procedures are finished later, so we can break
// the loop.
@@ -335,7 +335,7 @@ impl ManagerContext {
let mut procedures = self.procedures.write().unwrap();
for id in ids {
procedures.remove(&id);
let _ = procedures.remove(&id);
}
}
}
@@ -419,7 +419,7 @@ impl LocalManager {
DuplicateProcedureSnafu { procedure_id },
);
common_runtime::spawn_bg(async move {
let _handle = common_runtime::spawn_bg(async move {
// Run the root procedure.
runner.run().await;
});
@@ -434,7 +434,7 @@ impl ProcedureManager for LocalManager {
let mut loaders = self.manager_ctx.loaders.lock().unwrap();
ensure!(!loaders.contains_key(name), LoaderConflictSnafu { name });
loaders.insert(name.to_string(), loader);
let _ = loaders.insert(name.to_string(), loader);
Ok(())
}
@@ -559,7 +559,7 @@ mod test_util {
pub(crate) fn new_object_store(dir: &TempDir) -> ObjectStore {
let store_dir = dir.path().to_str().unwrap();
let mut builder = Builder::default();
builder.root(store_dir);
let _ = builder.root(store_dir);
ObjectStore::new(builder).unwrap().finish()
}
}
@@ -742,7 +742,7 @@ mod tests {
manager.recover().await.unwrap();
// The manager should submit the root procedure.
assert!(manager.procedure_state(root_id).await.unwrap().is_some());
let _ = manager.procedure_state(root_id).await.unwrap().unwrap();
// Since the mocked root procedure doesn't actually submit subprocedures, there is no
// related state.
assert!(manager.procedure_state(child_id).await.unwrap().is_none());
@@ -770,13 +770,13 @@ mod tests {
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
manager
assert!(manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(procedure),
})
.await
.unwrap();
.is_ok());
assert!(manager
.procedure_state(procedure_id)
.await
@@ -877,13 +877,13 @@ mod tests {
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
let procedure_id = ProcedureId::random();
manager
assert!(manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(procedure),
})
.await
.unwrap();
.is_ok());
let mut watcher = manager.procedure_watcher(procedure_id).unwrap();
watcher.changed().await.unwrap();
manager.start().unwrap();
@@ -899,13 +899,13 @@ mod tests {
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
let procedure_id = ProcedureId::random();
manager
assert!(manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(procedure),
})
.await
.unwrap();
.is_ok());
let mut watcher = manager.procedure_watcher(procedure_id).unwrap();
watcher.changed().await.unwrap();
tokio::time::sleep(Duration::from_millis(10)).await;


@@ -88,7 +88,7 @@ impl LockMap {
// expect that a procedure should not wait for two locks simultaneously.
lock.waiters.push_back(meta.clone());
} else {
locks.insert(key.to_string(), Lock::from_owner(meta));
let _ = locks.insert(key.to_string(), Lock::from_owner(meta));
return;
}
@@ -111,7 +111,7 @@ impl LockMap {
if !lock.switch_owner() {
// Nobody waits for this lock, so we can remove the lock entry.
locks.remove(key);
let _ = locks.remove(key);
}
}
}


@@ -332,7 +332,7 @@ impl Runner {
// Add the id of the subprocedure to the metadata.
self.meta.push_child(procedure_id);
common_runtime::spawn_bg(async move {
let _handle = common_runtime::spawn_bg(async move {
// Run the root procedure.
runner.run().await
});


@@ -388,6 +388,6 @@ mod tests {
StatusCode::Unexpected,
))));
assert!(state.is_failed());
assert!(state.error().is_some());
let _ = state.error().unwrap();
}
}


@@ -198,7 +198,7 @@ impl ProcedureStore {
entry.1 = value;
}
} else {
procedure_key_values.insert(curr_key.procedure_id, (curr_key, value));
let _ = procedure_key_values.insert(curr_key.procedure_id, (curr_key, value));
}
}
@@ -211,7 +211,7 @@ impl ProcedureStore {
// procedures are loaded.
continue;
};
messages.insert(procedure_id, message);
let _ = messages.insert(procedure_id, message);
} else {
finished_ids.push(procedure_id);
}
@@ -331,7 +331,7 @@ mod tests {
fn procedure_store_for_test(dir: &TempDir) -> ProcedureStore {
let store_dir = dir.path().to_str().unwrap();
let mut builder = Builder::default();
builder.root(store_dir);
let _ = builder.root(store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
ProcedureStore::from_object_store(object_store)


@@ -173,7 +173,7 @@ mod tests {
let dir = create_temp_dir("state_store");
let store_dir = dir.path().to_str().unwrap();
let mut builder = Builder::default();
builder.root(store_dir);
let _ = builder.root(store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
let state_store = ObjectStateStore::new(object_store);
@@ -244,7 +244,7 @@ mod tests {
let dir = create_temp_dir("state_store_list");
let store_dir = dir.path().to_str().unwrap();
let mut builder = Builder::default();
builder.root(store_dir);
let _ = builder.root(store_dir);
let object_store = ObjectStore::new(builder).unwrap().finish();
let state_store = ObjectStateStore::new(object_store);


@@ -22,6 +22,7 @@ use datafusion::arrow::datatypes::SchemaRef as DfSchemaRef;
use datafusion::error::Result as DfResult;
pub use datafusion::execution::context::{SessionContext, TaskContext};
use datafusion::physical_plan::expressions::PhysicalSortExpr;
use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet};
pub use datafusion::physical_plan::Partitioning;
use datafusion::physical_plan::Statistics;
use datatypes::schema::SchemaRef;
@@ -69,6 +70,10 @@ pub trait PhysicalPlan: Debug + Send + Sync {
partition: usize,
context: Arc<TaskContext>,
) -> Result<SendableRecordBatchStream>;
fn metrics(&self) -> Option<MetricsSet> {
None
}
}
/// Adapt DataFusion's [`ExecutionPlan`](DfPhysicalPlan) to GreptimeDB's [`PhysicalPlan`].
@@ -76,11 +81,16 @@ pub trait PhysicalPlan: Debug + Send + Sync {
pub struct PhysicalPlanAdapter {
schema: SchemaRef,
df_plan: Arc<dyn DfPhysicalPlan>,
metric: ExecutionPlanMetricsSet,
}
impl PhysicalPlanAdapter {
pub fn new(schema: SchemaRef, df_plan: Arc<dyn DfPhysicalPlan>) -> Self {
Self { schema, df_plan }
Self {
schema,
df_plan,
metric: ExecutionPlanMetricsSet::new(),
}
}
pub fn df_plan(&self) -> Arc<dyn DfPhysicalPlan> {
@@ -127,15 +137,21 @@ impl PhysicalPlan for PhysicalPlanAdapter {
partition: usize,
context: Arc<TaskContext>,
) -> Result<SendableRecordBatchStream> {
let baseline_metric = BaselineMetrics::new(&self.metric, partition);
let df_plan = self.df_plan.clone();
let stream = df_plan
.execute(partition, context)
.context(error::GeneralDataFusionSnafu)?;
let adapter = RecordBatchStreamAdapter::try_new(stream)
let adapter = RecordBatchStreamAdapter::try_new_with_metrics(stream, baseline_metric)
.context(error::ConvertDfRecordBatchStreamSnafu)?;
Ok(Box::pin(adapter))
}
fn metrics(&self) -> Option<MetricsSet> {
self.df_plan.metrics()
}
}
#[derive(Debug)]
@@ -196,6 +212,10 @@ impl DfPhysicalPlan for DfPhysicalPlanAdapter {
fn statistics(&self) -> Statistics {
Statistics::default()
}
fn metrics(&self) -> Option<MetricsSet> {
self.0.metrics()
}
}
#[cfg(test)]
@@ -353,7 +373,7 @@ mod test {
Arc::new(Schema::try_from(df_schema.clone()).unwrap()),
Arc::new(EmptyExec::new(true, df_schema.clone())),
);
assert!(plan.df_plan.as_any().downcast_ref::<EmptyExec>().is_some());
let _ = plan.df_plan.as_any().downcast_ref::<EmptyExec>().unwrap();
let df_plan = DfPhysicalPlanAdapter(Arc::new(plan));
assert_eq!(df_schema, df_plan.schema());


@@ -20,6 +20,7 @@ use std::task::{Context, Poll};
use datafusion::arrow::datatypes::SchemaRef as DfSchemaRef;
use datafusion::error::Result as DfResult;
use datafusion::parquet::arrow::async_reader::{AsyncFileReader, ParquetRecordBatchStream};
use datafusion::physical_plan::metrics::BaselineMetrics;
use datafusion::physical_plan::RecordBatchStream as DfRecordBatchStream;
use datafusion_common::DataFusionError;
use datatypes::schema::{Schema, SchemaRef};
@@ -115,13 +116,31 @@ impl Stream for DfRecordBatchStreamAdapter {
pub struct RecordBatchStreamAdapter {
schema: SchemaRef,
stream: DfSendableRecordBatchStream,
metrics: Option<BaselineMetrics>,
}
impl RecordBatchStreamAdapter {
pub fn try_new(stream: DfSendableRecordBatchStream) -> Result<Self> {
let schema =
Arc::new(Schema::try_from(stream.schema()).context(error::SchemaConversionSnafu)?);
Ok(Self { schema, stream })
Ok(Self {
schema,
stream,
metrics: None,
})
}
pub fn try_new_with_metrics(
stream: DfSendableRecordBatchStream,
metrics: BaselineMetrics,
) -> Result<Self> {
let schema =
Arc::new(Schema::try_from(stream.schema()).context(error::SchemaConversionSnafu)?);
Ok(Self {
schema,
stream,
metrics: Some(metrics),
})
}
}
@@ -135,6 +154,12 @@ impl Stream for RecordBatchStreamAdapter {
type Item = Result<RecordBatch>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let timer = self
.metrics
.as_ref()
.map(|m| m.elapsed_compute().clone())
.unwrap_or_default();
let _guard = timer.timer();
match Pin::new(&mut self.stream).poll_next(cx) {
Poll::Pending => Poll::Pending,
Poll::Ready(Some(df_record_batch)) => {


@@ -164,7 +164,7 @@ impl RecordBatch {
vector.clone()
};
vectors.insert(column_name.clone(), vector);
let _ = vectors.insert(column_name.clone(), vector);
}
Ok(vectors)


@@ -172,8 +172,7 @@ mod tests {
}
async fn call(&mut self) -> Result<()> {
self.n.fetch_add(1, Ordering::Relaxed);
let _ = self.n.fetch_add(1, Ordering::Relaxed);
Ok(())
}
}

Some files were not shown because too many files have changed in this diff.