Compare commits

...

118 Commits

Author SHA1 Message Date
Lei, HUANG
975b8c69e5 fix(sqlness): redact all volatile text (#4583)
 Add SQLNESS replacements for RoundRobinBatch and region patterns
2024-08-19 08:04:54 +00:00
Weny Xu
8036b44347 chore: setup kafka before downloading binary step (#4582) 2024-08-19 06:44:33 +00:00
Zhenchi
4c72b3f3fe chore: bump version to v0.9.2 (#4581)
chore: bump version to 0.9.2

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-08-19 06:11:36 +00:00
Weny Xu
76dc906574 feat(log_store): introduce the CollectionTask (#4530)
* feat: introduce the `CollectionTask`

* feat: add config of index collector

* chore: remove unused code

* feat: truncate indexes

* chore: apply suggestions from CR

* chore: update config examples

* refactor: retrieve latest offset while dumping indexes

* chore: print warn
2024-08-19 03:48:35 +00:00
Ran Joe
2a73e0937f fix(common_version): short_version with empty branch (#4572) 2024-08-19 03:14:49 +00:00
Zhenchi
c8de8b80f4 fix(fulltext-index): single segment is not sufficient for >50M rows SST (#4552)
* fix(fulltext-index): single segment is not sufficient for a >50M rows SST

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: update doc comment

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-08-16 09:14:33 +00:00
LFC
ec59ce5c9a feat: able to handle concurrent region edit requests (#4569)
* feat: able to handle concurrent region edit requests

* resolve PR comments
2024-08-16 03:29:03 +00:00
liyang
f578155602 feat: add GcsConfig credential field (#4568) 2024-08-16 03:11:20 +00:00
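A minimal TOML sketch of where the new credential field could sit among the existing Gcs storage options; the key names follow the `storage.*` entries in the config reference updated later in this compare, and all values are placeholders.
[storage]
type = "Gcs"
bucket = "my-bucket"
scope = "https://www.googleapis.com/auth/devstorage.read_write"   # placeholder scope
# Provide either a credential file path or the new inline credential.
credential_path = "/path/to/service-account.json"
credential = "<base64-encoded-service-account-json>"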
Weny Xu
d1472782d0 chore(log_store): remove redundant metrics (#4570)
chore(log_store): remove unused metrics
2024-08-16 02:23:21 +00:00
Lanqing Yang
93be81c041 feat: implement postgres kvbackend (#4421) 2024-08-14 22:49:32 +00:00
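A hedged sketch of selecting a PostgreSQL-backed kvbackend in the metasrv configuration; the `backend` value and key names are assumptions for illustration, and only the connection-string format is taken from the `GT_POSTGRES_ENDPOINTS` variable added to CI elsewhere in this compare.
# metasrv configuration (illustrative; key names and the backend value are assumed)
backend = "postgres_store"
store_addrs = ["postgres://greptimedb:admin@127.0.0.1:5432/postgres"]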
discord9
2c3fccb516 feat(flow): add eval_batch for ScalarExpr (#4551)
* refactor: better perf flow

* feat(WIP): batching proc

* feat: UnaryFunc::eval_batch untested

* feat: BinaryFunc::eval_batch untested

* feat: VariadicFunc::eval_batch untested

* feat: literal eval_batch

* refactor: move DfScalarFunc to separate file

* chore: remove unused imports

* feat: eval_batch df func&ifthen

* chore: remove unused file

* refactor: use Batch type

* chore: remove unused

* chore: remove a done TODO

* refactor: per review

* chore: import

* refactor: eval_batch if then

* chore: typo
2024-08-14 11:29:30 +00:00
Lei, HUANG
c1b1be47ba fix: append table stats (#4561)
* fix: append table stats

* fix: clippy
2024-08-14 09:01:42 +00:00
Weny Xu
0f85037024 chore: remove unused code (#4559) 2024-08-14 06:55:54 +00:00
discord9
f88705080b chore: set topic to 3 for sqlness test (#4560) 2024-08-14 06:32:26 +00:00
discord9
cbb06cd0c6 feat(flow): add some metrics (#4539)
* feat: add some metrics

* fix: tmp rate limiter

* feat: add task count metrics

* refactor: use bounded channel anyway

* refactor: better metrics
2024-08-14 03:23:49 +00:00
discord9
b59a93dfbc chore: Helper function to convert Vec<Value> to VectorRef (#4546)
* chore: `try_from_row_into_vector` helper

* test: try_from_row

* refactor: simplify with builder

* fix: decimal set prec&scale

* refactor: more simplify

* refactor: use ref
2024-08-14 03:11:44 +00:00
localhost
202c730363 perf: Optimizing pipeline performance (#4390)
* chore: improve pipeline performance

* chore: use arc to improve time type

* chore: improve pipeline coerce

* chore: add vec refactor

* chore: add vec pp

* chore: improve pipeline

* inprocess

* chore: set log ingester use new pipeline

* chore: fix some error by pr comment

* chore: fix typo

* chore: use enum_dispatch to simplify code

* chore: some minor fix

* chore: format code

* chore: update by pr comment

* chore: fix typo

* chore: make clippy happy

* chore: fix by pr comment

* chore: remove epoch and date process add new timestamp process

* chore: add more test for pipeline

* chore: restore epoch and date processor

* chore: compatibility issue

* chore: fix by pr comment

* chore: move the evaluation out of the loop

* chore: fix by pr comment

* chore: fix dissect output key filter

* chore: fix transform output greptime value has order error

* chore: keep pipeline transform output order

* chore: revert tests

* chore: simplify pipeline prepare implementation

* chore: add test for timestamp pipeline processor

* chore: make clippy happy

* chore: replace is_some check to match

---------

Co-authored-by: shuiyisong <xixing.sys@gmail.com>
2024-08-13 11:32:04 +00:00
zyy17
63e1892dc1 refactor(plugin): add SetupPlugin and StartPlugin error (#4554) 2024-08-13 11:22:48 +00:00
Lei, HUANG
216bce6973 perf: count(*) for append-only tables (#4545)
* feat: support fast count(*) for append-only tables

* fix: total_rows stats in time series memtable

* fix: sqlness result changes for SinglePartitionScanner -> StreamScanAdapter

* fix: some cr comments
2024-08-13 09:27:50 +00:00
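A short SQL sketch of the case this optimizes: on an append-only table, count(*) can be answered from table statistics instead of a full scan. The `append_mode` table option name is assumed here for illustration.
-- append-only table; option name assumed for illustration
CREATE TABLE http_logs (
  ts TIMESTAMP TIME INDEX,
  host STRING,
  path STRING
) WITH ('append_mode' = 'true');

-- served by the fast count path introduced in this change
SELECT count(*) FROM http_logs;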
Yingwen
4466fee580 docs: update grafana readme (#4550)
* docs: update grafana readme

* docs: simplify example
2024-08-13 08:45:06 +00:00
shuiyisong
5aa4c70057 chore: update validator signature (#4548) 2024-08-13 08:06:12 +00:00
Yingwen
72a1732fb4 docs: Adds more panels to grafana dashboards (#4540)
* docs: update standalone grafana

* docs: add more panels to grafana dashboards

* docs: replace source name

* docs: bump dashboard version

* docs: update hit rate expr

* docs: greptime_pod to instance, add panels for cache
2024-08-13 06:29:28 +00:00
Weny Xu
c821d21111 feat(log_store): introduce the IndexCollector (#4461)
* feat: introduce the IndexCollector

* refactor: separate BackgroundProducerWorker code into files

* feat: introduce index related operations

* feat: introduce the `GlobalIndexCollector`

* refactor: move collector to index mod

* refactor: refactor `GlobalIndexCollector`

* chore: remove unused collector.rs

* chore: add comments

* chore: add comments

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2024-08-13 06:15:24 +00:00
Weny Xu
2e2eacf3b2 feat: add SASL and TLS config for Kafka client (#4536)
* feat: add SASL and TLS config

* feat: add SASL/PLAIN and TLS config for Kafka client

* chore: use `ring`

* feat: support SASL SCRAM-SHA-256 and SCRAM-SHA-512

* fix: correct unit test

* test: add integration test

* chore: apply suggestions from CR

* refactor: introduce `KafkaConnectionConfig`

* chore: refine toml examples

* docs: add missing fields

* chore: refine examples

* feat: allow no server ca cert

* chore: refine examples

* chore: fix clippy

* feat: load system ca certs

* chore: fmt toml

* chore: unpin version

* Update src/common/wal/src/error.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2024-08-12 12:27:11 +00:00
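A hedged TOML sketch of the shape such a Kafka WAL connection could take; the `[wal.sasl]` and `[wal.tls]` section and field names are illustrative assumptions, while the supported SASL mechanisms (PLAIN, SCRAM-SHA-256, SCRAM-SHA-512) come from the PR description.
[wal]
provider = "kafka"
broker_endpoints = ["kafka.example.com:9093"]

# Section and field names below are assumptions for illustration.
[wal.sasl]
type = "SCRAM-SHA-512"       # PLAIN / SCRAM-SHA-256 / SCRAM-SHA-512 per the PR
username = "greptimedb"
password = "secret"

[wal.tls]
server_ca_cert_path = "/etc/certs/ca.crt"   # the PR also allows omitting this and using system CA certs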
Ruihang Xia
9bcaeaaa0e refactor: reuse aligned ts array in range manipulate exec (#4535)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-08-12 06:26:11 +00:00
Weny Xu
90cfe276b4 chore: upload kind logs (#4544) 2024-08-12 05:01:13 +00:00
JohnsonLee
6694d2a930 fix: change the type of oid in pg_namespace to u32 (#4541)
* fix:  change the type of oid in pg_namespace to u32

* fix: header and correct logic of update oid
2024-08-10 15:06:14 +00:00
Ning Sun
9532ffb954 fix: configuration example for selector (#4532)
* fix: configuration example for selector

* docs: update config docs

* test: update unit tests for configuration in meta
2024-08-09 09:51:05 +00:00
Weny Xu
665b7e5c6e perf: merge small byte ranges for optimized fetching (#4520) 2024-08-09 08:17:54 +00:00
Weny Xu
27d9aa0f3b fix: rollback only if dropping the metric physical table fails (#4525)
* fix: rollback only if dropping the metric physical table fails

* chore: apply suggestions from CR
2024-08-09 08:01:11 +00:00
discord9
8f3293d4fb fix: larger stack size in debug mode (#4521)
* fix: larger stack size in debug mode

* chore: typo

* chore: clippy

* chore: per review

* chore: rename thread

* chore: per review

* refactor: better looking cfg

* chore: async main entry
2024-08-09 07:01:20 +00:00
LFC
7dd20b0348 chore: make mysql server version changeable (#4531) 2024-08-09 03:43:43 +00:00
zyy17
4c1a3f29c0 ci: download the latest stable released version by default and do some small refactoring (#4529)
refactor: download the latest stable released version by default and do some small refactoring
2024-08-08 07:46:09 +00:00
Jeremyhi
0d70961448 feat: change the default selector to RoundRobin (#4528)
* feat: change the default selector to rr

* Update src/meta-srv/src/selector.rs

* fix: unit test
2024-08-08 04:58:20 +00:00
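For reference, a one-line metasrv TOML sketch of what the new default corresponds to; the exact key spelling is an assumption.
# metasrv (illustrative; key name assumed)
selector = "round_robin"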
LFC
a75cfaa516 chore: update snafu to make clippy happy (#4507)
* chore: update snafu to make clippy happy

* fix ci
2024-08-07 16:12:00 +00:00
Lei, HUANG
aa3f53f08a fix: install script (#4527)
fix: install script always installs v0.9.0-nightly-20240709 instead of the latest nightly
2024-08-07 14:07:32 +00:00
Ruihang Xia
8f0959fa9f fix: fix incorrect result of topk with cte (#4523)
* fix: fix incorrect result of topk with cte

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up cargo toml

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-08-07 09:13:38 +00:00
Weny Xu
4a3982ca60 chore: use configData (#4522)
* chore: use `configData`

* chore: add an empty line
2024-08-07 07:43:04 +00:00
Yingwen
559219496d ci: fix windows temp path (#4518) 2024-08-06 13:53:12 +00:00
LFC
685aa7dd8f ci: squeeze some disk space for complex fuzz tests (#4519)
* ci: squeeze some disk space for complex fuzz tests

* Update .github/workflows/develop.yml

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2024-08-06 11:52:34 +00:00
Lei, HUANG
be5364a056 chore: support swcs as the short name for strict window compaction (#4517) 2024-08-06 07:38:07 +00:00
Weny Xu
a25d9f736f chore: set default otlp_endpoint (#4508)
* chore: set default `otlp_endpoint`

* fix: fix ci
2024-08-06 06:48:14 +00:00
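The new default matches the `logging.otlp_endpoint` entry in the config reference updated later in this compare; a minimal TOML sketch:
[logging]
enable_otlp_tracing = true
otlp_endpoint = "http://localhost:4317"   # now the documented default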
dependabot[bot]
2cd4a78f17 build(deps): bump zerovec from 0.10.2 to 0.10.4 (#4335)
Bumps [zerovec](https://github.com/unicode-org/icu4x) from 0.10.2 to 0.10.4.
- [Release notes](https://github.com/unicode-org/icu4x/releases)
- [Changelog](https://github.com/unicode-org/icu4x/blob/main/CHANGELOG.md)
- [Commits](https://github.com/unicode-org/icu4x/commits/ind/zerovec@0.10.4)

---
updated-dependencies:
- dependency-name: zerovec
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-08-06 00:40:03 +00:00
dependabot[bot]
188e182d75 build(deps): bump zerovec-derive from 0.10.2 to 0.10.3 (#4346)
Bumps [zerovec-derive](https://github.com/unicode-org/icu4x) from 0.10.2 to 0.10.3.
- [Release notes](https://github.com/unicode-org/icu4x/releases)
- [Changelog](https://github.com/unicode-org/icu4x/blob/main/CHANGELOG.md)
- [Commits](https://github.com/unicode-org/icu4x/commits/ind/zerovec-derive@0.10.3)

---
updated-dependencies:
- dependency-name: zerovec-derive
  dependency-type: indirect
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-08-05 23:58:30 +00:00
Yingwen
d64cc79ab4 docs: add v0.9.1 bench result (#4511) 2024-08-05 16:53:32 +00:00
discord9
e6cc4df8c8 feat: flow recreate on reboot (#4509)
* feat: flow reboot clean

* refactor: per review

* refactor: per review

* test: sqlness flow reboot
2024-08-05 13:57:48 +00:00
LFC
803780030d fix: too large shadow-rs consts (#4506) 2024-08-05 07:05:14 +00:00
Weny Xu
79f10d0415 chore: reduce fuzz tests in CI (#4505) 2024-08-05 06:56:41 +00:00
Weny Xu
3937e67694 feat: introduce new kafka topic consumer respecting WAL index (#4424)
* feat: introduce new kafka topic consumer respecting WAL index

* chore: fmt

* chore: fmt toml

* chore: add comments

* feat: merge close ranges

* fix: fix unit tests

* chore: fix typos

* chore: use loop

* chore: use unstable sort

* chore: use gt instead of gte

* chore: add comments

* chore: rename to `current_entry_id`

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor: minor refactor

* chore: apply suggestions from CR
2024-08-05 06:56:25 +00:00
Weny Xu
4c93fe6c2d chore: bump rust-postgres to 0.7.11 (#4504) 2024-08-05 04:26:46 +00:00
LFC
c4717abb68 chore: bump shadow-rs version to set the path to find the correct git repo (#4494) 2024-08-05 02:24:12 +00:00
shuiyisong
3b701d8f5e test: more on processors (#4493)
* test: add date test

* test: add epoch test

* test: add letter test and complete some others

* test: add urlencoding test

* chore: typo
2024-08-04 08:29:31 +00:00
Weny Xu
cb4cffe636 chore: bump opendal version to 0.48 (#4499) 2024-08-04 00:46:04 +00:00
Ruihang Xia
cc7f33c90c fix(tql): avoid unwrap on parsing tql query (#4502)
* fix(tql): avoid unwrap on parsing tql query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add unit test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-08-03 20:58:53 +00:00
Ruihang Xia
fe1cfbf2b3 fix: partition column with mixed quoted and unquoted idents (#4491)
* fix: partition column with mixed quoted and unquoted idents

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update error message

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-08-02 09:06:31 +00:00
Yingwen
ded874da04 feat: enlarge default page cache size (#4490) 2024-08-02 07:24:20 +00:00
Lei, HUANG
fe2d29a2a0 chore: bump version v0.9.1 (#4486)
Update package versions to 0.9.1

 - Bump version for multiple packages from 0.9.0 to 0.9.1 in Cargo.lock
2024-08-02 07:10:05 +00:00
Yingwen
b388829a96 fix: avoid total size overflow (#4487)
feat: avoid total size overflow
2024-08-02 06:16:37 +00:00
zyy17
8e7c027bf5 ci: make docker image args configurable from env vars (#4484)
refactor: make docker image args configurable from env vars
2024-08-02 03:17:09 +00:00
Lei, HUANG
9d5d7c1f9a feat(compaction): add file number limits to TWCS compaction (#4481)
* Add file number limits to TWCS compaction

 - Introduce `max_active_window_files` and `max_inactive_window_files` to `TwcsOptions`.

* feat/limit-files-in-windows: Add max active/inactive window files options to mito engine config

* feat/limit-files-in-windows: Add Debug derive to TwcsPicker and implement max file enforcement logging in TWCS compaction

* fix: clippy
2024-08-01 12:42:09 +00:00
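A TOML sketch of the new limits; the two option names are taken from the commit message, but their exact placement in the mito engine config and the values shown are assumptions.
[region_engine.mito]
# Maximum number of files kept in the active / inactive time windows (values are placeholders).
max_active_window_files = 4
max_inactive_window_files = 1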
ZonaHe
efe5eeef14 feat: update dashboard to v0.5.4 (#4483)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2024-08-01 12:19:38 +00:00
Ruihang Xia
ca54b05be3 feat: time poll elapsed for RegionScan plan (#4482)
* feat: time poll elapsed for RegionScan plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* also record await time

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-08-01 12:19:15 +00:00
Jeremyhi
d67314789c feat: export all schemas and data at once in export tool (#4478)
* feat: export all schemas and data at once

* feat: introduce export all to export schemas and data at once

* feat: default value for target

* feat: refactor export target

* chore: fix unit test
2024-08-01 09:14:44 +00:00
Yingwen
6c4b8b63a5 fix: notify flush receiver after write buffer is released (#4476)
* fix: notify the worker after write buffer is released

* feat: worker level region count
2024-08-01 07:15:36 +00:00
Jeremyhi
62a0defd63 feat: improve extract hints (#4479) 2024-08-01 07:06:13 +00:00
Jeremyhi
291d9d55a4 feat: hint options for gRPC insert (#4454)
* feat: hint options for gRPC insert

* chore: unit test for extract_hints

* feat: add integration test for grpc hint

* test: add integration test for hints
2024-08-01 02:59:38 +00:00
Weny Xu
90301a6250 fix: generate unique timestamp for inserting tests (#4472) 2024-07-31 12:19:43 +00:00
shuiyisong
c66d3090b6 fix: prometheus api only returns 200 (#4471)
fix: prometheus api returns http status other than 200
2024-07-31 07:42:50 +00:00
dennis zhuang
656050722c fix: overflow when parsing default value with negative numbers (#4459)
* fix: overflow when parsing default value with negative numbers

* test: adds sqlness test
2024-07-31 07:41:49 +00:00
Ning Sun
b741a7181b feat: track channels with query context and w/rcu (#4448)
* feat: add source channel to meter recorders

* feat: provide channel for query context

* fix: testing and extension get for query context

* chore: revert cargo toml structure changes

* fix: querycontext modification for prometheus and pipeline

* chore: switch git dependency to main branches

* chore: remove TODO

* refactor: rename other to unknown

---------

Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>
2024-07-31 07:30:50 +00:00
Weny Xu
dd23d47743 chore(ci): bring back chaos tests (#4456)
* Revert "chore: temporarily disable fuzz chaos tests (#4457)"

This reverts commit f0c953f84a.

* chore: update config

* Update .github/actions/setup-greptimedb-cluster/with-remote-wal.yaml

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2024-07-31 07:29:31 +00:00
Ran Miller
80aaa7725e docs(contributing): replace expired links (#4468) 2024-07-31 06:11:30 +00:00
Ran Miller
c24de8b908 refactor(servers): improve postgres error message (#4463)
* refactor(servers): improve postgres error message

* refactor(servers): remove numerical representation of ErrorSeverity
2024-07-31 06:06:15 +00:00
Yingwen
f382a7695f perf: reduce lock scope and improve log (#4453)
* feat: refine logs for scan

* feat: improve build parts and  unordered scan metrics

* feat: change to debug log

* fix: release lock before reading part

* test: replace region id

* test: fix sqlness

* chore: add todo

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2024-07-31 04:07:34 +00:00
Jeremyhi
1ea43da9ea feat: default export catalog name (#4464)
* feat: default export catalog name

* chore: default catalog name
2024-07-31 03:39:39 +00:00
dennis zhuang
6113f46284 docs: tweak readme (#4465) 2024-07-31 02:35:29 +00:00
LFC
6d8a502430 chore: add more metrics about parquet and cache (#4410)
* chore: add more metrics about parquet and cache

* resolve PR comments

* resolve PR comments

* resolve PR comments

* resolve PR comments
2024-07-30 12:01:49 +00:00
Ruihang Xia
2d992f4f12 fix: check_partition uses unqualified name (#4452)
* fix: check_partition uses unqualified name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-07-30 11:28:28 +00:00
Ruihang Xia
7daf24c47f feat: remove dedicated runtime for grpc, mysql and pg protocols (#4436)
* feat: remove dedicated runtime for grpc, mysql and pg protocols

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove other runtimes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* spawn compact task into compact_runtime

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refine naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/servers/tests/mysql/mysql_server_test.rs

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* turn off fuzz test matrix fail-fast option

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: update rt config for ci tests

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-07-30 06:17:58 +00:00
shuiyisong
567f5105bf fix: missing pre_write check on prometheus remote write (#4460)
fix: missing pre_write check on prometheus remote write
2024-07-30 04:55:19 +00:00
Yingwen
78962015dd ci: keep sqlness log by default (#4449)
* ci: keep sqlness log by default

* chore: not preserve state in makefile by default

* ci: use make
2024-07-29 17:11:24 +00:00
taobo
1138f32af9 feat: support setting time range in Copy From statement (#4405)
* feat: support setting time range in Copy From statement

* test: add batch_filter_test

* fix: ts data type inconsistent error

* test: add sqlness test for copy from with statement

* fix: sqlness result error

* fix: cr comments
2024-07-29 16:55:19 +00:00
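A hypothetical SQL sketch of the feature; the START_TIME/END_TIME option names are assumptions used only to illustrate restricting the imported rows to a time range.
COPY http_logs FROM '/data/logs.parquet'
WITH (
  FORMAT = 'parquet',
  START_TIME = '2024-07-01 00:00:00',   -- option names assumed for illustration
  END_TIME   = '2024-07-02 00:00:00'
);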
shuiyisong
53fc14a50b fix: use status code to http status mapping in error IntoResponse (#4455) 2024-07-29 16:37:04 +00:00
Ruihang Xia
1895a5478b feat: track prometheus HTTP API's query latency (#4458)
* feat: track prometheus HTTP API's query latency

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update grafana config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: Update src/servers/src/metrics.rs

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-07-29 15:00:09 +00:00
Weny Xu
f0c953f84a chore: temporarily disable fuzz chaos tests (#4457) 2024-07-29 13:23:40 +00:00
zyy17
1a38f36d2d refactor!: Remove Mode from FrontendOptions (#4401)
refactor: remove `Mode` from `FrontendOptions`

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2024-07-29 06:57:01 +00:00
Zhenchi
cb94bd45d3 fix(fulltext-search): prune rows in row group forget to take remainder (#4447)
* fix(fulltext-search): prune rows in row group forget to take remainder

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* test: add unit test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-07-29 06:20:07 +00:00
Ning Sun
b298b35b3b feat: show root cause and db name on the error line (#4442)
* feat: show root cause on the error line

* feat: show root error for grpc

* feat: add error information for http error

* feat: add db information on error mysql/postgres logs
2024-07-29 03:59:42 +00:00
Weny Xu
164232e073 fix: use heartbeat runtime instead of background runtime (#4445) 2024-07-29 03:29:30 +00:00
JohnsonLee
9a5fa49955 feat: support pg_namespace, pg_class and related psql command (#4428)
* feat: add function 'pg_catalog.pg_table_is_visible'

* feat: add 'pg_class' and 'pg_namespace', now we can run '\d' and '\dt'!

* refactor: move memory_table::tables to utils::tables

* refactor: move out predicate to system_schema to reuse it

* feat: predicates pushdown

* test: add pg_namespace, pg_class related sqlness test

* fix: typos and license header

* fix: sqlness test

* refactor: use `expect` instead of `unwrap` here

* refactor: remove the `information_schema::utils` mod

* doc: make the comment in pg_get_userbyid more precise

* doc: add TODO and comment in pg_catalog

* fix: typo

* fix: sqlness

* doc: change to comment on PGClassBuilder to TODO
2024-07-28 12:04:54 +00:00
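With pg_catalog.pg_class and pg_catalog.pg_namespace available, psql meta-commands such as \d and \dt work, as the PR notes. An equivalent direct query is sketched below, using standard pg_catalog column names that are assumed here for illustration.
SELECT c.relname, n.nspname
FROM pg_catalog.pg_class c
JOIN pg_catalog.pg_namespace n ON n.oid = c.relnamespace
WHERE pg_catalog.pg_table_is_visible(c.oid);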
dennis zhuang
92d6d4e64a docs: update project status (#4440)
* docs: update project status

* docs: update project status
2024-07-27 05:24:09 +00:00
discord9
021ec7b6ac feat(flow): flush_flow function (#4416)
* refactor: df err variant

* WIP

* chore: update proto version

* chore: revert mistaken rust-toolchain

* feat(WIP): added FlowService to QueryEngine

* refactor: move flow service to operator

* refactor: flush use flow name not id

* refactor: use full path in macro

* feat: flush flow

* feat: impl flush flow

* chore: remove unused

* chore: meaningful response

* chore: remove unused

* chore: clippy

* fix: flush_flow with proper blocking

* test: sqlness tests added back for flow

* test: better predicate for flush_flow

* refactor: rwlock

* fix: flush lock

* fix: flush lock write then drop

* test: add a new flow sqlness test

* fix: sqlness testcase

* chore: style

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2024-07-26 23:04:13 +00:00
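A hedged sketch of invoking the new flush by flow name; whether it is exposed exactly as a SQL function with this signature is an assumption.
-- flushes the flow named 'my_flow'; exact calling convention assumed
SELECT flush_flow('my_flow');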
dennis zhuang
0710e6ff36 fix: remove to_timezone function (#4439)
fix: remove to_timezone, it doesn't make sense
2024-07-26 07:40:07 +00:00
dennis zhuang
db3a07804e fix: information_schema tables and views column value (#4438) 2024-07-26 07:39:58 +00:00
Lei, HUANG
bdd3d2d9ce chore: add dynamic cache size adjustment for InvertedIndexConfig (#4433)
* Add dynamic cache size adjustment for InvertedIndexConfig

* Increase cache sizes in integration tests for HTTP

 - Updated `metadata_cache_size` from 32MiB to 64MiB

* Remove cache size settings from config and update drop_lines_with_inconsistent_results function to handle them

* Add cache size configurations for inverted index metadata and content

 - Introduced `metadata_cache_size` with a default of 64MiB.
 - Introduced `content_cache_size` with a default of 128MiB.

* chore/index-content-cache-default-size: Add cache size configuration options for Mito engine's inverted index
2024-07-26 03:36:20 +00:00
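A TOML sketch using the two new cache options with the defaults documented in the updated config reference later in this compare.
[region_engine.mito.inverted_index]
metadata_cache_size = "64MiB"    # documented default
content_cache_size = "128MiB"    # documented default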
zyy17
b81d3a28e6 refactor: add RetryInterceptor to print detailed error (#4434) 2024-07-25 11:52:28 +00:00
Weny Xu
89b86c87a2 chore: add docs for config file (#4432) 2024-07-25 08:11:10 +00:00
Lei, HUANG
0b0ed03ee6 fix(metrics): RowGroupLastRowCachedReader metrics (#4418)
fix/reader-metrics:
 Refactor cache hit/miss logic and update metrics in mito2

 - Simplify cache retrieval logic in CacheManager by removing inline update_hit_miss function call.
 - Add separate functions for incrementing cache hit and miss metrics.
 - Update RowGroupLastRowCachedReader to use new cache hit/miss functions and refactor to new helper methods for creating Hit and Miss variants.
2024-07-25 06:45:43 +00:00
dennis zhuang
ea4a71b387 docs: update readme (#4431) 2024-07-25 06:17:45 +00:00
dennis zhuang
4cd5ec7769 docs: update readme (#4430) 2024-07-25 02:42:18 +00:00
Ruihang Xia
c8f4a85720 chore: update grafana dashboard to reflect recent metric changes (#4417)
* chore: update grafana dashboard to reflect recent metric changes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: add a blank line at the end

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2024-07-24 20:05:44 +00:00
discord9
024dac8171 chore: add a compile cfg for python in cmd package (#4406)
* chore: add a compile cfg for python

* fix: feature gate additive turn off default features in workspace&add cfg in place

* chore: remove unused in different cfg
2024-07-24 20:03:53 +00:00
Ran Miller
918be099cd docs(common_error): format enum StatusCode docs (#4427)
* fix: format comments end with . symbol
* docs: add comment for RegionReadonly
* fix: comment error for DatabaseAlreadyExists
2024-07-24 15:54:35 +00:00
Zhenchi
91dbac4141 fix(fulltext-index): clean up 0-value timer (#4423)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-07-24 15:03:36 +00:00
Ran Miller
e935bf7574 refactor: Remove PhysicalOptimizer and LogicalOptimizer trait (#4426)
* refactor(query): Remove LogicalOptimizer trait

* refactor(query): Remove PhysicalOptimizer trait
2024-07-24 13:01:44 +00:00
Ran Miller
f7872654cc refactor(query): Remove PhysicalPlanner trait (#4412) 2024-07-24 03:06:46 +00:00
shuiyisong
547730a467 chore: add metrics for log ingestion (#4411)
* chore: add metrics for log ingestion

* chore: record result as well
2024-07-23 08:05:11 +00:00
Ning Sun
49f22f0fc5 fix: add back AuthBackend which is required by custom auth backend (#4409) 2024-07-23 05:35:29 +00:00
zyy17
2ae2a6674e refactor: add get_storage_path() and get_catalog_and_schema() (#4397)
refactor: add get_storage_path() and get_catalog_and_schema()
2024-07-20 01:55:48 +00:00
Lei, HUANG
c8cf3b1677 fix(wal): handle WAL deletion on region drop (#4400)
Add LogStore trait bound to RegionWorkerLoop and handle WAL deletion on region drop.
2024-07-19 13:24:10 +00:00
Yingwen
7aae19aa8b fix: dictionary key type use u32 (#4396)
* fix: dictionary key type use u32

* fix: fix error while reading content

* fix: bulk memtable dictionary type
2024-07-19 09:51:29 +00:00
Jeremyhi
b90267dd80 feat: export database data (#4382)
* feat: export database data

* feat: export data with time range

* feat: refactor the data dir

* feat: by comment
2024-07-19 09:29:45 +00:00
discord9
9fa9156bde feat: FLOWS table in information_schema&SHOW FLOWS (#4386)
* feat(WIP): flow info table

refactor: better err handling&log

feat: add flow metadata to info schema provider

feat(WIP): info_schema.flows

feat: info_schema.flows table

* fix: err after rebase

* fix: wrong comparison op

* feat: SHOW FLOWS&tests

* refactor: per review

* chore: unused

* refactor: json error

* chore: per review

* test: sqlness

* chore: rm inline error

* refactor: per review
2024-07-19 09:29:36 +00:00
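Two ways to list flows after this change, taken from the commit title; the exact column set of the FLOWS table is not shown here, so SELECT * is used.
SHOW FLOWS;
SELECT * FROM INFORMATION_SCHEMA.FLOWS;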
zyy17
ce900e850a fix: user provider can't be configured by config file or environment variables (#4398) 2024-07-19 08:41:29 +00:00
zyy17
5274c5a407 refactor: add &mut Plugins argument in plugins setup api and remove unnecessary mut (#4389)
refactor: add '&mut Plugins' argument in plugins setup api and remove unnecessary mut

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2024-07-19 08:12:06 +00:00
Yingwen
0b13ac6e16 ci: disable auto review (#4387) 2024-07-18 08:03:37 +00:00
shuiyisong
8ab6136d1c chore: support pattern as pipeline key name (#4368)
* chore: add pattern to processor key name

* fix: typo

* refactor: test
2024-07-18 03:32:26 +00:00
Weny Xu
e39f49fe56 fix: ensure keep alive is completed in time (#4349)
* fix: ensure keep alive is completed in time

* chore: apply suggestions from CR

* chore: use write runtime

* refactor: set META_LEASE_SECS to 5

* chore: set etcd replicas to 1

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* fix: set `MissedTickBehavior::Delay`

* chore: apply suggestions from CR
2024-07-17 06:14:45 +00:00
501 changed files with 24459 additions and 9578 deletions

.coderabbit.yaml (new file, 15 lines)

@@ -0,0 +1,15 @@
# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
language: "en-US"
early_access: false
reviews:
profile: "chill"
request_changes_workflow: false
high_level_summary: true
poem: true
review_status: true
collapse_walkthrough: false
auto_review:
enabled: false
drafts: false
chat:
auto_reply: true


@@ -14,10 +14,11 @@ GT_AZBLOB_CONTAINER=AZBLOB container
GT_AZBLOB_ACCOUNT_NAME=AZBLOB account name
GT_AZBLOB_ACCOUNT_KEY=AZBLOB account key
GT_AZBLOB_ENDPOINT=AZBLOB endpoint
# Settings for gcs test
GT_GCS_BUCKET = GCS bucket
# Settings for gcs test
GT_GCS_BUCKET = GCS bucket
GT_GCS_SCOPE = GCS scope
GT_GCS_CREDENTIAL_PATH = GCS credential path
GT_GCS_CREDENTIAL_PATH = GCS credential path
GT_GCS_CREDENTIAL = GCS credential
GT_GCS_ENDPOINT = GCS end point
# Settings for kafka wal test
GT_KAFKA_ENDPOINTS = localhost:9092


@@ -62,12 +62,13 @@ runs:
# Get proper backtraces in mac Sonoma. Currently there's an issue with the new
# linker that prevents backtraces from getting printed correctly.
#
# <https://github.com/rust-lang/rust/issues/113783>
# <https://github.com/rust-lang/rust/issues/113783>
- name: Run integration tests
if: ${{ inputs.disable-run-tests == 'false' }}
shell: bash
env:
env:
CARGO_BUILD_RUSTFLAGS: "-Clink-arg=-Wl,-ld_classic"
SQLNESS_OPTS: "--preserve-state"
run: |
make test sqlness-test
@@ -81,7 +82,7 @@ runs:
- name: Build greptime binary
shell: bash
env:
env:
CARGO_BUILD_RUSTFLAGS: "-Clink-arg=-Wl,-ld_classic"
run: |
make build \


@@ -40,7 +40,7 @@ runs:
- name: Install Python
uses: actions/setup-python@v5
with:
python-version: '3.10'
python-version: "3.10"
- name: Install PyArrow Package
shell: pwsh
@@ -62,13 +62,14 @@ runs:
env:
RUSTUP_WINDOWS_PATH_ADD_BIN: 1 # Workaround for https://github.com/nextest-rs/nextest/issues/1493
RUST_BACKTRACE: 1
SQLNESS_OPTS: "--preserve-state"
- name: Upload sqlness logs
if: ${{ failure() }} # Only upload logs when the integration tests failed.
uses: actions/upload-artifact@v4
with:
name: sqlness-logs
path: /tmp/greptime-*.log
path: C:\Users\RUNNER~1\AppData\Local\Temp\sqlness*
retention-days: 3
- name: Build greptime binary


@@ -2,7 +2,7 @@ name: Setup Etcd cluster
description: Deploy Etcd cluster on Kubernetes
inputs:
etcd-replicas:
default: 3
default: 1
description: "Etcd replicas"
namespace:
default: "etcd-cluster"


@@ -1,18 +1,13 @@
meta:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
datanode:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
compact_rt_size = 2
frontend:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4


@@ -1,29 +1,24 @@
meta:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
[datanode]
[datanode.client]
timeout = "60s"
datanode:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
compact_rt_size = 2
[storage]
cache_path = "/data/greptimedb/s3cache"
cache_capacity = "256MB"
frontend:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
[meta_client]
ddl_timeout = "60s"


@@ -1,25 +1,20 @@
meta:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
[datanode]
[datanode.client]
timeout = "60s"
datanode:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
compact_rt_size = 2
frontend:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
[meta_client]
ddl_timeout = "60s"


@@ -1,9 +1,7 @@
meta:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
[wal]
provider = "kafka"
@@ -15,22 +13,19 @@ meta:
[datanode.client]
timeout = "60s"
datanode:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
compact_rt_size = 2
[wal]
provider = "kafka"
broker_endpoints = ["kafka.kafka-cluster.svc.cluster.local:9092"]
linger = "2ms"
frontend:
config: |-
configData: |-
[runtime]
read_rt_size = 8
write_rt_size = 8
bg_rt_size = 8
global_rt_size = 4
[meta_client]
ddl_timeout = "60s"
@@ -43,3 +38,8 @@ objectStorage:
credentials:
accessKeyId: rootuser
secretAccessKey: rootpass123
remoteWal:
enabled: true
kafka:
brokerEndpoints:
- "kafka.kafka-cluster.svc.cluster.local:9092"


@@ -0,0 +1,30 @@
name: Setup PostgreSQL
description: Deploy PostgreSQL on Kubernetes
inputs:
postgres-replicas:
default: 1
description: "Number of PostgreSQL replicas"
namespace:
default: "postgres-namespace"
postgres-version:
default: "14.2"
description: "PostgreSQL version"
storage-size:
default: "1Gi"
description: "Storage size for PostgreSQL"
runs:
using: composite
steps:
- name: Install PostgreSQL
shell: bash
run: |
helm upgrade \
--install postgresql oci://registry-1.docker.io/bitnamicharts/postgresql \
--set replicaCount=${{ inputs.postgres-replicas }} \
--set image.tag=${{ inputs.postgres-version }} \
--set persistence.size=${{ inputs.storage-size }} \
--set postgresql.username=greptimedb \
--set postgresql.password=admin \
--create-namespace \
-n ${{ inputs.namespace }}


@@ -141,9 +141,22 @@ jobs:
runs-on: ubuntu-latest
timeout-minutes: 60
strategy:
fail-fast: false
matrix:
target: [ "fuzz_create_table", "fuzz_alter_table", "fuzz_create_database", "fuzz_create_logical_table", "fuzz_alter_logical_table", "fuzz_insert", "fuzz_insert_logical_table" ]
steps:
- name: Remove unused software
run: |
echo "Disk space before:"
df -h
[[ -d /usr/share/dotnet ]] && sudo rm -rf /usr/share/dotnet
[[ -d /usr/local/lib/android ]] && sudo rm -rf /usr/local/lib/android
[[ -d /opt/ghc ]] && sudo rm -rf /opt/ghc
[[ -d /opt/hostedtoolcache/CodeQL ]] && sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo docker image prune --all --force
sudo docker builder prune -a
echo "Disk space after:"
df -h
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
@@ -192,6 +205,18 @@ jobs:
matrix:
target: [ "unstable_fuzz_create_table_standalone" ]
steps:
- name: Remove unused software
run: |
echo "Disk space before:"
df -h
[[ -d /usr/share/dotnet ]] && sudo rm -rf /usr/share/dotnet
[[ -d /usr/local/lib/android ]] && sudo rm -rf /usr/local/lib/android
[[ -d /opt/ghc ]] && sudo rm -rf /opt/ghc
[[ -d /opt/hostedtoolcache/CodeQL ]] && sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo docker image prune --all --force
sudo docker builder prune -a
echo "Disk space after:"
df -h
- uses: actions/checkout@v4
- uses: arduino/setup-protoc@v3
with:
@@ -284,24 +309,24 @@ jobs:
strategy:
matrix:
target: [ "fuzz_create_table", "fuzz_alter_table", "fuzz_create_database", "fuzz_create_logical_table", "fuzz_alter_logical_table", "fuzz_insert", "fuzz_insert_logical_table" ]
mode:
- name: "Disk"
minio: false
kafka: false
values: "with-disk.yaml"
- name: "Minio"
minio: true
kafka: false
values: "with-minio.yaml"
- name: "Minio with Cache"
minio: true
kafka: false
values: "with-minio-and-cache.yaml"
mode:
- name: "Remote WAL"
minio: true
kafka: true
values: "with-remote-wal.yaml"
steps:
- name: Remove unused software
run: |
echo "Disk space before:"
df -h
[[ -d /usr/share/dotnet ]] && sudo rm -rf /usr/share/dotnet
[[ -d /usr/local/lib/android ]] && sudo rm -rf /usr/local/lib/android
[[ -d /opt/ghc ]] && sudo rm -rf /opt/ghc
[[ -d /opt/hostedtoolcache/CodeQL ]] && sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo docker image prune --all --force
sudo docker builder prune -a
echo "Disk space after:"
df -h
- uses: actions/checkout@v4
- name: Setup Kind
uses: ./.github/actions/setup-kind
@@ -313,6 +338,8 @@ jobs:
uses: ./.github/actions/setup-kafka-cluster
- name: Setup Etcd cluser
uses: ./.github/actions/setup-etcd-cluster
- name: Setup Postgres cluser
uses: ./.github/actions/setup-postgres-cluster
# Prepares for fuzz tests
- uses: arduino/setup-protoc@v3
with:
@@ -426,6 +453,18 @@ jobs:
kafka: true
values: "with-remote-wal.yaml"
steps:
- name: Remove unused software
run: |
echo "Disk space before:"
df -h
[[ -d /usr/share/dotnet ]] && sudo rm -rf /usr/share/dotnet
[[ -d /usr/local/lib/android ]] && sudo rm -rf /usr/local/lib/android
[[ -d /opt/ghc ]] && sudo rm -rf /opt/ghc
[[ -d /opt/hostedtoolcache/CodeQL ]] && sudo rm -rf /opt/hostedtoolcache/CodeQL
sudo docker image prune --all --force
sudo docker builder prune -a
echo "Disk space after:"
df -h
- uses: actions/checkout@v4
- name: Setup Kind
uses: ./.github/actions/setup-kind
@@ -439,6 +478,8 @@ jobs:
uses: ./.github/actions/setup-kafka-cluster
- name: Setup Etcd cluser
uses: ./.github/actions/setup-etcd-cluster
- name: Setup Postgres cluser
uses: ./.github/actions/setup-postgres-cluster
# Prepares for fuzz tests
- uses: arduino/setup-protoc@v3
with:
@@ -556,6 +597,10 @@ jobs:
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
- if: matrix.mode.kafka
name: Setup kafka server
working-directory: tests-integration/fixtures/kafka
run: docker compose -f docker-compose-standalone.yml up -d --wait
- name: Download pre-built binaries
uses: actions/download-artifact@v4
with:
@@ -563,10 +608,6 @@ jobs:
path: .
- name: Unzip binaries
run: tar -xvf ./bins.tar.gz
- if: matrix.mode.kafka
name: Setup kafka server
working-directory: tests-integration/fixtures/kafka
run: docker compose -f docker-compose-standalone.yml up -d --wait
- name: Run sqlness
run: RUST_BACKTRACE=1 ./bins/sqlness-runner ${{ matrix.mode.opts }} -c ./tests/cases --bins-dir ./bins --preserve-state
- name: Upload sqlness logs
@@ -665,6 +706,9 @@ jobs:
- name: Setup minio
working-directory: tests-integration/fixtures/minio
run: docker compose -f docker-compose-standalone.yml up -d --wait
- name: Setup postgres server
working-directory: tests-integration/fixtures/postgres
run: docker compose -f docker-compose-standalone.yml up -d --wait
- name: Run nextest cases
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend -F dashboard
env:
@@ -681,7 +725,9 @@ jobs:
GT_MINIO_REGION: us-west-2
GT_MINIO_ENDPOINT_URL: http://127.0.0.1:9000
GT_ETCD_ENDPOINTS: http://127.0.0.1:2379
GT_POSTGRES_ENDPOINTS: postgres://greptimedb:admin@127.0.0.1:5432/postgres
GT_KAFKA_ENDPOINTS: 127.0.0.1:9092
GT_KAFKA_SASL_ENDPOINTS: 127.0.0.1:9093
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Codecov upload
uses: codecov/codecov-action@v4


@@ -33,6 +33,13 @@ jobs:
aws-region: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
aws-access-key-id: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}
- name: Upload sqlness logs
if: failure()
uses: actions/upload-artifact@v4
with:
name: sqlness-logs-kind
path: /tmp/kind/
retention-days: 3
sqlness-windows:
name: Sqlness tests on Windows
@@ -51,13 +58,15 @@ jobs:
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run sqlness
run: cargo sqlness
run: make sqlness-test
env:
SQLNESS_OPTS: "--preserve-state"
- name: Upload sqlness logs
if: always()
if: failure()
uses: actions/upload-artifact@v4
with:
name: sqlness-logs
path: /tmp/greptime-*.log
path: C:\Users\RUNNER~1\AppData\Local\Temp\sqlness*
retention-days: 3
test-on-windows:
@@ -109,11 +118,7 @@ jobs:
check-status:
name: Check status
needs: [
sqlness-test,
sqlness-windows,
test-on-windows,
]
needs: [sqlness-test, sqlness-windows, test-on-windows]
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-20.04
outputs:
@@ -127,9 +132,7 @@ jobs:
notification:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' && always() }} # Not requiring successful dependent jobs, always run.
name: Send notification to Greptime team
needs: [
check-status
]
needs: [check-status]
runs-on: ubuntu-20.04
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}


@@ -14,7 +14,7 @@ Follow our [README](https://github.com/GreptimeTeam/greptimedb#readme) to get th
It can feel intimidating to contribute to a complex project, but it can also be exciting and fun. These general notes will help everyone participate in this communal activity.
- Follow the [Code of Conduct](https://github.com/GreptimeTeam/greptimedb/blob/main/CODE_OF_CONDUCT.md)
- Follow the [Code of Conduct](https://github.com/GreptimeTeam/.github/blob/main/.github/CODE_OF_CONDUCT.md)
- Small changes make huge differences. We will happily accept a PR making a single character change if it helps move forward. Don't wait to have everything working.
- Check the closed issues before opening your issue.
- Try to follow the existing style of the code.
@@ -30,7 +30,7 @@ Pull requests are great, but we accept all kinds of other help if you like. Such
## Code of Conduct
Also, there are things that we are not looking for because they don't match the goals of the product or benefit the community. Please read [Code of Conduct](https://github.com/GreptimeTeam/greptimedb/blob/main/CODE_OF_CONDUCT.md); we hope everyone can keep good manners and become an honored member.
Also, there are things that we are not looking for because they don't match the goals of the product or benefit the community. Please read [Code of Conduct](https://github.com/GreptimeTeam/.github/blob/main/.github/CODE_OF_CONDUCT.md); we hope everyone can keep good manners and become an honored member.
## License

Cargo.lock (generated, 448 changed lines): file diff suppressed because it is too large.


@@ -64,7 +64,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.9.0"
version = "0.9.2"
edition = "2021"
license = "Apache-2.0"
@@ -104,27 +104,27 @@ clap = { version = "4.4", features = ["derive"] }
config = "0.13.0"
crossbeam-utils = "0.8"
dashmap = "5.4"
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion-functions = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion-physical-plan = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "d7bda5c9b762426e81f144296deadc87e5f4a0b8" }
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
datafusion-functions = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
datafusion-physical-plan = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
derive_builder = "0.12"
dotenv = "0.15"
etcd-client = { version = "0.13" }
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "5c801650435d464891114502539b701c77a1b914" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "c437b55725b7f5224fe9d46db21072b4a682ee4b" }
humantime = "2.1"
humantime-serde = "1.1"
itertools = "0.10"
lazy_static = "1.4"
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "80b72716dcde47ec4161478416a5c6c21343364d" }
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "80eb97c24c88af4dd9a86f8bbaf50e741d4eb8cd" }
mockall = "0.11.4"
moka = "0.12"
notify = "6.1"
@@ -151,14 +151,19 @@ reqwest = { version = "0.12", default-features = false, features = [
"stream",
"multipart",
] }
rskafka = "0.5"
# SCRAM-SHA-512 requires https://github.com/dequbed/rsasl/pull/48, https://github.com/influxdata/rskafka/pull/247
rskafka = { git = "https://github.com/WenyXu/rskafka.git", rev = "940c6030012c5b746fad819fb72e3325b26e39de", features = [
"transport-tls",
] }
rstest = "0.21"
rstest_reuse = "0.7"
rust_decimal = "1.33"
rustc-hash = "2.0"
schemars = "0.8"
serde = { version = "1.0", features = ["derive"] }
serde_json = { version = "1.0", features = ["float_roundtrip"] }
serde_with = "3"
shadow-rs = "0.31"
smallvec = { version = "1", features = ["serde"] }
snafu = "0.8"
sysinfo = "0.30"
@@ -169,6 +174,7 @@ sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "5
strum = { version = "0.25", features = ["derive"] }
tempfile = "3"
tokio = { version = "1.36", features = ["full"] }
tokio-postgres = "0.7"
tokio-stream = { version = "0.1" }
tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.8.8"
@@ -183,7 +189,7 @@ auth = { path = "src/auth" }
cache = { path = "src/cache" }
catalog = { path = "src/catalog" }
client = { path = "src/client" }
cmd = { path = "src/cmd" }
cmd = { path = "src/cmd", default-features = false }
common-base = { path = "src/common/base" }
common-catalog = { path = "src/common/catalog" }
common-config = { path = "src/common/config" }
@@ -213,7 +219,7 @@ datanode = { path = "src/datanode" }
datatypes = { path = "src/datatypes" }
file-engine = { path = "src/file-engine" }
flow = { path = "src/flow" }
frontend = { path = "src/frontend" }
frontend = { path = "src/frontend", default-features = false }
index = { path = "src/index" }
log-store = { path = "src/log-store" }
meta-client = { path = "src/meta-client" }
@@ -238,7 +244,7 @@ table = { path = "src/table" }
[workspace.dependencies.meter-macros]
git = "https://github.com/GreptimeTeam/greptime-meter.git"
rev = "80b72716dcde47ec4161478416a5c6c21343364d"
rev = "80eb97c24c88af4dd9a86f8bbaf50e741d4eb8cd"
[profile.release]
debug = 1


@@ -15,6 +15,7 @@ RUST_TOOLCHAIN ?= $(shell cat rust-toolchain.toml | grep channel | cut -d'"' -f2
CARGO_REGISTRY_CACHE ?= ${HOME}/.cargo/registry
ARCH := $(shell uname -m | sed 's/x86_64/amd64/' | sed 's/aarch64/arm64/')
OUTPUT_DIR := $(shell if [ "$(RELEASE)" = "true" ]; then echo "release"; elif [ ! -z "$(CARGO_PROFILE)" ]; then echo "$(CARGO_PROFILE)" ; else echo "debug"; fi)
SQLNESS_OPTS ?=
# The arguments for running integration tests.
ETCD_VERSION ?= v3.5.9
@@ -161,7 +162,7 @@ nextest: ## Install nextest tools.
.PHONY: sqlness-test
sqlness-test: ## Run sqlness test.
cargo sqlness
cargo sqlness ${SQLNESS_OPTS}
# Run fuzz test ${FUZZ_TARGET}.
RUNS ?= 1
@@ -172,7 +173,7 @@ fuzz:
.PHONY: fuzz-ls
fuzz-ls:
cargo fuzz list --fuzz-dir tests-fuzz
cargo fuzz list --fuzz-dir tests-fuzz
.PHONY: check
check: ## Cargo check all the targets.


@@ -6,7 +6,7 @@
</picture>
</p>
<h2 align="center">Unified Time Series Database for Metrics, Events, and Logs</h2>
<h2 align="center">Unified Time Series Database for Metrics, Logs, and Events</h2>
<div align="center">
<h3 align="center">
@@ -50,7 +50,7 @@
## Introduction
**GreptimeDB** is an open-source unified time-series database for **Metrics**, **Events**, and **Logs** (also **Traces** in plan). You can gain real-time insights from Edge to Cloud at any scale.
**GreptimeDB** is an open-source unified time-series database for **Metrics**, **Logs**, and **Events** (also **Traces** in plan). You can gain real-time insights from Edge to Cloud at any scale.
## Why GreptimeDB
@@ -58,7 +58,7 @@ Our core developers have been building time-series data platforms for years. Bas
* **Unified all kinds of time series**
GreptimeDB treats all time series as contextual events with timestamp, and thus unifies the processing of metrics and events. It supports analyzing metrics and events with SQL and PromQL, and doing streaming with continuous aggregation.
GreptimeDB treats all time series as contextual events with timestamp, and thus unifies the processing of metrics, logs, and events. It supports analyzing metrics, logs, and events with SQL and PromQL, and doing streaming with continuous aggregation.
* **Cloud-Edge collaboration**
@@ -104,10 +104,10 @@ Read more about [Installation](https://docs.greptime.com/getting-started/install
## Getting Started
* [Quickstart](https://docs.greptime.com/getting-started/quick-start/overview)
* [Write Data](https://docs.greptime.com/user-guide/clients/overview)
* [Query Data](https://docs.greptime.com/user-guide/query-data/overview)
* [Operations](https://docs.greptime.com/user-guide/operations/overview)
* [Quickstart](https://docs.greptime.com/getting-started/quick-start)
* [User Guide](https://docs.greptime.com/user-guide/overview)
* [Demos](https://github.com/GreptimeTeam/demo-scene)
* [FAQ](https://docs.greptime.com/faq-and-others/faq)
## Build
@@ -150,9 +150,10 @@ Our official Grafana dashboard is available at [grafana](grafana/README.md) dire
## Project Status
The current version has not yet reached General Availability version standards.
In line with our Greptime 2024 Roadmap, we plan to achieve a production-level
version with the update to v1.0 in August. [[Join Force]](https://github.com/GreptimeTeam/greptimedb/issues/3412)
The current version has not yet reached the standards for General Availability.
According to our Greptime 2024 Roadmap, we aim to achieve a production-level version with the release of v1.0 by the end of 2024. [Join Us](https://github.com/GreptimeTeam/greptimedb/issues/3412)
We welcome you to test and use GreptimeDB. Some users have already adopted it in their production environments. If you're interested in trying it out, please use the latest stable release available.
## Community


@@ -16,9 +16,8 @@
| `enable_telemetry` | Bool | `true` | Enable telemetry to collect anonymous usage data. |
| `default_timezone` | String | `None` | The default timezone of the server. |
| `runtime` | -- | -- | The runtime options. |
| `runtime.read_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
| `runtime.write_rt_size` | Integer | `8` | The number of threads to execute the runtime for global write operations. |
| `runtime.bg_rt_size` | Integer | `4` | The number of threads to execute the runtime for global background operations. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for global write operations. |
| `http` | -- | -- | The HTTP server options. |
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
| `http.timeout` | String | `30s` | HTTP request timeout. Set to 0 to disable timeout. |
@@ -68,6 +67,11 @@
| `wal.prefill_log_files` | Bool | `false` | Whether to pre-create log files on start up.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.sync_period` | String | `10s` | Duration for fsyncing log files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.broker_endpoints` | Array | -- | The Kafka broker endpoints.<br/>**It's only used when the provider is `kafka`**. |
| `wal.num_topics` | Integer | `64` | Number of topics to be created upon start.<br/>**It's only used when the provider is `kafka`**. |
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default)<br/>**It's only used when the provider is `kafka`**. |
| `wal.topic_name_prefix` | String | `greptimedb_wal_topic` | A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.<br/>**It's only used when the provider is `kafka`**. |
| `wal.replication_factor` | Integer | `1` | Expected number of replicas of each partition.<br/>**It's only used when the provider is `kafka`**. |
| `wal.create_topic_timeout` | String | `30s` | Above which a topic creation operation will be cancelled.<br/>**It's only used when the provider is `kafka`**. |
| `wal.max_batch_bytes` | String | `1MB` | The max size of a single producer batch.<br/>Warning: Kafka has a default limit of 1MB per message in a topic.<br/>**It's only used when the provider is `kafka`**. |
| `wal.consumer_wait_timeout` | String | `100ms` | The consumer wait timeout.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_init` | String | `500ms` | The initial backoff delay.<br/>**It's only used when the provider is `kafka`**. |
@@ -94,6 +98,7 @@
| `storage.account_key` | String | `None` | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.scope` | String | `None` | The scope of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.credential_path` | String | `None` | The credential path of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.credential` | String | `None` | The credential of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.container` | String | `None` | The container of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.sas_token` | String | `None` | The sas token of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.endpoint` | String | `None` | The endpoint of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
@@ -129,6 +134,8 @@
| `region_engine.mito.inverted_index.apply_on_query` | String | `auto` | Whether to apply the index on query<br/>- `auto`: automatically (default)<br/>- `disable`: never |
| `region_engine.mito.inverted_index.mem_threshold_on_create` | String | `auto` | Memory threshold for performing an external sort during index creation.<br/>- `auto`: automatically determine the threshold based on the system memory size (default)<br/>- `unlimited`: no memory limit<br/>- `[size]` e.g. `64MB`: fixed memory threshold |
| `region_engine.mito.inverted_index.intermediate_path` | String | `""` | Deprecated, use `region_engine.mito.index.aux_path` instead. |
| `region_engine.mito.inverted_index.metadata_cache_size` | String | `64MiB` | Cache size for inverted index metadata. |
| `region_engine.mito.inverted_index.content_cache_size` | String | `128MiB` | Cache size for inverted index content. |
| `region_engine.mito.fulltext_index` | -- | -- | The options for full-text index in Mito engine. |
| `region_engine.mito.fulltext_index.create_on_flush` | String | `auto` | Whether to create the index on flush.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
| `region_engine.mito.fulltext_index.create_on_compaction` | String | `auto` | Whether to create the index on compaction.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
@@ -144,7 +151,7 @@
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1, ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
@@ -166,12 +173,10 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `mode` | String | `standalone` | The running mode of the datanode. It can be `standalone` or `distributed`. |
| `default_timezone` | String | `None` | The default timezone of the server. |
| `runtime` | -- | -- | The runtime options. |
| `runtime.read_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
| `runtime.write_rt_size` | Integer | `8` | The number of threads to execute the runtime for global write operations. |
| `runtime.bg_rt_size` | Integer | `4` | The number of threads to execute the runtime for global background operations. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global operations. |
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for compaction operations. |
| `heartbeat` | -- | -- | The heartbeat options. |
| `heartbeat.interval` | String | `18s` | Interval for sending heartbeat messages to the metasrv. |
| `heartbeat.retry_interval` | String | `3s` | Interval for retrying to send heartbeat messages to the metasrv. |
@@ -231,7 +236,7 @@
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1, ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
@@ -255,29 +260,28 @@
| `bind_addr` | String | `127.0.0.1:3002` | The bind address of metasrv. |
| `server_addr` | String | `127.0.0.1:3002` | The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost. |
| `store_addr` | String | `127.0.0.1:2379` | Etcd server address. |
| `selector` | String | `lease_based` | Datanode selector type.<br/>- `lease_based` (default value).<br/>- `load_based`<br/>For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector". |
| `selector` | String | `round_robin` | Datanode selector type.<br/>- `round_robin` (default value)<br/>- `lease_based`<br/>- `load_based`<br/>For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector". |
| `use_memory_store` | Bool | `false` | Store data in memory. |
| `enable_telemetry` | Bool | `true` | Whether to enable greptimedb telemetry. |
| `store_key_prefix` | String | `""` | If it's not empty, the metasrv will store all data with this key prefix. |
| `enable_region_failover` | Bool | `false` | Whether to enable region failover.<br/>This feature is only available on GreptimeDB running in cluster mode and<br/>- Using Remote WAL<br/>- Using shared storage (e.g., S3). |
| `runtime` | -- | -- | The runtime options. |
| `runtime.read_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
| `runtime.write_rt_size` | Integer | `8` | The number of threads to execute the runtime for global write operations. |
| `runtime.bg_rt_size` | Integer | `4` | The number of threads to execute the runtime for global background operations. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global operations. |
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for compaction operations. |
| `procedure` | -- | -- | Procedure storage options. |
| `procedure.max_retry_times` | Integer | `12` | Procedure max retry time. |
| `procedure.retry_delay` | String | `500ms` | Initial retry delay of procedures, increases exponentially |
| `procedure.max_metadata_value_size` | String | `1500KiB` | Auto split large value.<br/>GreptimeDB procedure uses etcd as the default metadata storage backend.<br/>The maximum size of any etcd request is 1.5 MiB.<br/>1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key).<br/>Comment out `max_metadata_value_size` to leave large values unsplit (no limit). |
| `failure_detector` | -- | -- | -- |
| `failure_detector.threshold` | Float | `8.0` | -- |
| `failure_detector.min_std_deviation` | String | `100ms` | -- |
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | -- |
| `failure_detector.first_heartbeat_estimate` | String | `1000ms` | -- |
| `failure_detector.threshold` | Float | `8.0` | The threshold value used by the failure detector to determine failure conditions. |
| `failure_detector.min_std_deviation` | String | `100ms` | The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations. |
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable. |
| `failure_detector.first_heartbeat_estimate` | String | `1000ms` | The initial estimate of the heartbeat interval used by the failure detector. |
| `datanode` | -- | -- | Datanode options. |
| `datanode.client` | -- | -- | Datanode client options. |
| `datanode.client.timeout` | String | `10s` | -- |
| `datanode.client.connect_timeout` | String | `10s` | -- |
| `datanode.client.tcp_nodelay` | Bool | `true` | -- |
| `datanode.client.timeout` | String | `10s` | Operation timeout. |
| `datanode.client.connect_timeout` | String | `10s` | Connect server timeout. |
| `datanode.client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
| `wal` | -- | -- | -- |
| `wal.provider` | String | `raft_engine` | -- |
| `wal.broker_endpoints` | Array | -- | The broker endpoints of the Kafka cluster. |
@@ -294,7 +298,7 @@
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1, ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
@@ -337,9 +341,8 @@
| `grpc.tls.key_path` | String | `None` | Private key file path. |
| `grpc.tls.watch` | Bool | `false` | Watch for certificate and key file changes and auto reload.<br/>For now, gRPC TLS config does not support auto reload. |
| `runtime` | -- | -- | The runtime options. |
| `runtime.read_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
| `runtime.write_rt_size` | Integer | `8` | The number of threads to execute the runtime for global write operations. |
| `runtime.bg_rt_size` | Integer | `4` | The number of threads to execute the runtime for global background operations. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global operations. |
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for compaction operations. |
| `heartbeat` | -- | -- | The heartbeat options. |
| `heartbeat.interval` | String | `3s` | Interval for sending heartbeat messages to the metasrv. |
| `heartbeat.retry_interval` | String | `3s` | Interval for retrying to send heartbeat messages to the metasrv. |
@@ -371,6 +374,8 @@
| `wal.backoff_max` | String | `10s` | The maximum backoff delay.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_base` | Integer | `2` | The exponential backoff rate, i.e. next backoff = base * current backoff.<br/>**It's only used when the provider is `kafka`**. |
| `wal.backoff_deadline` | String | `5mins` | The deadline of retries.<br/>**It's only used when the provider is `kafka`**. |
| `wal.create_index` | Bool | `true` | Whether to enable WAL index creation.<br/>**It's only used when the provider is `kafka`**. |
| `wal.dump_index_interval` | String | `60s` | The interval for dumping WAL indexes.<br/>**It's only used when the provider is `kafka`**. |
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `/tmp/greptimedb/` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
@@ -385,6 +390,7 @@
| `storage.account_key` | String | `None` | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.scope` | String | `None` | The scope of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.credential_path` | String | `None` | The credential path of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.credential` | String | `None` | The credential of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
| `storage.container` | String | `None` | The container of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.sas_token` | String | `None` | The sas token of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
| `storage.endpoint` | String | `None` | The endpoint of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
@@ -435,7 +441,7 @@
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1, ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
@@ -480,7 +486,7 @@
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. |
| `logging.level` | String | `None` | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `None` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1, ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
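The option tables above are generated from the example TOML files whose diffs follow. As a quick smoke test of a locally edited config, a sketch along these lines can be used; it assumes the `greptime` binary is on the PATH, that the CLI accepts a config file via `-c` (as in the repository's example commands), and that the HTTP server keeps its documented default address of `127.0.0.1:4000` with a `/health` endpoint.

```bash
# Hedged smoke test of an edited config file (paths, flags and endpoint are assumptions).
./greptime standalone start -c config/standalone.example.toml &
# Give the server a moment to start, then probe the documented HTTP address.
sleep 3
curl -sf http://127.0.0.1:4000/health && echo "greptimedb is up"
```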

View File

@@ -73,11 +73,9 @@ watch = false
## The runtime options.
[runtime]
## The number of threads to execute the runtime for global read operations.
read_rt_size = 8
global_rt_size = 8
## The number of threads to execute the runtime for global write operations.
write_rt_size = 8
## The number of threads to execute the runtime for global background operations.
bg_rt_size = 4
compact_rt_size = 4
## The heartbeat options.
[heartbeat]
@@ -189,6 +187,32 @@ backoff_base = 2
## **It's only used when the provider is `kafka`**.
backoff_deadline = "5mins"
## Whether to enable WAL index creation.
## **It's only used when the provider is `kafka`**.
create_index = true
## The interval for dumping WAL indexes.
## **It's only used when the provider is `kafka`**.
dump_index_interval = "60s"
# The Kafka SASL configuration.
# **It's only used when the provider is `kafka`**.
# Available SASL mechanisms:
# - `PLAIN`
# - `SCRAM-SHA-256`
# - `SCRAM-SHA-512`
# [wal.sasl]
# type = "SCRAM-SHA-512"
# username = "user_kafka"
# password = "secret"
# The Kafka TLS configuration.
# **It's only used when the provider is `kafka`**.
# [wal.tls]
# server_ca_cert_path = "/path/to/server_cert"
# client_cert_path = "/path/to/client_cert"
# client_key_path = "/path/to/key"
# Example of using S3 as the storage.
# [storage]
# type = "S3"
@@ -225,6 +249,7 @@ backoff_deadline = "5mins"
# root = "data"
# scope = "test"
# credential_path = "123456"
# credential = "base64-credential"
# endpoint = "https://storage.googleapis.com"
## The data storage options.
@@ -296,6 +321,11 @@ scope = "test"
## +toml2docs:none-default
credential_path = "test"
## The credential of the google cloud storage.
## **It's only used when the storage type is `Gcs`**.
## +toml2docs:none-default
credential= "base64-credential"
## The container of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
@@ -495,8 +525,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
otlp_endpoint = "http://localhost:4317"
## Whether to append logs to stdout.
append_stdout = true

View File

@@ -70,8 +70,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
otlp_endpoint = "http://localhost:4317"
## Whether to append logs to stdout.
append_stdout = true

View File

@@ -1,6 +1,3 @@
## The running mode of the datanode. It can be `standalone` or `distributed`.
mode = "standalone"
## The default timezone of the server.
## +toml2docs:none-default
default_timezone = "UTC"
@@ -8,11 +5,9 @@ default_timezone = "UTC"
## The runtime options.
[runtime]
## The number of threads to execute the runtime for global read operations.
read_rt_size = 8
global_rt_size = 8
## The number of threads to execute the runtime for global write operations.
write_rt_size = 8
## The number of threads to execute the runtime for global background operations.
bg_rt_size = 4
compact_rt_size = 4
## The heartbeat options.
[heartbeat]
@@ -182,8 +177,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
otlp_endpoint = "http://localhost:4317"
## Whether to append logs to stdout.
append_stdout = true

View File

@@ -11,10 +11,11 @@ server_addr = "127.0.0.1:3002"
store_addr = "127.0.0.1:2379"
## Datanode selector type.
## - `lease_based` (default value).
## - `round_robin` (default value)
## - `lease_based`
## - `load_based`
## For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector".
selector = "lease_based"
selector = "round_robin"
## Store data in memory.
use_memory_store = false
@@ -34,11 +35,9 @@ enable_region_failover = false
## The runtime options.
[runtime]
## The number of threads to execute the runtime for global read operations.
read_rt_size = 8
global_rt_size = 8
## The number of threads to execute the runtime for global write operations.
write_rt_size = 8
## The number of threads to execute the runtime for global background operations.
bg_rt_size = 4
compact_rt_size = 4
## Procedure storage options.
[procedure]
@@ -58,17 +57,32 @@ max_metadata_value_size = "1500KiB"
# Failure detectors options.
[failure_detector]
## The threshold value used by the failure detector to determine failure conditions.
threshold = 8.0
## The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations.
min_std_deviation = "100ms"
## The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable.
acceptable_heartbeat_pause = "10000ms"
## The initial estimate of the heartbeat interval used by the failure detector.
first_heartbeat_estimate = "1000ms"
## Datanode options.
[datanode]
## Datanode client options.
[datanode.client]
## Operation timeout.
timeout = "10s"
## Connect server timeout.
connect_timeout = "10s"
## `TCP_NODELAY` option for accepted connections.
tcp_nodelay = true
[wal]
@@ -110,6 +124,24 @@ backoff_base = 2
## Stop reconnecting if the total wait time reaches the deadline. If this config is missing, the reconnecting won't terminate.
backoff_deadline = "5mins"
# The Kafka SASL configuration.
# **It's only used when the provider is `kafka`**.
# Available SASL mechanisms:
# - `PLAIN`
# - `SCRAM-SHA-256`
# - `SCRAM-SHA-512`
# [wal.sasl]
# type = "SCRAM-SHA-512"
# username = "user_kafka"
# password = "secret"
# The Kafka TLS configuration.
# **It's only used when the provider is `kafka`**.
# [wal.tls]
# server_ca_cert_path = "/path/to/server_cert"
# client_cert_path = "/path/to/client_cert"
# client_key_path = "/path/to/key"
## The logging options.
[logging]
## The directory to store the log files.
@@ -123,8 +155,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
otlp_endpoint = "http://localhost:4317"
## Whether to append logs to stdout.
append_stdout = true

View File

@@ -11,11 +11,9 @@ default_timezone = "UTC"
## The runtime options.
[runtime]
## The number of threads to execute the runtime for global read operations.
read_rt_size = 8
global_rt_size = 8
## The number of threads to execute the runtime for global write operations.
write_rt_size = 8
## The number of threads to execute the runtime for global background operations.
bg_rt_size = 4
compact_rt_size = 4
## The HTTP server options.
[http]
@@ -173,6 +171,28 @@ sync_period = "10s"
## **It's only used when the provider is `kafka`**.
broker_endpoints = ["127.0.0.1:9092"]
## Number of topics to be created upon start.
## **It's only used when the provider is `kafka`**.
num_topics = 64
## Topic selector type.
## Available selector types:
## - `round_robin` (default)
## **It's only used when the provider is `kafka`**.
selector_type = "round_robin"
## A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.
## **It's only used when the provider is `kafka`**.
topic_name_prefix = "greptimedb_wal_topic"
## Expected number of replicas of each partition.
## **It's only used when the provider is `kafka`**.
replication_factor = 1
## The timeout above which a topic creation operation is cancelled.
## **It's only used when the provider is `kafka`**.
create_topic_timeout = "30s"
## The max size of a single producer batch.
## Warning: Kafka has a default limit of 1MB per message in a topic.
## **It's only used when the provider is `kafka`**.
@@ -198,6 +218,24 @@ backoff_base = 2
## **It's only used when the provider is `kafka`**.
backoff_deadline = "5mins"
# The Kafka SASL configuration.
# **It's only used when the provider is `kafka`**.
# Available SASL mechanisms:
# - `PLAIN`
# - `SCRAM-SHA-256`
# - `SCRAM-SHA-512`
# [wal.sasl]
# type = "SCRAM-SHA-512"
# username = "user_kafka"
# password = "secret"
# The Kafka TLS configuration.
# **It's only used when the provider is `kafka`**.
# [wal.tls]
# server_ca_cert_path = "/path/to/server_cert"
# client_cert_path = "/path/to/client_cert"
# client_key_path = "/path/to/key"
## Metadata storage options.
[metadata_store]
## Kv file size in bytes.
@@ -248,6 +286,7 @@ retry_delay = "500ms"
# root = "data"
# scope = "test"
# credential_path = "123456"
# credential = "base64-credential"
# endpoint = "https://storage.googleapis.com"
## The data storage options.
@@ -319,6 +358,11 @@ scope = "test"
## +toml2docs:none-default
credential_path = "test"
## The credential of the google cloud storage.
## **It's only used when the storage type is `Gcs`**.
## +toml2docs:none-default
credential = "base64-credential"
## The container of the azure account.
## **It's only used when the storage type is `Azblob`**.
## +toml2docs:none-default
@@ -459,6 +503,12 @@ mem_threshold_on_create = "auto"
## Deprecated, use `region_engine.mito.index.aux_path` instead.
intermediate_path = ""
## Cache size for inverted index metadata.
metadata_cache_size = "64MiB"
## Cache size for inverted index content.
content_cache_size = "128MiB"
## The options for full-text index in Mito engine.
[region_engine.mito.fulltext_index]
@@ -518,8 +568,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
## +toml2docs:none-default
otlp_endpoint = ""
otlp_endpoint = "http://localhost:4317"
## Whether to append logs to stdout.
append_stdout = true

View File

@@ -1,9 +1,9 @@
x-custom:
etcd_initial_cluster_token: &etcd_initial_cluster_token "--initial-cluster-token=etcd-cluster"
etcd_common_settings: &etcd_common_settings
image: quay.io/coreos/etcd:v3.5.10
image: "${ETCD_REGISTRY:-quay.io}/${ETCD_NAMESPACE:-coreos}/etcd:${ETCD_VERSION:-v3.5.10}"
entrypoint: /usr/local/bin/etcd
greptimedb_image: &greptimedb_image docker.io/greptimedb/greptimedb:latest
greptimedb_image: &greptimedb_image "${GREPTIMEDB_REGISTRY:-docker.io}/${GREPTIMEDB_NAMESPACE:-greptime}/greptimedb:${GREPTIMEDB_VERSION:-latest}"
services:
etcd0:
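With the image references parameterized as above, a mirror registry or a pinned tag can be injected through environment variables instead of editing the compose file. A minimal sketch, assuming Docker Compose v2 and a `docker-compose.yml` in the current directory; the registry host is a placeholder value.

```bash
# Pull the etcd and GreptimeDB images through a mirror registry (example values only).
export ETCD_REGISTRY=mirror.example.com
export GREPTIMEDB_REGISTRY=mirror.example.com
# Unset variables fall back to the defaults baked into the compose file:
# quay.io/coreos/etcd:v3.5.10 and docker.io/greptime/greptimedb:latest.
docker compose up -d
```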

View File

@@ -105,7 +105,7 @@ use tests_fuzz::utils::{init_greptime_connections, Connections};
fuzz_target!(|input: FuzzInput| {
common_telemetry::init_default_ut_logging();
common_runtime::block_on_write(async {
common_runtime::block_on_global(async {
let Connections { mysql } = init_greptime_connections().await;
let mut rng = ChaChaRng::seed_from_u64(input.seed);
let columns = rng.gen_range(2..30);

View File

@@ -25,7 +25,7 @@ Please ensure the following configuration before importing the dashboard into Gr
__1. Prometheus scrape config__
Assign `greptime_pod` label to each host target. We use this label to identify each node instance.
Configure Prometheus to scrape the cluster.
```yml
# example config
@@ -34,27 +34,15 @@ Assign `greptime_pod` label to each host target. We use this label to identify e
scrape_configs:
- job_name: metasrv
static_configs:
- targets: ['<ip>:<port>']
labels:
greptime_pod: metasrv
- targets: ['<metasrv-ip>:<port>']
- job_name: datanode
static_configs:
- targets: ['<ip>:<port>']
labels:
greptime_pod: datanode1
- targets: ['<ip>:<port>']
labels:
greptime_pod: datanode2
- targets: ['<ip>:<port>']
labels:
greptime_pod: datanode3
- targets: ['<datanode0-ip>:<port>', '<datanode1-ip>:<port>', '<datanode2-ip>:<port>']
- job_name: frontend
static_configs:
- targets: ['<ip>:<port>']
labels:
greptime_pod: frontend
- targets: ['<frontend-ip>:<port>']
```
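Since the trimmed scrape config above replaces the per-target `greptime_pod` labels with plain target lists, it is worth validating the edited file before reloading Prometheus. A small sketch, assuming the snippet has been merged into a local `prometheus.yml` and that `promtool` from the Prometheus distribution is available.

```bash
# Validate the edited scrape configuration before reloading Prometheus.
promtool check config prometheus.yml
```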
__2. Grafana config__
@@ -63,4 +51,4 @@ Create a Prometheus data source in Grafana before using this dashboard. We use `
### Usage
Use `datasource` or `greptime_pod` on the upper-left corner to filter data from certain node.
Use `datasource` or `instance` in the upper-left corner to filter data from a certain node.

File diff suppressed because it is too large

File diff suppressed because it is too large

View File

@@ -1,62 +1,72 @@
#!/bin/sh
#!/usr/bin/env bash
set -ue
OS_TYPE=
ARCH_TYPE=
# Set the GitHub token to avoid GitHub API rate limit.
# You can run with `GITHUB_TOKEN`:
# GITHUB_TOKEN=<your_token> ./scripts/install.sh
GITHUB_TOKEN=${GITHUB_TOKEN:-}
VERSION=${1:-latest}
GITHUB_ORG=GreptimeTeam
GITHUB_REPO=greptimedb
BIN=greptime
get_os_type() {
os_type="$(uname -s)"
function get_os_type() {
os_type="$(uname -s)"
case "$os_type" in
case "$os_type" in
Darwin)
OS_TYPE=darwin
;;
OS_TYPE=darwin
;;
Linux)
OS_TYPE=linux
;;
OS_TYPE=linux
;;
*)
echo "Error: Unknown OS type: $os_type"
exit 1
esac
echo "Error: Unknown OS type: $os_type"
exit 1
esac
}
get_arch_type() {
arch_type="$(uname -m)"
function get_arch_type() {
arch_type="$(uname -m)"
case "$arch_type" in
case "$arch_type" in
arm64)
ARCH_TYPE=arm64
;;
ARCH_TYPE=arm64
;;
aarch64)
ARCH_TYPE=arm64
;;
ARCH_TYPE=arm64
;;
x86_64)
ARCH_TYPE=amd64
;;
ARCH_TYPE=amd64
;;
amd64)
ARCH_TYPE=amd64
;;
ARCH_TYPE=amd64
;;
*)
echo "Error: Unknown CPU type: $arch_type"
exit 1
esac
echo "Error: Unknown CPU type: $arch_type"
exit 1
esac
}
get_os_type
get_arch_type
if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
# Use the latest nightly version.
function download_artifact() {
if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
# Use the latest stable released version.
# GitHub API reference: https://docs.github.com/en/rest/releases/releases?apiVersion=2022-11-28#get-the-latest-release.
if [ "${VERSION}" = "latest" ]; then
VERSION=$(curl -s -XGET "https://api.github.com/repos/${GITHUB_ORG}/${GITHUB_REPO}/releases" | grep tag_name | grep nightly | cut -d: -f 2 | sed 's/.*"\(.*\)".*/\1/' | uniq | sort -r | head -n 1)
if [ -z "${VERSION}" ]; then
echo "Failed to get the latest version."
exit 1
# To avoid dependencies on other tools, we use `curl` to fetch the version metadata and parse it with `sed`.
VERSION=$(curl -sL \
-H "Accept: application/vnd.github+json" \
-H "X-GitHub-Api-Version: 2022-11-28" \
${GITHUB_TOKEN:+-H "Authorization: Bearer $GITHUB_TOKEN"} \
"https://api.github.com/repos/${GITHUB_ORG}/${GITHUB_REPO}/releases/latest" | sed -n 's/.*"tag_name": "\([^"]*\)".*/\1/p')
if [ -z "${VERSION}" ]; then
echo "Failed to get the latest stable released version."
exit 1
fi
fi
@@ -73,4 +83,9 @@ if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
rm -r "${PACKAGE_NAME%.tar.gz}" && \
echo "Run './${BIN} --help' to get started"
fi
fi
fi
}
get_os_type
get_arch_type
download_artifact
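A usage sketch for the rewritten installer: the first form resolves the latest stable release, the second pins a release tag and passes a token to avoid GitHub API rate limits (both the token and the tag are placeholders).

```bash
# Install the latest stable release into the current directory.
./scripts/install.sh

# Pin a specific release tag and authenticate against the GitHub API (placeholders).
GITHUB_TOKEN=<your_token> ./scripts/install.sh vX.Y.Z
```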

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use common_error::ext::ErrorExt;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use snafu::{Location, Snafu};
@@ -38,6 +38,14 @@ pub enum Error {
location: Location,
},
#[snafu(display("Authentication source failure"))]
AuthBackend {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
source: BoxedError,
},
#[snafu(display("User not found, username: {}", username))]
UserNotFound { username: String },
@@ -81,6 +89,7 @@ impl ErrorExt for Error {
Error::FileWatch { .. } => StatusCode::InvalidArguments,
Error::InternalState { .. } => StatusCode::Unexpected,
Error::Io { .. } => StatusCode::StorageUnavailable,
Error::AuthBackend { .. } => StatusCode::Internal,
Error::UserNotFound { .. } => StatusCode::UserNotFound,
Error::UnsupportedPasswordType { .. } => StatusCode::UnsupportedPasswordType,

View File

@@ -40,6 +40,7 @@ moka = { workspace = true, features = ["future", "sync"] }
partition.workspace = true
paste = "1.0"
prometheus.workspace = true
rustc-hash.workspace = true
serde_json.workspace = true
session.workspace = true
snafu.workspace = true

View File

@@ -57,6 +57,31 @@ pub enum Error {
source: BoxedError,
},
#[snafu(display("Failed to list flows in catalog {catalog}"))]
ListFlows {
#[snafu(implicit)]
location: Location,
catalog: String,
source: BoxedError,
},
#[snafu(display("Flow info not found: {flow_name} in catalog {catalog_name}"))]
FlowInfoNotFound {
flow_name: String,
catalog_name: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Can't convert value to json, input={input}"))]
Json {
input: String,
#[snafu(source)]
error: serde_json::error::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to re-compile script due to internal error"))]
CompileScriptInternal {
#[snafu(implicit)]
@@ -254,12 +279,15 @@ impl ErrorExt for Error {
| Error::FindPartitions { .. }
| Error::FindRegionRoutes { .. }
| Error::CacheNotFound { .. }
| Error::CastManager { .. } => StatusCode::Unexpected,
| Error::CastManager { .. }
| Error::Json { .. } => StatusCode::Unexpected,
Error::ViewPlanColumnsChanged { .. } => StatusCode::InvalidArguments,
Error::ViewInfoNotFound { .. } => StatusCode::TableNotFound,
Error::FlowInfoNotFound { .. } => StatusCode::FlowNotFound,
Error::SystemCatalog { .. } => StatusCode::StorageUnavailable,
Error::UpgradeWeakCatalogManagerRef { .. } => StatusCode::Internal,
@@ -270,7 +298,8 @@ impl ErrorExt for Error {
Error::ListCatalogs { source, .. }
| Error::ListNodes { source, .. }
| Error::ListSchemas { source, .. }
| Error::ListTables { source, .. } => source.status_code(),
| Error::ListTables { source, .. }
| Error::ListFlows { source, .. } => source.status_code(),
Error::CreateTable { source, .. } => source.status_code(),

View File

@@ -25,6 +25,7 @@ use common_config::Mode;
use common_error::ext::BoxedError;
use common_meta::cache::{LayeredCacheRegistryRef, ViewInfoCacheRef};
use common_meta::key::catalog_name::CatalogNameKey;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::schema_name::SchemaNameKey;
use common_meta::key::table_info::TableInfoValue;
use common_meta::key::table_name::TableNameKey;
@@ -85,7 +86,7 @@ impl KvBackendCatalogManager {
.get()
.expect("Failed to get table_route_cache"),
)),
table_metadata_manager: Arc::new(TableMetadataManager::new(backend)),
table_metadata_manager: Arc::new(TableMetadataManager::new(backend.clone())),
system_catalog: SystemCatalog {
catalog_manager: me.clone(),
catalog_cache: Cache::new(CATALOG_CACHE_MAX_CAPACITY),
@@ -93,11 +94,13 @@ impl KvBackendCatalogManager {
information_schema_provider: Arc::new(InformationSchemaProvider::new(
DEFAULT_CATALOG_NAME.to_string(),
me.clone(),
Arc::new(FlowMetadataManager::new(backend.clone())),
)),
pg_catalog_provider: Arc::new(PGCatalogProvider::new(
DEFAULT_CATALOG_NAME.to_string(),
me.clone(),
)),
backend,
},
cache_registry,
})
@@ -313,6 +316,7 @@ struct SystemCatalog {
// system_schema_provider for default catalog
information_schema_provider: Arc<InformationSchemaProvider>,
pg_catalog_provider: Arc<PGCatalogProvider>,
backend: KvBackendRef,
}
impl SystemCatalog {
@@ -358,6 +362,7 @@ impl SystemCatalog {
Arc::new(InformationSchemaProvider::new(
catalog.to_string(),
self.catalog_manager.clone(),
Arc::new(FlowMetadataManager::new(self.backend.clone())),
))
});
information_schema_provider.table(table_name)

View File

@@ -23,6 +23,8 @@ use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_PRIVATE_SCHEMA_NAME, DEFAULT_SCHEMA_NAME,
INFORMATION_SCHEMA_NAME, PG_CATALOG_NAME,
};
use common_meta::key::flow::FlowMetadataManager;
use common_meta::kv_backend::memory::MemoryKvBackend;
use futures_util::stream::BoxStream;
use snafu::OptionExt;
use table::TableRef;
@@ -298,6 +300,7 @@ impl MemoryCatalogManager {
let information_schema_provider = InformationSchemaProvider::new(
catalog,
Arc::downgrade(self) as Weak<dyn CatalogManager>,
Arc::new(FlowMetadataManager::new(Arc::new(MemoryKvBackend::new()))),
);
let information_schema = information_schema_provider.tables().clone();

View File

@@ -15,6 +15,8 @@
pub mod information_schema;
mod memory_table;
pub mod pg_catalog;
mod predicate;
mod utils;
use std::collections::HashMap;
use std::sync::Arc;

View File

@@ -14,28 +14,27 @@
mod cluster_info;
pub mod columns;
pub mod flows;
mod information_memory_table;
pub mod key_column_usage;
mod partitions;
mod predicate;
mod region_peers;
mod runtime_metrics;
pub mod schemata;
mod table_constraints;
mod table_names;
pub mod tables;
pub(crate) mod utils;
mod views;
use std::collections::HashMap;
use std::sync::{Arc, Weak};
use common_catalog::consts::{self, DEFAULT_CATALOG_NAME, INFORMATION_SCHEMA_NAME};
use common_meta::key::flow::FlowMetadataManager;
use common_recordbatch::SendableRecordBatchStream;
use datatypes::schema::SchemaRef;
use lazy_static::lazy_static;
use paste::paste;
pub(crate) use predicate::Predicates;
use store_api::storage::{ScanRequest, TableId};
use table::metadata::TableType;
use table::TableRef;
@@ -46,6 +45,7 @@ use self::columns::InformationSchemaColumns;
use super::{SystemSchemaProviderInner, SystemTable, SystemTableRef};
use crate::error::Result;
use crate::system_schema::information_schema::cluster_info::InformationSchemaClusterInfo;
use crate::system_schema::information_schema::flows::InformationSchemaFlows;
use crate::system_schema::information_schema::information_memory_table::get_schema_columns;
use crate::system_schema::information_schema::key_column_usage::InformationSchemaKeyColumnUsage;
use crate::system_schema::information_schema::partitions::InformationSchemaPartitions;
@@ -55,6 +55,7 @@ use crate::system_schema::information_schema::schemata::InformationSchemaSchemat
use crate::system_schema::information_schema::table_constraints::InformationSchemaTableConstraints;
use crate::system_schema::information_schema::tables::InformationSchemaTables;
use crate::system_schema::memory_table::MemoryTable;
pub(crate) use crate::system_schema::predicate::Predicates;
use crate::system_schema::SystemSchemaProvider;
use crate::CatalogManager;
@@ -104,6 +105,7 @@ macro_rules! setup_memory_table {
pub struct InformationSchemaProvider {
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
flow_metadata_manager: Arc<FlowMetadataManager>,
tables: HashMap<String, TableRef>,
}
@@ -182,16 +184,25 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
FLOWS => Some(Arc::new(InformationSchemaFlows::new(
self.catalog_name.clone(),
self.flow_metadata_manager.clone(),
)) as _),
_ => None,
}
}
}
impl InformationSchemaProvider {
pub fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
pub fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
flow_metadata_manager: Arc<FlowMetadataManager>,
) -> Self {
let mut provider = Self {
catalog_name,
catalog_manager,
flow_metadata_manager,
tables: HashMap::new(),
};
@@ -238,6 +249,7 @@ impl InformationSchemaProvider {
TABLE_CONSTRAINTS.to_string(),
self.build_table(TABLE_CONSTRAINTS).unwrap(),
);
tables.insert(FLOWS.to_string(), self.build_table(FLOWS).unwrap());
// Add memory tables
for name in MEMORY_TABLES.iter() {

View File

@@ -41,7 +41,8 @@ use store_api::storage::{ScanRequest, TableId};
use super::CLUSTER_INFO;
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, ListNodesSnafu, Result};
use crate::system_schema::information_schema::{utils, InformationTable, Predicates};
use crate::system_schema::information_schema::{InformationTable, Predicates};
use crate::system_schema::utils;
use crate::CatalogManager;
const PEER_ID: &str = "peer_id";

View File

@@ -0,0 +1,305 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use common_catalog::consts::INFORMATION_SCHEMA_FLOW_TABLE_ID;
use common_error::ext::BoxedError;
use common_meta::key::flow::flow_info::FlowInfoValue;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::FlowId;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::prelude::ConcreteDataType as CDT;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{Int64VectorBuilder, StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use crate::error::{
CreateRecordBatchSnafu, FlowInfoNotFoundSnafu, InternalSnafu, JsonSnafu, ListFlowsSnafu, Result,
};
use crate::information_schema::{Predicates, FLOWS};
use crate::system_schema::information_schema::InformationTable;
const INIT_CAPACITY: usize = 42;
// rows of information_schema.flows
// pk is (flow_name, flow_id, table_catalog)
pub const FLOW_NAME: &str = "flow_name";
pub const FLOW_ID: &str = "flow_id";
pub const TABLE_CATALOG: &str = "table_catalog";
pub const FLOW_DEFINITION: &str = "flow_definition";
pub const COMMENT: &str = "comment";
pub const EXPIRE_AFTER: &str = "expire_after";
pub const SOURCE_TABLE_IDS: &str = "source_table_ids";
pub const SINK_TABLE_NAME: &str = "sink_table_name";
pub const FLOWNODE_IDS: &str = "flownode_ids";
pub const OPTIONS: &str = "options";
/// The `information_schema.flows` table provides information about flows in databases.
pub(super) struct InformationSchemaFlows {
schema: SchemaRef,
catalog_name: String,
flow_metadata_manager: Arc<FlowMetadataManager>,
}
impl InformationSchemaFlows {
pub(super) fn new(
catalog_name: String,
flow_metadata_manager: Arc<FlowMetadataManager>,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
flow_metadata_manager,
}
}
/// For complex fields (including [`SOURCE_TABLE_IDS`], [`FLOWNODE_IDS`] and [`OPTIONS`]), the value is serialized to a JSON string for now
/// TODO(discord9): use a better way to store complex fields like json type
pub(crate) fn schema() -> SchemaRef {
Arc::new(Schema::new(
vec![
(FLOW_NAME, CDT::string_datatype(), false),
(FLOW_ID, CDT::uint32_datatype(), false),
(TABLE_CATALOG, CDT::string_datatype(), false),
(FLOW_DEFINITION, CDT::string_datatype(), false),
(COMMENT, CDT::string_datatype(), true),
(EXPIRE_AFTER, CDT::int64_datatype(), true),
(SOURCE_TABLE_IDS, CDT::string_datatype(), true),
(SINK_TABLE_NAME, CDT::string_datatype(), false),
(FLOWNODE_IDS, CDT::string_datatype(), true),
(OPTIONS, CDT::string_datatype(), true),
]
.into_iter()
.map(|(name, ty, nullable)| ColumnSchema::new(name, ty, nullable))
.collect(),
))
}
fn builder(&self) -> InformationSchemaFlowsBuilder {
InformationSchemaFlowsBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
&self.flow_metadata_manager,
)
}
}
impl InformationTable for InformationSchemaFlows {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_FLOW_TABLE_ID
}
fn table_name(&self) -> &'static str {
FLOWS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_flows(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(|err| datafusion::error::DataFusionError::External(Box::new(err)))
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
/// Builds the `information_schema.FLOWS` table row by row
///
/// columns are based on [`FlowInfoValue`]
struct InformationSchemaFlowsBuilder {
schema: SchemaRef,
catalog_name: String,
flow_metadata_manager: Arc<FlowMetadataManager>,
flow_names: StringVectorBuilder,
flow_ids: UInt32VectorBuilder,
table_catalogs: StringVectorBuilder,
raw_sqls: StringVectorBuilder,
comments: StringVectorBuilder,
expire_afters: Int64VectorBuilder,
source_table_id_groups: StringVectorBuilder,
sink_table_names: StringVectorBuilder,
flownode_id_groups: StringVectorBuilder,
option_groups: StringVectorBuilder,
}
impl InformationSchemaFlowsBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
flow_metadata_manager: &Arc<FlowMetadataManager>,
) -> Self {
Self {
schema,
catalog_name,
flow_metadata_manager: flow_metadata_manager.clone(),
flow_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
flow_ids: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
table_catalogs: StringVectorBuilder::with_capacity(INIT_CAPACITY),
raw_sqls: StringVectorBuilder::with_capacity(INIT_CAPACITY),
comments: StringVectorBuilder::with_capacity(INIT_CAPACITY),
expire_afters: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
source_table_id_groups: StringVectorBuilder::with_capacity(INIT_CAPACITY),
sink_table_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
flownode_id_groups: StringVectorBuilder::with_capacity(INIT_CAPACITY),
option_groups: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `information_schema.flows` virtual table
async fn make_flows(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let predicates = Predicates::from_scan_request(&request);
let flow_info_manager = self.flow_metadata_manager.clone();
// TODO(discord9): use `AsyncIterator` once it's stable-ish
let mut stream = flow_info_manager
.flow_name_manager()
.flow_names(&catalog_name)
.await;
while let Some((flow_name, flow_id)) = stream
.try_next()
.await
.map_err(BoxedError::new)
.context(ListFlowsSnafu {
catalog: &catalog_name,
})?
{
let flow_info = flow_info_manager
.flow_info_manager()
.get(flow_id.flow_id())
.await
.map_err(BoxedError::new)
.context(InternalSnafu)?
.context(FlowInfoNotFoundSnafu {
catalog_name: catalog_name.to_string(),
flow_name: flow_name.to_string(),
})?;
self.add_flow(&predicates, flow_id.flow_id(), flow_info)?;
}
self.finish()
}
fn add_flow(
&mut self,
predicates: &Predicates,
flow_id: FlowId,
flow_info: FlowInfoValue,
) -> Result<()> {
let row = [
(FLOW_NAME, &Value::from(flow_info.flow_name().to_string())),
(FLOW_ID, &Value::from(flow_id)),
(
TABLE_CATALOG,
&Value::from(flow_info.catalog_name().to_string()),
),
];
if !predicates.eval(&row) {
return Ok(());
}
self.flow_names.push(Some(flow_info.flow_name()));
self.flow_ids.push(Some(flow_id));
self.table_catalogs.push(Some(flow_info.catalog_name()));
self.raw_sqls.push(Some(flow_info.raw_sql()));
self.comments.push(Some(flow_info.comment()));
self.expire_afters.push(flow_info.expire_after());
self.source_table_id_groups.push(Some(
&serde_json::to_string(flow_info.source_table_ids()).context(JsonSnafu {
input: format!("{:?}", flow_info.source_table_ids()),
})?,
));
self.sink_table_names
.push(Some(&flow_info.sink_table_name().to_string()));
self.flownode_id_groups.push(Some(
&serde_json::to_string(flow_info.flownode_ids()).context({
JsonSnafu {
input: format!("{:?}", flow_info.flownode_ids()),
}
})?,
));
self.option_groups
.push(Some(&serde_json::to_string(flow_info.options()).context(
JsonSnafu {
input: format!("{:?}", flow_info.options()),
},
)?));
Ok(())
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.flow_names.finish()),
Arc::new(self.flow_ids.finish()),
Arc::new(self.table_catalogs.finish()),
Arc::new(self.raw_sqls.finish()),
Arc::new(self.comments.finish()),
Arc::new(self.expire_afters.finish()),
Arc::new(self.source_table_id_groups.finish()),
Arc::new(self.sink_table_names.finish()),
Arc::new(self.flownode_id_groups.finish()),
Arc::new(self.option_groups.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaFlows {
fn schema(&self) -> &arrow_schema::SchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema: Arc<arrow_schema::Schema> = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_flows(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
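Once the table is registered, the flows metadata can be inspected from any SQL frontend. A hedged sketch using the MySQL client, assuming the frontend listens on the default MySQL port `4002`; the column names come from the schema defined above.

```bash
# Inspect the new information_schema.flows virtual table over the MySQL protocol
# (127.0.0.1:4002 is an assumed default endpoint).
mysql -h 127.0.0.1 -P 4002 \
  -e "SELECT flow_name, flow_id, table_catalog, sink_table_name FROM information_schema.flows;"
```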

View File

@@ -19,7 +19,7 @@ use datatypes::schema::{Schema, SchemaRef};
use datatypes::vectors::{Int64Vector, StringVector, VectorRef};
use super::table_names::*;
use crate::system_schema::memory_table::tables::{
use crate::system_schema::utils::tables::{
bigint_column, datetime_column, string_column, string_columns,
};

View File

@@ -36,7 +36,8 @@ use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, TableMetadataManagerSnafu,
UpgradeWeakCatalogManagerRefSnafu,
};
use crate::system_schema::information_schema::{utils, InformationTable, Predicates};
use crate::system_schema::information_schema::{InformationTable, Predicates};
use crate::system_schema::utils;
use crate::CatalogManager;
pub const CATALOG_NAME: &str = "catalog_name";

View File

@@ -44,3 +44,4 @@ pub const REGION_PEERS: &str = "region_peers";
pub const TABLE_CONSTRAINTS: &str = "table_constraints";
pub const CLUSTER_INFO: &str = "cluster_info";
pub const VIEWS: &str = "views";
pub const FLOWS: &str = "flows";

View File

@@ -298,7 +298,7 @@ impl InformationSchemaTablesBuilder {
self.data_free.push(Some(0));
self.auto_increment.push(Some(0));
self.row_format.push(Some("Fixed"));
self.table_collation.push(None);
self.table_collation.push(Some("utf8_bin"));
self.update_time.push(None);
self.check_time.push(None);

View File

@@ -26,7 +26,7 @@ use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatc
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{BooleanVectorBuilder, StringVectorBuilder};
use datatypes::vectors::StringVectorBuilder;
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
@@ -76,7 +76,7 @@ impl InformationSchemaViews {
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(VIEW_DEFINITION, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(CHECK_OPTION, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(IS_UPDATABLE, ConcreteDataType::boolean_datatype(), true),
ColumnSchema::new(IS_UPDATABLE, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(DEFINER, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(SECURITY_TYPE, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(
@@ -148,7 +148,7 @@ struct InformationSchemaViewsBuilder {
table_names: StringVectorBuilder,
view_definitions: StringVectorBuilder,
check_options: StringVectorBuilder,
is_updatable: BooleanVectorBuilder,
is_updatable: StringVectorBuilder,
definer: StringVectorBuilder,
security_type: StringVectorBuilder,
character_set_client: StringVectorBuilder,
@@ -170,7 +170,7 @@ impl InformationSchemaViewsBuilder {
table_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
view_definitions: StringVectorBuilder::with_capacity(INIT_CAPACITY),
check_options: StringVectorBuilder::with_capacity(INIT_CAPACITY),
is_updatable: BooleanVectorBuilder::with_capacity(INIT_CAPACITY),
is_updatable: StringVectorBuilder::with_capacity(INIT_CAPACITY),
definer: StringVectorBuilder::with_capacity(INIT_CAPACITY),
security_type: StringVectorBuilder::with_capacity(INIT_CAPACITY),
character_set_client: StringVectorBuilder::with_capacity(INIT_CAPACITY),
@@ -241,7 +241,8 @@ impl InformationSchemaViewsBuilder {
self.table_names.push(Some(table_name));
self.view_definitions.push(Some(definition));
self.check_options.push(None);
self.is_updatable.push(Some(true));
// Views are not updatable: statements such as UPDATE, DELETE, and INSERT are illegal and are rejected.
self.is_updatable.push(Some("NO"));
self.definer.push(None);
self.security_type.push(None);
self.character_set_client.push(Some("utf8"));
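With `IS_UPDATABLE` now reported as a string, the change can be verified with a quick query; the sketch below again assumes the default MySQL endpoint `127.0.0.1:4002`.

```bash
# Views should now report is_updatable as the string 'NO'.
mysql -h 127.0.0.1 -P 4002 \
  -e "SELECT table_name, is_updatable FROM information_schema.views;"
```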

View File

@@ -13,7 +13,6 @@
// limitations under the License.
mod table_columns;
pub mod tables;
use std::sync::Arc;

View File

@@ -13,6 +13,8 @@
// limitations under the License.
mod pg_catalog_memory_table;
mod pg_class;
mod pg_namespace;
mod table_names;
use std::collections::HashMap;
@@ -23,11 +25,14 @@ use datatypes::schema::ColumnSchema;
use lazy_static::lazy_static;
use paste::paste;
use pg_catalog_memory_table::get_schema_columns;
use pg_class::PGClass;
use pg_namespace::PGNamespace;
use table::TableRef;
pub use table_names::*;
use super::memory_table::tables::u32_column;
use self::pg_namespace::oid_map::{PGNamespaceOidMap, PGNamespaceOidMapRef};
use super::memory_table::MemoryTable;
use super::utils::tables::u32_column;
use super::{SystemSchemaProvider, SystemSchemaProviderInner, SystemTableRef};
use crate::CatalogManager;
@@ -46,8 +51,11 @@ fn oid_column() -> ColumnSchema {
/// [`PGCatalogProvider`] is the provider for a schema named `pg_catalog`, it is not a catalog.
pub struct PGCatalogProvider {
catalog_name: String,
_catalog_manager: Weak<dyn CatalogManager>,
catalog_manager: Weak<dyn CatalogManager>,
tables: HashMap<String, TableRef>,
// Workaround to store mapping of schema_name to a numeric id
namespace_oid_map: PGNamespaceOidMapRef,
}
impl SystemSchemaProvider for PGCatalogProvider {
@@ -79,8 +87,9 @@ impl PGCatalogProvider {
pub fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
let mut provider = Self {
catalog_name,
_catalog_manager: catalog_manager,
catalog_manager,
tables: HashMap::new(),
namespace_oid_map: Arc::new(PGNamespaceOidMap::new()),
};
provider.build_tables();
provider
@@ -90,9 +99,19 @@ impl PGCatalogProvider {
// SECURITY NOTE:
// Must follow the same security rules as [`InformationSchemaProvider::build_tables`].
let mut tables = HashMap::new();
// TODO(J0HN50N133): model the table_name as an enum type to get rid of expect/unwrap here
// It's safe to unwrap here because we are sure that the constants have been handled correctly inside system_table.
for name in MEMORY_TABLES.iter() {
tables.insert(name.to_string(), self.build_table(name).expect(name));
}
tables.insert(
PG_NAMESPACE.to_string(),
self.build_table(PG_NAMESPACE).expect(PG_NAMESPACE),
);
tables.insert(
PG_CLASS.to_string(),
self.build_table(PG_CLASS).expect(PG_CLASS),
);
self.tables = tables;
}
}
@@ -105,6 +124,16 @@ impl SystemSchemaProviderInner for PGCatalogProvider {
fn system_table(&self, name: &str) -> Option<SystemTableRef> {
match name {
table_names::PG_TYPE => setup_memory_table!(PG_TYPE),
table_names::PG_NAMESPACE => Some(Arc::new(PGNamespace::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
))),
table_names::PG_CLASS => Some(Arc::new(PGClass::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
))),
_ => None,
}
}
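The two new `pg_catalog` tables registered above are intended for PostgreSQL-protocol clients. A hedged sketch with `psql`, assuming the frontend's default PostgreSQL port `4003` and the column names defined in `pg_class.rs` below.

```bash
# List relations from the new pg_catalog.pg_class table
# ('r' = table, 'v' = view, per the RELKIND constants; 4003 is an assumed default port).
psql -h 127.0.0.1 -p 4003 -d public \
  -c "SELECT relname, relnamespace, relkind FROM pg_catalog.pg_class LIMIT 10;"
```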

View File

@@ -20,7 +20,7 @@ use datatypes::vectors::{Int16Vector, StringVector, UInt32Vector, VectorRef};
use super::oid_column;
use super::table_names::PG_TYPE;
use crate::memory_table_cols;
use crate::system_schema::memory_table::tables::{i16_column, string_column};
use crate::system_schema::utils::tables::{i16_column, string_column};
fn pg_type_schema_columns() -> (Vec<ColumnSchema>, Vec<VectorRef>) {
// TODO(j0hn50n133): acquire this information from `DataType` instead of hardcoding it to avoid regression.

View File

@@ -0,0 +1,263 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::PG_CATALOG_PG_CLASS_TABLE_ID;
use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use table::metadata::TableType;
use super::pg_namespace::oid_map::PGNamespaceOidMapRef;
use super::{OID_COLUMN_NAME, PG_CLASS};
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::utils::tables::{string_column, u32_column};
use crate::system_schema::SystemTable;
use crate::CatalogManager;
// === column name ===
pub const RELNAME: &str = "relname";
pub const RELNAMESPACE: &str = "relnamespace";
pub const RELKIND: &str = "relkind";
pub const RELOWNER: &str = "relowner";
// === enum value of relkind ===
pub const RELKIND_TABLE: &str = "r";
pub const RELKIND_VIEW: &str = "v";
/// The initial capacity of the vector builders.
const INIT_CAPACITY: usize = 42;
/// The dummy owner id for the namespace.
const DUMMY_OWNER_ID: u32 = 0;
/// The `pg_catalog.pg_class` table implementation.
pub(super) struct PGClass {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
// Workaround to convert schema_name to a numeric id
namespace_oid_map: PGNamespaceOidMapRef,
}
impl PGClass {
pub(super) fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
namespace_oid_map,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
u32_column(OID_COLUMN_NAME),
string_column(RELNAME),
u32_column(RELNAMESPACE),
string_column(RELKIND),
u32_column(RELOWNER),
]))
}
fn builder(&self) -> PGClassBuilder {
PGClassBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
)
}
}
impl SystemTable for PGClass {
fn table_id(&self) -> table::metadata::TableId {
PG_CATALOG_PG_CLASS_TABLE_ID
}
fn table_name(&self) -> &'static str {
PG_CLASS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(
&self,
request: ScanRequest,
) -> Result<common_recordbatch::SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_class(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
impl DfPartitionStream for PGClass {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_class(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
/// Builds the `pg_catalog.pg_class` table row by row
/// TODO(J0HN50N133): `relowner` is always the [`DUMMY_OWNER_ID`] because we don't have a user system yet.
/// Once we have a user system, make it the actual owner of the table.
struct PGClassBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
oid: UInt32VectorBuilder,
relname: StringVectorBuilder,
relnamespace: UInt32VectorBuilder,
relkind: StringVectorBuilder,
relowner: UInt32VectorBuilder,
}
impl PGClassBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
namespace_oid_map,
oid: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
relname: StringVectorBuilder::with_capacity(INIT_CAPACITY),
relnamespace: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
relkind: StringVectorBuilder::with_capacity(INIT_CAPACITY),
relowner: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
}
}
async fn make_class(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
let mut stream = catalog_manager.tables(&catalog_name, &schema_name);
while let Some(table) = stream.try_next().await? {
let table_info = table.table_info();
self.add_class(
&predicates,
table_info.table_id(),
&schema_name,
&table_info.name,
if table_info.table_type == TableType::View {
RELKIND_VIEW
} else {
RELKIND_TABLE
},
);
}
}
self.finish()
}
fn add_class(
&mut self,
predicates: &Predicates,
oid: u32,
schema: &str,
table: &str,
kind: &str,
) {
let namespace_oid = self.namespace_oid_map.get_oid(schema);
let row = [
(OID_COLUMN_NAME, &Value::from(oid)),
(RELNAMESPACE, &Value::from(schema)),
(RELNAME, &Value::from(table)),
(RELKIND, &Value::from(kind)),
(RELOWNER, &Value::from(DUMMY_OWNER_ID)),
];
if !predicates.eval(&row) {
return;
}
self.oid.push(Some(oid));
self.relnamespace.push(Some(namespace_oid));
self.relname.push(Some(table));
self.relkind.push(Some(kind));
self.relowner.push(Some(DUMMY_OWNER_ID));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.oid.finish()),
Arc::new(self.relname.finish()),
Arc::new(self.relnamespace.finish()),
Arc::new(self.relkind.finish()),
Arc::new(self.relowner.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
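For orientation, a hedged sketch of what this builder ends up emitting (the table names and oids below are hypothetical): each table or view becomes one `pg_class` row whose `relnamespace` is the deterministic oid of its schema, whose `relkind` is `"r"` for base tables or `"v"` for views, and whose `relowner` is pinned to `DUMMY_OWNER_ID`.

// Hypothetical calls made while scanning the catalog:
// builder.add_class(&predicates, 1024, "public", "monitor", RELKIND_TABLE);
// builder.add_class(&predicates, 1025, "public", "system_metrics", RELKIND_TABLE);
// builder.add_class(&predicates, 1026, "ops", "error_rate_view", RELKIND_VIEW);
//
// Resulting rows (relnamespace comes from PGNamespaceOidMap::get_oid):
// oid  | relname         | relnamespace       | relkind | relowner
// 1024 | monitor         | get_oid("public")  | r       | 0
// 1025 | system_metrics  | get_oid("public")  | r       | 0
// 1026 | error_rate_view | get_oid("ops")     | v       | 0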

View File

@@ -0,0 +1,206 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub(super) mod oid_map;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::PG_CATALOG_PG_NAMESPACE_TABLE_ID;
use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use snafu::{OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use super::{PGNamespaceOidMapRef, OID_COLUMN_NAME, PG_NAMESPACE};
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::utils::tables::{string_column, u32_column};
use crate::system_schema::SystemTable;
use crate::CatalogManager;
/// The `pg_catalog.pg_namespace` table implementation.
/// A namespace corresponds to a schema in GreptimeDB.
const NSPNAME: &str = "nspname";
const INIT_CAPACITY: usize = 42;
pub(super) struct PGNamespace {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
// Workaround to convert schema_name to a numeric id
oid_map: PGNamespaceOidMapRef,
}
impl PGNamespace {
pub(super) fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
oid_map,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
// TODO(J0HN50N133): we do not have a numeric schema id, use schema name as a workaround. Use a proper schema id once we have it.
u32_column(OID_COLUMN_NAME),
string_column(NSPNAME),
]))
}
fn builder(&self) -> PGNamespaceBuilder {
PGNamespaceBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.oid_map.clone(),
)
}
}
impl SystemTable for PGNamespace {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn table_id(&self) -> table::metadata::TableId {
PG_CATALOG_PG_NAMESPACE_TABLE_ID
}
fn table_name(&self) -> &'static str {
PG_NAMESPACE
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_namespace(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
impl DfPartitionStream for PGNamespace {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_namespace(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
/// Builds the `pg_catalog.pg_namespace` table row by row
/// `oid` is derived from the schema name as a workaround since we don't have a numeric schema id.
/// `nspname` is the schema name.
struct PGNamespaceBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
oid: UInt32VectorBuilder,
nspname: StringVectorBuilder,
}
impl PGNamespaceBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
namespace_oid_map,
oid: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
nspname: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `pg_catalog.pg_namespace` virtual table
async fn make_namespace(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
self.add_namespace(&predicates, &schema_name);
}
self.finish()
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> =
vec![Arc::new(self.oid.finish()), Arc::new(self.nspname.finish())];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
fn add_namespace(&mut self, predicates: &Predicates, schema_name: &str) {
let oid = self.namespace_oid_map.get_oid(schema_name);
let row = [
(OID_COLUMN_NAME, &Value::from(oid)),
(NSPNAME, &Value::from(schema_name)),
];
if !predicates.eval(&row) {
return;
}
self.oid.push(Some(oid));
self.nspname.push(Some(schema_name));
}
}

View File

@@ -0,0 +1,100 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::hash::BuildHasher;
use std::sync::Arc;
use dashmap::DashMap;
use rustc_hash::FxSeededState;
pub type PGNamespaceOidMapRef = Arc<PGNamespaceOidMap>;
// Workaround to convert schema_name to a numeric id;
// remove this once greptime has numeric schema ids.
pub struct PGNamespaceOidMap {
oid_map: DashMap<String, u32>,
// Rust uses SipHasher by default, which provides resistance against DoS attacks but
// produces different hash values across greptime instances, which makes the sqlness
// tests fail. We need a deterministic hash here to provide the same oid for the same
// schema name on a best-effort basis; DoS attacks are not a concern in this context.
hasher: FxSeededState,
}
impl PGNamespaceOidMap {
pub fn new() -> Self {
Self {
oid_map: DashMap::new(),
hasher: FxSeededState::with_seed(0), // PLEASE DO NOT MODIFY THIS SEED VALUE!!!
}
}
fn oid_is_used(&self, oid: u32) -> bool {
self.oid_map.iter().any(|e| *e.value() == oid)
}
pub fn get_oid(&self, schema_name: &str) -> u32 {
if let Some(oid) = self.oid_map.get(schema_name) {
*oid
} else {
let mut oid = self.hasher.hash_one(schema_name) as u32;
while self.oid_is_used(oid) {
oid = self.hasher.hash_one(oid) as u32;
}
self.oid_map.insert(schema_name.to_string(), oid);
oid
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn oid_is_stable() {
let oid_map_1 = PGNamespaceOidMap::new();
let oid_map_2 = PGNamespaceOidMap::new();
let schema = "schema";
let oid = oid_map_1.get_oid(schema);
// oid stays stable within the same instance
assert_eq!(oid, oid_map_1.get_oid(schema));
// oid stays stable across different instances
assert_eq!(oid, oid_map_2.get_oid(schema));
}
#[test]
fn oid_collision() {
let oid_map = PGNamespaceOidMap::new();
let key1 = "3178510";
let key2 = "4215648";
// the two keys hash to the same u32 (a collision)
assert_eq!(
oid_map.hasher.hash_one(key1) as u32,
oid_map.hasher.hash_one(key2) as u32
);
// insert them into oid_map
let oid1 = oid_map.get_oid(key1);
let oid2 = oid_map.get_oid(key2);
// they should still be assigned different oids
assert_ne!(oid1, oid2);
}
}
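A minimal wiring sketch (the surrounding variables are assumed, not part of this change) showing why a single `PGNamespaceOidMapRef` is shared by `pg_class` and `pg_namespace`: both tables must report the same oid for a given schema so that `pg_class.relnamespace` can be joined against `pg_namespace.oid`.

// `catalog_name: String` and `catalog_manager: Weak<dyn CatalogManager>` come from
// the enclosing pg_catalog provider and are assumed here.
let oid_map: PGNamespaceOidMapRef = Arc::new(PGNamespaceOidMap::new());

// Hand the same map to both system tables...
let pg_namespace = PGNamespace::new(
    catalog_name.clone(),
    catalog_manager.clone(),
    oid_map.clone(),
);
let pg_class = PGClass::new(catalog_name, catalog_manager, oid_map.clone());

// ...so a schema resolves to one stable oid everywhere:
assert_eq!(oid_map.get_oid("public"), oid_map.get_oid("public"));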

View File

@@ -25,7 +25,7 @@ type ColumnName = String;
/// we only support these simple predicates currently.
/// TODO(dennis): supports more predicate types.
#[derive(Clone, PartialEq, Eq, Debug)]
enum Predicate {
pub(crate) enum Predicate {
Eq(ColumnName, Value),
Like(ColumnName, String, bool),
NotEq(ColumnName, Value),

View File

@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod tables;
use std::sync::{Arc, Weak};
use common_config::Mode;

View File

@@ -33,9 +33,12 @@ use common_telemetry::tracing_context::W3cTrace;
use futures_util::StreamExt;
use prost::Message;
use snafu::{ensure, ResultExt};
use tonic::metadata::AsciiMetadataKey;
use tonic::transport::Channel;
use crate::error::{ConvertFlightDataSnafu, Error, IllegalFlightMessagesSnafu, ServerSnafu};
use crate::error::{
ConvertFlightDataSnafu, Error, IllegalFlightMessagesSnafu, InvalidAsciiSnafu, ServerSnafu,
};
use crate::{from_grpc_response, Client, Result};
#[derive(Clone, Debug, Default)]
@@ -130,6 +133,36 @@ impl Database {
self.handle(Request::Inserts(requests)).await
}
pub async fn insert_with_hints(
&self,
requests: InsertRequests,
hints: &[(&str, &str)],
) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let request = self.to_rpc_request(Request::Inserts(requests));
let mut request = tonic::Request::new(request);
let metadata = request.metadata_mut();
for (key, value) in hints {
let key = AsciiMetadataKey::from_bytes(format!("x-greptime-hint-{}", key).as_bytes())
.map_err(|_| {
InvalidAsciiSnafu {
value: key.to_string(),
}
.build()
})?;
let value = value.parse().map_err(|_| {
InvalidAsciiSnafu {
value: value.to_string(),
}
.build()
})?;
metadata.insert(key, value);
}
let response = client.handle(request).await?.into_inner();
from_grpc_response(response)
}
async fn handle(&self, request: Request) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let request = self.to_rpc_request(request);
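A hedged usage sketch of the new method; the hint key and value below are made up for illustration, and the only real requirement (enforced above) is that both are ASCII because they travel as `x-greptime-hint-*` gRPC metadata.

// `database` is an existing `Database` client and `requests` an `InsertRequests`
// built elsewhere; the hint name is hypothetical.
let affected_rows = database
    .insert_with_hints(requests, &[("append_mode", "true")])
    .await?;
// On the wire this adds the metadata header:
//   x-greptime-hint-append_mode: true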

View File

@@ -122,6 +122,13 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to parse ascii string: {}", value))]
InvalidAscii {
value: String,
#[snafu(implicit)]
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -143,6 +150,8 @@ impl ErrorExt for Error {
| Error::ConvertFlightData { source, .. }
| Error::CreateTlsChannel { source, .. } => source.status_code(),
Error::IllegalGrpcClientState { .. } => StatusCode::Unexpected,
Error::InvalidAscii { .. } => StatusCode::InvalidArguments,
}
}

View File

@@ -16,7 +16,7 @@ use api::v1::flow::{FlowRequest, FlowResponse};
use api::v1::region::InsertRequests;
use common_error::ext::BoxedError;
use common_meta::node_manager::Flownode;
use snafu::{location, Location, ResultExt};
use snafu::{location, ResultExt};
use crate::error::Result;
use crate::Client;

View File

@@ -33,7 +33,7 @@ use common_telemetry::error;
use common_telemetry::tracing_context::TracingContext;
use prost::Message;
use query::query_engine::DefaultSerializer;
use snafu::{location, Location, OptionExt, ResultExt};
use snafu::{location, OptionExt, ResultExt};
use substrait::{DFLogicalSubstraitConvertor, SubstraitPlan};
use tokio_stream::StreamExt;

View File

@@ -10,7 +10,9 @@ name = "greptime"
path = "src/bin/greptime.rs"
[features]
default = ["python"]
tokio-console = ["common-telemetry/tokio-console"]
python = ["frontend/python"]
[lints]
workspace = true
@@ -47,7 +49,7 @@ either = "1.8"
etcd-client.workspace = true
file-engine.workspace = true
flow.workspace = true
frontend.workspace = true
frontend = { workspace = true, default-features = false }
futures.workspace = true
human-panic = "1.2.2"
lazy_static.workspace = true

View File

@@ -62,8 +62,37 @@ enum SubCommand {
#[global_allocator]
static ALLOC: tikv_jemallocator::Jemalloc = tikv_jemallocator::Jemalloc;
#[cfg(debug_assertions)]
fn main() -> Result<()> {
use snafu::ResultExt;
// Set the stack size to 8MB for the thread so it won't overflow on large stack usage in debug mode
// see https://github.com/GreptimeTeam/greptimedb/pull/4317
// and https://github.com/rust-lang/rust/issues/34283
std::thread::Builder::new()
.name("main_spawn".to_string())
.stack_size(8 * 1024 * 1024)
.spawn(|| {
{
tokio::runtime::Builder::new_multi_thread()
.thread_stack_size(8 * 1024 * 1024)
.enable_all()
.build()
.expect("Failed building the Runtime")
.block_on(main_body())
}
})
.context(cmd::error::SpawnThreadSnafu)?
.join()
.expect("Couldn't join on the associated thread")
}
#[cfg(not(debug_assertions))]
#[tokio::main]
async fn main() -> Result<()> {
main_body().await
}
async fn main_body() -> Result<()> {
setup_human_panic();
start(Command::parse()).await
}

View File

@@ -21,11 +21,12 @@ use base64::engine::general_purpose;
use base64::Engine;
use clap::{Parser, ValueEnum};
use client::DEFAULT_SCHEMA_NAME;
use common_telemetry::{debug, error, info, warn};
use common_catalog::consts::DEFAULT_CATALOG_NAME;
use common_telemetry::{debug, error, info};
use serde_json::Value;
use servers::http::greptime_result_v1::GreptimedbV1Response;
use servers::http::GreptimeQueryOutput;
use snafu::{OptionExt, ResultExt};
use snafu::ResultExt;
use tokio::fs::File;
use tokio::io::{AsyncWriteExt, BufWriter};
use tokio::sync::Semaphore;
@@ -34,19 +35,20 @@ use tracing_appender::non_blocking::WorkerGuard;
use crate::cli::{Instance, Tool};
use crate::error::{
EmptyResultSnafu, Error, FileIoSnafu, HttpQuerySqlSnafu, InvalidDatabaseNameSnafu, Result,
SerdeJsonSnafu,
EmptyResultSnafu, Error, FileIoSnafu, HttpQuerySqlSnafu, Result, SerdeJsonSnafu,
};
type TableReference = (String, String, String);
#[derive(Debug, Default, Clone, ValueEnum)]
enum ExportTarget {
/// Corresponding to `SHOW CREATE TABLE`
/// Export all table schemas, corresponding to `SHOW CREATE TABLE`.
Schema,
/// Export all table data, corresponding to `COPY DATABASE TO`.
Data,
/// Export all table schemas and data at once.
#[default]
CreateTable,
/// Corresponding to `EXPORT TABLE`
TableData,
All,
}
#[derive(Debug, Default, Parser)]
@@ -72,10 +74,20 @@ pub struct ExportCommand {
max_retry: usize,
/// Things to export
#[clap(long, short = 't', value_enum)]
#[clap(long, short = 't', value_enum, default_value = "all")]
target: ExportTarget,
/// basic authentication for connecting to the server
/// A half-open time range: [start_time, end_time).
/// The start of the time range (time-index column) for data export.
#[clap(long)]
start_time: Option<String>,
/// A half-open time range: [start_time, end_time).
/// The end of the time range (time-index column) for data export.
#[clap(long)]
end_time: Option<String>,
/// The basic authentication for connecting to the server
#[clap(long)]
auth_basic: Option<String>,
}
@@ -99,6 +111,8 @@ impl ExportCommand {
output_dir: self.output_dir.clone(),
parallelism: self.export_jobs,
target: self.target.clone(),
start_time: self.start_time.clone(),
end_time: self.end_time.clone(),
auth_header,
}),
guard,
@@ -113,6 +127,8 @@ pub struct Export {
output_dir: String,
parallelism: usize,
target: ExportTarget,
start_time: Option<String>,
end_time: Option<String>,
auth_header: Option<String>,
}
@@ -161,13 +177,13 @@ impl Export {
if let Some(schema) = &self.schema {
Ok(vec![(self.catalog.clone(), schema.clone())])
} else {
let result = self.sql("show databases").await?;
let result = self.sql("SHOW DATABASES").await?;
let Some(records) = result else {
EmptyResultSnafu.fail()?
};
let mut result = Vec::with_capacity(records.len());
for value in records {
let serde_json::Value::String(schema) = &value[0] else {
let Value::String(schema) = &value[0] else {
unreachable!()
};
if schema == common_catalog::consts::INFORMATION_SCHEMA_NAME {
@@ -188,9 +204,11 @@ impl Export {
) -> Result<(Vec<TableReference>, Vec<TableReference>)> {
// Puts all metric table first
let sql = format!(
"select table_catalog, table_schema, table_name from \
information_schema.columns where column_name = '__tsid' \
and table_catalog = \'{catalog}\' and table_schema = \'{schema}\'"
"SELECT table_catalog, table_schema, table_name \
FROM information_schema.columns \
WHERE column_name = '__tsid' \
and table_catalog = \'{catalog}\' \
and table_schema = \'{schema}\'"
);
let result = self.sql(&sql).await?;
let Some(records) = result else {
@@ -210,9 +228,11 @@ impl Export {
// TODO: SQL injection hurts
let sql = format!(
"select table_catalog, table_schema, table_name from \
information_schema.tables where table_type = \'BASE TABLE\' \
and table_catalog = \'{catalog}\' and table_schema = \'{schema}\'",
"SELECT table_catalog, table_schema, table_name \
FROM information_schema.tables \
WHERE table_type = \'BASE TABLE\' \
and table_catalog = \'{catalog}\' \
and table_schema = \'{schema}\'",
);
let result = self.sql(&sql).await?;
let Some(records) = result else {
@@ -249,14 +269,14 @@ impl Export {
async fn show_create_table(&self, catalog: &str, schema: &str, table: &str) -> Result<String> {
let sql = format!(
r#"show create table "{}"."{}"."{}""#,
r#"SHOW CREATE TABLE "{}"."{}"."{}""#,
catalog, schema, table
);
let result = self.sql(&sql).await?;
let Some(records) = result else {
EmptyResultSnafu.fail()?
};
let serde_json::Value::String(create_table) = &records[0][1] else {
let Value::String(create_table) = &records[0][1] else {
unreachable!()
};
@@ -276,11 +296,13 @@ impl Export {
let (metric_physical_tables, remaining_tables) =
self.get_table_list(&catalog, &schema).await?;
let table_count = metric_physical_tables.len() + remaining_tables.len();
tokio::fs::create_dir_all(&self.output_dir)
let output_dir = Path::new(&self.output_dir)
.join(&catalog)
.join(format!("{schema}/"));
tokio::fs::create_dir_all(&output_dir)
.await
.context(FileIoSnafu)?;
let output_file =
Path::new(&self.output_dir).join(format!("{catalog}-{schema}.sql"));
let output_file = Path::new(&output_dir).join("create_tables.sql");
let mut file = File::create(output_file).await.context(FileIoSnafu)?;
for (c, s, t) in metric_physical_tables.into_iter().chain(remaining_tables) {
match self.show_create_table(&c, &s, &t).await {
@@ -294,7 +316,12 @@ impl Export {
}
}
}
info!("finished exporting {catalog}.{schema} with {table_count} tables",);
info!(
"Finished exporting {catalog}.{schema} with {table_count} table schemas to path: {}",
output_dir.to_string_lossy()
);
Ok::<(), Error>(())
});
}
@@ -317,7 +344,7 @@ impl Export {
Ok(())
}
async fn export_table_data(&self) -> Result<()> {
async fn export_database_data(&self) -> Result<()> {
let timer = Instant::now();
let semaphore = Arc::new(Semaphore::new(self.parallelism));
let db_names = self.iter_db_names().await?;
@@ -327,70 +354,66 @@ impl Export {
let semaphore_moved = semaphore.clone();
tasks.push(async move {
let _permit = semaphore_moved.acquire().await.unwrap();
tokio::fs::create_dir_all(&self.output_dir)
let output_dir = Path::new(&self.output_dir)
.join(&catalog)
.join(format!("{schema}/"));
tokio::fs::create_dir_all(&output_dir)
.await
.context(FileIoSnafu)?;
let output_dir = Path::new(&self.output_dir).join(format!("{catalog}-{schema}/"));
// Ignores metric physical tables
let (metrics_tables, table_list) = self.get_table_list(&catalog, &schema).await?;
for (_, _, table_name) in metrics_tables {
warn!("Ignores metric physical table: {table_name}");
}
for (catalog_name, schema_name, table_name) in table_list {
// copy table to
let sql = format!(
r#"Copy "{}"."{}"."{}" TO '{}{}.parquet' WITH (format='parquet');"#,
catalog_name,
schema_name,
table_name,
output_dir.to_str().unwrap(),
table_name,
);
info!("Executing sql: {sql}");
self.sql(&sql).await?;
}
info!("Finished exporting {catalog}.{schema} data");
// export copy from sql
let dir_filenames = match output_dir.read_dir() {
Ok(dir) => dir,
Err(_) => {
warn!("empty database {catalog}.{schema}");
return Ok(());
let with_options = match (&self.start_time, &self.end_time) {
(Some(start_time), Some(end_time)) => {
format!(
"WITH (FORMAT='parquet', start_time='{}', end_time='{}')",
start_time, end_time
)
}
(Some(start_time), None) => {
format!("WITH (FORMAT='parquet', start_time='{}')", start_time)
}
(None, Some(end_time)) => {
format!("WITH (FORMAT='parquet', end_time='{}')", end_time)
}
(None, None) => "WITH (FORMAT='parquet')".to_string(),
};
let copy_from_file =
Path::new(&self.output_dir).join(format!("{catalog}-{schema}_copy_from.sql"));
let sql = format!(
r#"COPY DATABASE "{}"."{}" TO '{}' {};"#,
catalog,
schema,
output_dir.to_str().unwrap(),
with_options
);
info!("Executing sql: {sql}");
self.sql(&sql).await?;
info!(
"Finished exporting {catalog}.{schema} data into path: {}",
output_dir.to_string_lossy()
);
// Write the companion `COPY DATABASE ... FROM` SQL for re-importing the exported data
let copy_from_file = output_dir.join("copy_from.sql");
let mut writer =
BufWriter::new(File::create(copy_from_file).await.context(FileIoSnafu)?);
for table_file in dir_filenames {
let table_file = table_file.unwrap();
let table_name = table_file
.file_name()
.into_string()
.unwrap()
.replace(".parquet", "");
writer
.write(
format!(
"copy {} from '{}' with (format='parquet');\n",
table_name,
table_file.path().to_str().unwrap()
)
.as_bytes(),
)
.await
.context(FileIoSnafu)?;
}
let copy_database_from_sql = format!(
r#"COPY DATABASE "{}"."{}" FROM '{}' WITH (FORMAT='parquet');"#,
catalog,
schema,
output_dir.to_str().unwrap()
);
writer
.write(copy_database_from_sql.as_bytes())
.await
.context(FileIoSnafu)?;
writer.flush().await.context(FileIoSnafu)?;
info!("finished exporting {catalog}.{schema} copy_from.sql");
info!("Finished exporting {catalog}.{schema} copy_from.sql");
Ok::<(), Error>(())
});
})
}
let success = futures::future::join_all(tasks)
@@ -399,35 +422,41 @@ impl Export {
.filter(|r| match r {
Ok(_) => true,
Err(e) => {
error!(e; "export job failed");
error!(e; "export database job failed");
false
}
})
.count();
let elapsed = timer.elapsed();
info!("Success {success}/{db_count} jobs, costs: {:?}", elapsed);
Ok(())
}
}
#[allow(deprecated)]
#[async_trait]
impl Tool for Export {
async fn do_work(&self) -> Result<()> {
match self.target {
ExportTarget::CreateTable => self.export_create_table().await,
ExportTarget::TableData => self.export_table_data().await,
ExportTarget::Schema => self.export_create_table().await,
ExportTarget::Data => self.export_database_data().await,
ExportTarget::All => {
self.export_create_table().await?;
self.export_database_data().await
}
}
}
}
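To make the flow above concrete, a hedged sketch (catalog, schema, times, and paths are all hypothetical) of what `export_database_data` issues for `--target data` when a time range is supplied:

// With --output-dir /backup --start-time '2024-08-01 00:00:00' --end-time '2024-08-15 00:00:00',
// the tool runs roughly:
//
//   COPY DATABASE "greptime"."public" TO '/backup/greptime/public/'
//       WITH (FORMAT='parquet', start_time='2024-08-01 00:00:00', end_time='2024-08-15 00:00:00');
//
// and writes the matching re-import statement into /backup/greptime/public/copy_from.sql:
//
//   COPY DATABASE "greptime"."public" FROM '/backup/greptime/public/' WITH (FORMAT='parquet');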
/// Split at `-`.
fn split_database(database: &str) -> Result<(String, Option<String>)> {
let (catalog, schema) = database
.split_once('-')
.with_context(|| InvalidDatabaseNameSnafu {
database: database.to_string(),
})?;
let (catalog, schema) = match database.split_once('-') {
Some((catalog, schema)) => (catalog, schema),
None => (DEFAULT_CATALOG_NAME, database),
};
if schema == "*" {
Ok((catalog.to_string(), None))
} else {
@@ -442,10 +471,26 @@ mod tests {
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_telemetry::logging::LoggingOptions;
use crate::cli::export::split_database;
use crate::error::Result as CmdResult;
use crate::options::GlobalOptions;
use crate::{cli, standalone, App};
#[test]
fn test_split_database() {
let result = split_database("catalog-schema").unwrap();
assert_eq!(result, ("catalog".to_string(), Some("schema".to_string())));
let result = split_database("schema").unwrap();
assert_eq!(result, ("greptime".to_string(), Some("schema".to_string())));
let result = split_database("catalog-*").unwrap();
assert_eq!(result, ("catalog".to_string(), None));
let result = split_database("*").unwrap();
assert_eq!(result, ("greptime".to_string(), None));
}
#[tokio::test(flavor = "multi_thread")]
async fn test_export_create_table_with_quoted_names() -> CmdResult<()> {
let output_dir = tempfile::tempdir().unwrap();
@@ -487,7 +532,7 @@ mod tests {
"--output-dir",
&*output_dir.path().to_string_lossy(),
"--target",
"create-table",
"schema",
]);
let mut cli_app = cli.build(LoggingOptions::default()).await?;
cli_app.start().await?;
@@ -496,7 +541,9 @@ mod tests {
let output_file = output_dir
.path()
.join("greptime-cli.export.create_table.sql");
.join("greptime")
.join("cli.export.create_table")
.join("create_tables.sql");
let res = std::fs::read_to_string(output_file).unwrap();
let expect = r#"CREATE TABLE IF NOT EXISTS "a.b.c" (
"ts" TIMESTAMP(3) NOT NULL,

View File

@@ -34,7 +34,6 @@ use common_telemetry::debug;
use either::Either;
use meta_client::client::MetaClientBuilder;
use query::datafusion::DatafusionQueryEngine;
use query::logical_optimizer::LogicalOptimizer;
use query::parser::QueryLanguageParser;
use query::plan::LogicalPlan;
use query::query_engine::{DefaultSerializer, QueryEngineState};
@@ -289,6 +288,7 @@ async fn create_query_engine(meta_addr: &str) -> Result<DatafusionQueryEngine> {
None,
None,
None,
None,
false,
plugins.clone(),
));

View File

@@ -18,6 +18,7 @@ use std::time::Duration;
use async_trait::async_trait;
use catalog::kvbackend::MetaKvBackend;
use clap::Parser;
use common_base::Plugins;
use common_config::Configurable;
use common_telemetry::logging::TracingOptions;
use common_telemetry::{info, warn};
@@ -271,8 +272,9 @@ impl StartCommand {
info!("Datanode start command: {:#?}", self);
info!("Datanode options: {:#?}", opts);
let mut opts = opts.component;
let plugins = plugins::setup_datanode_plugins(&mut opts)
let opts = opts.component;
let mut plugins = Plugins::new();
plugins::setup_datanode_plugins(&mut plugins, &opts)
.await
.context(StartDatanodeSnafu)?;

View File

@@ -298,13 +298,6 @@ pub enum Error {
error: std::io::Error,
},
#[snafu(display("Invalid database name: {}", database))]
InvalidDatabaseName {
#[snafu(implicit)]
location: Location,
database: String,
},
#[snafu(display("Failed to create directory {}", dir))]
CreateDir {
dir: String,
@@ -312,6 +305,12 @@ pub enum Error {
error: std::io::Error,
},
#[snafu(display("Failed to spawn thread"))]
SpawnThread {
#[snafu(source)]
error: std::io::Error,
},
#[snafu(display("Other error"))]
Other {
source: BoxedError,
@@ -384,8 +383,7 @@ impl ErrorExt for Error {
| Error::ConnectEtcd { .. }
| Error::NotDataFromOutput { .. }
| Error::CreateDir { .. }
| Error::EmptyResult { .. }
| Error::InvalidDatabaseName { .. } => StatusCode::InvalidArguments,
| Error::EmptyResult { .. } => StatusCode::InvalidArguments,
Error::StartProcedureManager { source, .. }
| Error::StopProcedureManager { source, .. } => source.status_code(),
@@ -403,7 +401,9 @@ impl ErrorExt for Error {
Error::SubstraitEncodeLogicalPlan { source, .. } => source.status_code(),
Error::StartCatalogManager { source, .. } => source.status_code(),
Error::SerdeJson { .. } | Error::FileIo { .. } => StatusCode::Unexpected,
Error::SerdeJson { .. } | Error::FileIo { .. } | Error::SpawnThread { .. } => {
StatusCode::Unexpected
}
Error::Other { source, .. } => source.status_code(),

View File

@@ -24,6 +24,7 @@ use common_grpc::channel_manager::ChannelConfig;
use common_meta::cache::{CacheRegistryBuilder, LayeredCacheRegistryBuilder};
use common_meta::heartbeat::handler::parse_mailbox_message::ParseMailboxMessageHandler;
use common_meta::heartbeat::handler::HandlerGroupExecutor;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::TableMetadataManager;
use common_telemetry::info;
use common_telemetry::logging::TracingOptions;
@@ -296,11 +297,13 @@ impl StartCommand {
Arc::new(executor),
);
let flow_metadata_manager = Arc::new(FlowMetadataManager::new(cached_meta_backend.clone()));
let flownode_builder = FlownodeBuilder::new(
opts,
Plugins::new(),
table_metadata_manager,
catalog_manager.clone(),
flow_metadata_manager,
)
.with_heartbeat_task(heartbeat_task);

View File

@@ -20,6 +20,7 @@ use cache::{build_fundamental_cache_registry, with_default_composite_cache_regis
use catalog::kvbackend::{CachedMetaKvBackendBuilder, KvBackendCatalogManager, MetaKvBackend};
use clap::Parser;
use client::client_manager::NodeClients;
use common_base::Plugins;
use common_config::Configurable;
use common_grpc::channel_manager::ChannelConfig;
use common_meta::cache::{CacheRegistryBuilder, LayeredCacheRegistryBuilder};
@@ -242,10 +243,11 @@ impl StartCommand {
.get_or_insert_with(MetaClientOptions::default)
.metasrv_addrs
.clone_from(metasrv_addrs);
opts.mode = Mode::Distributed;
}
opts.user_provider.clone_from(&self.user_provider);
if let Some(user_provider) = &self.user_provider {
opts.user_provider = Some(user_provider.clone());
}
Ok(())
}
@@ -264,9 +266,9 @@ impl StartCommand {
info!("Frontend start command: {:#?}", self);
info!("Frontend options: {:#?}", opts);
let mut opts = opts.component;
#[allow(clippy::unnecessary_mut_passed)]
let plugins = plugins::setup_frontend_plugins(&mut opts)
let opts = opts.component;
let mut plugins = Plugins::new();
plugins::setup_frontend_plugins(&mut plugins, &opts)
.await
.context(StartFrontendSnafu)?;
@@ -314,7 +316,7 @@ impl StartCommand {
);
let catalog_manager = KvBackendCatalogManager::new(
opts.mode,
Mode::Distributed,
Some(meta_client.clone()),
cached_meta_backend.clone(),
layered_cache_registry.clone(),
@@ -445,7 +447,6 @@ mod tests {
let fe_opts = command.load_options(&Default::default()).unwrap().component;
assert_eq!(Mode::Distributed, fe_opts.mode);
assert_eq!("127.0.0.1:4000".to_string(), fe_opts.http.addr);
assert_eq!(Duration::from_secs(30), fe_opts.http.timeout);
@@ -458,7 +459,7 @@ mod tests {
#[tokio::test]
async fn test_try_from_start_command_to_anymap() {
let mut fe_opts = frontend::frontend::FrontendOptions {
let fe_opts = frontend::frontend::FrontendOptions {
http: HttpOptions {
disable_dashboard: false,
..Default::default()
@@ -467,8 +468,10 @@ mod tests {
..Default::default()
};
#[allow(clippy::unnecessary_mut_passed)]
let plugins = plugins::setup_frontend_plugins(&mut fe_opts).await.unwrap();
let mut plugins = Plugins::new();
plugins::setup_frontend_plugins(&mut plugins, &fe_opts)
.await
.unwrap();
let provider = plugins.get::<UserProviderRef>().unwrap();
let result = provider

View File

@@ -30,7 +30,7 @@ pub mod standalone;
lazy_static::lazy_static! {
static ref APP_VERSION: prometheus::IntGaugeVec =
prometheus::register_int_gauge_vec!("greptime_app_version", "app version", &["short_version", "version"]).unwrap();
prometheus::register_int_gauge_vec!("greptime_app_version", "app version", &["version", "short_version"]).unwrap();
}
#[async_trait]
@@ -74,16 +74,16 @@ pub trait App: Send {
}
/// Log the versions of the application, and the arguments passed to the cli.
/// `version_string` should be the same as the output of cli "--version";
/// and the `app_version` is the short version of the codes, often consist of git branch and commit.
pub fn log_versions(version_string: &str, app_version: &str) {
/// `version` should be the same as the output of the CLI "--version";
/// and `short_version` is the short version of the code, often consisting of the git branch and commit.
pub fn log_versions(version: &str, short_version: &str) {
// Report app version as gauge.
APP_VERSION
.with_label_values(&[env!("CARGO_PKG_VERSION"), app_version])
.with_label_values(&[env!("CARGO_PKG_VERSION"), short_version])
.inc();
// Log version and argument flags.
info!("GreptimeDB version: {}", version_string);
info!("GreptimeDB version: {}", version);
log_env_flags();
}

View File

@@ -16,11 +16,13 @@ use std::time::Duration;
use async_trait::async_trait;
use clap::Parser;
use common_base::Plugins;
use common_config::Configurable;
use common_telemetry::info;
use common_telemetry::logging::TracingOptions;
use common_version::{short_version, version};
use meta_srv::bootstrap::MetasrvInstance;
use meta_srv::metasrv::BackendImpl;
use snafu::ResultExt;
use tracing_appender::non_blocking::WorkerGuard;
@@ -136,6 +138,9 @@ struct StartCommand {
/// The max operations per txn
#[clap(long)]
max_txn_ops: Option<usize>,
/// The database backend.
#[clap(long, value_enum)]
backend: Option<BackendImpl>,
}
impl StartCommand {
@@ -218,6 +223,12 @@ impl StartCommand {
opts.max_txn_ops = max_txn_ops;
}
if let Some(backend) = &self.backend {
opts.backend.clone_from(backend);
} else {
opts.backend = BackendImpl::default()
}
// Disable dashboard in metasrv.
opts.http.disable_dashboard = true;
@@ -238,8 +249,9 @@ impl StartCommand {
info!("Metasrv start command: {:#?}", self);
info!("Metasrv options: {:#?}", opts);
let mut opts = opts.component;
let plugins = plugins::setup_metasrv_plugins(&mut opts)
let opts = opts.component;
let mut plugins = Plugins::new();
plugins::setup_metasrv_plugins(&mut plugins, &opts)
.await
.context(StartMetaServerSnafu)?;

View File

@@ -19,6 +19,7 @@ use async_trait::async_trait;
use cache::{build_fundamental_cache_registry, with_default_composite_cache_registry};
use catalog::kvbackend::KvBackendCatalogManager;
use clap::Parser;
use common_base::Plugins;
use common_catalog::consts::{MIN_USER_FLOW_ID, MIN_USER_TABLE_ID};
use common_config::{metadata_store_dir, Configurable, KvBackendConfig};
use common_error::ext::BoxedError;
@@ -181,7 +182,6 @@ impl StandaloneOptions {
pub fn frontend_options(&self) -> FrontendOptions {
let cloned_opts = self.clone();
FrontendOptions {
mode: cloned_opts.mode,
default_timezone: cloned_opts.default_timezone,
http: cloned_opts.http,
grpc: cloned_opts.grpc,
@@ -396,7 +396,9 @@ impl StartCommand {
opts.influxdb.enable = self.influxdb_enable;
}
opts.user_provider.clone_from(&self.user_provider);
if let Some(user_provider) = &self.user_provider {
opts.user_provider = Some(user_provider.clone());
}
Ok(())
}
@@ -418,14 +420,18 @@ impl StartCommand {
info!("Standalone start command: {:#?}", self);
info!("Standalone options: {opts:#?}");
let mut plugins = Plugins::new();
let opts = opts.component;
let mut fe_opts = opts.frontend_options();
#[allow(clippy::unnecessary_mut_passed)]
let fe_plugins = plugins::setup_frontend_plugins(&mut fe_opts) // mut ref is MUST, DO NOT change it
let fe_opts = opts.frontend_options();
let dn_opts = opts.datanode_options();
plugins::setup_frontend_plugins(&mut plugins, &fe_opts)
.await
.context(StartFrontendSnafu)?;
let dn_opts = opts.datanode_options();
plugins::setup_datanode_plugins(&mut plugins, &dn_opts)
.await
.context(StartDatanodeSnafu)?;
set_default_timezone(fe_opts.default_timezone.as_deref()).context(InitTimezoneSnafu)?;
@@ -464,17 +470,19 @@ impl StartCommand {
let table_metadata_manager =
Self::create_table_metadata_manager(kv_backend.clone()).await?;
let datanode = DatanodeBuilder::new(dn_opts, fe_plugins.clone())
let datanode = DatanodeBuilder::new(dn_opts, plugins.clone())
.with_kv_backend(kv_backend.clone())
.build()
.await
.context(StartDatanodeSnafu)?;
let flow_metadata_manager = Arc::new(FlowMetadataManager::new(kv_backend.clone()));
let flow_builder = FlownodeBuilder::new(
Default::default(),
fe_plugins.clone(),
plugins.clone(),
table_metadata_manager.clone(),
catalog_manager.clone(),
flow_metadata_manager.clone(),
);
let flownode = Arc::new(
flow_builder
@@ -505,7 +513,6 @@ impl StartCommand {
opts.wal.into(),
kv_backend.clone(),
));
let flow_metadata_manager = Arc::new(FlowMetadataManager::new(kv_backend.clone()));
let table_meta_allocator = Arc::new(TableMetadataAllocator::new(
table_id_sequence,
wal_options_allocator.clone(),
@@ -533,7 +540,7 @@ impl StartCommand {
node_manager.clone(),
ddl_task_executor.clone(),
)
.with_plugin(fe_plugins.clone())
.with_plugin(plugins.clone())
.try_build()
.await
.context(StartFrontendSnafu)?;
@@ -554,7 +561,7 @@ impl StartCommand {
let (tx, _rx) = broadcast::channel(1);
let servers = Services::new(fe_opts, Arc::new(frontend.clone()), fe_plugins)
let servers = Services::new(fe_opts, Arc::new(frontend.clone()), plugins)
.build()
.await
.context(StartFrontendSnafu)?;
@@ -629,20 +636,21 @@ mod tests {
use common_test_util::temp_dir::create_named_temp_file;
use common_wal::config::DatanodeWalConfig;
use datanode::config::{FileConfig, GcsConfig};
use servers::Mode;
use super::*;
use crate::options::GlobalOptions;
#[tokio::test]
async fn test_try_from_start_command_to_anymap() {
let mut fe_opts = FrontendOptions {
let fe_opts = FrontendOptions {
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
..Default::default()
};
#[allow(clippy::unnecessary_mut_passed)]
let plugins = plugins::setup_frontend_plugins(&mut fe_opts).await.unwrap();
let mut plugins = Plugins::new();
plugins::setup_frontend_plugins(&mut plugins, &fe_opts)
.await
.unwrap();
let provider = plugins.get::<UserProviderRef>().unwrap();
let result = provider
@@ -727,7 +735,6 @@ mod tests {
let fe_opts = options.frontend_options();
let dn_opts = options.datanode_options();
let logging_opts = options.logging;
assert_eq!(Mode::Standalone, fe_opts.mode);
assert_eq!("127.0.0.1:4000".to_string(), fe_opts.http.addr);
assert_eq!(Duration::from_secs(33), fe_opts.http.timeout);
assert_eq!(ReadableSize::mb(128), fe_opts.http.body_limit);

View File

@@ -22,7 +22,7 @@ use common_grpc::channel_manager::{
DEFAULT_MAX_GRPC_RECV_MESSAGE_SIZE, DEFAULT_MAX_GRPC_SEND_MESSAGE_SIZE,
};
use common_runtime::global::RuntimeOptions;
use common_telemetry::logging::LoggingOptions;
use common_telemetry::logging::{LoggingOptions, DEFAULT_OTLP_ENDPOINT};
use common_wal::config::raft_engine::RaftEngineConfig;
use common_wal::config::DatanodeWalConfig;
use datanode::config::{DatanodeOptions, RegionEngineConfig, StorageConfig};
@@ -46,9 +46,8 @@ fn test_load_datanode_example_config() {
let expected = GreptimeOptions::<DatanodeOptions> {
runtime: RuntimeOptions {
read_rt_size: 8,
write_rt_size: 8,
bg_rt_size: 4,
global_rt_size: 8,
compact_rt_size: 4,
},
component: DatanodeOptions {
node_id: Some(42),
@@ -89,7 +88,7 @@ fn test_load_datanode_example_config() {
],
logging: LoggingOptions {
level: Some("info".to_string()),
otlp_endpoint: Some("".to_string()),
otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
tracing_sample_ratio: Some(Default::default()),
..Default::default()
},
@@ -119,9 +118,8 @@ fn test_load_frontend_example_config() {
.unwrap();
let expected = GreptimeOptions::<FrontendOptions> {
runtime: RuntimeOptions {
read_rt_size: 8,
write_rt_size: 8,
bg_rt_size: 4,
global_rt_size: 8,
compact_rt_size: 4,
},
component: FrontendOptions {
default_timezone: Some("UTC".to_string()),
@@ -138,7 +136,7 @@ fn test_load_frontend_example_config() {
}),
logging: LoggingOptions {
level: Some("info".to_string()),
otlp_endpoint: Some("".to_string()),
otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
tracing_sample_ratio: Some(Default::default()),
..Default::default()
},
@@ -167,17 +165,16 @@ fn test_load_metasrv_example_config() {
.unwrap();
let expected = GreptimeOptions::<MetasrvOptions> {
runtime: RuntimeOptions {
read_rt_size: 8,
write_rt_size: 8,
bg_rt_size: 4,
global_rt_size: 8,
compact_rt_size: 4,
},
component: MetasrvOptions {
selector: SelectorType::LeaseBased,
selector: SelectorType::default(),
data_home: "/tmp/metasrv/".to_string(),
logging: LoggingOptions {
dir: "/tmp/greptimedb/logs".to_string(),
level: Some("info".to_string()),
otlp_endpoint: Some("".to_string()),
otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
tracing_sample_ratio: Some(Default::default()),
..Default::default()
},
@@ -200,9 +197,8 @@ fn test_load_standalone_example_config() {
.unwrap();
let expected = GreptimeOptions::<StandaloneOptions> {
runtime: RuntimeOptions {
read_rt_size: 8,
write_rt_size: 8,
bg_rt_size: 4,
global_rt_size: 8,
compact_rt_size: 4,
},
component: StandaloneOptions {
default_timezone: Some("UTC".to_string()),
@@ -232,7 +228,7 @@ fn test_load_standalone_example_config() {
},
logging: LoggingOptions {
level: Some("info".to_string()),
otlp_endpoint: Some("".to_string()),
otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
tracing_sample_ratio: Some(Default::default()),
..Default::default()
},

View File

@@ -96,11 +96,14 @@ pub const INFORMATION_SCHEMA_TABLE_CONSTRAINTS_TABLE_ID: u32 = 30;
pub const INFORMATION_SCHEMA_CLUSTER_INFO_TABLE_ID: u32 = 31;
/// id for information_schema.VIEWS
pub const INFORMATION_SCHEMA_VIEW_TABLE_ID: u32 = 32;
/// id for information_schema.FLOWS
pub const INFORMATION_SCHEMA_FLOW_TABLE_ID: u32 = 33;
/// ----- End of information_schema tables -----
/// ----- Begin of pg_catalog tables -----
pub const PG_CATALOG_PG_CLASS_TABLE_ID: u32 = 256;
pub const PG_CATALOG_PG_TYPE_TABLE_ID: u32 = 257;
pub const PG_CATALOG_PG_NAMESPACE_TABLE_ID: u32 = 258;
/// ----- End of pg_catalog tables -----
pub const MITO_ENGINE: &str = "mito";

View File

@@ -185,7 +185,7 @@ impl FileFormat for CsvFormat {
let schema_infer_max_record = self.schema_infer_max_record;
let has_header = self.has_header;
common_runtime::spawn_blocking_read(move || {
common_runtime::spawn_blocking_global(move || {
let reader = SyncIoBridge::new(decoded);
let (schema, _records_read) =

View File

@@ -101,7 +101,7 @@ impl FileFormat for JsonFormat {
let schema_infer_max_record = self.schema_infer_max_record;
common_runtime::spawn_blocking_read(move || {
common_runtime::spawn_blocking_global(move || {
let mut reader = BufReader::new(SyncIoBridge::new(decoded));
let iter = ValueIter::new(&mut reader, schema_infer_max_record);

View File

@@ -19,9 +19,8 @@ use snafu::ResultExt;
use crate::error::{BuildBackendSnafu, Result};
pub fn build_fs_backend(root: &str) -> Result<ObjectStore> {
let mut builder = Fs::default();
let _ = builder.root(root);
let object_store = ObjectStore::new(builder)
let builder = Fs::default();
let object_store = ObjectStore::new(builder.root(root))
.context(BuildBackendSnafu)?
.layer(
object_store::layers::LoggingLayer::default()

View File

@@ -44,28 +44,26 @@ pub fn build_s3_backend(
path: &str,
connection: &HashMap<String, String>,
) -> Result<ObjectStore> {
let mut builder = S3::default();
let _ = builder.root(path).bucket(host);
let mut builder = S3::default().root(path).bucket(host);
if let Some(endpoint) = connection.get(ENDPOINT) {
let _ = builder.endpoint(endpoint);
builder = builder.endpoint(endpoint);
}
if let Some(region) = connection.get(REGION) {
let _ = builder.region(region);
builder = builder.region(region);
}
if let Some(key_id) = connection.get(ACCESS_KEY_ID) {
let _ = builder.access_key_id(key_id);
builder = builder.access_key_id(key_id);
}
if let Some(key) = connection.get(SECRET_ACCESS_KEY) {
let _ = builder.secret_access_key(key);
builder = builder.secret_access_key(key);
}
if let Some(session_token) = connection.get(SESSION_TOKEN) {
let _ = builder.security_token(session_token);
builder = builder.session_token(session_token);
}
if let Some(enable_str) = connection.get(ENABLE_VIRTUAL_HOST_STYLE) {
@@ -79,7 +77,7 @@ pub fn build_s3_backend(
.build()
})?;
if enable {
let _ = builder.enable_virtual_host_style();
builder = builder.enable_virtual_host_style();
}
}

View File

@@ -47,19 +47,15 @@ pub fn format_schema(schema: Schema) -> Vec<String> {
}
pub fn test_store(root: &str) -> ObjectStore {
let mut builder = Fs::default();
let _ = builder.root(root);
ObjectStore::new(builder).unwrap().finish()
let builder = Fs::default();
ObjectStore::new(builder.root(root)).unwrap().finish()
}
pub fn test_tmp_store(root: &str) -> (ObjectStore, TempDir) {
let dir = create_temp_dir(root);
let mut builder = Fs::default();
let _ = builder.root("/");
(ObjectStore::new(builder).unwrap().finish(), dir)
let builder = Fs::default();
(ObjectStore::new(builder.root("/")).unwrap().finish(), dir)
}
pub fn test_basic_schema() -> SchemaRef {

View File

@@ -53,6 +53,20 @@ pub trait ErrorExt: StackError {
}
}
}
/// Find out root level error for nested error
fn root_cause(&self) -> Option<&dyn std::error::Error>
where
Self: Sized,
{
let error = self.last();
if let Some(external_error) = error.source() {
let external_root = external_error.sources().last().unwrap();
Some(external_root)
} else {
None
}
}
}
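A small hedged sketch of the intended call pattern (the error value and log message are hypothetical): `root_cause` walks the external source chain and returns the deepest error, which is usually the most actionable message to surface.

// `err` is some concrete error implementing `ErrorExt`.
if let Some(root) = err.root_cause() {
    common_telemetry::error!("request failed, root cause: {}", root);
} else {
    common_telemetry::error!("request failed: {}", err);
}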
pub trait StackError: std::error::Error {

View File

@@ -36,7 +36,7 @@ pub enum StatusCode {
InvalidArguments = 1004,
/// The task is cancelled.
Cancelled = 1005,
/// Illegal state, can be exposed to users
/// Illegal state, can be exposed to users.
IllegalState = 1006,
// ====== End of common status code ================
@@ -55,33 +55,34 @@ pub enum StatusCode {
// ====== Begin of catalog related status code =====
/// Table already exists.
TableAlreadyExists = 4000,
/// Table not found
/// Table not found.
TableNotFound = 4001,
/// Table column not found
/// Table column not found.
TableColumnNotFound = 4002,
/// Table column already exists
/// Table column already exists.
TableColumnExists = 4003,
/// Database not found
/// Database not found.
DatabaseNotFound = 4004,
/// Region not found
/// Region not found.
RegionNotFound = 4005,
/// Region already exists
/// Region already exists.
RegionAlreadyExists = 4006,
/// Region is read-only in current state.
RegionReadonly = 4007,
/// Region is not in a proper state to handle specific request.
RegionNotReady = 4008,
/// Region is temporarily in busy state
/// Region is temporarily in busy state.
RegionBusy = 4009,
/// Table is temporarily unable to handle the request
/// Table is temporarily unable to handle the request.
TableUnavailable = 4010,
/// Database not found
/// Database already exists.
DatabaseAlreadyExists = 4011,
// ====== End of catalog related status code =======
// ====== Begin of storage related status code =====
/// Storage is temporarily unable to handle the request
/// Storage is temporarily unable to handle the request.
StorageUnavailable = 5000,
/// Request is outdated, e.g., version mismatch
/// Request is outdated, e.g., version mismatch.
RequestOutdated = 5001,
// ====== End of storage related status code =======
@@ -89,24 +90,24 @@ pub enum StatusCode {
/// Runtime resources exhausted, like creating threads failed.
RuntimeResourcesExhausted = 6000,
/// Rate limit exceeded
/// Rate limit exceeded.
RateLimited = 6001,
// ====== End of server related status code =======
// ====== Begin of auth related status code =====
/// User not exist
/// User not exist.
UserNotFound = 7000,
/// Unsupported password type
/// Unsupported password type.
UnsupportedPasswordType = 7001,
/// Username and password does not match
/// Username and password does not match.
UserPasswordMismatch = 7002,
/// Not found http authorization header
/// Not found http authorization header.
AuthHeaderNotFound = 7003,
/// Invalid http authorization header
/// Invalid http authorization header.
InvalidAuthHeader = 7004,
/// Illegal request to connect catalog-schema
/// Illegal request to connect catalog-schema.
AccessDenied = 7005,
/// User is not authorized to perform the operation
/// User is not authorized to perform the operation.
PermissionDenied = 7006,
// ====== End of auth related status code =====

View File

@@ -31,6 +31,7 @@ serde.workspace = true
serde_json.workspace = true
session.workspace = true
snafu.workspace = true
sql.workspace = true
statrs = "0.16"
store-api.workspace = true
table.workspace = true

View File

@@ -0,0 +1,164 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_error::ext::BoxedError;
use common_macro::admin_fn;
use common_query::error::{
ExecuteSnafu, InvalidFuncArgsSnafu, MissingFlowServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::Signature;
use datafusion::logical_expr::Volatility;
use datatypes::value::{Value, ValueRef};
use session::context::QueryContextRef;
use snafu::{ensure, ResultExt};
use sql::parser::ParserContext;
use store_api::storage::ConcreteDataType;
use crate::handlers::FlowServiceHandlerRef;
fn flush_signature() -> Signature {
Signature::uniform(
1,
vec![ConcreteDataType::string_datatype()],
Volatility::Immutable,
)
}
#[admin_fn(
name = FlushFlowFunction,
display_name = flush_flow,
sig_fn = flush_signature,
ret = uint64
)]
pub(crate) async fn flush_flow(
flow_service_handler: &FlowServiceHandlerRef,
query_ctx: &QueryContextRef,
params: &[ValueRef<'_>],
) -> Result<Value> {
let (catalog_name, flow_name) = parse_flush_flow(params, query_ctx)?;
let res = flow_service_handler
.flush(&catalog_name, &flow_name, query_ctx.clone())
.await?;
let affected_rows = res.affected_rows;
Ok(Value::from(affected_rows))
}
fn parse_flush_flow(
params: &[ValueRef<'_>],
query_ctx: &QueryContextRef,
) -> Result<(String, String)> {
ensure!(
params.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 1, have: {}",
params.len()
),
}
);
let ValueRef::String(flow_name) = params[0] else {
return UnsupportedInputDataTypeSnafu {
function: "flush_flow",
datatypes: params.iter().map(|v| v.data_type()).collect::<Vec<_>>(),
}
.fail();
};
let obj_name = ParserContext::parse_table_name(flow_name, query_ctx.sql_dialect())
.map_err(BoxedError::new)
.context(ExecuteSnafu)?;
let (catalog_name, flow_name) = match &obj_name.0[..] {
[flow_name] => (
query_ctx.current_catalog().to_string(),
flow_name.value.clone(),
),
[catalog, flow_name] => (catalog.value.clone(), flow_name.value.clone()),
_ => {
return InvalidFuncArgsSnafu {
err_msg: format!(
"expect flow name to be <catalog>.<flow-name> or <flow-name>, actual: {}",
obj_name
),
}
.fail()
}
};
Ok((catalog_name, flow_name))
}
#[cfg(test)]
mod test {
use std::sync::Arc;
use datatypes::scalars::ScalarVector;
use datatypes::vectors::StringVector;
use session::context::QueryContext;
use super::*;
use crate::function::{Function, FunctionContext};
#[test]
fn test_flush_flow_metadata() {
let f = FlushFlowFunction;
assert_eq!("flush_flow", f.name());
assert_eq!(
ConcreteDataType::uint64_datatype(),
f.return_type(&[]).unwrap()
);
assert_eq!(
f.signature(),
Signature::uniform(
1,
vec![ConcreteDataType::string_datatype()],
Volatility::Immutable,
)
);
}
#[test]
fn test_missing_flow_service() {
let f = FlushFlowFunction;
let args = vec!["flow_name"];
let args = args
.into_iter()
.map(|arg| Arc::new(StringVector::from_slice(&[arg])) as _)
.collect::<Vec<_>>();
let result = f.eval(FunctionContext::default(), &args).unwrap_err();
assert_eq!(
"Missing FlowServiceHandler, not expected",
result.to_string()
);
}
#[test]
fn test_parse_flow_args() {
let testcases = [
("flow_name", ("greptime", "flow_name")),
("catalog.flow_name", ("catalog", "flow_name")),
];
for (input, expected) in testcases.iter() {
let args = vec![*input];
let args = args.into_iter().map(ValueRef::String).collect::<Vec<_>>();
let result = parse_flush_flow(&args, &QueryContext::arc()).unwrap();
assert_eq!(*expected, (result.0.as_str(), result.1.as_str()));
}
}
}
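Since `#[admin_fn]` exposes this as the SQL function named by `display_name` above, a typical invocation (flow names hypothetical) would look like `SELECT flush_flow('my_flow');` or, fully qualified, `SELECT flush_flow('my_catalog.my_flow');` the single string argument is parsed by `parse_flush_flow` exactly as the tests demonstrate.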

View File

@@ -65,6 +65,19 @@ pub trait ProcedureServiceHandler: Send + Sync {
async fn query_procedure_state(&self, pid: &str) -> Result<ProcedureStateResponse>;
}
/// This flow service handler is only used for flushing flows for now.
#[async_trait]
pub trait FlowServiceHandler: Send + Sync {
async fn flush(
&self,
catalog: &str,
flow: &str,
ctx: QueryContextRef,
) -> Result<api::v1::flow::FlowResponse>;
}
pub type TableMutationHandlerRef = Arc<dyn TableMutationHandler>;
pub type ProcedureServiceHandlerRef = Arc<dyn ProcedureServiceHandler>;
pub type FlowServiceHandlerRef = Arc<dyn FlowServiceHandler>;
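For reference, a hedged no-op implementation of the new trait (useful as a test stub; it assumes the imports already present in this module and that `api::v1::flow::FlowResponse`, as a prost-generated message, implements `Default`):

struct NoopFlowServiceHandler;

#[async_trait]
impl FlowServiceHandler for NoopFlowServiceHandler {
    async fn flush(
        &self,
        _catalog: &str,
        _flow: &str,
        _ctx: QueryContextRef,
    ) -> Result<api::v1::flow::FlowResponse> {
        // Report an empty response; a real handler forwards the flush to the flownode.
        Ok(api::v1::flow::FlowResponse::default())
    }
}

// Plugged in through the alias above:
// let handler: FlowServiceHandlerRef = Arc::new(NoopFlowServiceHandler);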

View File

@@ -15,6 +15,7 @@
#![feature(let_chains)]
#![feature(try_blocks)]
mod flush_flow;
mod macros;
pub mod scalars;
mod system;

View File

@@ -19,8 +19,6 @@ use snafu::ResultExt;
use crate::scalars::expression::ctx::EvalContext;
/// TODO: remove the allow_unused when it's used.
#[allow(unused)]
pub fn scalar_unary_op<L: Scalar, O: Scalar, F>(
l: &VectorRef,
f: F,

View File

@@ -14,11 +14,9 @@
use std::sync::Arc;
mod greatest;
mod to_timezone;
mod to_unixtime;
use greatest::GreatestFunction;
use to_timezone::ToTimezoneFunction;
use to_unixtime::ToUnixtimeFunction;
use crate::function_registry::FunctionRegistry;
@@ -27,7 +25,6 @@ pub(crate) struct TimestampFunction;
impl TimestampFunction {
pub fn register(registry: &FunctionRegistry) {
registry.register(Arc::new(ToTimezoneFunction));
registry.register(Arc::new(ToUnixtimeFunction));
registry.register(Arc::new(GreatestFunction));
}

View File

@@ -1,313 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::sync::Arc;
use common_query::error::{InvalidFuncArgsSnafu, Result, UnsupportedInputDataTypeSnafu};
use common_query::prelude::Signature;
use common_time::{Timestamp, Timezone};
use datatypes::data_type::ConcreteDataType;
use datatypes::prelude::VectorRef;
use datatypes::types::TimestampType;
use datatypes::value::Value;
use datatypes::vectors::{
Int64Vector, StringVector, TimestampMicrosecondVector, TimestampMillisecondVector,
TimestampNanosecondVector, TimestampSecondVector, Vector,
};
use snafu::{ensure, OptionExt};
use crate::function::{Function, FunctionContext};
use crate::helper;
#[derive(Clone, Debug, Default)]
pub struct ToTimezoneFunction;
const NAME: &str = "to_timezone";
fn convert_to_timezone(arg: &str) -> Option<Timezone> {
Timezone::from_tz_string(arg).ok()
}
fn convert_to_timestamp(arg: &Value) -> Option<Timestamp> {
match arg {
Value::Timestamp(ts) => Some(*ts),
Value::Int64(i) => Some(Timestamp::new_millisecond(*i)),
_ => None,
}
}
impl fmt::Display for ToTimezoneFunction {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "TO_TIMEZONE")
}
}
impl Function for ToTimezoneFunction {
fn name(&self) -> &str {
NAME
}
fn return_type(&self, input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
// type checked by signature - MUST BE timestamp
Ok(input_types[0].clone())
}
fn signature(&self) -> Signature {
helper::one_of_sigs2(
vec![
ConcreteDataType::int32_datatype(),
ConcreteDataType::int64_datatype(),
ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::timestamp_millisecond_datatype(),
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_nanosecond_datatype(),
],
vec![ConcreteDataType::string_datatype()],
)
}
fn eval(&self, _ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly 2, have: {}",
columns.len()
),
}
);
let array = columns[0].to_arrow_array();
let times = match columns[0].data_type() {
ConcreteDataType::Int64(_) | ConcreteDataType::Int32(_) => {
let vector = Int64Vector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
ConcreteDataType::Timestamp(ts) => match ts {
TimestampType::Second(_) => {
let vector = TimestampSecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Millisecond(_) => {
let vector = TimestampMillisecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Microsecond(_) => {
let vector = TimestampMicrosecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
TimestampType::Nanosecond(_) => {
let vector = TimestampNanosecondVector::try_from_arrow_array(array).unwrap();
(0..vector.len())
.map(|i| convert_to_timestamp(&vector.get(i)))
.collect::<Vec<_>>()
}
},
_ => UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
}
.fail()?,
};
let tzs = {
let array = columns[1].to_arrow_array();
let vector = StringVector::try_from_arrow_array(&array)
.ok()
.with_context(|| UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
})?;
(0..vector.len())
.map(|i| convert_to_timezone(&vector.get(i).to_string()))
.collect::<Vec<_>>()
};
let result = times
.iter()
.zip(tzs.iter())
.map(|(time, tz)| match (time, tz) {
(Some(time), _) => Some(time.to_timezone_aware_string(tz.as_ref())),
_ => None,
})
.collect::<Vec<Option<String>>>();
Ok(Arc::new(StringVector::from(result)))
}
}
#[cfg(test)]
mod tests {
use datatypes::scalars::ScalarVector;
use datatypes::timestamp::{
TimestampMicrosecond, TimestampMillisecond, TimestampNanosecond, TimestampSecond,
};
use datatypes::vectors::{Int64Vector, StringVector};
use super::*;
#[test]
fn test_timestamp_to_timezone() {
let f = ToTimezoneFunction;
assert_eq!("to_timezone", f.name());
let results = vec![
Some("1969-12-31 19:00:01"),
None,
Some("1970-01-01 03:00:01"),
None,
];
let times: Vec<Option<TimestampSecond>> = vec![
Some(TimestampSecond::new(1)),
None,
Some(TimestampSecond::new(1)),
None,
];
let ts_vector: TimestampSecondVector =
TimestampSecondVector::from_owned_iterator(times.into_iter());
let tzs = vec![Some("America/New_York"), None, Some("Europe/Moscow"), None];
let args: Vec<VectorRef> = vec![
Arc::new(ts_vector),
Arc::new(StringVector::from(tzs.clone())),
];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
let expect_times: VectorRef = Arc::new(StringVector::from(results));
assert_eq!(expect_times, vector);
let results = vec![
Some("1969-12-31 19:00:00.001"),
None,
Some("1970-01-01 03:00:00.001"),
None,
];
let times: Vec<Option<TimestampMillisecond>> = vec![
Some(TimestampMillisecond::new(1)),
None,
Some(TimestampMillisecond::new(1)),
None,
];
let ts_vector: TimestampMillisecondVector =
TimestampMillisecondVector::from_owned_iterator(times.into_iter());
let args: Vec<VectorRef> = vec![
Arc::new(ts_vector),
Arc::new(StringVector::from(tzs.clone())),
];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
let expect_times: VectorRef = Arc::new(StringVector::from(results));
assert_eq!(expect_times, vector);
let results = vec![
Some("1969-12-31 19:00:00.000001"),
None,
Some("1970-01-01 03:00:00.000001"),
None,
];
let times: Vec<Option<TimestampMicrosecond>> = vec![
Some(TimestampMicrosecond::new(1)),
None,
Some(TimestampMicrosecond::new(1)),
None,
];
let ts_vector: TimestampMicrosecondVector =
TimestampMicrosecondVector::from_owned_iterator(times.into_iter());
let args: Vec<VectorRef> = vec![
Arc::new(ts_vector),
Arc::new(StringVector::from(tzs.clone())),
];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
let expect_times: VectorRef = Arc::new(StringVector::from(results));
assert_eq!(expect_times, vector);
let results = vec![
Some("1969-12-31 19:00:00.000000001"),
None,
Some("1970-01-01 03:00:00.000000001"),
None,
];
let times: Vec<Option<TimestampNanosecond>> = vec![
Some(TimestampNanosecond::new(1)),
None,
Some(TimestampNanosecond::new(1)),
None,
];
let ts_vector: TimestampNanosecondVector =
TimestampNanosecondVector::from_owned_iterator(times.into_iter());
let args: Vec<VectorRef> = vec![
Arc::new(ts_vector),
Arc::new(StringVector::from(tzs.clone())),
];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
let expect_times: VectorRef = Arc::new(StringVector::from(results));
assert_eq!(expect_times, vector);
}
#[test]
fn test_numerical_to_timezone() {
let f = ToTimezoneFunction;
let results = vec![
Some("1969-12-31 19:00:00.001"),
None,
Some("1970-01-01 03:00:00.001"),
None,
Some("2024-03-26 23:01:50"),
None,
Some("2024-03-27 06:02:00"),
None,
];
let times: Vec<Option<i64>> = vec![
Some(1),
None,
Some(1),
None,
Some(1711508510000),
None,
Some(1711508520000),
None,
];
let ts_vector: Int64Vector = Int64Vector::from_owned_iterator(times.into_iter());
let tzs = vec![
Some("America/New_York"),
None,
Some("Europe/Moscow"),
None,
Some("America/New_York"),
None,
Some("Europe/Moscow"),
None,
];
let args: Vec<VectorRef> = vec![
Arc::new(ts_vector),
Arc::new(StringVector::from(tzs.clone())),
];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(8, vector.len());
let expect_times: VectorRef = Arc::new(StringVector::from(results));
assert_eq!(expect_times, vector);
}
}

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use crate::handlers::{ProcedureServiceHandlerRef, TableMutationHandlerRef};
use crate::handlers::{FlowServiceHandlerRef, ProcedureServiceHandlerRef, TableMutationHandlerRef};
/// Shared state for SQL functions.
/// The handlers in the state may be `None` when running the CLI command or in test cases.
@@ -22,6 +22,8 @@ pub struct FunctionState {
pub table_mutation_handler: Option<TableMutationHandlerRef>,
// The procedure service handler
pub procedure_service_handler: Option<ProcedureServiceHandlerRef>,
// The flownode handler
pub flow_service_handler: Option<FlowServiceHandlerRef>,
}
impl FunctionState {
@@ -42,9 +44,10 @@ impl FunctionState {
CompactTableRequest, DeleteRequest, FlushTableRequest, InsertRequest,
};
use crate::handlers::{ProcedureServiceHandler, TableMutationHandler};
use crate::handlers::{FlowServiceHandler, ProcedureServiceHandler, TableMutationHandler};
struct MockProcedureServiceHandler;
struct MockTableMutationHandler;
struct MockFlowServiceHandler;
const ROWS: usize = 42;
#[async_trait]
@@ -116,9 +119,22 @@ impl FunctionState {
}
}
#[async_trait]
impl FlowServiceHandler for MockFlowServiceHandler {
async fn flush(
&self,
_catalog: &str,
_flow: &str,
_ctx: QueryContextRef,
) -> Result<api::v1::flow::FlowResponse> {
todo!()
}
}
Self {
table_mutation_handler: Some(Arc::new(MockTableMutationHandler)),
procedure_service_handler: Some(Arc::new(MockProcedureServiceHandler)),
flow_service_handler: Some(Arc::new(MockFlowServiceHandler)),
}
}
}

View File

@@ -14,6 +14,7 @@
mod build;
mod database;
mod pg_catalog;
mod procedure_state;
mod timezone;
mod version;
@@ -22,6 +23,7 @@ use std::sync::Arc;
use build::BuildFunction;
use database::DatabaseFunction;
use pg_catalog::PGCatalogFunction;
use procedure_state::ProcedureStateFunction;
use timezone::TimezoneFunction;
use version::VersionFunction;
@@ -37,5 +39,6 @@ impl SystemFunction {
registry.register(Arc::new(DatabaseFunction));
registry.register(Arc::new(TimezoneFunction));
registry.register(Arc::new(ProcedureStateFunction));
PGCatalogFunction::register(registry);
}
}

View File

@@ -12,22 +12,28 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod pg_get_userbyid;
mod table_is_visible;
use std::sync::Arc;
use datafusion::physical_plan::ExecutionPlan;
use pg_get_userbyid::PGGetUserByIdFunction;
use table_is_visible::PGTableIsVisibleFunction;
use crate::error::Result;
use crate::plan::LogicalPlan;
use crate::query_engine::QueryEngineContext;
use crate::function_registry::FunctionRegistry;
/// Physical query planner that converts a `LogicalPlan` to an
/// `ExecutionPlan` suitable for execution.
#[async_trait::async_trait]
pub trait PhysicalPlanner {
/// Create a physical plan from a logical plan
async fn create_physical_plan(
&self,
ctx: &mut QueryEngineContext,
logical_plan: &LogicalPlan,
) -> Result<Arc<dyn ExecutionPlan>>;
#[macro_export]
macro_rules! pg_catalog_func_fullname {
($name:literal) => {
concat!("pg_catalog.", $name)
};
}
pub(super) struct PGCatalogFunction;
impl PGCatalogFunction {
pub fn register(registry: &FunctionRegistry) {
registry.register(Arc::new(PGTableIsVisibleFunction));
registry.register(Arc::new(PGGetUserByIdFunction));
}
}

View File

@@ -0,0 +1,72 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::{self};
use std::sync::Arc;
use common_query::error::Result;
use common_query::prelude::{Signature, Volatility};
use datatypes::prelude::{ConcreteDataType, DataType, VectorRef};
use datatypes::types::LogicalPrimitiveType;
use datatypes::with_match_primitive_type_id;
use num_traits::AsPrimitive;
use crate::function::{Function, FunctionContext};
use crate::scalars::expression::{scalar_unary_op, EvalContext};
#[derive(Clone, Debug, Default)]
pub struct PGGetUserByIdFunction;
const NAME: &str = crate::pg_catalog_func_fullname!("pg_get_userbyid");
impl fmt::Display for PGGetUserByIdFunction {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, crate::pg_catalog_func_fullname!("PG_GET_USERBYID"))
}
}
impl Function for PGGetUserByIdFunction {
fn name(&self) -> &str {
NAME
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::string_datatype())
}
fn signature(&self) -> Signature {
Signature::uniform(
1,
vec![ConcreteDataType::uint32_datatype()],
Volatility::Immutable,
)
}
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
with_match_primitive_type_id!(columns[0].data_type().logical_type_id(), |$T| {
let col = scalar_unary_op::<<$T as LogicalPrimitiveType>::Native, String, _>(&columns[0], pg_get_user_by_id, &mut EvalContext::default())?;
Ok(Arc::new(col))
}, {
unreachable!()
})
}
}
fn pg_get_user_by_id<I>(table_oid: Option<I>, _ctx: &mut EvalContext) -> Option<String>
where
I: AsPrimitive<u32>,
{
// TODO(J0HN50N133): We lack a way to get the user_info by a numeric value. Once we have it, we can implement this function.
table_oid.map(|_| "".to_string())
}

View File

@@ -0,0 +1,72 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::{self};
use std::sync::Arc;
use common_query::error::Result;
use common_query::prelude::{Signature, Volatility};
use datatypes::prelude::{ConcreteDataType, DataType, VectorRef};
use datatypes::types::LogicalPrimitiveType;
use datatypes::with_match_primitive_type_id;
use num_traits::AsPrimitive;
use crate::function::{Function, FunctionContext};
use crate::scalars::expression::{scalar_unary_op, EvalContext};
#[derive(Clone, Debug, Default)]
pub struct PGTableIsVisibleFunction;
const NAME: &str = crate::pg_catalog_func_fullname!("pg_table_is_visible");
impl fmt::Display for PGTableIsVisibleFunction {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, crate::pg_catalog_func_fullname!("PG_TABLE_IS_VISIBLE"))
}
}
impl Function for PGTableIsVisibleFunction {
fn name(&self) -> &str {
NAME
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::boolean_datatype())
}
fn signature(&self) -> Signature {
Signature::uniform(
1,
vec![ConcreteDataType::uint32_datatype()],
Volatility::Immutable,
)
}
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
with_match_primitive_type_id!(columns[0].data_type().logical_type_id(), |$T| {
let col = scalar_unary_op::<<$T as LogicalPrimitiveType>::Native, bool, _>(&columns[0], pg_table_is_visible, &mut EvalContext::default())?;
Ok(Arc::new(col))
}, {
unreachable!()
})
}
}
fn pg_table_is_visible<I>(table_oid: Option<I>, _ctx: &mut EvalContext) -> Option<bool>
where
I: AsPrimitive<u32>,
{
// There is no table visibility in greptime, so we always return true
table_oid.map(|_| true)
}

View File

@@ -12,26 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use api::v1::meta::ProcedureStatus;
use common_macro::admin_fn;
use common_meta::rpc::procedure::ProcedureStateResponse;
use common_query::error::Error::ThreadJoin;
use common_query::error::{
InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result,
UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, Volatility};
use common_telemetry::error;
use datatypes::prelude::*;
use datatypes::vectors::VectorRef;
use serde::Serialize;
use session::context::QueryContextRef;
use snafu::{ensure, Location, OptionExt};
use snafu::ensure;
use crate::ensure_greptime;
use crate::function::{Function, FunctionContext};
use crate::handlers::ProcedureServiceHandlerRef;
#[derive(Serialize)]
@@ -103,6 +96,7 @@ mod tests {
use datatypes::vectors::StringVector;
use super::*;
use crate::function::{Function, FunctionContext};
#[test]
fn test_procedure_state_misc() {

View File

@@ -22,6 +22,7 @@ use flush_compact_region::{CompactRegionFunction, FlushRegionFunction};
use flush_compact_table::{CompactTableFunction, FlushTableFunction};
use migrate_region::MigrateRegionFunction;
use crate::flush_flow::FlushFlowFunction;
use crate::function_registry::FunctionRegistry;
/// Table functions
@@ -35,5 +36,6 @@ impl TableFunction {
registry.register(Arc::new(CompactRegionFunction));
registry.register(Arc::new(FlushTableFunction));
registry.register(Arc::new(CompactTableFunction));
registry.register(Arc::new(FlushFlowFunction));
}
}

View File

@@ -12,23 +12,16 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use common_macro::admin_fn;
use common_query::error::Error::ThreadJoin;
use common_query::error::{
InvalidFuncArgsSnafu, MissingTableMutationHandlerSnafu, Result, UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, Volatility};
use common_telemetry::error;
use datatypes::prelude::*;
use datatypes::vectors::VectorRef;
use session::context::QueryContextRef;
use snafu::{ensure, Location, OptionExt};
use snafu::ensure;
use store_api::storage::RegionId;
use crate::ensure_greptime;
use crate::function::{Function, FunctionContext};
use crate::handlers::TableMutationHandlerRef;
use crate::helper::cast_u64;
@@ -84,6 +77,7 @@ mod tests {
use datatypes::vectors::UInt64Vector;
use super::*;
use crate::function::{Function, FunctionContext};
macro_rules! define_region_function_test {
($name: ident, $func: ident) => {

View File

@@ -12,32 +12,29 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::str::FromStr;
use api::v1::region::{compact_request, StrictWindow};
use common_error::ext::BoxedError;
use common_macro::admin_fn;
use common_query::error::Error::ThreadJoin;
use common_query::error::{
InvalidFuncArgsSnafu, MissingTableMutationHandlerSnafu, Result, TableMutationSnafu,
UnsupportedInputDataTypeSnafu,
};
use common_query::prelude::{Signature, Volatility};
use common_telemetry::{error, info};
use common_telemetry::info;
use datatypes::prelude::*;
use datatypes::vectors::VectorRef;
use session::context::QueryContextRef;
use session::table_name::table_name_to_full_name;
use snafu::{ensure, Location, OptionExt, ResultExt};
use snafu::{ensure, ResultExt};
use table::requests::{CompactTableRequest, FlushTableRequest};
use crate::ensure_greptime;
use crate::function::{Function, FunctionContext};
use crate::handlers::TableMutationHandlerRef;
/// Compact type: strict window.
const COMPACT_TYPE_STRICT_WINDOW: &str = "strict_window";
/// Compact type: strict window (short name).
const COMPACT_TYPE_STRICT_WINDOW_SHORT: &str = "swcs";
#[admin_fn(
name = FlushTableFunction,
@@ -173,8 +170,12 @@ fn parse_compact_params(
})
}
/// Parses the compaction strategy type. For `strict_window` or `swcs`, strict window compaction is chosen;
/// otherwise, regular (TWCS) compaction is used.
fn parse_compact_type(type_str: &str, option: Option<&str>) -> Result<compact_request::Options> {
if type_str.eq_ignore_ascii_case(COMPACT_TYPE_STRICT_WINDOW) {
if type_str.eq_ignore_ascii_case(COMPACT_TYPE_STRICT_WINDOW)
| type_str.eq_ignore_ascii_case(COMPACT_TYPE_STRICT_WINDOW_SHORT)
{
let window_seconds = option
.map(|v| {
i64::from_str(v).map_err(|_| {
@@ -209,6 +210,7 @@ mod tests {
use session::context::QueryContext;
use super::*;
use crate::function::{Function, FunctionContext};
macro_rules! define_table_function_test {
($name: ident, $func: ident) => {
@@ -354,6 +356,17 @@ mod tests {
compact_options: Options::Regular(Default::default()),
},
),
(
&["table", "swcs", "120"],
CompactTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "table".to_string(),
compact_options: Options::StrictWindow(StrictWindow {
window_seconds: 120,
}),
},
),
]);
assert!(parse_compact_params(

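To make the doc comment on `parse_compact_type` concrete: assuming the `compact_request::Options` and `StrictWindow` types used by the test expectations above, a rough, illustrative sketch of the expected behaviour (the wrapping `demo` function and the `"regular"` input are assumptions, not quoted from the source) might look like this:

// Illustrative only; mirrors the expectations in the test table above.
fn demo() -> common_query::error::Result<()> {
    // "swcs" is the short spelling of "strict_window"; both select strict-window compaction.
    let strict = parse_compact_type("swcs", Some("120"))?;
    assert!(matches!(
        strict,
        compact_request::Options::StrictWindow(StrictWindow { window_seconds: 120 })
    ));

    // Per the doc comment, any other type string falls back to regular (TWCS) compaction.
    let regular = parse_compact_type("regular", None)?;
    assert!(matches!(regular, compact_request::Options::Regular(_)));
    Ok(())
}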
View File

@@ -12,24 +12,16 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::{self};
use std::time::Duration;
use common_macro::admin_fn;
use common_meta::rpc::procedure::MigrateRegionRequest;
use common_query::error::Error::ThreadJoin;
use common_query::error::{InvalidFuncArgsSnafu, MissingProcedureServiceHandlerSnafu, Result};
use common_query::prelude::{Signature, TypeSignature, Volatility};
use common_telemetry::error;
use datatypes::data_type::DataType;
use datatypes::prelude::ConcreteDataType;
use datatypes::value::{Value, ValueRef};
use datatypes::vectors::VectorRef;
use session::context::QueryContextRef;
use snafu::{Location, OptionExt};
use crate::ensure_greptime;
use crate::function::{Function, FunctionContext};
use crate::handlers::ProcedureServiceHandlerRef;
use crate::helper::cast_u64;
@@ -128,9 +120,10 @@ mod tests {
use std::sync::Arc;
use common_query::prelude::TypeSignature;
use datatypes::vectors::{StringVector, UInt64Vector};
use datatypes::vectors::{StringVector, UInt64Vector, VectorRef};
use super::*;
use crate::function::{Function, FunctionContext};
#[test]
fn test_migrate_region_misc() {

View File

@@ -72,7 +72,7 @@ impl GreptimeDBTelemetryTask {
match self {
GreptimeDBTelemetryTask::Enable((task, _)) => {
print_anonymous_usage_data_disclaimer();
task.start(common_runtime::bg_runtime())
task.start(common_runtime::global_runtime())
}
GreptimeDBTelemetryTask::Disable => Ok(()),
}

View File

@@ -225,7 +225,7 @@ impl ChannelManager {
}
let pool = self.pool.clone();
let _handle = common_runtime::spawn_bg(async {
let _handle = common_runtime::spawn_global(async {
recycle_channel_in_loop(pool, RECYCLE_CHANNEL_INTERVAL_SECS).await;
});
info!(

View File

@@ -153,6 +153,7 @@ fn build_struct(
let ret = Ident::new(&format!("{ret}_datatype"), ret.span());
let uppcase_display_name = display_name.to_uppercase();
// Get the handler name in function state by the argument ident
// TODO(discord9): consider simple dependency injection if more handlers are needed
let (handler, snafu_type) = match handler_type.to_string().as_str() {
"ProcedureServiceHandlerRef" => (
Ident::new("procedure_service_handler", handler_type.span()),
@@ -163,6 +164,11 @@ fn build_struct(
Ident::new("table_mutation_handler", handler_type.span()),
Ident::new("MissingTableMutationHandlerSnafu", handler_type.span()),
),
"FlowServiceHandlerRef" => (
Ident::new("flow_service_handler", handler_type.span()),
Ident::new("MissingFlowServiceHandlerSnafu", handler_type.span()),
),
handler => ok!(error!(
handler_type.span(),
format!("Unknown handler type: {handler}")
@@ -174,29 +180,29 @@ fn build_struct(
#[derive(Debug)]
#vis struct #name;
impl fmt::Display for #name {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
impl std::fmt::Display for #name {
fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {
write!(f, #uppcase_display_name)
}
}
impl Function for #name {
impl crate::function::Function for #name {
fn name(&self) -> &'static str {
#display_name
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::#ret())
fn return_type(&self, _input_types: &[store_api::storage::ConcreteDataType]) -> common_query::error::Result<store_api::storage::ConcreteDataType> {
Ok(store_api::storage::ConcreteDataType::#ret())
}
fn signature(&self) -> Signature {
#sig_fn()
}
fn eval(&self, func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
fn eval(&self, func_ctx: crate::function::FunctionContext, columns: &[datatypes::vectors::VectorRef]) -> common_query::error::Result<datatypes::vectors::VectorRef> {
// Ensure under the `greptime` catalog for security
ensure_greptime!(func_ctx);
crate::ensure_greptime!(func_ctx);
let columns_num = columns.len();
let rows_num = if columns.is_empty() {
@@ -208,6 +214,9 @@ fn build_struct(
// TODO(dennis): DataFusion doesn't support async UDF currently
std::thread::spawn(move || {
use snafu::OptionExt;
use datatypes::data_type::DataType;
let query_ctx = &func_ctx.query_ctx;
let handler = func_ctx
.state
@@ -215,11 +224,11 @@ fn build_struct(
.as_ref()
.context(#snafu_type)?;
let mut builder = ConcreteDataType::#ret()
let mut builder = store_api::storage::ConcreteDataType::#ret()
.create_mutable_vector(rows_num);
if columns_num == 0 {
let result = common_runtime::block_on_read(async move {
let result = common_runtime::block_on_global(async move {
#fn_name(handler, query_ctx, &[]).await
})?;
@@ -230,7 +239,7 @@ fn build_struct(
.map(|vector| vector.get_ref(i))
.collect();
let result = common_runtime::block_on_read(async move {
let result = common_runtime::block_on_global(async move {
#fn_name(handler, query_ctx, &args).await
})?;
@@ -242,9 +251,9 @@ fn build_struct(
})
.join()
.map_err(|e| {
error!(e; "Join thread error");
ThreadJoin {
location: Location::default(),
common_telemetry::error!(e; "Join thread error");
common_query::error::Error::ThreadJoin {
location: snafu::Location::default(),
}
})?

View File

@@ -73,7 +73,7 @@ pub fn range_fn(args: TokenStream, input: TokenStream) -> TokenStream {
/// Attribute macro to convert a normal function to a SQL administration function. The annotated function
/// should accept:
/// - `&ProcedureServiceHandlerRef` or `&TableMutationHandlerRef` as the first argument,
/// - `&ProcedureServiceHandlerRef`, `&TableMutationHandlerRef`, or `&FlowServiceHandlerRef` as the first argument,
/// - `&QueryContextRef` as the second argument, and
/// - `&[ValueRef<'_>]` as the third argument, which holds the SQL function's input values for each row.
/// Return type must be `common_query::error::Result<Value>`.
@@ -85,6 +85,8 @@ pub fn range_fn(args: TokenStream, input: TokenStream) -> TokenStream {
/// - `ret`: The return type of the generated SQL function; it will be transformed into the `ConcreteDataType::{ret}_datatype()` result.
/// - `display_name`: The display name of the generated SQL function.
/// - `sig_fn`: the function that returns the `Signature` of the generated `Function`.
///
/// Note that this macro should only be used in the `common-function` crate for now.
#[proc_macro_attribute]
pub fn admin_fn(args: TokenStream, input: TokenStream) -> TokenStream {
process_admin_fn(args, input)
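For a concrete picture of the attribute keys documented above, a hypothetical `#[admin_fn]` call site inside the `common-function` crate might look like the sketch below; the identifier `flush_signature` and the placeholder return value are assumptions for illustration, not quoted from this diff.

// Illustrative only: a hypothetical admin function as described by the doc above.
#[admin_fn(
    name = FlushFlowFunction,   // name of the generated struct implementing `Function`
    display_name = flush_flow,  // SQL-visible function name
    sig_fn = flush_signature,   // a fn() -> Signature describing the accepted arguments
    ret = uint64                // expanded to `ConcreteDataType::uint64_datatype()`
)]
pub(crate) async fn flush_flow(
    flow_service_handler: &FlowServiceHandlerRef, // handler taken from `FunctionState`
    query_ctx: &QueryContextRef,
    params: &[ValueRef<'_>],    // one row of SQL argument values
) -> Result<Value> {
    let (catalog_name, flow_name) = parse_flush_flow(params, query_ctx)?;
    flow_service_handler
        .flush(&catalog_name, &flow_name, query_ctx.clone())
        .await?;
    // Placeholder result; the generated wrapper writes it into the uint64 output column.
    Ok(Value::from(1u64))
}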

View File

@@ -6,6 +6,7 @@ license.workspace = true
[features]
testing = []
pg_kvbackend = ["dep:tokio-postgres"]
[lints]
workspace = true
@@ -56,6 +57,7 @@ store-api.workspace = true
strum.workspace = true
table.workspace = true
tokio.workspace = true
tokio-postgres = { workspace = true, optional = true }
tonic.workspace = true
typetag = "0.2"

View File

@@ -227,7 +227,7 @@ impl Procedure for DropTableProcedure {
}
fn rollback_supported(&self) -> bool {
!matches!(self.data.state, DropTableState::Prepare)
!matches!(self.data.state, DropTableState::Prepare) && self.data.allow_rollback
}
async fn rollback(&mut self, _: &ProcedureContext) -> ProcedureResult<()> {
@@ -256,6 +256,8 @@ pub struct DropTableData {
pub task: DropTableTask,
pub physical_region_routes: Vec<RegionRoute>,
pub physical_table_id: Option<TableId>,
#[serde(default)]
pub allow_rollback: bool,
}
impl DropTableData {
@@ -266,6 +268,7 @@ impl DropTableData {
task,
physical_region_routes: vec![],
physical_table_id: None,
allow_rollback: false,
}
}

View File

@@ -12,8 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use common_catalog::format_full_table_name;
use snafu::OptionExt;
use store_api::metric_engine_consts::METRIC_ENGINE_NAME;
use crate::ddl::drop_table::DropTableProcedure;
use crate::error::Result;
use crate::error::{self, Result};
impl DropTableProcedure {
/// Fetches the table info and physical table route.
@@ -29,6 +33,23 @@ impl DropTableProcedure {
self.data.physical_region_routes = physical_table_route_value.region_routes;
self.data.physical_table_id = Some(physical_table_id);
if physical_table_id == self.data.table_id() {
let table_info_value = self
.context
.table_metadata_manager
.table_info_manager()
.get(task.table_id)
.await?
.with_context(|| error::TableInfoNotFoundSnafu {
table: format_full_table_name(&task.catalog, &task.schema, &task.table),
})?
.into_inner();
let engine = table_info_value.table_info.meta.engine;
// rollback only if dropping the metric physical table fails
self.data.allow_rollback = engine.as_str() == METRIC_ENGINE_NAME
}
Ok(())
}
}

Some files were not shown because too many files have changed in this diff.