Compare commits

...

68 Commits

Author SHA1 Message Date
evenyag
00d759e828 chore: bump version to v0.15.1
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-04 22:53:46 +08:00
Lei, HUANG
0042ea6462 fix: filter empty batch in bulk insert api (#6459)
* fix/filter-empty-batch-in-bulk-insert-api:
 **Add Early Return for Empty Record Batches in `bulk_insert.rs`**

 - Implemented an early return in the `Inserter` implementation to handle cases where `record_batch.num_rows()` is zero, improving efficiency by avoiding unnecessary processing.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 **Improve Bulk Insert Handling**

 - **`handle_bulk_insert.rs`**: Added a check to handle cases where the batch has zero rows, immediately returning and sending a success response with zero rows processed.
 - **`bulk_insert.rs`**: Enhanced logic to skip processing for masks that select none, optimizing the bulk insert operation by avoiding unnecessary iterations.

 These changes improve the efficiency and robustness of the bulk insert process by handling edge cases more effectively.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 ### Refactor and Error Handling Enhancements

 - **Refactored Timestamp Handling**: Introduced `timestamp_array_to_primitive` function in `timestamp.rs` to streamline conversion of timestamp arrays to primitive arrays, reducing redundancy in `handle_bulk_insert.rs` and `bulk_insert.rs`.
 - **Error Handling**: Added `InconsistentTimestampLength` error in `error.rs` to handle mismatched timestamp column lengths in bulk insert operations.
 - **Bulk Insert Logic**: Updated `handle_bulk_insert.rs` to utilize the new timestamp conversion function and added checks for timestamp length consistency.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 **Refactor `bulk_insert.rs` to streamline imports**

 - Simplified import statements by removing unused timestamp-related arrays and data types from the `arrow` crate in `bulk_insert.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-04 22:53:46 +08:00
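
A minimal sketch of the early-return guard described in the commit above, assuming a hypothetical handler that receives an Arrow `RecordBatch`; the outcome type and function name are illustrative, not the actual API.

```rust
use arrow::record_batch::RecordBatch;

/// Illustrative result type; the real handler reports affected rows to the caller.
struct BulkInsertOutcome {
    rows_inserted: usize,
}

fn handle_bulk_insert(record_batch: &RecordBatch) -> BulkInsertOutcome {
    // Empty batches carry no data: skip timestamp extraction, region routing
    // and flow notification entirely and report zero rows processed.
    if record_batch.num_rows() == 0 {
        return BulkInsertOutcome { rows_inserted: 0 };
    }
    // ... the normal bulk insert path would follow here ...
    BulkInsertOutcome {
        rows_inserted: record_batch.num_rows(),
    }
}
```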
Zhenchi
d06450715f fix: add backward compatibility for SkippingIndexOptions deserialization (#6458)
* fix: add backward compatibility for `SkippingIndexOptions` deserialization

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-04 22:53:46 +08:00
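
One common way to get this kind of backward-compatible deserialization with serde is sketched below; the field names, defaults, and legacy key are hypothetical, not the actual `SkippingIndexOptions` definition.

```rust
use serde::Deserialize;

fn default_granularity() -> u32 {
    10_240
}

#[derive(Debug, Deserialize)]
struct SkippingIndexOptions {
    // Older serialized forms may omit the field entirely: fall back to a default.
    #[serde(default = "default_granularity")]
    granularity: u32,
    // Older serialized forms may use a legacy key: accept both spellings.
    #[serde(default, alias = "index-type")]
    index_type: Option<String>,
}

fn main() {
    // A payload written by an older version still deserializes.
    let opts: SkippingIndexOptions = serde_json::from_str("{}").unwrap();
    println!("{opts:?}");
}
```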
evenyag
8612bb066f chore: fix statement compile errors
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 23:05:21 +08:00
Yingwen
467593d329 fix: enable max_execution time for other read only statements (#6454)
Also disable the timeout when timeout is 0

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 23:05:21 +08:00
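
A sketch of the "no timeout when the limit is 0" behaviour mentioned above, using tokio's timeout wrapper; the wrapper function is illustrative rather than the actual frontend code.

```rust
use std::time::Duration;

use tokio::time::timeout;

async fn run_with_max_execution_time<F, T>(max_execution_time_ms: u64, fut: F) -> Result<T, String>
where
    F: std::future::Future<Output = T>,
{
    if max_execution_time_ms == 0 {
        // A zero MAX_EXECUTION_TIME means "unlimited": run the statement as-is.
        Ok(fut.await)
    } else {
        // Otherwise bound the read-only statement by the configured limit.
        timeout(Duration::from_millis(max_execution_time_ms), fut)
            .await
            .map_err(|_| "statement exceeded max_execution_time".to_string())
    }
}
```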
Ruihang Xia
9e4ae070b2 feat: skip rule checker on ingestion (#6453)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-03 23:05:21 +08:00
Ruihang Xia
d8261dda51 feat!: point matrix based partition rule checker (#6431)
* bare implementation

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* stateful generator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* error report

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix remap checkpoint

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use matrix generator as iterator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* pre-calculate suffix product

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update existing test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix ut

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 23:05:21 +08:00
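
The "pre-calculate suffix product" step above is the usual mixed-radix trick for walking a point matrix lazily instead of materialising the full cross product; a small self-contained sketch with illustrative names, not the real checker API:

```rust
// suffix[i] = product of dims[i+1..], so suffix[last] == 1.
fn suffix_products(dims: &[usize]) -> Vec<usize> {
    let mut suffix = vec![1; dims.len()];
    for i in (0..dims.len().saturating_sub(1)).rev() {
        suffix[i] = suffix[i + 1] * dims[i + 1];
    }
    suffix
}

// Decode a flat index into one point (one coordinate per partition column).
fn decode_point(flat: usize, suffix: &[usize], dims: &[usize]) -> Vec<usize> {
    dims.iter()
        .zip(suffix)
        .map(|(dim, s)| (flat / s) % dim)
        .collect()
}

fn main() {
    let dims = [3, 2, 4]; // candidate values per column
    let suffix = suffix_products(&dims);
    assert_eq!(suffix, vec![8, 4, 1]);
    // Iterate every point of the 3 x 2 x 4 matrix without building it up front.
    for flat in 0..dims.iter().product::<usize>() {
        let _point = decode_point(flat, &suffix, &dims);
    }
}
```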
dennis zhuang
7ab9b335a1 fix: label_replace and label_join functions when used as sub-expressions (#6443)
* fix: label_replace and label_join functions in expressions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove update_fields

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: tql eval -> TQL EVAL

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: empty regex and not existing source label

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: simplify test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 23:05:21 +08:00
Ruihang Xia
60835afb47 feat: Collider for playing with PartitionRule (#6399)
* skeleton

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* initial impl and tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor and reorganize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* error handling

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* explain naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 23:05:21 +08:00
fys
aba5bf7431 refactor: avoid adding feature to parameter (#6391)
* refactor: avoid adding feature to parameter

* avoid `cfg(not(feature = ...))` block
2025-07-03 15:48:22 +08:00
Yingwen
7897fe8dbe fix: correct MAX_EXECUTION_TIME timeout calculation (#6444)
* feat: implement statement timeout in frontend instance

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fail fast when timeout is 0

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: update start time

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 10:46:35 +08:00
Ruihang Xia
cc8ec706a1 fix: remap column indices on overriding logical table partitions (#6446)
* fix: remap column indices on overriding logical table partitions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor map query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 10:46:35 +08:00
Weny Xu
7c688718db fix: fix dest_keys chunks bug in TombstoneManager (#6432)
* fix(meta): fix dest_keys_chunks bug in TombstoneManager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fix typo

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix sqlness tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 10:46:35 +08:00
shuiyisong
8a0e554e5a feat(pipeline): support Loki API (#6390)
* chore: use schema_info

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: abstract loki item generator

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: introduce middle item

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* feat: introduce pipeline in loki api

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* test: add tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update prefix and test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change recursion to loop

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 10:46:35 +08:00
jeremyhi
80fae1c559 feat: override logical table's partition key indices (#6385)
* feat: Override logical table's partition key indices with physical table's

* chore: by comment

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 10:46:35 +08:00
Zhenchi
c37c4df20d feat: pick #6416 to release/0.15 (#6445)
* feat: pick #6416 to release/0.15

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* upgrade proto

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-02 08:55:54 +00:00
Yingwen
f712c1b356 feat: cherry-pick #6384 #6388 #6396 #6403 #6412 #6405 to 0.15 branch (#6414)
* feat: supports CsvWithNames and CsvWithNamesAndTypes formats (#6384)

* feat: supports CsvWithNames and CsvWithNamesAndTypes formats and object/array types

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: added and fixed tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: fix test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add json type csv tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove comment

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: introduce /v1/health for healthcheck from external (#6388)

Signed-off-by: Ning Sun <sunning@greptime.com>
Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: update dashboard to v0.10.1 (#6396)

Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: complete partial index search results in cache (#6403)

* fix: complete partial index search results in cache

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add initial tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* cover issue case

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* TestEnv new -> async

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: skip failing nodes when gathering process info (#6412)

* fix/process-manager-skip-fail-nodes:
 - **Enhance Error Handling in `process_manager.rs`:**
   Improved error handling by adding a warning log for failing nodes in the `list_process` method. This ensures that the process listing continues even if some nodes fail to respond.

 - **Add Error Type Import in `process_manager.rs`:**
   Included the `Error` type from the `error` module to handle errors more effectively within the `ProcessManager` implementation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/process-manager-skip-fail-nodes:
 **Enhancements to Debugging and Trait Implementation**

 - **`process_manager.rs`**: Improved logging by adding more detailed error messages when skipping failing nodes.
 - **`selector.rs`**: Enhanced the `FrontendClient` trait by adding the `Debug` trait bound to improve debugging capabilities.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: pass pipeline name through http header and get db from query context (#6405)

Signed-off-by: zyy17 <zyylsxm@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
Signed-off-by: Ning Sun <sunning@greptime.com>
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Signed-off-by: zyy17 <zyylsxm@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Ning Sun <sunng@protonmail.com>
Co-authored-by: ZonaHe <zonahe@qq.com>
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: zyy17 <zyylsxm@gmail.com>
2025-06-27 20:11:28 +08:00
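
A sketch of the "skip failing nodes" behaviour from the cherry-picked process-info fix (#6412) above: a per-node failure is logged and skipped rather than aborting the whole listing. The node client and types are hypothetical stand-ins.

```rust
#[derive(Debug)]
struct ProcessInfo {
    node: String,
    queries: usize,
}

// Stand-in for the per-node RPC call that may fail.
fn fetch_from_node(node: &str) -> Result<ProcessInfo, String> {
    if node == "bad-node" {
        Err("connection refused".to_string())
    } else {
        Ok(ProcessInfo { node: node.to_string(), queries: 1 })
    }
}

fn list_processes(nodes: &[&str]) -> Vec<ProcessInfo> {
    let mut all = Vec::new();
    for node in nodes {
        match fetch_from_node(node) {
            Ok(info) => all.push(info),
            // Keep going: one unreachable frontend should not fail the listing.
            // The real code emits a warning log here instead of printing.
            Err(e) => eprintln!("skipping failing node {node}: {e}"),
        }
    }
    all
}

fn main() {
    let procs = list_processes(&["node-a", "bad-node", "node-b"]);
    assert_eq!(procs.len(), 2);
}
```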
shuiyisong
7cd6be41ce feat(pipeline): introduce pipeline doc version 2 for combine-transform (#6360)
* chore: init commit of pipeline doc version v2

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove unused code

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove unused code

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: compatible with v1 to remain field in the map during transform

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: pipeline.exec_mut

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: typo

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change from v2 to 2 in version setting

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-06-22 00:58:36 +00:00
ZonaHe
15616d0c43 feat: update dashboard to v0.10.0 (#6368)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-06-20 23:48:35 +00:00
dennis zhuang
b43e315c67 fix: test test_tls_file_change_watch (#6366)
* fix: test test_tls_file_change_watch

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: cert_path

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: revert times

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: debug

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: times

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove assertions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: use inspect_err

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-06-20 16:57:07 +00:00
Yingwen
36ab1ceef7 chore: prints a warning when skip_ssl_validation is true (#6367)
chore: warn when skip_ssl_validation is true

We already log all configs when a node starts.

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-20 13:14:50 +00:00
Weny Xu
3fb1b726c6 refactor(cli): simplify metadata command parameters (#6364)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-20 09:00:21 +00:00
Olexandr88
c423bb31fe docs: added YouTube link to documentation (#6362)
Update README.md
2025-06-20 08:16:54 +00:00
rgidda
e026f766d2 feat(storage): Add skip_ssl_validation option for object storage HTTP client (#6358)
* feat(storage): Add skip_ssl_validation option for object storage HTTP client

Signed-off-by: rgidda <rgidda@hitachivantara.com>

* fix(test): Broken test case for - Add skip_ssl_validation option for object storage HTTP client

Signed-off-by: rgidda <rgidda@hitachivantara.com>

* fix: test

* fix: test

---------

Signed-off-by: rgidda <rgidda@hitachivantara.com>
Co-authored-by: rgidda <rgidda@hitachivantara.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2025-06-20 08:08:19 +00:00
discord9
9d08f2532a feat: dist auto step aggr pushdown (#6268)
* wip: steppable aggr fn

Signed-off-by: discord9 <discord9@163.com>

* poc: step aggr query

Signed-off-by: discord9 <discord9@163.com>

* feat: mvp poc stuff

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: import missing

Signed-off-by: discord9 <discord9@163.com>

* feat: support first/last_value

Signed-off-by: discord9 <discord9@163.com>

* fix: check also include first/last value

Signed-off-by: discord9 <discord9@163.com>

* chore: clean up after rebase

Signed-off-by: discord9 <discord9@163.com>

* feat: optimize yes!

Signed-off-by: discord9 <discord9@163.com>

* fix: alias qualified

Signed-off-by: discord9 <discord9@163.com>

* test: more testcases

Signed-off-by: discord9 <discord9@163.com>

* chore: qualified column

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* fix: case when can push down

Signed-off-by: discord9 <discord9@163.com>

* feat: udd/hll_merge is also the same

Signed-off-by: discord9 <discord9@163.com>

* fix: udd/hll_merge args

Signed-off-by: discord9 <discord9@163.com>

* tests: fix sqlness

Signed-off-by: discord9 <discord9@163.com>

* tests: fix sqlness

Signed-off-by: discord9 <discord9@163.com>

* fix: udd/hll merge steppable

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* test: REDACTED

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: more formal transform action

Signed-off-by: discord9 <discord9@163.com>

* feat: support modify child plan too

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-06-20 07:18:55 +00:00
LFC
e072726ea8 refactor: make scanner creation async (#6349)
* refactor: make scanner creation async

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-06-20 06:44:49 +00:00
Ning Sun
e78c3e1eaa refactor: make metadata region option opt-in (#6350)
* refactor: make metadata region option opt-in

Signed-off-by: Ning Sun <sunning@greptime.com>

* fix: preserve wal_options for metadata region

Signed-off-by: Ning Sun <sunning@greptime.com>

* Update src/metric-engine/src/engine/create.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-20 03:31:16 +00:00
Weny Xu
89e3c8edab fix(meta): enhance mysql election client with timeouts and reconnection (#6341)
* fix(meta): enhance mysql election client with timeouts and reconnection

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: improve MySQL election client lease management and error handling

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: adjust timeout configurations for election clients

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remove unused error

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit test

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-19 12:44:23 +00:00
fys
d4826b998d feat: support execute sql in frontend_client (#6355)
* feat: support execute sql in frontend_client

* chore: remove unnecessary clone

* add components for flownode instance

* add feature gate for component

* fix: enterprise feature
2025-06-19 09:47:16 +00:00
Weny Xu
d9faa5c801 feat(cli): add metadata del commands (#6339)
* feat: introduce cli for deleting metadata

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor(cli): flatten metadata command structure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add alias

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor(meta): implement logical deletion for CLI tool

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-19 06:48:00 +00:00
Ning Sun
12c3a3205b chore: security updates (#6351) 2025-06-19 06:43:43 +00:00
Weny Xu
5231505021 fix(metric-engine): properly propagate errors during batch open operation (#6325)
* fix(metric-engine): properly propagate errors during batch open operation

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-19 06:37:54 +00:00
Lei, HUANG
6ece560f8c fix: reordered write cause incorrect kv (#6345)
* fix/reordered-write-cause-incorrect-kv:
 - **Enhance Testing in `partition_tree.rs`**: Added comprehensive test functions such as `kv_region_metadata`, `key_values`, and `collect_kvs` to improve the robustness of key-value operations and ensure correct behavior of the `PartitionTreeMemtable`.
 - **Improve Key Handling in `dict.rs`**: Modified `KeyDictBuilder` to handle both full and sparse keys, ensuring correct mapping and insertion. Added a new test `test_builder_finish_with_sparse_key` to validate the handling of sparse keys.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/reordered-write-cause-incorrect-kv:
 ### Refactor `partition_tree.rs` for Improved Key Handling

 - **Refactored Key Handling**: Simplified the `key_values` function to accept an iterator of keys, removing hardcoded key-value pairs. This change enhances flexibility and reduces redundancy in key management.
 - **Updated Test Cases**: Modified test cases to use the new `key_values` function signature, ensuring they iterate over keys dynamically rather than relying on predefined lists.

 Files affected:
 - `src/mito2/src/memtable/partition_tree.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/reordered-write-cause-incorrect-kv:
 Enhance Testing in `partition_tree.rs`

 - Added assertions to verify key-value collection after `memtable` and `forked` operations.
 - Refactored key-value writing logic for clarity in `forked` operations.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-19 06:32:40 +00:00
fys
2ab08a8f93 chore(deps): switch greptime-proto to official repository (#6347) 2025-06-18 12:52:46 +00:00
Lei, HUANG
086ae9cdcd chore: print series count after wal replay (#6344)
* chore/print-series-count-after-wal-replay:
 ### Add Series Count Functionality and Logging Enhancements

 - **`time_partition.rs`**: Introduced `series_count` method to calculate the total timeseries count across all time partitions.
 - **`opener.rs`**: Enhanced logging to include the total timeseries replayed during WAL replay.
 - **`version.rs`**: Added `series_count` method to `VersionControlData` for approximating timeseries count in the current version.
 - **`handler.rs`**: Added entry and exit logging for the `sql` function to trace execution flow.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/print-series-count-after-wal-replay:
 ### Remove Unused Import

 - **File Modified**: `src/servers/src/http/handler.rs`
 - **Change Summary**: Removed the unused `info` import from `common_telemetry`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-18 12:04:39 +00:00
LFC
6da8e00243 refactor: make finding leader in metasrv client dynamic (#6343)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-06-18 11:29:23 +00:00
Arshdeep
4b04c402b6 fix: add path exist check in copy_table_from (#6182) (#6300)
Signed-off-by: Arshdeep54 <balarsh535@gmail.com>
2025-06-18 09:50:27 +00:00
Lei, HUANG
a59b6c36d2 chore: add metrics for active series and field builders (#6332)
* chore/series-metrics:
 ### Add Metrics for Active Series and Values in Memtable

 - **`simple_bulk_memtable.rs`**: Implemented `Drop` trait for `SimpleBulkMemtable` to decrement `MEMTABLE_ACTIVE_SERIES_COUNT` and `MEMTABLE_ACTIVE_VALUES_COUNT` upon dropping.
 - **`time_series.rs`**:
   - Introduced `SeriesMap` with `Drop` implementation to manage active series and values count.
   - Updated `SeriesSet` and `Iter` to use `SeriesMap`.
   - Added `num_values` method in `Series` to calculate the number of values.
 - **`metrics.rs`**: Added `MEMTABLE_ACTIVE_SERIES_COUNT` and `MEMTABLE_ACTIVE_VALUES_COUNT` metrics to track active series and values in `TimeSeriesMemtable`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/series-metrics:
- Add metrics for active series and field builders
- Update dashboard

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/series-metrics:
 **Add Series Count Tracking in Memtables**

 - **`flush.rs`**: Updated `RegionFlushTask` to track and log the series count during memtable flush operations.
 - **`memtable.rs`**: Introduced `series_count` in `MemtableStats` and added a method to retrieve it.
 - **`partition_tree.rs`, `partition.rs`, `tree.rs`**: Implemented series count calculation in `PartitionTreeMemtable` and its components.
 - **`simple_bulk_memtable.rs`, `time_series.rs`**: Integrated series count tracking in `SimpleBulkMemtable` and `TimeSeriesMemtable` implementations.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* Update src/mito2/src/memtable.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-06-18 09:16:45 +00:00
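
A sketch of the Drop-based gauge bookkeeping described above, using the prometheus crate; the metric name and the `SeriesMap` shape are illustrative, not the real mito2 types.

```rust
use once_cell::sync::Lazy;
use prometheus::{register_int_gauge, IntGauge};

static MEMTABLE_ACTIVE_SERIES_COUNT: Lazy<IntGauge> = Lazy::new(|| {
    register_int_gauge!("memtable_active_series_count", "active series in memtables").unwrap()
});

struct SeriesMap {
    series: Vec<String>,
}

impl SeriesMap {
    fn insert(&mut self, key: String) {
        self.series.push(key);
        // Count the series as active while the memtable holds it.
        MEMTABLE_ACTIVE_SERIES_COUNT.inc();
    }
}

impl Drop for SeriesMap {
    fn drop(&mut self) {
        // Give the whole count back when the memtable goes away (e.g. after flush).
        MEMTABLE_ACTIVE_SERIES_COUNT.sub(self.series.len() as i64);
    }
}
```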
zyy17
f6ce6fe385 fix(jaeger-api): incorrect find_traces() logic and multiple api compatible issues (#6293)
* fix: use `limit` params in jaeger http

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: only parse `max_duration` and `min_duration` when it's not empty

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: handle the input for empty `limit` string

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: missing the filter for `service_name`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* test: fix ci errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: incorrect behavior of find_traces

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: the logic of `find_traces()`

The correct logic should be:

1. Get all trace ids that match the filters;

2. Get all traces that match the trace ids from the previous query;

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: integration test errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `empty_string_as_none`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: refine naming

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-18 08:01:36 +00:00
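
The two-step `find_traces()` flow spelled out in the commit above, as a sketch; the query helpers are hypothetical stand-ins for the real Jaeger query layer.

```rust
struct TraceFilters {
    service_name: String,
    limit: usize,
}

// Step 1: apply service/duration/limit filters and return only matching trace ids.
fn find_matching_trace_ids(filters: &TraceFilters) -> Vec<String> {
    let _ = (&filters.service_name, filters.limit);
    vec!["trace-1".to_string(), "trace-2".to_string()]
}

// Step 2: fetch every span of those traces, unfiltered, so each returned trace
// is complete rather than truncated by the original filters.
fn fetch_traces_by_ids(trace_ids: &[String]) -> Vec<String> {
    trace_ids.to_vec()
}

fn find_traces(filters: &TraceFilters) -> Vec<String> {
    let ids = find_matching_trace_ids(filters);
    fetch_traces_by_ids(&ids)
}
```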
Weny Xu
4d4bfb7d8b fix(metric): prevent setting memtable type for metadata region (#6340)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-18 03:49:22 +00:00
Ruihang Xia
6e1e8f19e6 feat: support setting FORMAT in TQL ANALYZE/VERBOSE (#6327)
* feat: support setting FORMAT in TQL ANALYZE/VERBOSE

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-18 03:39:12 +00:00
Weny Xu
49cb4da6d2 feat: introduce CLI tool for repairing logical table metadata (#6322)
* feat: introduce logical table metadata repair cli tool

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: deps

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: flatten doctor module structure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-17 13:53:59 +00:00
Lei, HUANG
0d0236ddab fix: revert string builder initial capacity in TimeSeriesMemtable (#6330)
fix/revert-string-builder-initial-capacity:
 ### Update `time_series.rs` Memory Allocation

 - **Reduced StringBuilder Capacity**: Adjusted the initial capacity of `StringBuilder` in `ValueBuilder` from `(256, 4096)` to `(4, 8)` to optimize memory usage in `time_series.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-17 13:24:52 +00:00
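
For context, Arrow's `StringBuilder::with_capacity` takes an item capacity and a byte capacity, so the old default reserved 4 KiB of value bytes per builder. A sketch of the change with the surrounding `ValueBuilder` left out; the capacities are the ones quoted in the commit message.

```rust
use arrow::array::StringBuilder;

fn new_string_value_builder() -> StringBuilder {
    // Previously: StringBuilder::with_capacity(256, 4096), which pre-allocates
    // 4 KiB of value bytes per string column per series. Reverted to a small
    // initial capacity and let the builder grow on demand.
    StringBuilder::with_capacity(4, 8)
}
```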
Lei, HUANG
f8edb53b30 fix: carry process id in query ctx (#6335)
fix/carry-process-id-in-query-ctx:
 ### Commit Message

 Enhance query context handling in `instance.rs`

 - Updated `Instance` implementation to include `process_id` in query context initialization, improving traceability and debugging capabilities.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-17 12:04:23 +00:00
Lin Yihai
438791b3e4 feat: Add DROP DEFAULT (#6290)
* feat: Add `DROP DEFAULT`

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

* chore: use `next_token`

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

---------

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>
2025-06-17 09:33:56 +00:00
discord9
50e4c916e7 chore: clean up unused impl &standalone use mark dirty (#6331)
Signed-off-by: discord9 <discord9@163.com>
2025-06-17 08:18:17 +00:00
Ruihang Xia
16e7f7b64b fix: override logical table's partition column with physical table's (#6326)
* fix: override logical table's partition column with physical table's

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-17 08:00:54 +00:00
localhost
53c4fd478e chore: add skip error for pipeline skip error log (#6318)
* chore: wip

Signed-off-by: paomian <xpaomian@gmail.com>

* chore: add skip error for pipeline skip error log

Signed-off-by: paomian <xpaomian@gmail.com>

* chore: add test and check timestamp must be non null

Signed-off-by: paomian <xpaomian@gmail.com>

* chore: fix test

* chore: fix by pr comment

* fix: typo

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: paomian <xpaomian@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-17 06:44:11 +00:00
Lei, HUANG
ecbbd2fbdb feat: handle Ctrl-C command in MySQL client (#6320)
* feat/answer-ctrl-c-in-mysql:
 ## Implement Connection ID-based Query Killing

 ### Key Changes:
 - **Connection ID Management:**
   - Added `connection_id` to `Session` and `QueryContext` in `src/session/src/lib.rs` and `src/session/src/context.rs`.
   - Updated `MysqlInstanceShim` and `MysqlServer` to handle `connection_id` in `src/servers/src/mysql/handler.rs` and `src/servers/src/mysql/server.rs`.

 - **KILL Statement Enhancements:**
   - Introduced `Kill` enum to handle both `ProcessId` and `ConnectionId` in `src/sql/src/statements/kill.rs`.
   - Updated `ParserContext` to parse `KILL QUERY <connection_id>` in `src/sql/src/parser.rs`.
   - Modified `StatementExecutor` to support killing queries by `connection_id` in `src/operator/src/statement/kill.rs`.

 - **Process Management:**
   - Refactored `ProcessManager` to include `connection_id` in `src/catalog/src/process_manager.rs`.
   - Added `kill_local_process` method for local query termination.

 - **Testing:**
   - Added tests for `KILL` statement parsing and execution in `src/sql/src/parser.rs`.

 ### Affected Files:
 - `Cargo.lock`, `Cargo.toml`
 - `src/catalog/src/process_manager.rs`
 - `src/frontend/src/instance.rs`
 - `src/frontend/src/stream_wrapper.rs`
 - `src/operator/src/statement.rs`
 - `src/operator/src/statement/kill.rs`
 - `src/servers/src/mysql/federated.rs`
 - `src/servers/src/mysql/handler.rs`
 - `src/servers/src/mysql/server.rs`
 - `src/servers/src/postgres.rs`
 - `src/session/src/context.rs`
 - `src/session/src/lib.rs`
 - `src/sql/src/parser.rs`
 - `src/sql/src/statements.rs`
 - `src/sql/src/statements/kill.rs`
 - `src/sql/src/statements/statement.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

 Conflicts:
	Cargo.lock
	Cargo.toml

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/answer-ctrl-c-in-mysql:
 ### Enhance Process Management and Execution

 - **`process_manager.rs`**: Added a new method `find_processes_by_connection_id` to filter processes by connection ID, improving process management capabilities.
 - **`kill.rs`**: Refactored the process killing logic to utilize the new `find_processes_by_connection_id` method, streamlining the execution flow and reducing redundant checks.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/answer-ctrl-c-in-mysql:
 ## Commit Message

 ### Update Process ID Type and Refactor Code

 - **Change Process ID Type**: Updated the process ID type from `u64` to `u32` across multiple files to optimize memory usage. Affected files include `process_manager.rs`, `lib.rs`, `database.rs`, `instance.rs`, `server.rs`, `stream_wrapper.rs`, `kill.rs`, `federated.rs`, `handler.rs`, `server.rs`, `postgres.rs`, `mysql_server_test.rs`, `context.rs`, `lib.rs`, and `test_util.rs`.

 - **Remove Connection ID**: Removed the `connection_id` field and related logic from `process_manager.rs`, `lib.rs`, `instance.rs`, `server.rs`, `stream_wrapper.rs`, `kill.rs`, `federated.rs`, `handler.rs`, `server.rs`, `postgres.rs`, `mysql_server_test.rs`, `context.rs`, `lib.rs`, and `test_util.rs` to simplify the codebase.

 - **Refactor Process Management**: Refactored process management logic to improve clarity and maintainability in `process_manager.rs`, `kill.rs`, and `handler.rs`.

 - **Enhance MySQL Server Handling**: Improved MySQL server handling by integrating process management in `server.rs` and `mysql_server_test.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/answer-ctrl-c-in-mysql:
 ### Add Process Manager to Postgres Server

 - **`src/frontend/src/server.rs`**: Updated server initialization to include `process_manager`.
 - **`src/servers/src/postgres.rs`**: Modified `MakePostgresServerHandler` to accept `process_id` for session creation.
 - **`src/servers/src/postgres/server.rs`**: Integrated `process_manager` into `PostgresServer` for generating `process_id` during connection handling.
 - **`src/servers/tests/postgres/mod.rs`** and **`tests-integration/src/test_util.rs`**: Adjusted test server setup to accommodate optional `process_manager`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/answer-ctrl-c-in-mysql:
 Update `greptime-proto` Dependency

 - Updated the `greptime-proto` dependency to a new revision in both `Cargo.lock` and `Cargo.toml`.
   - `Cargo.lock`: Changed source revision from `d75a56e05a87594fe31ad5c48525e9b2124149ba` to `fdcbe5f1c7c467634c90a1fd1a00a784b92a4e80`.
   - `Cargo.toml`: Updated the `greptime-proto` git revision to match the new commit.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-17 06:36:23 +00:00
fys
3e3a12385c refactor: make flownode gRPC services able to be added dynamically (#6323)
chore: enhance the flownode gRPC servers extension
2025-06-17 06:27:41 +00:00
shuiyisong
079daf5db9 feat: support special labels parsing in prom remote write (#6302)
* feat: support special labels parsing in prom remote write

* chore: change __schema__ to __database__

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: remove the missing type alias

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update cr issue

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2025-06-17 03:20:33 +00:00
liyang
56b9ab5279 ci: add pr label workflow (#6316)
* ci: add pr label workflow

Signed-off-by: liyang <daviderli614@gmail.com>

* move permissions to jobs

Signed-off-by: liyang <daviderli614@gmail.com>

* add checkout step

Signed-off-by: liyang <daviderli614@gmail.com>

* add job permissions

Signed-off-by: liyang <daviderli614@gmail.com>

* custom sizes

Signed-off-by: liyang <daviderli614@gmail.com>

* Update .github/workflows/pr-labeling.yaml

Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>

---------

Signed-off-by: liyang <daviderli614@gmail.com>
Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>
2025-06-16 17:26:16 +00:00
Ruihang Xia
be4e0d589e feat: support arbitrary constant expression in PromQL function (#6315)
* refactor holt_winters, predict_linear, quantile, round

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* some sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* support some functions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* make all sqlness cases pass

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix other sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* some refactor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-16 15:12:27 +00:00
Yingwen
2a3445c72c fix: ignore missing columns and tables in PromQL (#6285)
* fix: handle table/column not found in or

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update result

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: drop table after test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix test cases

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: do not return table not found error in series_query

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-16 12:15:38 +00:00
Lei, HUANG
9d997d593c feat: bulk support flow batch (#6291)
* feat/bulk-support-flow-batch:
 ### Refactor and Enhance Timestamp Handling in gRPC and Bulk Insert

 - **Refactor Table Handling**:
   - Updated `put_record_batch` method to use `TableRef` instead of `TableId` in `grpc.rs`, `greptime_handler.rs`, and `grpc.rs`.
   - Modified `handle_bulk_insert` to accept `TableRef` and extract `TableId` internally in `bulk_insert.rs`.

 - **Enhance Timestamp Processing**:
   - Added `compute_timestamp_range` function to calculate timestamp range in `bulk_insert.rs`.
   - Introduced error handling for invalid time index types in `error.rs`.

 - **Test Adjustments**:
   - Updated `DummyInstance` implementation in `tests/mod.rs` to align with new method signatures.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 ### Add Dirty Window Handling in Flow Module

 - **Updated `greptime-proto` Dependency**: Updated the `greptime-proto` dependency to a new revision in `Cargo.lock` and `Cargo.toml`.
 - **Flow Module Enhancements**:
   - Added `DirtyWindowRequest` handling in `flow.rs`, `node_manager.rs`, `test_util.rs`, `flownode_impl.rs`, and `server.rs`.
   - Implemented `handle_mark_window_dirty` function to manage dirty time windows.
 - **Bulk Insert Enhancements**:
   - Modified `bulk_insert.rs` to notify flownodes about dirty time windows using `update_flow_dirty_window`.
 - **Removed Unused Imports**: Cleaned up unused imports in `greptime_handler.rs`, `grpc.rs`, and `mod.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat: mark dirty time window

* feat: metrics

* metrics: more useful metrics batching mode

* feat/bulk-support-flow-batch:
 **Refactor Timestamp Handling and Update Dependencies**

 - **Dependency Update**: Updated `greptime-proto` dependency in `Cargo.lock` and `Cargo.toml` to a new revision.
 - **Batching Engine Refactor**: Modified `src/flow/src/batching_mode/engine.rs` to replace `dirty_time_ranges` with `timestamps` for improved timestamp handling.
 - **Bulk Insert Refactor**: Updated `src/operator/src/bulk_insert.rs` to refactor timestamp extraction and handling. Replaced `compute_timestamp_range` with `extract_timestamps` and adjusted related logic to handle timestamps directly.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 ### Update Metrics in Batching Mode Engine

 - **Modified Metrics**: Replaced `METRIC_FLOW_BATCHING_ENGINE_BULK_MARK_TIME_WINDOW_RANGE` with `METRIC_FLOW_BATCHING_ENGINE_BULK_MARK_TIME_WINDOW` to track the count of time windows instead of their range.
   - Files affected: `engine.rs`, `metrics.rs`
 - **New Method**: Added `len()` method to `DirtyTimeWindows` to return the number of dirty windows.
   - File affected: `state.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 **Refactor and Enhance Timestamp Handling in `bulk_insert.rs`**

 - **Refactored Timestamp Extraction**: Moved timestamp extraction logic to a new method `maybe_update_flow_dirty_window` to improve code readability and maintainability.
 - **Enhanced Flow Update Logic**: Updated the flow dirty window update mechanism to conditionally notify flownodes only if they are configured, using `table_info` and `record_batch`.
 - **Imports Adjusted**: Updated imports to reflect changes in table metadata handling, replacing `TableId` with `TableInfoRef`.

 Files affected:
 - `src/operator/src/bulk_insert.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 ## Update `handle_mark_window_dirty` Method in `flownode_impl.rs`

 - Replaced `unimplemented!()` with `unreachable!()` in the `handle_mark_window_dirty` method for both `FlowDualEngine` and `StreamingEngine` implementations in `flownode_impl.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 Update `greptime-proto` Dependency

 - Updated the `greptime-proto` dependency to a new revision in both `Cargo.lock` and `Cargo.toml`.
   - `Cargo.lock`: Changed the source revision from `f0913f179ee1d2ce428f8b85a9ea12b5f69ad636` to `17971523673f4fbc982510d3c9d6647ff642e16f`.
   - `Cargo.toml`: Updated the `greptime-proto` git revision to `17971523673f4fbc982510d3c9d6647ff642e16f`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: discord9 <discord9@163.com>
2025-06-16 08:19:14 +00:00
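
A sketch of pulling raw timestamps out of the batch's time index so flownodes can be told which windows are dirty, assuming a millisecond time index; the function name mirrors the commit message but the surrounding types are illustrative.

```rust
use arrow::array::{Array, TimestampMillisecondArray};

fn extract_timestamps(time_index: &TimestampMillisecondArray) -> Vec<i64> {
    (0..time_index.len())
        .filter(|&i| !time_index.is_null(i))
        .map(|i| time_index.value(i))
        .collect()
}
```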
Weny Xu
10bf9b11f6 fix: handle corner case in catchup where compacted entry id exceeds region last entry id (#6312)
* fix(mito2): handle corner case in catchup where compacted entry id exceeds region last entry id

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-16 06:36:31 +00:00
localhost
f4f8d65a39 fix: event api content type only check type and subtype (#6317)
* fix: event api content type only check type and subtype

Signed-off-by: paomian <xpaomian@gmail.com>

* chore: make clippy happy

Signed-off-by: paomian <xpaomian@gmail.com>

---------

Signed-off-by: paomian <xpaomian@gmail.com>
2025-06-13 18:50:05 +00:00
Lei, HUANG
b31990e881 chore: add connection info to QueryContext (#6319)
chore/add-conn-info-to-query-ctx:
 ### Add Connection Information to Query Context

 - **`src/frontend/src/instance.rs`**: Updated to use `query_ctx.conn_info().to_string()` for connection information instead of a placeholder string.
 - **`src/session/src/context.rs`**: Introduced `conn_info` field in `QueryContext` and added a method `conn_info()` to retrieve it. Updated `QueryContextBuilder` to handle `conn_info`.
 - **`src/session/src/lib.rs`**: Modified `Session` to include `conn_info` in the query context building process.

 These changes enhance the query context by incorporating connection information, allowing for more detailed session management.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-13 18:42:13 +00:00
Lei, HUANG
6da633e70d feat: support killing process (#6309)
* feat/kill-process:
 ### Add Cancellation Support and Enhance Process Management

 - **Cancellation Handle Implementation**: Introduced `CancellationHandle` in `cancellation_handle.rs` to facilitate cancellation of futures and streams.
 - **Process Management Enhancements**:
   - Updated `ProcessManager` in `process_manager.rs` to support cancellable processes using `CancellableProcess`.
   - Added `kill_process` method for terminating processes.
 - **Stream Wrapper Update**:
   - Replaced `StreamWrapper` with `CancellableStreamWrapper` in `stream_wrapper.rs` and `instance.rs` to handle stream cancellation.
 - **Error Handling**:
   - Added `StreamCancelled` error variant in `error.rs` to handle stream cancellation scenarios.
 - **gRPC Handler Update**:
   - Added `kill_process` gRPC method in `frontend_grpc_handler.rs` to allow external process termination.
 - **Dependency Updates**:
   - Updated `Cargo.lock` and `Cargo.toml` to include `common-base` and `tokio-util`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 **Enhancements and Bug Fixes**

 - **Dependency Update**: Updated `greptime-proto` dependency in `Cargo.lock` and `Cargo.toml` to a new revision.
 - **Error Handling Improvements**:
   - Modified error variants in `src/catalog/src/error.rs` and `src/common/frontend/src/error.rs` to improve error messages and handling.
   - Added `FrontendNotFound` error variant for better error specificity.
 - **Process Management Enhancements**:
   - Updated `ProcessManager` in `src/catalog/src/process_manager.rs` to include `kill_process` functionality with server address validation.
   - Enhanced `FrontendClient` trait in `src/common/frontend/src/selector.rs` to support `kill_process` requests.
 - **gRPC Handler Update**:
   - Refactored `FrontendGrpcHandler` in `src/servers/src/grpc/frontend_grpc_handler.rs` to handle `kill_process` requests asynchronously and return process status.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ### Add Kill Process Functionality

 - **`Cargo.lock`, `Cargo.toml`**: Added `common-frontend` as a dependency.
 - **`server.rs`, `builder.rs`, `instance.rs`**: Updated `FrontendInvoker` and `FrontendBuilder` to support process management.
 - **`error.rs`**: Introduced `InvalidProcessId` error for handling invalid process IDs.
 - **`statement.rs`, `kill.rs`**: Implemented `execute_kill` method in `StatementExecutor` to handle the `KILL` statement.
 - **`parser.rs`, `statement.rs`**: Updated SQL parser to recognize and parse the `KILL` statement.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ## Add Cancellation Support to Query Execution

 - **`process_manager.rs`**: Updated `CancellationHandle` initialization to use `default()` method.
 - **`cancellation_handle.rs`**: Implemented `Debug` trait for `CancellationHandle` and added `Cancellation` and `CancellableFuture` structs to support cancellable futures.
 - **`error.rs`**: Introduced `Cancelled` error variant to handle query cancellations.
 - **`instance.rs`**: Integrated `CancellableFuture` to manage query execution with cancellation support.
 - **`stream_wrapper.rs`**: Modified `CancellableStreamWrapper` to use the new `waker()` method for cancellation handling.
 - **`statement.rs`**: Added `#[allow(clippy::too_many_arguments)]` to `StatementExecutor::new` to suppress clippy warnings.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 - **Add `MetaClientMissing` Error**: Introduced a new error variant `MetaClientMissing` in `error.rs` to handle missing meta client scenarios.
 - **Refactor Cancellation Handling**: Merged `cancellation_handle.rs` into `cancellation.rs` and updated related logic in `process_manager.rs`, `instance.rs`, and `stream_wrapper.rs`.
 - **Enhance Process Management**: Improved process management logic in `process_manager.rs` to handle process cancellation more effectively.
 - **Update Tests**: Added and updated tests in `cancellation.rs` and `stream_wrapper.rs` to cover new cancellation logic and error handling.
 - **Cargo.toml Update**: Adjusted workspace settings in `Cargo.toml` for `common-frontend`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 - **Add Tests for Process Management**: Introduced multiple async tests in `process_manager.rs` to verify query registration, deregistration, cancellation, and process killing functionalities.
 - **Update Error Message in SQL Parser**: Modified the expected error message in `parser.rs` to clarify the expected token as a "process id string literal".

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ### Add Process Count Metrics to Catalog

 - **`metrics.rs`**: Introduced a new metric `PROCESS_LIST_COUNT` to track the count of running processes per catalog using `IntGaugeVec`.
 - **`process_manager.rs`**: Updated `CancellableProcess` to increment and decrement `PROCESS_LIST_COUNT` upon creation and destruction, respectively. Added a `Drop` implementation for `CancellableProcess` to handle metric updates.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ### Fix process removal logic in `process_manager.rs`

 - Corrected the condition for removing an entry from the catalog in `ProcessManager` by using `o.get()` instead of `o.get_mut()`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 - **Error Handling Improvements**:
   - Updated status codes for `Error::FrontendNotFound` and `Error::MetaClientMissing` to `StatusCode::Unexpected` in `src/catalog/src/error.rs`.
   - Changed `InvokeFrontend` error display message and status code in `src/common/frontend/src/error.rs`.
   - Added `ProcessManagerMissing` error in `src/operator/src/error.rs` and updated its handling in `src/operator/src/statement/kill.rs`.

 - **Process Management Enhancements**:
   - Added documentation for `ProcessManager` and `register_query` in `src/catalog/src/process_manager.rs`.
   - Modified `kill_process` response handling in `src/servers/src/grpc/frontend_grpc_handler.rs`.

 - **Cancellation Logic Update**:
   - Improved cancellation logic in `src/common/base/src/cancellation.rs` to use `compare_exchange` for atomic operations.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ### Add Process Kill Count Metric and Refactor Cancellation Handle

 - **Metrics Update**: Added a new metric `PROCESS_KILL_COUNT` in `metrics.rs` to track the count of completed kill process requests per catalog.
 - **Refactor Cancellation Handle**: Renamed `cancellation_handler` to `cancellation_handle` across multiple files for consistency:
   - `process_manager.rs`
   - `instance.rs`
   - `stream_wrapper.rs`
 - **Process Management**: Updated process management logic in `process_manager.rs` to increment the `PROCESS_KILL_COUNT` metric upon successful process termination.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 Update metric description in `metrics.rs`

 - Changed the description of `PROCESS_KILL_COUNT` to reflect the count of killed processes instead of running processes in `metrics.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 Update `greptime-proto` Dependency and Fix Response Field

 - **Updated Dependency**: Changed the `greptime-proto` Git revision in `Cargo.lock` and `Cargo.toml` to `f0913f1`.
 - **Code Fix**: Modified `frontend_grpc_handler.rs` to correct the response field from `found` to `success` in `KillProcessResponse`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-13 13:30:25 +00:00
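
A sketch of the cancellation wiring described above, using `tokio-util`'s `CancellationToken` in place of the commit's `CancellationHandle`; the query future and error type are illustrative.

```rust
use tokio_util::sync::CancellationToken;

async fn run_cancellable<F, T>(token: CancellationToken, query: F) -> Result<T, String>
where
    F: std::future::Future<Output = T>,
{
    tokio::select! {
        // KILL <process_id> flips the token; the in-flight query stops here.
        _ = token.cancelled() => Err("query cancelled".to_string()),
        out = query => Ok(out),
    }
}

#[tokio::main]
async fn main() {
    let token = CancellationToken::new();
    let handle = token.clone();
    tokio::spawn(async move { handle.cancel() });
    let res = run_cancellable(token, async { 42 }).await;
    // Depending on timing this is Ok(42) or the cancelled error.
    println!("{res:?}");
}
```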
zyy17
9633e794c7 fix: always use linux path style in windows platform unit tests (#6314)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-13 07:15:53 +00:00
Yingwen
eaf1e1198f refactor: Extract mito codec part into a new crate (#6307)
* chore: add a new crate mito-codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: port necessary mods for primary key codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use codec utils in mito-codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove unused mods

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove Partition::is_partition_column()

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove duplicated test utils

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove unused comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix is_partition_column check

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-13 07:14:29 +00:00
ZonaHe
505bf25505 feat: update dashboard to v0.9.3 (#6311)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-06-13 07:13:12 +00:00
Ning Sun
f1b29ece3c feat: process id for session, query context and postgres (#6301)
* feat: process id for session, query context and postgres

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: add sql functions to retrieve connection/process id

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-06-12 16:53:57 +00:00
discord9
74df12e8c0 fix: check for zero parallelism (#6310)
* fix: check for zero parallelism

Signed-off-by: discord9 <discord9@163.com>

* chore: silently use default value

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-06-12 15:58:59 +00:00
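
A sketch of the zero-parallelism guard: treat 0 as "unset" and silently fall back to a default instead of rejecting the hint. The fallback choice here is illustrative.

```rust
fn effective_parallelism(requested: usize) -> usize {
    if requested == 0 {
        // Silently use a sane default rather than erroring on a zero hint.
        std::thread::available_parallelism()
            .map(|n| n.get())
            .unwrap_or(1)
    } else {
        requested
    }
}
```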
discord9
be6a5d2da8 feat: parallelism hint in grpc (#6306)
* feat: parallelism hint in grpc

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: comment

Signed-off-by: discord9 <discord9@163.com>

* chore: docs

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-06-12 10:12:45 +00:00
Ruihang Xia
7468a8ab2a feat: organize EXPLAIN ANALYZE VERBOSE's output in JSON format (#6308)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-12 09:55:53 +00:00
Lei, HUANG
5bb0466ff2 feat: introduce file group in compaction (#6261)
* fix/file-group-in-compaction:
 ### Enhance Compaction Logic with File Grouping

 - **`run.rs`**: Introduced `FileGroup` struct to manage groups of `FileHandle` objects, allowing for more efficient compaction operations. Updated `Ranged` and `Item` trait implementations to work with `FileGroup`.
 - **`test_util.rs`**: Added `new_file_handle_with_sequence` function to support file handles with sequence numbers, enhancing test utilities.
 - **`twcs.rs`**: Modified `TwcsPicker` to utilize `FileGroup` for managing files within windows, improving compaction logic. Updated `Window` struct to use `HashMap` for storing `FileGroup` objects.
 - **`version_util.rs`**: Updated version control utilities to handle sequence numbers in file metadata, aligning with new compaction logic.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* fix/file-group-in-compaction:
 ### Add Test for File Group Assignment in TWCS

 - **Enhancements in `twcs.rs`:**
   - Added a new test `test_assign_file_groups_to_windows` to verify the correct assignment of file groups to windows.
   - Enhanced `test_assign_compacting_to_windows` with a new case to ensure files with overlapping time ranges and the same sequence are treated as one `FileGroup`.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* fix/file-group-in-compaction:
 **Enhance Compaction Task Documentation and Initialization**

 - **`run.rs`**: Added documentation for `FileGroup` to clarify its role in representing a group of files created by the same compaction task.
 - **`twcs.rs`**: Introduced comments in the `Window` struct to explain the mapping of file sequences to file groups, indicating files created from the same compaction task. Simplified the initialization of the `files` hashmap using `HashMap::from`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <lhuang@greptime.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-12 09:33:40 +00:00
Ruihang Xia
f6db419afd feat: support using expressions as literal in PromQL (#6297)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-12 08:18:10 +00:00
386 changed files with 24984 additions and 11981 deletions

15
.github/labeler.yaml vendored Normal file

@@ -0,0 +1,15 @@
ci:
- changed-files:
  - any-glob-to-any-file: .github/**
docker:
- changed-files:
  - any-glob-to-any-file: docker/**
documentation:
- changed-files:
  - any-glob-to-any-file: docs/**
dashboard:
- changed-files:
  - any-glob-to-any-file: grafana/**

42
.github/workflows/pr-labeling.yaml vendored Normal file

@@ -0,0 +1,42 @@
name: 'PR Labeling'
on:
pull_request_target:
types:
- opened
- synchronize
- reopened
permissions:
contents: read
pull-requests: write
issues: write
jobs:
labeler:
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v4
- uses: actions/labeler@v5
with:
configuration-path: ".github/labeler.yaml"
repo-token: "${{ secrets.GITHUB_TOKEN }}"
size-label:
runs-on: ubuntu-latest
steps:
- uses: pascalgn/size-label-action@v0.5.5
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
with:
sizes: >
{
"0": "XS",
"100": "S",
"300": "M",
"1000": "L",
"1500": "XL",
"2000": "XXL"
}
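Assuming pascalgn/size-label-action applies the largest threshold that does not exceed the pull request's total changed lines, a PR with 450 added plus deleted lines falls between the 300 and 1000 entries and is labeled M, while one with 2500 changed lines gets XXL.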

208
Cargo.lock generated

@@ -211,7 +211,7 @@ checksum = "d301b3b94cb4b2f23d7917810addbbaff90738e0ca2be692bd027e70d7e0330c"
[[package]]
name = "api"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"common-base",
"common-decimal",
@@ -944,7 +944,7 @@ dependencies = [
[[package]]
name = "auth"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"async-trait",
@@ -1586,7 +1586,7 @@ dependencies = [
[[package]]
name = "cache"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"catalog",
"common-error",
@@ -1610,7 +1610,7 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]]
name = "catalog"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"arrow 54.2.1",
@@ -1621,6 +1621,7 @@ dependencies = [
"cache",
"catalog",
"chrono",
"common-base",
"common-catalog",
"common-error",
"common-frontend",
@@ -1669,9 +1670,9 @@ dependencies = [
[[package]]
name = "cc"
version = "1.1.24"
version = "1.2.27"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "812acba72f0a070b003d3697490d2b55b837230ae7c6c6497f05cc2ddbb8d938"
checksum = "d487aa071b5f64da6f19a3e848e3578944b726ee5a4854b82172f02aa876bfdc"
dependencies = [
"jobserver",
"libc",
@@ -1947,8 +1948,9 @@ checksum = "1462739cb27611015575c0c11df5df7601141071f07518d56fcc1be504cbec97"
[[package]]
name = "cli"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-stream",
"async-trait",
"auth",
"base64 0.22.1",
@@ -1981,6 +1983,7 @@ dependencies = [
"meta-srv",
"nu-ansi-term",
"object-store",
"operator",
"query",
"rand 0.9.0",
"reqwest",
@@ -1990,7 +1993,7 @@ dependencies = [
"session",
"snafu 0.8.5",
"store-api",
"substrait 0.15.0",
"substrait 0.15.1",
"table",
"tempfile",
"tokio",
@@ -1999,7 +2002,7 @@ dependencies = [
[[package]]
name = "client"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"arc-swap",
@@ -2029,7 +2032,7 @@ dependencies = [
"rand 0.9.0",
"serde_json",
"snafu 0.8.5",
"substrait 0.15.0",
"substrait 0.15.1",
"substrait 0.37.3",
"tokio",
"tokio-stream",
@@ -2070,7 +2073,7 @@ dependencies = [
[[package]]
name = "cmd"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-trait",
"auth",
@@ -2131,7 +2134,7 @@ dependencies = [
"snafu 0.8.5",
"stat",
"store-api",
"substrait 0.15.0",
"substrait 0.15.1",
"table",
"temp-env",
"tempfile",
@@ -2178,7 +2181,7 @@ checksum = "55b672471b4e9f9e95499ea597ff64941a309b2cdbffcc46f2cc5e2d971fd335"
[[package]]
name = "common-base"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"anymap2",
"async-trait",
@@ -2200,11 +2203,11 @@ dependencies = [
[[package]]
name = "common-catalog"
version = "0.15.0"
version = "0.15.1"
[[package]]
name = "common-config"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"common-base",
"common-error",
@@ -2229,7 +2232,7 @@ dependencies = [
[[package]]
name = "common-datasource"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"arrow 54.2.1",
"arrow-schema 54.3.1",
@@ -2266,7 +2269,7 @@ dependencies = [
[[package]]
name = "common-decimal"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"bigdecimal 0.4.8",
"common-error",
@@ -2279,7 +2282,7 @@ dependencies = [
[[package]]
name = "common-error"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"common-macro",
"http 1.1.0",
@@ -2290,7 +2293,7 @@ dependencies = [
[[package]]
name = "common-frontend"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-trait",
"common-error",
@@ -2306,7 +2309,7 @@ dependencies = [
[[package]]
name = "common-function"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"ahash 0.8.11",
"api",
@@ -2359,7 +2362,7 @@ dependencies = [
[[package]]
name = "common-greptimedb-telemetry"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-trait",
"common-runtime",
@@ -2376,7 +2379,7 @@ dependencies = [
[[package]]
name = "common-grpc"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"arrow-flight",
@@ -2408,7 +2411,7 @@ dependencies = [
[[package]]
name = "common-grpc-expr"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"common-base",
@@ -2427,7 +2430,7 @@ dependencies = [
[[package]]
name = "common-macro"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"arc-swap",
"common-query",
@@ -2441,7 +2444,7 @@ dependencies = [
[[package]]
name = "common-mem-prof"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"anyhow",
"common-error",
@@ -2457,7 +2460,7 @@ dependencies = [
[[package]]
name = "common-meta"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"anymap2",
"api",
@@ -2522,7 +2525,7 @@ dependencies = [
[[package]]
name = "common-options"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"common-grpc",
"humantime-serde",
@@ -2531,11 +2534,11 @@ dependencies = [
[[package]]
name = "common-plugins"
version = "0.15.0"
version = "0.15.1"
[[package]]
name = "common-pprof"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"common-error",
"common-macro",
@@ -2547,7 +2550,7 @@ dependencies = [
[[package]]
name = "common-procedure"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-stream",
"async-trait",
@@ -2574,7 +2577,7 @@ dependencies = [
[[package]]
name = "common-procedure-test"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-trait",
"common-procedure",
@@ -2583,7 +2586,7 @@ dependencies = [
[[package]]
name = "common-query"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"async-trait",
@@ -2609,7 +2612,7 @@ dependencies = [
[[package]]
name = "common-recordbatch"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"arc-swap",
"common-error",
@@ -2629,7 +2632,7 @@ dependencies = [
[[package]]
name = "common-runtime"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-trait",
"clap 4.5.19",
@@ -2659,16 +2662,15 @@ dependencies = [
[[package]]
name = "common-session"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"strum 0.27.1",
]
[[package]]
name = "common-telemetry"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"atty",
"backtrace",
"common-error",
"console-subscriber",
@@ -2694,7 +2696,7 @@ dependencies = [
[[package]]
name = "common-test-util"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"client",
"common-grpc",
@@ -2707,7 +2709,7 @@ dependencies = [
[[package]]
name = "common-time"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"arrow 54.2.1",
"chrono",
@@ -2725,7 +2727,7 @@ dependencies = [
[[package]]
name = "common-version"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"build-data",
"const_format",
@@ -2735,7 +2737,7 @@ dependencies = [
[[package]]
name = "common-wal"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"common-base",
"common-error",
@@ -2758,7 +2760,7 @@ dependencies = [
[[package]]
name = "common-workload"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"common-telemetry",
@@ -3069,9 +3071,9 @@ dependencies = [
[[package]]
name = "crossbeam-channel"
version = "0.5.13"
version = "0.5.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "33480d6946193aa8033910124896ca395333cae7e2d1113d1fef6c3272217df2"
checksum = "82b8f8f868b36967f9606790d1903570de9ceaf870a7bf9fbbd3016d636a2cb2"
dependencies = [
"crossbeam-utils",
]
@@ -3714,7 +3716,7 @@ dependencies = [
[[package]]
name = "datanode"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"arrow-flight",
@@ -3767,7 +3769,7 @@ dependencies = [
"session",
"snafu 0.8.5",
"store-api",
"substrait 0.15.0",
"substrait 0.15.1",
"table",
"tokio",
"toml 0.8.19",
@@ -3776,7 +3778,7 @@ dependencies = [
[[package]]
name = "datatypes"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"arrow 54.2.1",
"arrow-array 54.2.1",
@@ -4436,7 +4438,7 @@ checksum = "e8c02a5121d4ea3eb16a80748c74f5549a5665e4c21333c6098f283870fbdea6"
[[package]]
name = "file-engine"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"async-trait",
@@ -4573,7 +4575,7 @@ checksum = "8bf7cc16383c4b8d58b9905a8509f02926ce3058053c056376248d958c9df1e8"
[[package]]
name = "flow"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"arrow 54.2.1",
@@ -4638,7 +4640,7 @@ dependencies = [
"sql",
"store-api",
"strum 0.27.1",
"substrait 0.15.0",
"substrait 0.15.1",
"table",
"tokio",
"tonic 0.12.3",
@@ -4693,10 +4695,11 @@ checksum = "6c2141d6d6c8512188a7891b4b01590a45f6dac67afb4f255c4124dbb86d4eaa"
[[package]]
name = "frontend"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"arc-swap",
"async-stream",
"async-trait",
"auth",
"bytes",
@@ -4752,9 +4755,10 @@ dependencies = [
"sqlparser 0.54.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=0cf6c04490d59435ee965edd2078e8855bd8471e)",
"store-api",
"strfmt",
"substrait 0.15.0",
"substrait 0.15.1",
"table",
"tokio",
"tokio-util",
"toml 0.8.19",
"tonic 0.12.3",
"tower 0.5.2",
@@ -5141,7 +5145,7 @@ dependencies = [
[[package]]
name = "greptime-proto"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=5f6119ac7952878d39dcde0343c4bf828d18ffc8#5f6119ac7952878d39dcde0343c4bf828d18ffc8"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=96c733f8472284d3c83a4c011dc6de9cf830c353#96c733f8472284d3c83a4c011dc6de9cf830c353"
dependencies = [
"prost 0.13.5",
"serde",
@@ -5912,7 +5916,7 @@ dependencies = [
[[package]]
name = "index"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-trait",
"asynchronous-codec",
@@ -6692,7 +6696,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4979f22fdb869068da03c9f7528f8297c6fd2606bc3a4affe42e6a823fdb8da4"
dependencies = [
"cfg-if",
"windows-targets 0.52.6",
"windows-targets 0.48.5",
]
[[package]]
@@ -6797,7 +6801,7 @@ checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24"
[[package]]
name = "log-query"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"chrono",
"common-error",
@@ -6809,7 +6813,7 @@ dependencies = [
[[package]]
name = "log-store"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-stream",
"async-trait",
@@ -7107,7 +7111,7 @@ dependencies = [
[[package]]
name = "meta-client"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"async-trait",
@@ -7135,7 +7139,7 @@ dependencies = [
[[package]]
name = "meta-srv"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"async-trait",
@@ -7226,7 +7230,7 @@ dependencies = [
[[package]]
name = "metric-engine"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"aquamarine",
@@ -7248,6 +7252,7 @@ dependencies = [
"humantime-serde",
"itertools 0.14.0",
"lazy_static",
"mito-codec",
"mito2",
"mur3",
"object-store",
@@ -7313,9 +7318,32 @@ dependencies = [
"windows-sys 0.52.0",
]
[[package]]
name = "mito-codec"
version = "0.15.1"
dependencies = [
"api",
"bytes",
"common-base",
"common-decimal",
"common-error",
"common-macro",
"common-recordbatch",
"common-telemetry",
"common-time",
"datafusion-common",
"datafusion-expr",
"datatypes",
"memcomparable",
"paste",
"serde",
"snafu 0.8.5",
"store-api",
]
[[package]]
name = "mito2"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"aquamarine",
@@ -7355,6 +7383,7 @@ dependencies = [
"lazy_static",
"log-store",
"memcomparable",
"mito-codec",
"moka",
"object-store",
"parquet",
@@ -8064,7 +8093,7 @@ dependencies = [
[[package]]
name = "object-store"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"anyhow",
"bytes",
@@ -8378,7 +8407,7 @@ dependencies = [
[[package]]
name = "operator"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"ahash 0.8.11",
"api",
@@ -8394,6 +8423,7 @@ dependencies = [
"common-catalog",
"common-datasource",
"common-error",
"common-frontend",
"common-function",
"common-grpc",
"common-grpc-expr",
@@ -8432,7 +8462,7 @@ dependencies = [
"sql",
"sqlparser 0.54.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=0cf6c04490d59435ee965edd2078e8855bd8471e)",
"store-api",
"substrait 0.15.0",
"substrait 0.15.1",
"table",
"tokio",
"tokio-util",
@@ -8699,7 +8729,7 @@ dependencies = [
[[package]]
name = "partition"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"async-trait",
@@ -8894,8 +8924,7 @@ dependencies = [
[[package]]
name = "pgwire"
version = "0.30.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4ca6c26b25be998208a13ff2f0c55b567363f34675410e6d6f1c513a150583fd"
source = "git+https://github.com/sunng87/pgwire?rev=127573d997228cfb70c7699881c568eae8131270#127573d997228cfb70c7699881c568eae8131270"
dependencies = [
"async-trait",
"bytes",
@@ -8988,7 +9017,7 @@ checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "pipeline"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"ahash 0.8.11",
"api",
@@ -9131,7 +9160,7 @@ dependencies = [
[[package]]
name = "plugins"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"auth",
"clap 4.5.19",
@@ -9444,7 +9473,7 @@ dependencies = [
[[package]]
name = "promql"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"ahash 0.8.11",
"async-trait",
@@ -9540,7 +9569,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be769465445e8c1474e9c5dac2018218498557af32d9ed057325ec9a41ae81bf"
dependencies = [
"heck 0.5.0",
"itertools 0.14.0",
"itertools 0.11.0",
"log",
"multimap",
"once_cell",
@@ -9586,7 +9615,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a56d757972c98b346a9b766e3f02746cde6dd1cd1d1d563472929fdd74bec4d"
dependencies = [
"anyhow",
"itertools 0.14.0",
"itertools 0.11.0",
"proc-macro2",
"quote",
"syn 2.0.100",
@@ -9726,7 +9755,7 @@ dependencies = [
[[package]]
name = "puffin"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-compression 0.4.13",
"async-trait",
@@ -9768,7 +9797,7 @@ dependencies = [
[[package]]
name = "query"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"ahash 0.8.11",
"api",
@@ -9834,7 +9863,7 @@ dependencies = [
"sqlparser 0.54.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=0cf6c04490d59435ee965edd2078e8855bd8471e)",
"statrs",
"store-api",
"substrait 0.15.0",
"substrait 0.15.1",
"table",
"tokio",
"tokio-stream",
@@ -10361,15 +10390,14 @@ dependencies = [
[[package]]
name = "ring"
version = "0.17.8"
version = "0.17.14"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c17fa4cb658e3583423e915b9f3acc01cceaee1860e33d59ebae66adc3a2dc0d"
checksum = "a4689e6c2294d81e88dc6261c768b63bc4fcdb852be6d1352498b114f61383b7"
dependencies = [
"cc",
"cfg-if",
"getrandom 0.2.15",
"libc",
"spin",
"untrusted",
"windows-sys 0.52.0",
]
@@ -11121,7 +11149,7 @@ dependencies = [
[[package]]
name = "servers"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"ahash 0.8.11",
"api",
@@ -11242,7 +11270,7 @@ dependencies = [
[[package]]
name = "session"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"arc-swap",
@@ -11581,7 +11609,7 @@ dependencies = [
[[package]]
name = "sql"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"chrono",
@@ -11636,7 +11664,7 @@ dependencies = [
[[package]]
name = "sqlness-runner"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-trait",
"clap 4.5.19",
@@ -11936,7 +11964,7 @@ dependencies = [
[[package]]
name = "stat"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"nix 0.30.1",
]
@@ -11962,7 +11990,7 @@ dependencies = [
[[package]]
name = "store-api"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"aquamarine",
@@ -12123,7 +12151,7 @@ dependencies = [
[[package]]
name = "substrait"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"async-trait",
"bytes",
@@ -12303,7 +12331,7 @@ dependencies = [
[[package]]
name = "table"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"async-trait",
@@ -12564,7 +12592,7 @@ checksum = "3369f5ac52d5eb6ab48c6b4ffdc8efbcad6b89c765749064ba298f2c68a16a76"
[[package]]
name = "tests-fuzz"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"arbitrary",
"async-trait",
@@ -12608,7 +12636,7 @@ dependencies = [
[[package]]
name = "tests-integration"
version = "0.15.0"
version = "0.15.1"
dependencies = [
"api",
"arrow-flight",
@@ -12675,7 +12703,7 @@ dependencies = [
"sql",
"sqlx",
"store-api",
"substrait 0.15.0",
"substrait 0.15.1",
"table",
"tempfile",
"time",
@@ -14156,7 +14184,7 @@ version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb"
dependencies = [
"windows-sys 0.59.0",
"windows-sys 0.48.0",
]
[[package]]


@@ -49,6 +49,7 @@ members = [
"src/meta-client",
"src/meta-srv",
"src/metric-engine",
"src/mito-codec",
"src/mito2",
"src/object-store",
"src/operator",
@@ -70,7 +71,7 @@ members = [
resolver = "2"
[workspace.package]
version = "0.15.0"
version = "0.15.1"
edition = "2021"
license = "Apache-2.0"
@@ -133,7 +134,7 @@ etcd-client = "0.14"
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "5f6119ac7952878d39dcde0343c4bf828d18ffc8" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "96c733f8472284d3c83a4c011dc6de9cf830c353" }
hex = "0.4"
http = "1"
humantime = "2.1"
@@ -274,6 +275,7 @@ log-store = { path = "src/log-store" }
meta-client = { path = "src/meta-client" }
meta-srv = { path = "src/meta-srv" }
metric-engine = { path = "src/metric-engine" }
mito-codec = { path = "src/mito-codec" }
mito2 = { path = "src/mito2" }
object-store = { path = "src/object-store" }
operator = { path = "src/operator" }


@@ -189,7 +189,8 @@ We invite you to engage and contribute!
- [Official Website](https://greptime.com/)
- [Blog](https://greptime.com/blogs/)
- [LinkedIn](https://www.linkedin.com/company/greptime/)
- [Twitter](https://twitter.com/greptime)
- [X (Twitter)](https://X.com/greptime)
- [YouTube](https://www.youtube.com/@greptime)
## License


@@ -123,6 +123,7 @@
| `storage.http_client.connect_timeout` | String | `30s` | The timeout for only the connect phase of a http client. |
| `storage.http_client.timeout` | String | `30s` | The total request timeout, applied from when the request starts connecting until the response body has finished.<br/>Also considered a total deadline. |
| `storage.http_client.pool_idle_timeout` | String | `90s` | The timeout for idle sockets being kept-alive. |
| `storage.http_client.skip_ssl_validation` | Bool | `false` | To skip the ssl verification<br/>**Security Notice**: Setting `skip_ssl_validation = true` disables certificate verification, making connections vulnerable to man-in-the-middle attacks. Only use this in development or trusted private networks. |
| `[[region_engine]]` | -- | -- | The region engine options. You can configure multiple region engines. |
| `region_engine.mito` | -- | -- | The Mito engine options. |
| `region_engine.mito.num_workers` | Integer | `8` | Number of region workers. |
@@ -471,6 +472,7 @@
| `storage.http_client.connect_timeout` | String | `30s` | The timeout for only the connect phase of a http client. |
| `storage.http_client.timeout` | String | `30s` | The total request timeout, applied from when the request starts connecting until the response body has finished.<br/>Also considered a total deadline. |
| `storage.http_client.pool_idle_timeout` | String | `90s` | The timeout for idle sockets being kept-alive. |
| `storage.http_client.skip_ssl_validation` | Bool | `false` | To skip the ssl verification<br/>**Security Notice**: Setting `skip_ssl_validation = true` disables certificate verification, making connections vulnerable to man-in-the-middle attacks. Only use this in development or trusted private networks. |
| `[[region_engine]]` | -- | -- | The region engine options. You can configure multiple region engines. |
| `region_engine.mito` | -- | -- | The Mito engine options. |
| `region_engine.mito.num_workers` | Integer | `8` | Number of region workers. |


@@ -367,6 +367,10 @@ timeout = "30s"
## The timeout for idle sockets being kept-alive.
pool_idle_timeout = "90s"
## To skip the ssl verification
## **Security Notice**: Setting `skip_ssl_validation = true` disables certificate verification, making connections vulnerable to man-in-the-middle attacks. Only use this in development or trusted private networks.
skip_ssl_validation = false
# Custom storage options
# [[storage.providers]]
# name = "S3"


@@ -458,6 +458,10 @@ timeout = "30s"
## The timeout for idle sockets being kept-alive.
pool_idle_timeout = "90s"
## To skip the ssl verification
## **Security Notice**: Setting `skip_ssl_validation = true` disables certificate verification, making connections vulnerable to man-in-the-middle attacks. Only use this in development or trusted private networks.
skip_ssl_validation = false
# Custom storage options
# [[storage.providers]]
# name = "S3"

File diff suppressed because it is too large


@@ -70,6 +70,7 @@
| Inflight Flush | `greptime_mito_inflight_flush_count` | `timeseries` | Ongoing flush task count | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]` |
| Compaction Input/Output Bytes | `sum by(instance, pod) (greptime_mito_compaction_input_bytes)`<br/>`sum by(instance, pod) (greptime_mito_compaction_output_bytes)` | `timeseries` | Compaction input/output bytes | `prometheus` | `bytes` | `[{{instance}}]-[{{pod}}]-input` |
| Region Worker Handle Bulk Insert Requests | `histogram_quantile(0.95, sum by(le,instance, stage, pod) (rate(greptime_region_worker_handle_write_bucket[$__rate_interval])))`<br/>`sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_sum[$__rate_interval]))/sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to handle bulk insert region requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Active Series and Field Builders Count | `sum by(instance, pod) (greptime_mito_memtable_active_series_count)`<br/>`sum by(instance, pod) (greptime_mito_memtable_field_builder_count)` | `timeseries` | Active memtable series and field builder counts | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]-series` |
| Region Worker Convert Requests | `histogram_quantile(0.95, sum by(le, instance, stage, pod) (rate(greptime_datanode_convert_region_request_bucket[$__rate_interval])))`<br/>`sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_sum[$__rate_interval]))/sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to decode requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
# OpenDAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |


@@ -612,6 +612,21 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{stage}}]-AVG'
- title: Active Series and Field Builders Count
type: timeseries
description: Active memtable series and field builder counts
unit: none
queries:
- expr: sum by(instance, pod) (greptime_mito_memtable_active_series_count)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-series'
- expr: sum by(instance, pod) (greptime_mito_memtable_field_builder_count)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-field_builders'
- title: Region Worker Convert Requests
type: timeseries
description: Per-stage elapsed time for region worker to decode requests.

File diff suppressed because it is too large


@@ -70,6 +70,7 @@
| Inflight Flush | `greptime_mito_inflight_flush_count` | `timeseries` | Ongoing flush task count | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]` |
| Compaction Input/Output Bytes | `sum by(instance, pod) (greptime_mito_compaction_input_bytes)`<br/>`sum by(instance, pod) (greptime_mito_compaction_output_bytes)` | `timeseries` | Compaction input/output bytes | `prometheus` | `bytes` | `[{{instance}}]-[{{pod}}]-input` |
| Region Worker Handle Bulk Insert Requests | `histogram_quantile(0.95, sum by(le,instance, stage, pod) (rate(greptime_region_worker_handle_write_bucket[$__rate_interval])))`<br/>`sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_sum[$__rate_interval]))/sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to handle bulk insert region requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Active Series and Field Builders Count | `sum by(instance, pod) (greptime_mito_memtable_active_series_count)`<br/>`sum by(instance, pod) (greptime_mito_memtable_field_builder_count)` | `timeseries` | Active memtable series and field builder counts | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]-series` |
| Region Worker Convert Requests | `histogram_quantile(0.95, sum by(le, instance, stage, pod) (rate(greptime_datanode_convert_region_request_bucket[$__rate_interval])))`<br/>`sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_sum[$__rate_interval]))/sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to decode requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
# OpenDAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |


@@ -612,6 +612,21 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{stage}}]-AVG'
- title: Active Series and Field Builders Count
type: timeseries
description: Active memtable series and field builder counts
unit: none
queries:
- expr: sum by(instance, pod) (greptime_mito_memtable_active_series_count)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-series'
- expr: sum by(instance, pod) (greptime_mito_memtable_field_builder_count)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-field_builders'
- title: Region Worker Convert Requests
type: timeseries
description: Per-stage elapsed time for region worker to decode requests.


@@ -226,18 +226,20 @@ mod tests {
assert!(options.is_none());
let mut schema = ColumnSchema::new("test", ConcreteDataType::string_datatype(), true)
.with_fulltext_options(FulltextOptions {
enable: true,
analyzer: FulltextAnalyzer::English,
case_sensitive: false,
backend: FulltextBackend::Bloom,
})
.with_fulltext_options(FulltextOptions::new_unchecked(
true,
FulltextAnalyzer::English,
false,
FulltextBackend::Bloom,
10240,
0.01,
))
.unwrap();
schema.set_inverted_index(true);
let options = options_from_column_schema(&schema).unwrap();
assert_eq!(
options.options.get(FULLTEXT_GRPC_KEY).unwrap(),
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\"}"
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":10240,\"false-positive-rate-in-10000\":100}"
);
assert_eq!(
options.options.get(INVERTED_INDEX_GRPC_KEY).unwrap(),
@@ -247,16 +249,18 @@ mod tests {
#[test]
fn test_options_with_fulltext() {
let fulltext = FulltextOptions {
enable: true,
analyzer: FulltextAnalyzer::English,
case_sensitive: false,
backend: FulltextBackend::Bloom,
};
let fulltext = FulltextOptions::new_unchecked(
true,
FulltextAnalyzer::English,
false,
FulltextBackend::Bloom,
10240,
0.01,
);
let options = options_from_fulltext(&fulltext).unwrap().unwrap();
assert_eq!(
options.options.get(FULLTEXT_GRPC_KEY).unwrap(),
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\"}"
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":10240,\"false-positive-rate-in-10000\":100}"
);
}
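
A note on the updated expectation above: assuming the serialized false-positive-rate-in-10000 field stores the false positive rate scaled by 10,000, the constructor argument 0.01 serializes as 0.01 × 10000 = 100, which matches both JSON strings in these tests.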


@@ -17,6 +17,7 @@ arrow-schema.workspace = true
async-stream.workspace = true
async-trait.workspace = true
bytes.workspace = true
common-base.workspace = true
common-catalog.workspace = true
common-error.workspace = true
common-frontend.workspace = true


@@ -278,12 +278,25 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to list frontend nodes"))]
ListProcess {
#[snafu(display("Failed to invoke frontend services"))]
InvokeFrontend {
source: common_frontend::error::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Meta client is not provided"))]
MetaClientMissing {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to find frontend node: {}", addr))]
FrontendNotFound {
addr: String,
#[snafu(implicit)]
location: Location,
},
}
impl Error {
@@ -352,7 +365,10 @@ impl ErrorExt for Error {
Error::GetViewCache { source, .. } | Error::GetTableCache { source, .. } => {
source.status_code()
}
Error::ListProcess { source, .. } => source.status_code(),
Error::InvokeFrontend { source, .. } => source.status_code(),
Error::FrontendNotFound { .. } | Error::MetaClientMissing { .. } => {
StatusCode::Unexpected
}
}
}


@@ -22,11 +22,13 @@ use common_catalog::consts::{
PG_CATALOG_NAME,
};
use common_error::ext::BoxedError;
use common_meta::cache::{LayeredCacheRegistryRef, ViewInfoCacheRef};
use common_meta::cache::{
LayeredCacheRegistryRef, TableRoute, TableRouteCacheRef, ViewInfoCacheRef,
};
use common_meta::key::catalog_name::CatalogNameKey;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::schema_name::SchemaNameKey;
use common_meta::key::table_info::TableInfoValue;
use common_meta::key::table_info::{TableInfoManager, TableInfoValue};
use common_meta::key::table_name::TableNameKey;
use common_meta::key::{TableMetadataManager, TableMetadataManagerRef};
use common_meta::kv_backend::KvBackendRef;
@@ -37,6 +39,7 @@ use moka::sync::Cache;
use partition::manager::{PartitionRuleManager, PartitionRuleManagerRef};
use session::context::{Channel, QueryContext};
use snafu::prelude::*;
use store_api::metric_engine_consts::METRIC_ENGINE_NAME;
use table::dist_table::DistTable;
use table::metadata::TableId;
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
@@ -140,6 +143,61 @@ impl KvBackendCatalogManager {
pub fn procedure_manager(&self) -> Option<ProcedureManagerRef> {
self.procedure_manager.clone()
}
// Override logical table's partition key indices with physical table's.
async fn override_logical_table_partition_key_indices(
table_route_cache: &TableRouteCacheRef,
table_info_manager: &TableInfoManager,
table: TableRef,
) -> Result<TableRef> {
// If the table is not a metric table, return the table directly.
if table.table_info().meta.engine != METRIC_ENGINE_NAME {
return Ok(table);
}
if let Some(table_route_value) = table_route_cache
.get(table.table_info().table_id())
.await
.context(TableMetadataManagerSnafu)?
&& let TableRoute::Logical(logical_route) = &*table_route_value
&& let Some(physical_table_info_value) = table_info_manager
.get(logical_route.physical_table_id())
.await
.context(TableMetadataManagerSnafu)?
{
let mut new_table_info = (*table.table_info()).clone();
// Remap partition key indices from physical table to logical table
new_table_info.meta.partition_key_indices = physical_table_info_value
.table_info
.meta
.partition_key_indices
.iter()
.filter_map(|&physical_index| {
// Get the column name from the physical table using the physical index
physical_table_info_value
.table_info
.meta
.schema
.column_schemas
.get(physical_index)
.and_then(|physical_column| {
// Find the corresponding index in the logical table schema
new_table_info
.meta
.schema
.column_index_by_name(physical_column.name.as_str())
})
})
.collect();
let new_table = DistTable::table(Arc::new(new_table_info));
return Ok(new_table);
}
Ok(table)
}
}
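
A self-contained sketch of the index remapping performed by override_logical_table_partition_key_indices above, using plain vectors of column names in place of the real schema types (the names here are illustrative):

/// Remap physical partition key indices to the corresponding indices in the
/// logical table's schema, dropping columns the logical table does not have.
fn remap_partition_key_indices(
    physical_indices: &[usize],
    physical_columns: &[&str],
    logical_columns: &[&str],
) -> Vec<usize> {
    physical_indices
        .iter()
        .filter_map(|&physical_index| {
            physical_columns
                .get(physical_index)
                .and_then(|name| logical_columns.iter().position(|c| c == name))
        })
        .collect()
}

fn main() {
    let physical_columns = ["ts", "host", "value", "idc"];
    let logical_columns = ["host", "value", "idc"];
    // Physical partition keys `host` and `idc` map to logical indices 0 and 2.
    assert_eq!(
        remap_partition_key_indices(&[1, 3], &physical_columns, &logical_columns),
        vec![0, 2]
    );
}
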
#[async_trait::async_trait]
@@ -266,16 +324,28 @@ impl CatalogManager for KvBackendCatalogManager {
let table_cache: TableCacheRef = self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_cache",
})?;
if let Some(table) = table_cache
let table = table_cache
.get_by_ref(&TableName {
catalog_name: catalog_name.to_string(),
schema_name: schema_name.to_string(),
table_name: table_name.to_string(),
})
.await
.context(GetTableCacheSnafu)?
{
return Ok(Some(table));
.context(GetTableCacheSnafu)?;
if let Some(table) = table {
let table_route_cache: TableRouteCacheRef =
self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_route_cache",
})?;
return Self::override_logical_table_partition_key_indices(
&table_route_cache,
self.table_metadata_manager.table_info_manager(),
table,
)
.await
.map(Some);
}
if channel == Channel::Postgres {
@@ -288,7 +358,7 @@ impl CatalogManager for KvBackendCatalogManager {
}
}
return Ok(None);
Ok(None)
}
async fn tables_by_ids(
@@ -340,8 +410,20 @@ impl CatalogManager for KvBackendCatalogManager {
let catalog = catalog.to_string();
let schema = schema.to_string();
let semaphore = Arc::new(Semaphore::new(CONCURRENCY));
let table_route_cache: Result<TableRouteCacheRef> =
self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_route_cache",
});
common_runtime::spawn_global(async move {
let table_route_cache = match table_route_cache {
Ok(table_route_cache) => table_route_cache,
Err(e) => {
let _ = tx.send(Err(e)).await;
return;
}
};
let table_id_stream = metadata_manager
.table_name_manager()
.tables(&catalog, &schema)
@@ -368,6 +450,7 @@ impl CatalogManager for KvBackendCatalogManager {
let metadata_manager = metadata_manager.clone();
let tx = tx.clone();
let semaphore = semaphore.clone();
let table_route_cache = table_route_cache.clone();
common_runtime::spawn_global(async move {
// we don't explicitly close the semaphore so just ignore the potential error.
let _ = semaphore.acquire().await;
@@ -385,6 +468,16 @@ impl CatalogManager for KvBackendCatalogManager {
};
for table in table_info_values.into_values().map(build_table) {
let table = if let Ok(table) = table {
Self::override_logical_table_partition_key_indices(
&table_route_cache,
metadata_manager.table_info_manager(),
table,
)
.await
} else {
table
};
if tx.send(table).await.is_err() {
return;
}


@@ -14,6 +14,7 @@
#![feature(assert_matches)]
#![feature(try_blocks)]
#![feature(let_chains)]
use std::any::Any;
use std::fmt::{Debug, Formatter};


@@ -34,4 +34,20 @@ lazy_static! {
register_histogram!("greptime_catalog_kv_get", "catalog kv get").unwrap();
pub static ref METRIC_CATALOG_KV_BATCH_GET: Histogram =
register_histogram!("greptime_catalog_kv_batch_get", "catalog kv batch get").unwrap();
/// Count of running process in each catalog.
pub static ref PROCESS_LIST_COUNT: IntGaugeVec = register_int_gauge_vec!(
"greptime_process_list_count",
"Running process count per catalog",
&["catalog"]
)
.unwrap();
/// Count of killed process in each catalog.
pub static ref PROCESS_KILL_COUNT: IntCounterVec = register_int_counter_vec!(
"greptime_process_kill_count",
"Completed kill process requests count",
&["catalog"]
)
.unwrap();
}


@@ -14,24 +14,33 @@
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::sync::atomic::{AtomicU64, Ordering};
use std::fmt::{Debug, Formatter};
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, RwLock};
use api::v1::frontend::{ListProcessRequest, ProcessInfo};
use api::v1::frontend::{KillProcessRequest, ListProcessRequest, ProcessInfo};
use common_base::cancellation::CancellationHandle;
use common_frontend::selector::{FrontendSelector, MetaClientSelector};
use common_telemetry::{debug, info};
use common_telemetry::{debug, info, warn};
use common_time::util::current_time_millis;
use meta_client::MetaClientRef;
use snafu::ResultExt;
use snafu::{ensure, OptionExt, ResultExt};
use crate::error;
use crate::metrics::{PROCESS_KILL_COUNT, PROCESS_LIST_COUNT};
pub type ProcessId = u32;
pub type ProcessManagerRef = Arc<ProcessManager>;
/// Query process manager.
pub struct ProcessManager {
/// Local frontend server address,
server_addr: String,
next_id: AtomicU64,
catalogs: RwLock<HashMap<String, HashMap<u64, ProcessInfo>>>,
/// Next process id for local queries.
next_id: AtomicU32,
/// Running process per catalog.
catalogs: RwLock<HashMap<String, HashMap<ProcessId, CancellableProcess>>>,
/// Frontend selector to locate frontend nodes.
frontend_selector: Option<MetaClientSelector>,
}
@@ -50,15 +59,16 @@ impl ProcessManager {
impl ProcessManager {
/// Registers a submitted query. Use the provided id if present.
#[must_use]
pub fn register_query(
self: &Arc<Self>,
catalog: String,
schemas: Vec<String>,
query: String,
client: String,
id: Option<u64>,
query_id: Option<ProcessId>,
) -> Ticket {
let id = id.unwrap_or_else(|| self.next_id.fetch_add(1, Ordering::Relaxed));
let id = query_id.unwrap_or_else(|| self.next_id.fetch_add(1, Ordering::Relaxed));
let process = ProcessInfo {
id,
catalog: catalog.clone(),
@@ -68,53 +78,53 @@ impl ProcessManager {
client,
frontend: self.server_addr.clone(),
};
let cancellation_handle = Arc::new(CancellationHandle::default());
let cancellable_process = CancellableProcess::new(cancellation_handle.clone(), process);
self.catalogs
.write()
.unwrap()
.entry(catalog.clone())
.or_default()
.insert(id, process);
.insert(id, cancellable_process);
Ticket {
catalog,
manager: self.clone(),
id,
cancellation_handle,
}
}
/// Generates the next process id.
pub fn next_id(&self) -> u64 {
pub fn next_id(&self) -> u32 {
self.next_id.fetch_add(1, Ordering::Relaxed)
}
/// De-register a query from process list.
pub fn deregister_query(&self, catalog: String, id: u64) {
pub fn deregister_query(&self, catalog: String, id: ProcessId) {
if let Entry::Occupied(mut o) = self.catalogs.write().unwrap().entry(catalog) {
let process = o.get_mut().remove(&id);
debug!("Deregister process: {:?}", process);
if o.get_mut().is_empty() {
if o.get().is_empty() {
o.remove();
}
}
}
pub fn deregister_all_queries(&self) {
self.catalogs.write().unwrap().clear();
info!("All queries on {} has been deregistered", self.server_addr);
}
/// List local running processes in given catalog.
pub fn local_processes(&self, catalog: Option<&str>) -> error::Result<Vec<ProcessInfo>> {
let catalogs = self.catalogs.read().unwrap();
let result = if let Some(catalog) = catalog {
if let Some(catalogs) = catalogs.get(catalog) {
catalogs.values().cloned().collect()
catalogs.values().map(|p| p.process.clone()).collect()
} else {
vec![]
}
} else {
catalogs
.values()
.flat_map(|v| v.values().cloned())
.flat_map(|v| v.values().map(|p| p.process.clone()))
.collect()
};
Ok(result)
@@ -129,27 +139,90 @@ impl ProcessManager {
let frontends = remote_frontend_selector
.select(|node| node.peer.addr != self.server_addr)
.await
.context(error::ListProcessSnafu)?;
.context(error::InvokeFrontendSnafu)?;
for mut f in frontends {
processes.extend(
f.list_process(ListProcessRequest {
let result = f
.list_process(ListProcessRequest {
catalog: catalog.unwrap_or_default().to_string(),
})
.await
.context(error::ListProcessSnafu)?
.processes,
);
.context(error::InvokeFrontendSnafu);
match result {
Ok(resp) => {
processes.extend(resp.processes);
}
Err(e) => {
warn!(e; "Skipping failing node: {:?}", f)
}
}
}
}
processes.extend(self.local_processes(catalog)?);
Ok(processes)
}
/// Kills query with provided catalog and id.
pub async fn kill_process(
&self,
server_addr: String,
catalog: String,
id: ProcessId,
) -> error::Result<bool> {
if server_addr == self.server_addr {
self.kill_local_process(catalog, id).await
} else {
let mut nodes = self
.frontend_selector
.as_ref()
.context(error::MetaClientMissingSnafu)?
.select(|node| node.peer.addr == server_addr)
.await
.context(error::InvokeFrontendSnafu)?;
ensure!(
!nodes.is_empty(),
error::FrontendNotFoundSnafu { addr: server_addr }
);
let request = KillProcessRequest {
server_addr,
catalog,
process_id: id,
};
nodes[0]
.kill_process(request)
.await
.context(error::InvokeFrontendSnafu)?;
Ok(true)
}
}
/// Kills local query with provided catalog and id.
pub async fn kill_local_process(&self, catalog: String, id: ProcessId) -> error::Result<bool> {
if let Some(catalogs) = self.catalogs.write().unwrap().get_mut(&catalog) {
if let Some(process) = catalogs.remove(&id) {
process.handle.cancel();
info!(
"Killed process, catalog: {}, id: {:?}",
process.process.catalog, process.process.id
);
PROCESS_KILL_COUNT.with_label_values(&[&catalog]).inc();
Ok(true)
} else {
debug!("Failed to kill process, id not found: {}", id);
Ok(false)
}
} else {
debug!("Failed to kill process, catalog not found: {}", catalog);
Ok(false)
}
}
}
pub struct Ticket {
pub(crate) catalog: String,
pub(crate) manager: ProcessManagerRef,
pub(crate) id: u64,
pub(crate) id: ProcessId,
pub cancellation_handle: Arc<CancellationHandle>,
}
impl Drop for Ticket {
@@ -159,6 +232,37 @@ impl Drop for Ticket {
}
}
struct CancellableProcess {
handle: Arc<CancellationHandle>,
process: ProcessInfo,
}
impl Drop for CancellableProcess {
fn drop(&mut self) {
PROCESS_LIST_COUNT
.with_label_values(&[&self.process.catalog])
.dec();
}
}
impl CancellableProcess {
fn new(handle: Arc<CancellationHandle>, process: ProcessInfo) -> Self {
PROCESS_LIST_COUNT
.with_label_values(&[&process.catalog])
.inc();
Self { handle, process }
}
}
impl Debug for CancellableProcess {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("CancellableProcess")
.field("cancelled", &self.handle.is_cancelled())
.field("process", &self.process)
.finish()
}
}
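
Note the RAII bookkeeping here: PROCESS_LIST_COUNT is incremented in CancellableProcess::new and decremented in its Drop impl, so the per-catalog gauge stays accurate no matter how an entry leaves the map (normal deregistration, a kill, or deregister_all_queries).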
#[cfg(test)]
mod tests {
use std::sync::Arc;
@@ -185,4 +289,212 @@ mod tests {
drop(ticket);
assert_eq!(process_manager.local_processes(None).unwrap().len(), 0);
}
#[tokio::test]
async fn test_register_query_with_custom_id() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let custom_id = 12345;
let ticket = process_manager.clone().register_query(
"public".to_string(),
vec!["test".to_string()],
"SELECT * FROM table".to_string(),
"client1".to_string(),
Some(custom_id),
);
assert_eq!(ticket.id, custom_id);
let running_processes = process_manager.local_processes(None).unwrap();
assert_eq!(running_processes.len(), 1);
assert_eq!(running_processes[0].id, custom_id);
assert_eq!(&running_processes[0].client, "client1");
}
#[tokio::test]
async fn test_multiple_queries_same_catalog() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let ticket1 = process_manager.clone().register_query(
"public".to_string(),
vec!["schema1".to_string()],
"SELECT * FROM table1".to_string(),
"client1".to_string(),
None,
);
let ticket2 = process_manager.clone().register_query(
"public".to_string(),
vec!["schema2".to_string()],
"SELECT * FROM table2".to_string(),
"client2".to_string(),
None,
);
let running_processes = process_manager.local_processes(Some("public")).unwrap();
assert_eq!(running_processes.len(), 2);
// Verify both processes are present
let ids: Vec<u32> = running_processes.iter().map(|p| p.id).collect();
assert!(ids.contains(&ticket1.id));
assert!(ids.contains(&ticket2.id));
}
#[tokio::test]
async fn test_multiple_catalogs() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let _ticket1 = process_manager.clone().register_query(
"catalog1".to_string(),
vec!["schema1".to_string()],
"SELECT * FROM table1".to_string(),
"client1".to_string(),
None,
);
let _ticket2 = process_manager.clone().register_query(
"catalog2".to_string(),
vec!["schema2".to_string()],
"SELECT * FROM table2".to_string(),
"client2".to_string(),
None,
);
// Test listing processes for specific catalog
let catalog1_processes = process_manager.local_processes(Some("catalog1")).unwrap();
assert_eq!(catalog1_processes.len(), 1);
assert_eq!(&catalog1_processes[0].catalog, "catalog1");
let catalog2_processes = process_manager.local_processes(Some("catalog2")).unwrap();
assert_eq!(catalog2_processes.len(), 1);
assert_eq!(&catalog2_processes[0].catalog, "catalog2");
// Test listing all processes
let all_processes = process_manager.local_processes(None).unwrap();
assert_eq!(all_processes.len(), 2);
}
#[tokio::test]
async fn test_deregister_query() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let ticket = process_manager.clone().register_query(
"public".to_string(),
vec!["test".to_string()],
"SELECT * FROM table".to_string(),
"client1".to_string(),
None,
);
assert_eq!(process_manager.local_processes(None).unwrap().len(), 1);
process_manager.deregister_query("public".to_string(), ticket.id);
assert_eq!(process_manager.local_processes(None).unwrap().len(), 0);
}
#[tokio::test]
async fn test_cancellation_handle() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let ticket = process_manager.clone().register_query(
"public".to_string(),
vec!["test".to_string()],
"SELECT * FROM table".to_string(),
"client1".to_string(),
None,
);
assert!(!ticket.cancellation_handle.is_cancelled());
ticket.cancellation_handle.cancel();
assert!(ticket.cancellation_handle.is_cancelled());
}
#[tokio::test]
async fn test_kill_local_process() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let ticket = process_manager.clone().register_query(
"public".to_string(),
vec!["test".to_string()],
"SELECT * FROM table".to_string(),
"client1".to_string(),
None,
);
assert!(!ticket.cancellation_handle.is_cancelled());
let killed = process_manager
.kill_process(
"127.0.0.1:8000".to_string(),
"public".to_string(),
ticket.id,
)
.await
.unwrap();
assert!(killed);
assert_eq!(process_manager.local_processes(None).unwrap().len(), 0);
}
#[tokio::test]
async fn test_kill_nonexistent_process() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let killed = process_manager
.kill_process("127.0.0.1:8000".to_string(), "public".to_string(), 999)
.await
.unwrap();
assert!(!killed);
}
#[tokio::test]
async fn test_kill_process_nonexistent_catalog() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let killed = process_manager
.kill_process("127.0.0.1:8000".to_string(), "nonexistent".to_string(), 1)
.await
.unwrap();
assert!(!killed);
}
#[tokio::test]
async fn test_process_info_fields() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
let _ticket = process_manager.clone().register_query(
"test_catalog".to_string(),
vec!["schema1".to_string(), "schema2".to_string()],
"SELECT COUNT(*) FROM users WHERE age > 18".to_string(),
"test_client".to_string(),
Some(42),
);
let processes = process_manager.local_processes(None).unwrap();
assert_eq!(processes.len(), 1);
let process = &processes[0];
assert_eq!(process.id, 42);
assert_eq!(&process.catalog, "test_catalog");
assert_eq!(process.schemas, vec!["schema1", "schema2"]);
assert_eq!(&process.query, "SELECT COUNT(*) FROM users WHERE age > 18");
assert_eq!(&process.client, "test_client");
assert_eq!(&process.frontend, "127.0.0.1:8000");
assert!(process.start_timestamp > 0);
}
#[tokio::test]
async fn test_ticket_drop_deregisters_process() {
let process_manager = Arc::new(ProcessManager::new("127.0.0.1:8000".to_string(), None));
{
let _ticket = process_manager.clone().register_query(
"public".to_string(),
vec!["test".to_string()],
"SELECT * FROM table".to_string(),
"client1".to_string(),
None,
);
// Process should be registered
assert_eq!(process_manager.local_processes(None).unwrap().len(), 1);
} // ticket goes out of scope here
// Process should be automatically deregistered
assert_eq!(process_manager.local_processes(None).unwrap().len(), 0);
}
}


@@ -16,6 +16,7 @@ mysql_kvbackend = ["common-meta/mysql_kvbackend", "meta-srv/mysql_kvbackend"]
workspace = true
[dependencies]
async-stream.workspace = true
async-trait.workspace = true
auth.workspace = true
base64.workspace = true
@@ -50,6 +51,7 @@ meta-client.workspace = true
meta-srv.workspace = true
nu-ansi-term = "0.46"
object-store.workspace = true
operator.workspace = true
query.workspace = true
rand.workspace = true
reqwest.workspace = true
@@ -65,6 +67,7 @@ tokio.workspace = true
tracing-appender.workspace = true
[dev-dependencies]
common-meta = { workspace = true, features = ["testing"] }
common-version.workspace = true
serde.workspace = true
tempfile.workspace = true


@@ -17,8 +17,10 @@ use std::any::Any;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use common_meta::peer::Peer;
use object_store::Error as ObjectStoreError;
use snafu::{Location, Snafu};
use store_api::storage::TableId;
#[derive(Snafu)]
#[snafu(visibility(pub))]
@@ -73,6 +75,20 @@ pub enum Error {
source: common_meta::error::Error,
},
#[snafu(display("Failed to get table metadata"))]
TableMetadata {
#[snafu(implicit)]
location: Location,
source: common_meta::error::Error,
},
#[snafu(display("Unexpected error: {}", msg))]
Unexpected {
msg: String,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Missing config, msg: {}", msg))]
MissingConfig {
msg: String,
@@ -222,6 +238,13 @@ pub enum Error {
location: Location,
},
#[snafu(display("Table not found: {table_id}"))]
TableNotFound {
table_id: TableId,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("OpenDAL operator failed"))]
OpenDal {
#[snafu(implicit)]
@@ -267,6 +290,29 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to init backend"))]
InitBackend {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: ObjectStoreError,
},
#[snafu(display("Covert column schemas to defs failed"))]
CovertColumnSchemasToDefs {
#[snafu(implicit)]
location: Location,
source: operator::error::Error,
},
#[snafu(display("Failed to send request to datanode: {}", peer))]
SendRequestToDatanode {
peer: Peer,
#[snafu(implicit)]
location: Location,
source: common_meta::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -274,9 +320,9 @@ pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::InitMetadata { source, .. } | Error::InitDdlManager { source, .. } => {
source.status_code()
}
Error::InitMetadata { source, .. }
| Error::InitDdlManager { source, .. }
| Error::TableMetadata { source, .. } => source.status_code(),
Error::MissingConfig { .. }
| Error::LoadLayeredConfig { .. }
@@ -290,6 +336,9 @@ impl ErrorExt for Error {
| Error::InvalidArguments { .. }
| Error::ParseProxyOpts { .. } => StatusCode::InvalidArguments,
Error::CovertColumnSchemasToDefs { source, .. } => source.status_code(),
Error::SendRequestToDatanode { source, .. } => source.status_code(),
Error::StartProcedureManager { source, .. }
| Error::StopProcedureManager { source, .. } => source.status_code(),
Error::StartWalOptionsAllocator { source, .. } => source.status_code(),
@@ -297,6 +346,7 @@ impl ErrorExt for Error {
Error::ParseSql { source, .. } | Error::PlanStatement { source, .. } => {
source.status_code()
}
Error::Unexpected { .. } => StatusCode::Unexpected,
Error::SerdeJson { .. }
| Error::FileIo { .. }
@@ -305,7 +355,7 @@ impl ErrorExt for Error {
| Error::BuildClient { .. } => StatusCode::Unexpected,
Error::Other { source, .. } => source.status_code(),
Error::OpenDal { .. } => StatusCode::Internal,
Error::OpenDal { .. } | Error::InitBackend { .. } => StatusCode::Internal,
Error::S3ConfigNotSet { .. }
| Error::OutputDirNotSet { .. }
| Error::EmptyStoreAddrs { .. } => StatusCode::InvalidArguments,
@@ -314,6 +364,7 @@ impl ErrorExt for Error {
Error::CacheRequired { .. } | Error::BuildCacheRegistry { .. } => StatusCode::Internal,
Error::MetaClientInit { source, .. } => source.status_code(),
Error::TableNotFound { .. } => StatusCode::TableNotFound,
Error::SchemaNotFound { .. } => StatusCode::DatabaseNotFound,
}
}


@@ -14,29 +14,39 @@
mod common;
mod control;
mod repair;
mod snapshot;
mod utils;
use clap::Subcommand;
use common_error::ext::BoxedError;
use crate::metadata::control::ControlCommand;
use crate::metadata::control::{DelCommand, GetCommand};
use crate::metadata::repair::RepairLogicalTablesCommand;
use crate::metadata::snapshot::SnapshotCommand;
use crate::Tool;
/// Command for managing metadata operations, including saving metadata snapshots and restoring metadata from snapshots.
/// Command for managing metadata operations,
/// including saving and restoring metadata snapshots,
/// controlling metadata operations, and diagnosing and repairing metadata.
#[derive(Subcommand)]
pub enum MetadataCommand {
#[clap(subcommand)]
Snapshot(SnapshotCommand),
#[clap(subcommand)]
Control(ControlCommand),
Get(GetCommand),
#[clap(subcommand)]
Del(DelCommand),
RepairLogicalTables(RepairLogicalTablesCommand),
}
impl MetadataCommand {
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
match self {
MetadataCommand::Snapshot(cmd) => cmd.build().await,
MetadataCommand::Control(cmd) => cmd.build().await,
MetadataCommand::RepairLogicalTables(cmd) => cmd.build().await,
MetadataCommand::Get(cmd) => cmd.build().await,
MetadataCommand::Del(cmd) => cmd.build().await,
}
}
}


@@ -12,27 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod del;
mod get;
#[cfg(test)]
mod test_utils;
mod utils;
use clap::Subcommand;
use common_error::ext::BoxedError;
use get::GetCommand;
use crate::Tool;
/// Subcommand for metadata control.
#[derive(Subcommand)]
pub enum ControlCommand {
/// Get the metadata from the metasrv.
#[clap(subcommand)]
Get(GetCommand),
}
impl ControlCommand {
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
match self {
ControlCommand::Get(cmd) => cmd.build().await,
}
}
}
pub(crate) use del::DelCommand;
pub(crate) use get::GetCommand;


@@ -0,0 +1,42 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod key;
mod table;
use clap::Subcommand;
use common_error::ext::BoxedError;
use crate::metadata::control::del::key::DelKeyCommand;
use crate::metadata::control::del::table::DelTableCommand;
use crate::Tool;
/// The prefix of the tombstone keys.
pub(crate) const CLI_TOMBSTONE_PREFIX: &str = "__cli_tombstone/";
/// Subcommand for deleting metadata from the metadata store.
#[derive(Subcommand)]
pub enum DelCommand {
Key(DelKeyCommand),
Table(DelTableCommand),
}
impl DelCommand {
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
match self {
DelCommand::Key(cmd) => cmd.build().await,
DelCommand::Table(cmd) => cmd.build().await,
}
}
}


@@ -0,0 +1,132 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use async_trait::async_trait;
use clap::Parser;
use common_error::ext::BoxedError;
use common_meta::key::tombstone::TombstoneManager;
use common_meta::kv_backend::KvBackendRef;
use common_meta::rpc::store::RangeRequest;
use crate::metadata::common::StoreConfig;
use crate::metadata::control::del::CLI_TOMBSTONE_PREFIX;
use crate::Tool;
/// Delete key-value pairs logically from the metadata store.
#[derive(Debug, Default, Parser)]
pub struct DelKeyCommand {
/// The key to delete from the metadata store.
key: String,
/// Delete key-value pairs with the given prefix.
#[clap(long)]
prefix: bool,
#[clap(flatten)]
store: StoreConfig,
}
impl DelKeyCommand {
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
let kv_backend = self.store.build().await?;
Ok(Box::new(DelKeyTool {
key: self.key.to_string(),
prefix: self.prefix,
key_deleter: KeyDeleter::new(kv_backend),
}))
}
}
struct KeyDeleter {
kv_backend: KvBackendRef,
tombstone_manager: TombstoneManager,
}
impl KeyDeleter {
fn new(kv_backend: KvBackendRef) -> Self {
Self {
kv_backend: kv_backend.clone(),
tombstone_manager: TombstoneManager::new_with_prefix(kv_backend, CLI_TOMBSTONE_PREFIX),
}
}
async fn delete(&self, key: &str, prefix: bool) -> Result<usize, BoxedError> {
let mut req = RangeRequest::default().with_keys_only();
if prefix {
req = req.with_prefix(key.as_bytes());
} else {
req = req.with_key(key.as_bytes());
}
let resp = self.kv_backend.range(req).await.map_err(BoxedError::new)?;
let keys = resp.kvs.iter().map(|kv| kv.key.clone()).collect::<Vec<_>>();
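// Move the matched keys under the CLI tombstone prefix (a logical delete) and
// return the number of keys affected; the originals are no longer visible.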
self.tombstone_manager
.create(keys)
.await
.map_err(BoxedError::new)
}
}
struct DelKeyTool {
key: String,
prefix: bool,
key_deleter: KeyDeleter,
}
#[async_trait]
impl Tool for DelKeyTool {
async fn do_work(&self) -> Result<(), BoxedError> {
let deleted = self.key_deleter.delete(&self.key, self.prefix).await?;
// Print the number of deleted keys.
println!("{}", deleted);
Ok(())
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use common_meta::kv_backend::chroot::ChrootKvBackend;
use common_meta::kv_backend::memory::MemoryKvBackend;
use common_meta::kv_backend::{KvBackend, KvBackendRef};
use common_meta::rpc::store::RangeRequest;
use crate::metadata::control::del::key::KeyDeleter;
use crate::metadata::control::del::CLI_TOMBSTONE_PREFIX;
use crate::metadata::control::test_utils::put_key;
#[tokio::test]
async fn test_delete_keys() {
let kv_backend = Arc::new(MemoryKvBackend::new()) as KvBackendRef;
let key_deleter = KeyDeleter::new(kv_backend.clone());
put_key(&kv_backend, "foo", "bar").await;
put_key(&kv_backend, "foo/bar", "baz").await;
put_key(&kv_backend, "foo/baz", "qux").await;
let deleted = key_deleter.delete("foo", true).await.unwrap();
assert_eq!(deleted, 3);
let deleted = key_deleter.delete("foo/bar", false).await.unwrap();
assert_eq!(deleted, 0);
let chroot = ChrootKvBackend::new(CLI_TOMBSTONE_PREFIX.as_bytes().to_vec(), kv_backend);
let req = RangeRequest::default().with_prefix(b"foo");
let resp = chroot.range(req).await.unwrap();
assert_eq!(resp.kvs.len(), 3);
assert_eq!(resp.kvs[0].key, b"foo");
assert_eq!(resp.kvs[0].value, b"bar");
assert_eq!(resp.kvs[1].key, b"foo/bar");
assert_eq!(resp.kvs[1].value, b"baz");
assert_eq!(resp.kvs[2].key, b"foo/baz");
assert_eq!(resp.kvs[2].value, b"qux");
}
}

View File

@@ -0,0 +1,235 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use async_trait::async_trait;
use clap::Parser;
use client::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_catalog::format_full_table_name;
use common_error::ext::BoxedError;
use common_meta::ddl::utils::get_region_wal_options;
use common_meta::key::table_name::TableNameManager;
use common_meta::key::TableMetadataManager;
use common_meta::kv_backend::KvBackendRef;
use store_api::storage::TableId;
use crate::error::{InvalidArgumentsSnafu, TableNotFoundSnafu};
use crate::metadata::common::StoreConfig;
use crate::metadata::control::del::CLI_TOMBSTONE_PREFIX;
use crate::metadata::control::utils::get_table_id_by_name;
use crate::Tool;
/// Delete table metadata logically from the metadata store.
#[derive(Debug, Default, Parser)]
pub struct DelTableCommand {
/// The table id to delete from the metadata store.
#[clap(long)]
table_id: Option<u32>,
/// The table name to delete from the metadata store.
#[clap(long)]
table_name: Option<String>,
/// The schema name of the table.
#[clap(long, default_value = DEFAULT_SCHEMA_NAME)]
schema_name: String,
/// The catalog name of the table.
#[clap(long, default_value = DEFAULT_CATALOG_NAME)]
catalog_name: String,
#[clap(flatten)]
store: StoreConfig,
}
impl DelTableCommand {
fn validate(&self) -> Result<(), BoxedError> {
if matches!(
(&self.table_id, &self.table_name),
(Some(_), Some(_)) | (None, None)
) {
return Err(BoxedError::new(
InvalidArgumentsSnafu {
msg: "You must specify either --table-id or --table-name.",
}
.build(),
));
}
Ok(())
}
}
impl DelTableCommand {
pub async fn build(&self) -> Result<Box<dyn Tool>, BoxedError> {
self.validate()?;
let kv_backend = self.store.build().await?;
Ok(Box::new(DelTableTool {
table_id: self.table_id,
table_name: self.table_name.clone(),
schema_name: self.schema_name.clone(),
catalog_name: self.catalog_name.clone(),
table_name_manager: TableNameManager::new(kv_backend.clone()),
table_metadata_deleter: TableMetadataDeleter::new(kv_backend),
}))
}
}
struct DelTableTool {
table_id: Option<u32>,
table_name: Option<String>,
schema_name: String,
catalog_name: String,
table_name_manager: TableNameManager,
table_metadata_deleter: TableMetadataDeleter,
}
#[async_trait]
impl Tool for DelTableTool {
async fn do_work(&self) -> Result<(), BoxedError> {
let table_id = if let Some(table_name) = &self.table_name {
let catalog_name = &self.catalog_name;
let schema_name = &self.schema_name;
let Some(table_id) = get_table_id_by_name(
&self.table_name_manager,
catalog_name,
schema_name,
table_name,
)
.await?
else {
println!(
"Table({}) not found",
format_full_table_name(catalog_name, schema_name, table_name)
);
return Ok(());
};
table_id
} else {
// Safety: we have validated that table_id or table_name is not None
self.table_id.unwrap()
};
self.table_metadata_deleter.delete(table_id).await?;
println!("Table({}) deleted", table_id);
Ok(())
}
}
struct TableMetadataDeleter {
table_metadata_manager: TableMetadataManager,
}
impl TableMetadataDeleter {
fn new(kv_backend: KvBackendRef) -> Self {
Self {
table_metadata_manager: TableMetadataManager::new_with_custom_tombstone_prefix(
kv_backend,
CLI_TOMBSTONE_PREFIX,
),
}
}
async fn delete(&self, table_id: TableId) -> Result<(), BoxedError> {
let (table_info, table_route) = self
.table_metadata_manager
.get_full_table_info(table_id)
.await
.map_err(BoxedError::new)?;
let Some(table_info) = table_info else {
return Err(BoxedError::new(TableNotFoundSnafu { table_id }.build()));
};
let Some(table_route) = table_route else {
return Err(BoxedError::new(TableNotFoundSnafu { table_id }.build()));
};
let physical_table_id = self
.table_metadata_manager
.table_route_manager()
.get_physical_table_id(table_id)
.await
.map_err(BoxedError::new)?;
let table_name = table_info.table_name();
let region_wal_options = get_region_wal_options(
&self.table_metadata_manager,
&table_route,
physical_table_id,
)
.await
.map_err(BoxedError::new)?;
self.table_metadata_manager
.delete_table_metadata(table_id, &table_name, &table_route, &region_wal_options)
.await
.map_err(BoxedError::new)?;
Ok(())
}
}
#[cfg(test)]
mod tests {
use std::collections::HashMap;
use std::sync::Arc;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_meta::key::table_route::TableRouteValue;
use common_meta::key::TableMetadataManager;
use common_meta::kv_backend::chroot::ChrootKvBackend;
use common_meta::kv_backend::memory::MemoryKvBackend;
use common_meta::kv_backend::{KvBackend, KvBackendRef};
use common_meta::rpc::store::RangeRequest;
use crate::metadata::control::del::table::TableMetadataDeleter;
use crate::metadata::control::del::CLI_TOMBSTONE_PREFIX;
use crate::metadata::control::test_utils::prepare_physical_table_metadata;
#[tokio::test]
async fn test_delete_table_not_found() {
let kv_backend = Arc::new(MemoryKvBackend::new()) as KvBackendRef;
let table_metadata_deleter = TableMetadataDeleter::new(kv_backend);
let table_id = 1;
let err = table_metadata_deleter.delete(table_id).await.unwrap_err();
assert_eq!(err.status_code(), StatusCode::TableNotFound);
}
#[tokio::test]
async fn test_delete_table_metadata() {
let kv_backend = Arc::new(MemoryKvBackend::new());
let table_metadata_manager = TableMetadataManager::new(kv_backend.clone());
let table_id = 1024;
let (table_info, table_route) = prepare_physical_table_metadata("my_table", table_id).await;
table_metadata_manager
.create_table_metadata(
table_info,
TableRouteValue::Physical(table_route),
HashMap::new(),
)
.await
.unwrap();
let total_keys = kv_backend.len();
assert!(total_keys > 0);
let table_metadata_deleter = TableMetadataDeleter::new(kv_backend.clone());
table_metadata_deleter.delete(table_id).await.unwrap();
// Check the tombstone keys are deleted
let chroot =
ChrootKvBackend::new(CLI_TOMBSTONE_PREFIX.as_bytes().to_vec(), kv_backend.clone());
let req = RangeRequest::default().with_range(vec![0], vec![0]);
let resp = chroot.range(req).await.unwrap();
assert_eq!(resp.kvs.len(), total_keys);
}
}

View File

@@ -20,7 +20,6 @@ use client::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_catalog::format_full_table_name;
use common_error::ext::BoxedError;
use common_meta::key::table_info::TableInfoKey;
use common_meta::key::table_name::TableNameKey;
use common_meta::key::table_route::TableRouteKey;
use common_meta::key::TableMetadataManager;
use common_meta::kv_backend::KvBackendRef;
@@ -30,10 +29,10 @@ use futures::TryStreamExt;
use crate::error::InvalidArgumentsSnafu;
use crate::metadata::common::StoreConfig;
use crate::metadata::control::utils::{decode_key_value, json_fromatter};
use crate::metadata::control::utils::{decode_key_value, get_table_id_by_name, json_fromatter};
use crate::Tool;
/// Subcommand for get command.
/// Getting metadata from the metadata store.
#[derive(Subcommand)]
pub enum GetCommand {
Key(GetKeyCommand),
@@ -52,7 +51,7 @@ impl GetCommand {
/// Get key-value pairs from the metadata store.
#[derive(Debug, Default, Parser)]
pub struct GetKeyCommand {
/// The key to get from the metadata store. If empty, returns all key-value pairs.
/// The key to get from the metadata store.
#[clap(default_value = "")]
key: String,
@@ -130,8 +129,12 @@ pub struct GetTableCommand {
table_name: Option<String>,
/// The schema name of the table.
#[clap(long)]
schema_name: Option<String>,
#[clap(long, default_value = DEFAULT_SCHEMA_NAME)]
schema_name: String,
/// The catalog name of the table.
#[clap(long, default_value = DEFAULT_CATALOG_NAME)]
catalog_name: String,
/// Pretty print the output.
#[clap(long, default_value = "false")]
@@ -143,7 +146,10 @@ pub struct GetTableCommand {
impl GetTableCommand {
pub fn validate(&self) -> Result<(), BoxedError> {
if self.table_id.is_none() && self.table_name.is_none() {
if matches!(
(&self.table_id, &self.table_name),
(Some(_), Some(_)) | (None, None)
) {
return Err(BoxedError::new(
InvalidArgumentsSnafu {
msg: "You must specify either --table-id or --table-name.",
@@ -159,7 +165,8 @@ struct GetTableTool {
kvbackend: KvBackendRef,
table_id: Option<u32>,
table_name: Option<String>,
schema_name: Option<String>,
schema_name: String,
catalog_name: String,
pretty: bool,
}
@@ -172,23 +179,20 @@ impl Tool for GetTableTool {
let table_route_manager = table_metadata_manager.table_route_manager();
let table_id = if let Some(table_name) = &self.table_name {
let catalog = DEFAULT_CATALOG_NAME.to_string();
let schema_name = self
.schema_name
.clone()
.unwrap_or_else(|| DEFAULT_SCHEMA_NAME.to_string());
let key = TableNameKey::new(&catalog, &schema_name, table_name);
let catalog_name = &self.catalog_name;
let schema_name = &self.schema_name;
let Some(table_name) = table_name_manager.get(key).await.map_err(BoxedError::new)?
let Some(table_id) =
get_table_id_by_name(table_name_manager, catalog_name, schema_name, table_name)
.await?
else {
println!(
"Table({}) not found",
format_full_table_name(&catalog, &schema_name, table_name)
format_full_table_name(catalog_name, schema_name, table_name)
);
return Ok(());
};
table_name.table_id()
table_id
} else {
// Safety: we have validated that table_id or table_name is not None
self.table_id.unwrap()
@@ -236,6 +240,7 @@ impl GetTableCommand {
table_id: self.table_id,
table_name: self.table_name.clone(),
schema_name: self.schema_name.clone(),
catalog_name: self.catalog_name.clone(),
pretty: self.pretty,
}))
}

View File

@@ -0,0 +1,51 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_meta::ddl::test_util::test_create_physical_table_task;
use common_meta::key::table_route::PhysicalTableRouteValue;
use common_meta::kv_backend::KvBackendRef;
use common_meta::peer::Peer;
use common_meta::rpc::router::{Region, RegionRoute};
use common_meta::rpc::store::PutRequest;
use store_api::storage::{RegionId, TableId};
use table::metadata::RawTableInfo;
/// Puts a key-value pair into the kv backend.
pub async fn put_key(kv_backend: &KvBackendRef, key: &str, value: &str) {
let put_req = PutRequest::new()
.with_key(key.as_bytes())
.with_value(value.as_bytes());
kv_backend.put(put_req).await.unwrap();
}
/// Prepares the physical table metadata for testing.
///
/// Returns the table info and the table route.
pub async fn prepare_physical_table_metadata(
table_name: &str,
table_id: TableId,
) -> (RawTableInfo, PhysicalTableRouteValue) {
let mut create_physical_table_task = test_create_physical_table_task(table_name);
let table_route = PhysicalTableRouteValue::new(vec![RegionRoute {
region: Region {
id: RegionId::new(table_id, 1),
..Default::default()
},
leader_peer: Some(Peer::empty(1)),
..Default::default()
}]);
create_physical_table_task.set_table_id(table_id);
(create_physical_table_task.table_info, table_route)
}

View File

@@ -12,9 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use common_error::ext::BoxedError;
use common_meta::error::Result as CommonMetaResult;
use common_meta::key::table_name::{TableNameKey, TableNameManager};
use common_meta::rpc::KeyValue;
use serde::Serialize;
use store_api::storage::TableId;
/// Decodes a key-value pair into a string.
pub fn decode_key_value(kv: KeyValue) -> CommonMetaResult<(String, String)> {
@@ -34,3 +37,21 @@ where
serde_json::to_string(value).unwrap()
}
}
/// Gets the table id by table name.
pub async fn get_table_id_by_name(
table_name_manager: &TableNameManager,
catalog_name: &str,
schema_name: &str,
table_name: &str,
) -> Result<Option<TableId>, BoxedError> {
let table_name_key = TableNameKey::new(catalog_name, schema_name, table_name);
let Some(table_name_value) = table_name_manager
.get(table_name_key)
.await
.map_err(BoxedError::new)?
else {
return Ok(None);
};
Ok(Some(table_name_value.table_id()))
}

View File

@@ -0,0 +1,369 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod alter_table;
mod create_table;
use std::sync::Arc;
use std::time::Duration;
use async_trait::async_trait;
use clap::Parser;
use client::api::v1::CreateTableExpr;
use client::client_manager::NodeClients;
use client::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::ext::{BoxedError, ErrorExt};
use common_error::status_code::StatusCode;
use common_grpc::channel_manager::ChannelConfig;
use common_meta::error::Error as CommonMetaError;
use common_meta::key::TableMetadataManager;
use common_meta::kv_backend::KvBackendRef;
use common_meta::node_manager::NodeManagerRef;
use common_meta::peer::Peer;
use common_meta::rpc::router::{find_leaders, RegionRoute};
use common_telemetry::{error, info, warn};
use futures::TryStreamExt;
use snafu::{ensure, ResultExt};
use store_api::storage::TableId;
use crate::error::{
InvalidArgumentsSnafu, Result, SendRequestToDatanodeSnafu, TableMetadataSnafu, UnexpectedSnafu,
};
use crate::metadata::common::StoreConfig;
use crate::metadata::utils::{FullTableMetadata, IteratorInput, TableMetadataIterator};
use crate::Tool;
/// Repair metadata of logical tables.
#[derive(Debug, Default, Parser)]
pub struct RepairLogicalTablesCommand {
/// The names of the tables to repair.
#[clap(long, value_delimiter = ',', alias = "table-name")]
table_names: Vec<String>,
/// The ids of the tables to repair.
#[clap(long, value_delimiter = ',', alias = "table-id")]
table_ids: Vec<TableId>,
/// The schema of the tables to repair.
#[clap(long, default_value = DEFAULT_SCHEMA_NAME)]
schema_name: String,
/// The catalog of the tables to repair.
#[clap(long, default_value = DEFAULT_CATALOG_NAME)]
catalog_name: String,
/// Whether to fail fast if any repair operation fails.
#[clap(long)]
fail_fast: bool,
#[clap(flatten)]
store: StoreConfig,
/// The timeout for the client to operate the datanode.
#[clap(long, default_value_t = 30)]
client_timeout_secs: u64,
/// The timeout for the client to connect to the datanode.
#[clap(long, default_value_t = 3)]
client_connect_timeout_secs: u64,
}
impl RepairLogicalTablesCommand {
fn validate(&self) -> Result<()> {
ensure!(
!self.table_names.is_empty() || !self.table_ids.is_empty(),
InvalidArgumentsSnafu {
msg: "You must specify --table-names or --table-ids.",
}
);
Ok(())
}
}
impl RepairLogicalTablesCommand {
pub async fn build(&self) -> std::result::Result<Box<dyn Tool>, BoxedError> {
self.validate().map_err(BoxedError::new)?;
let kv_backend = self.store.build().await?;
let node_client_channel_config = ChannelConfig::new()
.timeout(Duration::from_secs(self.client_timeout_secs))
.connect_timeout(Duration::from_secs(self.client_connect_timeout_secs));
let node_manager = Arc::new(NodeClients::new(node_client_channel_config));
Ok(Box::new(RepairTool {
table_names: self.table_names.clone(),
table_ids: self.table_ids.clone(),
schema_name: self.schema_name.clone(),
catalog_name: self.catalog_name.clone(),
fail_fast: self.fail_fast,
kv_backend,
node_manager,
}))
}
}
struct RepairTool {
table_names: Vec<String>,
table_ids: Vec<TableId>,
schema_name: String,
catalog_name: String,
fail_fast: bool,
kv_backend: KvBackendRef,
node_manager: NodeManagerRef,
}
#[async_trait]
impl Tool for RepairTool {
async fn do_work(&self) -> std::result::Result<(), BoxedError> {
self.repair_tables().await.map_err(BoxedError::new)
}
}
impl RepairTool {
fn generate_iterator_input(&self) -> Result<IteratorInput> {
if !self.table_names.is_empty() {
let table_names = &self.table_names;
let catalog = &self.catalog_name;
let schema_name = &self.schema_name;
let table_names = table_names
.iter()
.map(|table_name| {
(
catalog.to_string(),
schema_name.to_string(),
table_name.to_string(),
)
})
.collect::<Vec<_>>();
return Ok(IteratorInput::new_table_names(table_names));
} else if !self.table_ids.is_empty() {
return Ok(IteratorInput::new_table_ids(self.table_ids.clone()));
};
InvalidArgumentsSnafu {
msg: "You must specify --table-names or --table-id.",
}
.fail()
}
async fn repair_tables(&self) -> Result<()> {
let input = self.generate_iterator_input()?;
let mut table_metadata_iterator =
Box::pin(TableMetadataIterator::new(self.kv_backend.clone(), input).into_stream());
let table_metadata_manager = TableMetadataManager::new(self.kv_backend.clone());
let mut skipped_table = 0;
let mut success_table = 0;
while let Some(full_table_metadata) = table_metadata_iterator.try_next().await? {
let full_table_name = full_table_metadata.full_table_name();
if !full_table_metadata.is_metric_engine() {
warn!(
"Skipping repair for non-metric engine table: {}",
full_table_name
);
skipped_table += 1;
continue;
}
if full_table_metadata.is_physical_table() {
warn!("Skipping repair for physical table: {}", full_table_name);
skipped_table += 1;
continue;
}
let (physical_table_id, physical_table_route) = table_metadata_manager
.table_route_manager()
.get_physical_table_route(full_table_metadata.table_id)
.await
.context(TableMetadataSnafu)?;
if let Err(err) = self
.repair_table(
&full_table_metadata,
physical_table_id,
&physical_table_route.region_routes,
)
.await
{
error!(
err;
"Failed to repair table: {}, skipped table: {}",
full_table_name,
skipped_table,
);
if self.fail_fast {
return Err(err);
}
} else {
success_table += 1;
}
}
info!(
"Repair logical tables result: {} tables repaired, {} tables skipped",
success_table, skipped_table
);
Ok(())
}
async fn alter_table_on_datanodes(
&self,
full_table_metadata: &FullTableMetadata,
physical_region_routes: &[RegionRoute],
) -> Result<Vec<(Peer, CommonMetaError)>> {
let logical_table_id = full_table_metadata.table_id;
let alter_table_expr = alter_table::generate_alter_table_expr_for_all_columns(
&full_table_metadata.table_info,
)?;
let node_manager = self.node_manager.clone();
let mut failed_peers = Vec::new();
info!(
"Sending alter table requests to all datanodes for table: {}, number of regions:{}.",
full_table_metadata.full_table_name(),
physical_region_routes.len()
);
let leaders = find_leaders(physical_region_routes);
for peer in &leaders {
let alter_table_request = alter_table::make_alter_region_request_for_peer(
logical_table_id,
&alter_table_expr,
full_table_metadata.table_info.ident.version,
peer,
physical_region_routes,
)?;
let datanode = node_manager.datanode(peer).await;
if let Err(err) = datanode.handle(alter_table_request).await {
failed_peers.push((peer.clone(), err));
}
}
Ok(failed_peers)
}
async fn create_table_on_datanode(
&self,
create_table_expr: &CreateTableExpr,
logical_table_id: TableId,
physical_table_id: TableId,
peer: &Peer,
physical_region_routes: &[RegionRoute],
) -> Result<()> {
let node_manager = self.node_manager.clone();
let datanode = node_manager.datanode(peer).await;
let create_table_request = create_table::make_create_region_request_for_peer(
logical_table_id,
physical_table_id,
create_table_expr,
peer,
physical_region_routes,
)?;
datanode
.handle(create_table_request)
.await
.with_context(|_| SendRequestToDatanodeSnafu { peer: peer.clone() })?;
Ok(())
}
async fn repair_table(
&self,
full_table_metadata: &FullTableMetadata,
physical_table_id: TableId,
physical_region_routes: &[RegionRoute],
) -> Result<()> {
let full_table_name = full_table_metadata.full_table_name();
// First we send alter table requests with all columns to all datanodes.
let failed_peers = self
.alter_table_on_datanodes(full_table_metadata, physical_region_routes)
.await?;
if failed_peers.is_empty() {
info!(
"All alter table requests sent successfully for table: {}",
full_table_name
);
return Ok(());
}
warn!(
"Sending alter table requests to datanodes for table: {} failed for the datanodes: {:?}",
full_table_name,
failed_peers.iter().map(|(peer, _)| peer.id).collect::<Vec<_>>()
);
let create_table_expr =
create_table::generate_create_table_expr(&full_table_metadata.table_info)?;
let mut errors = Vec::new();
for (peer, err) in failed_peers {
if err.status_code() != StatusCode::RegionNotFound {
error!(
err;
"Sending alter table requests to datanode: {} for table: {} failed",
peer.id,
full_table_name,
);
continue;
}
info!(
"Region not found for table: {}, datanode: {}, trying to create the logical table on that datanode",
full_table_name,
peer.id
);
// If the alter table request fails for any datanode, we attempt to create the table on that datanode
// as a fallback mechanism to ensure table consistency across the cluster.
if let Err(err) = self
.create_table_on_datanode(
&create_table_expr,
full_table_metadata.table_id,
physical_table_id,
&peer,
physical_region_routes,
)
.await
{
error!(
err;
"Failed to create table on datanode: {} for table: {}",
peer.id, full_table_name
);
errors.push(err);
if self.fail_fast {
break;
}
} else {
info!(
"Created table on datanode: {} for table: {}",
peer.id, full_table_name
);
}
}
if !errors.is_empty() {
return UnexpectedSnafu {
msg: format!(
"Failed to create table on datanodes for table: {}",
full_table_name,
),
}
.fail();
}
Ok(())
}
}

View File

@@ -0,0 +1,85 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use client::api::v1::alter_table_expr::Kind;
use client::api::v1::region::{region_request, AlterRequests, RegionRequest, RegionRequestHeader};
use client::api::v1::{AddColumn, AddColumns, AlterTableExpr};
use common_meta::ddl::alter_logical_tables::make_alter_region_request;
use common_meta::peer::Peer;
use common_meta::rpc::router::{find_leader_regions, RegionRoute};
use operator::expr_helper::column_schemas_to_defs;
use snafu::ResultExt;
use store_api::storage::{RegionId, TableId};
use table::metadata::RawTableInfo;
use crate::error::{CovertColumnSchemasToDefsSnafu, Result};
/// Generates alter table expression for all columns.
pub fn generate_alter_table_expr_for_all_columns(
table_info: &RawTableInfo,
) -> Result<AlterTableExpr> {
let schema = &table_info.meta.schema;
let mut alter_table_expr = AlterTableExpr {
catalog_name: table_info.catalog_name.to_string(),
schema_name: table_info.schema_name.to_string(),
table_name: table_info.name.to_string(),
..Default::default()
};
let primary_keys = table_info
.meta
.primary_key_indices
.iter()
.map(|i| schema.column_schemas[*i].name.clone())
.collect::<Vec<_>>();
let add_columns = column_schemas_to_defs(schema.column_schemas.clone(), &primary_keys)
.context(CovertColumnSchemasToDefsSnafu)?;
alter_table_expr.kind = Some(Kind::AddColumns(AddColumns {
add_columns: add_columns
.into_iter()
.map(|col| AddColumn {
column_def: Some(col),
location: None,
add_if_not_exists: true,
})
.collect(),
}));
Ok(alter_table_expr)
}
/// Makes an alter region request for a peer.
pub fn make_alter_region_request_for_peer(
logical_table_id: TableId,
alter_table_expr: &AlterTableExpr,
schema_version: u64,
peer: &Peer,
region_routes: &[RegionRoute],
) -> Result<RegionRequest> {
let regions_on_this_peer = find_leader_regions(region_routes, peer);
let mut requests = Vec::with_capacity(regions_on_this_peer.len());
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(logical_table_id, *region_number);
let request = make_alter_region_request(region_id, alter_table_expr, schema_version);
requests.push(request);
}
Ok(RegionRequest {
header: Some(RegionRequestHeader::default()),
body: Some(region_request::Body::Alters(AlterRequests { requests })),
})
}

View File

@@ -0,0 +1,89 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use client::api::v1::region::{region_request, CreateRequests, RegionRequest, RegionRequestHeader};
use client::api::v1::CreateTableExpr;
use common_meta::ddl::create_logical_tables::create_region_request_builder;
use common_meta::ddl::utils::region_storage_path;
use common_meta::peer::Peer;
use common_meta::rpc::router::{find_leader_regions, RegionRoute};
use operator::expr_helper::column_schemas_to_defs;
use snafu::ResultExt;
use store_api::storage::{RegionId, TableId};
use table::metadata::RawTableInfo;
use crate::error::{CovertColumnSchemasToDefsSnafu, Result};
/// Generates a `CreateTableExpr` from a `RawTableInfo`.
pub fn generate_create_table_expr(table_info: &RawTableInfo) -> Result<CreateTableExpr> {
let schema = &table_info.meta.schema;
let primary_keys = table_info
.meta
.primary_key_indices
.iter()
.map(|i| schema.column_schemas[*i].name.clone())
.collect::<Vec<_>>();
let timestamp_index = schema.timestamp_index.as_ref().unwrap();
let time_index = schema.column_schemas[*timestamp_index].name.clone();
let column_defs = column_schemas_to_defs(schema.column_schemas.clone(), &primary_keys)
.context(CovertColumnSchemasToDefsSnafu)?;
let table_options = HashMap::from(&table_info.meta.options);
Ok(CreateTableExpr {
catalog_name: table_info.catalog_name.to_string(),
schema_name: table_info.schema_name.to_string(),
table_name: table_info.name.to_string(),
desc: String::default(),
column_defs,
time_index,
primary_keys,
create_if_not_exists: true,
table_options,
table_id: None,
engine: table_info.meta.engine.to_string(),
})
}
/// Makes a create region request for a peer.
pub fn make_create_region_request_for_peer(
logical_table_id: TableId,
physical_table_id: TableId,
create_table_expr: &CreateTableExpr,
peer: &Peer,
region_routes: &[RegionRoute],
) -> Result<RegionRequest> {
let regions_on_this_peer = find_leader_regions(region_routes, peer);
let mut requests = Vec::with_capacity(regions_on_this_peer.len());
let request_builder =
create_region_request_builder(create_table_expr, physical_table_id).unwrap();
let catalog = &create_table_expr.catalog_name;
let schema = &create_table_expr.schema_name;
let storage_path = region_storage_path(catalog, schema);
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(logical_table_id, *region_number);
let region_request =
request_builder.build_one(region_id, storage_path.clone(), &HashMap::new());
requests.push(region_request);
}
Ok(RegionRequest {
header: Some(RegionRequestHeader::default()),
body: Some(region_request::Body::Creates(CreateRequests { requests })),
})
}

View File

@@ -0,0 +1,178 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::VecDeque;
use async_stream::try_stream;
use common_catalog::consts::METRIC_ENGINE;
use common_catalog::format_full_table_name;
use common_meta::key::table_name::TableNameKey;
use common_meta::key::table_route::TableRouteValue;
use common_meta::key::TableMetadataManager;
use common_meta::kv_backend::KvBackendRef;
use futures::Stream;
use snafu::{OptionExt, ResultExt};
use store_api::storage::TableId;
use table::metadata::RawTableInfo;
use crate::error::{Result, TableMetadataSnafu, UnexpectedSnafu};
/// The input for the iterator.
pub enum IteratorInput {
TableIds(VecDeque<TableId>),
TableNames(VecDeque<(String, String, String)>),
}
impl IteratorInput {
/// Creates a new iterator input from a list of table ids.
pub fn new_table_ids(table_ids: Vec<TableId>) -> Self {
Self::TableIds(table_ids.into())
}
/// Creates a new iterator input from a list of table names.
pub fn new_table_names(table_names: Vec<(String, String, String)>) -> Self {
Self::TableNames(table_names.into())
}
}
/// An iterator for retrieving table metadata from the metadata store.
///
/// This struct provides functionality to iterate over table metadata based on
/// either a list of [`TableId`]s or a list of fully qualified table names.
pub struct TableMetadataIterator {
input: IteratorInput,
table_metadata_manager: TableMetadataManager,
}
/// The full table metadata.
pub struct FullTableMetadata {
pub table_id: TableId,
pub table_info: RawTableInfo,
pub table_route: TableRouteValue,
}
impl FullTableMetadata {
/// Returns true if it's [TableRouteValue::Physical].
pub fn is_physical_table(&self) -> bool {
self.table_route.is_physical()
}
/// Returns true if it's a metric engine table.
pub fn is_metric_engine(&self) -> bool {
self.table_info.meta.engine == METRIC_ENGINE
}
/// Returns the full table name.
pub fn full_table_name(&self) -> String {
format_full_table_name(
&self.table_info.catalog_name,
&self.table_info.schema_name,
&self.table_info.name,
)
}
}
impl TableMetadataIterator {
pub fn new(kvbackend: KvBackendRef, input: IteratorInput) -> Self {
let table_metadata_manager = TableMetadataManager::new(kvbackend);
Self {
input,
table_metadata_manager,
}
}
/// Returns the next table metadata.
///
/// This method handles two types of inputs:
/// - TableIds: Returns metadata for a specific [`TableId`].
/// - TableNames: Returns metadata for a table identified by its full name (catalog.schema.table).
///
/// Returns `None` when there are no more tables to process.
pub async fn next(&mut self) -> Result<Option<FullTableMetadata>> {
match &mut self.input {
IteratorInput::TableIds(table_ids) => {
if let Some(table_id) = table_ids.pop_front() {
let full_table_metadata = self.get_table_metadata(table_id).await?;
return Ok(Some(full_table_metadata));
}
}
IteratorInput::TableNames(table_names) => {
if let Some(full_table_name) = table_names.pop_front() {
let table_id = self.get_table_id_by_name(full_table_name).await?;
let full_table_metadata = self.get_table_metadata(table_id).await?;
return Ok(Some(full_table_metadata));
}
}
}
Ok(None)
}
/// Converts the iterator into a stream of table metadata.
pub fn into_stream(mut self) -> impl Stream<Item = Result<FullTableMetadata>> {
try_stream!({
while let Some(full_table_metadata) = self.next().await? {
yield full_table_metadata;
}
})
}
async fn get_table_id_by_name(
&mut self,
(catalog_name, schema_name, table_name): (String, String, String),
) -> Result<TableId> {
let key = TableNameKey::new(&catalog_name, &schema_name, &table_name);
let table_id = self
.table_metadata_manager
.table_name_manager()
.get(key)
.await
.context(TableMetadataSnafu)?
.with_context(|| UnexpectedSnafu {
msg: format!(
"Table not found: {}",
format_full_table_name(&catalog_name, &schema_name, &table_name)
),
})?
.table_id();
Ok(table_id)
}
async fn get_table_metadata(&mut self, table_id: TableId) -> Result<FullTableMetadata> {
let (table_info, table_route) = self
.table_metadata_manager
.get_full_table_info(table_id)
.await
.context(TableMetadataSnafu)?;
let table_info = table_info
.with_context(|| UnexpectedSnafu {
msg: format!("Table info not found for table id: {table_id}"),
})?
.into_inner()
.table_info;
let table_route = table_route
.with_context(|| UnexpectedSnafu {
msg: format!("Table route not found for table id: {table_id}"),
})?
.into_inner();
Ok(FullTableMetadata {
table_id,
table_info,
table_route,
})
}
}
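For illustration, a minimal, hypothetical sketch of driving the iterator as a stream, mirroring how the repair tool above consumes it. It assumes the crate's Result, KvBackendRef, and TableId are in scope; list_metric_tables is an invented helper name, not part of this change.
// Illustrative only; not part of this diff.
use futures::TryStreamExt;

async fn list_metric_tables(kv_backend: KvBackendRef, table_ids: Vec<TableId>) -> Result<()> {
    let input = IteratorInput::new_table_ids(table_ids);
    // The stream from try_stream! is not Unpin, so pin it before polling.
    let mut stream = Box::pin(TableMetadataIterator::new(kv_backend, input).into_stream());
    while let Some(meta) = stream.try_next().await? {
        if meta.is_metric_engine() {
            println!("{} ({})", meta.full_table_name(), meta.table_id);
        }
    }
    Ok(())
}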

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::flow::{FlowRequest, FlowResponse};
use api::v1::flow::{DirtyWindowRequest, DirtyWindowRequests, FlowRequest, FlowResponse};
use api::v1::region::InsertRequests;
use common_error::ext::BoxedError;
use common_meta::node_manager::Flownode;
@@ -44,6 +44,16 @@ impl Flownode for FlowRequester {
.map_err(BoxedError::new)
.context(common_meta::error::ExternalSnafu)
}
async fn handle_mark_window_dirty(
&self,
req: DirtyWindowRequest,
) -> common_meta::error::Result<FlowResponse> {
self.handle_mark_window_dirty(req)
.await
.map_err(BoxedError::new)
.context(common_meta::error::ExternalSnafu)
}
}
impl FlowRequester {
@@ -91,4 +101,20 @@ impl FlowRequester {
.into_inner();
Ok(response)
}
async fn handle_mark_window_dirty(&self, req: DirtyWindowRequest) -> Result<FlowResponse> {
let (addr, mut client) = self.client.raw_flow_client()?;
let response = client
.handle_mark_dirty_time_window(DirtyWindowRequests {
requests: vec![req],
})
.await
.or_else(|e| {
let code = e.code();
let err: crate::error::Error = e.into();
Err(BoxedError::new(err)).context(FlowServerSnafu { addr, code })
})?
.into_inner();
Ok(response)
}
}

View File

@@ -93,6 +93,7 @@ impl InstanceBuilder {
MetaClientType::Datanode { member_id },
meta_client_options,
Some(&plugins),
None,
)
.await
.context(MetaClientInitSnafu)?;

View File

@@ -55,14 +55,32 @@ type FlownodeOptions = GreptimeOptions<flow::FlownodeOptions>;
pub struct Instance {
flownode: FlownodeInstance,
// The components of flownode, which make it easier to expand based
// on the components.
#[cfg(feature = "enterprise")]
components: Components,
// Keep the logging guard to prevent the worker from being dropped.
_guard: Vec<WorkerGuard>,
}
#[cfg(feature = "enterprise")]
pub struct Components {
pub catalog_manager: catalog::CatalogManagerRef,
pub fe_client: Arc<FrontendClient>,
pub kv_backend: common_meta::kv_backend::KvBackendRef,
}
impl Instance {
pub fn new(flownode: FlownodeInstance, guard: Vec<WorkerGuard>) -> Self {
pub fn new(
flownode: FlownodeInstance,
#[cfg(feature = "enterprise")] components: Components,
guard: Vec<WorkerGuard>,
) -> Self {
Self {
flownode,
#[cfg(feature = "enterprise")]
components,
_guard: guard,
}
}
@@ -75,6 +93,11 @@ impl Instance {
pub fn flownode_mut(&mut self) -> &mut FlownodeInstance {
&mut self.flownode
}
#[cfg(feature = "enterprise")]
pub fn components(&self) -> &Components {
&self.components
}
}
#[async_trait::async_trait]
@@ -283,6 +306,7 @@ impl StartCommand {
MetaClientType::Flownode { member_id },
meta_config,
None,
None,
)
.await
.context(MetaClientInitSnafu)?;
@@ -349,19 +373,20 @@ impl StartCommand {
let flow_auth_header = get_flow_auth_options(&opts).context(StartFlownodeSnafu)?;
let frontend_client =
FrontendClient::from_meta_client(meta_client.clone(), flow_auth_header);
let frontend_client = Arc::new(frontend_client);
let flownode_builder = FlownodeBuilder::new(
opts.clone(),
plugins,
table_metadata_manager,
catalog_manager.clone(),
flow_metadata_manager,
Arc::new(frontend_client),
frontend_client.clone(),
)
.with_heartbeat_task(heartbeat_task);
let mut flownode = flownode_builder.build().await.context(StartFlownodeSnafu)?;
let services = FlownodeServiceBuilder::new(&opts)
.with_grpc_server(flownode.flownode_server().clone())
.with_default_grpc_server(flownode.flownode_server())
.enable_http_service()
.build()
.context(StartFlownodeSnafu)?;
@@ -393,6 +418,16 @@ impl StartCommand {
.set_frontend_invoker(invoker)
.await;
Ok(Instance::new(flownode, guard))
#[cfg(feature = "enterprise")]
let components = Components {
catalog_manager: catalog_manager.clone(),
fe_client: frontend_client,
kv_backend: cached_meta_backend,
};
#[cfg(not(feature = "enterprise"))]
return Ok(Instance::new(flownode, guard));
#[cfg(feature = "enterprise")]
Ok(Instance::new(flownode, components, guard))
}
}

View File

@@ -313,6 +313,7 @@ impl StartCommand {
MetaClientType::Frontend,
meta_client_options,
Some(&plugins),
None,
)
.await
.context(error::MetaClientInitSnafu)?;

View File

@@ -30,20 +30,16 @@ use common_catalog::consts::{MIN_USER_FLOW_ID, MIN_USER_TABLE_ID};
use common_config::{metadata_store_dir, Configurable, KvBackendConfig};
use common_error::ext::BoxedError;
use common_meta::cache::LayeredCacheRegistryBuilder;
use common_meta::cache_invalidator::CacheInvalidatorRef;
use common_meta::cluster::{NodeInfo, NodeStatus};
use common_meta::datanode::RegionStat;
use common_meta::ddl::flow_meta::{FlowMetadataAllocator, FlowMetadataAllocatorRef};
use common_meta::ddl::table_meta::{TableMetadataAllocator, TableMetadataAllocatorRef};
use common_meta::ddl::flow_meta::FlowMetadataAllocator;
use common_meta::ddl::table_meta::TableMetadataAllocator;
use common_meta::ddl::{DdlContext, NoopRegionFailureDetectorControl, ProcedureExecutorRef};
use common_meta::ddl_manager::DdlManager;
#[cfg(feature = "enterprise")]
use common_meta::ddl_manager::TriggerDdlManagerRef;
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::key::flow::{FlowMetadataManager, FlowMetadataManagerRef};
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::{TableMetadataManager, TableMetadataManagerRef};
use common_meta::kv_backend::KvBackendRef;
use common_meta::node_manager::NodeManagerRef;
use common_meta::peer::Peer;
use common_meta::region_keeper::MemoryRegionKeeper;
use common_meta::region_registry::LeaderRegionRegistry;
@@ -594,28 +590,36 @@ impl StartCommand {
.await
.context(error::BuildWalOptionsAllocatorSnafu)?;
let wal_options_allocator = Arc::new(wal_options_allocator);
let table_meta_allocator = Arc::new(TableMetadataAllocator::new(
let table_metadata_allocator = Arc::new(TableMetadataAllocator::new(
table_id_sequence,
wal_options_allocator.clone(),
));
let flow_meta_allocator = Arc::new(FlowMetadataAllocator::with_noop_peer_allocator(
let flow_metadata_allocator = Arc::new(FlowMetadataAllocator::with_noop_peer_allocator(
flow_id_sequence,
));
let ddl_context = DdlContext {
node_manager: node_manager.clone(),
cache_invalidator: layered_cache_registry.clone(),
memory_region_keeper: Arc::new(MemoryRegionKeeper::default()),
leader_region_registry: Arc::new(LeaderRegionRegistry::default()),
table_metadata_manager: table_metadata_manager.clone(),
table_metadata_allocator: table_metadata_allocator.clone(),
flow_metadata_manager: flow_metadata_manager.clone(),
flow_metadata_allocator: flow_metadata_allocator.clone(),
region_failure_detector_controller: Arc::new(NoopRegionFailureDetectorControl),
};
let procedure_manager_c = procedure_manager.clone();
let ddl_manager = DdlManager::try_new(ddl_context, procedure_manager_c, true)
.context(error::InitDdlManagerSnafu)?;
#[cfg(feature = "enterprise")]
let trigger_ddl_manager: Option<TriggerDdlManagerRef> = plugins.get();
let ddl_task_executor = Self::create_ddl_task_executor(
procedure_manager.clone(),
node_manager.clone(),
layered_cache_registry.clone(),
table_metadata_manager,
table_meta_allocator,
flow_metadata_manager,
flow_meta_allocator,
#[cfg(feature = "enterprise")]
trigger_ddl_manager,
)
.await?;
let ddl_manager = {
let trigger_ddl_manager: Option<common_meta::ddl_manager::TriggerDdlManagerRef> =
plugins.get();
ddl_manager.with_trigger_ddl_manager(trigger_ddl_manager)
};
let ddl_task_executor: ProcedureExecutorRef = Arc::new(ddl_manager);
let fe_instance = FrontendBuilder::new(
fe_opts.clone(),
@@ -679,41 +683,6 @@ impl StartCommand {
})
}
#[allow(clippy::too_many_arguments)]
pub async fn create_ddl_task_executor(
procedure_manager: ProcedureManagerRef,
node_manager: NodeManagerRef,
cache_invalidator: CacheInvalidatorRef,
table_metadata_manager: TableMetadataManagerRef,
table_metadata_allocator: TableMetadataAllocatorRef,
flow_metadata_manager: FlowMetadataManagerRef,
flow_metadata_allocator: FlowMetadataAllocatorRef,
#[cfg(feature = "enterprise")] trigger_ddl_manager: Option<TriggerDdlManagerRef>,
) -> Result<ProcedureExecutorRef> {
let procedure_executor: ProcedureExecutorRef = Arc::new(
DdlManager::try_new(
DdlContext {
node_manager,
cache_invalidator,
memory_region_keeper: Arc::new(MemoryRegionKeeper::default()),
leader_region_registry: Arc::new(LeaderRegionRegistry::default()),
table_metadata_manager,
table_metadata_allocator,
flow_metadata_manager,
flow_metadata_allocator,
region_failure_detector_controller: Arc::new(NoopRegionFailureDetectorControl),
},
procedure_manager,
true,
#[cfg(feature = "enterprise")]
trigger_ddl_manager,
)
.context(error::InitDdlManagerSnafu)?,
);
Ok(procedure_executor)
}
pub async fn create_table_metadata_manager(
kv_backend: KvBackendRef,
) -> Result<TableMetadataManagerRef> {

View File

@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::path::Path;
use std::time::Duration;
use cmd::options::GreptimeOptions;
@@ -58,12 +57,7 @@ fn test_load_datanode_example_config() {
metadata_cache_tti: Duration::from_secs(300),
}),
wal: DatanodeWalConfig::RaftEngine(RaftEngineConfig {
dir: Some(
Path::new(DEFAULT_DATA_HOME)
.join(WAL_DIR)
.to_string_lossy()
.to_string(),
),
dir: Some(format!("{}/{}", DEFAULT_DATA_HOME, WAL_DIR)),
sync_period: Some(Duration::from_secs(10)),
recovery_parallelism: 2,
..Default::default()
@@ -86,10 +80,7 @@ fn test_load_datanode_example_config() {
],
logging: LoggingOptions {
level: Some("info".to_string()),
dir: Path::new(DEFAULT_DATA_HOME)
.join(DEFAULT_LOGGING_DIR)
.to_string_lossy()
.to_string(),
dir: format!("{}/{}", DEFAULT_DATA_HOME, DEFAULT_LOGGING_DIR),
otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
tracing_sample_ratio: Some(Default::default()),
..Default::default()
@@ -132,10 +123,7 @@ fn test_load_frontend_example_config() {
}),
logging: LoggingOptions {
level: Some("info".to_string()),
dir: Path::new(DEFAULT_DATA_HOME)
.join(DEFAULT_LOGGING_DIR)
.to_string_lossy()
.to_string(),
dir: format!("{}/{}", DEFAULT_DATA_HOME, DEFAULT_LOGGING_DIR),
otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
tracing_sample_ratio: Some(Default::default()),
..Default::default()
@@ -182,10 +170,7 @@ fn test_load_metasrv_example_config() {
..Default::default()
},
logging: LoggingOptions {
dir: Path::new(DEFAULT_DATA_HOME)
.join(DEFAULT_LOGGING_DIR)
.to_string_lossy()
.to_string(),
dir: format!("{}/{}", DEFAULT_DATA_HOME, DEFAULT_LOGGING_DIR),
level: Some("info".to_string()),
otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
tracing_sample_ratio: Some(Default::default()),
@@ -220,12 +205,7 @@ fn test_load_standalone_example_config() {
component: StandaloneOptions {
default_timezone: Some("UTC".to_string()),
wal: DatanodeWalConfig::RaftEngine(RaftEngineConfig {
dir: Some(
Path::new(DEFAULT_DATA_HOME)
.join(WAL_DIR)
.to_string_lossy()
.to_string(),
),
dir: Some(format!("{}/{}", DEFAULT_DATA_HOME, WAL_DIR)),
sync_period: Some(Duration::from_secs(10)),
recovery_parallelism: 2,
..Default::default()
@@ -248,10 +228,7 @@ fn test_load_standalone_example_config() {
},
logging: LoggingOptions {
level: Some("info".to_string()),
dir: Path::new(DEFAULT_DATA_HOME)
.join(DEFAULT_LOGGING_DIR)
.to_string_lossy()
.to_string(),
dir: format!("{}/{}", DEFAULT_DATA_HOME, DEFAULT_LOGGING_DIR),
otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
tracing_sample_ratio: Some(Default::default()),
..Default::default()

View File

@@ -0,0 +1,240 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! [CancellationHandle] is used to compose with manual implementations of [futures::future::Future]
//! or [futures::stream::Stream] to facilitate cancellation.
//! See examples in [frontend::stream_wrapper::CancellableStreamWrapper] and [CancellableFuture].
use std::fmt::{Debug, Display, Formatter};
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll};
use futures::task::AtomicWaker;
use pin_project::pin_project;
#[derive(Default)]
pub struct CancellationHandle {
waker: AtomicWaker,
cancelled: AtomicBool,
}
impl Debug for CancellationHandle {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.debug_struct("CancellationHandle")
.field("cancelled", &self.is_cancelled())
.finish()
}
}
impl CancellationHandle {
pub fn waker(&self) -> &AtomicWaker {
&self.waker
}
/// Cancels a future or stream.
pub fn cancel(&self) {
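// Only the caller that flips `cancelled` from false to true wakes the waker;
// later cancel() calls are no-ops.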
if self
.cancelled
.compare_exchange(false, true, Ordering::Acquire, Ordering::Relaxed)
.is_ok()
{
self.waker.wake();
}
}
/// Is this handle cancelled.
pub fn is_cancelled(&self) -> bool {
self.cancelled.load(Ordering::Relaxed)
}
}
#[pin_project]
#[derive(Debug, Clone)]
pub struct CancellableFuture<T> {
#[pin]
fut: T,
handle: Arc<CancellationHandle>,
}
impl<T> CancellableFuture<T> {
pub fn new(fut: T, handle: Arc<CancellationHandle>) -> Self {
Self { fut, handle }
}
}
impl<T> Future for CancellableFuture<T>
where
T: Future,
{
type Output = Result<T::Output, Cancelled>;
fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.as_mut().project();
// Check if the task has been aborted
if this.handle.is_cancelled() {
return Poll::Ready(Err(Cancelled));
}
if let Poll::Ready(x) = this.fut.poll(cx) {
return Poll::Ready(Ok(x));
}
this.handle.waker().register(cx.waker());
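// Re-check after registering: a cancel() racing with this poll sets the flag
// before waking, so it is either observed here or wakes the waker just registered.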
if this.handle.is_cancelled() {
return Poll::Ready(Err(Cancelled));
}
Poll::Pending
}
}
#[derive(Copy, Clone, Debug)]
pub struct Cancelled;
impl Display for Cancelled {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
write!(f, "Future has been cancelled")
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use std::time::Duration;
use tokio::time::{sleep, timeout};
use crate::cancellation::{CancellableFuture, CancellationHandle, Cancelled};
#[tokio::test]
async fn test_cancellable_future_completes_normally() {
let handle = Arc::new(CancellationHandle::default());
let future = async { 42 };
let cancellable = CancellableFuture::new(future, handle);
let result = cancellable.await;
assert!(result.is_ok());
assert_eq!(result.unwrap(), 42);
}
#[tokio::test]
async fn test_cancellable_future_cancelled_before_start() {
let handle = Arc::new(CancellationHandle::default());
handle.cancel();
let future = async { 42 };
let cancellable = CancellableFuture::new(future, handle);
let result = cancellable.await;
assert!(result.is_err());
assert!(matches!(result.unwrap_err(), Cancelled));
}
#[tokio::test]
async fn test_cancellable_future_cancelled_during_execution() {
let handle = Arc::new(CancellationHandle::default());
let handle_clone = handle.clone();
// Create a future that sleeps for a long time
let future = async {
sleep(Duration::from_secs(10)).await;
42
};
let cancellable = CancellableFuture::new(future, handle);
// Cancel the future after a short delay
tokio::spawn(async move {
sleep(Duration::from_millis(50)).await;
handle_clone.cancel();
});
let result = cancellable.await;
assert!(result.is_err());
assert!(matches!(result.unwrap_err(), Cancelled));
}
#[tokio::test]
async fn test_cancellable_future_completes_before_cancellation() {
let handle = Arc::new(CancellationHandle::default());
let handle_clone = handle.clone();
// Create a future that completes quickly
let future = async {
sleep(Duration::from_millis(10)).await;
42
};
let cancellable = CancellableFuture::new(future, handle);
// Try to cancel after the future should have completed
tokio::spawn(async move {
sleep(Duration::from_millis(100)).await;
handle_clone.cancel();
});
let result = cancellable.await;
assert!(result.is_ok());
assert_eq!(result.unwrap(), 42);
}
#[tokio::test]
async fn test_cancellation_handle_is_cancelled() {
let handle = CancellationHandle::default();
assert!(!handle.is_cancelled());
handle.cancel();
assert!(handle.is_cancelled());
}
#[tokio::test]
async fn test_multiple_cancellable_futures_with_same_handle() {
let handle = Arc::new(CancellationHandle::default());
let future1 = CancellableFuture::new(async { 1 }, handle.clone());
let future2 = CancellableFuture::new(async { 2 }, handle.clone());
// Cancel before starting
handle.cancel();
let (result1, result2) = tokio::join!(future1, future2);
assert!(result1.is_err());
assert!(result2.is_err());
assert!(matches!(result1.unwrap_err(), Cancelled));
assert!(matches!(result2.unwrap_err(), Cancelled));
}
#[tokio::test]
async fn test_cancellable_future_with_timeout() {
let handle = Arc::new(CancellationHandle::default());
let future = async {
sleep(Duration::from_secs(1)).await;
42
};
let cancellable = CancellableFuture::new(future, handle.clone());
// Use timeout to ensure the test doesn't hang
let result = timeout(Duration::from_millis(100), cancellable).await;
// Should timeout because the future takes 1 second but we timeout after 100ms
assert!(result.is_err());
}
#[tokio::test]
async fn test_cancelled_display() {
let cancelled = Cancelled;
assert_eq!(format!("{}", cancelled), "Future has been cancelled");
}
}
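The module doc references frontend::stream_wrapper::CancellableStreamWrapper, which is not part of this diff. Purely as a hedged sketch (not the actual wrapper), the same handle can guard a Stream with the same check/poll/register/re-check pattern; the type name and crate path below are assumptions for illustration.
// Hypothetical sketch; not the actual CancellableStreamWrapper.
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};

use common_base::cancellation::{CancellationHandle, Cancelled}; // assumed crate path
use futures::Stream;
use pin_project::pin_project;

#[pin_project]
pub struct CancellableStream<S> {
    #[pin]
    stream: S,
    handle: Arc<CancellationHandle>,
}

impl<S: Stream> Stream for CancellableStream<S> {
    type Item = Result<S::Item, Cancelled>;

    fn poll_next(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
        let this = self.project();
        // Bail out if the handle was cancelled before this poll.
        if this.handle.is_cancelled() {
            return Poll::Ready(Some(Err(Cancelled)));
        }
        match this.stream.poll_next(cx) {
            Poll::Ready(item) => Poll::Ready(item.map(Ok)),
            Poll::Pending => {
                // Register first, then re-check, so a concurrent cancel() is
                // either observed here or wakes the registered waker.
                this.handle.waker().register(cx.waker());
                if this.handle.is_cancelled() {
                    Poll::Ready(Some(Err(Cancelled)))
                } else {
                    Poll::Pending
                }
            }
        }
    }
}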

View File

@@ -14,6 +14,7 @@
pub mod bit_vec;
pub mod bytes;
pub mod cancellation;
pub mod plugins;
pub mod range_read;
#[allow(clippy::all)]

View File

@@ -42,8 +42,8 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to invoke list process service"))]
ListProcess {
#[snafu(display("Failed to invoke frontend service"))]
InvokeFrontend {
#[snafu(source)]
error: tonic::Status,
#[snafu(implicit)]
@@ -67,7 +67,7 @@ impl ErrorExt for Error {
External { source, .. } => source.status_code(),
Meta { source, .. } => source.status_code(),
ParseProcessId { .. } => StatusCode::InvalidArguments,
ListProcess { .. } => StatusCode::External,
InvokeFrontend { .. } => StatusCode::Unexpected,
CreateChannel { source, .. } => source.status_code(),
}
}

View File

@@ -23,7 +23,7 @@ pub mod selector;
#[derive(Debug, Clone, Eq, PartialEq)]
pub struct DisplayProcessId {
pub server_addr: String,
pub id: u64,
pub id: u32,
}
impl Display for DisplayProcessId {
@@ -44,7 +44,7 @@ impl TryFrom<&str> for DisplayProcessId {
let id = split
.next()
.context(error::ParseProcessIdSnafu { s: value })?;
let id = u64::from_str(id)
let id = u32::from_str(id)
.ok()
.context(error::ParseProcessIdSnafu { s: value })?;
Ok(DisplayProcessId { server_addr, id })

View File

@@ -12,13 +12,18 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::Debug;
use std::time::Duration;
use common_grpc::channel_manager::{ChannelConfig, ChannelManager};
use common_meta::cluster::{ClusterInfo, NodeInfo, Role};
use greptime_proto::v1::frontend::{frontend_client, ListProcessRequest, ListProcessResponse};
use greptime_proto::v1::frontend::{
frontend_client, KillProcessRequest, KillProcessResponse, ListProcessRequest,
ListProcessResponse,
};
use meta_client::MetaClientRef;
use snafu::ResultExt;
use tonic::Response;
use crate::error;
use crate::error::{MetaSnafu, Result};
@@ -26,20 +31,30 @@ use crate::error::{MetaSnafu, Result};
pub type FrontendClientPtr = Box<dyn FrontendClient>;
#[async_trait::async_trait]
pub trait FrontendClient: Send {
pub trait FrontendClient: Send + Debug {
async fn list_process(&mut self, req: ListProcessRequest) -> Result<ListProcessResponse>;
async fn kill_process(&mut self, req: KillProcessRequest) -> Result<KillProcessResponse>;
}
#[async_trait::async_trait]
impl FrontendClient for frontend_client::FrontendClient<tonic::transport::channel::Channel> {
async fn list_process(&mut self, req: ListProcessRequest) -> Result<ListProcessResponse> {
let response: ListProcessResponse = frontend_client::FrontendClient::<
tonic::transport::channel::Channel,
>::list_process(self, req)
frontend_client::FrontendClient::<tonic::transport::channel::Channel>::list_process(
self, req,
)
.await
.context(error::ListProcessSnafu)?
.into_inner();
Ok(response)
.context(error::InvokeFrontendSnafu)
.map(Response::into_inner)
}
async fn kill_process(&mut self, req: KillProcessRequest) -> Result<KillProcessResponse> {
frontend_client::FrontendClient::<tonic::transport::channel::Channel>::kill_process(
self, req,
)
.await
.context(error::InvokeFrontendSnafu)
.map(Response::into_inner)
}
}

View File

@@ -14,8 +14,8 @@
use crate::function_registry::FunctionRegistry;
pub(crate) mod hll;
mod uddsketch;
pub mod hll;
pub mod uddsketch;
pub(crate) struct ApproximateFunction;

View File

@@ -23,7 +23,8 @@ use std::sync::Arc;
use build::BuildFunction;
use database::{
CurrentSchemaFunction, DatabaseFunction, ReadPreferenceFunction, SessionUserFunction,
ConnectionIdFunction, CurrentSchemaFunction, DatabaseFunction, PgBackendPidFunction,
ReadPreferenceFunction, SessionUserFunction,
};
use pg_catalog::PGCatalogFunction;
use procedure_state::ProcedureStateFunction;
@@ -42,6 +43,8 @@ impl SystemFunction {
registry.register_scalar(DatabaseFunction);
registry.register_scalar(SessionUserFunction);
registry.register_scalar(ReadPreferenceFunction);
registry.register_scalar(PgBackendPidFunction);
registry.register_scalar(ConnectionIdFunction);
registry.register_scalar(TimezoneFunction);
registry.register_async(Arc::new(ProcedureStateFunction));
PGCatalogFunction::register(registry);

View File

@@ -18,7 +18,8 @@ use std::sync::Arc;
use common_query::error::Result;
use common_query::prelude::{Signature, Volatility};
use datatypes::prelude::{ConcreteDataType, ScalarVector};
use datatypes::vectors::{StringVector, VectorRef};
use datatypes::vectors::{StringVector, UInt32Vector, VectorRef};
use derive_more::Display;
use crate::function::{Function, FunctionContext};
@@ -32,10 +33,20 @@ pub struct SessionUserFunction;
pub struct ReadPreferenceFunction;
#[derive(Display)]
#[display("{}", self.name())]
pub struct PgBackendPidFunction;
#[derive(Display)]
#[display("{}", self.name())]
pub struct ConnectionIdFunction;
const DATABASE_FUNCTION_NAME: &str = "database";
const CURRENT_SCHEMA_FUNCTION_NAME: &str = "current_schema";
const SESSION_USER_FUNCTION_NAME: &str = "session_user";
const READ_PREFERENCE_FUNCTION_NAME: &str = "read_preference";
const PG_BACKEND_PID: &str = "pg_backend_pid";
const CONNECTION_ID: &str = "connection_id";
impl Function for DatabaseFunction {
fn name(&self) -> &str {
@@ -117,6 +128,46 @@ impl Function for ReadPreferenceFunction {
}
}
impl Function for PgBackendPidFunction {
fn name(&self) -> &str {
PG_BACKEND_PID
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::uint32_datatype())
}
fn signature(&self) -> Signature {
Signature::nullary(Volatility::Immutable)
}
fn eval(&self, func_ctx: &FunctionContext, _columns: &[VectorRef]) -> Result<VectorRef> {
let pid = func_ctx.query_ctx.process_id();
Ok(Arc::new(UInt32Vector::from_slice([pid])) as _)
}
}
impl Function for ConnectionIdFunction {
fn name(&self) -> &str {
CONNECTION_ID
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::uint32_datatype())
}
fn signature(&self) -> Signature {
Signature::nullary(Volatility::Immutable)
}
fn eval(&self, func_ctx: &FunctionContext, _columns: &[VectorRef]) -> Result<VectorRef> {
let pid = func_ctx.query_ctx.process_id();
Ok(Arc::new(UInt32Vector::from_slice([pid])) as _)
}
}
impl fmt::Display for DatabaseFunction {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "DATABASE")

View File

@@ -34,7 +34,7 @@ use table::requests::{
};
use crate::error::{
InvalidColumnDefSnafu, InvalidSetFulltextOptionRequestSnafu,
InvalidColumnDefSnafu, InvalidIndexOptionSnafu, InvalidSetFulltextOptionRequestSnafu,
InvalidSetSkippingIndexOptionRequestSnafu, InvalidSetTableOptionRequestSnafu,
InvalidUnsetTableOptionRequestSnafu, MissingAlterIndexOptionSnafu, MissingFieldSnafu,
MissingTimestampColumnSnafu, Result, UnknownLocationTypeSnafu,
@@ -126,18 +126,21 @@ pub fn alter_expr_to_request(table_id: TableId, expr: AlterTableExpr) -> Result<
api::v1::set_index::Options::Fulltext(f) => AlterKind::SetIndex {
options: SetIndexOptions::Fulltext {
column_name: f.column_name.clone(),
options: FulltextOptions {
enable: f.enable,
analyzer: as_fulltext_option_analyzer(
options: FulltextOptions::new(
f.enable,
as_fulltext_option_analyzer(
Analyzer::try_from(f.analyzer)
.context(InvalidSetFulltextOptionRequestSnafu)?,
),
case_sensitive: f.case_sensitive,
backend: as_fulltext_option_backend(
f.case_sensitive,
as_fulltext_option_backend(
PbFulltextBackend::try_from(f.backend)
.context(InvalidSetFulltextOptionRequestSnafu)?,
),
},
f.granularity as u32,
f.false_positive_rate,
)
.context(InvalidIndexOptionSnafu)?,
},
},
api::v1::set_index::Options::Inverted(i) => AlterKind::SetIndex {
@@ -148,13 +151,15 @@ pub fn alter_expr_to_request(table_id: TableId, expr: AlterTableExpr) -> Result<
api::v1::set_index::Options::Skipping(s) => AlterKind::SetIndex {
options: SetIndexOptions::Skipping {
column_name: s.column_name,
options: SkippingIndexOptions {
granularity: s.granularity as u32,
index_type: as_skipping_index_type(
options: SkippingIndexOptions::new(
s.granularity as u32,
s.false_positive_rate,
as_skipping_index_type(
PbSkippingIndexType::try_from(s.skipping_index_type)
.context(InvalidSetSkippingIndexOptionRequestSnafu)?,
),
},
)
.context(InvalidIndexOptionSnafu)?,
},
},
},
@@ -180,6 +185,22 @@ pub fn alter_expr_to_request(table_id: TableId, expr: AlterTableExpr) -> Result<
},
None => return MissingAlterIndexOptionSnafu.fail(),
},
Kind::DropDefaults(o) => {
let names = o
.drop_defaults
.into_iter()
.map(|col| {
ensure!(
!col.column_name.is_empty(),
MissingFieldSnafu {
field: "column_name"
}
);
Ok(col.column_name)
})
.collect::<Result<Vec<_>>>()?;
AlterKind::DropDefaults { names }
}
};
let request = AlterTableRequest {

View File

@@ -153,6 +153,14 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid index option"))]
InvalidIndexOption {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: datatypes::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -180,7 +188,8 @@ impl ErrorExt for Error {
| Error::InvalidUnsetTableOptionRequest { .. }
| Error::InvalidSetFulltextOptionRequest { .. }
| Error::InvalidSetSkippingIndexOptionRequest { .. }
| Error::MissingAlterIndexOption { .. } => StatusCode::InvalidArguments,
| Error::MissingAlterIndexOption { .. }
| Error::InvalidIndexOption { .. } => StatusCode::InvalidArguments,
}
}

View File

@@ -201,8 +201,8 @@ impl ChannelManager {
"http"
};
let mut endpoint =
Endpoint::new(format!("{http_prefix}://{addr}")).context(CreateChannelSnafu)?;
let mut endpoint = Endpoint::new(format!("{http_prefix}://{addr}"))
.context(CreateChannelSnafu { addr })?;
if let Some(dur) = self.config().timeout {
endpoint = endpoint.timeout(dur);
@@ -237,7 +237,7 @@ impl ChannelManager {
if let Some(tls_config) = &self.inner.client_tls_config {
endpoint = endpoint
.tls_config(tls_config.clone())
.context(CreateChannelSnafu)?;
.context(CreateChannelSnafu { addr })?;
}
endpoint = endpoint

View File

@@ -52,8 +52,9 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to create gRPC channel"))]
#[snafu(display("Failed to create gRPC channel from '{addr}'"))]
CreateChannel {
addr: String,
#[snafu(source)]
error: tonic::transport::Error,
#[snafu(implicit)]

View File

@@ -17,7 +17,7 @@ workspace = true
anymap2 = "0.13.0"
api.workspace = true
async-recursion = "1.0"
async-stream = "0.3"
async-stream.workspace = true
async-trait.workspace = true
backon = { workspace = true, optional = true }
base64.workspace = true

View File

@@ -25,6 +25,7 @@ use common_procedure::error::{FromJsonSnafu, Result as ProcedureResult, ToJsonSn
use common_procedure::{Context, LockKey, Procedure, Status};
use common_telemetry::{error, info, warn};
use futures_util::future;
pub use region_request::make_alter_region_request;
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use store_api::metadata::ColumnMetadata;

View File

@@ -12,20 +12,18 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1;
use api::v1::alter_table_expr::Kind;
use api::v1::region::{
alter_request, region_request, AddColumn, AddColumns, AlterRequest, AlterRequests,
RegionColumnDef, RegionRequest, RegionRequestHeader,
};
use api::v1::{self, AlterTableExpr};
use common_telemetry::tracing_context::TracingContext;
use store_api::storage::RegionId;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::error::Result;
use crate::key::table_info::TableInfoValue;
use crate::peer::Peer;
use crate::rpc::ddl::AlterTableTask;
use crate::rpc::router::{find_leader_regions, RegionRoute};
impl AlterLogicalTablesProcedure {
@@ -62,34 +60,37 @@ impl AlterLogicalTablesProcedure {
{
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(table.table_info.ident.table_id, *region_number);
let request = self.make_alter_region_request(region_id, task, table)?;
let request = make_alter_region_request(
region_id,
&task.alter_table,
table.table_info.ident.version,
);
requests.push(request);
}
}
Ok(AlterRequests { requests })
}
}
fn make_alter_region_request(
&self,
region_id: RegionId,
task: &AlterTableTask,
table: &TableInfoValue,
) -> Result<AlterRequest> {
let region_id = region_id.as_u64();
let schema_version = table.table_info.ident.version;
let kind = match &task.alter_table.kind {
Some(Kind::AddColumns(add_columns)) => Some(alter_request::Kind::AddColumns(
to_region_add_columns(add_columns),
)),
_ => unreachable!(), // Safety: we have checked the kind in check_input_tasks
};
/// Makes an alter region request.
pub fn make_alter_region_request(
region_id: RegionId,
alter_table_expr: &AlterTableExpr,
schema_version: u64,
) -> AlterRequest {
let region_id = region_id.as_u64();
let kind = match &alter_table_expr.kind {
Some(Kind::AddColumns(add_columns)) => Some(alter_request::Kind::AddColumns(
to_region_add_columns(add_columns),
)),
_ => unreachable!(), // Safety: we have checked the kind in check_input_tasks
};
Ok(AlterRequest {
region_id,
schema_version,
kind,
})
AlterRequest {
region_id,
schema_version,
kind,
}
}
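A short, hedged sketch of calling the now-public helper from outside the procedure; `table_id`, `region_number`, `alter_table_expr`, and `schema_version` are placeholders, not names from the patch.

// The function is infallible now; unsupported kinds are still rejected earlier
// by check_input_tasks.
let request: AlterRequest = make_alter_region_request(
    RegionId::new(table_id, region_number),
    &alter_table_expr,
    schema_version,
);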

View File

@@ -135,6 +135,7 @@ fn create_proto_alter_kind(
Kind::UnsetTableOptions(v) => Ok(Some(alter_request::Kind::UnsetTableOptions(v.clone()))),
Kind::SetIndex(v) => Ok(Some(alter_request::Kind::SetIndex(v.clone()))),
Kind::UnsetIndex(v) => Ok(Some(alter_request::Kind::UnsetIndex(v.clone()))),
Kind::DropDefaults(v) => Ok(Some(alter_request::Kind::DropDefaults(v.clone()))),
}
}

View File

@@ -61,7 +61,8 @@ impl AlterTableProcedure {
| AlterKind::SetTableOptions { .. }
| AlterKind::UnsetTableOptions { .. }
| AlterKind::SetIndex { .. }
| AlterKind::UnsetIndex { .. } => {}
| AlterKind::UnsetIndex { .. }
| AlterKind::DropDefaults { .. } => {}
}
Ok(new_info)

View File

@@ -25,6 +25,7 @@ use common_procedure::error::{FromJsonSnafu, Result as ProcedureResult, ToJsonSn
use common_procedure::{Context as ProcedureContext, LockKey, Procedure, Status};
use common_telemetry::{debug, error, warn};
use futures::future;
pub use region_request::create_region_request_builder;
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use store_api::metadata::ColumnMetadata;

View File

@@ -15,16 +15,16 @@
use std::collections::HashMap;
use api::v1::region::{region_request, CreateRequests, RegionRequest, RegionRequestHeader};
use api::v1::CreateTableExpr;
use common_telemetry::debug;
use common_telemetry::tracing_context::TracingContext;
use store_api::storage::RegionId;
use store_api::storage::{RegionId, TableId};
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::ddl::create_table_template::{build_template, CreateRequestBuilder};
use crate::ddl::utils::region_storage_path;
use crate::error::Result;
use crate::peer::Peer;
use crate::rpc::ddl::CreateTableTask;
use crate::rpc::router::{find_leader_regions, RegionRoute};
impl CreateLogicalTablesProcedure {
@@ -45,13 +45,15 @@ impl CreateLogicalTablesProcedure {
let catalog = &create_table_expr.catalog_name;
let schema = &create_table_expr.schema_name;
let logical_table_id = task.table_info.ident.table_id;
let physical_table_id = self.data.physical_table_id;
let storage_path = region_storage_path(catalog, schema);
let request_builder = self.create_region_request_builder(task)?;
let request_builder =
create_region_request_builder(&task.create_table, physical_table_id)?;
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(logical_table_id, *region_number);
let one_region_request =
request_builder.build_one(region_id, storage_path.clone(), &HashMap::new())?;
request_builder.build_one(region_id, storage_path.clone(), &HashMap::new());
requests.push(one_region_request);
}
}
@@ -69,16 +71,13 @@ impl CreateLogicalTablesProcedure {
body: Some(region_request::Body::Creates(CreateRequests { requests })),
}))
}
fn create_region_request_builder(
&self,
task: &CreateTableTask,
) -> Result<CreateRequestBuilder> {
let create_expr = &task.create_table;
let template = build_template(create_expr)?;
Ok(CreateRequestBuilder::new(
template,
Some(self.data.physical_table_id),
))
}
}
/// Creates a region request builder.
pub fn create_region_request_builder(
create_table_expr: &CreateTableExpr,
physical_table_id: TableId,
) -> Result<CreateRequestBuilder> {
let template = build_template(create_table_expr)?;
Ok(CreateRequestBuilder::new(template, Some(physical_table_id)))
}
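A hedged sketch of the extracted builder path, mirroring the call sites above; `create_table_expr`, `physical_table_id`, `region_id`, and `storage_path` are placeholders.

let builder = create_region_request_builder(&create_table_expr, physical_table_id)?;
// build_one is infallible after this change, so per-region assembly cannot fail.
let create_request = builder.build_one(region_id, storage_path.clone(), &HashMap::new());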

View File

@@ -218,11 +218,8 @@ impl CreateTableProcedure {
let mut requests = Vec::with_capacity(regions.len());
for region_number in regions {
let region_id = RegionId::new(self.table_id(), region_number);
let create_region_request = request_builder.build_one(
region_id,
storage_path.clone(),
region_wal_options,
)?;
let create_region_request =
request_builder.build_one(region_id, storage_path.clone(), region_wal_options);
requests.push(PbRegionRequest::Create(create_region_request));
}

View File

@@ -105,12 +105,12 @@ impl CreateRequestBuilder {
&self.template
}
pub(crate) fn build_one(
pub fn build_one(
&self,
region_id: RegionId,
storage_path: String,
region_wal_options: &HashMap<RegionNumber, String>,
) -> Result<CreateRequest> {
) -> CreateRequest {
let mut request = self.template.clone();
request.region_id = region_id.as_u64();
@@ -130,6 +130,6 @@ impl CreateRequestBuilder {
);
}
Ok(request)
request
}
}

View File

@@ -13,7 +13,6 @@
// limitations under the License.
use std::any::Any;
use std::collections::HashMap;
use common_procedure::Status;
use common_telemetry::info;
@@ -25,7 +24,7 @@ use table::table_name::TableName;
use crate::ddl::drop_database::cursor::DropDatabaseCursor;
use crate::ddl::drop_database::{DropDatabaseContext, DropTableTarget, State};
use crate::ddl::drop_table::executor::DropTableExecutor;
use crate::ddl::utils::extract_region_wal_options;
use crate::ddl::utils::get_region_wal_options;
use crate::ddl::DdlContext;
use crate::error::{self, Result};
use crate::key::table_route::TableRouteValue;
@@ -109,17 +108,12 @@ impl State for DropDatabaseExecutor {
);
// Deletes topic-region mapping if dropping physical table
let region_wal_options =
if let TableRouteValue::Physical(table_route_value) = &table_route_value {
let datanode_table_values = ddl_ctx
.table_metadata_manager
.datanode_table_manager()
.regions(self.physical_table_id, table_route_value)
.await?;
extract_region_wal_options(&datanode_table_values)?
} else {
HashMap::new()
};
let region_wal_options = get_region_wal_options(
&ddl_ctx.table_metadata_manager,
&table_route_value,
self.physical_table_id,
)
.await?;
executor
.on_destroy_metadata(ddl_ctx, &table_route_value, &region_wal_options)

View File

@@ -42,7 +42,8 @@ use crate::error::{
};
use crate::key::datanode_table::DatanodeTableValue;
use crate::key::table_name::TableNameKey;
use crate::key::TableMetadataManagerRef;
use crate::key::table_route::TableRouteValue;
use crate::key::{TableMetadataManager, TableMetadataManagerRef};
use crate::peer::Peer;
use crate::rpc::ddl::CreateTableTask;
use crate::rpc::router::{find_follower_regions, find_followers, RegionRoute};
@@ -187,6 +188,25 @@ pub fn parse_region_wal_options(
Ok(region_wal_options)
}
/// Gets the wal options for a table.
pub async fn get_region_wal_options(
table_metadata_manager: &TableMetadataManager,
table_route_value: &TableRouteValue,
physical_table_id: TableId,
) -> Result<HashMap<RegionNumber, WalOptions>> {
let region_wal_options =
if let TableRouteValue::Physical(table_route_value) = &table_route_value {
let datanode_table_values = table_metadata_manager
.datanode_table_manager()
.regions(physical_table_id, table_route_value)
.await?;
extract_region_wal_options(&datanode_table_values)?
} else {
HashMap::new()
};
Ok(region_wal_options)
}
/// Extracts region wal options from [DatanodeTableValue]s.
pub fn extract_region_wal_options(
datanode_table_values: &Vec<DatanodeTableValue>,

View File

@@ -125,13 +125,12 @@ impl DdlManager {
ddl_context: DdlContext,
procedure_manager: ProcedureManagerRef,
register_loaders: bool,
#[cfg(feature = "enterprise")] trigger_ddl_manager: Option<TriggerDdlManagerRef>,
) -> Result<Self> {
let manager = Self {
ddl_context,
procedure_manager,
#[cfg(feature = "enterprise")]
trigger_ddl_manager,
trigger_ddl_manager: None,
};
if register_loaders {
manager.register_loaders()?;
@@ -139,6 +138,15 @@ impl DdlManager {
Ok(manager)
}
#[cfg(feature = "enterprise")]
pub fn with_trigger_ddl_manager(
mut self,
trigger_ddl_manager: Option<TriggerDdlManagerRef>,
) -> Self {
self.trigger_ddl_manager = trigger_ddl_manager;
self
}
/// Returns the [TableMetadataManagerRef].
pub fn table_metadata_manager(&self) -> &TableMetadataManagerRef {
&self.ddl_context.table_metadata_manager
@@ -964,8 +972,6 @@ mod tests {
},
procedure_manager.clone(),
true,
#[cfg(feature = "enterprise")]
None,
);
let expected_loaders = vec![

View File

@@ -109,7 +109,7 @@ pub mod table_name;
pub mod table_route;
#[cfg(any(test, feature = "testing"))]
pub mod test_utils;
mod tombstone;
pub mod tombstone;
pub mod topic_name;
pub mod topic_region;
pub mod txn_helper;
@@ -535,6 +535,29 @@ impl TableMetadataManager {
}
}
/// Creates a new `TableMetadataManager` with a custom tombstone prefix.
pub fn new_with_custom_tombstone_prefix(
kv_backend: KvBackendRef,
tombstone_prefix: &str,
) -> Self {
Self {
table_name_manager: TableNameManager::new(kv_backend.clone()),
table_info_manager: TableInfoManager::new(kv_backend.clone()),
view_info_manager: ViewInfoManager::new(kv_backend.clone()),
datanode_table_manager: DatanodeTableManager::new(kv_backend.clone()),
catalog_manager: CatalogManager::new(kv_backend.clone()),
schema_manager: SchemaManager::new(kv_backend.clone()),
table_route_manager: TableRouteManager::new(kv_backend.clone()),
tombstone_manager: TombstoneManager::new_with_prefix(
kv_backend.clone(),
tombstone_prefix,
),
topic_name_manager: TopicNameManager::new(kv_backend.clone()),
topic_region_manager: TopicRegionManager::new(kv_backend.clone()),
kv_backend,
}
}
pub async fn init(&self) -> Result<()> {
let catalog_name = CatalogNameKey::new(DEFAULT_CATALOG_NAME);
@@ -925,7 +948,7 @@ impl TableMetadataManager {
) -> Result<()> {
let keys =
self.table_metadata_keys(table_id, table_name, table_route_value, region_wal_options)?;
self.tombstone_manager.create(keys).await
self.tombstone_manager.create(keys).await.map(|_| ())
}
/// Deletes metadata tombstone for table **permanently**.
@@ -939,7 +962,10 @@ impl TableMetadataManager {
) -> Result<()> {
let table_metadata_keys =
self.table_metadata_keys(table_id, table_name, table_route_value, region_wal_options)?;
self.tombstone_manager.delete(table_metadata_keys).await
self.tombstone_manager
.delete(table_metadata_keys)
.await
.map(|_| ())
}
/// Restores metadata for table.
@@ -953,7 +979,7 @@ impl TableMetadataManager {
) -> Result<()> {
let keys =
self.table_metadata_keys(table_id, table_name, table_route_value, region_wal_options)?;
self.tombstone_manager.restore(keys).await
self.tombstone_manager.restore(keys).await.map(|_| ())
}
/// Deletes metadata for table **permanently**.

View File

@@ -14,31 +14,51 @@
use std::collections::HashMap;
use common_telemetry::debug;
use snafu::ensure;
use crate::error::{self, Result};
use crate::key::txn_helper::TxnOpGetResponseSet;
use crate::kv_backend::txn::{Compare, CompareOp, Txn, TxnOp};
use crate::kv_backend::KvBackendRef;
use crate::rpc::store::BatchGetRequest;
use crate::rpc::store::{BatchDeleteRequest, BatchGetRequest};
/// [TombstoneManager] provides the ability to:
/// - logically delete values
/// - restore the deleted values
pub(crate) struct TombstoneManager {
pub struct TombstoneManager {
kv_backend: KvBackendRef,
tombstone_prefix: String,
// Only used for testing.
#[cfg(test)]
max_txn_ops: Option<usize>,
}
const TOMBSTONE_PREFIX: &str = "__tombstone/";
fn to_tombstone(key: &[u8]) -> Vec<u8> {
[TOMBSTONE_PREFIX.as_bytes(), key].concat()
}
impl TombstoneManager {
/// Returns [TombstoneManager].
pub fn new(kv_backend: KvBackendRef) -> Self {
Self { kv_backend }
Self::new_with_prefix(kv_backend, TOMBSTONE_PREFIX)
}
/// Returns [TombstoneManager] with a custom tombstone prefix.
pub fn new_with_prefix(kv_backend: KvBackendRef, prefix: &str) -> Self {
Self {
kv_backend,
tombstone_prefix: prefix.to_string(),
#[cfg(test)]
max_txn_ops: None,
}
}
pub fn to_tombstone(&self, key: &[u8]) -> Vec<u8> {
[self.tombstone_prefix.as_bytes(), key].concat()
}
#[cfg(test)]
pub fn set_max_txn_ops(&mut self, max_txn_ops: usize) {
self.max_txn_ops = Some(max_txn_ops);
}
/// Moves value to `dest_key`.
@@ -67,11 +87,15 @@ impl TombstoneManager {
(txn, TxnOpGetResponseSet::filter(src_key))
}
async fn move_values_inner(&self, keys: &[Vec<u8>], dest_keys: &[Vec<u8>]) -> Result<()> {
async fn move_values_inner(&self, keys: &[Vec<u8>], dest_keys: &[Vec<u8>]) -> Result<usize> {
ensure!(
keys.len() == dest_keys.len(),
error::UnexpectedSnafu {
err_msg: "The length of keys does not match the length of dest_keys."
err_msg: format!(
"The length of keys({}) does not match the length of dest_keys({}).",
keys.len(),
dest_keys.len()
),
}
);
// The key -> dest key mapping.
@@ -102,7 +126,7 @@ impl TombstoneManager {
.unzip();
let mut resp = self.kv_backend.txn(Txn::merge_all(txns)).await?;
if resp.succeeded {
return Ok(());
return Ok(keys.len());
}
let mut set = TxnOpGetResponseSet::from(&mut resp.responses);
// Updates results.
@@ -124,17 +148,45 @@ impl TombstoneManager {
.fail()
}
/// Moves values to `dest_key`.
async fn move_values(&self, keys: Vec<Vec<u8>>, dest_keys: Vec<Vec<u8>>) -> Result<()> {
let chunk_size = self.kv_backend.max_txn_ops() / 2;
if keys.len() > chunk_size {
let keys_chunks = keys.chunks(chunk_size).collect::<Vec<_>>();
let dest_keys_chunks = keys.chunks(chunk_size).collect::<Vec<_>>();
for (keys, dest_keys) in keys_chunks.into_iter().zip(dest_keys_chunks) {
self.move_values_inner(keys, dest_keys).await?;
}
fn max_txn_ops(&self) -> usize {
#[cfg(test)]
if let Some(max_txn_ops) = self.max_txn_ops {
return max_txn_ops;
}
self.kv_backend.max_txn_ops()
}
Ok(())
/// Moves values to `dest_keys`.
///
/// Returns the number of keys that were moved.
async fn move_values(&self, keys: Vec<Vec<u8>>, dest_keys: Vec<Vec<u8>>) -> Result<usize> {
ensure!(
keys.len() == dest_keys.len(),
error::UnexpectedSnafu {
err_msg: format!(
"The length of keys({}) does not match the length of dest_keys({}).",
keys.len(),
dest_keys.len()
),
}
);
if keys.is_empty() {
return Ok(0);
}
let chunk_size = self.max_txn_ops() / 2;
if keys.len() > chunk_size {
debug!(
"Moving values with multiple chunks, keys len: {}, chunk_size: {}",
keys.len(),
chunk_size
);
let mut moved_keys = 0;
let keys_chunks = keys.chunks(chunk_size).collect::<Vec<_>>();
let dest_keys_chunks = dest_keys.chunks(chunk_size).collect::<Vec<_>>();
for (keys, dest_keys) in keys_chunks.into_iter().zip(dest_keys_chunks) {
moved_keys += self.move_values_inner(keys, dest_keys).await?;
}
Ok(moved_keys)
} else {
self.move_values_inner(&keys, &dest_keys).await
}
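The chunking above caps each transaction at `max_txn_ops() / 2` keys, presumably because every moved key needs two operations (write the tombstone copy and delete the original). A standalone sketch of the arithmetic with illustrative numbers:

let max_txn_ops = 128usize;
let chunk_size = max_txn_ops / 2; // two txn ops per moved key
let keys: Vec<Vec<u8>> = (0..200).map(|i| format!("key-{i}").into_bytes()).collect();

// 200 keys with a chunk size of 64 are moved in 4 transactions.
let txn_count = keys.chunks(chunk_size).count();
assert_eq!(txn_count, 200usize.div_ceil(chunk_size));
assert_eq!(txn_count, 4);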
@@ -145,11 +197,13 @@ impl TombstoneManager {
/// Performs the following:
/// - deletes the original values.
/// - stores tombstone values.
pub(crate) async fn create(&self, keys: Vec<Vec<u8>>) -> Result<()> {
///
/// Returns the number of keys that were moved.
pub async fn create(&self, keys: Vec<Vec<u8>>) -> Result<usize> {
let (keys, dest_keys): (Vec<_>, Vec<_>) = keys
.into_iter()
.map(|key| {
let tombstone_key = to_tombstone(&key);
let tombstone_key = self.to_tombstone(&key);
(key, tombstone_key)
})
.unzip();
@@ -162,11 +216,13 @@ impl TombstoneManager {
/// Performs the following:
/// - restores the original values.
/// - deletes tombstone values.
pub(crate) async fn restore(&self, keys: Vec<Vec<u8>>) -> Result<()> {
///
/// Returns the number of keys that were restored.
pub async fn restore(&self, keys: Vec<Vec<u8>>) -> Result<usize> {
let (keys, dest_keys): (Vec<_>, Vec<_>) = keys
.into_iter()
.map(|key| {
let tombstone_key = to_tombstone(&key);
let tombstone_key = self.to_tombstone(&key);
(tombstone_key, key)
})
.unzip();
@@ -175,16 +231,21 @@ impl TombstoneManager {
}
/// Deletes tombstone values for the specified `keys`.
pub(crate) async fn delete(&self, keys: Vec<Vec<u8>>) -> Result<()> {
let operations = keys
///
/// Returns the number of keys that were deleted.
pub async fn delete(&self, keys: Vec<Vec<u8>>) -> Result<usize> {
let keys = keys
.iter()
.map(|key| TxnOp::Delete(to_tombstone(key)))
.map(|key| self.to_tombstone(key))
.collect::<Vec<_>>();
let txn = Txn::new().and_then(operations);
// Always success.
let _ = self.kv_backend.txn(txn).await?;
Ok(())
let num_keys = keys.len();
let _ = self
.kv_backend
.batch_delete(BatchDeleteRequest::new().with_keys(keys))
.await?;
Ok(num_keys)
}
}
@@ -194,7 +255,6 @@ mod tests {
use std::collections::HashMap;
use std::sync::Arc;
use super::to_tombstone;
use crate::error::Error;
use crate::key::tombstone::TombstoneManager;
use crate::kv_backend::memory::MemoryKvBackend;
@@ -246,7 +306,7 @@ mod tests {
assert!(!kv_backend.exists(b"foo").await.unwrap());
assert_eq!(
kv_backend
.get(&to_tombstone(b"bar"))
.get(&tombstone_manager.to_tombstone(b"bar"))
.await
.unwrap()
.unwrap()
@@ -255,7 +315,7 @@ mod tests {
);
assert_eq!(
kv_backend
.get(&to_tombstone(b"foo"))
.get(&tombstone_manager.to_tombstone(b"foo"))
.await
.unwrap()
.unwrap()
@@ -287,7 +347,7 @@ mod tests {
kv_backend.clone(),
&[MoveValue {
key: b"bar".to_vec(),
dest_key: to_tombstone(b"bar"),
dest_key: tombstone_manager.to_tombstone(b"bar"),
value: b"baz".to_vec(),
}],
)
@@ -364,7 +424,7 @@ mod tests {
.iter()
.map(|(key, value)| MoveValue {
key: key.clone(),
dest_key: to_tombstone(key),
dest_key: tombstone_manager.to_tombstone(key),
value: value.clone(),
})
.collect::<Vec<_>>();
@@ -373,16 +433,73 @@ mod tests {
.into_iter()
.map(|kv| (kv.key, kv.dest_key))
.unzip();
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
assert_eq!(kvs.len(), moved_keys);
check_moved_values(kv_backend.clone(), &move_values).await;
// Moves again
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
assert_eq!(0, moved_keys);
check_moved_values(kv_backend.clone(), &move_values).await;
}
#[tokio::test]
async fn test_move_values_with_max_txn_ops() {
common_telemetry::init_default_ut_logging();
let kv_backend = Arc::new(MemoryKvBackend::default());
let mut tombstone_manager = TombstoneManager::new(kv_backend.clone());
tombstone_manager.set_max_txn_ops(4);
let kvs = HashMap::from([
(b"bar".to_vec(), b"baz".to_vec()),
(b"foo".to_vec(), b"hi".to_vec()),
(b"baz".to_vec(), b"hello".to_vec()),
(b"qux".to_vec(), b"world".to_vec()),
(b"quux".to_vec(), b"world".to_vec()),
(b"quuux".to_vec(), b"world".to_vec()),
(b"quuuux".to_vec(), b"world".to_vec()),
(b"quuuuux".to_vec(), b"world".to_vec()),
(b"quuuuuux".to_vec(), b"world".to_vec()),
]);
for (key, value) in &kvs {
kv_backend
.put(
PutRequest::new()
.with_key(key.clone())
.with_value(value.clone()),
)
.await
.unwrap();
}
let move_values = kvs
.iter()
.map(|(key, value)| MoveValue {
key: key.clone(),
dest_key: tombstone_manager.to_tombstone(key),
value: value.clone(),
})
.collect::<Vec<_>>();
let (keys, dest_keys): (Vec<_>, Vec<_>) = move_values
.clone()
.into_iter()
.map(|kv| (kv.key, kv.dest_key))
.unzip();
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
assert_eq!(kvs.len(), moved_keys);
check_moved_values(kv_backend.clone(), &move_values).await;
// Moves again
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
assert_eq!(0, moved_keys);
check_moved_values(kv_backend.clone(), &move_values).await;
}
@@ -409,7 +526,7 @@ mod tests {
.iter()
.map(|(key, value)| MoveValue {
key: key.clone(),
dest_key: to_tombstone(key),
dest_key: tombstone_manager.to_tombstone(key),
value: value.clone(),
})
.collect::<Vec<_>>();
@@ -420,17 +537,19 @@ mod tests {
.unzip();
keys.push(b"non-exists".to_vec());
dest_keys.push(b"hi/non-exists".to_vec());
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
check_moved_values(kv_backend.clone(), &move_values).await;
assert_eq!(3, moved_keys);
// Moves again
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
check_moved_values(kv_backend.clone(), &move_values).await;
assert_eq!(0, moved_keys);
}
#[tokio::test]
@@ -462,7 +581,7 @@ mod tests {
.iter()
.map(|(key, value)| MoveValue {
key: key.clone(),
dest_key: to_tombstone(key),
dest_key: tombstone_manager.to_tombstone(key),
value: value.clone(),
})
.collect::<Vec<_>>();
@@ -471,10 +590,11 @@ mod tests {
.into_iter()
.map(|kv| (kv.key, kv.dest_key))
.unzip();
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys, dest_keys)
.await
.unwrap();
assert_eq!(kvs.len(), moved_keys);
}
#[tokio::test]
@@ -502,7 +622,7 @@ mod tests {
.iter()
.map(|(key, value)| MoveValue {
key: key.clone(),
dest_key: to_tombstone(key),
dest_key: tombstone_manager.to_tombstone(key),
value: value.clone(),
})
.collect::<Vec<_>>();
@@ -537,7 +657,7 @@ mod tests {
.iter()
.map(|(key, value)| MoveValue {
key: key.clone(),
dest_key: to_tombstone(key),
dest_key: tombstone_manager.to_tombstone(key),
value: value.clone(),
})
.collect::<Vec<_>>();
@@ -552,4 +672,24 @@ mod tests {
.unwrap();
check_moved_values(kv_backend.clone(), &move_values).await;
}
#[tokio::test]
async fn test_move_values_with_different_lengths() {
let kv_backend = Arc::new(MemoryKvBackend::default());
let tombstone_manager = TombstoneManager::new(kv_backend.clone());
let keys = vec![b"bar".to_vec(), b"foo".to_vec()];
let dest_keys = vec![b"bar".to_vec(), b"foo".to_vec(), b"baz".to_vec()];
let err = tombstone_manager
.move_values(keys, dest_keys)
.await
.unwrap_err();
assert!(err
.to_string()
.contains("The length of keys(2) does not match the length of dest_keys(3)."),);
let moved_keys = tombstone_manager.move_values(vec![], vec![]).await.unwrap();
assert_eq!(0, moved_keys);
}
}

View File

@@ -54,20 +54,20 @@ impl<T> MemoryKvBackend<T> {
kvs.clear();
}
#[cfg(test)]
#[cfg(any(test, feature = "testing"))]
/// Returns true if the `kvs` is empty.
pub fn is_empty(&self) -> bool {
self.kvs.read().unwrap().is_empty()
}
#[cfg(test)]
#[cfg(any(test, feature = "testing"))]
/// Returns the `kvs`.
pub fn dump(&self) -> BTreeMap<Vec<u8>, Vec<u8>> {
let kvs = self.kvs.read().unwrap();
kvs.clone()
}
#[cfg(test)]
#[cfg(any(test, feature = "testing"))]
/// Returns the length of `kvs`
pub fn len(&self) -> usize {
self.kvs.read().unwrap().len()

View File

@@ -15,7 +15,7 @@
use std::sync::Arc;
use api::region::RegionResponse;
use api::v1::flow::{FlowRequest, FlowResponse};
use api::v1::flow::{DirtyWindowRequest, FlowRequest, FlowResponse};
use api::v1::region::{InsertRequests, RegionRequest};
pub use common_base::AffectedRows;
use common_query::request::QueryRequest;
@@ -42,6 +42,9 @@ pub trait Flownode: Send + Sync {
async fn handle(&self, request: FlowRequest) -> Result<FlowResponse>;
async fn handle_inserts(&self, request: InsertRequests) -> Result<FlowResponse>;
/// Handles requests to mark time window as dirty.
async fn handle_mark_window_dirty(&self, req: DirtyWindowRequest) -> Result<FlowResponse>;
}
pub type FlownodeRef = Arc<dyn Flownode>;

View File

@@ -15,7 +15,7 @@
use std::sync::Arc;
use api::region::RegionResponse;
use api::v1::flow::{FlowRequest, FlowResponse};
use api::v1::flow::{DirtyWindowRequest, FlowRequest, FlowResponse};
use api::v1::region::{InsertRequests, RegionRequest};
pub use common_base::AffectedRows;
use common_query::request::QueryRequest;
@@ -67,6 +67,14 @@ pub trait MockFlownodeHandler: Sync + Send + Clone {
) -> Result<FlowResponse> {
unimplemented!()
}
async fn handle_mark_window_dirty(
&self,
_peer: &Peer,
_req: DirtyWindowRequest,
) -> Result<FlowResponse> {
unimplemented!()
}
}
/// A mock struct implements [NodeManager] only implement the `datanode` method.
@@ -134,6 +142,10 @@ impl<T: MockFlownodeHandler> Flownode for MockNode<T> {
async fn handle_inserts(&self, requests: InsertRequests) -> Result<FlowResponse> {
self.handler.handle_inserts(&self.peer, requests).await
}
async fn handle_mark_window_dirty(&self, req: DirtyWindowRequest) -> Result<FlowResponse> {
self.handler.handle_mark_window_dirty(&self.peer, req).await
}
}
#[async_trait::async_trait]

View File

@@ -173,13 +173,13 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Stream timeout"))]
StreamTimeout {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: tokio::time::error::Elapsed,
},
#[snafu(display("RecordBatch slice index overflow: {visit_index} > {size}"))]
RecordBatchSliceIndexOverflow {
#[snafu(implicit)]
@@ -187,6 +187,12 @@ pub enum Error {
size: usize,
visit_index: usize,
},
#[snafu(display("Stream has been cancelled"))]
StreamCancelled {
#[snafu(implicit)]
location: Location,
},
}
impl ErrorExt for Error {
@@ -221,6 +227,8 @@ impl ErrorExt for Error {
}
Error::StreamTimeout { .. } => StatusCode::Cancelled,
Error::StreamCancelled { .. } => StatusCode::Cancelled,
}
}

View File

@@ -12,7 +12,6 @@ deadlock_detection = ["parking_lot/deadlock_detection"]
workspace = true
[dependencies]
atty = "0.2"
backtrace = "0.3"
common-error.workspace = true
console-subscriber = { version = "0.1", optional = true }

View File

@@ -14,6 +14,7 @@
//! logging stuff, inspired by databend
use std::env;
use std::io::IsTerminal;
use std::sync::{Arc, Mutex, Once};
use std::time::Duration;
@@ -221,14 +222,14 @@ pub fn init_global_logging(
Layer::new()
.json()
.with_writer(writer)
.with_ansi(atty::is(atty::Stream::Stdout))
.with_ansi(std::io::stdout().is_terminal())
.boxed(),
)
} else {
Some(
Layer::new()
.with_writer(writer)
.with_ansi(atty::is(atty::Stream::Stdout))
.with_ansi(std::io::stdout().is_terminal())
.boxed(),
)
}

View File

@@ -475,7 +475,7 @@ mod test {
async fn region_alive_keeper() {
common_telemetry::init_default_ut_logging();
let mut region_server = mock_region_server();
let mut engine_env = TestEnv::with_prefix("region-alive-keeper");
let mut engine_env = TestEnv::with_prefix("region-alive-keeper").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
let engine = Arc::new(engine);
region_server.register_engine(engine.clone());

View File

@@ -144,6 +144,9 @@ pub struct HttpClientConfig {
/// The timeout for idle sockets being kept-alive.
#[serde(with = "humantime_serde")]
pub(crate) pool_idle_timeout: Duration,
/// Skip SSL certificate validation (insecure)
pub skip_ssl_validation: bool,
}
impl Default for HttpClientConfig {
@@ -153,6 +156,7 @@ impl Default for HttpClientConfig {
connect_timeout: Duration::from_secs(30),
timeout: Duration::from_secs(30),
pool_idle_timeout: Duration::from_secs(90),
skip_ssl_validation: false,
}
}
}
@@ -514,4 +518,48 @@ mod tests {
_ => unreachable!(),
}
}
#[test]
fn test_skip_ssl_validation_config() {
// Test with skip_ssl_validation = true
let toml_str_true = r#"
[storage]
type = "S3"
[storage.http_client]
skip_ssl_validation = true
"#;
let opts: DatanodeOptions = toml::from_str(toml_str_true).unwrap();
match &opts.storage.store {
ObjectStoreConfig::S3(cfg) => {
assert!(cfg.http_client.skip_ssl_validation);
}
_ => panic!("Expected S3 config"),
}
// Test with skip_ssl_validation = false
let toml_str_false = r#"
[storage]
type = "S3"
[storage.http_client]
skip_ssl_validation = false
"#;
let opts: DatanodeOptions = toml::from_str(toml_str_false).unwrap();
match &opts.storage.store {
ObjectStoreConfig::S3(cfg) => {
assert!(!cfg.http_client.skip_ssl_validation);
}
_ => panic!("Expected S3 config"),
}
// Test default value (should be false)
let toml_str_default = r#"
[storage]
type = "S3"
"#;
let opts: DatanodeOptions = toml::from_str(toml_str_default).unwrap();
match &opts.storage.store {
ObjectStoreConfig::S3(cfg) => {
assert!(!cfg.http_client.skip_ssl_validation);
}
_ => panic!("Expected S3 config"),
}
}
}

View File

@@ -278,7 +278,7 @@ mod tests {
let mut region_server = mock_region_server();
let heartbeat_handler = RegionHeartbeatResponseHandler::new(region_server.clone());
let mut engine_env = TestEnv::with_prefix("close-region");
let mut engine_env = TestEnv::with_prefix("close-region").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
region_server.register_engine(Arc::new(engine));
let region_id = RegionId::new(1024, 1);
@@ -326,7 +326,7 @@ mod tests {
let mut region_server = mock_region_server();
let heartbeat_handler = RegionHeartbeatResponseHandler::new(region_server.clone());
let mut engine_env = TestEnv::with_prefix("open-region");
let mut engine_env = TestEnv::with_prefix("open-region").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
region_server.register_engine(Arc::new(engine));
let region_id = RegionId::new(1024, 1);
@@ -374,7 +374,7 @@ mod tests {
let mut region_server = mock_region_server();
let heartbeat_handler = RegionHeartbeatResponseHandler::new(region_server.clone());
let mut engine_env = TestEnv::with_prefix("open-not-exists-region");
let mut engine_env = TestEnv::with_prefix("open-not-exists-region").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
region_server.register_engine(Arc::new(engine));
let region_id = RegionId::new(1024, 1);
@@ -406,7 +406,7 @@ mod tests {
let mut region_server = mock_region_server();
let heartbeat_handler = RegionHeartbeatResponseHandler::new(region_server.clone());
let mut engine_env = TestEnv::with_prefix("downgrade-region");
let mut engine_env = TestEnv::with_prefix("downgrade-region").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
region_server.register_engine(Arc::new(engine));
let region_id = RegionId::new(1024, 1);

View File

@@ -17,7 +17,7 @@ use std::sync::Arc;
use common_config::Configurable;
use servers::grpc::builder::GrpcServerBuilder;
use servers::grpc::{GrpcServer, GrpcServerConfig};
use servers::grpc::GrpcServer;
use servers::http::HttpServerBuilder;
use servers::metrics_handler::MetricsHandler;
use servers::server::{ServerHandler, ServerHandlers};
@@ -92,13 +92,7 @@ impl<'a> DatanodeServiceBuilder<'a> {
opts: &DatanodeOptions,
region_server: &RegionServer,
) -> GrpcServerBuilder {
let config = GrpcServerConfig {
max_recv_message_size: opts.grpc.max_recv_message_size.as_bytes() as usize,
max_send_message_size: opts.grpc.max_send_message_size.as_bytes() as usize,
tls: opts.grpc.tls.clone(),
};
GrpcServerBuilder::new(config, region_server.runtime())
GrpcServerBuilder::new(opts.grpc.as_config(), region_server.runtime())
.flight_handler(Arc::new(region_server.clone()))
.region_server_handler(Arc::new(region_server.clone()))
}

View File

@@ -207,11 +207,16 @@ pub(crate) fn clean_temp_dir(dir: &str) -> Result<()> {
}
pub(crate) fn build_http_client(config: &HttpClientConfig) -> Result<HttpClient> {
if config.skip_ssl_validation {
common_telemetry::warn!("Skipping SSL validation for object storage HTTP client. Please ensure the environment is trusted.");
}
let client = reqwest::ClientBuilder::new()
.pool_max_idle_per_host(config.pool_max_idle_per_host as usize)
.connect_timeout(config.connect_timeout)
.pool_idle_timeout(config.pool_idle_timeout)
.timeout(config.timeout)
.danger_accept_invalid_certs(config.skip_ssl_validation)
.build()
.context(BuildHttpClientSnafu)?;
Ok(HttpClient::with(client))

View File

@@ -31,9 +31,10 @@ pub use crate::schema::column_schema::{
ColumnSchema, FulltextAnalyzer, FulltextBackend, FulltextOptions, Metadata,
SkippingIndexOptions, SkippingIndexType, COLUMN_FULLTEXT_CHANGE_OPT_KEY_ENABLE,
COLUMN_FULLTEXT_OPT_KEY_ANALYZER, COLUMN_FULLTEXT_OPT_KEY_BACKEND,
COLUMN_FULLTEXT_OPT_KEY_CASE_SENSITIVE, COLUMN_SKIPPING_INDEX_OPT_KEY_GRANULARITY,
COLUMN_SKIPPING_INDEX_OPT_KEY_TYPE, COMMENT_KEY, FULLTEXT_KEY, INVERTED_INDEX_KEY,
SKIPPING_INDEX_KEY, TIME_INDEX_KEY,
COLUMN_FULLTEXT_OPT_KEY_CASE_SENSITIVE, COLUMN_FULLTEXT_OPT_KEY_FALSE_POSITIVE_RATE,
COLUMN_FULLTEXT_OPT_KEY_GRANULARITY, COLUMN_SKIPPING_INDEX_OPT_KEY_FALSE_POSITIVE_RATE,
COLUMN_SKIPPING_INDEX_OPT_KEY_GRANULARITY, COLUMN_SKIPPING_INDEX_OPT_KEY_TYPE, COMMENT_KEY,
FULLTEXT_KEY, INVERTED_INDEX_KEY, SKIPPING_INDEX_KEY, TIME_INDEX_KEY,
};
pub use crate::schema::constraint::ColumnDefaultConstraint;
pub use crate::schema::raw::RawSchema;

View File

@@ -47,13 +47,18 @@ pub const COLUMN_FULLTEXT_CHANGE_OPT_KEY_ENABLE: &str = "enable";
pub const COLUMN_FULLTEXT_OPT_KEY_ANALYZER: &str = "analyzer";
pub const COLUMN_FULLTEXT_OPT_KEY_CASE_SENSITIVE: &str = "case_sensitive";
pub const COLUMN_FULLTEXT_OPT_KEY_BACKEND: &str = "backend";
pub const COLUMN_FULLTEXT_OPT_KEY_GRANULARITY: &str = "granularity";
pub const COLUMN_FULLTEXT_OPT_KEY_FALSE_POSITIVE_RATE: &str = "false_positive_rate";
/// Keys used in SKIPPING index options
pub const COLUMN_SKIPPING_INDEX_OPT_KEY_GRANULARITY: &str = "granularity";
pub const COLUMN_SKIPPING_INDEX_OPT_KEY_FALSE_POSITIVE_RATE: &str = "false_positive_rate";
pub const COLUMN_SKIPPING_INDEX_OPT_KEY_TYPE: &str = "type";
pub const DEFAULT_GRANULARITY: u32 = 10240;
pub const DEFAULT_FALSE_POSITIVE_RATE: f64 = 0.01;
/// Schema of a column, used as an immutable struct.
#[derive(Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ColumnSchema {
@@ -504,7 +509,7 @@ impl TryFrom<&ColumnSchema> for Field {
}
/// Fulltext options for a column.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Default, Visit, VisitMut)]
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Visit, VisitMut)]
#[serde(rename_all = "kebab-case")]
pub struct FulltextOptions {
/// Whether the fulltext index is enabled.
@@ -518,6 +523,92 @@ pub struct FulltextOptions {
/// The fulltext backend to use.
#[serde(default)]
pub backend: FulltextBackend,
/// The granularity of the fulltext index (for bloom backend only)
#[serde(default = "fulltext_options_default_granularity")]
pub granularity: u32,
/// The false positive rate of the fulltext index (for bloom backend only)
#[serde(default = "index_options_default_false_positive_rate_in_10000")]
pub false_positive_rate_in_10000: u32,
}
fn fulltext_options_default_granularity() -> u32 {
DEFAULT_GRANULARITY
}
fn index_options_default_false_positive_rate_in_10000() -> u32 {
(DEFAULT_FALSE_POSITIVE_RATE * 10000.0) as u32
}
impl FulltextOptions {
/// Creates new fulltext options.
pub fn new(
enable: bool,
analyzer: FulltextAnalyzer,
case_sensitive: bool,
backend: FulltextBackend,
granularity: u32,
false_positive_rate: f64,
) -> Result<Self> {
ensure!(
0.0 < false_positive_rate && false_positive_rate <= 1.0,
error::InvalidFulltextOptionSnafu {
msg: format!(
"Invalid false positive rate: {false_positive_rate}, expected: 0.0 < rate <= 1.0"
),
}
);
ensure!(
granularity > 0,
error::InvalidFulltextOptionSnafu {
msg: format!("Invalid granularity: {granularity}, expected: positive integer"),
}
);
Ok(Self::new_unchecked(
enable,
analyzer,
case_sensitive,
backend,
granularity,
false_positive_rate,
))
}
/// Creates new fulltext options without validating `false_positive_rate` and `granularity`.
pub fn new_unchecked(
enable: bool,
analyzer: FulltextAnalyzer,
case_sensitive: bool,
backend: FulltextBackend,
granularity: u32,
false_positive_rate: f64,
) -> Self {
Self {
enable,
analyzer,
case_sensitive,
backend,
granularity,
false_positive_rate_in_10000: (false_positive_rate * 10000.0) as u32,
}
}
/// Gets the false positive rate.
pub fn false_positive_rate(&self) -> f64 {
self.false_positive_rate_in_10000 as f64 / 10000.0
}
}
impl Default for FulltextOptions {
fn default() -> Self {
Self::new_unchecked(
false,
FulltextAnalyzer::default(),
false,
FulltextBackend::default(),
DEFAULT_GRANULARITY,
DEFAULT_FALSE_POSITIVE_RATE,
)
}
}
impl fmt::Display for FulltextOptions {
@@ -527,6 +618,10 @@ impl fmt::Display for FulltextOptions {
write!(f, ", analyzer={}", self.analyzer)?;
write!(f, ", case_sensitive={}", self.case_sensitive)?;
write!(f, ", backend={}", self.backend)?;
if self.backend == FulltextBackend::Bloom {
write!(f, ", granularity={}", self.granularity)?;
write!(f, ", false_positive_rate={}", self.false_positive_rate())?;
}
}
Ok(())
}
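A hedged construction example using only the checked constructor and getter shown above; all values are illustrative.

let opts = FulltextOptions::new(
    true,
    FulltextAnalyzer::default(),
    false,
    FulltextBackend::Bloom,
    10240, // granularity
    0.01,  // false positive rate, stored internally as 100 (ten-thousandths)
)
.unwrap();
assert_eq!(opts.false_positive_rate(), 0.01);

// Out-of-range rates are rejected instead of being clamped.
assert!(
    FulltextOptions::new(true, FulltextAnalyzer::default(), false, FulltextBackend::Bloom, 10240, 1.5)
        .is_err()
);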
@@ -611,6 +706,45 @@ impl TryFrom<HashMap<String, String>> for FulltextOptions {
}
}
if fulltext_options.backend == FulltextBackend::Bloom {
// Parse granularity with default value 10240
let granularity = match options.get(COLUMN_FULLTEXT_OPT_KEY_GRANULARITY) {
Some(value) => value
.parse::<u32>()
.ok()
.filter(|&v| v > 0)
.ok_or_else(|| {
error::InvalidFulltextOptionSnafu {
msg: format!(
"Invalid granularity: {value}, expected: positive integer"
),
}
.build()
})?,
None => DEFAULT_GRANULARITY,
};
fulltext_options.granularity = granularity;
// Parse false positive rate with default value 0.01
let false_positive_rate = match options.get(COLUMN_FULLTEXT_OPT_KEY_FALSE_POSITIVE_RATE)
{
Some(value) => value
.parse::<f64>()
.ok()
.filter(|&v| v > 0.0 && v <= 1.0)
.ok_or_else(|| {
error::InvalidFulltextOptionSnafu {
msg: format!(
"Invalid false positive rate: {value}, expected: 0.0 < rate <= 1.0"
),
}
.build()
})?,
None => DEFAULT_FALSE_POSITIVE_RATE,
};
fulltext_options.false_positive_rate_in_10000 = (false_positive_rate * 10000.0) as u32;
}
Ok(fulltext_options)
}
}
@@ -638,23 +772,73 @@ impl fmt::Display for FulltextAnalyzer {
pub struct SkippingIndexOptions {
/// The granularity of the skip index.
pub granularity: u32,
/// The false positive rate of the skip index (in ten-thousandths, e.g., 100 = 1%).
#[serde(default = "index_options_default_false_positive_rate_in_10000")]
pub false_positive_rate_in_10000: u32,
/// The type of the skip index.
#[serde(default)]
pub index_type: SkippingIndexType,
}
impl SkippingIndexOptions {
/// Creates new skipping index options without validating `false_positive_rate` and `granularity`.
pub fn new_unchecked(
granularity: u32,
false_positive_rate: f64,
index_type: SkippingIndexType,
) -> Self {
Self {
granularity,
false_positive_rate_in_10000: (false_positive_rate * 10000.0) as u32,
index_type,
}
}
/// Creates new skipping index options.
pub fn new(
granularity: u32,
false_positive_rate: f64,
index_type: SkippingIndexType,
) -> Result<Self> {
ensure!(
0.0 < false_positive_rate && false_positive_rate <= 1.0,
error::InvalidSkippingIndexOptionSnafu {
msg: format!("Invalid false positive rate: {false_positive_rate}, expected: 0.0 < rate <= 1.0"),
}
);
ensure!(
granularity > 0,
error::InvalidSkippingIndexOptionSnafu {
msg: format!("Invalid granularity: {granularity}, expected: positive integer"),
}
);
Ok(Self::new_unchecked(
granularity,
false_positive_rate,
index_type,
))
}
/// Gets the false positive rate.
pub fn false_positive_rate(&self) -> f64 {
self.false_positive_rate_in_10000 as f64 / 10000.0
}
}
impl Default for SkippingIndexOptions {
fn default() -> Self {
Self {
granularity: DEFAULT_GRANULARITY,
index_type: SkippingIndexType::default(),
}
Self::new_unchecked(
DEFAULT_GRANULARITY,
DEFAULT_FALSE_POSITIVE_RATE,
SkippingIndexType::default(),
)
}
}
impl fmt::Display for SkippingIndexOptions {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "granularity={}", self.granularity)?;
write!(f, ", false_positive_rate={}", self.false_positive_rate())?;
write!(f, ", index_type={}", self.index_type)?;
Ok(())
}
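Similarly, a hedged sketch of the checked skipping-index constructor and the ten-thousandths encoding; the numbers are illustrative.

let opts = SkippingIndexOptions::new(8192, 0.005, SkippingIndexType::BloomFilter).unwrap();
assert_eq!(opts.false_positive_rate_in_10000, 50); // 0.5% is stored as 50/10000
assert_eq!(opts.false_positive_rate(), 0.005);

// Zero granularity and rates outside (0.0, 1.0] fail validation.
assert!(SkippingIndexOptions::new(0, 0.005, SkippingIndexType::BloomFilter).is_err());
assert!(SkippingIndexOptions::new(8192, 0.0, SkippingIndexType::BloomFilter).is_err());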
@@ -681,15 +865,37 @@ impl TryFrom<HashMap<String, String>> for SkippingIndexOptions {
fn try_from(options: HashMap<String, String>) -> Result<Self> {
// Parse granularity, falling back to DEFAULT_GRANULARITY
let granularity = match options.get(COLUMN_SKIPPING_INDEX_OPT_KEY_GRANULARITY) {
Some(value) => value.parse::<u32>().map_err(|_| {
error::InvalidSkippingIndexOptionSnafu {
msg: format!("Invalid granularity: {value}, expected: positive integer"),
}
.build()
})?,
Some(value) => value
.parse::<u32>()
.ok()
.filter(|&v| v > 0)
.ok_or_else(|| {
error::InvalidSkippingIndexOptionSnafu {
msg: format!("Invalid granularity: {value}, expected: positive integer"),
}
.build()
})?,
None => DEFAULT_GRANULARITY,
};
// Parse false positive rate, falling back to DEFAULT_FALSE_POSITIVE_RATE (0.01)
let false_positive_rate =
match options.get(COLUMN_SKIPPING_INDEX_OPT_KEY_FALSE_POSITIVE_RATE) {
Some(value) => value
.parse::<f64>()
.ok()
.filter(|&v| v > 0.0 && v <= 1.0)
.ok_or_else(|| {
error::InvalidSkippingIndexOptionSnafu {
msg: format!(
"Invalid false positive rate: {value}, expected: 0.0 < rate <= 1.0"
),
}
.build()
})?,
None => DEFAULT_FALSE_POSITIVE_RATE,
};
// Parse index type with default value BloomFilter
let index_type = match options.get(COLUMN_SKIPPING_INDEX_OPT_KEY_TYPE) {
Some(typ) => match typ.to_ascii_uppercase().as_str() {
@@ -704,10 +910,11 @@ impl TryFrom<HashMap<String, String>> for SkippingIndexOptions {
None => SkippingIndexType::default(),
};
Ok(SkippingIndexOptions {
Ok(SkippingIndexOptions::new_unchecked(
granularity,
false_positive_rate,
index_type,
})
))
}
}
@@ -973,4 +1180,59 @@ mod tests {
assert!(column_schema.default_constraint.is_none());
assert!(column_schema.metadata.is_empty());
}
#[test]
fn test_skipping_index_options_deserialization() {
let original_options = "{\"granularity\":1024,\"false-positive-rate-in-10000\":10,\"index-type\":\"BloomFilter\"}";
let options = serde_json::from_str::<SkippingIndexOptions>(original_options).unwrap();
assert_eq!(1024, options.granularity);
assert_eq!(SkippingIndexType::BloomFilter, options.index_type);
assert_eq!(0.001, options.false_positive_rate());
let options_str = serde_json::to_string(&options).unwrap();
assert_eq!(options_str, original_options);
}
#[test]
fn test_skipping_index_options_deserialization_v0_14_to_v0_15() {
let options = "{\"granularity\":10240,\"index-type\":\"BloomFilter\"}";
let options = serde_json::from_str::<SkippingIndexOptions>(options).unwrap();
assert_eq!(10240, options.granularity);
assert_eq!(SkippingIndexType::BloomFilter, options.index_type);
assert_eq!(DEFAULT_FALSE_POSITIVE_RATE, options.false_positive_rate());
let options_str = serde_json::to_string(&options).unwrap();
assert_eq!(options_str, "{\"granularity\":10240,\"false-positive-rate-in-10000\":100,\"index-type\":\"BloomFilter\"}");
}
#[test]
fn test_fulltext_options_deserialization() {
let original_options = "{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":1024,\"false-positive-rate-in-10000\":10}";
let options = serde_json::from_str::<FulltextOptions>(original_options).unwrap();
assert!(!options.case_sensitive);
assert!(options.enable);
assert_eq!(FulltextBackend::Bloom, options.backend);
assert_eq!(FulltextAnalyzer::default(), options.analyzer);
assert_eq!(1024, options.granularity);
assert_eq!(0.001, options.false_positive_rate());
let options_str = serde_json::to_string(&options).unwrap();
assert_eq!(options_str, original_options);
}
#[test]
fn test_fulltext_options_deserialization_v0_14_to_v0_15() {
// 0.14 to 0.15
let options = "{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\"}";
let options = serde_json::from_str::<FulltextOptions>(options).unwrap();
assert!(!options.case_sensitive);
assert!(options.enable);
assert_eq!(FulltextBackend::Bloom, options.backend);
assert_eq!(FulltextAnalyzer::default(), options.analyzer);
assert_eq!(DEFAULT_GRANULARITY, options.granularity);
assert_eq!(DEFAULT_FALSE_POSITIVE_RATE, options.false_positive_rate());
let options_str = serde_json::to_string(&options).unwrap();
assert_eq!(options_str, "{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":10240,\"false-positive-rate-in-10000\":100}");
}
}

View File

@@ -12,6 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use arrow_array::{
ArrayRef, PrimitiveArray, TimestampMicrosecondArray, TimestampMillisecondArray,
TimestampNanosecondArray, TimestampSecondArray,
};
use arrow_schema::DataType;
use common_time::timestamp::TimeUnit;
use common_time::Timestamp;
use paste::paste;
@@ -138,6 +143,41 @@ define_timestamp_with_unit!(Millisecond);
define_timestamp_with_unit!(Microsecond);
define_timestamp_with_unit!(Nanosecond);
pub fn timestamp_array_to_primitive(
ts_array: &ArrayRef,
) -> Option<(
PrimitiveArray<arrow_array::types::Int64Type>,
arrow::datatypes::TimeUnit,
)> {
let DataType::Timestamp(unit, _) = ts_array.data_type() else {
return None;
};
let ts_primitive = match unit {
arrow_schema::TimeUnit::Second => ts_array
.as_any()
.downcast_ref::<TimestampSecondArray>()
.unwrap()
.reinterpret_cast::<arrow_array::types::Int64Type>(),
arrow_schema::TimeUnit::Millisecond => ts_array
.as_any()
.downcast_ref::<TimestampMillisecondArray>()
.unwrap()
.reinterpret_cast::<arrow_array::types::Int64Type>(),
arrow_schema::TimeUnit::Microsecond => ts_array
.as_any()
.downcast_ref::<TimestampMicrosecondArray>()
.unwrap()
.reinterpret_cast::<arrow_array::types::Int64Type>(),
arrow_schema::TimeUnit::Nanosecond => ts_array
.as_any()
.downcast_ref::<TimestampNanosecondArray>()
.unwrap()
.reinterpret_cast::<arrow_array::types::Int64Type>(),
};
Some((ts_primitive, *unit))
}
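A hedged usage sketch of the helper; it uses only standard arrow-rs constructors plus the signature above.

use std::sync::Arc;
use arrow_array::{ArrayRef, Int64Array, TimestampMillisecondArray};

let ts: ArrayRef = Arc::new(TimestampMillisecondArray::from(vec![1_000_i64, 2_000_i64]));
let (primitive, unit) = timestamp_array_to_primitive(&ts).expect("timestamp-typed array");
assert_eq!(unit, arrow_schema::TimeUnit::Millisecond);
assert_eq!(primitive.value(1), 2_000);

// Non-timestamp arrays yield None instead of panicking.
let ints: ArrayRef = Arc::new(Int64Array::from(vec![1_i64]));
assert!(timestamp_array_to_primitive(&ints).is_none());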
#[cfg(test)]
mod tests {
use common_time::timezone::set_default_timezone;

View File

@@ -316,7 +316,7 @@ impl StreamingEngine {
);
METRIC_FLOW_ROWS
.with_label_values(&["out"])
.with_label_values(&["out-streaming"])
.inc_by(total_rows as u64);
let now = self.tick_manager.tick();
@@ -899,7 +899,7 @@ impl StreamingEngine {
let rows_send = self.run_available(true).await?;
let row = self.send_writeback_requests().await?;
debug!(
"Done to flush flow_id={:?} with {} input rows flushed, {} rows sended and {} output rows flushed",
"Done to flush flow_id={:?} with {} input rows flushed, {} rows sent and {} output rows flushed",
flow_id, flushed_input_rows, rows_send, row
);
Ok(row)

View File

@@ -18,7 +18,8 @@ use std::sync::atomic::AtomicBool;
use std::sync::Arc;
use api::v1::flow::{
flow_request, CreateRequest, DropRequest, FlowRequest, FlowResponse, FlushFlow,
flow_request, CreateRequest, DirtyWindowRequests, DropRequest, FlowRequest, FlowResponse,
FlushFlow,
};
use api::v1::region::InsertRequests;
use catalog::CatalogManager;
@@ -31,6 +32,7 @@ use common_runtime::JoinHandle;
use common_telemetry::{error, info, trace, warn};
use datatypes::value::Value;
use futures::TryStreamExt;
use greptime_proto::v1::flow::DirtyWindowRequest;
use itertools::Itertools;
use session::context::QueryContextBuilder;
use snafu::{ensure, IntoError, OptionExt, ResultExt};
@@ -46,7 +48,7 @@ use crate::error::{
IllegalCheckTaskStateSnafu, InsertIntoFlowSnafu, InternalSnafu, JoinTaskSnafu, ListFlowsSnafu,
NoAvailableFrontendSnafu, SyncCheckTaskSnafu, UnexpectedSnafu,
};
use crate::metrics::METRIC_FLOW_TASK_COUNT;
use crate::metrics::{METRIC_FLOW_ROWS, METRIC_FLOW_TASK_COUNT};
use crate::repr::{self, DiffRow};
use crate::{Error, FlowId};
@@ -689,6 +691,9 @@ impl FlowEngine for FlowDualEngine {
let mut to_stream_engine = Vec::with_capacity(request.requests.len());
let mut to_batch_engine = request.requests;
let mut batching_row_cnt = 0;
let mut streaming_row_cnt = 0;
{
// not holding this lock for long, or recovering flows would be starved while also handling flow inserts
let src_table2flow = self.src_table2flow.read().await;
@@ -698,9 +703,11 @@ impl FlowEngine for FlowDualEngine {
let is_in_stream = src_table2flow.in_stream(table_id);
let is_in_batch = src_table2flow.in_batch(table_id);
if is_in_stream {
streaming_row_cnt += req.rows.as_ref().map(|rs| rs.rows.len()).unwrap_or(0);
to_stream_engine.push(req.clone());
}
if is_in_batch {
batching_row_cnt += req.rows.as_ref().map(|rs| rs.rows.len()).unwrap_or(0);
return true;
}
if !is_in_batch && !is_in_stream {
@@ -713,6 +720,14 @@ impl FlowEngine for FlowDualEngine {
// can't use drop due to https://github.com/rust-lang/rust/pull/128846
}
METRIC_FLOW_ROWS
.with_label_values(&["in-streaming"])
.inc_by(streaming_row_cnt as u64);
METRIC_FLOW_ROWS
.with_label_values(&["in-batching"])
.inc_by(batching_row_cnt as u64);
let streaming_engine = self.streaming_engine.clone();
let stream_handler: JoinHandle<Result<(), Error>> =
common_runtime::spawn_global(async move {
@@ -819,6 +834,15 @@ impl common_meta::node_manager::Flownode for FlowDualEngine {
.map(|_| Default::default())
.map_err(to_meta_err(snafu::location!()))
}
async fn handle_mark_window_dirty(&self, req: DirtyWindowRequest) -> MetaResult<FlowResponse> {
self.batching_engine()
.handle_mark_dirty_time_window(DirtyWindowRequests {
requests: vec![req],
})
.await
.map_err(to_meta_err(snafu::location!()))
}
}
/// Returns a function that converts `crate::error::Error` to `common_meta::error::Error`
@@ -841,93 +865,6 @@ fn to_meta_err(
}
}
#[async_trait::async_trait]
impl common_meta::node_manager::Flownode for StreamingEngine {
async fn handle(&self, request: FlowRequest) -> MetaResult<FlowResponse> {
let query_ctx = request
.header
.and_then(|h| h.query_context)
.map(|ctx| ctx.into());
match request.body {
Some(flow_request::Body::Create(CreateRequest {
flow_id: Some(task_id),
source_table_ids,
sink_table_name: Some(sink_table_name),
create_if_not_exists,
expire_after,
comment,
sql,
flow_options,
or_replace,
})) => {
let source_table_ids = source_table_ids.into_iter().map(|id| id.id).collect_vec();
let sink_table_name = [
sink_table_name.catalog_name,
sink_table_name.schema_name,
sink_table_name.table_name,
];
let expire_after = expire_after.map(|e| e.value);
let args = CreateFlowArgs {
flow_id: task_id.id as u64,
sink_table_name,
source_table_ids,
create_if_not_exists,
or_replace,
expire_after,
comment: Some(comment),
sql: sql.clone(),
flow_options,
query_ctx,
};
let ret = self
.create_flow(args)
.await
.map_err(BoxedError::new)
.with_context(|_| CreateFlowSnafu { sql: sql.clone() })
.map_err(to_meta_err(snafu::location!()))?;
METRIC_FLOW_TASK_COUNT.inc();
Ok(FlowResponse {
affected_flows: ret
.map(|id| greptime_proto::v1::FlowId { id: id as u32 })
.into_iter()
.collect_vec(),
..Default::default()
})
}
Some(flow_request::Body::Drop(DropRequest {
flow_id: Some(flow_id),
})) => {
self.remove_flow(flow_id.id as u64)
.await
.map_err(to_meta_err(snafu::location!()))?;
METRIC_FLOW_TASK_COUNT.dec();
Ok(Default::default())
}
Some(flow_request::Body::Flush(FlushFlow {
flow_id: Some(flow_id),
})) => {
let row = self
.flush_flow_inner(flow_id.id as u64)
.await
.map_err(to_meta_err(snafu::location!()))?;
Ok(FlowResponse {
affected_flows: vec![flow_id],
affected_rows: row as u64,
..Default::default()
})
}
other => common_meta::error::InvalidFlowRequestBodySnafu { body: other }.fail(),
}
}
async fn handle_inserts(&self, request: InsertRequests) -> MetaResult<FlowResponse> {
self.handle_inserts_inner(request)
.await
.map(|_| Default::default())
.map_err(to_meta_err(snafu::location!()))
}
}
impl FlowEngine for StreamingEngine {
async fn create_flow(&self, args: CreateFlowArgs) -> Result<Option<FlowId>, Error> {
self.create_flow_inner(args).await

View File

@@ -17,6 +17,7 @@
use std::collections::{BTreeMap, HashMap};
use std::sync::Arc;
use api::v1::flow::{DirtyWindowRequests, FlowResponse};
use catalog::CatalogManagerRef;
use common_error::ext::BoxedError;
use common_meta::ddl::create_flow::FlowType;
@@ -29,8 +30,7 @@ use common_telemetry::{debug, info};
use common_time::TimeToLive;
use query::QueryEngineRef;
use snafu::{ensure, OptionExt, ResultExt};
use store_api::storage::RegionId;
use table::metadata::TableId;
use store_api::storage::{RegionId, TableId};
use tokio::sync::{oneshot, RwLock};
use crate::batching_mode::frontend_client::FrontendClient;
@@ -42,6 +42,7 @@ use crate::error::{
ExternalSnafu, FlowAlreadyExistSnafu, FlowNotFoundSnafu, TableNotFoundMetaSnafu,
UnexpectedSnafu, UnsupportedSnafu,
};
use crate::metrics::METRIC_FLOW_BATCHING_ENGINE_BULK_MARK_TIME_WINDOW;
use crate::{CreateFlowArgs, Error, FlowId, TableName};
/// Batching mode Engine, responsible for driving all the batching mode tasks
@@ -77,6 +78,116 @@ impl BatchingEngine {
}
}
pub async fn handle_mark_dirty_time_window(
&self,
reqs: DirtyWindowRequests,
) -> Result<FlowResponse, Error> {
let table_info_mgr = self.table_meta.table_info_manager();
let mut group_by_table_id: HashMap<u32, Vec<_>> = HashMap::new();
for r in reqs.requests {
let tid = TableId::from(r.table_id);
let entry = group_by_table_id.entry(tid).or_default();
entry.extend(r.timestamps);
}
let tids = group_by_table_id.keys().cloned().collect::<Vec<TableId>>();
let table_infos =
table_info_mgr
.batch_get(&tids)
.await
.with_context(|_| TableNotFoundMetaSnafu {
msg: format!("Failed to get table info for table ids: {:?}", tids),
})?;
let group_by_table_name = group_by_table_id
.into_iter()
.filter_map(|(id, timestamps)| {
let table_name = table_infos.get(&id).map(|info| info.table_name());
let Some(table_name) = table_name else {
warn!("Failed to get table infos for table id: {:?}", id);
return None;
};
let table_name = [
table_name.catalog_name,
table_name.schema_name,
table_name.table_name,
];
let schema = &table_infos.get(&id).unwrap().table_info.meta.schema;
let time_index_unit = schema.column_schemas[schema.timestamp_index.unwrap()]
.data_type
.as_timestamp()
.unwrap()
.unit();
Some((table_name, (timestamps, time_index_unit)))
})
.collect::<HashMap<_, _>>();
let group_by_table_name = Arc::new(group_by_table_name);
let mut handles = Vec::new();
let tasks = self.tasks.read().await;
for (_flow_id, task) in tasks.iter() {
let src_table_names = &task.config.source_table_names;
if src_table_names
.iter()
.all(|name| !group_by_table_name.contains_key(name))
{
continue;
}
let group_by_table_name = group_by_table_name.clone();
let task = task.clone();
let handle: JoinHandle<Result<(), Error>> = tokio::spawn(async move {
let src_table_names = &task.config.source_table_names;
let mut all_dirty_windows = vec![];
for src_table_name in src_table_names {
if let Some((timestamps, unit)) = group_by_table_name.get(src_table_name) {
let Some(expr) = &task.config.time_window_expr else {
continue;
};
for timestamp in timestamps {
let align_start = expr
.eval(common_time::Timestamp::new(*timestamp, *unit))?
.0
.context(UnexpectedSnafu {
reason: "Failed to eval start value",
})?;
all_dirty_windows.push(align_start);
}
}
}
let mut state = task.state.write().unwrap();
let flow_id_label = task.config.flow_id.to_string();
for timestamp in all_dirty_windows {
state.dirty_time_windows.add_window(timestamp, None);
}
METRIC_FLOW_BATCHING_ENGINE_BULK_MARK_TIME_WINDOW
.with_label_values(&[&flow_id_label])
.set(state.dirty_time_windows.len() as f64);
Ok(())
});
handles.push(handle);
}
drop(tasks);
for handle in handles {
match handle.await {
Err(e) => {
warn!("Failed to handle inserts: {e}");
}
Ok(Ok(())) => (),
Ok(Err(e)) => {
warn!("Failed to handle inserts: {e}");
}
}
}
Ok(Default::default())
}
pub async fn handle_inserts_inner(
&self,
request: api::v1::region::InsertRequests,

View File

@@ -18,7 +18,8 @@ use std::sync::{Arc, Weak};
use std::time::SystemTime;
use api::v1::greptime_request::Request;
use api::v1::CreateTableExpr;
use api::v1::query_request::Query;
use api::v1::{CreateTableExpr, QueryRequest};
use client::{Client, Database};
use common_error::ext::{BoxedError, ErrorExt};
use common_grpc::channel_manager::{ChannelConfig, ChannelManager};
@@ -269,6 +270,55 @@ impl FrontendClient {
.await
}
/// Execute a SQL statement on the frontend.
pub async fn sql(&self, catalog: &str, schema: &str, sql: &str) -> Result<Output, Error> {
match self {
FrontendClient::Distributed { .. } => {
let db = self.get_random_active_frontend(catalog, schema).await?;
db.database
.sql(sql)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
FrontendClient::Standalone { database_client } => {
let ctx = QueryContextBuilder::default()
.current_catalog(catalog.to_string())
.current_schema(schema.to_string())
.build();
let ctx = Arc::new(ctx);
{
let database_client = {
database_client
.lock()
.map_err(|e| {
UnexpectedSnafu {
reason: format!("Failed to lock database client: {e}"),
}
.build()
})?
.as_ref()
.context(UnexpectedSnafu {
reason: "Standalone's frontend instance is not set",
})?
.upgrade()
.context(UnexpectedSnafu {
reason: "Failed to upgrade database client",
})?
};
let req = Request::Query(QueryRequest {
query: Some(Query::Sql(sql.to_string())),
});
database_client
.do_query(req, ctx)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
}
}
}
/// Handle a request to frontend
pub(crate) async fn handle(
&self,
@@ -318,7 +368,7 @@ impl FrontendClient {
})?
};
let resp: common_query::Output = database_client
.do_query(req.clone(), ctx)
.do_query(req, ctx)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)?;

View File

@@ -156,6 +156,11 @@ impl DirtyTimeWindows {
self.windows.clear();
}
/// Number of dirty windows.
pub fn len(&self) -> usize {
self.windows.len()
}
/// Generate all filter expressions consuming all time windows
///
/// there are two limits:

View File

@@ -61,7 +61,9 @@ use crate::error::{
SubstraitEncodeLogicalPlanSnafu, UnexpectedSnafu,
};
use crate::metrics::{
METRIC_FLOW_BATCHING_ENGINE_QUERY_TIME, METRIC_FLOW_BATCHING_ENGINE_SLOW_QUERY,
METRIC_FLOW_BATCHING_ENGINE_ERROR_CNT, METRIC_FLOW_BATCHING_ENGINE_QUERY_TIME,
METRIC_FLOW_BATCHING_ENGINE_SLOW_QUERY, METRIC_FLOW_BATCHING_ENGINE_START_QUERY_CNT,
METRIC_FLOW_ROWS,
};
use crate::{Error, FlowId};
@@ -371,6 +373,9 @@ impl BatchingTask {
"Flow {flow_id} executed, affected_rows: {affected_rows:?}, elapsed: {:?}",
elapsed
);
METRIC_FLOW_ROWS
.with_label_values(&[format!("{}-out-batching", flow_id).as_str()])
.inc_by(*affected_rows as _);
} else if let Err(err) = &res {
warn!(
"Failed to execute Flow {flow_id} on frontend {:?}, result: {err:?}, elapsed: {:?} with query: {}",
@@ -410,6 +415,7 @@ impl BatchingTask {
engine: QueryEngineRef,
frontend_client: Arc<FrontendClient>,
) {
let flow_id_str = self.config.flow_id.to_string();
loop {
// first check if shutdown signal is received
// if so, break the loop
@@ -427,6 +433,9 @@ impl BatchingTask {
Err(TryRecvError::Empty) => (),
}
}
METRIC_FLOW_BATCHING_ENGINE_START_QUERY_CNT
.with_label_values(&[&flow_id_str])
.inc();
let new_query = match self.gen_insert_plan(&engine).await {
Ok(new_query) => new_query,
@@ -473,6 +482,9 @@ impl BatchingTask {
}
// TODO(discord9): this error should have a better place to go, but for now just print the error; more context is also needed
Err(err) => {
METRIC_FLOW_BATCHING_ENGINE_ERROR_CNT
.with_label_values(&[&flow_id_str])
.inc();
match new_query {
Some(query) => {
common_telemetry::error!(err; "Failed to execute query for flow={} with query: {query}", self.config.flow_id)

View File

@@ -58,11 +58,32 @@ lazy_static! {
vec![60., 4. * 60., 16. * 60., 64. * 60., 256. * 60.]
)
.unwrap();
pub static ref METRIC_FLOW_BATCHING_ENGINE_BULK_MARK_TIME_WINDOW: GaugeVec =
register_gauge_vec!(
"greptime_flow_batching_engine_bulk_mark_time_window",
"flow batching engine query time window count marked by bulk inserts",
&["flow_id"],
)
.unwrap();
pub static ref METRIC_FLOW_BATCHING_ENGINE_START_QUERY_CNT: IntCounterVec =
register_int_counter_vec!(
"greptime_flow_batching_start_query_count",
"flow batching engine started query count",
&["flow_id"],
)
.unwrap();
pub static ref METRIC_FLOW_BATCHING_ENGINE_ERROR_CNT: IntCounterVec =
register_int_counter_vec!(
"greptime_flow_batching_error_count",
"flow batching engine error count per flow id",
&["flow_id"],
)
.unwrap();
pub static ref METRIC_FLOW_RUN_INTERVAL_MS: IntGauge =
register_int_gauge!("greptime_flow_run_interval_ms", "flow run interval in ms").unwrap();
pub static ref METRIC_FLOW_ROWS: IntCounterVec = register_int_counter_vec!(
"greptime_flow_processed_rows",
"Count of rows flowing through the system",
"Count of rows flowing through the system.",
&["direction"]
)
.unwrap();
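
These registrations follow the usual `prometheus` + `lazy_static` pattern; a minimal, self-contained sketch of how such a labeled counter is declared once and then incremented per label set (the metric name `example_rows_total` and the `record` helper below are illustrative, not part of the codebase):

```rust
use lazy_static::lazy_static;
use prometheus::{register_int_counter_vec, IntCounterVec};

lazy_static! {
    // Registered once in the default registry; `unwrap` is conventional here
    // because registration only fails on duplicate or malformed metric names.
    static ref ROWS_TOTAL: IntCounterVec = register_int_counter_vec!(
        "example_rows_total",
        "Count of rows flowing through the system.",
        &["direction"]
    )
    .unwrap();
}

fn record(direction: &str, rows: usize) {
    // `with_label_values` lazily creates (and caches) the child counter
    // for this particular label combination.
    ROWS_TOTAL.with_label_values(&[direction]).inc_by(rows as u64);
}
```

The gauge variant used for the bulk-mark-time-window metric works the same way, with `set` in place of `inc_by`.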

View File

@@ -17,6 +17,7 @@
use std::net::SocketAddr;
use std::sync::Arc;
use api::v1::flow::DirtyWindowRequests;
use api::v1::{RowDeleteRequests, RowInsertRequests};
use cache::{TABLE_FLOWNODE_SET_CACHE_NAME, TABLE_ROUTE_CACHE_NAME};
use catalog::CatalogManagerRef;
@@ -31,7 +32,7 @@ use common_meta::node_manager::{Flownode, NodeManagerRef};
use common_query::Output;
use common_runtime::JoinHandle;
use common_telemetry::tracing::info;
use futures::{FutureExt, TryStreamExt};
use futures::TryStreamExt;
use greptime_proto::v1::flow::{flow_server, FlowRequest, FlowResponse, InsertRequests};
use itertools::Itertools;
use operator::delete::Deleter;
@@ -39,16 +40,16 @@ use operator::insert::Inserter;
use operator::statement::StatementExecutor;
use partition::manager::PartitionRuleManager;
use query::{QueryEngine, QueryEngineFactory};
use servers::error::{StartGrpcSnafu, TcpBindSnafu, TcpIncomingSnafu};
use servers::add_service;
use servers::grpc::builder::GrpcServerBuilder;
use servers::grpc::{GrpcServer, GrpcServerConfig};
use servers::http::HttpServerBuilder;
use servers::metrics_handler::MetricsHandler;
use servers::server::{ServerHandler, ServerHandlers};
use session::context::QueryContextRef;
use snafu::{OptionExt, ResultExt};
use tokio::net::TcpListener;
use tokio::sync::{broadcast, oneshot, Mutex};
use tonic::codec::CompressionEncoding;
use tonic::transport::server::TcpIncoming;
use tonic::{Request, Response, Status};
use crate::adapter::flownode_impl::{FlowDualEngine, FlowDualEngineRef};
@@ -136,6 +137,18 @@ impl flow_server::Flow for FlowService {
.map(Response::new)
.map_err(to_status_with_last_err)
}
async fn handle_mark_dirty_time_window(
&self,
reqs: Request<DirtyWindowRequests>,
) -> Result<Response<FlowResponse>, Status> {
self.dual_engine
.batching_engine()
.handle_mark_dirty_time_window(reqs.into_inner())
.await
.map(Response::new)
.map_err(to_status_with_last_err)
}
}
#[derive(Clone)]
@@ -218,50 +231,6 @@ impl FlownodeServer {
}
}
#[async_trait::async_trait]
impl servers::server::Server for FlownodeServer {
async fn shutdown(&self) -> Result<(), servers::error::Error> {
let tx = self.inner.server_shutdown_tx.lock().await;
if tx.send(()).is_err() {
info!("Receiver dropped, the flow node server has already shutdown");
}
info!("Shutdown flow node server");
Ok(())
}
async fn start(&mut self, addr: SocketAddr) -> Result<(), servers::error::Error> {
let mut rx_server = self.inner.server_shutdown_tx.lock().await.subscribe();
let incoming = {
let listener = TcpListener::bind(addr)
.await
.context(TcpBindSnafu { addr })?;
let addr = listener.local_addr().context(TcpBindSnafu { addr })?;
let incoming =
TcpIncoming::from_listener(listener, true, None).context(TcpIncomingSnafu)?;
info!("flow server is bound to {}", addr);
incoming
};
let builder = tonic::transport::Server::builder().add_service(self.create_flow_service());
let _handle = common_runtime::spawn_global(async move {
let _result = builder
.serve_with_incoming_shutdown(incoming, rx_server.recv().map(drop))
.await
.context(StartGrpcSnafu);
});
Ok(())
}
fn name(&self) -> &str {
FLOW_NODE_SERVER_NAME
}
}
/// The flownode server instance.
pub struct FlownodeInstance {
flownode_server: FlownodeServer,
@@ -457,7 +426,7 @@ impl FlownodeBuilder {
/// Useful in distributed mode
pub struct FlownodeServiceBuilder<'a> {
opts: &'a FlownodeOptions,
grpc_server: Option<FlownodeServer>,
grpc_server: Option<GrpcServer>,
enable_http_service: bool,
}
@@ -477,13 +446,19 @@ impl<'a> FlownodeServiceBuilder<'a> {
}
}
pub fn with_grpc_server(self, grpc_server: FlownodeServer) -> Self {
pub fn with_grpc_server(self, grpc_server: GrpcServer) -> Self {
Self {
grpc_server: Some(grpc_server),
..self
}
}
pub fn with_default_grpc_server(mut self, flownode_server: &FlownodeServer) -> Self {
let grpc_server = Self::grpc_server_builder(self.opts, flownode_server).build();
self.grpc_server = Some(grpc_server);
self
}
pub fn build(mut self) -> Result<ServerHandlers, Error> {
let handlers = ServerHandlers::default();
if let Some(grpc_server) = self.grpc_server.take() {
@@ -506,6 +481,22 @@ impl<'a> FlownodeServiceBuilder<'a> {
}
Ok(handlers)
}
pub fn grpc_server_builder(
opts: &FlownodeOptions,
flownode_server: &FlownodeServer,
) -> GrpcServerBuilder {
let config = GrpcServerConfig {
max_recv_message_size: opts.grpc.max_recv_message_size.as_bytes() as usize,
max_send_message_size: opts.grpc.max_send_message_size.as_bytes() as usize,
tls: opts.grpc.tls.clone(),
};
let service = flownode_server.create_flow_service();
let runtime = common_runtime::global_runtime();
let mut builder = GrpcServerBuilder::new(config, runtime);
add_service!(builder, service);
builder
}
}
/// Basically a tiny frontend that communicates with datanode, different from [`FrontendClient`] which
@@ -578,6 +569,7 @@ impl FrontendInvoker {
layered_cache_registry.clone(),
inserter.clone(),
table_route_cache,
None,
));
let invoker = FrontendInvoker::new(inserter, deleter, statement_executor);

View File

@@ -14,6 +14,7 @@ workspace = true
[dependencies]
api.workspace = true
arc-swap = "1.0"
async-stream.workspace = true
async-trait.workspace = true
auth.workspace = true
bytes.workspace = true
@@ -70,6 +71,7 @@ store-api.workspace = true
substrait.workspace = true
table.workspace = true
tokio.workspace = true
tokio-util.workspace = true
toml.workspace = true
tonic.workspace = true

View File

@@ -357,6 +357,18 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Query has been cancelled"))]
Cancelled {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Canceling statement due to statement timeout"))]
StatementTimeout {
#[snafu(implicit)]
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -435,6 +447,10 @@ impl ErrorExt for Error {
Error::InFlightWriteBytesExceeded { .. } => StatusCode::RateLimited,
Error::DataFusion { error, .. } => datafusion_status_code::<Self>(error, None),
Error::Cancelled { .. } => StatusCode::Cancelled,
Error::StatementTimeout { .. } => StatusCode::Cancelled,
}
}

View File

@@ -25,14 +25,17 @@ mod promql;
mod region_query;
pub mod standalone;
use std::pin::Pin;
use std::sync::Arc;
use std::time::SystemTime;
use std::time::{Duration, SystemTime};
use async_stream::stream;
use async_trait::async_trait;
use auth::{PermissionChecker, PermissionCheckerRef, PermissionReq};
use catalog::process_manager::ProcessManagerRef;
use catalog::CatalogManagerRef;
use client::OutputData;
use common_base::cancellation::CancellableFuture;
use common_base::Plugins;
use common_config::KvBackendConfig;
use common_error::ext::{BoxedError, ErrorExt};
@@ -43,8 +46,11 @@ use common_procedure::local::{LocalManager, ManagerConfig};
use common_procedure::options::ProcedureConfig;
use common_procedure::ProcedureManagerRef;
use common_query::Output;
use common_recordbatch::error::StreamTimeoutSnafu;
use common_recordbatch::RecordBatchStreamWrapper;
use common_telemetry::{debug, error, info, tracing};
use datafusion_expr::LogicalPlan;
use futures::{Stream, StreamExt};
use log_store::raft_engine::RaftEngineBackend;
use operator::delete::DeleterRef;
use operator::insert::InserterRef;
@@ -64,24 +70,25 @@ use servers::interceptor::{
};
use servers::prometheus_handler::PrometheusHandler;
use servers::query_handler::sql::SqlQueryHandler;
use session::context::QueryContextRef;
use session::context::{Channel, QueryContextRef};
use session::table_name::table_idents_to_full_name;
use snafu::prelude::*;
use sql::dialect::Dialect;
use sql::parser::{ParseOptions, ParserContext};
use sql::statements::copy::{CopyDatabase, CopyTable};
use sql::statements::statement::Statement;
use sql::statements::tql::Tql;
use sqlparser::ast::ObjectName;
pub use standalone::StandaloneDatanodeManager;
use crate::error::{
self, Error, ExecLogicalPlanSnafu, ExecutePromqlSnafu, ExternalSnafu, InvalidSqlSnafu,
ParseSqlSnafu, PermissionSnafu, PlanStatementSnafu, Result, SqlExecInterceptedSnafu,
TableOperationSnafu,
StatementTimeoutSnafu, TableOperationSnafu,
};
use crate::limiter::LimiterRef;
use crate::slow_query_recorder::SlowQueryRecorder;
use crate::stream_wrapper::StreamWrapper;
use crate::stream_wrapper::CancellableStreamWrapper;
/// The frontend instance contains necessary components, and implements many
/// traits, like [`servers::query_handler::grpc::GrpcQueryHandler`],
@@ -183,11 +190,62 @@ impl Instance {
query_ctx.current_catalog().to_string(),
vec![query_ctx.current_schema()],
stmt.to_string(),
"unknown".to_string(),
None,
query_ctx.conn_info().to_string(),
Some(query_ctx.process_id()),
);
let output = match stmt {
let query_fut = self.exec_statement_with_timeout(stmt, query_ctx, query_interceptor);
CancellableFuture::new(query_fut, ticket.cancellation_handle.clone())
.await
.map_err(|_| error::CancelledSnafu.build())?
.map(|output| {
let Output { meta, data } = output;
let data = match data {
OutputData::Stream(stream) => {
OutputData::Stream(Box::pin(CancellableStreamWrapper::new(stream, ticket)))
}
other => other,
};
Output { data, meta }
})
}
async fn exec_statement_with_timeout(
&self,
stmt: Statement,
query_ctx: QueryContextRef,
query_interceptor: Option<&SqlQueryInterceptorRef<Error>>,
) -> Result<Output> {
let timeout = derive_timeout(&stmt, &query_ctx);
match timeout {
Some(timeout) => {
let start = tokio::time::Instant::now();
let output = tokio::time::timeout(
timeout,
self.exec_statement(stmt, query_ctx, query_interceptor),
)
.await
.map_err(|_| StatementTimeoutSnafu.build())??;
// compute remaining timeout
let remaining_timeout = timeout.checked_sub(start.elapsed()).unwrap_or_default();
attach_timeout(output, remaining_timeout)
}
None => {
self.exec_statement(stmt, query_ctx, query_interceptor)
.await
}
}
}
async fn exec_statement(
&self,
stmt: Statement,
query_ctx: QueryContextRef,
query_interceptor: Option<&SqlQueryInterceptorRef<Error>>,
) -> Result<Output> {
match stmt {
Statement::Query(_) | Statement::Explain(_) | Statement::Delete(_) => {
// TODO: remove this when format is supported in datafusion
if let Statement::Explain(explain) = &stmt {
@@ -196,55 +254,110 @@ impl Instance {
}
}
let stmt = QueryStatement::Sql(stmt);
let plan = self
.statement_executor
.plan(&stmt, query_ctx.clone())
.await?;
let QueryStatement::Sql(stmt) = stmt else {
unreachable!()
};
query_interceptor.pre_execute(&stmt, Some(&plan), query_ctx.clone())?;
self.statement_executor.exec_plan(plan, query_ctx).await
self.plan_and_exec_sql(stmt, &query_ctx, query_interceptor)
.await
}
Statement::Tql(tql) => {
let plan = self
.statement_executor
.plan_tql(tql.clone(), &query_ctx)
.await?;
query_interceptor.pre_execute(
&Statement::Tql(tql),
Some(&plan),
query_ctx.clone(),
)?;
self.statement_executor.exec_plan(plan, query_ctx).await
self.plan_and_exec_tql(&query_ctx, query_interceptor, tql)
.await
}
_ => {
query_interceptor.pre_execute(&stmt, None, query_ctx.clone())?;
self.statement_executor.execute_sql(stmt, query_ctx).await
self.statement_executor
.execute_sql(stmt, query_ctx)
.await
.context(TableOperationSnafu)
}
};
match output {
Ok(output) => {
let Output { meta, data } = output;
let data = match data {
OutputData::Stream(stream) => {
OutputData::Stream(Box::pin(StreamWrapper::new(stream, ticket)))
}
other => other,
};
Ok(Output { data, meta })
}
Err(e) => Err(e).context(TableOperationSnafu),
}
}
async fn plan_and_exec_sql(
&self,
stmt: Statement,
query_ctx: &QueryContextRef,
query_interceptor: Option<&SqlQueryInterceptorRef<Error>>,
) -> Result<Output> {
let stmt = QueryStatement::Sql(stmt);
let plan = self
.statement_executor
.plan(&stmt, query_ctx.clone())
.await?;
let QueryStatement::Sql(stmt) = stmt else {
unreachable!()
};
query_interceptor.pre_execute(&stmt, Some(&plan), query_ctx.clone())?;
self.statement_executor
.exec_plan(plan, query_ctx.clone())
.await
.context(TableOperationSnafu)
}
async fn plan_and_exec_tql(
&self,
query_ctx: &QueryContextRef,
query_interceptor: Option<&SqlQueryInterceptorRef<Error>>,
tql: Tql,
) -> Result<Output> {
let plan = self
.statement_executor
.plan_tql(tql.clone(), query_ctx)
.await?;
query_interceptor.pre_execute(&Statement::Tql(tql), Some(&plan), query_ctx.clone())?;
self.statement_executor
.exec_plan(plan, query_ctx.clone())
.await
.context(TableOperationSnafu)
}
}
/// If the relevant variables are set, the timeout is enforced for all PostgreSQL statements.
/// For MySQL, it applies only to read-only statements.
fn derive_timeout(stmt: &Statement, query_ctx: &QueryContextRef) -> Option<Duration> {
let query_timeout = query_ctx.query_timeout()?;
if query_timeout.is_zero() {
return None;
}
match query_ctx.channel() {
Channel::Mysql if stmt.is_readonly() => Some(query_timeout),
Channel::Postgres => Some(query_timeout),
_ => None,
}
}
fn attach_timeout(output: Output, mut timeout: Duration) -> Result<Output> {
if timeout.is_zero() {
return StatementTimeoutSnafu.fail();
}
let output = match output.data {
OutputData::AffectedRows(_) | OutputData::RecordBatches(_) => output,
OutputData::Stream(mut stream) => {
let schema = stream.schema();
let s = Box::pin(stream! {
let mut start = tokio::time::Instant::now();
while let Some(item) = tokio::time::timeout(timeout, stream.next()).await.map_err(|_| StreamTimeoutSnafu.build())? {
yield item;
let now = tokio::time::Instant::now();
timeout = timeout.checked_sub(now - start).unwrap_or(Duration::ZERO);
start = now;
// tokio::time::timeout may not return an error immediately when timeout is 0.
if timeout.is_zero() {
StreamTimeoutSnafu.fail()?;
}
}
}) as Pin<Box<dyn Stream<Item = _> + Send>>;
let stream = RecordBatchStreamWrapper {
schema,
stream: s,
output_ordering: None,
metrics: Default::default(),
};
Output::new(OutputData::Stream(Box::pin(stream)), output.meta)
}
};
Ok(output)
}
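
The stream branch above charges each wait against the remaining timeout so the whole stream, not just each batch, stays within the budget. A minimal sketch of that accounting on a plain async source, assuming only `tokio` (the names `drain_with_budget` and `next_item` are hypothetical):

```rust
use std::time::Duration;

/// Pull items from an async source under one total time budget, charging the
/// time spent waiting for each item against what remains.
async fn drain_with_budget<T, F, Fut>(
    mut budget: Duration,
    mut next_item: F,
) -> Result<Vec<T>, &'static str>
where
    F: FnMut() -> Fut,
    Fut: std::future::Future<Output = Option<T>>,
{
    let mut out = Vec::new();
    loop {
        let start = tokio::time::Instant::now();
        match tokio::time::timeout(budget, next_item()).await {
            Err(_) => return Err("timed out waiting for the next item"),
            Ok(None) => return Ok(out), // source finished within the budget
            Ok(Some(item)) => out.push(item),
        }
        // Deduct the wait from the budget before the next round.
        budget = budget.checked_sub(start.elapsed()).unwrap_or(Duration::ZERO);
        // `timeout` with a zero duration may still poll the future once,
        // so treat an exhausted budget as a timeout explicitly.
        if budget.is_zero() {
            return Err("time budget exhausted");
        }
    }
}
```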
#[async_trait]
@@ -605,6 +718,8 @@ pub fn check_permission(
}
// cursor operations are always allowed once it's created
Statement::FetchCursor(_) | Statement::CloseCursor(_) => {}
// Users can only kill processes in their own catalog.
Statement::Kill(_) => {}
}
Ok(())
}

View File

@@ -180,6 +180,7 @@ impl FrontendBuilder {
local_cache_invalidator,
inserter.clone(),
table_route_cache,
Some(process_manager.clone()),
));
let pipeline_operator = Arc::new(PipelineOperator::new(

View File

@@ -35,8 +35,8 @@ use servers::query_handler::grpc::GrpcQueryHandler;
use servers::query_handler::sql::SqlQueryHandler;
use session::context::QueryContextRef;
use snafu::{ensure, OptionExt, ResultExt};
use table::metadata::TableId;
use table::table_name::TableName;
use table::TableRef;
use crate::error::{
CatalogSnafu, DataFusionSnafu, Error, InFlightWriteBytesExceededSnafu,
@@ -235,34 +235,33 @@ impl GrpcQueryHandler for Instance {
async fn put_record_batch(
&self,
table: &TableName,
table_id: &mut Option<TableId>,
table_name: &TableName,
table_ref: &mut Option<TableRef>,
decoder: &mut FlightDecoder,
data: FlightData,
) -> Result<AffectedRows> {
let table_id = if let Some(table_id) = table_id {
*table_id
let table = if let Some(table) = table_ref {
table.clone()
} else {
let table = self
.catalog_manager()
.table(
&table.catalog_name,
&table.schema_name,
&table.table_name,
&table_name.catalog_name,
&table_name.schema_name,
&table_name.table_name,
None,
)
.await
.context(CatalogSnafu)?
.with_context(|| TableNotFoundSnafu {
table_name: table.to_string(),
table_name: table_name.to_string(),
})?;
let id = table.table_info().table_id();
*table_id = Some(id);
id
*table_ref = Some(table.clone());
table
};
self.inserter
.handle_bulk_insert(table_id, decoder, data)
.handle_bulk_insert(table, decoder, data)
.await
.context(TableOperationSnafu)
}

View File

@@ -24,12 +24,14 @@ use common_function::scalars::json::json_get::{
};
use common_function::scalars::udf::create_udf;
use common_function::state::FunctionState;
use common_query::Output;
use common_query::{Output, OutputData};
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::util;
use datafusion::dataframe::DataFrame;
use datafusion::execution::context::SessionContext;
use datafusion::execution::SessionStateBuilder;
use datafusion_expr::{col, lit, lit_timestamp_nano, wildcard, Expr, SortExpr};
use datatypes::value::ValueRef;
use query::QueryEngineRef;
use serde_json::Value as JsonValue;
use servers::error::{
@@ -97,7 +99,7 @@ impl JaegerQueryHandler for Instance {
filters.push(col(TIMESTAMP_COLUMN).lt_eq(lit_timestamp_nano(end_time * 1_000)));
}
// It's equivalent to
// It's equivalent to the following SQL query:
//
// ```
// SELECT DISTINCT span_name, span_kind
@@ -137,7 +139,7 @@ impl JaegerQueryHandler for Instance {
start_time: Option<i64>,
end_time: Option<i64>,
) -> ServerResult<Output> {
// It's equivalent to
// It's equivalent to the following SQL query:
//
// ```
// SELECT
@@ -156,13 +158,11 @@ impl JaegerQueryHandler for Instance {
let mut filters = vec![col(TRACE_ID_COLUMN).eq(lit(trace_id))];
if let Some(start_time) = start_time {
// Microseconds to nanoseconds.
filters.push(col(TIMESTAMP_COLUMN).gt_eq(lit_timestamp_nano(start_time * 1_000)));
filters.push(col(TIMESTAMP_COLUMN).gt_eq(lit_timestamp_nano(start_time)));
}
if let Some(end_time) = end_time {
// Microseconds to nanoseconds.
filters.push(col(TIMESTAMP_COLUMN).lt_eq(lit_timestamp_nano(end_time * 1_000)));
filters.push(col(TIMESTAMP_COLUMN).lt_eq(lit_timestamp_nano(end_time)));
}
Ok(query_trace_table(
@@ -184,10 +184,11 @@ impl JaegerQueryHandler for Instance {
ctx: QueryContextRef,
query_params: QueryTraceParams,
) -> ServerResult<Output> {
let selects = vec![wildcard()];
let mut filters = vec![];
// `service_name` is already validated in `from_jaeger_query_params()`, so no additional check needed here.
filters.push(col(SERVICE_NAME_COLUMN).eq(lit(query_params.service_name)));
if let Some(operation_name) = query_params.operation_name {
filters.push(col(SPAN_NAME_COLUMN).eq(lit(operation_name)));
}
@@ -208,15 +209,73 @@ impl JaegerQueryHandler for Instance {
filters.push(col(DURATION_NANO_COLUMN).lt_eq(lit(max_duration)));
}
// Get all distinct trace ids that match the filters.
// It's equivalent to the following SQL query:
//
// ```
// SELECT DISTINCT trace_id
// FROM
// {db}.{trace_table}
// WHERE
// service_name = '{service_name}' AND
// operation_name = '{operation_name}' AND
// timestamp >= {start_time} AND
// timestamp <= {end_time} AND
// duration >= {min_duration} AND
// duration <= {max_duration}
// LIMIT {limit}
// ```.
let output = query_trace_table(
ctx.clone(),
self.catalog_manager(),
self.query_engine(),
vec![wildcard()],
filters,
vec![],
Some(query_params.limit.unwrap_or(DEFAULT_LIMIT)),
query_params.tags,
vec![col(TRACE_ID_COLUMN)],
)
.await?;
// Get all traces that match the trace ids from the previous query.
// It's equivalent to the following SQL query:
//
// ```
// SELECT *
// FROM
// {db}.{trace_table}
// WHERE
// trace_id IN ({trace_ids}) AND
// timestamp >= {start_time} AND
// timestamp <= {end_time}
// ```
let mut filters = vec![col(TRACE_ID_COLUMN).in_list(
trace_ids_from_output(output)
.await?
.iter()
.map(lit)
.collect::<Vec<Expr>>(),
false,
)];
if let Some(start_time) = query_params.start_time {
filters.push(col(TIMESTAMP_COLUMN).gt_eq(lit_timestamp_nano(start_time)));
}
if let Some(end_time) = query_params.end_time {
filters.push(col(TIMESTAMP_COLUMN).lt_eq(lit_timestamp_nano(end_time)));
}
Ok(query_trace_table(
ctx,
self.catalog_manager(),
self.query_engine(),
selects,
vec![wildcard()],
filters,
vec![col(TIMESTAMP_COLUMN).sort(false, false)], // Sort by timestamp in descending order.
Some(DEFAULT_LIMIT),
query_params.tags,
vec![],
None,
None,
vec![],
)
.await?)
@@ -458,3 +517,34 @@ fn tags_filters(
json_tag_filters(dataframe, tags)
}
}
// Get trace ids from the record batches in the output.
async fn trace_ids_from_output(output: Output) -> ServerResult<Vec<String>> {
if let OutputData::Stream(stream) = output.data {
let schema = stream.schema().clone();
let recordbatches = util::collect(stream)
.await
.context(CollectRecordbatchSnafu)?;
// Only contains `trace_id` column in string type.
if !recordbatches.is_empty()
&& schema.num_columns() == 1
&& schema.contains_column(TRACE_ID_COLUMN)
{
let mut trace_ids = vec![];
for recordbatch in recordbatches {
for col in recordbatch.columns().iter() {
for row_idx in 0..recordbatch.num_rows() {
if let ValueRef::String(value) = col.get_ref(row_idx) {
trace_ids.push(value.to_string());
}
}
}
}
return Ok(trace_ids);
}
}
Ok(vec![])
}

View File

@@ -22,7 +22,7 @@ use servers::error::Error as ServerError;
use servers::grpc::builder::GrpcServerBuilder;
use servers::grpc::frontend_grpc_handler::FrontendGrpcHandler;
use servers::grpc::greptime_handler::GreptimeRequestHandler;
use servers::grpc::{GrpcOptions, GrpcServer, GrpcServerConfig};
use servers::grpc::{GrpcOptions, GrpcServer};
use servers::http::event::LogValidatorRef;
use servers::http::{HttpServer, HttpServerBuilder};
use servers::interceptor::LogIngestInterceptorRef;
@@ -66,12 +66,7 @@ where
}
pub fn grpc_server_builder(&self, opts: &GrpcOptions) -> Result<GrpcServerBuilder> {
let grpc_config = GrpcServerConfig {
max_recv_message_size: opts.max_recv_message_size.as_bytes() as usize,
max_send_message_size: opts.max_send_message_size.as_bytes() as usize,
tls: opts.tls.clone(),
};
let builder = GrpcServerBuilder::new(grpc_config, common_runtime::global_runtime())
let builder = GrpcServerBuilder::new(opts.as_config(), common_runtime::global_runtime())
.with_tls_config(opts.tls.clone())
.context(error::InvalidTlsConfigSnafu)?;
Ok(builder)
@@ -235,6 +230,7 @@ where
opts.keep_alive.as_secs(),
opts.reject_no_database.unwrap_or(false),
)),
Some(instance.process_manager().clone()),
);
handlers.insert((mysql_server, mysql_addr));
}
@@ -257,6 +253,7 @@ where
opts.keep_alive.as_secs(),
common_runtime::global_runtime(),
user_provider.clone(),
Some(self.instance.process_manager().clone()),
)) as Box<dyn Server>;
handlers.insert((pg_server, pg_addr));

View File

@@ -15,37 +15,52 @@
use std::pin::Pin;
use std::task::{Context, Poll};
use catalog::process_manager::Ticket;
use common_recordbatch::adapter::RecordBatchMetrics;
use common_recordbatch::{OrderOption, RecordBatch, RecordBatchStream, SendableRecordBatchStream};
use datatypes::schema::SchemaRef;
use futures::Stream;
pub struct StreamWrapper<T> {
pub struct CancellableStreamWrapper {
inner: SendableRecordBatchStream,
_attachment: T,
ticket: Ticket,
}
impl<T> Unpin for StreamWrapper<T> {}
impl Unpin for CancellableStreamWrapper {}
impl<T> StreamWrapper<T> {
pub fn new(stream: SendableRecordBatchStream, attachment: T) -> Self {
impl CancellableStreamWrapper {
pub fn new(stream: SendableRecordBatchStream, ticket: Ticket) -> Self {
Self {
inner: stream,
_attachment: attachment,
ticket,
}
}
}
impl<T> Stream for StreamWrapper<T> {
impl Stream for CancellableStreamWrapper {
type Item = common_recordbatch::error::Result<RecordBatch>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
let this = &mut *self;
Pin::new(&mut this.inner).poll_next(cx)
if this.ticket.cancellation_handle.is_cancelled() {
return Poll::Ready(Some(common_recordbatch::error::StreamCancelledSnafu.fail()));
}
if let Poll::Ready(res) = Pin::new(&mut this.inner).poll_next(cx) {
return Poll::Ready(res);
}
// On Pending, register the cancellation waker so a later cancel wakes this task.
this.ticket.cancellation_handle.waker().register(cx.waker());
// Re-check after registering: a cancel that raced with the registration above
// would otherwise be missed until the next wakeup.
if this.ticket.cancellation_handle.is_cancelled() {
return Poll::Ready(Some(common_recordbatch::error::StreamCancelledSnafu.fail()));
}
Poll::Pending
}
}
impl<T> RecordBatchStream for StreamWrapper<T> {
impl RecordBatchStream for CancellableStreamWrapper {
fn schema(&self) -> SchemaRef {
self.inner.schema()
}
@@ -58,3 +73,295 @@ impl<T> RecordBatchStream for StreamWrapper<T> {
self.inner.metrics()
}
}
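
The `poll_next` above uses the check / poll / register-waker / re-check discipline so a cancellation that arrives between the poll and the waker registration is not lost. A minimal sketch of the same pattern on a generic future, using `futures::task::AtomicWaker` (the `CancelFlag` and `Cancellable` types here are illustrative, not this crate's `Ticket` API):

```rust
use std::future::Future;
use std::pin::Pin;
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::task::{Context, Poll};

use futures::task::AtomicWaker;

/// Shared cancellation state: a flag plus the waker of whoever is waiting on it.
struct CancelFlag {
    cancelled: AtomicBool,
    waker: AtomicWaker,
}

impl CancelFlag {
    fn cancel(&self) {
        self.cancelled.store(true, Ordering::Release);
        self.waker.wake();
    }
}

/// Resolves to `None` if cancelled before the inner future completes.
struct Cancellable<F> {
    inner: F,
    flag: Arc<CancelFlag>,
}

impl<F: Future + Unpin> Future for Cancellable<F> {
    type Output = Option<F::Output>;

    fn poll(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        let this = &mut *self;
        if this.flag.cancelled.load(Ordering::Acquire) {
            return Poll::Ready(None);
        }
        if let Poll::Ready(out) = Pin::new(&mut this.inner).poll(cx) {
            return Poll::Ready(Some(out));
        }
        // Register first, then re-check: a cancel that lands between the check
        // above and this registration would otherwise never wake this task.
        this.flag.waker.register(cx.waker());
        if this.flag.cancelled.load(Ordering::Acquire) {
            return Poll::Ready(None);
        }
        Poll::Pending
    }
}
```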
#[cfg(test)]
mod tests {
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use std::time::Duration;
use catalog::process_manager::ProcessManager;
use common_recordbatch::adapter::RecordBatchMetrics;
use common_recordbatch::{OrderOption, RecordBatch, RecordBatchStream};
use datatypes::data_type::ConcreteDataType;
use datatypes::prelude::VectorRef;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::Int32Vector;
use futures::{Stream, StreamExt};
use tokio::time::{sleep, timeout};
use super::CancellableStreamWrapper;
// Mock stream for testing
struct MockRecordBatchStream {
schema: SchemaRef,
batches: Vec<common_recordbatch::error::Result<RecordBatch>>,
current: usize,
delay: Option<Duration>,
}
impl MockRecordBatchStream {
fn new(batches: Vec<common_recordbatch::error::Result<RecordBatch>>) -> Self {
let schema = Arc::new(Schema::new(vec![ColumnSchema::new(
"test_col",
ConcreteDataType::int32_datatype(),
false,
)]));
Self {
schema,
batches,
current: 0,
delay: None,
}
}
fn with_delay(mut self, delay: Duration) -> Self {
self.delay = Some(delay);
self
}
}
impl Stream for MockRecordBatchStream {
type Item = common_recordbatch::error::Result<RecordBatch>;
fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {
if let Some(delay) = self.delay {
// Simulate async delay
let waker = cx.waker().clone();
let delay_clone = delay;
tokio::spawn(async move {
sleep(delay_clone).await;
waker.wake();
});
self.delay = None; // Only delay once
return Poll::Pending;
}
if self.current >= self.batches.len() {
return Poll::Ready(None);
}
let batch = self.batches[self.current].as_ref().unwrap().clone();
self.current += 1;
Poll::Ready(Some(Ok(batch)))
}
}
impl RecordBatchStream for MockRecordBatchStream {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn output_ordering(&self) -> Option<&[OrderOption]> {
None
}
fn metrics(&self) -> Option<RecordBatchMetrics> {
None
}
}
fn create_test_batch() -> RecordBatch {
let schema = Arc::new(Schema::new(vec![ColumnSchema::new(
"test_col",
ConcreteDataType::int32_datatype(),
false,
)]));
RecordBatch::new(
schema,
vec![Arc::new(Int32Vector::from_values(0..3)) as VectorRef],
)
.unwrap()
}
#[tokio::test]
async fn test_stream_completes_normally() {
let batch = create_test_batch();
let mock_stream = MockRecordBatchStream::new(vec![Ok(batch.clone())]);
let process_manager = Arc::new(ProcessManager::new("".to_string(), None));
let ticket = process_manager.register_query(
"catalog".to_string(),
vec![],
"query".to_string(),
"client".to_string(),
None,
);
let mut cancellable_stream = CancellableStreamWrapper::new(Box::pin(mock_stream), ticket);
let result = cancellable_stream.next().await;
assert!(result.is_some());
assert!(result.unwrap().is_ok());
let end_result = cancellable_stream.next().await;
assert!(end_result.is_none());
}
#[tokio::test]
async fn test_stream_cancelled_before_start() {
let batch = create_test_batch();
let mock_stream = MockRecordBatchStream::new(vec![Ok(batch)]);
let process_manager = Arc::new(ProcessManager::new("".to_string(), None));
let ticket = process_manager.register_query(
"catalog".to_string(),
vec![],
"query".to_string(),
"client".to_string(),
None,
);
// Cancel before creating the wrapper
ticket.cancellation_handle.cancel();
let mut cancellable_stream = CancellableStreamWrapper::new(Box::pin(mock_stream), ticket);
let result = cancellable_stream.next().await;
assert!(result.is_some());
assert!(result.unwrap().is_err());
}
#[tokio::test]
async fn test_stream_cancelled_during_execution() {
let batch = create_test_batch();
let mock_stream =
MockRecordBatchStream::new(vec![Ok(batch)]).with_delay(Duration::from_millis(100));
let process_manager = Arc::new(ProcessManager::new("".to_string(), None));
let ticket = process_manager.register_query(
"catalog".to_string(),
vec![],
"query".to_string(),
"client".to_string(),
None,
);
let cancellation_handle = ticket.cancellation_handle.clone();
let mut cancellable_stream = CancellableStreamWrapper::new(Box::pin(mock_stream), ticket);
// Cancel after a short delay
tokio::spawn(async move {
sleep(Duration::from_millis(50)).await;
cancellation_handle.cancel();
});
let result = cancellable_stream.next().await;
assert!(result.is_some());
assert!(result.unwrap().is_err());
}
#[tokio::test]
async fn test_stream_completes_before_cancellation() {
let batch = create_test_batch();
let mock_stream = MockRecordBatchStream::new(vec![Ok(batch.clone())]);
let process_manager = Arc::new(ProcessManager::new("".to_string(), None));
let ticket = process_manager.register_query(
"catalog".to_string(),
vec![],
"query".to_string(),
"client".to_string(),
None,
);
let cancellation_handle = ticket.cancellation_handle.clone();
let mut cancellable_stream = CancellableStreamWrapper::new(Box::pin(mock_stream), ticket);
// Try to cancel after the stream should have completed
tokio::spawn(async move {
sleep(Duration::from_millis(100)).await;
cancellation_handle.cancel();
});
let result = cancellable_stream.next().await;
assert!(result.is_some());
assert!(result.unwrap().is_ok());
}
#[tokio::test]
async fn test_multiple_batches() {
let batch1 = create_test_batch();
let batch2 = create_test_batch();
let mock_stream = MockRecordBatchStream::new(vec![Ok(batch1), Ok(batch2)]);
let process_manager = Arc::new(ProcessManager::new("".to_string(), None));
let ticket = process_manager.register_query(
"catalog".to_string(),
vec![],
"query".to_string(),
"client".to_string(),
None,
);
let mut cancellable_stream = CancellableStreamWrapper::new(Box::pin(mock_stream), ticket);
// First batch
let result1 = cancellable_stream.next().await;
assert!(result1.is_some());
assert!(result1.unwrap().is_ok());
// Second batch
let result2 = cancellable_stream.next().await;
assert!(result2.is_some());
assert!(result2.unwrap().is_ok());
// End of stream
let end_result = cancellable_stream.next().await;
assert!(end_result.is_none());
}
#[tokio::test]
async fn test_record_batch_stream_methods() {
let batch = create_test_batch();
let mock_stream = MockRecordBatchStream::new(vec![Ok(batch)]);
let process_manager = Arc::new(ProcessManager::new("".to_string(), None));
let ticket = process_manager.register_query(
"catalog".to_string(),
vec![],
"query".to_string(),
"client".to_string(),
None,
);
let cancellable_stream = CancellableStreamWrapper::new(Box::pin(mock_stream), ticket);
// Test schema method
let schema = cancellable_stream.schema();
assert_eq!(schema.column_schemas().len(), 1);
assert_eq!(schema.column_schemas()[0].name, "test_col");
// Test output_ordering method
assert!(cancellable_stream.output_ordering().is_none());
// Test metrics method
assert!(cancellable_stream.metrics().is_none());
}
#[tokio::test]
async fn test_cancellation_during_pending_poll() {
let batch = create_test_batch();
let mock_stream =
MockRecordBatchStream::new(vec![Ok(batch)]).with_delay(Duration::from_millis(200));
let process_manager = Arc::new(ProcessManager::new("".to_string(), None));
let ticket = process_manager.register_query(
"catalog".to_string(),
vec![],
"query".to_string(),
"client".to_string(),
None,
);
let cancellation_handle = ticket.cancellation_handle.clone();
let mut cancellable_stream = CancellableStreamWrapper::new(Box::pin(mock_stream), ticket);
// Cancel while the stream is pending
tokio::spawn(async move {
sleep(Duration::from_millis(50)).await;
cancellation_handle.cancel();
});
let result = timeout(Duration::from_millis(300), cancellable_stream.next()).await;
assert!(result.is_ok());
let stream_result = result.unwrap();
assert!(stream_result.is_some());
assert!(stream_result.unwrap().is_err());
}
}

View File

@@ -218,6 +218,7 @@ mod tests {
let mut writer = Cursor::new(Vec::new());
let mut creator = BloomFilterCreator::new(
4,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,

View File

@@ -30,9 +30,6 @@ use crate::bloom_filter::SEED;
use crate::external_provider::ExternalTempFileProvider;
use crate::Bytes;
/// The false positive rate of the Bloom filter.
pub const FALSE_POSITIVE_RATE: f64 = 0.01;
/// `BloomFilterCreator` is responsible for creating and managing bloom filters
/// for a set of elements. It divides the rows into segments and creates
/// bloom filters for each segment.
@@ -79,6 +76,7 @@ impl BloomFilterCreator {
/// `rows_per_segment` <= 0
pub fn new(
rows_per_segment: usize,
false_positive_rate: f64,
intermediate_provider: Arc<dyn ExternalTempFileProvider>,
global_memory_usage: Arc<AtomicUsize>,
global_memory_usage_threshold: Option<usize>,
@@ -95,6 +93,7 @@ impl BloomFilterCreator {
cur_seg_distinct_elems_mem_usage: 0,
global_memory_usage: global_memory_usage.clone(),
finalized_bloom_filters: FinalizedBloomFilterStorage::new(
false_positive_rate,
intermediate_provider,
global_memory_usage,
global_memory_usage_threshold,
@@ -263,6 +262,7 @@ mod tests {
let mut writer = Cursor::new(Vec::new());
let mut creator = BloomFilterCreator::new(
2,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,
@@ -337,6 +337,7 @@ mod tests {
let mut writer = Cursor::new(Vec::new());
let mut creator: BloomFilterCreator = BloomFilterCreator::new(
2,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,
@@ -418,6 +419,7 @@ mod tests {
let mut writer = Cursor::new(Vec::new());
let mut creator = BloomFilterCreator::new(
2,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,

View File

@@ -23,7 +23,7 @@ use futures::{stream, AsyncWriteExt, Stream};
use snafu::ResultExt;
use crate::bloom_filter::creator::intermediate_codec::IntermediateBloomFilterCodecV1;
use crate::bloom_filter::creator::{FALSE_POSITIVE_RATE, SEED};
use crate::bloom_filter::creator::SEED;
use crate::bloom_filter::error::{IntermediateSnafu, IoSnafu, Result};
use crate::external_provider::ExternalTempFileProvider;
use crate::Bytes;
@@ -33,6 +33,9 @@ const MIN_MEMORY_USAGE_THRESHOLD: usize = 1024 * 1024; // 1MB
/// Storage for finalized Bloom filters.
pub struct FinalizedBloomFilterStorage {
/// The false positive rate of the Bloom filter.
false_positive_rate: f64,
/// Indices of the segments in the sequence of finalized Bloom filters.
segment_indices: Vec<usize>,
@@ -65,12 +68,14 @@ pub struct FinalizedBloomFilterStorage {
impl FinalizedBloomFilterStorage {
/// Creates a new `FinalizedBloomFilterStorage`.
pub fn new(
false_positive_rate: f64,
intermediate_provider: Arc<dyn ExternalTempFileProvider>,
global_memory_usage: Arc<AtomicUsize>,
global_memory_usage_threshold: Option<usize>,
) -> Self {
let external_prefix = format!("intm-bloom-filters-{}", uuid::Uuid::new_v4());
Self {
false_positive_rate,
segment_indices: Vec::new(),
in_memory: Vec::new(),
intermediate_file_id_counter: 0,
@@ -96,7 +101,7 @@ impl FinalizedBloomFilterStorage {
elems: impl IntoIterator<Item = Bytes>,
element_count: usize,
) -> Result<()> {
let mut bf = BloomFilter::with_false_pos(FALSE_POSITIVE_RATE)
let mut bf = BloomFilter::with_false_pos(self.false_positive_rate)
.seed(&SEED)
.expected_items(element_count);
for elem in elems.into_iter() {
@@ -284,6 +289,7 @@ mod tests {
let global_memory_usage_threshold = Some(1024 * 1024); // 1MB
let provider = Arc::new(mock_provider);
let mut storage = FinalizedBloomFilterStorage::new(
0.01,
provider,
global_memory_usage.clone(),
global_memory_usage_threshold,
@@ -340,6 +346,7 @@ mod tests {
let global_memory_usage_threshold = Some(1024 * 1024); // 1MB
let provider = Arc::new(mock_provider);
let mut storage = FinalizedBloomFilterStorage::new(
0.01,
provider,
global_memory_usage.clone(),
global_memory_usage_threshold,
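
Making the false-positive rate a constructor parameter matters because it directly drives the filter's size: for a target rate p and n expected items, the standard sizing is m/n = -ln(p) / (ln 2)^2 bits per item and k = (m/n) * ln 2 hash functions. A small sketch of that arithmetic (standard Bloom-filter math, not code from this crate):

```rust
/// Approximate Bloom filter sizing for `n` expected items at false-positive rate `p`.
fn bloom_sizing(n: usize, p: f64) -> (usize, usize) {
    let ln2 = std::f64::consts::LN_2;
    // total bits: m = ceil(-n * ln(p) / (ln 2)^2)
    let bits = (-(n as f64) * p.ln() / (ln2 * ln2)).ceil() as usize;
    // hash functions: k = round((m / n) * ln 2), at least 1
    let hashes = ((bits as f64 / n as f64) * ln2).round().max(1.0) as usize;
    (bits, hashes)
}

fn main() {
    // With p = 0.01, as in the tests above, each item costs roughly 9.6 bits
    // and about 7 hash functions are optimal.
    println!("{:?}", bloom_sizing(10_000, 0.01)); // ≈ (95851, 7)
}
```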

Some files were not shown because too many files have changed in this diff.