Compare commits

...

42 Commits

Author SHA1 Message Date
liyang
bb43d604a4 ci: use ubuntu 16 core machine in release-cn-artifacts (#6464)
ci: use ubuntu 16 core machine
2025-07-04 18:06:07 +00:00
Lei, HUANG
9576bcb9ae fix: filter empty batch in bulk insert api (#6459)
* fix/filter-empty-batch-in-bulk-insert-api:
 **Add Early Return for Empty Record Batches in `bulk_insert.rs`**

 - Implemented an early return in the `Inserter` implementation to handle cases where `record_batch.num_rows()` is zero, improving efficiency by avoiding unnecessary processing.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 **Improve Bulk Insert Handling**

 - **`handle_bulk_insert.rs`**: Added a check to handle cases where the batch has zero rows, immediately returning and sending a success response with zero rows processed.
 - **`bulk_insert.rs`**: Enhanced logic to skip processing for masks that select none, optimizing the bulk insert operation by avoiding unnecessary iterations.

 These changes improve the efficiency and robustness of the bulk insert process by handling edge cases more effectively.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 ### Refactor and Error Handling Enhancements

 - **Refactored Timestamp Handling**: Introduced `timestamp_array_to_primitive` function in `timestamp.rs` to streamline conversion of timestamp arrays to primitive arrays, reducing redundancy in `handle_bulk_insert.rs` and `bulk_insert.rs`.
 - **Error Handling**: Added `InconsistentTimestampLength` error in `error.rs` to handle mismatched timestamp column lengths in bulk insert operations.
 - **Bulk Insert Logic**: Updated `handle_bulk_insert.rs` to utilize the new timestamp conversion function and added checks for timestamp length consistency.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 **Refactor `bulk_insert.rs` to streamline imports**

 - Simplified import statements by removing unused timestamp-related arrays and data types from the `arrow` crate in `bulk_insert.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-04 13:32:10 +00:00
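The fix above boils down to two guards: return early when a record batch carries no rows, and reject batches whose separately converted timestamp column does not match the row count. A minimal sketch using only the `arrow` crate follows; `rows_to_insert` and the error struct are hypothetical stand-ins, not GreptimeDB's actual `Inserter` or `InconsistentTimestampLength` definitions.

```rust
use std::sync::Arc;

use arrow::array::{Array, Int64Array};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;

/// Stand-in for the `InconsistentTimestampLength` error the commit adds.
#[derive(Debug)]
struct InconsistentTimestampLength {
    expected: usize,
    actual: usize,
}

/// Returns how many rows should be inserted, doing no work for empty batches.
fn rows_to_insert(
    record_batch: &RecordBatch,
    timestamps: &dyn Array,
) -> Result<usize, InconsistentTimestampLength> {
    // Early return: an empty batch needs no masking, routing, or encoding.
    if record_batch.num_rows() == 0 {
        return Ok(0);
    }
    // The converted timestamp column must cover every row of the batch.
    if timestamps.len() != record_batch.num_rows() {
        return Err(InconsistentTimestampLength {
            expected: record_batch.num_rows(),
            actual: timestamps.len(),
        });
    }
    Ok(record_batch.num_rows())
}

fn main() {
    let schema = Arc::new(Schema::new(vec![Field::new("ts", DataType::Int64, false)]));
    let empty = RecordBatch::new_empty(schema);
    let ts = Int64Array::from(Vec::<i64>::new());
    // Zero rows: the function short-circuits instead of building requests.
    assert_eq!(rows_to_insert(&empty, &ts).unwrap(), 0);
}
```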
Zhenchi
dc17e6e517 fix: add backward compatibility for SkippingIndexOptions deserialization (#6458)
* fix: add backward compatibility for `SkippingIndexOptions` deserialization

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-04 11:58:27 +00:00
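The backward-compatibility fix above concerns deserializing `SkippingIndexOptions` written by older versions that lack newer fields. One common serde pattern for that is sketched below; it is a generic illustration with assumed field names and defaults, not the actual GreptimeDB implementation, which may use a custom `Deserialize` impl instead.

```rust
use serde::Deserialize;

fn default_false_positive_rate() -> f64 {
    0.01
}

/// Illustrative options struct; field names and defaults are assumptions.
#[derive(Debug, Deserialize)]
struct SkippingIndexOptionsSketch {
    granularity: u32,
    // Older payloads omit this field, so fall back to a default instead of failing.
    #[serde(default = "default_false_positive_rate")]
    false_positive_rate: f64,
}

fn main() {
    // A payload produced before the new field existed still deserializes.
    let old: SkippingIndexOptionsSketch =
        serde_json::from_str(r#"{"granularity": 8192}"#).expect("old payload should parse");
    println!("granularity={}, fpr={}", old.granularity, old.false_positive_rate);
}
```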
Yiran
563d25ee04 fix: doc links (#6305)
Signed-off-by: Yiran <cuiyiran3@gmail.com>
2025-07-04 09:52:47 +00:00
Weny Xu
7d17782fd5 feat: persist column ids in table metadata (#6457)
* feat: persist column ids in table metadata

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-04 08:12:29 +00:00
fys
c5360601f5 feat: information table extension (#6434)
* feat: information table extension

* avoid use std HashMap behind cfg feature
2025-07-04 04:37:36 +00:00
discord9
9b5baa965c feat: truly limit time range by split window (#6295)
* feat: actually split window to limit time range

feat: truly limit time range by split window

Update src/flow/src/batching_mode/state.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Signed-off-by: discord9 <discord9@163.com>

* chore: added stalled time window range

Signed-off-by: discord9 <discord9@163.com>

* fix: not flush all time range as too expensive

Signed-off-by: discord9 <discord9@163.com>

* test: make it more robust

Signed-off-by: discord9 <discord9@163.com>

* what

Signed-off-by: discord9 <discord9@163.com>

* feat: defensively handle surplus

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review, explain flush flow

Signed-off-by: discord9 <discord9@163.com>

* chore: per bugbot

Signed-off-by: discord9 <discord9@163.com>

* fix: a temp fix to make mirror insert go first (still needs a better fix to sync with the mirror insert that happens before)

Signed-off-by: discord9 <discord9@163.com>

* chore: add todo

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2025-07-04 03:37:43 +00:00
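The idea behind the commit above is to bound how much data a single flow flush query scans by splitting a large dirty time range into smaller windows. A toy illustration of such splitting, assuming millisecond timestamps and a fixed window size, follows; the flow engine's real state handling is considerably more involved.

```rust
/// Split `[start_ms, end_ms)` into windows of at most `window_ms` milliseconds
/// so each query only covers a bounded time range. Purely illustrative.
fn split_time_range(start_ms: i64, end_ms: i64, window_ms: i64) -> Vec<(i64, i64)> {
    assert!(window_ms > 0, "window size must be positive");
    let mut windows = Vec::new();
    let mut cur = start_ms;
    while cur < end_ms {
        let next = (cur + window_ms).min(end_ms);
        windows.push((cur, next));
        cur = next;
    }
    windows
}

fn main() {
    // A 10-second dirty range handled as three bounded windows.
    assert_eq!(
        split_time_range(0, 10_000, 4_000),
        vec![(0, 4_000), (4_000, 8_000), (8_000, 10_000)]
    );
}
```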
Yingwen
76a5145def fix: enable max_execution time for other read only statements (#6454)
Also disable the timeout when timeout is 0

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 13:46:02 +00:00
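Per the note above, a timeout of 0 means "no limit" rather than "time out immediately". The sketch below shows that shape with `tokio::time::timeout`; the function name and the millisecond option are assumptions for illustration, not the frontend's actual API.

```rust
use std::future::Future;
use std::time::Duration;

use tokio::time::timeout;

/// Runs a read-only statement with an optional execution budget.
/// Returns `None` if the budget is exhausted before the future completes.
async fn run_with_optional_timeout<F, T>(max_execution_time_ms: u64, fut: F) -> Option<T>
where
    F: Future<Output = T>,
{
    if max_execution_time_ms == 0 {
        // Zero disables the limit: run the statement to completion.
        Some(fut.await)
    } else {
        // Otherwise cancel the statement once the budget runs out.
        timeout(Duration::from_millis(max_execution_time_ms), fut)
            .await
            .ok()
    }
}

#[tokio::main]
async fn main() {
    // With the limit disabled, even a slow statement finishes.
    let v = run_with_optional_timeout(0, async {
        tokio::time::sleep(Duration::from_millis(5)).await;
        42
    })
    .await;
    assert_eq!(v, Some(42));
}
```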
Ruihang Xia
7b2703760b feat: skip rule checker on ingestion (#6453)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-03 13:31:16 +00:00
Ruihang Xia
81ea172ce4 feat!: point matrix based partition rule checker (#6431)
* bare implementation

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* stateful generator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* error report

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix remap checkpoint

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use matrix generator as iterator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* pre-calculate suffix product

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update existing test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix ut

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-03 06:50:02 +00:00
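The "pre-calculate suffix product" step mentioned above is the usual mixed-radix trick for walking a Cartesian product of candidate values (the "point matrix") with a single flat index. The generic sketch below shows only that arithmetic; it is not the checker's actual data model.

```rust
/// Suffix products of per-axis sizes: `prod[i]` is the number of points spanned
/// by one step along axis `i`.
fn suffix_products(sizes: &[usize]) -> Vec<usize> {
    let mut prod = vec![1; sizes.len()];
    for i in (0..sizes.len().saturating_sub(1)).rev() {
        prod[i] = prod[i + 1] * sizes[i + 1];
    }
    prod
}

/// Decode a flat index into one point (one coordinate per axis).
fn point(flat: usize, sizes: &[usize], prod: &[usize]) -> Vec<usize> {
    sizes
        .iter()
        .zip(prod)
        .map(|(&size, &p)| (flat / p) % size)
        .collect()
}

fn main() {
    // e.g. three partition columns with 2, 3, and 2 candidate values each.
    let sizes = [2, 3, 2];
    let prod = suffix_products(&sizes);
    assert_eq!(prod, vec![6, 2, 1]);
    // Flat index 7 = 1*6 + 0*2 + 1*1 maps to the point (1, 0, 1).
    assert_eq!(point(7, &sizes, &prod), vec![1, 0, 1]);
}
```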
dennis zhuang
f7c363f969 fix: label_replace and label_join functions when used as sub‐expressions (#6443)
* fix: label_replace and label_join functions in expressions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove update_fields

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: tql eval -> TQL EVAL

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: empty regex and not existing source label

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: simplify test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-03 05:34:22 +00:00
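Two of the edge cases named above, an empty regex and a missing source label, follow directly from Prometheus' documented `label_replace` semantics: a missing label is treated as an empty value and the anchored pattern is matched against it. The sketch below (using the `regex` crate) illustrates those semantics only; it is not GreptimeDB's PromQL planner code.

```rust
use std::collections::HashMap;

use regex::Regex;

/// Minimal `label_replace`-like helper: if the anchored pattern matches the
/// value of `src` (empty when `src` is absent), set `dst` to the expanded
/// replacement.
fn label_replace(
    labels: &mut HashMap<String, String>,
    dst: &str,
    replacement: &str,
    src: &str,
    pattern: &str,
) -> Result<(), regex::Error> {
    // PromQL anchors the pattern against the whole label value.
    let re = Regex::new(&format!("^(?:{pattern})$"))?;
    // A non-existing source label behaves like an empty string.
    let value = labels.get(src).cloned().unwrap_or_default();
    if let Some(caps) = re.captures(&value) {
        let mut expanded = String::new();
        caps.expand(replacement, &mut expanded);
        labels.insert(dst.to_string(), expanded);
    }
    Ok(())
}

fn main() -> Result<(), regex::Error> {
    let mut labels = HashMap::new();
    // Empty pattern plus missing source label: `dst` is still set.
    label_replace(&mut labels, "dst", "static", "missing", "")?;
    assert_eq!(labels.get("dst").map(String::as_str), Some("static"));
    Ok(())
}
```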
Ruihang Xia
5f2daae087 fix: remap column indices on overriding logical table partitions (#6446)
* fix: remap column indices on overriding logical table partitions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor map query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-02 12:12:00 +00:00
Yingwen
b1b0d0136f fix: correct MAX_EXECUTION_TIME timeout calculation (#6444)
* feat: implement statement timeout in frontend instance

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fail fast when timeout is 0

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: update start time

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-02 08:31:40 +00:00
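The timeout-calculation fix above records a start time and derives the remaining budget from it, treating a configured value of 0 as "no limit" and letting callers fail fast once nothing is left. A small stand-alone sketch of that arithmetic (names assumed) is below.

```rust
use std::time::{Duration, Instant};

/// Remaining execution budget given a configured limit and the recorded start
/// time. `None` means no limit is configured.
fn remaining_timeout(max_execution_time: Duration, start: Instant) -> Option<Duration> {
    if max_execution_time.is_zero() {
        return None;
    }
    // Saturates at zero, so the caller can fail fast instead of waiting.
    Some(max_execution_time.saturating_sub(start.elapsed()))
}

fn main() {
    let start = Instant::now();
    assert_eq!(remaining_timeout(Duration::ZERO, start), None);
    let left = remaining_timeout(Duration::from_secs(30), start).unwrap();
    assert!(left <= Duration::from_secs(30));
}
```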
Zhenchi
599f289f59 feat: add granularity and false_positive_rate options for indexes (#6416)
* feat: add `granularity` and `false_positive_rate` options for indexes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* upgrade proto

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-02 07:33:39 +00:00
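For context on the new options, the test diff further down in this compare serializes `false_positive_rate` as an integer field named `false-positive-rate-in-10000` (0.01 becomes 100) next to `granularity: 10240`. The tiny helper below only shows that conversion; it is not the project's serialization code.

```rust
/// Convert a false-positive rate such as 0.01 into the "in 10000" integer form
/// used by the serialized index options shown in the diff below.
fn false_positive_rate_in_10000(rate: f64) -> u32 {
    (rate * 10_000.0).round() as u32
}

fn main() {
    assert_eq!(false_positive_rate_in_10000(0.01), 100);
}
```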
LFC
385f12a62e refactor: extract the common method for errors into tonic status (#6437)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-02 02:57:30 +00:00
fys
6b90e2b6b4 fix: allow clippy::print_stdout in cli crate (#6436)
* fix: allow clippy::print_stdout in cli crate

* add clippy lint options
2025-07-02 01:40:58 +00:00
ZonaHe
a4f3e96e96 feat: update dashboard to v0.10.2 (#6433)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-07-02 01:27:37 +00:00
Ruihang Xia
2b0f27da51 feat: don't allow creating flow with the same sink and source table (#6435)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-01 11:33:09 +00:00
Weny Xu
e0382eeb7c fix: fix dest_keys chunks bug in TombstoneManager (#6432)
* fix(meta): fix dest_keys_chunks bug in TombstoneManager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fix typo

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix sqlness tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-01 09:20:13 +00:00
liyang
4aa6add8dc ci: add check-version script to check whether to push the latest image (#6415)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-07-01 07:45:47 +00:00
zyy17
645988975e refactor: add RegionMigrationTriggerReason in RegionMigrationProcedureTask (#6413)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-30 11:16:41 +00:00
LFC
a203909de3 feat: extension range definition (#6386)
* feat: defined extension range

Signed-off-by: luofucong <luofc@foxmail.com>

* remove feature parameters

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-06-30 02:42:40 +00:00
discord9
616e76941a feat: flow query parallel=1&query faster with many windows&min one time window (#6324)
* feat: flow query parallel=1 & query faster when there are too many windows & min one time window

Signed-off-by: discord9 <discord9@163.com>

* chore: default flow query parallelism=1

Signed-off-by: discord9 <discord9@163.com>

* refactor: use query options in flownode per review

Signed-off-by: discord9 <discord9@163.com>

* docs: update comment

Signed-off-by: discord9 <discord9@163.com>

* chore: fix test

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: make config docs

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-06-30 02:17:01 +00:00
Yingwen
bc42d35c2a chore: bump version to 0.16 (#6417)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-28 01:46:01 +00:00
fys
524bdfff22 fix: add cfg for DecodeSqlValue error (#6420) 2025-06-28 01:39:06 +00:00
fys
6bed0b6ba0 feat: add trigger-related error code (#6419) 2025-06-28 01:25:20 +00:00
shuiyisong
dec8c52b18 feat(pipeline): support Loki API (#6390)
* chore: use schema_info

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: abstract loki item generator

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: introduce middle item

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* feat: introduce pipeline in loki api

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* test: add tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update prefix and test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change recursion to loop

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-06-28 01:01:08 +00:00
zyy17
753a7e1a24 refactor: pass pipeline name through http header and get db from query context (#6405)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-27 10:43:37 +00:00
Lei, HUANG
6684200fce fix: skip failing nodes when gathering process info (#6412)
* fix/process-manager-skip-fail-nodes:
 - **Enhance Error Handling in `process_manager.rs`:**
   Improved error handling by adding a warning log for failing nodes in the `list_process` method. This ensures that the process listing continues even if some nodes fail to respond.

 - **Add Error Type Import in `process_manager.rs`:**
   Included the `Error` type from the `error` module to handle errors more effectively within the `ProcessManager` implementation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/process-manager-skip-fail-nodes:
 **Enhancements to Debugging and Trait Implementation**

 - **`process_manager.rs`**: Improved logging by adding more detailed error messages when skipping failing nodes.
 - **`selector.rs`**: Enhanced the `FrontendClient` trait by adding the `Debug` trait bound to improve debugging capabilities.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-27 08:20:01 +00:00
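The behavior described above is essentially "log and continue": aggregate process lists from the nodes that respond and emit a warning for those that do not, instead of failing the whole query. A self-contained sketch with stand-in types follows; `ProcessInfo`, `fetch_processes`, and the node names are hypothetical.

```rust
#[derive(Debug)]
struct ProcessInfo {
    node: String,
    query: String,
}

/// Stand-in for the per-node RPC; here one node always fails.
fn fetch_processes(node: &str) -> Result<Vec<ProcessInfo>, String> {
    if node == "dn-1" {
        return Err("connection refused".to_string());
    }
    Ok(vec![ProcessInfo {
        node: node.to_string(),
        query: "SELECT 1".to_string(),
    }])
}

/// Collect process info, skipping nodes that fail instead of aborting.
fn list_processes_skipping_failures(nodes: &[&str]) -> Vec<ProcessInfo> {
    let mut all = Vec::new();
    for &node in nodes {
        match fetch_processes(node) {
            Ok(mut processes) => all.append(&mut processes),
            // The real fix logs a warning here and keeps going.
            Err(err) => eprintln!("skipping failing node {node}: {err}"),
        }
    }
    all
}

fn main() {
    let processes = list_processes_skipping_failures(&["dn-0", "dn-1", "dn-2"]);
    assert_eq!(processes.len(), 2);
    println!("{processes:?}");
}
```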
Weny Xu
5fcb97724f chore: correct typo in configuration (#6411)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-27 07:55:13 +00:00
Zhenchi
ff559b2688 fix: complete partial index search results in cache (#6403)
* fix: complete partial index search results in cache

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add initial tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* cover issue case

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* TestEnv new -> async

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-06-27 07:40:14 +00:00
Ruihang Xia
8473a34fc9 feat: Collider for playing with PartitionRule (#6399)
* skeleton

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* initial impl and tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor and reorganize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* error handling

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* explain naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-27 07:15:33 +00:00
jeremyhi
df0ebf0378 feat: override logical table's partition key indices (#6385)
* feat: Override logical table's partition key indices with physical table's

* chore: by comment
2025-06-27 02:55:56 +00:00
Weny Xu
4a665fd27b refactor: move #[allow(clippy::print_stdout)] to lib level (#6398)
chore: allow cli to print stdout

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-27 02:40:12 +00:00
Yingwen
b4d6441716 refactor: rename test show_processList to show_process_list (#6408)
refactor: rename show_processList to show_process_list

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-27 01:18:54 +00:00
liyang
bdd50a2263 ci: try to fix the job permissions (#6407)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-06-26 17:23:43 +00:00
codephage
f87b12b2aa feat: remove own pow fn (#6404)
feat remove own pow fn

Signed-off-by: codephage. <381510760@qq.com>
2025-06-26 09:27:30 +00:00
Yiran
07eec083b9 fix: doc issue assignee (#6406)
Signed-off-by: Yiran <cuiyiran3@gmail.com>
2025-06-26 09:18:47 +00:00
Weny Xu
4737285275 feat: implement pause/resume functionality for procedure manager (#6393)
* feat: implement pause/resume functionality for procedure manager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-26 01:57:12 +00:00
codephage
55f5e09885 fix: sqlness_test show_processList (#6401)
fix test sqlness_test show_processList

Signed-off-by: codephage. <381510760@qq.com>
2025-06-26 01:53:55 +00:00
liyang
9ab36e9a6f test: add a test load configuration example for flownode (#6397)
* test: add a test load configuration example for flownode

Signed-off-by: liyang <daviderli614@gmail.com>

* format rust

Signed-off-by: liyang <daviderli614@gmail.com>

* fix cargo clippy

Signed-off-by: liyang <daviderli614@gmail.com>

* refine FlownodeOptions visibility

Signed-off-by: liyang <daviderli614@gmail.com>

* format rust

Signed-off-by: liyang <daviderli614@gmail.com>

---------

Signed-off-by: liyang <daviderli614@gmail.com>
2025-06-26 01:48:53 +00:00
Lei, HUANG
4bb5d00a4b fix(http): apply string validation mode to pipeline processor (#6378)
* fix/apply-string-validation-to-pipeline:
 ### Commit Summary

 - **Refactor `decode_string` Functionality**:
   - Moved `decode_string` logic into `PromValidationMode` as a method `decode_string`.
   - Updated all references to use the new method.
   - Files affected: `http.rs`, `prom_row_builder.rs`, `proto.rs`.

 - **Logging Enhancements**:
   - Added `debug` logging for invalid UTF-8 string values.
   - File affected: `http.rs`.

 - **Test Updates**:
   - Modified tests to use the new `decode_string` method in `PromValidationMode`.
   - File affected: `proto.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-25 18:56:35 +00:00
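The refactor above moves string decoding onto the validation mode itself, so every ingest path applies the same policy. The sketch below shows that shape with two assumed variants; the real `PromValidationMode` enum and its behavior may differ.

```rust
/// Illustrative validation modes; variant names are assumptions.
enum PromValidationModeSketch {
    /// Reject values that are not valid UTF-8.
    Strict,
    /// Replace invalid byte sequences with U+FFFD.
    Lossy,
}

impl PromValidationModeSketch {
    /// Decode raw label/value bytes according to the selected mode.
    fn decode_string(&self, bytes: &[u8]) -> Result<String, std::str::Utf8Error> {
        match self {
            Self::Strict => std::str::from_utf8(bytes).map(|s| s.to_string()),
            Self::Lossy => Ok(String::from_utf8_lossy(bytes).into_owned()),
        }
    }
}

fn main() {
    let bytes = [0x66, 0x6f, 0x6f, 0xff]; // "foo" followed by an invalid byte
    assert!(PromValidationModeSketch::Strict.decode_string(&bytes).is_err());
    assert_eq!(
        PromValidationModeSketch::Lossy.decode_string(&bytes).unwrap(),
        "foo\u{FFFD}"
    );
}
```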
238 changed files with 8133 additions and 2781 deletions

42
.github/scripts/check-version.sh vendored Executable file
View File

@@ -0,0 +1,42 @@
#!/bin/bash

# Get current version
CURRENT_VERSION=$1
if [ -z "$CURRENT_VERSION" ]; then
  echo "Error: Failed to get current version"
  exit 1
fi

# Get the latest version from GitHub Releases
API_RESPONSE=$(curl -s "https://api.github.com/repos/GreptimeTeam/greptimedb/releases/latest")
if [ -z "$API_RESPONSE" ] || [ "$(echo "$API_RESPONSE" | jq -r '.message')" = "Not Found" ]; then
  echo "Error: Failed to fetch latest version from GitHub"
  exit 1
fi

# Get the latest version
LATEST_VERSION=$(echo "$API_RESPONSE" | jq -r '.tag_name')
if [ -z "$LATEST_VERSION" ] || [ "$LATEST_VERSION" = "null" ]; then
  echo "Error: No valid version found in GitHub releases"
  exit 1
fi

# Cleaned up version number format (removed possible 'v' prefix and -nightly suffix)
CLEAN_CURRENT=$(echo "$CURRENT_VERSION" | sed 's/^v//' | sed 's/-nightly-.*//')
CLEAN_LATEST=$(echo "$LATEST_VERSION" | sed 's/^v//' | sed 's/-nightly-.*//')

echo "Current version: $CLEAN_CURRENT"
echo "Latest release version: $CLEAN_LATEST"

# Use sort -V to compare versions
HIGHER_VERSION=$(printf "%s\n%s" "$CLEAN_CURRENT" "$CLEAN_LATEST" | sort -V | tail -n1)
if [ "$HIGHER_VERSION" = "$CLEAN_CURRENT" ]; then
  echo "Current version ($CLEAN_CURRENT) is NEWER than or EQUAL to latest ($CLEAN_LATEST)"
  echo "should-push-latest-tag=true" >> $GITHUB_OUTPUT
else
  echo "Current version ($CLEAN_CURRENT) is OLDER than latest ($CLEAN_LATEST)"
  echo "should-push-latest-tag=false" >> $GITHUB_OUTPUT
fi

View File

@@ -110,6 +110,8 @@ jobs:
# The 'version' use as the global tag name of the release workflow.
version: ${{ steps.create-version.outputs.version }}
should-push-latest-tag: ${{ steps.check-version.outputs.should-push-latest-tag }}
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -135,6 +137,11 @@ jobs:
GITHUB_REF_NAME: ${{ github.ref_name }}
NIGHTLY_RELEASE_PREFIX: ${{ env.NIGHTLY_RELEASE_PREFIX }}
- name: Check version
id: check-version
run: |
./.github/scripts/check-version.sh "${{ steps.create-version.outputs.version }}"
- name: Allocate linux-amd64 runner
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
@@ -314,7 +321,7 @@ jobs:
image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
push-latest-tag: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
push-latest-tag: ${{ needs.allocate-runners.outputs.should-push-latest-tag == 'true' && github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
- name: Set build image result
id: set-build-image-result
@@ -332,7 +339,7 @@ jobs:
build-windows-artifacts,
release-images-to-dockerhub,
]
runs-on: ubuntu-latest
runs-on: ubuntu-latest-16-cores
# When we push to ACR, it's easy to fail due to some unknown network issues.
# However, we don't want to fail the whole workflow because of this.
# The ACR have daily sync with DockerHub, so don't worry about the image not being updated.
@@ -361,7 +368,7 @@ jobs:
dev-mode: false
upload-to-s3: true
update-version-info: true
push-latest-tag: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
push-latest-tag: ${{ needs.allocate-runners.outputs.should-push-latest-tag == 'true' && github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
publish-github-release:
name: Create GitHub release and upload artifacts

View File

@@ -11,17 +11,17 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
permissions:
issues: write
contents: write
pull-requests: write
jobs:
check:
runs-on: ubuntu-latest
permissions:
pull-requests: write # Add permissions to modify PRs
issues: write
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: ./.github/actions/setup-cyborg
- name: Check Pull Request
working-directory: cyborg

161
Cargo.lock generated
View File

@@ -211,7 +211,7 @@ checksum = "d301b3b94cb4b2f23d7917810addbbaff90738e0ca2be692bd027e70d7e0330c"
[[package]]
name = "api"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"common-base",
"common-decimal",
@@ -944,7 +944,7 @@ dependencies = [
[[package]]
name = "auth"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"async-trait",
@@ -1586,7 +1586,7 @@ dependencies = [
[[package]]
name = "cache"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"catalog",
"common-error",
@@ -1610,7 +1610,7 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]]
name = "catalog"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"arrow 54.2.1",
@@ -1948,7 +1948,7 @@ checksum = "1462739cb27611015575c0c11df5df7601141071f07518d56fcc1be504cbec97"
[[package]]
name = "cli"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-stream",
"async-trait",
@@ -1993,7 +1993,7 @@ dependencies = [
"session",
"snafu 0.8.5",
"store-api",
"substrait 0.15.0",
"substrait 0.16.0",
"table",
"tempfile",
"tokio",
@@ -2002,7 +2002,7 @@ dependencies = [
[[package]]
name = "client"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"arc-swap",
@@ -2032,7 +2032,7 @@ dependencies = [
"rand 0.9.0",
"serde_json",
"snafu 0.8.5",
"substrait 0.15.0",
"substrait 0.16.0",
"substrait 0.37.3",
"tokio",
"tokio-stream",
@@ -2073,7 +2073,7 @@ dependencies = [
[[package]]
name = "cmd"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-trait",
"auth",
@@ -2135,7 +2135,7 @@ dependencies = [
"snafu 0.8.5",
"stat",
"store-api",
"substrait 0.15.0",
"substrait 0.16.0",
"table",
"temp-env",
"tempfile",
@@ -2182,7 +2182,7 @@ checksum = "55b672471b4e9f9e95499ea597ff64941a309b2cdbffcc46f2cc5e2d971fd335"
[[package]]
name = "common-base"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"anymap2",
"async-trait",
@@ -2204,11 +2204,11 @@ dependencies = [
[[package]]
name = "common-catalog"
version = "0.15.0"
version = "0.16.0"
[[package]]
name = "common-config"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"common-base",
"common-error",
@@ -2234,7 +2234,7 @@ dependencies = [
[[package]]
name = "common-datasource"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"arrow 54.2.1",
"arrow-schema 54.3.1",
@@ -2271,7 +2271,7 @@ dependencies = [
[[package]]
name = "common-decimal"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"bigdecimal 0.4.8",
"common-error",
@@ -2284,7 +2284,7 @@ dependencies = [
[[package]]
name = "common-error"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"common-macro",
"http 1.1.0",
@@ -2295,7 +2295,7 @@ dependencies = [
[[package]]
name = "common-frontend"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-trait",
"common-error",
@@ -2311,7 +2311,7 @@ dependencies = [
[[package]]
name = "common-function"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"ahash 0.8.11",
"api",
@@ -2365,7 +2365,7 @@ dependencies = [
[[package]]
name = "common-greptimedb-telemetry"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-trait",
"common-runtime",
@@ -2382,7 +2382,7 @@ dependencies = [
[[package]]
name = "common-grpc"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"arrow-flight",
@@ -2414,7 +2414,7 @@ dependencies = [
[[package]]
name = "common-grpc-expr"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"common-base",
@@ -2433,7 +2433,7 @@ dependencies = [
[[package]]
name = "common-macro"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"arc-swap",
"common-query",
@@ -2447,7 +2447,7 @@ dependencies = [
[[package]]
name = "common-mem-prof"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"anyhow",
"common-error",
@@ -2463,7 +2463,7 @@ dependencies = [
[[package]]
name = "common-meta"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"anymap2",
"api",
@@ -2528,7 +2528,7 @@ dependencies = [
[[package]]
name = "common-options"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"common-grpc",
"humantime-serde",
@@ -2537,11 +2537,11 @@ dependencies = [
[[package]]
name = "common-plugins"
version = "0.15.0"
version = "0.16.0"
[[package]]
name = "common-pprof"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"common-error",
"common-macro",
@@ -2553,7 +2553,7 @@ dependencies = [
[[package]]
name = "common-procedure"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-stream",
"async-trait",
@@ -2580,7 +2580,7 @@ dependencies = [
[[package]]
name = "common-procedure-test"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-trait",
"common-procedure",
@@ -2589,7 +2589,7 @@ dependencies = [
[[package]]
name = "common-query"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"async-trait",
@@ -2615,7 +2615,7 @@ dependencies = [
[[package]]
name = "common-recordbatch"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"arc-swap",
"common-error",
@@ -2635,7 +2635,7 @@ dependencies = [
[[package]]
name = "common-runtime"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-trait",
"clap 4.5.19",
@@ -2665,14 +2665,14 @@ dependencies = [
[[package]]
name = "common-session"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"strum 0.27.1",
]
[[package]]
name = "common-telemetry"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"backtrace",
"common-error",
@@ -2699,7 +2699,7 @@ dependencies = [
[[package]]
name = "common-test-util"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"client",
"common-grpc",
@@ -2712,7 +2712,7 @@ dependencies = [
[[package]]
name = "common-time"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"arrow 54.2.1",
"chrono",
@@ -2730,7 +2730,7 @@ dependencies = [
[[package]]
name = "common-version"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"build-data",
"const_format",
@@ -2740,7 +2740,7 @@ dependencies = [
[[package]]
name = "common-wal"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"common-base",
"common-error",
@@ -2763,7 +2763,7 @@ dependencies = [
[[package]]
name = "common-workload"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"common-telemetry",
@@ -3719,7 +3719,7 @@ dependencies = [
[[package]]
name = "datanode"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"arrow-flight",
@@ -3772,7 +3772,7 @@ dependencies = [
"session",
"snafu 0.8.5",
"store-api",
"substrait 0.15.0",
"substrait 0.16.0",
"table",
"tokio",
"toml 0.8.19",
@@ -3781,7 +3781,7 @@ dependencies = [
[[package]]
name = "datatypes"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"arrow 54.2.1",
"arrow-array 54.2.1",
@@ -4441,7 +4441,7 @@ checksum = "e8c02a5121d4ea3eb16a80748c74f5549a5665e4c21333c6098f283870fbdea6"
[[package]]
name = "file-engine"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"async-trait",
@@ -4578,7 +4578,7 @@ checksum = "8bf7cc16383c4b8d58b9905a8509f02926ce3058053c056376248d958c9df1e8"
[[package]]
name = "flow"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"arrow 54.2.1",
@@ -4643,7 +4643,7 @@ dependencies = [
"sql",
"store-api",
"strum 0.27.1",
"substrait 0.15.0",
"substrait 0.16.0",
"table",
"tokio",
"tonic 0.12.3",
@@ -4698,10 +4698,11 @@ checksum = "6c2141d6d6c8512188a7891b4b01590a45f6dac67afb4f255c4124dbb86d4eaa"
[[package]]
name = "frontend"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"arc-swap",
"async-stream",
"async-trait",
"auth",
"bytes",
@@ -4757,7 +4758,7 @@ dependencies = [
"sqlparser 0.54.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=0cf6c04490d59435ee965edd2078e8855bd8471e)",
"store-api",
"strfmt",
"substrait 0.15.0",
"substrait 0.16.0",
"table",
"tokio",
"tokio-util",
@@ -5147,7 +5148,7 @@ dependencies = [
[[package]]
name = "greptime-proto"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=464226cf8a4a22696503536a123d0b9e318582f4#464226cf8a4a22696503536a123d0b9e318582f4"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=ceb1af4fa9309ce65bda0367db7b384df2bb4d4f#ceb1af4fa9309ce65bda0367db7b384df2bb4d4f"
dependencies = [
"prost 0.13.5",
"serde",
@@ -5918,7 +5919,7 @@ dependencies = [
[[package]]
name = "index"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-trait",
"asynchronous-codec",
@@ -6803,7 +6804,7 @@ checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24"
[[package]]
name = "log-query"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"chrono",
"common-error",
@@ -6815,7 +6816,7 @@ dependencies = [
[[package]]
name = "log-store"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-stream",
"async-trait",
@@ -7113,7 +7114,7 @@ dependencies = [
[[package]]
name = "meta-client"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"async-trait",
@@ -7141,7 +7142,7 @@ dependencies = [
[[package]]
name = "meta-srv"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"async-trait",
@@ -7232,7 +7233,7 @@ dependencies = [
[[package]]
name = "metric-engine"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"aquamarine",
@@ -7322,7 +7323,7 @@ dependencies = [
[[package]]
name = "mito-codec"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"bytes",
@@ -7345,7 +7346,7 @@ dependencies = [
[[package]]
name = "mito2"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"aquamarine",
@@ -8095,7 +8096,7 @@ dependencies = [
[[package]]
name = "object-store"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"anyhow",
"bytes",
@@ -8431,7 +8432,7 @@ dependencies = [
[[package]]
name = "operator"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"ahash 0.8.11",
"api",
@@ -8486,7 +8487,7 @@ dependencies = [
"sql",
"sqlparser 0.54.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=0cf6c04490d59435ee965edd2078e8855bd8471e)",
"store-api",
"substrait 0.15.0",
"substrait 0.16.0",
"table",
"tokio",
"tokio-util",
@@ -8753,7 +8754,7 @@ dependencies = [
[[package]]
name = "partition"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"async-trait",
@@ -9041,7 +9042,7 @@ checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]]
name = "pipeline"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"ahash 0.8.11",
"api",
@@ -9184,7 +9185,7 @@ dependencies = [
[[package]]
name = "plugins"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"auth",
"clap 4.5.19",
@@ -9497,7 +9498,7 @@ dependencies = [
[[package]]
name = "promql"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"ahash 0.8.11",
"async-trait",
@@ -9593,7 +9594,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be769465445e8c1474e9c5dac2018218498557af32d9ed057325ec9a41ae81bf"
dependencies = [
"heck 0.5.0",
"itertools 0.14.0",
"itertools 0.11.0",
"log",
"multimap",
"once_cell",
@@ -9639,7 +9640,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a56d757972c98b346a9b766e3f02746cde6dd1cd1d1d563472929fdd74bec4d"
dependencies = [
"anyhow",
"itertools 0.14.0",
"itertools 0.11.0",
"proc-macro2",
"quote",
"syn 2.0.100",
@@ -9779,7 +9780,7 @@ dependencies = [
[[package]]
name = "puffin"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-compression 0.4.13",
"async-trait",
@@ -9821,7 +9822,7 @@ dependencies = [
[[package]]
name = "query"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"ahash 0.8.11",
"api",
@@ -9887,7 +9888,7 @@ dependencies = [
"sqlparser 0.54.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=0cf6c04490d59435ee965edd2078e8855bd8471e)",
"statrs",
"store-api",
"substrait 0.15.0",
"substrait 0.16.0",
"table",
"tokio",
"tokio-stream",
@@ -11209,7 +11210,7 @@ dependencies = [
[[package]]
name = "servers"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"ahash 0.8.11",
"api",
@@ -11330,7 +11331,7 @@ dependencies = [
[[package]]
name = "session"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"arc-swap",
@@ -11669,7 +11670,7 @@ dependencies = [
[[package]]
name = "sql"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"chrono",
@@ -11724,7 +11725,7 @@ dependencies = [
[[package]]
name = "sqlness-runner"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-trait",
"clap 4.5.19",
@@ -12024,7 +12025,7 @@ dependencies = [
[[package]]
name = "stat"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"nix 0.30.1",
]
@@ -12050,7 +12051,7 @@ dependencies = [
[[package]]
name = "store-api"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"aquamarine",
@@ -12211,7 +12212,7 @@ dependencies = [
[[package]]
name = "substrait"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"async-trait",
"bytes",
@@ -12412,7 +12413,7 @@ dependencies = [
[[package]]
name = "table"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"async-trait",
@@ -12673,7 +12674,7 @@ checksum = "3369f5ac52d5eb6ab48c6b4ffdc8efbcad6b89c765749064ba298f2c68a16a76"
[[package]]
name = "tests-fuzz"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"arbitrary",
"async-trait",
@@ -12717,7 +12718,7 @@ dependencies = [
[[package]]
name = "tests-integration"
version = "0.15.0"
version = "0.16.0"
dependencies = [
"api",
"arrow-flight",
@@ -12784,7 +12785,7 @@ dependencies = [
"sql",
"sqlx",
"store-api",
"substrait 0.15.0",
"substrait 0.16.0",
"table",
"tempfile",
"time",
@@ -14265,7 +14266,7 @@ version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb"
dependencies = [
"windows-sys 0.59.0",
"windows-sys 0.48.0",
]
[[package]]

View File

@@ -71,11 +71,13 @@ members = [
resolver = "2"
[workspace.package]
version = "0.15.0"
version = "0.16.0"
edition = "2021"
license = "Apache-2.0"
[workspace.lints]
clippy.print_stdout = "warn"
clippy.print_stderr = "warn"
clippy.dbg_macro = "warn"
clippy.implicit_clone = "warn"
clippy.result_large_err = "allow"
@@ -135,7 +137,7 @@ etcd-client = "0.14"
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "464226cf8a4a22696503536a123d0b9e318582f4" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "ceb1af4fa9309ce65bda0367db7b384df2bb4d4f" }
hex = "0.4"
http = "1"
humantime = "2.1"

View File

@@ -75,9 +75,9 @@
| --------- | ----------- |
| [Unified Observability Data](https://docs.greptime.com/user-guide/concepts/why-greptimedb) | Store metrics, logs, and traces as timestamped, contextual wide events. Query via [SQL](https://docs.greptime.com/user-guide/query-data/sql), [PromQL](https://docs.greptime.com/user-guide/query-data/promql), and [streaming](https://docs.greptime.com/user-guide/flow-computation/overview). |
| [High Performance & Cost Effective](https://docs.greptime.com/user-guide/manage-data/data-index) | Written in Rust, with a distributed query engine, [rich indexing](https://docs.greptime.com/user-guide/manage-data/data-index), and optimized columnar storage, delivering sub-second responses at PB scale. |
| [Cloud-Native Architecture](https://docs.greptime.com/user-guide/concepts/architecture) | Designed for [Kubernetes](https://docs.greptime.com/user-guide/deployments/deploy-on-kubernetes/greptimedb-operator-management), with compute/storage separation, native object storage (AWS S3, Azure Blob, etc.) and seamless cross-cloud access. |
| [Cloud-Native Architecture](https://docs.greptime.com/user-guide/concepts/architecture) | Designed for [Kubernetes](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/greptimedb-operator-management), with compute/storage separation, native object storage (AWS S3, Azure Blob, etc.) and seamless cross-cloud access. |
| [Developer-Friendly](https://docs.greptime.com/user-guide/protocols/overview) | Access via SQL/PromQL interfaces, REST API, MySQL/PostgreSQL protocols, and popular ingestion [protocols](https://docs.greptime.com/user-guide/protocols/overview). |
| [Flexible Deployment](https://docs.greptime.com/user-guide/deployments/overview) | Deploy anywhere: edge (including ARM/[Android](https://docs.greptime.com/user-guide/deployments/run-on-android)) or cloud, with unified APIs and efficient data sync. |
| [Flexible Deployment](https://docs.greptime.com/user-guide/deployments-administration/overview) | Deploy anywhere: edge (including ARM/[Android](https://docs.greptime.com/user-guide/deployments-administration/run-on-android)) or cloud, with unified APIs and efficient data sync. |
Learn more in [Why GreptimeDB](https://docs.greptime.com/user-guide/concepts/why-greptimedb) and [Observability 2.0 and the Database for It](https://greptime.com/blogs/2025-04-25-greptimedb-observability2-new-database).

View File

@@ -325,7 +325,7 @@
| `selector` | String | `round_robin` | Datanode selector type.<br/>- `round_robin` (default value)<br/>- `lease_based`<br/>- `load_based`<br/>For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector". |
| `use_memory_store` | Bool | `false` | Store data in memory. |
| `enable_region_failover` | Bool | `false` | Whether to enable region failover.<br/>This feature is only available on GreptimeDB running on cluster mode and<br/>- Using Remote WAL<br/>- Using shared storage (e.g., s3). |
| `region_failure_detector_initialization_delay` | String | `10m` | Delay before initializing region failure detectors.<br/>This delay helps prevent premature initialization of region failure detectors in cases where<br/>cluster maintenance mode is enabled right after metasrv starts, especially when the cluster<br/>is not deployed via the recommended GreptimeDB Operator. Without this delay, early detector registration<br/>may trigger unnecessary region failovers during datanode startup. |
| `region_failure_detector_initialization_delay` | String | `10m` | The delay before starting region failure detection.<br/>This delay helps prevent Metasrv from triggering unnecessary region failovers before all Datanodes are fully started.<br/>Especially useful when the cluster is not deployed with GreptimeDB Operator and maintenance mode is not enabled. |
| `allow_region_failover_on_local_wal` | Bool | `false` | Whether to allow region failover on local WAL.<br/>**This option is not recommended to be set to true, because it may lead to data loss during failover.** |
| `node_max_idle_time` | String | `24hours` | Max allowed idle time before removing node info from metasrv memory. |
| `enable_telemetry` | Bool | `true` | Whether to enable greptimedb telemetry. Enabled by default. |
@@ -436,8 +436,8 @@
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the wal is stored in the local file system by raft-engine.<br/>- `kafka`: it's remote wal that data is stored in Kafka. |
| `wal.dir` | String | Unset | The directory to store the WAL files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.file_size` | String | `128MB` | The size of the WAL segment file.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_threshold` | String | `1GB` | The threshold of the WAL size to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_interval` | String | `1m` | The interval to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_threshold` | String | `1GB` | The threshold of the WAL size to trigger a purge.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_interval` | String | `1m` | The interval to trigger a purge.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.read_batch_size` | Integer | `128` | The read batch size.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.sync_write` | Bool | `false` | Whether to use sync write.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.enable_log_recycle` | Bool | `true` | Whether to reuse logically truncated log files.<br/>**It's only used when the provider is `raft_engine`**. |
@@ -598,3 +598,5 @@
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `query` | -- | -- | -- |
| `query.parallelism` | Integer | `1` | Parallelism of the query engine for query sent by flownode.<br/>Default to 1, so it won't use too much cpu or memory |

View File

@@ -129,11 +129,11 @@ dir = "./greptimedb_data/wal"
## **It's only used when the provider is `raft_engine`**.
file_size = "128MB"
## The threshold of the WAL size to trigger a flush.
## The threshold of the WAL size to trigger a purge.
## **It's only used when the provider is `raft_engine`**.
purge_threshold = "1GB"
## The interval to trigger a flush.
## The interval to trigger a purge.
## **It's only used when the provider is `raft_engine`**.
purge_interval = "1m"

View File

@@ -108,3 +108,8 @@ default_ratio = 1.0
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
[query]
## Parallelism of the query engine for query sent by flownode.
## Default to 1, so it won't use too much cpu or memory
parallelism = 1

View File

@@ -43,11 +43,9 @@ use_memory_store = false
## - Using shared storage (e.g., s3).
enable_region_failover = false
## Delay before initializing region failure detectors.
## This delay helps prevent premature initialization of region failure detectors in cases where
## cluster maintenance mode is enabled right after metasrv starts, especially when the cluster
## is not deployed via the recommended GreptimeDB Operator. Without this delay, early detector registration
## may trigger unnecessary region failovers during datanode startup.
## The delay before starting region failure detection.
## This delay helps prevent Metasrv from triggering unnecessary region failovers before all Datanodes are fully started.
## Especially useful when the cluster is not deployed with GreptimeDB Operator and maintenance mode is not enabled.
region_failure_detector_initialization_delay = '10m'
## Whether to allow region failover on local WAL.

View File

@@ -55,12 +55,25 @@ async function main() {
await client.rest.issues.addLabels({
owner, repo, issue_number: number, labels: [labelDocsRequired],
})
// Get available assignees for the docs repo
const assigneesResponse = await docsClient.rest.issues.listAssignees({
owner: 'GreptimeTeam',
repo: 'docs',
})
const validAssignees = assigneesResponse.data.map(assignee => assignee.login)
core.info(`Available assignees: ${validAssignees.join(', ')}`)
// Check if the actor is a valid assignee, otherwise fallback to fengjiachun
const assignee = validAssignees.includes(actor) ? actor : 'fengjiachun'
core.info(`Assigning issue to: ${assignee}`)
await docsClient.rest.issues.create({
owner: 'GreptimeTeam',
repo: 'docs',
title: `Update docs for ${title}`,
body: `A document change request is generated from ${html_url}`,
assignee: actor,
assignee: assignee,
}).then((res) => {
core.info(`Created issue ${res.data}`)
})

View File

@@ -48,4 +48,4 @@ Please refer to [SQL query](./query.sql) for GreptimeDB and Clickhouse, and [que
## Addition
- You can tune GreptimeDB's configuration to get better performance.
- You can setup GreptimeDB to use S3 as storage, see [here](https://docs.greptime.com/user-guide/deployments/configuration#storage-options).
- You can setup GreptimeDB to use S3 as storage, see [here](https://docs.greptime.com/user-guide/deployments-administration/configuration#storage-options).

View File

@@ -83,7 +83,7 @@ If you use the [Helm Chart](https://github.com/GreptimeTeam/helm-charts) to depl
- `monitoring.enabled=true`: Deploys a standalone GreptimeDB instance dedicated to monitoring the cluster;
- `grafana.enabled=true`: Deploys Grafana and automatically imports the monitoring dashboard;
The standalone GreptimeDB instance will collect metrics from your cluster, and the dashboard will be available in the Grafana UI. For detailed deployment instructions, please refer to our [Kubernetes deployment guide](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/getting-started).
The standalone GreptimeDB instance will collect metrics from your cluster, and the dashboard will be available in the Grafana UI. For detailed deployment instructions, please refer to our [Kubernetes deployment guide](https://docs.greptime.com/user-guide/deployments-administration-administration/deploy-on-kubernetes/getting-started).
### Self-host Prometheus and import dashboards manually

View File

@@ -34,6 +34,7 @@ excludes = [
"src/sql/src/statements/drop/trigger.rs",
"src/sql/src/parsers/create_parser/trigger.rs",
"src/sql/src/parsers/show_parser/trigger.rs",
"src/mito2/src/extension.rs",
]
[properties]

View File

@@ -226,18 +226,20 @@ mod tests {
assert!(options.is_none());
let mut schema = ColumnSchema::new("test", ConcreteDataType::string_datatype(), true)
.with_fulltext_options(FulltextOptions {
enable: true,
analyzer: FulltextAnalyzer::English,
case_sensitive: false,
backend: FulltextBackend::Bloom,
})
.with_fulltext_options(FulltextOptions::new_unchecked(
true,
FulltextAnalyzer::English,
false,
FulltextBackend::Bloom,
10240,
0.01,
))
.unwrap();
schema.set_inverted_index(true);
let options = options_from_column_schema(&schema).unwrap();
assert_eq!(
options.options.get(FULLTEXT_GRPC_KEY).unwrap(),
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\"}"
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":10240,\"false-positive-rate-in-10000\":100}"
);
assert_eq!(
options.options.get(INVERTED_INDEX_GRPC_KEY).unwrap(),
@@ -247,16 +249,18 @@ mod tests {
#[test]
fn test_options_with_fulltext() {
let fulltext = FulltextOptions {
enable: true,
analyzer: FulltextAnalyzer::English,
case_sensitive: false,
backend: FulltextBackend::Bloom,
};
let fulltext = FulltextOptions::new_unchecked(
true,
FulltextAnalyzer::English,
false,
FulltextBackend::Bloom,
10240,
0.01,
);
let options = options_from_fulltext(&fulltext).unwrap().unwrap();
assert_eq!(
options.options.get(FULLTEXT_GRPC_KEY).unwrap(),
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\"}"
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":10240,\"false-positive-rate-in-10000\":100}"
);
}

View File

@@ -5,6 +5,7 @@ edition.workspace = true
license.workspace = true
[features]
enterprise = []
testing = []
[lints]

View File

@@ -14,9 +14,11 @@
pub use client::{CachedKvBackend, CachedKvBackendBuilder, MetaKvBackend};
mod builder;
mod client;
mod manager;
mod table_cache;
pub use builder::KvBackendCatalogManagerBuilder;
pub use manager::KvBackendCatalogManager;
pub use table_cache::{new_table_cache, TableCache, TableCacheRef};

View File

@@ -0,0 +1,131 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use common_catalog::consts::DEFAULT_CATALOG_NAME;
use common_meta::cache::LayeredCacheRegistryRef;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::TableMetadataManager;
use common_meta::kv_backend::KvBackendRef;
use common_procedure::ProcedureManagerRef;
use moka::sync::Cache;
use partition::manager::PartitionRuleManager;
#[cfg(feature = "enterprise")]
use crate::information_schema::InformationSchemaTableFactoryRef;
use crate::information_schema::{InformationExtensionRef, InformationSchemaProvider};
use crate::kvbackend::manager::{SystemCatalog, CATALOG_CACHE_MAX_CAPACITY};
use crate::kvbackend::KvBackendCatalogManager;
use crate::process_manager::ProcessManagerRef;
use crate::system_schema::pg_catalog::PGCatalogProvider;
pub struct KvBackendCatalogManagerBuilder {
information_extension: InformationExtensionRef,
backend: KvBackendRef,
cache_registry: LayeredCacheRegistryRef,
procedure_manager: Option<ProcedureManagerRef>,
process_manager: Option<ProcessManagerRef>,
#[cfg(feature = "enterprise")]
extra_information_table_factories:
std::collections::HashMap<String, InformationSchemaTableFactoryRef>,
}
impl KvBackendCatalogManagerBuilder {
pub fn new(
information_extension: InformationExtensionRef,
backend: KvBackendRef,
cache_registry: LayeredCacheRegistryRef,
) -> Self {
Self {
information_extension,
backend,
cache_registry,
procedure_manager: None,
process_manager: None,
#[cfg(feature = "enterprise")]
extra_information_table_factories: std::collections::HashMap::new(),
}
}
pub fn with_procedure_manager(mut self, procedure_manager: ProcedureManagerRef) -> Self {
self.procedure_manager = Some(procedure_manager);
self
}
pub fn with_process_manager(mut self, process_manager: ProcessManagerRef) -> Self {
self.process_manager = Some(process_manager);
self
}
/// Sets the extra information tables.
#[cfg(feature = "enterprise")]
pub fn with_extra_information_table_factories(
mut self,
factories: std::collections::HashMap<String, InformationSchemaTableFactoryRef>,
) -> Self {
self.extra_information_table_factories = factories;
self
}
pub fn build(self) -> Arc<KvBackendCatalogManager> {
let Self {
information_extension,
backend,
cache_registry,
procedure_manager,
process_manager,
#[cfg(feature = "enterprise")]
extra_information_table_factories,
} = self;
Arc::new_cyclic(|me| KvBackendCatalogManager {
information_extension,
partition_manager: Arc::new(PartitionRuleManager::new(
backend.clone(),
cache_registry
.get()
.expect("Failed to get table_route_cache"),
)),
table_metadata_manager: Arc::new(TableMetadataManager::new(backend.clone())),
system_catalog: SystemCatalog {
catalog_manager: me.clone(),
catalog_cache: Cache::new(CATALOG_CACHE_MAX_CAPACITY),
pg_catalog_cache: Cache::new(CATALOG_CACHE_MAX_CAPACITY),
information_schema_provider: {
let provider = InformationSchemaProvider::new(
DEFAULT_CATALOG_NAME.to_string(),
me.clone(),
Arc::new(FlowMetadataManager::new(backend.clone())),
process_manager.clone(),
backend.clone(),
);
#[cfg(feature = "enterprise")]
let provider = provider
.with_extra_table_factories(extra_information_table_factories.clone());
Arc::new(provider)
},
pg_catalog_provider: Arc::new(PGCatalogProvider::new(
DEFAULT_CATALOG_NAME.to_string(),
me.clone(),
)),
backend,
process_manager,
#[cfg(feature = "enterprise")]
extra_information_table_factories,
},
cache_registry,
procedure_manager,
})
}
}

View File

@@ -28,17 +28,18 @@ use common_meta::cache::{
use common_meta::key::catalog_name::CatalogNameKey;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::schema_name::SchemaNameKey;
use common_meta::key::table_info::TableInfoValue;
use common_meta::key::table_info::{TableInfoManager, TableInfoValue};
use common_meta::key::table_name::TableNameKey;
use common_meta::key::{TableMetadataManager, TableMetadataManagerRef};
use common_meta::key::TableMetadataManagerRef;
use common_meta::kv_backend::KvBackendRef;
use common_procedure::ProcedureManagerRef;
use futures_util::stream::BoxStream;
use futures_util::{StreamExt, TryStreamExt};
use moka::sync::Cache;
use partition::manager::{PartitionRuleManager, PartitionRuleManagerRef};
use partition::manager::PartitionRuleManagerRef;
use session::context::{Channel, QueryContext};
use snafu::prelude::*;
use store_api::metric_engine_consts::METRIC_ENGINE_NAME;
use table::dist_table::DistTable;
use table::metadata::TableId;
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
@@ -51,6 +52,8 @@ use crate::error::{
CacheNotFoundSnafu, GetTableCacheSnafu, InvalidTableInfoInCatalogSnafu, ListCatalogsSnafu,
ListSchemasSnafu, ListTablesSnafu, Result, TableMetadataManagerSnafu,
};
#[cfg(feature = "enterprise")]
use crate::information_schema::InformationSchemaTableFactoryRef;
use crate::information_schema::{InformationExtensionRef, InformationSchemaProvider};
use crate::kvbackend::TableCacheRef;
use crate::process_manager::ProcessManagerRef;
@@ -66,60 +69,22 @@ use crate::CatalogManager;
#[derive(Clone)]
pub struct KvBackendCatalogManager {
/// Provides the extension methods for the `information_schema` tables
information_extension: InformationExtensionRef,
pub(super) information_extension: InformationExtensionRef,
/// Manages partition rules.
partition_manager: PartitionRuleManagerRef,
pub(super) partition_manager: PartitionRuleManagerRef,
/// Manages table metadata.
table_metadata_manager: TableMetadataManagerRef,
pub(super) table_metadata_manager: TableMetadataManagerRef,
/// A sub-CatalogManager that handles system tables
system_catalog: SystemCatalog,
pub(super) system_catalog: SystemCatalog,
/// Cache registry for all caches.
cache_registry: LayeredCacheRegistryRef,
pub(super) cache_registry: LayeredCacheRegistryRef,
/// Only available in `Standalone` mode.
procedure_manager: Option<ProcedureManagerRef>,
pub(super) procedure_manager: Option<ProcedureManagerRef>,
}
const CATALOG_CACHE_MAX_CAPACITY: u64 = 128;
pub(super) const CATALOG_CACHE_MAX_CAPACITY: u64 = 128;
impl KvBackendCatalogManager {
pub fn new(
information_extension: InformationExtensionRef,
backend: KvBackendRef,
cache_registry: LayeredCacheRegistryRef,
procedure_manager: Option<ProcedureManagerRef>,
process_manager: Option<ProcessManagerRef>,
) -> Arc<Self> {
Arc::new_cyclic(|me| Self {
information_extension,
partition_manager: Arc::new(PartitionRuleManager::new(
backend.clone(),
cache_registry
.get()
.expect("Failed to get table_route_cache"),
)),
table_metadata_manager: Arc::new(TableMetadataManager::new(backend.clone())),
system_catalog: SystemCatalog {
catalog_manager: me.clone(),
catalog_cache: Cache::new(CATALOG_CACHE_MAX_CAPACITY),
pg_catalog_cache: Cache::new(CATALOG_CACHE_MAX_CAPACITY),
information_schema_provider: Arc::new(InformationSchemaProvider::new(
DEFAULT_CATALOG_NAME.to_string(),
me.clone(),
Arc::new(FlowMetadataManager::new(backend.clone())),
process_manager.clone(),
)),
pg_catalog_provider: Arc::new(PGCatalogProvider::new(
DEFAULT_CATALOG_NAME.to_string(),
me.clone(),
)),
backend,
process_manager,
},
cache_registry,
procedure_manager,
})
}
pub fn view_info_cache(&self) -> Result<ViewInfoCacheRef> {
self.cache_registry.get().context(CacheNotFoundSnafu {
name: "view_info_cache",
@@ -142,6 +107,61 @@ impl KvBackendCatalogManager {
pub fn procedure_manager(&self) -> Option<ProcedureManagerRef> {
self.procedure_manager.clone()
}
// Override logical table's partition key indices with physical table's.
async fn override_logical_table_partition_key_indices(
table_route_cache: &TableRouteCacheRef,
table_info_manager: &TableInfoManager,
table: TableRef,
) -> Result<TableRef> {
// If the table is not a metric table, return the table directly.
if table.table_info().meta.engine != METRIC_ENGINE_NAME {
return Ok(table);
}
if let Some(table_route_value) = table_route_cache
.get(table.table_info().table_id())
.await
.context(TableMetadataManagerSnafu)?
&& let TableRoute::Logical(logical_route) = &*table_route_value
&& let Some(physical_table_info_value) = table_info_manager
.get(logical_route.physical_table_id())
.await
.context(TableMetadataManagerSnafu)?
{
let mut new_table_info = (*table.table_info()).clone();
// Remap partition key indices from physical table to logical table
new_table_info.meta.partition_key_indices = physical_table_info_value
.table_info
.meta
.partition_key_indices
.iter()
.filter_map(|&physical_index| {
// Get the column name from the physical table using the physical index
physical_table_info_value
.table_info
.meta
.schema
.column_schemas
.get(physical_index)
.and_then(|physical_column| {
// Find the corresponding index in the logical table schema
new_table_info
.meta
.schema
.column_index_by_name(physical_column.name.as_str())
})
})
.collect();
let new_table = DistTable::table(Arc::new(new_table_info));
return Ok(new_table);
}
Ok(table)
}
}
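For readers tracing the remapping in `override_logical_table_partition_key_indices`, a small worked example with illustrative column layouts (the values are invented, not taken from the diff):

// Physical table columns:  ["ts", "host", "idc", "value"], partition_key_indices = [1, 2] (host, idc).
// Logical table columns:   ["ts", "host", "value"].
// For each physical partition key index, the code looks up the column name on the physical table
// and searches for that name in the logical schema:
//   index 1 -> "host" -> found at logical index 1 (kept)
//   index 2 -> "idc"  -> not present in the logical schema (dropped by the `filter_map`)
// Result: the logical table's partition_key_indices become [1].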
#[async_trait::async_trait]
@@ -268,10 +288,7 @@ impl CatalogManager for KvBackendCatalogManager {
let table_cache: TableCacheRef = self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_cache",
})?;
let table_route_cache: TableRouteCacheRef =
self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_route_cache",
})?;
let table = table_cache
.get_by_ref(&TableName {
catalog_name: catalog_name.to_string(),
@@ -281,55 +298,18 @@ impl CatalogManager for KvBackendCatalogManager {
.await
.context(GetTableCacheSnafu)?;
// Override logical table's partition key indices with physical table's.
if let Some(table) = &table
&& let Some(table_route_value) = table_route_cache
.get(table.table_info().table_id())
.await
.context(TableMetadataManagerSnafu)?
&& let TableRoute::Logical(logical_route) = &*table_route_value
&& let Some(physical_table_info_value) = self
.table_metadata_manager
.table_info_manager()
.get(logical_route.physical_table_id())
.await
.context(TableMetadataManagerSnafu)?
{
let mut new_table_info = (*table.table_info()).clone();
// Gather all column names from the logical table
let logical_column_names: std::collections::HashSet<_> = new_table_info
.meta
.schema
.column_schemas()
.iter()
.map(|col| &col.name)
.collect();
// Only preserve partition key indices where the corresponding columns exist in logical table
new_table_info.meta.partition_key_indices = physical_table_info_value
.table_info
.meta
.partition_key_indices
.iter()
.filter(|&&index| {
if let Some(physical_column) = physical_table_info_value
.table_info
.meta
.schema
.column_schemas
.get(index)
{
logical_column_names.contains(&physical_column.name)
} else {
false
}
})
.cloned()
.collect();
let new_table = DistTable::table(Arc::new(new_table_info));
return Ok(Some(new_table));
if let Some(table) = table {
let table_route_cache: TableRouteCacheRef =
self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_route_cache",
})?;
return Self::override_logical_table_partition_key_indices(
&table_route_cache,
self.table_metadata_manager.table_info_manager(),
table,
)
.await
.map(Some);
}
if channel == Channel::Postgres {
@@ -342,7 +322,7 @@ impl CatalogManager for KvBackendCatalogManager {
}
}
Ok(table)
Ok(None)
}
async fn tables_by_ids(
@@ -394,8 +374,20 @@ impl CatalogManager for KvBackendCatalogManager {
let catalog = catalog.to_string();
let schema = schema.to_string();
let semaphore = Arc::new(Semaphore::new(CONCURRENCY));
let table_route_cache: Result<TableRouteCacheRef> =
self.cache_registry.get().context(CacheNotFoundSnafu {
name: "table_route_cache",
});
common_runtime::spawn_global(async move {
let table_route_cache = match table_route_cache {
Ok(table_route_cache) => table_route_cache,
Err(e) => {
let _ = tx.send(Err(e)).await;
return;
}
};
let table_id_stream = metadata_manager
.table_name_manager()
.tables(&catalog, &schema)
@@ -422,6 +414,7 @@ impl CatalogManager for KvBackendCatalogManager {
let metadata_manager = metadata_manager.clone();
let tx = tx.clone();
let semaphore = semaphore.clone();
let table_route_cache = table_route_cache.clone();
common_runtime::spawn_global(async move {
// we don't explicitly close the semaphore so just ignore the potential error.
let _ = semaphore.acquire().await;
@@ -439,6 +432,16 @@ impl CatalogManager for KvBackendCatalogManager {
};
for table in table_info_values.into_values().map(build_table) {
let table = if let Ok(table) = table {
Self::override_logical_table_partition_key_indices(
&table_route_cache,
metadata_manager.table_info_manager(),
table,
)
.await
} else {
table
};
if tx.send(table).await.is_err() {
return;
}
@@ -468,16 +471,19 @@ fn build_table(table_info_value: TableInfoValue) -> Result<TableRef> {
/// - information_schema.{tables}
/// - pg_catalog.{tables}
#[derive(Clone)]
struct SystemCatalog {
catalog_manager: Weak<KvBackendCatalogManager>,
catalog_cache: Cache<String, Arc<InformationSchemaProvider>>,
pg_catalog_cache: Cache<String, Arc<PGCatalogProvider>>,
pub(super) struct SystemCatalog {
pub(super) catalog_manager: Weak<KvBackendCatalogManager>,
pub(super) catalog_cache: Cache<String, Arc<InformationSchemaProvider>>,
pub(super) pg_catalog_cache: Cache<String, Arc<PGCatalogProvider>>,
// system_schema_provider for default catalog
information_schema_provider: Arc<InformationSchemaProvider>,
pg_catalog_provider: Arc<PGCatalogProvider>,
backend: KvBackendRef,
process_manager: Option<ProcessManagerRef>,
pub(super) information_schema_provider: Arc<InformationSchemaProvider>,
pub(super) pg_catalog_provider: Arc<PGCatalogProvider>,
pub(super) backend: KvBackendRef,
pub(super) process_manager: Option<ProcessManagerRef>,
#[cfg(feature = "enterprise")]
pub(super) extra_information_table_factories:
std::collections::HashMap<String, InformationSchemaTableFactoryRef>,
}
impl SystemCatalog {
@@ -541,12 +547,17 @@ impl SystemCatalog {
if schema == INFORMATION_SCHEMA_NAME {
let information_schema_provider =
self.catalog_cache.get_with_by_ref(catalog, move || {
Arc::new(InformationSchemaProvider::new(
let provider = InformationSchemaProvider::new(
catalog.to_string(),
self.catalog_manager.clone(),
Arc::new(FlowMetadataManager::new(self.backend.clone())),
self.process_manager.clone(),
))
self.backend.clone(),
);
#[cfg(feature = "enterprise")]
let provider = provider
.with_extra_table_factories(self.extra_information_table_factories.clone());
Arc::new(provider)
});
information_schema_provider.table(table_name)
} else if schema == PG_CATALOG_NAME && channel == Channel::Postgres {

View File

@@ -352,11 +352,13 @@ impl MemoryCatalogManager {
}
fn create_catalog_entry(self: &Arc<Self>, catalog: String) -> SchemaEntries {
let backend = Arc::new(MemoryKvBackend::new());
let information_schema_provider = InformationSchemaProvider::new(
catalog,
Arc::downgrade(self) as Weak<dyn CatalogManager>,
Arc::new(FlowMetadataManager::new(Arc::new(MemoryKvBackend::new()))),
Arc::new(FlowMetadataManager::new(backend.clone())),
None, // we don't need ProcessManager on regions server.
backend,
);
let information_schema = information_schema_provider.tables().clone();

View File

@@ -21,7 +21,7 @@ use std::sync::{Arc, RwLock};
use api::v1::frontend::{KillProcessRequest, ListProcessRequest, ProcessInfo};
use common_base::cancellation::CancellationHandle;
use common_frontend::selector::{FrontendSelector, MetaClientSelector};
use common_telemetry::{debug, info};
use common_telemetry::{debug, info, warn};
use common_time::util::current_time_millis;
use meta_client::MetaClientRef;
use snafu::{ensure, OptionExt, ResultExt};
@@ -141,14 +141,20 @@ impl ProcessManager {
.await
.context(error::InvokeFrontendSnafu)?;
for mut f in frontends {
processes.extend(
f.list_process(ListProcessRequest {
let result = f
.list_process(ListProcessRequest {
catalog: catalog.unwrap_or_default().to_string(),
})
.await
.context(error::InvokeFrontendSnafu)?
.processes,
);
.context(error::InvokeFrontendSnafu);
match result {
Ok(resp) => {
processes.extend(resp.processes);
}
Err(e) => {
warn!(e; "Skipping failing node: {:?}", f)
}
}
}
}
processes.extend(self.local_processes(catalog)?);

View File

@@ -15,7 +15,7 @@
pub mod information_schema;
mod memory_table;
pub mod pg_catalog;
mod predicate;
pub mod predicate;
mod utils;
use std::collections::HashMap;
@@ -96,7 +96,7 @@ trait SystemSchemaProviderInner {
}
}
pub(crate) trait SystemTable {
pub trait SystemTable {
fn table_id(&self) -> TableId;
fn table_name(&self) -> &'static str;
@@ -110,7 +110,7 @@ pub(crate) trait SystemTable {
}
}
pub(crate) type SystemTableRef = Arc<dyn SystemTable + Send + Sync>;
pub type SystemTableRef = Arc<dyn SystemTable + Send + Sync>;
struct SystemTableDataSource {
table: SystemTableRef,

View File

@@ -38,6 +38,7 @@ use common_meta::cluster::NodeInfo;
use common_meta::datanode::RegionStat;
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::kv_backend::KvBackendRef;
use common_procedure::ProcedureInfo;
use common_recordbatch::SendableRecordBatchStream;
use datatypes::schema::SchemaRef;
@@ -112,6 +113,25 @@ macro_rules! setup_memory_table {
};
}
#[cfg(feature = "enterprise")]
pub struct MakeInformationTableRequest {
pub catalog_name: String,
pub catalog_manager: Weak<dyn CatalogManager>,
pub kv_backend: KvBackendRef,
}
/// A factory trait for making information schema tables.
///
/// This trait allows for extensibility of the information schema by providing
/// a way to dynamically create custom information schema tables.
#[cfg(feature = "enterprise")]
pub trait InformationSchemaTableFactory {
fn make_information_table(&self, req: MakeInformationTableRequest) -> SystemTableRef;
}
#[cfg(feature = "enterprise")]
pub type InformationSchemaTableFactoryRef = Arc<dyn InformationSchemaTableFactory + Send + Sync>;
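A minimal sketch of how a downstream enterprise crate might implement this factory; `MyTriggerTableFactory` and `build_trigger_list_table` are hypothetical names invented for illustration, and building the concrete `SystemTable` is out of scope here:

#[cfg(feature = "enterprise")]
struct MyTriggerTableFactory;

#[cfg(feature = "enterprise")]
impl InformationSchemaTableFactory for MyTriggerTableFactory {
    fn make_information_table(&self, req: MakeInformationTableRequest) -> SystemTableRef {
        // Hypothetical helper: assembles a `SystemTable` from the catalog name,
        // catalog manager and kv backend carried by the request.
        build_trigger_list_table(req.catalog_name, req.catalog_manager, req.kv_backend)
    }
}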
/// The `information_schema` tables info provider.
pub struct InformationSchemaProvider {
catalog_name: String,
@@ -119,6 +139,10 @@ pub struct InformationSchemaProvider {
process_manager: Option<ProcessManagerRef>,
flow_metadata_manager: Arc<FlowMetadataManager>,
tables: HashMap<String, TableRef>,
#[allow(dead_code)]
kv_backend: KvBackendRef,
#[cfg(feature = "enterprise")]
extra_table_factories: HashMap<String, InformationSchemaTableFactoryRef>,
}
impl SystemSchemaProvider for InformationSchemaProvider {
@@ -128,6 +152,7 @@ impl SystemSchemaProvider for InformationSchemaProvider {
&self.tables
}
}
impl SystemSchemaProviderInner for InformationSchemaProvider {
fn catalog_name(&self) -> &str {
&self.catalog_name
@@ -215,7 +240,22 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
.process_manager
.as_ref()
.map(|p| Arc::new(InformationSchemaProcessList::new(p.clone())) as _),
_ => None,
table_name => {
#[cfg(feature = "enterprise")]
return self.extra_table_factories.get(table_name).map(|factory| {
let req = MakeInformationTableRequest {
catalog_name: self.catalog_name.clone(),
catalog_manager: self.catalog_manager.clone(),
kv_backend: self.kv_backend.clone(),
};
factory.make_information_table(req)
});
#[cfg(not(feature = "enterprise"))]
{
let _ = table_name;
None
}
}
}
}
}
@@ -226,6 +266,7 @@ impl InformationSchemaProvider {
catalog_manager: Weak<dyn CatalogManager>,
flow_metadata_manager: Arc<FlowMetadataManager>,
process_manager: Option<ProcessManagerRef>,
kv_backend: KvBackendRef,
) -> Self {
let mut provider = Self {
catalog_name,
@@ -233,6 +274,9 @@ impl InformationSchemaProvider {
flow_metadata_manager,
process_manager,
tables: HashMap::new(),
kv_backend,
#[cfg(feature = "enterprise")]
extra_table_factories: HashMap::new(),
};
provider.build_tables();
@@ -240,6 +284,16 @@ impl InformationSchemaProvider {
provider
}
#[cfg(feature = "enterprise")]
pub(crate) fn with_extra_table_factories(
mut self,
factories: HashMap<String, InformationSchemaTableFactoryRef>,
) -> Self {
self.extra_table_factories = factories;
self.build_tables();
self
}
fn build_tables(&mut self) {
let mut tables = HashMap::new();
@@ -290,16 +344,19 @@ impl InformationSchemaProvider {
if let Some(process_list) = self.build_table(PROCESS_LIST) {
tables.insert(PROCESS_LIST.to_string(), process_list);
}
#[cfg(feature = "enterprise")]
for name in self.extra_table_factories.keys() {
tables.insert(name.to_string(), self.build_table(name).expect(name));
}
// Add memory tables
for name in MEMORY_TABLES.iter() {
tables.insert((*name).to_string(), self.build_table(name).expect(name));
}
self.tables = tables;
}
}
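For completeness, a hedged sketch of how a factory map keyed by table name (here the newly added `TRIGGER_LIST` constant) could be fed into the crate-internal `with_extra_table_factories`; the constructor arguments are assumed to be in scope, `MyTriggerTableFactory` is the hypothetical implementation sketched above, and in practice the factories arrive through the catalog manager builder and plugins, as the CLI changes further below show:

use std::collections::HashMap;
use std::sync::Arc;

let mut factories: HashMap<String, InformationSchemaTableFactoryRef> = HashMap::new();
factories.insert(
    TRIGGER_LIST.to_string(),
    Arc::new(MyTriggerTableFactory) as InformationSchemaTableFactoryRef,
);

// `with_extra_table_factories` rebuilds the table map, so the extra table shows up
// alongside the built-in information_schema tables.
let provider = InformationSchemaProvider::new(
    catalog_name,
    catalog_manager,
    flow_metadata_manager,
    process_manager,
    kv_backend,
)
.with_extra_table_factories(factories);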
trait InformationTable {
pub trait InformationTable {
fn table_id(&self) -> TableId;
fn table_name(&self) -> &'static str;

View File

@@ -48,3 +48,4 @@ pub const FLOWS: &str = "flows";
pub const PROCEDURE_INFO: &str = "procedure_info";
pub const REGION_STATISTICS: &str = "region_statistics";
pub const PROCESS_LIST: &str = "process_list";
pub const TRIGGER_LIST: &str = "trigger_list";

View File

@@ -207,6 +207,7 @@ mod tests {
use session::context::QueryContext;
use super::*;
use crate::kvbackend::KvBackendCatalogManagerBuilder;
use crate::memory::MemoryCatalogManager;
#[test]
@@ -323,13 +324,13 @@ mod tests {
.build(),
);
let catalog_manager = KvBackendCatalogManager::new(
let catalog_manager = KvBackendCatalogManagerBuilder::new(
Arc::new(NoopInformationExtension),
backend.clone(),
layered_cache_registry,
None,
None,
);
)
.build();
let table_metadata_manager = TableMetadataManager::new(backend);
let mut view_info = common_meta::key::test_utils::new_test_table_info(1024, vec![]);
view_info.table_type = TableType::View;

View File

@@ -160,6 +160,7 @@ fn create_table_info(table_id: TableId, table_name: TableName) -> RawTableInfo {
options: Default::default(),
region_numbers: (1..=100).collect(),
partition_key_indices: vec![],
column_ids: vec![],
};
RawTableInfo {

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![allow(clippy::print_stdout)]
mod bench;
mod data;
mod database;

View File

@@ -301,7 +301,6 @@ struct MetaInfoTool {
#[async_trait]
impl Tool for MetaInfoTool {
#[allow(clippy::print_stdout)]
async fn do_work(&self) -> std::result::Result<(), BoxedError> {
let result = MetadataSnapshotManager::info(
&self.inner,

View File

@@ -31,7 +31,7 @@ use base64::prelude::BASE64_STANDARD;
use base64::Engine;
use common_catalog::build_db_string;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::ext::{BoxedError, ErrorExt};
use common_error::ext::BoxedError;
use common_grpc::flight::do_put::DoPutResponse;
use common_grpc::flight::{FlightDecoder, FlightMessage};
use common_query::Output;
@@ -48,7 +48,7 @@ use tonic::transport::Channel;
use crate::error::{
ConvertFlightDataSnafu, Error, FlightGetSnafu, IllegalFlightMessagesSnafu,
InvalidTonicMetadataValueSnafu, ServerSnafu,
InvalidTonicMetadataValueSnafu,
};
use crate::{error, from_grpc_response, Client, Result};
@@ -196,12 +196,22 @@ impl Database {
/// Retries if the connection fails. `max_retries` is the maximum number of retries, so the total
/// wait time is `max_retries * GRPC_CONN_TIMEOUT`.
pub async fn handle_with_retry(&self, request: Request, max_retries: u32) -> Result<u32> {
pub async fn handle_with_retry(
&self,
request: Request,
max_retries: u32,
hints: &[(&str, &str)],
) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let mut retries = 0;
let request = self.to_rpc_request(request);
loop {
let raw_response = client.handle(request.clone()).await;
let mut tonic_request = tonic::Request::new(request.clone());
let metadata = tonic_request.metadata_mut();
Self::put_hints(metadata, hints)?;
let raw_response = client.handle(tonic_request).await;
match (raw_response, retries < max_retries) {
(Ok(resp), _) => return from_grpc_response(resp.into_inner()),
(Err(err), true) => {
@@ -292,21 +302,16 @@ impl Database {
let response = client.mut_inner().do_get(request).await.or_else(|e| {
let tonic_code = e.code();
let e: Error = e.into();
let code = e.status_code();
let msg = e.to_string();
let error =
Err(BoxedError::new(ServerSnafu { code, msg }.build())).with_context(|_| {
FlightGetSnafu {
addr: client.addr().to_string(),
tonic_code,
}
});
error!(
"Failed to do Flight get, addr: {}, code: {}, source: {:?}",
client.addr(),
tonic_code,
error
e
);
let error = Err(BoxedError::new(e)).with_context(|_| FlightGetSnafu {
addr: client.addr().to_string(),
tonic_code,
});
error
})?;
@@ -436,8 +441,11 @@ mod tests {
use api::v1::auth_header::AuthScheme;
use api::v1::{AuthHeader, Basic};
use common_error::status_code::StatusCode;
use tonic::{Code, Status};
use super::*;
use crate::error::TonicSnafu;
#[test]
fn test_flight_ctx() {
@@ -460,4 +468,19 @@ mod tests {
})
)
}
#[test]
fn test_from_tonic_status() {
let expected = TonicSnafu {
code: StatusCode::Internal,
msg: "blabla".to_string(),
tonic_code: Code::Internal,
}
.build();
let status = Status::new(Code::Internal, "blabla");
let actual: Error = status.into();
assert_eq!(expected.to_string(), actual.to_string());
}
}

View File

@@ -14,13 +14,13 @@
use std::any::Any;
use common_error::define_from_tonic_status;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::status_code::{convert_tonic_code_to_status_code, StatusCode};
use common_error::{GREPTIME_DB_HEADER_ERROR_CODE, GREPTIME_DB_HEADER_ERROR_MSG};
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use snafu::{location, Location, Snafu};
use tonic::metadata::errors::InvalidMetadataValue;
use tonic::{Code, Status};
use tonic::Code;
#[derive(Snafu)]
#[snafu(visibility(pub))]
@@ -124,6 +124,15 @@ pub enum Error {
location: Location,
source: datatypes::error::Error,
},
#[snafu(display("{}", msg))]
Tonic {
code: StatusCode,
msg: String,
tonic_code: Code,
#[snafu(implicit)]
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -135,7 +144,7 @@ impl ErrorExt for Error {
| Error::MissingField { .. }
| Error::IllegalDatabaseResponse { .. } => StatusCode::Internal,
Error::Server { code, .. } => *code,
Error::Server { code, .. } | Error::Tonic { code, .. } => *code,
Error::FlightGet { source, .. }
| Error::RegionServer { source, .. }
| Error::FlowServer { source, .. } => source.status_code(),
@@ -153,34 +162,7 @@ impl ErrorExt for Error {
}
}
impl From<Status> for Error {
fn from(e: Status) -> Self {
fn get_metadata_value(e: &Status, key: &str) -> Option<String> {
e.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
let code = get_metadata_value(&e, GREPTIME_DB_HEADER_ERROR_CODE).and_then(|s| {
if let Ok(code) = s.parse::<u32>() {
StatusCode::from_u32(code)
} else {
None
}
});
let tonic_code = e.code();
let code = code.unwrap_or_else(|| convert_tonic_code_to_status_code(tonic_code));
let msg = get_metadata_value(&e, GREPTIME_DB_HEADER_ERROR_MSG)
.unwrap_or_else(|| e.message().to_string());
Self::Server {
code,
msg,
location: location!(),
}
}
}
define_from_tonic_status!(Error, Tonic);
impl Error {
pub fn should_retry(&self) -> bool {

View File

@@ -21,7 +21,7 @@ use arc_swap::ArcSwapOption;
use arrow_flight::Ticket;
use async_stream::stream;
use async_trait::async_trait;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::ext::BoxedError;
use common_error::status_code::StatusCode;
use common_grpc::flight::{FlightDecoder, FlightMessage};
use common_meta::error::{self as meta_error, Result as MetaResult};
@@ -107,24 +107,18 @@ impl RegionRequester {
.mut_inner()
.do_get(ticket)
.await
.map_err(|e| {
.or_else(|e| {
let tonic_code = e.code();
let e: error::Error = e.into();
let code = e.status_code();
let msg = e.to_string();
let error = ServerSnafu { code, msg }
.fail::<()>()
.map_err(BoxedError::new)
.with_context(|_| FlightGetSnafu {
tonic_code,
addr: flight_client.addr().to_string(),
})
.unwrap_err();
error!(
e; "Failed to do Flight get, addr: {}, code: {}",
flight_client.addr(),
tonic_code
);
let error = Err(BoxedError::new(e)).with_context(|_| FlightGetSnafu {
addr: flight_client.addr().to_string(),
tonic_code,
});
error
})?;

View File

@@ -16,7 +16,7 @@ default = [
"meta-srv/pg_kvbackend",
"meta-srv/mysql_kvbackend",
]
enterprise = ["common-meta/enterprise", "frontend/enterprise", "meta-srv/enterprise"]
enterprise = ["common-meta/enterprise", "frontend/enterprise", "meta-srv/enterprise", "catalog/enterprise"]
tokio-console = ["common-telemetry/tokio-console"]
[lints]

View File

@@ -18,7 +18,7 @@ use std::time::Duration;
use cache::{build_fundamental_cache_registry, with_default_composite_cache_registry};
use catalog::information_extension::DistributedInformationExtension;
use catalog::kvbackend::{CachedKvBackendBuilder, KvBackendCatalogManager, MetaKvBackend};
use catalog::kvbackend::{CachedKvBackendBuilder, KvBackendCatalogManagerBuilder, MetaKvBackend};
use clap::Parser;
use client::client_manager::NodeClients;
use common_base::Plugins;
@@ -342,13 +342,12 @@ impl StartCommand {
let information_extension =
Arc::new(DistributedInformationExtension::new(meta_client.clone()));
let catalog_manager = KvBackendCatalogManager::new(
let catalog_manager = KvBackendCatalogManagerBuilder::new(
information_extension,
cached_meta_backend.clone(),
layered_cache_registry.clone(),
None,
None,
);
)
.build();
let table_metadata_manager =
Arc::new(TableMetadataManager::new(cached_meta_backend.clone()));
@@ -371,8 +370,11 @@ impl StartCommand {
let flow_metadata_manager = Arc::new(FlowMetadataManager::new(cached_meta_backend.clone()));
let flow_auth_header = get_flow_auth_options(&opts).context(StartFlownodeSnafu)?;
let frontend_client =
FrontendClient::from_meta_client(meta_client.clone(), flow_auth_header);
let frontend_client = FrontendClient::from_meta_client(
meta_client.clone(),
flow_auth_header,
opts.query.clone(),
);
let frontend_client = Arc::new(frontend_client);
let flownode_builder = FlownodeBuilder::new(
opts.clone(),

View File

@@ -19,7 +19,7 @@ use std::time::Duration;
use async_trait::async_trait;
use cache::{build_fundamental_cache_registry, with_default_composite_cache_registry};
use catalog::information_extension::DistributedInformationExtension;
use catalog::kvbackend::{CachedKvBackendBuilder, KvBackendCatalogManager, MetaKvBackend};
use catalog::kvbackend::{CachedKvBackendBuilder, KvBackendCatalogManagerBuilder, MetaKvBackend};
use catalog::process_manager::ProcessManager;
use clap::Parser;
use client::client_manager::NodeClients;
@@ -350,13 +350,20 @@ impl StartCommand {
addrs::resolve_addr(&opts.grpc.bind_addr, Some(&opts.grpc.server_addr)),
Some(meta_client.clone()),
));
let catalog_manager = KvBackendCatalogManager::new(
let builder = KvBackendCatalogManagerBuilder::new(
information_extension,
cached_meta_backend.clone(),
layered_cache_registry.clone(),
None,
Some(process_manager.clone()),
);
)
.with_process_manager(process_manager.clone());
#[cfg(feature = "enterprise")]
let builder = if let Some(factories) = plugins.get() {
builder.with_extra_information_table_factories(factories)
} else {
builder
};
let catalog_manager = builder.build();
let executor = HandlerGroupExecutor::new(vec![
Arc::new(ParseMailboxMessageHandler),

View File

@@ -20,7 +20,7 @@ use std::{fs, path};
use async_trait::async_trait;
use cache::{build_fundamental_cache_registry, with_default_composite_cache_registry};
use catalog::information_schema::InformationExtension;
use catalog::kvbackend::KvBackendCatalogManager;
use catalog::kvbackend::KvBackendCatalogManagerBuilder;
use catalog::process_manager::ProcessManager;
use clap::Parser;
use client::api::v1::meta::RegionRole;
@@ -544,13 +544,20 @@ impl StartCommand {
));
let process_manager = Arc::new(ProcessManager::new(opts.grpc.server_addr.clone(), None));
let catalog_manager = KvBackendCatalogManager::new(
let builder = KvBackendCatalogManagerBuilder::new(
information_extension.clone(),
kv_backend.clone(),
layered_cache_registry.clone(),
Some(procedure_manager.clone()),
Some(process_manager.clone()),
);
)
.with_procedure_manager(procedure_manager.clone())
.with_process_manager(process_manager.clone());
#[cfg(feature = "enterprise")]
let builder = if let Some(factories) = plugins.get() {
builder.with_extra_information_table_factories(factories)
} else {
builder
};
let catalog_manager = builder.build();
let table_metadata_manager =
Self::create_table_metadata_manager(kv_backend.clone()).await?;
@@ -564,7 +571,7 @@ impl StartCommand {
// for standalone not use grpc, but get a handler to frontend grpc client without
// actually make a connection
let (frontend_client, frontend_instance_handler) =
FrontendClient::from_empty_grpc_handler();
FrontendClient::from_empty_grpc_handler(opts.query.clone());
let frontend_client = Arc::new(frontend_client);
let flow_builder = FlownodeBuilder::new(
flownode_options,

View File

@@ -23,12 +23,14 @@ use common_wal::config::raft_engine::RaftEngineConfig;
use common_wal::config::DatanodeWalConfig;
use datanode::config::{DatanodeOptions, RegionEngineConfig, StorageConfig};
use file_engine::config::EngineConfig as FileEngineConfig;
use flow::FlownodeOptions;
use frontend::frontend::FrontendOptions;
use meta_client::MetaClientOptions;
use meta_srv::metasrv::MetasrvOptions;
use meta_srv::selector::SelectorType;
use metric_engine::config::EngineConfig as MetricEngineConfig;
use mito2::config::MitoConfig;
use query::options::QueryOptions;
use servers::export_metrics::ExportMetricsOption;
use servers::grpc::GrpcOptions;
use servers::http::HttpOptions;
@@ -195,6 +197,57 @@ fn test_load_metasrv_example_config() {
similar_asserts::assert_eq!(options, expected);
}
#[test]
fn test_load_flownode_example_config() {
let example_config = common_test_util::find_workspace_path("config/flownode.example.toml");
let options =
GreptimeOptions::<FlownodeOptions>::load_layered_options(example_config.to_str(), "")
.unwrap();
let expected = GreptimeOptions::<FlownodeOptions> {
component: FlownodeOptions {
node_id: Some(14),
flow: Default::default(),
grpc: GrpcOptions {
bind_addr: "127.0.0.1:6800".to_string(),
server_addr: "127.0.0.1:6800".to_string(),
runtime_size: 2,
..Default::default()
},
logging: LoggingOptions {
dir: format!("{}/{}", DEFAULT_DATA_HOME, DEFAULT_LOGGING_DIR),
level: Some("info".to_string()),
otlp_endpoint: Some(DEFAULT_OTLP_HTTP_ENDPOINT.to_string()),
otlp_export_protocol: Some(common_telemetry::logging::OtlpExportProtocol::Http),
tracing_sample_ratio: Some(Default::default()),
..Default::default()
},
tracing: Default::default(),
heartbeat: Default::default(),
// the flownode deliberately uses a lower query parallelism
// to avoid overwhelming the frontend with too many queries
query: QueryOptions { parallelism: 1 },
meta_client: Some(MetaClientOptions {
metasrv_addrs: vec!["127.0.0.1:3002".to_string()],
timeout: Duration::from_secs(3),
heartbeat_timeout: Duration::from_millis(500),
ddl_timeout: Duration::from_secs(10),
connect_timeout: Duration::from_secs(1),
tcp_nodelay: true,
metadata_cache_max_capacity: 100000,
metadata_cache_ttl: Duration::from_secs(600),
metadata_cache_tti: Duration::from_secs(300),
}),
http: HttpOptions {
addr: "127.0.0.1:4000".to_string(),
..Default::default()
},
user_provider: None,
},
..Default::default()
};
similar_asserts::assert_eq!(options, expected);
}
#[test]
fn test_load_standalone_example_config() {
let example_config = common_test_util::find_workspace_path("config/standalone.example.toml");

View File

@@ -78,7 +78,7 @@ pub const INFORMATION_SCHEMA_ROUTINES_TABLE_ID: u32 = 21;
pub const INFORMATION_SCHEMA_SCHEMA_PRIVILEGES_TABLE_ID: u32 = 22;
/// id for information_schema.TABLE_PRIVILEGES
pub const INFORMATION_SCHEMA_TABLE_PRIVILEGES_TABLE_ID: u32 = 23;
/// id for information_schema.TRIGGERS
/// id for information_schema.TRIGGERS (for MySQL)
pub const INFORMATION_SCHEMA_TRIGGERS_TABLE_ID: u32 = 24;
/// id for information_schema.GLOBAL_STATUS
pub const INFORMATION_SCHEMA_GLOBAL_STATUS_TABLE_ID: u32 = 25;
@@ -104,6 +104,8 @@ pub const INFORMATION_SCHEMA_PROCEDURE_INFO_TABLE_ID: u32 = 34;
pub const INFORMATION_SCHEMA_REGION_STATISTICS_TABLE_ID: u32 = 35;
/// id for information_schema.process_list
pub const INFORMATION_SCHEMA_PROCESS_LIST_TABLE_ID: u32 = 36;
/// id for information_schema.trigger_list (for GreptimeDB triggers)
pub const INFORMATION_SCHEMA_TRIGGER_TABLE_ID: u32 = 37;
// ----- End of information_schema tables -----

View File

@@ -119,6 +119,11 @@ pub enum StatusCode {
FlowAlreadyExists = 8000,
FlowNotFound = 8001,
// ====== End of flow related status code =====
// ====== Begin of trigger related status code =====
TriggerAlreadyExists = 9000,
TriggerNotFound = 9001,
// ====== End of trigger related status code =====
}
impl StatusCode {
@@ -155,6 +160,8 @@ impl StatusCode {
| StatusCode::RegionNotFound
| StatusCode::FlowAlreadyExists
| StatusCode::FlowNotFound
| StatusCode::TriggerAlreadyExists
| StatusCode::TriggerNotFound
| StatusCode::RegionReadonly
| StatusCode::TableColumnNotFound
| StatusCode::TableColumnExists
@@ -198,6 +205,8 @@ impl StatusCode {
| StatusCode::PlanQuery
| StatusCode::FlowAlreadyExists
| StatusCode::FlowNotFound
| StatusCode::TriggerAlreadyExists
| StatusCode::TriggerNotFound
| StatusCode::RegionNotReady
| StatusCode::RegionBusy
| StatusCode::RegionReadonly
@@ -230,6 +239,48 @@ impl fmt::Display for StatusCode {
}
}
#[macro_export]
macro_rules! define_from_tonic_status {
($Error: ty, $Variant: ident) => {
impl From<tonic::Status> for $Error {
fn from(e: tonic::Status) -> Self {
use snafu::location;
fn metadata_value(e: &tonic::Status, key: &str) -> Option<String> {
e.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
let code = metadata_value(&e, $crate::GREPTIME_DB_HEADER_ERROR_CODE)
.and_then(|s| {
if let Ok(code) = s.parse::<u32>() {
StatusCode::from_u32(code)
} else {
None
}
})
.unwrap_or_else(|| match e.code() {
tonic::Code::Cancelled => StatusCode::Cancelled,
tonic::Code::DeadlineExceeded => StatusCode::DeadlineExceeded,
_ => StatusCode::Internal,
});
let msg = metadata_value(&e, $crate::GREPTIME_DB_HEADER_ERROR_MSG)
.unwrap_or_else(|| e.message().to_string());
// TODO(LFC): Define the error variant automatically.
Self::$Variant {
code,
msg,
tonic_code: e.code(),
location: location!(),
}
}
}
};
}
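As a usage sketch of this macro (the real wiring in the client crate is the `define_from_tonic_status!(Error, Tonic);` call shown earlier), a hypothetical consumer only needs a variant whose fields match what the expansion constructs; `MyError` is invented for illustration, and the real error enums derive `Snafu` instead of being written out by hand:

use common_error::define_from_tonic_status;
use common_error::status_code::StatusCode;

#[derive(Debug)]
pub enum MyError {
    Tonic {
        code: StatusCode,
        msg: String,
        tonic_code: tonic::Code,
        location: snafu::Location,
    },
}

// Expands into `impl From<tonic::Status> for MyError`, pulling the GreptimeDB status code and
// message from the gRPC metadata and falling back to the tonic code and message otherwise.
define_from_tonic_status!(MyError, Tonic);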
#[macro_export]
macro_rules! define_into_tonic_status {
($Error: ty) => {
@@ -281,12 +332,14 @@ pub fn status_to_tonic_code(status_code: StatusCode) -> Code {
| StatusCode::TableColumnExists
| StatusCode::RegionAlreadyExists
| StatusCode::DatabaseAlreadyExists
| StatusCode::TriggerAlreadyExists
| StatusCode::FlowAlreadyExists => Code::AlreadyExists,
StatusCode::TableNotFound
| StatusCode::RegionNotFound
| StatusCode::TableColumnNotFound
| StatusCode::DatabaseNotFound
| StatusCode::UserNotFound
| StatusCode::TriggerNotFound
| StatusCode::FlowNotFound => Code::NotFound,
StatusCode::TableUnavailable
| StatusCode::StorageUnavailable
@@ -304,15 +357,6 @@ pub fn status_to_tonic_code(status_code: StatusCode) -> Code {
}
}
/// Converts tonic [Code] to [StatusCode].
pub fn convert_tonic_code_to_status_code(code: Code) -> StatusCode {
match code {
Code::Cancelled => StatusCode::Cancelled,
Code::DeadlineExceeded => StatusCode::DeadlineExceeded,
_ => StatusCode::Internal,
}
}
#[cfg(test)]
mod tests {
use strum::IntoEnumIterator;

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::Debug;
use std::time::Duration;
use common_grpc::channel_manager::{ChannelConfig, ChannelManager};
@@ -30,7 +31,7 @@ use crate::error::{MetaSnafu, Result};
pub type FrontendClientPtr = Box<dyn FrontendClient>;
#[async_trait::async_trait]
pub trait FrontendClient: Send {
pub trait FrontendClient: Send + Debug {
async fn list_process(&mut self, req: ListProcessRequest) -> Result<ListProcessResponse>;
async fn kill_process(&mut self, req: KillProcessRequest) -> Result<KillProcessResponse>;

View File

@@ -14,7 +14,6 @@
pub mod clamp;
mod modulo;
mod pow;
mod rate;
use std::fmt;
@@ -26,7 +25,6 @@ use datafusion::error::DataFusionError;
use datafusion::logical_expr::Volatility;
use datatypes::prelude::ConcreteDataType;
use datatypes::vectors::VectorRef;
pub use pow::PowFunction;
pub use rate::RateFunction;
use snafu::ResultExt;
@@ -39,7 +37,6 @@ pub(crate) struct MathFunction;
impl MathFunction {
pub fn register(registry: &FunctionRegistry) {
registry.register_scalar(ModuloFunction);
registry.register_scalar(PowFunction);
registry.register_scalar(RateFunction);
registry.register_scalar(RangeFunction);
registry.register_scalar(ClampFunction);

View File

@@ -1,120 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::sync::Arc;
use common_query::error::Result;
use common_query::prelude::{Signature, Volatility};
use datatypes::data_type::DataType;
use datatypes::prelude::ConcreteDataType;
use datatypes::types::LogicalPrimitiveType;
use datatypes::vectors::VectorRef;
use datatypes::with_match_primitive_type_id;
use num::traits::Pow;
use num_traits::AsPrimitive;
use crate::function::{Function, FunctionContext};
use crate::scalars::expression::{scalar_binary_op, EvalContext};
#[derive(Clone, Debug, Default)]
pub struct PowFunction;
impl Function for PowFunction {
fn name(&self) -> &str {
"pow"
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::float64_datatype())
}
fn signature(&self) -> Signature {
Signature::uniform(2, ConcreteDataType::numerics(), Volatility::Immutable)
}
fn eval(&self, _func_ctx: &FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
with_match_primitive_type_id!(columns[0].data_type().logical_type_id(), |$S| {
with_match_primitive_type_id!(columns[1].data_type().logical_type_id(), |$T| {
let col = scalar_binary_op::<<$S as LogicalPrimitiveType>::Native, <$T as LogicalPrimitiveType>::Native, f64, _>(&columns[0], &columns[1], scalar_pow, &mut EvalContext::default())?;
Ok(Arc::new(col))
},{
unreachable!()
})
},{
unreachable!()
})
}
}
#[inline]
fn scalar_pow<S, T>(value: Option<S>, base: Option<T>, _ctx: &mut EvalContext) -> Option<f64>
where
S: AsPrimitive<f64>,
T: AsPrimitive<f64>,
{
match (value, base) {
(Some(value), Some(base)) => Some(value.as_().pow(base.as_())),
_ => None,
}
}
impl fmt::Display for PowFunction {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "POW")
}
}
#[cfg(test)]
mod tests {
use common_query::prelude::TypeSignature;
use datatypes::value::Value;
use datatypes::vectors::{Float32Vector, Int8Vector};
use super::*;
use crate::function::FunctionContext;
#[test]
fn test_pow_function() {
let pow = PowFunction;
assert_eq!("pow", pow.name());
assert_eq!(
ConcreteDataType::float64_datatype(),
pow.return_type(&[]).unwrap()
);
assert!(matches!(pow.signature(),
Signature {
type_signature: TypeSignature::Uniform(2, valid_types),
volatility: Volatility::Immutable
} if valid_types == ConcreteDataType::numerics()
));
let values = vec![1.0, 2.0, 3.0];
let bases = vec![0i8, -1i8, 3i8];
let args: Vec<VectorRef> = vec![
Arc::new(Float32Vector::from_vec(values.clone())),
Arc::new(Int8Vector::from_vec(bases.clone())),
];
let vector = pow.eval(&FunctionContext::default(), &args).unwrap();
assert_eq!(3, vector.len());
for i in 0..3 {
let p: f64 = (values[i] as f64).pow(bases[i] as f64);
assert!(matches!(vector.get(i), Value::Float64(v) if v == p));
}
}
}

View File

@@ -34,7 +34,7 @@ use table::requests::{
};
use crate::error::{
InvalidColumnDefSnafu, InvalidSetFulltextOptionRequestSnafu,
InvalidColumnDefSnafu, InvalidIndexOptionSnafu, InvalidSetFulltextOptionRequestSnafu,
InvalidSetSkippingIndexOptionRequestSnafu, InvalidSetTableOptionRequestSnafu,
InvalidUnsetTableOptionRequestSnafu, MissingAlterIndexOptionSnafu, MissingFieldSnafu,
MissingTimestampColumnSnafu, Result, UnknownLocationTypeSnafu,
@@ -126,18 +126,21 @@ pub fn alter_expr_to_request(table_id: TableId, expr: AlterTableExpr) -> Result<
api::v1::set_index::Options::Fulltext(f) => AlterKind::SetIndex {
options: SetIndexOptions::Fulltext {
column_name: f.column_name.clone(),
options: FulltextOptions {
enable: f.enable,
analyzer: as_fulltext_option_analyzer(
options: FulltextOptions::new(
f.enable,
as_fulltext_option_analyzer(
Analyzer::try_from(f.analyzer)
.context(InvalidSetFulltextOptionRequestSnafu)?,
),
case_sensitive: f.case_sensitive,
backend: as_fulltext_option_backend(
f.case_sensitive,
as_fulltext_option_backend(
PbFulltextBackend::try_from(f.backend)
.context(InvalidSetFulltextOptionRequestSnafu)?,
),
},
f.granularity as u32,
f.false_positive_rate,
)
.context(InvalidIndexOptionSnafu)?,
},
},
api::v1::set_index::Options::Inverted(i) => AlterKind::SetIndex {
@@ -148,13 +151,15 @@ pub fn alter_expr_to_request(table_id: TableId, expr: AlterTableExpr) -> Result<
api::v1::set_index::Options::Skipping(s) => AlterKind::SetIndex {
options: SetIndexOptions::Skipping {
column_name: s.column_name,
options: SkippingIndexOptions {
granularity: s.granularity as u32,
index_type: as_skipping_index_type(
options: SkippingIndexOptions::new(
s.granularity as u32,
s.false_positive_rate,
as_skipping_index_type(
PbSkippingIndexType::try_from(s.skipping_index_type)
.context(InvalidSetSkippingIndexOptionRequestSnafu)?,
),
},
)
.context(InvalidIndexOptionSnafu)?,
},
},
},

View File

@@ -153,6 +153,14 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid index option"))]
InvalidIndexOption {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: datatypes::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -180,7 +188,8 @@ impl ErrorExt for Error {
| Error::InvalidUnsetTableOptionRequest { .. }
| Error::InvalidSetFulltextOptionRequest { .. }
| Error::InvalidSetSkippingIndexOptionRequest { .. }
| Error::MissingAlterIndexOption { .. } => StatusCode::InvalidArguments,
| Error::MissingAlterIndexOption { .. }
| Error::InvalidIndexOption { .. } => StatusCode::InvalidArguments,
}
}

View File

@@ -27,17 +27,18 @@ use common_telemetry::{error, info, warn};
use futures_util::future;
pub use region_request::make_alter_region_request;
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use snafu::ResultExt;
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::ALTER_PHYSICAL_EXTENSION_KEY;
use strum::AsRefStr;
use table::metadata::TableId;
use crate::ddl::utils::{
add_peer_context_if_needed, map_to_procedure_error, sync_follower_regions,
add_peer_context_if_needed, extract_column_metadatas, map_to_procedure_error,
sync_follower_regions,
};
use crate::ddl::DdlContext;
use crate::error::{DecodeJsonSnafu, MetadataCorruptionSnafu, Result};
use crate::error::Result;
use crate::instruction::CacheIdent;
use crate::key::table_info::TableInfoValue;
use crate::key::table_route::PhysicalTableRouteValue;
@@ -137,37 +138,13 @@ impl AlterLogicalTablesProcedure {
.into_iter()
.collect::<Result<Vec<_>>>()?;
// Collects responses from datanodes.
let phy_raw_schemas = results
.iter_mut()
.map(|res| res.extensions.remove(ALTER_PHYSICAL_EXTENSION_KEY))
.collect::<Vec<_>>();
if phy_raw_schemas.is_empty() {
self.submit_sync_region_requests(results, &physical_table_route.region_routes)
.await;
self.data.state = AlterTablesState::UpdateMetadata;
return Ok(Status::executing(true));
}
// Verify all the physical schemas are the same
// Safety: previous check ensures this vec is not empty
let first = phy_raw_schemas.first().unwrap();
ensure!(
phy_raw_schemas.iter().all(|x| x == first),
MetadataCorruptionSnafu {
err_msg: "The physical schemas from datanodes are not the same."
}
);
// Decodes the physical raw schemas
if let Some(phy_raw_schema) = first {
self.data.physical_columns =
ColumnMetadata::decode_list(phy_raw_schema).context(DecodeJsonSnafu)?;
if let Some(column_metadatas) =
extract_column_metadatas(&mut results, ALTER_PHYSICAL_EXTENSION_KEY)?
{
self.data.physical_columns = column_metadatas;
} else {
warn!("altering logical table result doesn't contains extension key `{ALTER_PHYSICAL_EXTENSION_KEY}`,leaving the physical table's schema unchanged");
}
self.submit_sync_region_requests(results, &physical_table_route.region_routes)
.await;
self.data.state = AlterTablesState::UpdateMetadata;
@@ -183,7 +160,7 @@ impl AlterLogicalTablesProcedure {
if let Err(err) = sync_follower_regions(
&self.context,
self.data.physical_table_id,
results,
&results,
region_routes,
table_info.meta.engine.as_str(),
)

View File

@@ -29,19 +29,22 @@ use common_procedure::{
Context as ProcedureContext, ContextProvider, Error as ProcedureError, LockKey, PoisonKey,
PoisonKeys, Procedure, ProcedureId, Status, StringKey,
};
use common_telemetry::{debug, error, info};
use common_telemetry::{debug, error, info, warn};
use futures::future::{self};
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::TABLE_COLUMN_METADATA_EXTENSION_KEY;
use store_api::storage::RegionId;
use strum::AsRefStr;
use table::metadata::{RawTableInfo, TableId, TableInfo};
use table::table_reference::TableReference;
use crate::cache_invalidator::Context;
use crate::ddl::physical_table_metadata::update_table_info_column_ids;
use crate::ddl::utils::{
add_peer_context_if_needed, handle_multiple_results, map_to_procedure_error,
sync_follower_regions, MultipleResults,
add_peer_context_if_needed, extract_column_metadatas, handle_multiple_results,
map_to_procedure_error, sync_follower_regions, MultipleResults,
};
use crate::ddl::DdlContext;
use crate::error::{AbortProcedureSnafu, NoLeaderSnafu, PutPoisonSnafu, Result, RetryLaterSnafu};
@@ -202,9 +205,9 @@ impl AlterTableProcedure {
})
}
MultipleResults::Ok(results) => {
self.submit_sync_region_requests(results, &physical_table_route.region_routes)
self.submit_sync_region_requests(&results, &physical_table_route.region_routes)
.await;
self.data.state = AlterTableState::UpdateMetadata;
self.handle_alter_region_response(results)?;
Ok(Status::executing_with_clean_poisons(true))
}
MultipleResults::AllNonRetryable(error) => {
@@ -220,9 +223,22 @@ impl AlterTableProcedure {
}
}
fn handle_alter_region_response(&mut self, mut results: Vec<RegionResponse>) -> Result<()> {
self.data.state = AlterTableState::UpdateMetadata;
if let Some(column_metadatas) =
extract_column_metadatas(&mut results, TABLE_COLUMN_METADATA_EXTENSION_KEY)?
{
self.data.column_metadatas = column_metadatas;
} else {
warn!("altering table result doesn't contains extension key `{TABLE_COLUMN_METADATA_EXTENSION_KEY}`,leaving the table's column metadata unchanged");
}
Ok(())
}
async fn submit_sync_region_requests(
&mut self,
results: Vec<RegionResponse>,
results: &[RegionResponse],
region_routes: &[RegionRoute],
) {
// Safety: filled in `prepare` step.
@@ -268,10 +284,14 @@ impl AlterTableProcedure {
self.on_update_metadata_for_rename(new_table_name.to_string(), table_info_value)
.await?;
} else {
let mut raw_table_info = new_info.into();
if !self.data.column_metadatas.is_empty() {
update_table_info_column_ids(&mut raw_table_info, &self.data.column_metadatas);
}
// region distribution is set in submit_alter_region_requests
let region_distribution = self.data.region_distribution.as_ref().unwrap().clone();
self.on_update_metadata_for_alter(
new_info.into(),
raw_table_info,
region_distribution,
table_info_value,
)
@@ -318,6 +338,16 @@ impl AlterTableProcedure {
lock_key
}
#[cfg(test)]
pub(crate) fn data(&self) -> &AlterTableData {
&self.data
}
#[cfg(test)]
pub(crate) fn mut_data(&mut self) -> &mut AlterTableData {
&mut self.data
}
}
#[async_trait]
@@ -380,6 +410,8 @@ pub struct AlterTableData {
state: AlterTableState,
task: AlterTableTask,
table_id: TableId,
#[serde(default)]
column_metadatas: Vec<ColumnMetadata>,
/// Table info value before alteration.
table_info_value: Option<DeserializedValueWithBytes<TableInfoValue>>,
/// Region distribution for table in case we need to update region options.
@@ -392,6 +424,7 @@ impl AlterTableData {
state: AlterTableState::Prepare,
task,
table_id,
column_metadatas: vec![],
table_info_value: None,
region_distribution: None,
}
@@ -410,4 +443,14 @@ impl AlterTableData {
.as_ref()
.map(|value| &value.table_info)
}
#[cfg(test)]
pub(crate) fn column_metadatas(&self) -> &[ColumnMetadata] {
&self.column_metadatas
}
#[cfg(test)]
pub(crate) fn set_column_metadatas(&mut self, column_metadatas: Vec<ColumnMetadata>) {
self.column_metadatas = column_metadatas;
}
}

View File

@@ -167,6 +167,25 @@ impl CreateFlowProcedure {
}
self.collect_source_tables().await?;
// Validate that source and sink tables are not the same
let sink_table_name = &self.data.task.sink_table_name;
if self
.data
.task
.source_table_names
.iter()
.any(|source| source == sink_table_name)
{
return error::UnsupportedSnafu {
operation: format!(
"Creating flow with source and sink table being the same: {}",
sink_table_name
),
}
.fail();
}
if self.data.flow_id.is_none() {
self.allocate_flow_id().await?;
}

View File

@@ -27,7 +27,7 @@ use common_telemetry::{debug, error, warn};
use futures::future;
pub use region_request::create_region_request_builder;
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use snafu::ResultExt;
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::ALTER_PHYSICAL_EXTENSION_KEY;
use store_api::storage::{RegionId, RegionNumber};
@@ -35,10 +35,11 @@ use strum::AsRefStr;
use table::metadata::{RawTableInfo, TableId};
use crate::ddl::utils::{
add_peer_context_if_needed, map_to_procedure_error, sync_follower_regions,
add_peer_context_if_needed, extract_column_metadatas, map_to_procedure_error,
sync_follower_regions,
};
use crate::ddl::DdlContext;
use crate::error::{DecodeJsonSnafu, MetadataCorruptionSnafu, Result};
use crate::error::Result;
use crate::key::table_route::TableRouteValue;
use crate::lock_key::{CatalogLock, SchemaLock, TableLock, TableNameLock};
use crate::metrics;
@@ -166,47 +167,23 @@ impl CreateLogicalTablesProcedure {
.into_iter()
.collect::<Result<Vec<_>>>()?;
// Collects response from datanodes.
let phy_raw_schemas = results
.iter_mut()
.map(|res| res.extensions.remove(ALTER_PHYSICAL_EXTENSION_KEY))
.collect::<Vec<_>>();
if phy_raw_schemas.is_empty() {
self.submit_sync_region_requests(results, region_routes)
.await;
self.data.state = CreateTablesState::CreateMetadata;
return Ok(Status::executing(false));
}
// Verify all the physical schemas are the same
// Safety: previous check ensures this vec is not empty
let first = phy_raw_schemas.first().unwrap();
ensure!(
phy_raw_schemas.iter().all(|x| x == first),
MetadataCorruptionSnafu {
err_msg: "The physical schemas from datanodes are not the same."
}
);
// Decodes the physical raw schemas
if let Some(phy_raw_schemas) = first {
self.data.physical_columns =
ColumnMetadata::decode_list(phy_raw_schemas).context(DecodeJsonSnafu)?;
if let Some(column_metadatas) =
extract_column_metadatas(&mut results, ALTER_PHYSICAL_EXTENSION_KEY)?
{
self.data.physical_columns = column_metadatas;
} else {
warn!("creating logical table result doesn't contains extension key `{ALTER_PHYSICAL_EXTENSION_KEY}`,leaving the physical table's schema unchanged");
}
self.submit_sync_region_requests(results, region_routes)
self.submit_sync_region_requests(&results, region_routes)
.await;
self.data.state = CreateTablesState::CreateMetadata;
Ok(Status::executing(true))
}
async fn submit_sync_region_requests(
&self,
results: Vec<RegionResponse>,
results: &[RegionResponse],
region_routes: &[RegionRoute],
) {
if let Err(err) = sync_follower_regions(

View File

@@ -22,20 +22,23 @@ use common_procedure::error::{
ExternalSnafu, FromJsonSnafu, Result as ProcedureResult, ToJsonSnafu,
};
use common_procedure::{Context as ProcedureContext, LockKey, Procedure, Status};
use common_telemetry::info;
use common_telemetry::tracing_context::TracingContext;
use common_telemetry::{info, warn};
use futures::future::join_all;
use serde::{Deserialize, Serialize};
use snafu::{ensure, OptionExt, ResultExt};
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::TABLE_COLUMN_METADATA_EXTENSION_KEY;
use store_api::storage::{RegionId, RegionNumber};
use strum::AsRefStr;
use table::metadata::{RawTableInfo, TableId};
use table::table_reference::TableReference;
use crate::ddl::create_table_template::{build_template, CreateRequestBuilder};
use crate::ddl::physical_table_metadata::update_table_info_column_ids;
use crate::ddl::utils::{
add_peer_context_if_needed, convert_region_routes_to_detecting_regions, map_to_procedure_error,
region_storage_path,
add_peer_context_if_needed, convert_region_routes_to_detecting_regions,
extract_column_metadatas, map_to_procedure_error, region_storage_path,
};
use crate::ddl::{DdlContext, TableMetadata};
use crate::error::{self, Result};
@@ -243,14 +246,21 @@ impl CreateTableProcedure {
}
}
join_all(create_region_tasks)
self.creator.data.state = CreateTableState::CreateMetadata;
let mut results = join_all(create_region_tasks)
.await
.into_iter()
.collect::<Result<Vec<_>>>()?;
self.creator.data.state = CreateTableState::CreateMetadata;
if let Some(column_metadatas) =
extract_column_metadatas(&mut results, TABLE_COLUMN_METADATA_EXTENSION_KEY)?
{
self.creator.data.column_metadatas = column_metadatas;
} else {
warn!("creating table result doesn't contains extension key `{TABLE_COLUMN_METADATA_EXTENSION_KEY}`,leaving the table's column metadata unchanged");
}
// TODO(weny): Add more tests.
Ok(Status::executing(true))
}
@@ -262,7 +272,10 @@ impl CreateTableProcedure {
let table_id = self.table_id();
let manager = &self.context.table_metadata_manager;
let raw_table_info = self.table_info().clone();
let mut raw_table_info = self.table_info().clone();
if !self.creator.data.column_metadatas.is_empty() {
update_table_info_column_ids(&mut raw_table_info, &self.creator.data.column_metadatas);
}
// Safety: the region_wal_options must be allocated.
let region_wal_options = self.region_wal_options()?.clone();
// Safety: the table_route must be allocated.
@@ -346,6 +359,7 @@ impl TableCreator {
Self {
data: CreateTableData {
state: CreateTableState::Prepare,
column_metadatas: vec![],
task,
table_route: None,
region_wal_options: None,
@@ -407,6 +421,8 @@ pub enum CreateTableState {
pub struct CreateTableData {
pub state: CreateTableState,
pub task: CreateTableTask,
#[serde(default)]
pub column_metadatas: Vec<ColumnMetadata>,
/// None stands for not allocated yet.
table_route: Option<PhysicalTableRouteValue>,
/// None stands for not allocated yet.

View File

@@ -12,9 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashSet;
use std::collections::{HashMap, HashSet};
use api::v1::SemanticType;
use common_telemetry::debug;
use common_telemetry::tracing::warn;
use store_api::metadata::ColumnMetadata;
use table::metadata::RawTableInfo;
@@ -23,6 +25,10 @@ pub(crate) fn build_new_physical_table_info(
mut raw_table_info: RawTableInfo,
physical_columns: &[ColumnMetadata],
) -> RawTableInfo {
debug!(
"building new physical table info for table: {}, table_id: {}",
raw_table_info.name, raw_table_info.ident.table_id
);
let existing_columns = raw_table_info
.meta
.schema
@@ -36,6 +42,8 @@ pub(crate) fn build_new_physical_table_info(
let time_index = &mut raw_table_info.meta.schema.timestamp_index;
let columns = &mut raw_table_info.meta.schema.column_schemas;
columns.clear();
let column_ids = &mut raw_table_info.meta.column_ids;
column_ids.clear();
for (idx, col) in physical_columns.iter().enumerate() {
match col.semantic_type {
@@ -50,6 +58,7 @@ pub(crate) fn build_new_physical_table_info(
}
columns.push(col.column_schema.clone());
column_ids.push(col.column_id);
}
if let Some(time_index) = *time_index {
@@ -58,3 +67,54 @@ pub(crate) fn build_new_physical_table_info(
raw_table_info
}
/// Updates the column IDs in the table info based on the provided column metadata.
///
/// This function validates that the column metadata matches the existing table schema
/// before updating the column ids. If the column metadata doesn't match the table schema,
/// the table info remains unchanged.
pub(crate) fn update_table_info_column_ids(
raw_table_info: &mut RawTableInfo,
column_metadatas: &[ColumnMetadata],
) {
let mut table_column_names = raw_table_info
.meta
.schema
.column_schemas
.iter()
.map(|c| c.name.as_str())
.collect::<Vec<_>>();
table_column_names.sort_unstable();
let mut column_names = column_metadatas
.iter()
.map(|c| c.column_schema.name.as_str())
.collect::<Vec<_>>();
column_names.sort_unstable();
if table_column_names != column_names {
warn!(
"Column metadata doesn't match the table schema for table {}, table_id: {}, column in table: {:?}, column in metadata: {:?}",
raw_table_info.name,
raw_table_info.ident.table_id,
table_column_names,
column_names,
);
return;
}
let name_to_id = column_metadatas
.iter()
.map(|c| (c.column_schema.name.clone(), c.column_id))
.collect::<HashMap<_, _>>();
let schema = &raw_table_info.meta.schema.column_schemas;
let mut column_ids = Vec::with_capacity(schema.len());
for column_schema in schema {
if let Some(id) = name_to_id.get(&column_schema.name) {
column_ids.push(*id);
}
}
raw_table_info.meta.column_ids = column_ids;
}
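A small illustrative sketch of the resulting ordering: the ids end up aligned with the order of the table's `column_schemas`, not with the order of the incoming metadata. The two helper constructors are hypothetical, invented only to keep the example short:

// Hypothetical helpers: `raw_table_info_with_columns` builds a RawTableInfo whose schema holds the
// given columns, and `column_metadata` builds a ColumnMetadata with the given name and id.
let mut info = raw_table_info_with_columns(&["ts", "host", "value"]);
let metadatas = vec![
    column_metadata("value", 2),
    column_metadata("ts", 0),
    column_metadata("host", 1),
];
update_table_info_column_ids(&mut info, &metadatas);
// Ids follow the schema order: ts -> 0, host -> 1, value -> 2.
assert_eq!(info.meta.column_ids, vec![0, 1, 2]);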

View File

@@ -122,6 +122,7 @@ impl TableMetadataAllocator {
);
let peers = self.peer_allocator.alloc(regions).await?;
debug!("Allocated peers {:?} for table {}", peers, table_id);
let region_routes = task
.partitions
.iter()

View File

@@ -24,7 +24,14 @@ use std::collections::HashMap;
use api::v1::meta::Partition;
use api::v1::{ColumnDataType, SemanticType};
use common_procedure::Status;
use store_api::metric_engine_consts::{LOGICAL_TABLE_METADATA_KEY, METRIC_ENGINE_NAME};
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::ColumnSchema;
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::{
DATA_SCHEMA_TABLE_ID_COLUMN_NAME, DATA_SCHEMA_TSID_COLUMN_NAME, LOGICAL_TABLE_METADATA_KEY,
METRIC_ENGINE_NAME,
};
use store_api::storage::consts::ReservedColumnId;
use table::metadata::{RawTableInfo, TableId};
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
@@ -146,6 +153,7 @@ pub fn test_create_logical_table_task(name: &str) -> CreateTableTask {
}
}
/// Creates a physical table task with a single region.
pub fn test_create_physical_table_task(name: &str) -> CreateTableTask {
let create_table = TestCreateTableExprBuilder::default()
.column_defs([
@@ -182,3 +190,95 @@ pub fn test_create_physical_table_task(name: &str) -> CreateTableTask {
table_info,
}
}
/// Creates a column metadata list with tag fields.
pub fn test_column_metadatas(tag_fields: &[&str]) -> Vec<ColumnMetadata> {
let mut output = Vec::with_capacity(tag_fields.len() + 4);
output.extend([
ColumnMetadata {
column_schema: ColumnSchema::new(
"ts",
ConcreteDataType::timestamp_millisecond_datatype(),
false,
),
semantic_type: SemanticType::Timestamp,
column_id: 0,
},
ColumnMetadata {
column_schema: ColumnSchema::new("value", ConcreteDataType::float64_datatype(), false),
semantic_type: SemanticType::Field,
column_id: 1,
},
ColumnMetadata {
column_schema: ColumnSchema::new(
DATA_SCHEMA_TABLE_ID_COLUMN_NAME,
ConcreteDataType::timestamp_millisecond_datatype(),
false,
),
semantic_type: SemanticType::Tag,
column_id: ReservedColumnId::table_id(),
},
ColumnMetadata {
column_schema: ColumnSchema::new(
DATA_SCHEMA_TSID_COLUMN_NAME,
ConcreteDataType::float64_datatype(),
false,
),
semantic_type: SemanticType::Tag,
column_id: ReservedColumnId::tsid(),
},
]);
for (i, name) in tag_fields.iter().enumerate() {
output.push(ColumnMetadata {
column_schema: ColumnSchema::new(
name.to_string(),
ConcreteDataType::string_datatype(),
true,
),
semantic_type: SemanticType::Tag,
column_id: (i + 2) as u32,
});
}
output
}
/// Asserts the column names.
pub fn assert_column_name(table_info: &RawTableInfo, expected_column_names: &[&str]) {
assert_eq!(
table_info
.meta
.schema
.column_schemas
.iter()
.map(|c| c.name.to_string())
.collect::<Vec<_>>(),
expected_column_names
);
}
/// Asserts the column names and ids.
pub fn assert_column_name_and_id(column_metadatas: &[ColumnMetadata], expected: &[(&str, u32)]) {
assert_eq!(expected.len(), column_metadatas.len());
for (name, id) in expected {
let column_metadata = column_metadatas
.iter()
.find(|c| c.column_id == *id)
.unwrap();
assert_eq!(column_metadata.column_schema.name, *name);
}
}
/// Gets the raw table info.
pub async fn get_raw_table_info(ddl_context: &DdlContext, table_id: TableId) -> RawTableInfo {
ddl_context
.table_metadata_manager
.table_info_manager()
.get(table_id)
.await
.unwrap()
.unwrap()
.into_inner()
.table_info
}
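For orientation, a quick sketch of the fixture shape these helpers pin down: the four built-in columns always come first and the extra tags are numbered from 2, which the id-based assertion can check directly (the `__table_id`/`__tsid` literals are the same reserved names the tests below assert on).
let metadatas = test_column_metadatas(&["host", "cpu"]);
assert_column_name_and_id(
    &metadatas,
    &[
        ("ts", 0),
        ("value", 1),
        ("__table_id", ReservedColumnId::table_id()),
        ("__tsid", ReservedColumnId::tsid()),
        ("host", 2),
        ("cpu", 3),
    ],
);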

View File

@@ -132,6 +132,7 @@ pub fn build_raw_table_info_from_expr(expr: &CreateTableExpr) -> RawTableInfo {
options: TableOptions::try_from_iter(&expr.table_options).unwrap(),
created_on: DateTime::default(),
partition_key_indices: vec![],
column_ids: vec![],
},
table_type: TableType::Base,
}

View File

@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use api::region::RegionResponse;
use api::v1::region::RegionRequest;
use common_error::ext::{BoxedError, ErrorExt, StackError};
@@ -45,10 +47,13 @@ impl MockDatanodeHandler for () {
}
}
type RegionRequestHandler =
Arc<dyn Fn(Peer, RegionRequest) -> Result<RegionResponse> + Send + Sync>;
#[derive(Clone)]
pub struct DatanodeWatcher {
sender: mpsc::Sender<(Peer, RegionRequest)>,
handler: Option<fn(Peer, RegionRequest) -> Result<RegionResponse>>,
handler: Option<RegionRequestHandler>,
}
impl DatanodeWatcher {
@@ -61,9 +66,9 @@ impl DatanodeWatcher {
pub fn with_handler(
mut self,
user_handler: fn(Peer, RegionRequest) -> Result<RegionResponse>,
user_handler: impl Fn(Peer, RegionRequest) -> Result<RegionResponse> + Send + Sync + 'static,
) -> Self {
self.handler = Some(user_handler);
self.handler = Some(Arc::new(user_handler));
self
}
}
@@ -76,7 +81,7 @@ impl MockDatanodeHandler for DatanodeWatcher {
.send((peer.clone(), request.clone()))
.await
.unwrap();
if let Some(handler) = self.handler {
if let Some(handler) = self.handler.as_ref() {
handler(peer.clone(), request)
} else {
Ok(RegionResponse::new(0))
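Storing the handler as an `Arc<dyn Fn …>` instead of a bare `fn` pointer is what lets the tests below pass closures that own data, such as a prepared column metadata list. A minimal sketch with an illustrative call counter (the counter itself is not part of the change):
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;
use tokio::sync::mpsc;

let calls = Arc::new(AtomicUsize::new(0));
let counter = calls.clone();
let (tx, _rx) = mpsc::channel(8);
// The closure owns `counter`; the old `fn(Peer, RegionRequest) -> Result<RegionResponse>`
// signature could not capture any environment.
let _watcher = DatanodeWatcher::new(tx).with_handler(move |_peer, _request| {
    counter.fetch_add(1, Ordering::Relaxed);
    Ok(RegionResponse::new(0))
});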

View File

@@ -23,17 +23,20 @@ use api::v1::{ColumnDataType, SemanticType};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_procedure::{Procedure, ProcedureId, Status};
use common_procedure_test::MockContextProvider;
use store_api::metric_engine_consts::MANIFEST_INFO_EXTENSION_KEY;
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::{ALTER_PHYSICAL_EXTENSION_KEY, MANIFEST_INFO_EXTENSION_KEY};
use store_api::region_engine::RegionManifestInfo;
use store_api::storage::consts::ReservedColumnId;
use store_api::storage::RegionId;
use tokio::sync::mpsc;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::ddl::test_util::alter_table::TestAlterTableExprBuilder;
use crate::ddl::test_util::columns::TestColumnDefBuilder;
use crate::ddl::test_util::datanode_handler::{DatanodeWatcher, NaiveDatanodeHandler};
use crate::ddl::test_util::datanode_handler::DatanodeWatcher;
use crate::ddl::test_util::{
create_logical_table, create_physical_table, create_physical_table_metadata,
assert_column_name, create_logical_table, create_physical_table,
create_physical_table_metadata, get_raw_table_info, test_column_metadatas,
test_create_physical_table_task,
};
use crate::error::Error::{AlterLogicalTablesInvalidArguments, TableNotFound};
@@ -96,6 +99,52 @@ fn make_alter_logical_table_rename_task(
}
}
fn make_alters_request_handler(
column_metadatas: Vec<ColumnMetadata>,
) -> impl Fn(Peer, RegionRequest) -> Result<RegionResponse> {
move |_peer: Peer, request: RegionRequest| {
if let region_request::Body::Alters(_) = request.body.unwrap() {
let mut response = RegionResponse::new(0);
// Default region id for physical table.
let region_id = RegionId::new(1000, 1);
response.extensions.insert(
MANIFEST_INFO_EXTENSION_KEY.to_string(),
RegionManifestInfo::encode_list(&[(
region_id,
RegionManifestInfo::metric(1, 0, 2, 0),
)])
.unwrap(),
);
response.extensions.insert(
ALTER_PHYSICAL_EXTENSION_KEY.to_string(),
ColumnMetadata::encode_list(&column_metadatas).unwrap(),
);
return Ok(response);
}
Ok(RegionResponse::new(0))
}
}
fn assert_alters_request(
peer: Peer,
request: RegionRequest,
expected_peer_id: u64,
expected_region_ids: &[RegionId],
) {
assert_eq!(peer.id, expected_peer_id);
let Some(region_request::Body::Alters(req)) = request.body else {
unreachable!();
};
for (i, region_id) in expected_region_ids.iter().enumerate() {
assert_eq!(
req.requests[i].region_id,
*region_id,
"actual region id: {}",
RegionId::from_u64(req.requests[i].region_id)
);
}
}
#[tokio::test]
async fn test_on_prepare_check_schema() {
let node_manager = Arc::new(MockDatanodeManager::new(()));
@@ -205,15 +254,20 @@ async fn test_on_prepare() {
#[tokio::test]
async fn test_on_update_metadata() {
let node_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
common_telemetry::init_default_ut_logging();
let (tx, mut rx) = mpsc::channel(8);
let test_column_metadatas = test_column_metadatas(&["new_col", "mew_col"]);
let datanode_handler =
DatanodeWatcher::new(tx).with_handler(make_alters_request_handler(test_column_metadatas));
let node_manager = Arc::new(MockDatanodeManager::new(datanode_handler));
let ddl_context = new_ddl_context(node_manager);
// Creates physical table
let phy_id = create_physical_table(&ddl_context, "phy").await;
// Creates 5 logical tables
create_logical_table(ddl_context.clone(), phy_id, "table1").await;
create_logical_table(ddl_context.clone(), phy_id, "table2").await;
create_logical_table(ddl_context.clone(), phy_id, "table3").await;
let logical_table1_id = create_logical_table(ddl_context.clone(), phy_id, "table1").await;
let logical_table2_id = create_logical_table(ddl_context.clone(), phy_id, "table2").await;
let logical_table3_id = create_logical_table(ddl_context.clone(), phy_id, "table3").await;
create_logical_table(ddl_context.clone(), phy_id, "table4").await;
create_logical_table(ddl_context.clone(), phy_id, "table5").await;
@@ -223,7 +277,7 @@ async fn test_on_update_metadata() {
make_alter_logical_table_add_column_task(None, "table3", vec!["new_col".to_string()]),
];
let mut procedure = AlterLogicalTablesProcedure::new(tasks, phy_id, ddl_context);
let mut procedure = AlterLogicalTablesProcedure::new(tasks, phy_id, ddl_context.clone());
let mut status = procedure.on_prepare().await.unwrap();
assert_matches!(
status,
@@ -255,18 +309,52 @@ async fn test_on_update_metadata() {
clean_poisons: false
}
);
let (peer, request) = rx.try_recv().unwrap();
rx.try_recv().unwrap_err();
assert_alters_request(
peer,
request,
0,
&[
RegionId::new(logical_table1_id, 0),
RegionId::new(logical_table2_id, 0),
RegionId::new(logical_table3_id, 0),
],
);
let table_info = get_raw_table_info(&ddl_context, phy_id).await;
assert_column_name(
&table_info,
&["ts", "value", "__table_id", "__tsid", "new_col", "mew_col"],
);
assert_eq!(
table_info.meta.column_ids,
vec![
0,
1,
ReservedColumnId::table_id(),
ReservedColumnId::tsid(),
2,
3
]
);
}
#[tokio::test]
async fn test_on_part_duplicate_alter_request() {
let node_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
let ddl_context = new_ddl_context(node_manager);
common_telemetry::init_default_ut_logging();
let (tx, mut rx) = mpsc::channel(8);
let column_metadatas = test_column_metadatas(&["col_0"]);
let handler =
DatanodeWatcher::new(tx).with_handler(make_alters_request_handler(column_metadatas));
let node_manager = Arc::new(MockDatanodeManager::new(handler));
let mut ddl_context = new_ddl_context(node_manager);
// Creates physical table
let phy_id = create_physical_table(&ddl_context, "phy").await;
// Creates 2 logical tables
create_logical_table(ddl_context.clone(), phy_id, "table1").await;
create_logical_table(ddl_context.clone(), phy_id, "table2").await;
let logical_table1_id = create_logical_table(ddl_context.clone(), phy_id, "table1").await;
let logical_table2_id = create_logical_table(ddl_context.clone(), phy_id, "table2").await;
let tasks = vec![
make_alter_logical_table_add_column_task(None, "table1", vec!["col_0".to_string()]),
@@ -305,6 +393,40 @@ async fn test_on_part_duplicate_alter_request() {
clean_poisons: false
}
);
let (peer, request) = rx.try_recv().unwrap();
rx.try_recv().unwrap_err();
assert_alters_request(
peer,
request,
0,
&[
RegionId::new(logical_table1_id, 0),
RegionId::new(logical_table2_id, 0),
],
);
let table_info = get_raw_table_info(&ddl_context, phy_id).await;
assert_column_name(
&table_info,
&["ts", "value", "__table_id", "__tsid", "col_0"],
);
assert_eq!(
table_info.meta.column_ids,
vec![
0,
1,
ReservedColumnId::table_id(),
ReservedColumnId::tsid(),
2
]
);
let (tx, mut rx) = mpsc::channel(8);
let column_metadatas = test_column_metadatas(&["col_0", "new_col_1", "new_col_2"]);
let handler =
DatanodeWatcher::new(tx).with_handler(make_alters_request_handler(column_metadatas));
let node_manager = Arc::new(MockDatanodeManager::new(handler));
ddl_context.node_manager = node_manager;
// re-alter
let tasks = vec![
@@ -357,6 +479,44 @@ async fn test_on_part_duplicate_alter_request() {
}
);
let (peer, request) = rx.try_recv().unwrap();
rx.try_recv().unwrap_err();
assert_alters_request(
peer,
request,
0,
&[
RegionId::new(logical_table1_id, 0),
RegionId::new(logical_table2_id, 0),
],
);
let table_info = get_raw_table_info(&ddl_context, phy_id).await;
assert_column_name(
&table_info,
&[
"ts",
"value",
"__table_id",
"__tsid",
"col_0",
"new_col_1",
"new_col_2",
],
);
assert_eq!(
table_info.meta.column_ids,
vec![
0,
1,
ReservedColumnId::table_id(),
ReservedColumnId::tsid(),
2,
3,
4,
]
);
let table_name_keys = vec![
TableNameKey::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "table1"),
TableNameKey::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "table2"),
@@ -422,27 +582,13 @@ async fn test_on_part_duplicate_alter_request() {
);
}
fn alters_request_handler(_peer: Peer, request: RegionRequest) -> Result<RegionResponse> {
if let region_request::Body::Alters(_) = request.body.unwrap() {
let mut response = RegionResponse::new(0);
// Default region id for physical table.
let region_id = RegionId::new(1000, 1);
response.extensions.insert(
MANIFEST_INFO_EXTENSION_KEY.to_string(),
RegionManifestInfo::encode_list(&[(region_id, RegionManifestInfo::metric(1, 0, 2, 0))])
.unwrap(),
);
return Ok(response);
}
Ok(RegionResponse::new(0))
}
#[tokio::test]
async fn test_on_submit_alter_region_request() {
common_telemetry::init_default_ut_logging();
let (tx, mut rx) = mpsc::channel(8);
let handler = DatanodeWatcher::new(tx).with_handler(alters_request_handler);
let column_metadatas = test_column_metadatas(&["new_col", "mew_col"]);
let handler =
DatanodeWatcher::new(tx).with_handler(make_alters_request_handler(column_metadatas));
let node_manager = Arc::new(MockDatanodeManager::new(handler));
let ddl_context = new_ddl_context(node_manager);

View File

@@ -30,7 +30,12 @@ use common_error::status_code::StatusCode;
use common_procedure::store::poison_store::PoisonStore;
use common_procedure::{ProcedureId, Status};
use common_procedure_test::MockContextProvider;
use store_api::metric_engine_consts::MANIFEST_INFO_EXTENSION_KEY;
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::ColumnSchema;
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::{
MANIFEST_INFO_EXTENSION_KEY, TABLE_COLUMN_METADATA_EXTENSION_KEY,
};
use store_api::region_engine::RegionManifestInfo;
use store_api::storage::RegionId;
use table::requests::TTL_KEY;
@@ -43,6 +48,7 @@ use crate::ddl::test_util::datanode_handler::{
AllFailureDatanodeHandler, DatanodeWatcher, PartialSuccessDatanodeHandler,
RequestOutdatedErrorDatanodeHandler,
};
use crate::ddl::test_util::{assert_column_name, assert_column_name_and_id};
use crate::error::{Error, Result};
use crate::key::datanode_table::DatanodeTableKey;
use crate::key::table_name::TableNameKey;
@@ -179,6 +185,30 @@ fn alter_request_handler(_peer: Peer, request: RegionRequest) -> Result<RegionRe
RegionManifestInfo::encode_list(&[(region_id, RegionManifestInfo::mito(1, 1))])
.unwrap(),
);
response.extensions.insert(
TABLE_COLUMN_METADATA_EXTENSION_KEY.to_string(),
ColumnMetadata::encode_list(&[
ColumnMetadata {
column_schema: ColumnSchema::new(
"ts",
ConcreteDataType::timestamp_millisecond_datatype(),
false,
),
semantic_type: SemanticType::Timestamp,
column_id: 0,
},
ColumnMetadata {
column_schema: ColumnSchema::new(
"host",
ConcreteDataType::float64_datatype(),
false,
),
semantic_type: SemanticType::Tag,
column_id: 1,
},
])
.unwrap(),
);
return Ok(response);
}
@@ -187,6 +217,7 @@ fn alter_request_handler(_peer: Peer, request: RegionRequest) -> Result<RegionRe
#[tokio::test]
async fn test_on_submit_alter_request() {
common_telemetry::init_default_ut_logging();
let (tx, mut rx) = mpsc::channel(8);
let datanode_handler = DatanodeWatcher::new(tx).with_handler(alter_request_handler);
let node_manager = Arc::new(MockDatanodeManager::new(datanode_handler));
@@ -234,6 +265,8 @@ async fn test_on_submit_alter_request() {
assert_sync_request(peer, request, 4, RegionId::new(table_id, 2), 1);
let (peer, request) = results.remove(0);
assert_sync_request(peer, request, 5, RegionId::new(table_id, 1), 1);
let column_metadatas = procedure.data().column_metadatas();
assert_column_name_and_id(column_metadatas, &[("ts", 0), ("host", 1)]);
}
#[tokio::test]
@@ -378,6 +411,7 @@ async fn test_on_update_metadata_rename() {
#[tokio::test]
async fn test_on_update_metadata_add_columns() {
common_telemetry::init_default_ut_logging();
let node_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(node_manager);
let table_name = "foo";
@@ -431,6 +465,34 @@ async fn test_on_update_metadata_add_columns() {
.submit_alter_region_requests(procedure_id, provider.as_ref())
.await
.unwrap();
// The returned column metadatas are empty.
assert!(procedure.data().column_metadatas().is_empty());
procedure.mut_data().set_column_metadatas(vec![
ColumnMetadata {
column_schema: ColumnSchema::new(
"ts",
ConcreteDataType::timestamp_millisecond_datatype(),
false,
),
semantic_type: SemanticType::Timestamp,
column_id: 0,
},
ColumnMetadata {
column_schema: ColumnSchema::new("host", ConcreteDataType::float64_datatype(), false),
semantic_type: SemanticType::Tag,
column_id: 1,
},
ColumnMetadata {
column_schema: ColumnSchema::new("cpu", ConcreteDataType::float64_datatype(), false),
semantic_type: SemanticType::Tag,
column_id: 2,
},
ColumnMetadata {
column_schema: ColumnSchema::new("my_tag3", ConcreteDataType::string_datatype(), true),
semantic_type: SemanticType::Tag,
column_id: 3,
},
]);
procedure.on_update_metadata().await.unwrap();
let table_info = ddl_context
@@ -447,6 +509,8 @@ async fn test_on_update_metadata_add_columns() {
table_info.meta.schema.column_schemas.len() as u32,
table_info.meta.next_column_id
);
assert_column_name(&table_info, &["ts", "host", "cpu", "my_tag3"]);
assert_eq!(table_info.meta.column_ids, vec![0, 1, 2, 3]);
}
#[tokio::test]

View File

@@ -141,3 +141,41 @@ async fn test_create_flow() {
let err = procedure.on_prepare().await.unwrap_err();
assert_matches!(err, error::Error::FlowAlreadyExists { .. });
}
#[tokio::test]
async fn test_create_flow_same_source_and_sink_table() {
let table_id = 1024;
let table_name = TableName::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "same_table");
// Use the same table for both source and sink
let source_table_names = vec![table_name.clone()];
let sink_table_name = table_name.clone();
let node_manager = Arc::new(MockFlownodeManager::new(NaiveFlownodeHandler));
let ddl_context = new_ddl_context(node_manager);
// Create the table first so it exists
let task = test_create_table_task("same_table", table_id);
ddl_context
.table_metadata_manager
.create_table_metadata(
task.table_info.clone(),
TableRouteValue::physical(vec![]),
HashMap::new(),
)
.await
.unwrap();
// Try to create a flow with the same source and sink table - should fail
let task = test_create_flow_task("my_flow", source_table_names, sink_table_name, false);
let query_ctx = QueryContext::arc().into();
let mut procedure = CreateFlowProcedure::new(task, query_ctx, ddl_context);
let err = procedure.on_prepare().await.unwrap_err();
assert_matches!(err, error::Error::Unsupported { .. });
// Verify the error message contains information about the same table
if let error::Error::Unsupported { operation, .. } = &err {
assert!(operation.contains("source and sink table being the same"));
assert!(operation.contains("same_table"));
}
}

View File

@@ -23,15 +23,18 @@ use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_procedure::{Context as ProcedureContext, Procedure, ProcedureId, Status};
use common_procedure_test::MockContextProvider;
use store_api::metric_engine_consts::MANIFEST_INFO_EXTENSION_KEY;
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::{ALTER_PHYSICAL_EXTENSION_KEY, MANIFEST_INFO_EXTENSION_KEY};
use store_api::region_engine::RegionManifestInfo;
use store_api::storage::consts::ReservedColumnId;
use store_api::storage::RegionId;
use tokio::sync::mpsc;
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::ddl::test_util::datanode_handler::{DatanodeWatcher, NaiveDatanodeHandler};
use crate::ddl::test_util::{
create_physical_table_metadata, test_create_logical_table_task, test_create_physical_table_task,
assert_column_name, create_physical_table_metadata, get_raw_table_info, test_column_metadatas,
test_create_logical_table_task, test_create_physical_table_task,
};
use crate::ddl::TableMetadata;
use crate::error::{Error, Result};
@@ -39,6 +42,54 @@ use crate::key::table_route::{PhysicalTableRouteValue, TableRouteValue};
use crate::rpc::router::{Region, RegionRoute};
use crate::test_util::{new_ddl_context, MockDatanodeManager};
fn make_creates_request_handler(
column_metadatas: Vec<ColumnMetadata>,
) -> impl Fn(Peer, RegionRequest) -> Result<RegionResponse> {
move |_peer, request| {
let _ = _peer;
if let region_request::Body::Creates(_) = request.body.unwrap() {
let mut response = RegionResponse::new(0);
// Default region id for physical table.
let region_id = RegionId::new(1024, 1);
response.extensions.insert(
MANIFEST_INFO_EXTENSION_KEY.to_string(),
RegionManifestInfo::encode_list(&[(
region_id,
RegionManifestInfo::metric(1, 0, 2, 0),
)])
.unwrap(),
);
response.extensions.insert(
ALTER_PHYSICAL_EXTENSION_KEY.to_string(),
ColumnMetadata::encode_list(&column_metadatas).unwrap(),
);
return Ok(response);
}
Ok(RegionResponse::new(0))
}
}
fn assert_creates_request(
peer: Peer,
request: RegionRequest,
expected_peer_id: u64,
expected_region_ids: &[RegionId],
) {
assert_eq!(peer.id, expected_peer_id);
let Some(region_request::Body::Creates(req)) = request.body else {
unreachable!();
};
for (i, region_id) in expected_region_ids.iter().enumerate() {
assert_eq!(
req.requests[i].region_id,
*region_id,
"actual region id: {}",
RegionId::from_u64(req.requests[i].region_id)
);
}
}
#[tokio::test]
async fn test_on_prepare_physical_table_not_found() {
let node_manager = Arc::new(MockDatanodeManager::new(()));
@@ -227,7 +278,12 @@ async fn test_on_prepare_part_logical_tables_exist() {
#[tokio::test]
async fn test_on_create_metadata() {
let node_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
common_telemetry::init_default_ut_logging();
let (tx, mut rx) = mpsc::channel(8);
let column_metadatas = test_column_metadatas(&["host", "cpu"]);
let datanode_handler =
DatanodeWatcher::new(tx).with_handler(make_creates_request_handler(column_metadatas));
let node_manager = Arc::new(MockDatanodeManager::new(datanode_handler));
let ddl_context = new_ddl_context(node_manager);
// Prepares physical table metadata.
let mut create_physical_table_task = test_create_physical_table_task("phy_table");
@@ -255,7 +311,7 @@ async fn test_on_create_metadata() {
let mut procedure = CreateLogicalTablesProcedure::new(
vec![task, yet_another_task],
physical_table_id,
ddl_context,
ddl_context.clone(),
);
let status = procedure.on_prepare().await.unwrap();
assert_matches!(
@@ -274,11 +330,42 @@ async fn test_on_create_metadata() {
let status = procedure.execute(&ctx).await.unwrap();
let table_ids = status.downcast_output_ref::<Vec<u32>>().unwrap();
assert_eq!(*table_ids, vec![1025, 1026]);
let (peer, request) = rx.try_recv().unwrap();
rx.try_recv().unwrap_err();
assert_creates_request(
peer,
request,
0,
&[RegionId::new(1025, 0), RegionId::new(1026, 0)],
);
let table_info = get_raw_table_info(&ddl_context, table_id).await;
assert_column_name(
&table_info,
&["ts", "value", "__table_id", "__tsid", "host", "cpu"],
);
assert_eq!(
table_info.meta.column_ids,
vec![
0,
1,
ReservedColumnId::table_id(),
ReservedColumnId::tsid(),
2,
3
]
);
}
#[tokio::test]
async fn test_on_create_metadata_part_logical_tables_exist() {
let node_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
common_telemetry::init_default_ut_logging();
let (tx, mut rx) = mpsc::channel(8);
let column_metadatas = test_column_metadatas(&["host", "cpu"]);
let datanode_handler =
DatanodeWatcher::new(tx).with_handler(make_creates_request_handler(column_metadatas));
let node_manager = Arc::new(MockDatanodeManager::new(datanode_handler));
let ddl_context = new_ddl_context(node_manager);
// Prepares physical table metadata.
let mut create_physical_table_task = test_create_physical_table_task("phy_table");
@@ -317,7 +404,7 @@ async fn test_on_create_metadata_part_logical_tables_exist() {
let mut procedure = CreateLogicalTablesProcedure::new(
vec![task, non_exist_task],
physical_table_id,
ddl_context,
ddl_context.clone(),
);
let status = procedure.on_prepare().await.unwrap();
assert_matches!(
@@ -336,6 +423,27 @@ async fn test_on_create_metadata_part_logical_tables_exist() {
let status = procedure.execute(&ctx).await.unwrap();
let table_ids = status.downcast_output_ref::<Vec<u32>>().unwrap();
assert_eq!(*table_ids, vec![8192, 1025]);
let (peer, request) = rx.try_recv().unwrap();
rx.try_recv().unwrap_err();
assert_creates_request(peer, request, 0, &[RegionId::new(1025, 0)]);
let table_info = get_raw_table_info(&ddl_context, table_id).await;
assert_column_name(
&table_info,
&["ts", "value", "__table_id", "__tsid", "host", "cpu"],
);
assert_eq!(
table_info.meta.column_ids,
vec![
0,
1,
ReservedColumnId::table_id(),
ReservedColumnId::tsid(),
2,
3
]
);
}
#[tokio::test]
@@ -399,27 +507,13 @@ async fn test_on_create_metadata_err() {
assert!(!error.is_retry_later());
}
fn creates_request_handler(_peer: Peer, request: RegionRequest) -> Result<RegionResponse> {
if let region_request::Body::Creates(_) = request.body.unwrap() {
let mut response = RegionResponse::new(0);
// Default region id for physical table.
let region_id = RegionId::new(1024, 1);
response.extensions.insert(
MANIFEST_INFO_EXTENSION_KEY.to_string(),
RegionManifestInfo::encode_list(&[(region_id, RegionManifestInfo::metric(1, 0, 2, 0))])
.unwrap(),
);
return Ok(response);
}
Ok(RegionResponse::new(0))
}
#[tokio::test]
async fn test_on_submit_create_request() {
common_telemetry::init_default_ut_logging();
let (tx, mut rx) = mpsc::channel(8);
let handler = DatanodeWatcher::new(tx).with_handler(creates_request_handler);
let column_metadatas = test_column_metadatas(&["host", "cpu"]);
let handler =
DatanodeWatcher::new(tx).with_handler(make_creates_request_handler(column_metadatas));
let node_manager = Arc::new(MockDatanodeManager::new(handler));
let ddl_context = new_ddl_context(node_manager);
let mut create_physical_table_task = test_create_physical_table_task("phy_table");

View File

@@ -16,7 +16,9 @@ use std::assert_matches::assert_matches;
use std::collections::HashMap;
use std::sync::Arc;
use api::v1::meta::Partition;
use api::region::RegionResponse;
use api::v1::meta::{Partition, Peer};
use api::v1::region::{region_request, RegionRequest};
use api::v1::{ColumnDataType, SemanticType};
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
@@ -24,7 +26,12 @@ use common_procedure::{Context as ProcedureContext, Procedure, ProcedureId, Stat
use common_procedure_test::{
execute_procedure_until, execute_procedure_until_done, MockContextProvider,
};
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::ColumnSchema;
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::TABLE_COLUMN_METADATA_EXTENSION_KEY;
use store_api::storage::RegionId;
use tokio::sync::mpsc;
use crate::ddl::create_table::{CreateTableProcedure, CreateTableState};
use crate::ddl::test_util::columns::TestColumnDefBuilder;
@@ -32,14 +39,73 @@ use crate::ddl::test_util::create_table::{
build_raw_table_info_from_expr, TestCreateTableExprBuilder,
};
use crate::ddl::test_util::datanode_handler::{
NaiveDatanodeHandler, RetryErrorDatanodeHandler, UnexpectedErrorDatanodeHandler,
DatanodeWatcher, NaiveDatanodeHandler, RetryErrorDatanodeHandler,
UnexpectedErrorDatanodeHandler,
};
use crate::error::Error;
use crate::ddl::test_util::{assert_column_name, get_raw_table_info};
use crate::error::{Error, Result};
use crate::key::table_route::TableRouteValue;
use crate::kv_backend::memory::MemoryKvBackend;
use crate::rpc::ddl::CreateTableTask;
use crate::test_util::{new_ddl_context, new_ddl_context_with_kv_backend, MockDatanodeManager};
fn create_request_handler(_peer: Peer, request: RegionRequest) -> Result<RegionResponse> {
let _ = _peer;
if let region_request::Body::Create(_) = request.body.unwrap() {
let mut response = RegionResponse::new(0);
response.extensions.insert(
TABLE_COLUMN_METADATA_EXTENSION_KEY.to_string(),
ColumnMetadata::encode_list(&[
ColumnMetadata {
column_schema: ColumnSchema::new(
"ts",
ConcreteDataType::timestamp_millisecond_datatype(),
false,
),
semantic_type: SemanticType::Timestamp,
column_id: 0,
},
ColumnMetadata {
column_schema: ColumnSchema::new(
"host",
ConcreteDataType::float64_datatype(),
false,
),
semantic_type: SemanticType::Tag,
column_id: 1,
},
ColumnMetadata {
column_schema: ColumnSchema::new(
"cpu",
ConcreteDataType::float64_datatype(),
false,
),
semantic_type: SemanticType::Tag,
column_id: 2,
},
])
.unwrap(),
);
return Ok(response);
}
Ok(RegionResponse::new(0))
}
fn assert_create_request(
peer: Peer,
request: RegionRequest,
expected_peer_id: u64,
expected_region_id: RegionId,
) {
assert_eq!(peer.id, expected_peer_id);
let Some(region_request::Body::Create(req)) = request.body else {
unreachable!();
};
assert_eq!(req.region_id, expected_region_id);
}
pub(crate) fn test_create_table_task(name: &str) -> CreateTableTask {
let create_table = TestCreateTableExprBuilder::default()
.column_defs([
@@ -230,11 +296,13 @@ async fn test_on_create_metadata_error() {
#[tokio::test]
async fn test_on_create_metadata() {
common_telemetry::init_default_ut_logging();
let node_manager = Arc::new(MockDatanodeManager::new(NaiveDatanodeHandler));
let (tx, mut rx) = mpsc::channel(8);
let datanode_handler = DatanodeWatcher::new(tx).with_handler(create_request_handler);
let node_manager = Arc::new(MockDatanodeManager::new(datanode_handler));
let ddl_context = new_ddl_context(node_manager);
let task = test_create_table_task("foo");
assert!(!task.create_table.create_if_not_exists);
let mut procedure = CreateTableProcedure::new(task, ddl_context);
let mut procedure = CreateTableProcedure::new(task, ddl_context.clone());
procedure.on_prepare().await.unwrap();
let ctx = ProcedureContext {
procedure_id: ProcedureId::random(),
@@ -243,8 +311,16 @@ async fn test_on_create_metadata() {
procedure.execute(&ctx).await.unwrap();
// Triggers procedure to create table metadata
let status = procedure.execute(&ctx).await.unwrap();
let table_id = status.downcast_output_ref::<u32>().unwrap();
assert_eq!(*table_id, 1024);
let table_id = *status.downcast_output_ref::<u32>().unwrap();
assert_eq!(table_id, 1024);
let (peer, request) = rx.try_recv().unwrap();
rx.try_recv().unwrap_err();
assert_create_request(peer, request, 0, RegionId::new(table_id, 0));
let table_info = get_raw_table_info(&ddl_context, table_id).await;
assert_column_name(&table_info, &["ts", "host", "cpu"]);
assert_eq!(table_info.meta.column_ids, vec![0, 1, 2]);
}
#[tokio::test]

View File

@@ -29,6 +29,7 @@ use common_telemetry::{error, info, warn};
use common_wal::options::WalOptions;
use futures::future::join_all;
use snafu::{ensure, OptionExt, ResultExt};
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::{LOGICAL_TABLE_METADATA_KEY, MANIFEST_INFO_EXTENSION_KEY};
use store_api::region_engine::RegionManifestInfo;
use store_api::storage::{RegionId, RegionNumber};
@@ -37,8 +38,8 @@ use table::table_reference::TableReference;
use crate::ddl::{DdlContext, DetectingRegion};
use crate::error::{
self, Error, OperateDatanodeSnafu, ParseWalOptionsSnafu, Result, TableNotFoundSnafu,
UnsupportedSnafu,
self, DecodeJsonSnafu, Error, MetadataCorruptionSnafu, OperateDatanodeSnafu,
ParseWalOptionsSnafu, Result, TableNotFoundSnafu, UnsupportedSnafu,
};
use crate::key::datanode_table::DatanodeTableValue;
use crate::key::table_name::TableNameKey;
@@ -314,11 +315,23 @@ pub fn parse_manifest_infos_from_extensions(
Ok(data_manifest_version)
}
/// Parses column metadatas from extensions.
pub fn parse_column_metadatas(
extensions: &HashMap<String, Vec<u8>>,
key: &str,
) -> Result<Vec<ColumnMetadata>> {
let value = extensions.get(key).context(error::UnexpectedSnafu {
err_msg: format!("column metadata extension not found: {}", key),
})?;
let column_metadatas = ColumnMetadata::decode_list(value).context(error::SerdeJsonSnafu {})?;
Ok(column_metadatas)
}
/// Sync follower regions on datanodes.
pub async fn sync_follower_regions(
context: &DdlContext,
table_id: TableId,
results: Vec<RegionResponse>,
results: &[RegionResponse],
region_routes: &[RegionRoute],
engine: &str,
) -> Result<()> {
@@ -331,7 +344,7 @@ pub async fn sync_follower_regions(
}
let results = results
.into_iter()
.iter()
.map(|response| parse_manifest_infos_from_extensions(&response.extensions))
.collect::<Result<Vec<_>>>()?
.into_iter()
@@ -418,6 +431,38 @@ pub async fn sync_follower_regions(
Ok(())
}
/// Extracts column metadatas from extensions.
pub fn extract_column_metadatas(
results: &mut [RegionResponse],
key: &str,
) -> Result<Option<Vec<ColumnMetadata>>> {
let schemas = results
.iter_mut()
.map(|r| r.extensions.remove(key))
.collect::<Vec<_>>();
if schemas.is_empty() {
return Ok(None);
}
// Verify all the physical schemas are the same
// Safety: previous check ensures this vec is not empty
let first = schemas.first().unwrap();
ensure!(
schemas.iter().all(|x| x == first),
MetadataCorruptionSnafu {
err_msg: "The table column metadata schemas from datanodes are not the same."
}
);
if let Some(first) = first {
let column_metadatas = ColumnMetadata::decode_list(first).context(DecodeJsonSnafu)?;
Ok(Some(column_metadatas))
} else {
Ok(None)
}
}
#[cfg(test)]
mod tests {
use super::*;
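A minimal sketch of the happy path for `extract_column_metadatas`, built only from types that appear in these hunks: when every datanode returns the same encoded list under the extension key, the decoded metadata comes back as `Some`.
let column_metadatas = vec![ColumnMetadata {
    column_schema: ColumnSchema::new(
        "ts",
        ConcreteDataType::timestamp_millisecond_datatype(),
        false,
    ),
    semantic_type: SemanticType::Timestamp,
    column_id: 0,
}];
let encoded = ColumnMetadata::encode_list(&column_metadatas).unwrap();
// Two datanodes report identical schemas, so the consistency check passes.
let mut results = vec![RegionResponse::new(0), RegionResponse::new(0)];
for response in &mut results {
    response
        .extensions
        .insert(ALTER_PHYSICAL_EXTENSION_KEY.to_string(), encoded.clone());
}
let extracted = extract_column_metadatas(&mut results, ALTER_PHYSICAL_EXTENSION_KEY)
    .unwrap()
    .unwrap();
assert_eq!(extracted.len(), 1);
assert_eq!(extracted[0].column_schema.name, "ts");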

View File

@@ -995,6 +995,7 @@ mod tests {
Default::default(),
state_store,
poison_manager,
None,
));
let _ = DdlManager::try_new(

View File

@@ -100,8 +100,8 @@
pub mod catalog_name;
pub mod datanode_table;
pub mod flow;
pub mod maintenance;
pub mod node_address;
pub mod runtime_switch;
mod schema_metadata_manager;
pub mod schema_name;
pub mod table_info;
@@ -164,7 +164,9 @@ use crate::state_store::PoisonValue;
use crate::DatanodeId;
pub const NAME_PATTERN: &str = r"[a-zA-Z_:-][a-zA-Z0-9_:\-\.@#]*";
pub const MAINTENANCE_KEY: &str = "__maintenance";
pub const LEGACY_MAINTENANCE_KEY: &str = "__maintenance";
pub const MAINTENANCE_KEY: &str = "__switches/maintenance";
pub const PAUSE_PROCEDURE_KEY: &str = "__switches/pause_procedure";
pub const DATANODE_TABLE_KEY_PREFIX: &str = "__dn_table";
pub const TABLE_INFO_KEY_PREFIX: &str = "__table_info";

View File

@@ -1,86 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use crate::error::Result;
use crate::key::MAINTENANCE_KEY;
use crate::kv_backend::KvBackendRef;
use crate::rpc::store::PutRequest;
pub type MaintenanceModeManagerRef = Arc<MaintenanceModeManager>;
/// The maintenance mode manager.
///
/// Used to enable or disable maintenance mode.
#[derive(Clone)]
pub struct MaintenanceModeManager {
kv_backend: KvBackendRef,
}
impl MaintenanceModeManager {
pub fn new(kv_backend: KvBackendRef) -> Self {
Self { kv_backend }
}
/// Enables maintenance mode.
pub async fn set_maintenance_mode(&self) -> Result<()> {
let req = PutRequest {
key: Vec::from(MAINTENANCE_KEY),
value: vec![],
prev_kv: false,
};
self.kv_backend.put(req).await?;
Ok(())
}
/// Unsets maintenance mode.
pub async fn unset_maintenance_mode(&self) -> Result<()> {
self.kv_backend
.delete(MAINTENANCE_KEY.as_bytes(), false)
.await?;
Ok(())
}
/// Returns true if maintenance mode is enabled.
pub async fn maintenance_mode(&self) -> Result<bool> {
self.kv_backend.exists(MAINTENANCE_KEY.as_bytes()).await
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use crate::key::maintenance::MaintenanceModeManager;
use crate::kv_backend::memory::MemoryKvBackend;
#[tokio::test]
async fn test_maintenance_mode_manager() {
let maintenance_mode_manager = Arc::new(MaintenanceModeManager::new(Arc::new(
MemoryKvBackend::new(),
)));
assert!(!maintenance_mode_manager.maintenance_mode().await.unwrap());
maintenance_mode_manager
.set_maintenance_mode()
.await
.unwrap();
assert!(maintenance_mode_manager.maintenance_mode().await.unwrap());
maintenance_mode_manager
.unset_maintenance_mode()
.await
.unwrap();
assert!(!maintenance_mode_manager.maintenance_mode().await.unwrap());
}
}

View File

@@ -0,0 +1,224 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use std::time::Duration;
use common_error::ext::BoxedError;
use common_procedure::local::PauseAware;
use moka::future::Cache;
use snafu::ResultExt;
use crate::error::{GetCacheSnafu, Result};
use crate::key::{LEGACY_MAINTENANCE_KEY, MAINTENANCE_KEY, PAUSE_PROCEDURE_KEY};
use crate::kv_backend::KvBackendRef;
use crate::rpc::store::{BatchDeleteRequest, PutRequest};
pub type RuntimeSwitchManagerRef = Arc<RuntimeSwitchManager>;
/// The runtime switch manager.
///
/// Used to enable or disable runtime switches.
#[derive(Clone)]
pub struct RuntimeSwitchManager {
kv_backend: KvBackendRef,
cache: Cache<Vec<u8>, Option<Vec<u8>>>,
}
#[async_trait::async_trait]
impl PauseAware for RuntimeSwitchManager {
async fn is_paused(&self) -> std::result::Result<bool, BoxedError> {
self.is_procedure_paused().await.map_err(BoxedError::new)
}
}
const CACHE_TTL: Duration = Duration::from_secs(10);
const MAX_CAPACITY: u64 = 32;
impl RuntimeSwitchManager {
pub fn new(kv_backend: KvBackendRef) -> Self {
let cache = Cache::builder()
.time_to_live(CACHE_TTL)
.max_capacity(MAX_CAPACITY)
.build();
Self { kv_backend, cache }
}
async fn put_key(&self, key: &str) -> Result<()> {
let req = PutRequest {
key: Vec::from(key),
value: vec![],
prev_kv: false,
};
self.kv_backend.put(req).await?;
self.cache.invalidate(key.as_bytes()).await;
Ok(())
}
async fn delete_keys(&self, keys: &[&str]) -> Result<()> {
let req = BatchDeleteRequest::new()
.with_keys(keys.iter().map(|x| x.as_bytes().to_vec()).collect());
self.kv_backend.batch_delete(req).await?;
for key in keys {
self.cache.invalidate(key.as_bytes()).await;
}
Ok(())
}
/// Returns true if the key exists.
async fn exists(&self, key: &str) -> Result<bool> {
let key = key.as_bytes().to_vec();
let kv_backend = self.kv_backend.clone();
let value = self
.cache
.try_get_with(key.clone(), async move {
kv_backend.get(&key).await.map(|v| v.map(|v| v.value))
})
.await
.context(GetCacheSnafu)?;
Ok(value.is_some())
}
/// Enables maintenance mode.
pub async fn set_maintenance_mode(&self) -> Result<()> {
self.put_key(MAINTENANCE_KEY).await
}
/// Unsets maintenance mode.
pub async fn unset_maintenance_mode(&self) -> Result<()> {
self.delete_keys(&[MAINTENANCE_KEY, LEGACY_MAINTENANCE_KEY])
.await
}
/// Returns true if maintenance mode is enabled.
pub async fn maintenance_mode(&self) -> Result<bool> {
let exists = self.exists(MAINTENANCE_KEY).await?;
if exists {
return Ok(true);
}
let exists = self.exists(LEGACY_MAINTENANCE_KEY).await?;
if exists {
return Ok(true);
}
Ok(false)
}
/// Pauses handling of incoming procedure requests.
pub async fn pasue_procedure(&self) -> Result<()> {
self.put_key(PAUSE_PROCEDURE_KEY).await
}
/// Resumes processing of incoming procedure requests.
pub async fn resume_procedure(&self) -> Result<()> {
self.delete_keys(&[PAUSE_PROCEDURE_KEY]).await
}
/// Returns true if the system is currently pausing incoming procedure requests.
pub async fn is_procedure_paused(&self) -> Result<bool> {
self.exists(PAUSE_PROCEDURE_KEY).await
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use crate::key::runtime_switch::RuntimeSwitchManager;
use crate::key::{LEGACY_MAINTENANCE_KEY, MAINTENANCE_KEY};
use crate::kv_backend::memory::MemoryKvBackend;
use crate::kv_backend::KvBackend;
use crate::rpc::store::PutRequest;
#[tokio::test]
async fn test_runtime_switch_manager_basic() {
let runtime_switch_manager =
Arc::new(RuntimeSwitchManager::new(Arc::new(MemoryKvBackend::new())));
runtime_switch_manager
.put_key(MAINTENANCE_KEY)
.await
.unwrap();
let v = runtime_switch_manager
.cache
.get(MAINTENANCE_KEY.as_bytes())
.await;
assert!(v.is_none());
runtime_switch_manager
.exists(MAINTENANCE_KEY)
.await
.unwrap();
let v = runtime_switch_manager
.cache
.get(MAINTENANCE_KEY.as_bytes())
.await;
assert!(v.is_some());
runtime_switch_manager
.delete_keys(&[MAINTENANCE_KEY])
.await
.unwrap();
let v = runtime_switch_manager
.cache
.get(MAINTENANCE_KEY.as_bytes())
.await;
assert!(v.is_none());
}
#[tokio::test]
async fn test_runtime_switch_manager() {
let runtime_switch_manager =
Arc::new(RuntimeSwitchManager::new(Arc::new(MemoryKvBackend::new())));
assert!(!runtime_switch_manager.maintenance_mode().await.unwrap());
runtime_switch_manager.set_maintenance_mode().await.unwrap();
assert!(runtime_switch_manager.maintenance_mode().await.unwrap());
runtime_switch_manager
.unset_maintenance_mode()
.await
.unwrap();
assert!(!runtime_switch_manager.maintenance_mode().await.unwrap());
}
#[tokio::test]
async fn test_runtime_switch_manager_with_legacy_key() {
let kv_backend = Arc::new(MemoryKvBackend::new());
kv_backend
.put(PutRequest {
key: Vec::from(LEGACY_MAINTENANCE_KEY),
value: vec![],
prev_kv: false,
})
.await
.unwrap();
let runtime_switch_manager = Arc::new(RuntimeSwitchManager::new(kv_backend));
assert!(runtime_switch_manager.maintenance_mode().await.unwrap());
runtime_switch_manager
.unset_maintenance_mode()
.await
.unwrap();
assert!(!runtime_switch_manager.maintenance_mode().await.unwrap());
runtime_switch_manager.set_maintenance_mode().await.unwrap();
assert!(runtime_switch_manager.maintenance_mode().await.unwrap());
}
#[tokio::test]
async fn test_pasue_procedure() {
let runtime_switch_manager =
Arc::new(RuntimeSwitchManager::new(Arc::new(MemoryKvBackend::new())));
runtime_switch_manager.pasue_procedure().await.unwrap();
assert!(runtime_switch_manager.is_procedure_paused().await.unwrap());
runtime_switch_manager.resume_procedure().await.unwrap();
assert!(!runtime_switch_manager.is_procedure_paused().await.unwrap());
}
}
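One detail the tests above only touch indirectly: enabling the switches writes the new `__switches/...` keys, while the legacy `__maintenance` key is only ever read (and cleaned up) for backward compatibility. A minimal sketch against a `MemoryKvBackend`:
let kv_backend = Arc::new(MemoryKvBackend::new());
let manager = RuntimeSwitchManager::new(kv_backend.clone());
manager.set_maintenance_mode().await.unwrap();
manager.pasue_procedure().await.unwrap();
// Both switches land under the "__switches/" prefix ...
assert!(kv_backend.exists(MAINTENANCE_KEY.as_bytes()).await.unwrap());
assert!(kv_backend.exists(PAUSE_PROCEDURE_KEY.as_bytes()).await.unwrap());
// ... while the legacy key is read and deleted, but never written.
assert!(!kv_backend.exists(LEGACY_MAINTENANCE_KEY.as_bytes()).await.unwrap());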

View File

@@ -334,6 +334,7 @@ mod tests {
options: Default::default(),
region_numbers: vec![1],
partition_key_indices: vec![],
column_ids: vec![],
};
RawTableInfo {

View File

@@ -14,13 +14,14 @@
use std::collections::HashMap;
use common_telemetry::debug;
use snafu::ensure;
use crate::error::{self, Result};
use crate::key::txn_helper::TxnOpGetResponseSet;
use crate::kv_backend::txn::{Compare, CompareOp, Txn, TxnOp};
use crate::kv_backend::KvBackendRef;
use crate::rpc::store::BatchGetRequest;
use crate::rpc::store::{BatchDeleteRequest, BatchGetRequest};
/// [TombstoneManager] provides the ability to:
/// - logically delete values
@@ -28,6 +29,9 @@ use crate::rpc::store::BatchGetRequest;
pub struct TombstoneManager {
kv_backend: KvBackendRef,
tombstone_prefix: String,
// Only used for testing.
#[cfg(test)]
max_txn_ops: Option<usize>,
}
const TOMBSTONE_PREFIX: &str = "__tombstone/";
@@ -35,10 +39,7 @@ const TOMBSTONE_PREFIX: &str = "__tombstone/";
impl TombstoneManager {
/// Returns [TombstoneManager].
pub fn new(kv_backend: KvBackendRef) -> Self {
Self {
kv_backend,
tombstone_prefix: TOMBSTONE_PREFIX.to_string(),
}
Self::new_with_prefix(kv_backend, TOMBSTONE_PREFIX)
}
/// Returns [TombstoneManager] with a custom tombstone prefix.
@@ -46,6 +47,8 @@ impl TombstoneManager {
Self {
kv_backend,
tombstone_prefix: prefix.to_string(),
#[cfg(test)]
max_txn_ops: None,
}
}
@@ -53,6 +56,11 @@ impl TombstoneManager {
[self.tombstone_prefix.as_bytes(), key].concat()
}
#[cfg(test)]
pub fn set_max_txn_ops(&mut self, max_txn_ops: usize) {
self.max_txn_ops = Some(max_txn_ops);
}
/// Moves value to `dest_key`.
///
/// Puts `value` to `dest_key` if the value of `src_key` equals `value`.
@@ -83,7 +91,11 @@ impl TombstoneManager {
ensure!(
keys.len() == dest_keys.len(),
error::UnexpectedSnafu {
err_msg: "The length of keys does not match the length of dest_keys."
err_msg: format!(
"The length of keys({}) does not match the length of dest_keys({}).",
keys.len(),
dest_keys.len()
),
}
);
// The key -> dest key mapping.
@@ -136,19 +148,45 @@ impl TombstoneManager {
.fail()
}
fn max_txn_ops(&self) -> usize {
#[cfg(test)]
if let Some(max_txn_ops) = self.max_txn_ops {
return max_txn_ops;
}
self.kv_backend.max_txn_ops()
}
/// Moves values to `dest_key`.
///
/// Returns the number of keys that were moved.
async fn move_values(&self, keys: Vec<Vec<u8>>, dest_keys: Vec<Vec<u8>>) -> Result<usize> {
let chunk_size = self.kv_backend.max_txn_ops() / 2;
if keys.len() > chunk_size {
let keys_chunks = keys.chunks(chunk_size).collect::<Vec<_>>();
let dest_keys_chunks = keys.chunks(chunk_size).collect::<Vec<_>>();
for (keys, dest_keys) in keys_chunks.into_iter().zip(dest_keys_chunks) {
self.move_values_inner(keys, dest_keys).await?;
ensure!(
keys.len() == dest_keys.len(),
error::UnexpectedSnafu {
err_msg: format!(
"The length of keys({}) does not match the length of dest_keys({}).",
keys.len(),
dest_keys.len()
),
}
Ok(keys.len())
);
if keys.is_empty() {
return Ok(0);
}
let chunk_size = self.max_txn_ops() / 2;
if keys.len() > chunk_size {
debug!(
"Moving values with multiple chunks, keys len: {}, chunk_size: {}",
keys.len(),
chunk_size
);
let mut moved_keys = 0;
let keys_chunks = keys.chunks(chunk_size).collect::<Vec<_>>();
let dest_keys_chunks = dest_keys.chunks(chunk_size).collect::<Vec<_>>();
for (keys, dest_keys) in keys_chunks.into_iter().zip(dest_keys_chunks) {
moved_keys += self.move_values_inner(keys, dest_keys).await?;
}
Ok(moved_keys)
} else {
self.move_values_inner(&keys, &dest_keys).await
}
@@ -196,15 +234,18 @@ impl TombstoneManager {
///
/// Returns the number of keys that were deleted.
pub async fn delete(&self, keys: Vec<Vec<u8>>) -> Result<usize> {
let operations = keys
let keys = keys
.iter()
.map(|key| TxnOp::Delete(self.to_tombstone(key)))
.map(|key| self.to_tombstone(key))
.collect::<Vec<_>>();
let txn = Txn::new().and_then(operations);
// Always success.
let _ = self.kv_backend.txn(txn).await?;
Ok(keys.len())
let num_keys = keys.len();
let _ = self
.kv_backend
.batch_delete(BatchDeleteRequest::new().with_keys(keys))
.await?;
Ok(num_keys)
}
}
@@ -392,16 +433,73 @@ mod tests {
.into_iter()
.map(|kv| (kv.key, kv.dest_key))
.unzip();
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
assert_eq!(kvs.len(), moved_keys);
check_moved_values(kv_backend.clone(), &move_values).await;
// Moves again
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
assert_eq!(0, moved_keys);
check_moved_values(kv_backend.clone(), &move_values).await;
}
#[tokio::test]
async fn test_move_values_with_max_txn_ops() {
common_telemetry::init_default_ut_logging();
let kv_backend = Arc::new(MemoryKvBackend::default());
let mut tombstone_manager = TombstoneManager::new(kv_backend.clone());
tombstone_manager.set_max_txn_ops(4);
let kvs = HashMap::from([
(b"bar".to_vec(), b"baz".to_vec()),
(b"foo".to_vec(), b"hi".to_vec()),
(b"baz".to_vec(), b"hello".to_vec()),
(b"qux".to_vec(), b"world".to_vec()),
(b"quux".to_vec(), b"world".to_vec()),
(b"quuux".to_vec(), b"world".to_vec()),
(b"quuuux".to_vec(), b"world".to_vec()),
(b"quuuuux".to_vec(), b"world".to_vec()),
(b"quuuuuux".to_vec(), b"world".to_vec()),
]);
for (key, value) in &kvs {
kv_backend
.put(
PutRequest::new()
.with_key(key.clone())
.with_value(value.clone()),
)
.await
.unwrap();
}
let move_values = kvs
.iter()
.map(|(key, value)| MoveValue {
key: key.clone(),
dest_key: tombstone_manager.to_tombstone(key),
value: value.clone(),
})
.collect::<Vec<_>>();
let (keys, dest_keys): (Vec<_>, Vec<_>) = move_values
.clone()
.into_iter()
.map(|kv| (kv.key, kv.dest_key))
.unzip();
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
assert_eq!(kvs.len(), moved_keys);
check_moved_values(kv_backend.clone(), &move_values).await;
// Moves again
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
assert_eq!(0, moved_keys);
check_moved_values(kv_backend.clone(), &move_values).await;
}
@@ -439,17 +537,19 @@ mod tests {
.unzip();
keys.push(b"non-exists".to_vec());
dest_keys.push(b"hi/non-exists".to_vec());
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
check_moved_values(kv_backend.clone(), &move_values).await;
assert_eq!(3, moved_keys);
// Moves again
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys.clone(), dest_keys.clone())
.await
.unwrap();
check_moved_values(kv_backend.clone(), &move_values).await;
assert_eq!(0, moved_keys);
}
#[tokio::test]
@@ -490,10 +590,11 @@ mod tests {
.into_iter()
.map(|kv| (kv.key, kv.dest_key))
.unzip();
tombstone_manager
let moved_keys = tombstone_manager
.move_values(keys, dest_keys)
.await
.unwrap();
assert_eq!(kvs.len(), moved_keys);
}
#[tokio::test]
@@ -571,4 +672,24 @@ mod tests {
.unwrap();
check_moved_values(kv_backend.clone(), &move_values).await;
}
#[tokio::test]
async fn test_move_values_with_different_lengths() {
let kv_backend = Arc::new(MemoryKvBackend::default());
let tombstone_manager = TombstoneManager::new(kv_backend.clone());
let keys = vec![b"bar".to_vec(), b"foo".to_vec()];
let dest_keys = vec![b"bar".to_vec(), b"foo".to_vec(), b"baz".to_vec()];
let err = tombstone_manager
.move_values(keys, dest_keys)
.await
.unwrap_err();
assert!(err
.to_string()
.contains("The length of keys(2) does not match the length of dest_keys(3)."),);
let moved_keys = tombstone_manager.move_values(vec![], vec![]).await.unwrap();
assert_eq!(0, moved_keys);
}
}
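The chunking arithmetic in `move_values`, spelled out: the chunk size is half of the backend's `max_txn_ops` limit (each key pair is assumed to consume two transaction operations, the put to the tombstone key and the delete of the source), and the key pairs are split accordingly.
// With a backend limit of 128 txn operations, each chunk carries 64 key pairs,
// so 1000 pairs are handled in 16 `move_values_inner` calls.
let max_txn_ops: usize = 128;
let chunk_size = max_txn_ops / 2;
let chunks = 1000_usize.div_ceil(chunk_size);
assert_eq!(chunk_size, 64);
assert_eq!(chunks, 16);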

View File

@@ -1374,6 +1374,7 @@ mod tests {
options: Default::default(),
created_on: Default::default(),
partition_key_indices: Default::default(),
column_ids: Default::default(),
};
// construct RawTableInfo

View File

@@ -28,6 +28,19 @@ use crate::PoisonKey;
#[snafu(visibility(pub))]
#[stack_trace_debug]
pub enum Error {
#[snafu(display("Failed to check procedure manager status"))]
CheckStatus {
source: BoxedError,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Manager is pasued"))]
ManagerPasued {
#[snafu(implicit)]
location: Location,
},
#[snafu(display(
"Failed to execute procedure due to external error, clean poisons: {}",
clean_poisons
@@ -246,7 +259,8 @@ impl ErrorExt for Error {
| Error::ListState { source, .. }
| Error::PutPoison { source, .. }
| Error::DeletePoison { source, .. }
| Error::GetPoison { source, .. } => source.status_code(),
| Error::GetPoison { source, .. }
| Error::CheckStatus { source, .. } => source.status_code(),
Error::ToJson { .. }
| Error::DeleteState { .. }
@@ -259,7 +273,8 @@ impl ErrorExt for Error {
Error::RetryTimesExceeded { .. }
| Error::RollbackTimesExceeded { .. }
| Error::ManagerNotStart { .. } => StatusCode::IllegalState,
| Error::ManagerNotStart { .. }
| Error::ManagerPasued { .. } => StatusCode::IllegalState,
Error::RollbackNotSupported { .. } => StatusCode::Unsupported,
Error::LoaderConflict { .. } | Error::DuplicateProcedure { .. } => {

View File

@@ -22,6 +22,7 @@ use std::time::{Duration, Instant};
use async_trait::async_trait;
use backon::ExponentialBuilder;
use common_error::ext::BoxedError;
use common_runtime::{RepeatedTask, TaskFunction};
use common_telemetry::tracing_context::{FutureExt, TracingContext};
use common_telemetry::{error, info, tracing};
@@ -30,9 +31,10 @@ use tokio::sync::watch::{self, Receiver, Sender};
use tokio::sync::{Mutex as TokioMutex, Notify};
use crate::error::{
self, DuplicateProcedureSnafu, Error, LoaderConflictSnafu, ManagerNotStartSnafu,
PoisonKeyNotDefinedSnafu, ProcedureNotFoundSnafu, Result, StartRemoveOutdatedMetaTaskSnafu,
StopRemoveOutdatedMetaTaskSnafu, TooManyRunningProceduresSnafu,
self, CheckStatusSnafu, DuplicateProcedureSnafu, Error, LoaderConflictSnafu,
ManagerNotStartSnafu, ManagerPasuedSnafu, PoisonKeyNotDefinedSnafu, ProcedureNotFoundSnafu,
Result, StartRemoveOutdatedMetaTaskSnafu, StopRemoveOutdatedMetaTaskSnafu,
TooManyRunningProceduresSnafu,
};
use crate::local::runner::Runner;
use crate::procedure::{BoxedProcedureLoader, InitProcedureState, PoisonKeys, ProcedureInfo};
@@ -522,6 +524,14 @@ impl Default for ManagerConfig {
}
}
type PauseAwareRef = Arc<dyn PauseAware>;
#[async_trait]
pub trait PauseAware: Send + Sync {
/// Returns true if the procedure manager is paused.
async fn is_paused(&self) -> std::result::Result<bool, BoxedError>;
}
/// A [ProcedureManager] that maintains procedure states locally.
pub struct LocalManager {
manager_ctx: Arc<ManagerContext>,
@@ -531,6 +541,7 @@ pub struct LocalManager {
/// GC task.
remove_outdated_meta_task: TokioMutex<Option<RepeatedTask<Error>>>,
config: ManagerConfig,
pause_aware: Option<PauseAwareRef>,
}
impl LocalManager {
@@ -539,6 +550,7 @@ impl LocalManager {
config: ManagerConfig,
state_store: StateStoreRef,
poison_store: PoisonStoreRef,
pause_aware: Option<PauseAwareRef>,
) -> LocalManager {
let manager_ctx = Arc::new(ManagerContext::new(poison_store));
@@ -549,6 +561,7 @@ impl LocalManager {
retry_delay: config.retry_delay,
remove_outdated_meta_task: TokioMutex::new(None),
config,
pause_aware,
}
}
@@ -719,6 +732,17 @@ impl LocalManager {
let loaders = self.manager_ctx.loaders.lock().unwrap();
loaders.contains_key(name)
}
async fn check_status(&self) -> Result<()> {
if let Some(pause_aware) = self.pause_aware.as_ref() {
ensure!(
!pause_aware.is_paused().await.context(CheckStatusSnafu)?,
ManagerPasuedSnafu
);
}
Ok(())
}
}
#[async_trait]
@@ -774,6 +798,7 @@ impl ProcedureManager for LocalManager {
!self.manager_ctx.contains_procedure(procedure_id),
DuplicateProcedureSnafu { procedure_id }
);
self.check_status().await?;
self.submit_root(
procedure.id,
@@ -979,7 +1004,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.manager_ctx.start();
manager
@@ -1004,7 +1029,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(object_store.clone()));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.manager_ctx.start();
manager
@@ -1058,7 +1083,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.manager_ctx.start();
let procedure_id = ProcedureId::random();
@@ -1110,7 +1135,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.manager_ctx.start();
#[derive(Debug)]
@@ -1191,7 +1216,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single_exclusive("test.submit");
@@ -1219,7 +1244,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.start().await.unwrap();
manager.stop().await.unwrap();
@@ -1256,7 +1281,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(object_store.clone()));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.manager_ctx.set_running();
let mut procedure = ProcedureToLoad::new("submit");
@@ -1338,7 +1363,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.manager_ctx.set_running();
manager
@@ -1463,7 +1488,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(object_store.clone()));
let poison_manager = Arc::new(InMemoryPoisonStore::new());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.manager_ctx.start();
let notify = Arc::new(Notify::new());
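The new `pause_aware` argument only needs something implementing the small `PauseAware` trait above; the tests here keep passing `None`, and `RuntimeSwitchManager` implements the trait, presumably as the switch wired in outside of tests. A minimal sketch with an illustrative `AtomicBool`-backed switch (the type is hypothetical, not part of the change):
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

use async_trait::async_trait;
use common_error::ext::BoxedError;

/// Illustrative stand-in for a real switch such as `RuntimeSwitchManager`.
struct StaticSwitch(AtomicBool);

#[async_trait]
impl PauseAware for StaticSwitch {
    async fn is_paused(&self) -> std::result::Result<bool, BoxedError> {
        Ok(self.0.load(Ordering::Relaxed))
    }
}

// Passed as `Some(...)` in the fourth argument of `LocalManager::new` (the tests
// above pass `None`), the switch makes `check_status` reject new submissions with
// the `ManagerPasued` error while it is on.
let switch: Arc<dyn PauseAware> = Arc::new(StaticSwitch(AtomicBool::new(true)));
assert!(switch.is_paused().await.unwrap());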

View File

@@ -83,7 +83,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let poison_manager = Arc::new(InMemoryPoisonStore::default());
let manager = LocalManager::new(config, state_store, poison_manager);
let manager = LocalManager::new(config, state_store, poison_manager, None);
manager.start().await.unwrap();
#[derive(Debug)]

View File

@@ -178,8 +178,6 @@ pub enum Error {
StreamTimeout {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: tokio::time::error::Elapsed,
},
#[snafu(display("RecordBatch slice index overflow: {visit_index} > {size}"))]

View File

@@ -475,7 +475,7 @@ mod test {
async fn region_alive_keeper() {
common_telemetry::init_default_ut_logging();
let mut region_server = mock_region_server();
let mut engine_env = TestEnv::with_prefix("region-alive-keeper");
let mut engine_env = TestEnv::with_prefix("region-alive-keeper").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
let engine = Arc::new(engine);
region_server.register_engine(engine.clone());

View File

@@ -40,7 +40,7 @@ use log_store::raft_engine::log_store::RaftEngineLogStore;
use meta_client::MetaClientRef;
use metric_engine::engine::MetricEngine;
use mito2::config::MitoConfig;
use mito2::engine::MitoEngine;
use mito2::engine::{MitoEngine, MitoEngineBuilder};
use object_store::manager::{ObjectStoreManager, ObjectStoreManagerRef};
use object_store::util::normalize_dir;
use query::dummy_catalog::TableProviderFactoryRef;
@@ -162,6 +162,8 @@ pub struct DatanodeBuilder {
meta_client: Option<MetaClientRef>,
kv_backend: KvBackendRef,
cache_registry: Option<Arc<LayeredCacheRegistry>>,
#[cfg(feature = "enterprise")]
extension_range_provider_factory: Option<mito2::extension::BoxedExtensionRangeProviderFactory>,
}
impl DatanodeBuilder {
@@ -173,6 +175,8 @@ impl DatanodeBuilder {
meta_client: None,
kv_backend,
cache_registry: None,
#[cfg(feature = "enterprise")]
extension_range_provider_factory: None,
}
}
@@ -199,6 +203,15 @@ impl DatanodeBuilder {
self
}
#[cfg(feature = "enterprise")]
pub fn with_extension_range_provider(
&mut self,
extension_range_provider_factory: mito2::extension::BoxedExtensionRangeProviderFactory,
) -> &mut Self {
self.extension_range_provider_factory = Some(extension_range_provider_factory);
self
}
pub async fn build(mut self) -> Result<Datanode> {
let node_id = self.opts.node_id.context(MissingNodeIdSnafu)?;
@@ -340,7 +353,7 @@ impl DatanodeBuilder {
}
async fn new_region_server(
&self,
&mut self,
schema_metadata_manager: SchemaMetadataManagerRef,
event_listener: RegionServerEventListenerRef,
) -> Result<RegionServer> {
@@ -376,13 +389,13 @@ impl DatanodeBuilder {
);
let object_store_manager = Self::build_object_store_manager(&opts.storage).await?;
let engines = Self::build_store_engines(
opts,
object_store_manager,
schema_metadata_manager,
self.plugins.clone(),
)
.await?;
let engines = self
.build_store_engines(
object_store_manager,
schema_metadata_manager,
self.plugins.clone(),
)
.await?;
for engine in engines {
region_server.register_engine(engine);
}
@@ -394,7 +407,7 @@ impl DatanodeBuilder {
/// Builds [RegionEngineRef] from `store_engine` section in `opts`
async fn build_store_engines(
opts: &DatanodeOptions,
&mut self,
object_store_manager: ObjectStoreManagerRef,
schema_metadata_manager: SchemaMetadataManagerRef,
plugins: Plugins,
@@ -403,7 +416,7 @@ impl DatanodeBuilder {
let mut mito_engine_config = MitoConfig::default();
let mut file_engine_config = file_engine::config::EngineConfig::default();
for engine in &opts.region_engine {
for engine in &self.opts.region_engine {
match engine {
RegionEngineConfig::Mito(config) => {
mito_engine_config = config.clone();
@@ -417,14 +430,14 @@ impl DatanodeBuilder {
}
}
let mito_engine = Self::build_mito_engine(
opts,
object_store_manager.clone(),
mito_engine_config,
schema_metadata_manager.clone(),
plugins.clone(),
)
.await?;
let mito_engine = self
.build_mito_engine(
object_store_manager.clone(),
mito_engine_config,
schema_metadata_manager.clone(),
plugins.clone(),
)
.await?;
let metric_engine = MetricEngine::try_new(mito_engine.clone(), metric_engine_config)
.context(BuildMetricEngineSnafu)?;
@@ -443,12 +456,13 @@ impl DatanodeBuilder {
/// Builds [MitoEngine] according to options.
async fn build_mito_engine(
opts: &DatanodeOptions,
&mut self,
object_store_manager: ObjectStoreManagerRef,
mut config: MitoConfig,
schema_metadata_manager: SchemaMetadataManagerRef,
plugins: Plugins,
) -> Result<MitoEngine> {
let opts = &self.opts;
if opts.storage.is_object_storage() {
// Enable the write cache when setting object storage
config.enable_write_cache = true;
@@ -456,17 +470,27 @@ impl DatanodeBuilder {
}
let mito_engine = match &opts.wal {
DatanodeWalConfig::RaftEngine(raft_engine_config) => MitoEngine::new(
&opts.storage.data_home,
config,
Self::build_raft_engine_log_store(&opts.storage.data_home, raft_engine_config)
.await?,
object_store_manager,
schema_metadata_manager,
plugins,
)
.await
.context(BuildMitoEngineSnafu)?,
DatanodeWalConfig::RaftEngine(raft_engine_config) => {
let log_store =
Self::build_raft_engine_log_store(&opts.storage.data_home, raft_engine_config)
.await?;
let builder = MitoEngineBuilder::new(
&opts.storage.data_home,
config,
log_store,
object_store_manager,
schema_metadata_manager,
plugins,
);
#[cfg(feature = "enterprise")]
let builder = builder.with_extension_range_provider_factory(
self.extension_range_provider_factory.take(),
);
builder.try_build().await.context(BuildMitoEngineSnafu)?
}
DatanodeWalConfig::Kafka(kafka_config) => {
if kafka_config.create_index && opts.node_id.is_none() {
warn!("The WAL index creation only available in distributed mode.")
@@ -488,16 +512,21 @@ impl DatanodeBuilder {
None
};
MitoEngine::new(
let builder = MitoEngineBuilder::new(
&opts.storage.data_home,
config,
Self::build_kafka_log_store(kafka_config, global_index_collector).await?,
object_store_manager,
schema_metadata_manager,
plugins,
)
.await
.context(BuildMitoEngineSnafu)?
);
#[cfg(feature = "enterprise")]
let builder = builder.with_extension_range_provider_factory(
self.extension_range_provider_factory.take(),
);
builder.try_build().await.context(BuildMitoEngineSnafu)?
}
};
Ok(mito_engine)

View File
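The refactor above turns the static constructors into `&mut self` builder methods so the optional, enterprise-only extension factory can live on `DatanodeBuilder` and be handed to `MitoEngineBuilder` exactly once via `Option::take`. Below is a minimal, self-contained sketch of that pattern; the names (`ExtensionFactory`, `EngineBuilder`, `DatanodeLikeBuilder`) are illustrative stand-ins, not GreptimeDB types.

// Hypothetical stand-ins for illustration only; not GreptimeDB types.
struct ExtensionFactory;

struct Engine {
    extension_factory: Option<ExtensionFactory>,
}

#[derive(Default)]
struct EngineBuilder {
    extension_factory: Option<ExtensionFactory>,
}

impl EngineBuilder {
    // Mirrors `with_extension_range_provider_factory`: the hook is optional.
    fn with_extension_factory(mut self, factory: Option<ExtensionFactory>) -> Self {
        self.extension_factory = factory;
        self
    }

    fn try_build(self) -> Result<Engine, String> {
        Ok(Engine {
            extension_factory: self.extension_factory,
        })
    }
}

struct DatanodeLikeBuilder {
    // Stored once on the outer builder, consumed when the engine is built.
    extension_factory: Option<ExtensionFactory>,
}

impl DatanodeLikeBuilder {
    // `&mut self` lets the factory be moved out with `take()`, as in `build_mito_engine` above.
    fn build_engine(&mut self) -> Result<Engine, String> {
        EngineBuilder::default()
            .with_extension_factory(self.extension_factory.take())
            .try_build()
    }
}

fn main() {
    let mut builder = DatanodeLikeBuilder {
        extension_factory: Some(ExtensionFactory),
    };
    let engine = builder.build_engine().unwrap();
    println!("extension installed: {}", engine.extension_factory.is_some());
}

One likely reason for the `take()` is that the boxed factory is moved into the engine rather than cloned, so it can only be installed into one engine instance.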

@@ -278,7 +278,7 @@ mod tests {
let mut region_server = mock_region_server();
let heartbeat_handler = RegionHeartbeatResponseHandler::new(region_server.clone());
let mut engine_env = TestEnv::with_prefix("close-region");
let mut engine_env = TestEnv::with_prefix("close-region").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
region_server.register_engine(Arc::new(engine));
let region_id = RegionId::new(1024, 1);
@@ -326,7 +326,7 @@ mod tests {
let mut region_server = mock_region_server();
let heartbeat_handler = RegionHeartbeatResponseHandler::new(region_server.clone());
let mut engine_env = TestEnv::with_prefix("open-region");
let mut engine_env = TestEnv::with_prefix("open-region").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
region_server.register_engine(Arc::new(engine));
let region_id = RegionId::new(1024, 1);
@@ -374,7 +374,7 @@ mod tests {
let mut region_server = mock_region_server();
let heartbeat_handler = RegionHeartbeatResponseHandler::new(region_server.clone());
let mut engine_env = TestEnv::with_prefix("open-not-exists-region");
let mut engine_env = TestEnv::with_prefix("open-not-exists-region").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
region_server.register_engine(Arc::new(engine));
let region_id = RegionId::new(1024, 1);
@@ -406,7 +406,7 @@ mod tests {
let mut region_server = mock_region_server();
let heartbeat_handler = RegionHeartbeatResponseHandler::new(region_server.clone());
let mut engine_env = TestEnv::with_prefix("downgrade-region");
let mut engine_env = TestEnv::with_prefix("downgrade-region").await;
let engine = engine_env.create_engine(MitoConfig::default()).await;
region_server.register_engine(Arc::new(engine));
let region_id = RegionId::new(1024, 1);

View File

@@ -31,9 +31,10 @@ pub use crate::schema::column_schema::{
ColumnSchema, FulltextAnalyzer, FulltextBackend, FulltextOptions, Metadata,
SkippingIndexOptions, SkippingIndexType, COLUMN_FULLTEXT_CHANGE_OPT_KEY_ENABLE,
COLUMN_FULLTEXT_OPT_KEY_ANALYZER, COLUMN_FULLTEXT_OPT_KEY_BACKEND,
COLUMN_FULLTEXT_OPT_KEY_CASE_SENSITIVE, COLUMN_SKIPPING_INDEX_OPT_KEY_GRANULARITY,
COLUMN_SKIPPING_INDEX_OPT_KEY_TYPE, COMMENT_KEY, FULLTEXT_KEY, INVERTED_INDEX_KEY,
SKIPPING_INDEX_KEY, TIME_INDEX_KEY,
COLUMN_FULLTEXT_OPT_KEY_CASE_SENSITIVE, COLUMN_FULLTEXT_OPT_KEY_FALSE_POSITIVE_RATE,
COLUMN_FULLTEXT_OPT_KEY_GRANULARITY, COLUMN_SKIPPING_INDEX_OPT_KEY_FALSE_POSITIVE_RATE,
COLUMN_SKIPPING_INDEX_OPT_KEY_GRANULARITY, COLUMN_SKIPPING_INDEX_OPT_KEY_TYPE, COMMENT_KEY,
FULLTEXT_KEY, INVERTED_INDEX_KEY, SKIPPING_INDEX_KEY, TIME_INDEX_KEY,
};
pub use crate::schema::constraint::ColumnDefaultConstraint;
pub use crate::schema::raw::RawSchema;

View File

@@ -47,13 +47,18 @@ pub const COLUMN_FULLTEXT_CHANGE_OPT_KEY_ENABLE: &str = "enable";
pub const COLUMN_FULLTEXT_OPT_KEY_ANALYZER: &str = "analyzer";
pub const COLUMN_FULLTEXT_OPT_KEY_CASE_SENSITIVE: &str = "case_sensitive";
pub const COLUMN_FULLTEXT_OPT_KEY_BACKEND: &str = "backend";
pub const COLUMN_FULLTEXT_OPT_KEY_GRANULARITY: &str = "granularity";
pub const COLUMN_FULLTEXT_OPT_KEY_FALSE_POSITIVE_RATE: &str = "false_positive_rate";
/// Keys used in SKIPPING index options
pub const COLUMN_SKIPPING_INDEX_OPT_KEY_GRANULARITY: &str = "granularity";
pub const COLUMN_SKIPPING_INDEX_OPT_KEY_FALSE_POSITIVE_RATE: &str = "false_positive_rate";
pub const COLUMN_SKIPPING_INDEX_OPT_KEY_TYPE: &str = "type";
pub const DEFAULT_GRANULARITY: u32 = 10240;
pub const DEFAULT_FALSE_POSITIVE_RATE: f64 = 0.01;
/// Schema of a column, used as an immutable struct.
#[derive(Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ColumnSchema {
@@ -504,7 +509,7 @@ impl TryFrom<&ColumnSchema> for Field {
}
/// Fulltext options for a column.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Default, Visit, VisitMut)]
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize, Visit, VisitMut)]
#[serde(rename_all = "kebab-case")]
pub struct FulltextOptions {
/// Whether the fulltext index is enabled.
@@ -518,6 +523,92 @@ pub struct FulltextOptions {
/// The fulltext backend to use.
#[serde(default)]
pub backend: FulltextBackend,
/// The granularity of the fulltext index (for bloom backend only)
#[serde(default = "fulltext_options_default_granularity")]
pub granularity: u32,
/// The false positive rate of the fulltext index (for bloom backend only)
#[serde(default = "index_options_default_false_positive_rate_in_10000")]
pub false_positive_rate_in_10000: u32,
}
fn fulltext_options_default_granularity() -> u32 {
DEFAULT_GRANULARITY
}
fn index_options_default_false_positive_rate_in_10000() -> u32 {
(DEFAULT_FALSE_POSITIVE_RATE * 10000.0) as u32
}
impl FulltextOptions {
/// Creates a new fulltext options.
pub fn new(
enable: bool,
analyzer: FulltextAnalyzer,
case_sensitive: bool,
backend: FulltextBackend,
granularity: u32,
false_positive_rate: f64,
) -> Result<Self> {
ensure!(
0.0 < false_positive_rate && false_positive_rate <= 1.0,
error::InvalidFulltextOptionSnafu {
msg: format!(
"Invalid false positive rate: {false_positive_rate}, expected: 0.0 < rate <= 1.0"
),
}
);
ensure!(
granularity > 0,
error::InvalidFulltextOptionSnafu {
msg: format!("Invalid granularity: {granularity}, expected: positive integer"),
}
);
Ok(Self::new_unchecked(
enable,
analyzer,
case_sensitive,
backend,
granularity,
false_positive_rate,
))
}
/// Creates a new fulltext options without checking `false_positive_rate` and `granularity`.
pub fn new_unchecked(
enable: bool,
analyzer: FulltextAnalyzer,
case_sensitive: bool,
backend: FulltextBackend,
granularity: u32,
false_positive_rate: f64,
) -> Self {
Self {
enable,
analyzer,
case_sensitive,
backend,
granularity,
false_positive_rate_in_10000: (false_positive_rate * 10000.0) as u32,
}
}
/// Gets the false positive rate.
pub fn false_positive_rate(&self) -> f64 {
self.false_positive_rate_in_10000 as f64 / 10000.0
}
}
impl Default for FulltextOptions {
fn default() -> Self {
Self::new_unchecked(
false,
FulltextAnalyzer::default(),
false,
FulltextBackend::default(),
DEFAULT_GRANULARITY,
DEFAULT_FALSE_POSITIVE_RATE,
)
}
}
impl fmt::Display for FulltextOptions {
@@ -527,6 +618,10 @@ impl fmt::Display for FulltextOptions {
write!(f, ", analyzer={}", self.analyzer)?;
write!(f, ", case_sensitive={}", self.case_sensitive)?;
write!(f, ", backend={}", self.backend)?;
if self.backend == FulltextBackend::Bloom {
write!(f, ", granularity={}", self.granularity)?;
write!(f, ", false_positive_rate={}", self.false_positive_rate())?;
}
}
Ok(())
}
@@ -611,6 +706,45 @@ impl TryFrom<HashMap<String, String>> for FulltextOptions {
}
}
if fulltext_options.backend == FulltextBackend::Bloom {
// Parse granularity with default value 10240
let granularity = match options.get(COLUMN_FULLTEXT_OPT_KEY_GRANULARITY) {
Some(value) => value
.parse::<u32>()
.ok()
.filter(|&v| v > 0)
.ok_or_else(|| {
error::InvalidFulltextOptionSnafu {
msg: format!(
"Invalid granularity: {value}, expected: positive integer"
),
}
.build()
})?,
None => DEFAULT_GRANULARITY,
};
fulltext_options.granularity = granularity;
// Parse false positive rate with default value 0.01
let false_positive_rate = match options.get(COLUMN_FULLTEXT_OPT_KEY_FALSE_POSITIVE_RATE)
{
Some(value) => value
.parse::<f64>()
.ok()
.filter(|&v| v > 0.0 && v <= 1.0)
.ok_or_else(|| {
error::InvalidFulltextOptionSnafu {
msg: format!(
"Invalid false positive rate: {value}, expected: 0.0 < rate <= 1.0"
),
}
.build()
})?,
None => DEFAULT_FALSE_POSITIVE_RATE,
};
fulltext_options.false_positive_rate_in_10000 = (false_positive_rate * 10000.0) as u32;
}
Ok(fulltext_options)
}
}
@@ -638,23 +772,73 @@ impl fmt::Display for FulltextAnalyzer {
pub struct SkippingIndexOptions {
/// The granularity of the skip index.
pub granularity: u32,
/// The false positive rate of the skip index (in ten-thousandths, e.g., 100 = 1%).
#[serde(default = "index_options_default_false_positive_rate_in_10000")]
pub false_positive_rate_in_10000: u32,
/// The type of the skip index.
#[serde(default)]
pub index_type: SkippingIndexType,
}
impl SkippingIndexOptions {
/// Creates a new skipping index options without checking `false_positive_rate` and `granularity`.
pub fn new_unchecked(
granularity: u32,
false_positive_rate: f64,
index_type: SkippingIndexType,
) -> Self {
Self {
granularity,
false_positive_rate_in_10000: (false_positive_rate * 10000.0) as u32,
index_type,
}
}
/// Creates a new skipping index options.
pub fn new(
granularity: u32,
false_positive_rate: f64,
index_type: SkippingIndexType,
) -> Result<Self> {
ensure!(
0.0 < false_positive_rate && false_positive_rate <= 1.0,
error::InvalidSkippingIndexOptionSnafu {
msg: format!("Invalid false positive rate: {false_positive_rate}, expected: 0.0 < rate <= 1.0"),
}
);
ensure!(
granularity > 0,
error::InvalidSkippingIndexOptionSnafu {
msg: format!("Invalid granularity: {granularity}, expected: positive integer"),
}
);
Ok(Self::new_unchecked(
granularity,
false_positive_rate,
index_type,
))
}
/// Gets the false positive rate.
pub fn false_positive_rate(&self) -> f64 {
self.false_positive_rate_in_10000 as f64 / 10000.0
}
}
impl Default for SkippingIndexOptions {
fn default() -> Self {
Self {
granularity: DEFAULT_GRANULARITY,
index_type: SkippingIndexType::default(),
}
Self::new_unchecked(
DEFAULT_GRANULARITY,
DEFAULT_FALSE_POSITIVE_RATE,
SkippingIndexType::default(),
)
}
}
impl fmt::Display for SkippingIndexOptions {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "granularity={}", self.granularity)?;
write!(f, ", false_positive_rate={}", self.false_positive_rate())?;
write!(f, ", index_type={}", self.index_type)?;
Ok(())
}
@@ -681,15 +865,37 @@ impl TryFrom<HashMap<String, String>> for SkippingIndexOptions {
fn try_from(options: HashMap<String, String>) -> Result<Self> {
// Parse granularity with default value DEFAULT_GRANULARITY
let granularity = match options.get(COLUMN_SKIPPING_INDEX_OPT_KEY_GRANULARITY) {
Some(value) => value.parse::<u32>().map_err(|_| {
error::InvalidSkippingIndexOptionSnafu {
msg: format!("Invalid granularity: {value}, expected: positive integer"),
}
.build()
})?,
Some(value) => value
.parse::<u32>()
.ok()
.filter(|&v| v > 0)
.ok_or_else(|| {
error::InvalidSkippingIndexOptionSnafu {
msg: format!("Invalid granularity: {value}, expected: positive integer"),
}
.build()
})?,
None => DEFAULT_GRANULARITY,
};
// Parse false positive rate with default value DEFAULT_FALSE_POSITIVE_RATE
let false_positive_rate =
match options.get(COLUMN_SKIPPING_INDEX_OPT_KEY_FALSE_POSITIVE_RATE) {
Some(value) => value
.parse::<f64>()
.ok()
.filter(|&v| v > 0.0 && v <= 1.0)
.ok_or_else(|| {
error::InvalidSkippingIndexOptionSnafu {
msg: format!(
"Invalid false positive rate: {value}, expected: 0.0 < rate <= 1.0"
),
}
.build()
})?,
None => DEFAULT_FALSE_POSITIVE_RATE,
};
// Parse index type with default value BloomFilter
let index_type = match options.get(COLUMN_SKIPPING_INDEX_OPT_KEY_TYPE) {
Some(typ) => match typ.to_ascii_uppercase().as_str() {
@@ -704,10 +910,11 @@ impl TryFrom<HashMap<String, String>> for SkippingIndexOptions {
None => SkippingIndexType::default(),
};
Ok(SkippingIndexOptions {
Ok(SkippingIndexOptions::new_unchecked(
granularity,
false_positive_rate,
index_type,
})
))
}
}
@@ -973,4 +1180,59 @@ mod tests {
assert!(column_schema.default_constraint.is_none());
assert!(column_schema.metadata.is_empty());
}
#[test]
fn test_skipping_index_options_deserialization() {
let original_options = "{\"granularity\":1024,\"false-positive-rate-in-10000\":10,\"index-type\":\"BloomFilter\"}";
let options = serde_json::from_str::<SkippingIndexOptions>(original_options).unwrap();
assert_eq!(1024, options.granularity);
assert_eq!(SkippingIndexType::BloomFilter, options.index_type);
assert_eq!(0.001, options.false_positive_rate());
let options_str = serde_json::to_string(&options).unwrap();
assert_eq!(options_str, original_options);
}
#[test]
fn test_skipping_index_options_deserialization_v0_14_to_v0_15() {
let options = "{\"granularity\":10240,\"index-type\":\"BloomFilter\"}";
let options = serde_json::from_str::<SkippingIndexOptions>(options).unwrap();
assert_eq!(10240, options.granularity);
assert_eq!(SkippingIndexType::BloomFilter, options.index_type);
assert_eq!(DEFAULT_FALSE_POSITIVE_RATE, options.false_positive_rate());
let options_str = serde_json::to_string(&options).unwrap();
assert_eq!(options_str, "{\"granularity\":10240,\"false-positive-rate-in-10000\":100,\"index-type\":\"BloomFilter\"}");
}
#[test]
fn test_fulltext_options_deserialization() {
let original_options = "{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":1024,\"false-positive-rate-in-10000\":10}";
let options = serde_json::from_str::<FulltextOptions>(original_options).unwrap();
assert!(!options.case_sensitive);
assert!(options.enable);
assert_eq!(FulltextBackend::Bloom, options.backend);
assert_eq!(FulltextAnalyzer::default(), options.analyzer);
assert_eq!(1024, options.granularity);
assert_eq!(0.001, options.false_positive_rate());
let options_str = serde_json::to_string(&options).unwrap();
assert_eq!(options_str, original_options);
}
#[test]
fn test_fulltext_options_deserialization_v0_14_to_v0_15() {
// 0.14 to 0.15
let options = "{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\"}";
let options = serde_json::from_str::<FulltextOptions>(options).unwrap();
assert!(!options.case_sensitive);
assert!(options.enable);
assert_eq!(FulltextBackend::Bloom, options.backend);
assert_eq!(FulltextAnalyzer::default(), options.analyzer);
assert_eq!(DEFAULT_GRANULARITY, options.granularity);
assert_eq!(DEFAULT_FALSE_POSITIVE_RATE, options.false_positive_rate());
let options_str = serde_json::to_string(&options).unwrap();
assert_eq!(options_str, "{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":10240,\"false-positive-rate-in-10000\":100}");
}
}

View File
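The backward compatibility added above relies on two details: the false positive rate is persisted as an integer in ten-thousandths (`false_positive_rate_in_10000`), and that field carries a `#[serde(default = "...")]` attribute so metadata written before the field existed still deserializes. A standalone sketch of the same pattern, using a simplified `Options` struct (hypothetical, not the real `SkippingIndexOptions`) and assuming the `serde` and `serde_json` crates:

use serde::{Deserialize, Serialize};

const DEFAULT_FALSE_POSITIVE_RATE: f64 = 0.01;

fn default_false_positive_rate_in_10000() -> u32 {
    (DEFAULT_FALSE_POSITIVE_RATE * 10000.0) as u32
}

// Simplified stand-in for the real options struct.
#[derive(Debug, Serialize, Deserialize)]
#[serde(rename_all = "kebab-case")]
struct Options {
    granularity: u32,
    // Missing in JSON written by older versions, so fall back to the default.
    #[serde(default = "default_false_positive_rate_in_10000")]
    false_positive_rate_in_10000: u32,
}

impl Options {
    fn false_positive_rate(&self) -> f64 {
        self.false_positive_rate_in_10000 as f64 / 10000.0
    }
}

fn main() {
    // Old metadata (no false positive rate field) still deserializes.
    let old = r#"{"granularity":10240}"#;
    let opts: Options = serde_json::from_str(old).unwrap();
    assert_eq!(opts.false_positive_rate(), 0.01);

    // New metadata round-trips the integer representation.
    let new = r#"{"granularity":1024,"false-positive-rate-in-10000":10}"#;
    let opts: Options = serde_json::from_str(new).unwrap();
    assert_eq!(opts.false_positive_rate(), 0.001);
    println!("{}", serde_json::to_string(&opts).unwrap());
}

Persisting the rate in ten-thousandths keeps the stored value an integer, which round-trips exactly and keeps the options types comparable with plain `Eq`.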

@@ -12,6 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use arrow_array::{
ArrayRef, PrimitiveArray, TimestampMicrosecondArray, TimestampMillisecondArray,
TimestampNanosecondArray, TimestampSecondArray,
};
use arrow_schema::DataType;
use common_time::timestamp::TimeUnit;
use common_time::Timestamp;
use paste::paste;
@@ -138,6 +143,41 @@ define_timestamp_with_unit!(Millisecond);
define_timestamp_with_unit!(Microsecond);
define_timestamp_with_unit!(Nanosecond);
pub fn timestamp_array_to_primitive(
ts_array: &ArrayRef,
) -> Option<(
PrimitiveArray<arrow_array::types::Int64Type>,
arrow::datatypes::TimeUnit,
)> {
let DataType::Timestamp(unit, _) = ts_array.data_type() else {
return None;
};
let ts_primitive = match unit {
arrow_schema::TimeUnit::Second => ts_array
.as_any()
.downcast_ref::<TimestampSecondArray>()
.unwrap()
.reinterpret_cast::<arrow_array::types::Int64Type>(),
arrow_schema::TimeUnit::Millisecond => ts_array
.as_any()
.downcast_ref::<TimestampMillisecondArray>()
.unwrap()
.reinterpret_cast::<arrow_array::types::Int64Type>(),
arrow_schema::TimeUnit::Microsecond => ts_array
.as_any()
.downcast_ref::<TimestampMicrosecondArray>()
.unwrap()
.reinterpret_cast::<arrow_array::types::Int64Type>(),
arrow_schema::TimeUnit::Nanosecond => ts_array
.as_any()
.downcast_ref::<TimestampNanosecondArray>()
.unwrap()
.reinterpret_cast::<arrow_array::types::Int64Type>(),
};
Some((ts_primitive, *unit))
}
#[cfg(test)]
mod tests {
use common_time::timezone::set_default_timezone;

View File
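`timestamp_array_to_primitive` above normalizes any Arrow timestamp array into an `Int64` view plus its time unit, so callers can work with raw `i64` values instead of matching on four concrete array types each time. A minimal sketch of the same downcast-and-reinterpret step using `arrow_array` directly, covering only the millisecond case (an illustration against the `arrow-array`/`arrow-schema` crates, not the GreptimeDB helper itself):

use std::sync::Arc;

use arrow_array::types::Int64Type;
use arrow_array::{Array, ArrayRef, PrimitiveArray, TimestampMillisecondArray};
use arrow_schema::{DataType, TimeUnit};

// Reinterpret a millisecond timestamp array as plain i64 values; the helper above does
// the same for all four timestamp units.
fn to_i64_millis(ts: &ArrayRef) -> Option<(PrimitiveArray<Int64Type>, TimeUnit)> {
    let DataType::Timestamp(unit, _) = ts.data_type() else {
        return None;
    };
    match unit {
        TimeUnit::Millisecond => {
            let millis = ts.as_any().downcast_ref::<TimestampMillisecondArray>()?;
            Some((millis.reinterpret_cast::<Int64Type>(), *unit))
        }
        // The real helper also handles Second, Microsecond and Nanosecond.
        _ => None,
    }
}

fn main() {
    let ts: ArrayRef = Arc::new(TimestampMillisecondArray::from(vec![1_000_i64, 2_000, 3_000]));
    let (values, unit) = to_i64_millis(&ts).unwrap();
    assert_eq!(unit, TimeUnit::Millisecond);
    assert_eq!(values.value(0), 1_000);
    assert_eq!(values.value(2), 3_000);
}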

@@ -95,7 +95,7 @@ impl Default for FlowConfig {
}
/// Options for flow node
#[derive(Clone, Debug, Serialize, Deserialize)]
#[derive(Clone, Debug, PartialEq, Serialize, Deserialize)]
#[serde(default)]
pub struct FlownodeOptions {
pub node_id: Option<u64>,
@@ -121,7 +121,9 @@ impl Default for FlownodeOptions {
logging: LoggingOptions::default(),
tracing: TracingOptions::default(),
heartbeat: HeartbeatOptions::default(),
query: QueryOptions::default(),
// the flownode's query parallelism is set to 1 to throttle flow queries so
// that they won't use too much CPU or memory
query: QueryOptions { parallelism: 1 },
user_provider: None,
}
}
@@ -251,6 +253,10 @@ impl DiffRequest {
Self::Delete(v) => v.len(),
}
}
pub fn is_empty(&self) -> bool {
self.len() == 0
}
}
pub fn batches_to_rows_req(batches: Vec<Batch>) -> Result<Vec<DiffRequest>, Error> {
@@ -899,7 +905,7 @@ impl StreamingEngine {
let rows_send = self.run_available(true).await?;
let row = self.send_writeback_requests().await?;
debug!(
"Done to flush flow_id={:?} with {} input rows flushed, {} rows sended and {} output rows flushed",
"Done to flush flow_id={:?} with {} input rows flushed, {} rows sent and {} output rows flushed",
flow_id, flushed_input_rows, rows_send, row
);
Ok(row)
@@ -929,6 +935,12 @@ pub struct FlowTickManager {
start_timestamp: repr::Timestamp,
}
impl Default for FlowTickManager {
fn default() -> Self {
Self::new()
}
}
impl FlowTickManager {
pub fn new() -> Self {
FlowTickManager {

View File

@@ -476,15 +476,37 @@ impl BatchingEngine {
Ok(())
}
/// Only flush the dirty windows of the flow task with the given flow id by running the query on them,
/// as flushing the whole time range is usually prohibitively expensive.
pub async fn flush_flow_inner(&self, flow_id: FlowId) -> Result<usize, Error> {
debug!("Try flush flow {flow_id}");
// need to wait a bit to ensure the previous mirror insert is handled;
// this only matters when the flow is flushed right after inserting data into it
// TODO(discord9): find a better way to ensure the data is ready, maybe inform flownode from frontend?
tokio::time::sleep(std::time::Duration::from_millis(100)).await;
let task = self.tasks.read().await.get(&flow_id).cloned();
let task = task.with_context(|| FlowNotFoundSnafu { id: flow_id })?;
task.mark_all_windows_as_dirty()?;
let time_window_size = task
.config
.time_window_expr
.as_ref()
.and_then(|expr| *expr.time_window_size());
let cur_dirty_window_cnt = time_window_size.map(|time_window_size| {
task.state
.read()
.unwrap()
.dirty_time_windows
.effective_count(&time_window_size)
});
let res = task
.gen_exec_once(&self.query_engine, &self.frontend_client)
.gen_exec_once(
&self.query_engine,
&self.frontend_client,
cur_dirty_window_cnt,
)
.await?;
let affected_rows = res.map(|(r, _)| r).unwrap_or_default() as usize;

View File

@@ -14,6 +14,7 @@
//! Frontend client to run a flow as a batching task, i.e. a time-window-aware normal query triggered on every tick set by the user
use std::collections::HashMap;
use std::sync::{Arc, Weak};
use std::time::SystemTime;
@@ -29,6 +30,8 @@ use common_meta::rpc::store::RangeRequest;
use common_query::Output;
use common_telemetry::warn;
use meta_client::client::MetaClient;
use query::datafusion::QUERY_PARALLELISM_HINT;
use query::options::QueryOptions;
use rand::rng;
use rand::seq::SliceRandom;
use servers::query_handler::grpc::GrpcQueryHandler;
@@ -84,27 +87,34 @@ pub enum FrontendClient {
meta_client: Arc<MetaClient>,
chnl_mgr: ChannelManager,
auth: Option<FlowAuthHeader>,
query: QueryOptions,
},
Standalone {
/// for the sake of simplicity, still use grpc even in standalone mode
/// note the clients here should all be lazy, so they can wait until the frontend is booted before making a connection
database_client: HandlerMutable,
query: QueryOptions,
},
}
impl FrontendClient {
/// Create a new empty frontend client, with a `HandlerMutable` to set the grpc handler later
pub fn from_empty_grpc_handler() -> (Self, HandlerMutable) {
pub fn from_empty_grpc_handler(query: QueryOptions) -> (Self, HandlerMutable) {
let handler = Arc::new(std::sync::Mutex::new(None));
(
Self::Standalone {
database_client: handler.clone(),
query,
},
handler,
)
}
pub fn from_meta_client(meta_client: Arc<MetaClient>, auth: Option<FlowAuthHeader>) -> Self {
pub fn from_meta_client(
meta_client: Arc<MetaClient>,
auth: Option<FlowAuthHeader>,
query: QueryOptions,
) -> Self {
common_telemetry::info!("Frontend client build with auth={:?}", auth);
Self::Distributed {
meta_client,
@@ -115,12 +125,17 @@ impl FrontendClient {
ChannelManager::with_config(cfg)
},
auth,
query,
}
}
pub fn from_grpc_handler(grpc_handler: Weak<dyn GrpcQueryHandlerWithBoxedError>) -> Self {
pub fn from_grpc_handler(
grpc_handler: Weak<dyn GrpcQueryHandlerWithBoxedError>,
query: QueryOptions,
) -> Self {
Self::Standalone {
database_client: Arc::new(std::sync::Mutex::new(Some(grpc_handler))),
query,
}
}
}
@@ -193,6 +208,7 @@ impl FrontendClient {
meta_client: _,
chnl_mgr,
auth,
query: _,
} = self
else {
return UnexpectedSnafu {
@@ -281,7 +297,9 @@ impl FrontendClient {
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
FrontendClient::Standalone { database_client } => {
FrontendClient::Standalone {
database_client, ..
} => {
let ctx = QueryContextBuilder::default()
.current_catalog(catalog.to_string())
.current_schema(schema.to_string())
@@ -328,7 +346,7 @@ impl FrontendClient {
peer_desc: &mut Option<PeerDesc>,
) -> Result<u32, Error> {
match self {
FrontendClient::Distributed { .. } => {
FrontendClient::Distributed { query, .. } => {
let db = self.get_random_active_frontend(catalog, schema).await?;
*peer_desc = Some(PeerDesc::Dist {
@@ -336,16 +354,27 @@ impl FrontendClient {
});
db.database
.handle_with_retry(req.clone(), GRPC_MAX_RETRIES)
.handle_with_retry(
req.clone(),
GRPC_MAX_RETRIES,
&[(QUERY_PARALLELISM_HINT, &query.parallelism.to_string())],
)
.await
.with_context(|_| InvalidRequestSnafu {
context: format!("Failed to handle request at {:?}: {:?}", db.peer, req),
})
}
FrontendClient::Standalone { database_client } => {
FrontendClient::Standalone {
database_client,
query,
} => {
let ctx = QueryContextBuilder::default()
.current_catalog(catalog.to_string())
.current_schema(schema.to_string())
.extensions(HashMap::from([(
QUERY_PARALLELISM_HINT.to_string(),
query.parallelism.to_string(),
)]))
.build();
let ctx = Arc::new(ctx);
{

View File

@@ -22,7 +22,7 @@ use common_telemetry::tracing::warn;
use common_time::Timestamp;
use datatypes::value::Value;
use session::context::QueryContextRef;
use snafu::{OptionExt, ResultExt};
use snafu::{ensure, OptionExt, ResultExt};
use tokio::sync::oneshot;
use tokio::time::Instant;
@@ -31,7 +31,8 @@ use crate::batching_mode::time_window::TimeWindowExpr;
use crate::batching_mode::MIN_REFRESH_DURATION;
use crate::error::{DatatypesSnafu, InternalSnafu, TimeSnafu, UnexpectedSnafu};
use crate::metrics::{
METRIC_FLOW_BATCHING_ENGINE_QUERY_TIME_RANGE, METRIC_FLOW_BATCHING_ENGINE_QUERY_WINDOW_CNT,
METRIC_FLOW_BATCHING_ENGINE_QUERY_WINDOW_CNT, METRIC_FLOW_BATCHING_ENGINE_QUERY_WINDOW_SIZE,
METRIC_FLOW_BATCHING_ENGINE_STALLED_WINDOW_SIZE,
};
use crate::{Error, FlowId};
@@ -76,38 +77,43 @@ impl TaskState {
/// Compute the next query delay based on the time window size or the last query duration.
/// It aims to avoid overly frequent queries without delaying too long.
/// The delay is computed as follows:
/// - If `time_window_size` is set, the delay is half the time window size, constrained to be
/// at least `last_query_duration` and at most `max_timeout`.
/// - If `time_window_size` is not set, the delay defaults to `last_query_duration`, constrained
/// to be at least `MIN_REFRESH_DURATION` and at most `max_timeout`.
///
/// If there are dirty time windows, the function returns an immediate execution time to clean them.
/// TODO: Make this behavior configurable.
/// The next wait time is calculated as the last query duration,
/// capped by [max(min_run_interval, time_window_size), max_timeout],
/// i.e. it waits for at most `max_timeout`.
///
/// If the current dirty time range is longer than one query can handle,
/// execute immediately to clean up the dirty time windows faster.
///
pub fn get_next_start_query_time(
&self,
flow_id: FlowId,
time_window_size: &Option<Duration>,
max_timeout: Option<Duration>,
) -> Instant {
let last_duration = max_timeout
.unwrap_or(self.last_query_duration)
.min(self.last_query_duration)
.max(MIN_REFRESH_DURATION);
// = last query duration, capped by [max(min_run_interval, time_window_size), max_timeout], note at most `max_timeout`
let lower = time_window_size.unwrap_or(MIN_REFRESH_DURATION);
let next_duration = self.last_query_duration.max(lower);
let next_duration = if let Some(max_timeout) = max_timeout {
next_duration.min(max_timeout)
} else {
next_duration
};
let next_duration = time_window_size
.map(|t| {
let half = t / 2;
half.max(last_duration)
})
.unwrap_or(last_duration);
// if have dirty time window, execute immediately to clean dirty time window
if self.dirty_time_windows.windows.is_empty() {
let cur_dirty_window_size = self.dirty_time_windows.window_size();
// compute how much time range can be handled in one query
let max_query_update_range = (*time_window_size)
.unwrap_or_default()
.mul_f64(DirtyTimeWindows::MAX_FILTER_NUM as f64);
// if the dirty time range is more than one query can handle, execute immediately
// to clean up dirty time windows faster
if cur_dirty_window_size < max_query_update_range {
self.last_update_time + next_duration
} else {
// if dirty time windows can't be cleaned up in one query, execute immediately
// to clean up dirty time windows faster
debug!(
"Flow id = {}, still have {} dirty time window({:?}), execute immediately",
"Flow id = {}, still have too many {} dirty time window({:?}), execute immediately",
flow_id,
self.dirty_time_windows.windows.len(),
self.dirty_time_windows.windows
@@ -147,6 +153,18 @@ impl DirtyTimeWindows {
}
}
pub fn window_size(&self) -> Duration {
let mut ret = Duration::from_secs(0);
for (start, end) in &self.windows {
if let Some(end) = end {
if let Some(duration) = end.sub(start) {
ret += duration.to_std().unwrap_or_default();
}
}
}
ret
}
pub fn add_window(&mut self, start: Timestamp, end: Option<Timestamp>) {
self.windows.insert(start, end);
}
@@ -161,6 +179,33 @@ impl DirtyTimeWindows {
self.windows.len()
}
/// Get the effective count of time windows, i.e. the number of time windows that can be
/// used for a query, computed as the total time window range divided by `window_size`.
pub fn effective_count(&self, window_size: &Duration) -> usize {
if self.windows.is_empty() {
return 0;
}
let window_size =
chrono::Duration::from_std(*window_size).unwrap_or(chrono::Duration::zero());
let total_window_time_range =
self.windows
.iter()
.fold(chrono::Duration::zero(), |acc, (start, end)| {
if let Some(end) = end {
acc + end.sub(start).unwrap_or(chrono::Duration::zero())
} else {
acc + window_size
}
});
// not sure a zero window_size has any meaning, but guard against it just in case
if window_size.num_seconds() == 0 {
0
} else {
(total_window_time_range.num_seconds() / window_size.num_seconds()) as usize
}
}
/// Generate all filter expressions consuming all time windows
///
/// there are two limits:
@@ -175,6 +220,13 @@ impl DirtyTimeWindows {
flow_id: FlowId,
task_ctx: Option<&BatchingTask>,
) -> Result<Option<datafusion_expr::Expr>, Error> {
ensure!(
window_size.num_seconds() > 0,
UnexpectedSnafu {
reason: "window_size is zero, can't generate filter exprs",
}
);
debug!(
"expire_lower_bound: {:?}, window_size: {:?}",
expire_lower_bound.map(|t| t.to_iso8601_string()),
@@ -211,62 +263,94 @@ impl DirtyTimeWindows {
// get the first `window_cnt` time windows
let max_time_range = window_size * window_cnt as i32;
let nth = {
let mut cur_time_range = chrono::Duration::zero();
let mut nth_key = None;
for (idx, (start, end)) in self.windows.iter().enumerate() {
// if time range is too long, stop
if cur_time_range > max_time_range {
nth_key = Some(*start);
break;
}
// if we have enough time windows, stop
if idx >= window_cnt {
nth_key = Some(*start);
break;
}
let mut to_be_query = BTreeMap::new();
let mut new_windows = self.windows.clone();
let mut cur_time_range = chrono::Duration::zero();
for (idx, (start, end)) in self.windows.iter().enumerate() {
let first_end = start
.add_duration(window_size.to_std().unwrap())
.context(TimeSnafu)?;
let end = end.unwrap_or(first_end);
if let Some(end) = end {
if let Some(x) = end.sub(start) {
cur_time_range += x;
}
}
// if time range is too long, stop
if cur_time_range >= max_time_range {
break;
}
nth_key
};
let first_nth = {
if let Some(nth) = nth {
let mut after = self.windows.split_off(&nth);
std::mem::swap(&mut self.windows, &mut after);
// if we have enough time windows, stop
if idx >= window_cnt {
break;
}
after
let Some(x) = end.sub(start) else {
continue;
};
if cur_time_range + x <= max_time_range {
to_be_query.insert(*start, Some(end));
new_windows.remove(start);
cur_time_range += x;
} else {
std::mem::take(&mut self.windows)
// too large a window, split it
// split at window_size * times
let surplus = max_time_range - cur_time_range;
if surplus.num_seconds() <= window_size.num_seconds() {
// Skip splitting if surplus is smaller than window_size
break;
}
let times = surplus.num_seconds() / window_size.num_seconds();
let split_offset = window_size * times as i32;
let split_at = start
.add_duration(split_offset.to_std().unwrap())
.context(TimeSnafu)?;
to_be_query.insert(*start, Some(split_at));
// remove the original window
new_windows.remove(start);
new_windows.insert(split_at, Some(end));
cur_time_range += split_offset;
break;
}
};
}
self.windows = new_windows;
METRIC_FLOW_BATCHING_ENGINE_QUERY_WINDOW_CNT
.with_label_values(&[flow_id.to_string().as_str()])
.observe(first_nth.len() as f64);
.observe(to_be_query.len() as f64);
let full_time_range = first_nth
let full_time_range = to_be_query
.iter()
.fold(chrono::Duration::zero(), |acc, (start, end)| {
if let Some(end) = end {
acc + end.sub(start).unwrap_or(chrono::Duration::zero())
} else {
acc
acc + window_size
}
})
.num_seconds() as f64;
METRIC_FLOW_BATCHING_ENGINE_QUERY_TIME_RANGE
METRIC_FLOW_BATCHING_ENGINE_QUERY_WINDOW_SIZE
.with_label_values(&[flow_id.to_string().as_str()])
.observe(full_time_range);
let stalled_time_range =
self.windows
.iter()
.fold(chrono::Duration::zero(), |acc, (start, end)| {
if let Some(end) = end {
acc + end.sub(start).unwrap_or(chrono::Duration::zero())
} else {
acc + window_size
}
});
METRIC_FLOW_BATCHING_ENGINE_STALLED_WINDOW_SIZE
.with_label_values(&[flow_id.to_string().as_str()])
.observe(stalled_time_range.num_seconds() as f64);
let mut expr_lst = vec![];
for (start, end) in first_nth.into_iter() {
for (start, end) in to_be_query.into_iter() {
// align using time window exprs
let (start, end) = if let Some(ctx) = task_ctx {
let Some(time_window_expr) = &ctx.config.time_window_expr else {
@@ -500,6 +584,64 @@ mod test {
"((ts >= CAST('1970-01-01 00:00:00' AS TIMESTAMP)) AND (ts < CAST('1970-01-01 00:00:21' AS TIMESTAMP)))",
)
),
// split range
(
Vec::from_iter((0..20).map(|i|Timestamp::new_second(i*3)).chain(std::iter::once(
Timestamp::new_second(60 + 3 * (DirtyTimeWindows::MERGE_DIST as i64 + 1)),
))),
(chrono::Duration::seconds(3), None),
BTreeMap::from([
(
Timestamp::new_second(0),
Some(Timestamp::new_second(
60
)),
),
(
Timestamp::new_second(60 + 3 * (DirtyTimeWindows::MERGE_DIST as i64 + 1)),
Some(Timestamp::new_second(
60 + 3 * (DirtyTimeWindows::MERGE_DIST as i64 + 1) + 3
)),
)]),
Some(
"((ts >= CAST('1970-01-01 00:00:00' AS TIMESTAMP)) AND (ts < CAST('1970-01-01 00:01:00' AS TIMESTAMP)))",
)
),
// split 2 min into 1 min
(
Vec::from_iter((0..40).map(|i|Timestamp::new_second(i*3))),
(chrono::Duration::seconds(3), None),
BTreeMap::from([
(
Timestamp::new_second(0),
Some(Timestamp::new_second(
40 * 3
)),
)]),
Some(
"((ts >= CAST('1970-01-01 00:00:00' AS TIMESTAMP)) AND (ts < CAST('1970-01-01 00:01:00' AS TIMESTAMP)))",
)
),
// split 3s + 1min into 3s + 57s
(
Vec::from_iter(std::iter::once(Timestamp::new_second(0)).chain((0..40).map(|i|Timestamp::new_second(20+i*3)))),
(chrono::Duration::seconds(3), None),
BTreeMap::from([
(
Timestamp::new_second(0),
Some(Timestamp::new_second(
3
)),
),(
Timestamp::new_second(20),
Some(Timestamp::new_second(
140
)),
)]),
Some(
"(((ts >= CAST('1970-01-01 00:00:00' AS TIMESTAMP)) AND (ts < CAST('1970-01-01 00:00:03' AS TIMESTAMP))) OR ((ts >= CAST('1970-01-01 00:00:20' AS TIMESTAMP)) AND (ts < CAST('1970-01-01 00:01:17' AS TIMESTAMP))))",
)
),
// expired
(
vec![
@@ -516,6 +658,8 @@ mod test {
None
),
];
// let len = testcases.len();
// let testcases = testcases[(len - 2)..(len - 1)].to_vec();
for (lower_bounds, (window_size, expire_lower_bound), expected, expected_filter_expr) in
testcases
{

View File
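The splitting logic above caps each query at roughly `window_size * MAX_FILTER_NUM` of dirty time range, splits an oversized window at a multiple of `window_size`, and keeps the tail for the next run, while `effective_count` reports how many window-sized chunks remain outstanding. A standalone sketch of that bookkeeping, with windows reduced to `(start, end)` pairs in seconds (simplified, hypothetical types; not the flow crate's `DirtyTimeWindows`):

// Dirty windows simplified to half-open (start, end) ranges in seconds.
#[derive(Debug, Clone, Copy, PartialEq)]
struct Window {
    start: i64,
    end: i64,
}

// How many window-sized chunks are still dirty (mirrors `effective_count`).
fn effective_count(windows: &[Window], window_size: i64) -> usize {
    if window_size <= 0 {
        return 0;
    }
    let total: i64 = windows.iter().map(|w| w.end - w.start).sum();
    (total / window_size) as usize
}

// Take at most `max_filter_num * window_size` of dirty range for one query, splitting
// the window that doesn't fit and keeping the remainder for the next run.
fn take_for_one_query(
    windows: &mut Vec<Window>,
    window_size: i64,
    max_filter_num: i64,
) -> Vec<Window> {
    let mut budget = window_size * max_filter_num;
    let mut selected = Vec::new();
    let mut remaining = Vec::new();
    for w in windows.drain(..) {
        let len = w.end - w.start;
        if budget >= len {
            budget -= len;
            selected.push(w);
        } else if budget > window_size {
            // Split at a multiple of `window_size`; the tail stays dirty.
            let split = w.start + (budget / window_size) * window_size;
            selected.push(Window { start: w.start, end: split });
            remaining.push(Window { start: split, end: w.end });
            budget = 0;
        } else {
            remaining.push(w);
        }
    }
    *windows = remaining;
    selected
}

fn main() {
    // 120 s of dirty data, 3 s windows, at most 20 filters per query
    // (the "split 2 min into 1 min" case from the tests above).
    let mut dirty = vec![Window { start: 0, end: 120 }];
    assert_eq!(effective_count(&dirty, 3), 40);
    let query = take_for_one_query(&mut dirty, 3, 20);
    assert_eq!(query, vec![Window { start: 0, end: 60 }]);
    assert_eq!(dirty, vec![Window { start: 60, end: 120 }]);
    println!("queried {:?}, still dirty {:?}", query, dirty);
}

With the number of outstanding chunks exposed via `effective_count`, `flush_flow_inner` can pass the actual count of dirty windows as `max_window_cnt` instead of always falling back to `MAX_FILTER_NUM`.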

@@ -211,8 +211,9 @@ impl BatchingTask {
&self,
engine: &QueryEngineRef,
frontend_client: &Arc<FrontendClient>,
max_window_cnt: Option<usize>,
) -> Result<Option<(u32, Duration)>, Error> {
if let Some(new_query) = self.gen_insert_plan(engine).await? {
if let Some(new_query) = self.gen_insert_plan(engine, max_window_cnt).await? {
debug!("Generate new query: {}", new_query);
self.execute_logical_plan(frontend_client, &new_query).await
} else {
@@ -224,6 +225,7 @@ impl BatchingTask {
pub async fn gen_insert_plan(
&self,
engine: &QueryEngineRef,
max_window_cnt: Option<usize>,
) -> Result<Option<LogicalPlan>, Error> {
let (table, df_schema) = get_table_info_df_schema(
self.config.catalog_manager.clone(),
@@ -232,7 +234,7 @@ impl BatchingTask {
.await?;
let new_query = self
.gen_query_with_time_window(engine.clone(), &table.meta.schema)
.gen_query_with_time_window(engine.clone(), &table.meta.schema, max_window_cnt)
.await?;
let insert_into = if let Some((new_query, _column_cnt)) = new_query {
@@ -437,7 +439,7 @@ impl BatchingTask {
.with_label_values(&[&flow_id_str])
.inc();
let new_query = match self.gen_insert_plan(&engine).await {
let new_query = match self.gen_insert_plan(&engine, None).await {
Ok(new_query) => new_query,
Err(err) => {
common_telemetry::error!(err; "Failed to generate query for flow={}", self.config.flow_id);
@@ -521,6 +523,7 @@ impl BatchingTask {
&self,
engine: QueryEngineRef,
sink_table_schema: &Arc<Schema>,
max_window_cnt: Option<usize>,
) -> Result<Option<(LogicalPlan, usize)>, Error> {
let query_ctx = self.state.read().unwrap().query_ctx.clone();
let start = SystemTime::now();
@@ -574,8 +577,8 @@ impl BatchingTask {
};
debug!(
"Flow id = {:?}, found time window: precise_lower_bound={:?}, precise_upper_bound={:?}",
self.config.flow_id, l, u
"Flow id = {:?}, found time window: precise_lower_bound={:?}, precise_upper_bound={:?} with dirty time windows: {:?}",
self.config.flow_id, l, u, self.state.read().unwrap().dirty_time_windows
);
let window_size = u.sub(&l).with_context(|| UnexpectedSnafu {
reason: format!("Can't get window size from {u:?} - {l:?}"),
@@ -601,7 +604,7 @@ impl BatchingTask {
&col_name,
Some(l),
window_size,
DirtyTimeWindows::MAX_FILTER_NUM,
max_window_cnt.unwrap_or(DirtyTimeWindows::MAX_FILTER_NUM),
self.config.flow_id,
Some(self),
)?;

View File

@@ -43,7 +43,7 @@ mod utils;
#[cfg(test)]
mod test_utils;
pub use adapter::{FlowConfig, FlowStreamingEngineRef, FlownodeOptions, StreamingEngine};
pub use adapter::{FlowConfig, FlowStreamingEngineRef, StreamingEngine};
pub use batching_mode::frontend_client::{FrontendClient, GrpcQueryHandlerWithBoxedError};
pub use engine::FlowAuthHeader;
pub(crate) use engine::{CreateFlowArgs, FlowId, TableName};
@@ -52,3 +52,5 @@ pub use server::{
get_flow_auth_options, FlownodeBuilder, FlownodeInstance, FlownodeServer,
FlownodeServiceBuilder, FrontendInvoker,
};
pub use crate::adapter::FlownodeOptions;

View File

@@ -50,10 +50,18 @@ lazy_static! {
vec![0.0, 5., 10., 20., 40.]
)
.unwrap();
pub static ref METRIC_FLOW_BATCHING_ENGINE_QUERY_TIME_RANGE: HistogramVec =
pub static ref METRIC_FLOW_BATCHING_ENGINE_QUERY_WINDOW_SIZE: HistogramVec =
register_histogram_vec!(
"greptime_flow_batching_engine_query_time_range_secs",
"flow batching engine query time range(seconds)",
"greptime_flow_batching_engine_query_window_size_secs",
"flow batching engine query window size(seconds)",
&["flow_id"],
vec![60., 4. * 60., 16. * 60., 64. * 60., 256. * 60.]
)
.unwrap();
pub static ref METRIC_FLOW_BATCHING_ENGINE_STALLED_WINDOW_SIZE: HistogramVec =
register_histogram_vec!(
"greptime_flow_batching_engine_stalled_window_size_secs",
"flow batching engine stalled window size(seconds)",
&["flow_id"],
vec![60., 4. * 60., 16. * 60., 64. * 60., 256. * 60.]
)

View File

@@ -14,6 +14,7 @@ workspace = true
[dependencies]
api.workspace = true
arc-swap = "1.0"
async-stream.workspace = true
async-trait.workspace = true
auth.workspace = true
bytes.workspace = true

View File

@@ -363,6 +363,12 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Canceling statement due to statement timeout"))]
StatementTimeout {
#[snafu(implicit)]
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -443,6 +449,8 @@ impl ErrorExt for Error {
Error::DataFusion { error, .. } => datafusion_status_code::<Self>(error, None),
Error::Cancelled { .. } => StatusCode::Cancelled,
Error::StatementTimeout { .. } => StatusCode::Cancelled,
}
}

View File

@@ -25,9 +25,11 @@ mod promql;
mod region_query;
pub mod standalone;
use std::pin::Pin;
use std::sync::Arc;
use std::time::SystemTime;
use std::time::{Duration, SystemTime};
use async_stream::stream;
use async_trait::async_trait;
use auth::{PermissionChecker, PermissionCheckerRef, PermissionReq};
use catalog::process_manager::ProcessManagerRef;
@@ -37,6 +39,7 @@ use common_base::cancellation::CancellableFuture;
use common_base::Plugins;
use common_config::KvBackendConfig;
use common_error::ext::{BoxedError, ErrorExt};
use common_meta::key::runtime_switch::RuntimeSwitchManager;
use common_meta::key::TableMetadataManagerRef;
use common_meta::kv_backend::KvBackendRef;
use common_meta::state_store::KvStateStore;
@@ -44,8 +47,11 @@ use common_procedure::local::{LocalManager, ManagerConfig};
use common_procedure::options::ProcedureConfig;
use common_procedure::ProcedureManagerRef;
use common_query::Output;
use common_recordbatch::error::StreamTimeoutSnafu;
use common_recordbatch::RecordBatchStreamWrapper;
use common_telemetry::{debug, error, info, tracing};
use datafusion_expr::LogicalPlan;
use futures::{Stream, StreamExt};
use log_store::raft_engine::RaftEngineBackend;
use operator::delete::DeleterRef;
use operator::insert::InserterRef;
@@ -65,20 +71,21 @@ use servers::interceptor::{
};
use servers::prometheus_handler::PrometheusHandler;
use servers::query_handler::sql::SqlQueryHandler;
use session::context::QueryContextRef;
use session::context::{Channel, QueryContextRef};
use session::table_name::table_idents_to_full_name;
use snafu::prelude::*;
use sql::dialect::Dialect;
use sql::parser::{ParseOptions, ParserContext};
use sql::statements::copy::{CopyDatabase, CopyTable};
use sql::statements::statement::Statement;
use sql::statements::tql::Tql;
use sqlparser::ast::ObjectName;
pub use standalone::StandaloneDatanodeManager;
use crate::error::{
self, Error, ExecLogicalPlanSnafu, ExecutePromqlSnafu, ExternalSnafu, InvalidSqlSnafu,
ParseSqlSnafu, PermissionSnafu, PlanStatementSnafu, Result, SqlExecInterceptedSnafu,
TableOperationSnafu,
StatementTimeoutSnafu, TableOperationSnafu,
};
use crate::limiter::LimiterRef;
use crate::slow_query_recorder::SlowQueryRecorder;
@@ -125,10 +132,12 @@ impl Instance {
max_running_procedures: procedure_config.max_running_procedures,
..Default::default()
};
let runtime_switch_manager = Arc::new(RuntimeSwitchManager::new(kv_backend.clone()));
let procedure_manager = Arc::new(LocalManager::new(
manager_config,
kv_state_store.clone(),
kv_state_store,
Some(runtime_switch_manager),
));
Ok((kv_backend, procedure_manager))
@@ -188,56 +197,7 @@ impl Instance {
Some(query_ctx.process_id()),
);
let query_fut = async {
match stmt {
Statement::Query(_) | Statement::Explain(_) | Statement::Delete(_) => {
// TODO: remove this when format is supported in datafusion
if let Statement::Explain(explain) = &stmt {
if let Some(format) = explain.format() {
query_ctx.set_explain_format(format.to_string());
}
}
let stmt = QueryStatement::Sql(stmt);
let plan = self
.statement_executor
.plan(&stmt, query_ctx.clone())
.await?;
let QueryStatement::Sql(stmt) = stmt else {
unreachable!()
};
query_interceptor.pre_execute(&stmt, Some(&plan), query_ctx.clone())?;
self.statement_executor
.exec_plan(plan, query_ctx)
.await
.context(TableOperationSnafu)
}
Statement::Tql(tql) => {
let plan = self
.statement_executor
.plan_tql(tql.clone(), &query_ctx)
.await?;
query_interceptor.pre_execute(
&Statement::Tql(tql),
Some(&plan),
query_ctx.clone(),
)?;
self.statement_executor
.exec_plan(plan, query_ctx)
.await
.context(TableOperationSnafu)
}
_ => {
query_interceptor.pre_execute(&stmt, None, query_ctx.clone())?;
self.statement_executor
.execute_sql(stmt, query_ctx)
.await
.context(TableOperationSnafu)
}
}
};
let query_fut = self.exec_statement_with_timeout(stmt, query_ctx, query_interceptor);
CancellableFuture::new(query_fut, ticket.cancellation_handle.clone())
.await
@@ -254,6 +214,153 @@ impl Instance {
Output { data, meta }
})
}
async fn exec_statement_with_timeout(
&self,
stmt: Statement,
query_ctx: QueryContextRef,
query_interceptor: Option<&SqlQueryInterceptorRef<Error>>,
) -> Result<Output> {
let timeout = derive_timeout(&stmt, &query_ctx);
match timeout {
Some(timeout) => {
let start = tokio::time::Instant::now();
let output = tokio::time::timeout(
timeout,
self.exec_statement(stmt, query_ctx, query_interceptor),
)
.await
.map_err(|_| StatementTimeoutSnafu.build())??;
// compute remaining timeout
let remaining_timeout = timeout.checked_sub(start.elapsed()).unwrap_or_default();
attach_timeout(output, remaining_timeout)
}
None => {
self.exec_statement(stmt, query_ctx, query_interceptor)
.await
}
}
}
async fn exec_statement(
&self,
stmt: Statement,
query_ctx: QueryContextRef,
query_interceptor: Option<&SqlQueryInterceptorRef<Error>>,
) -> Result<Output> {
match stmt {
Statement::Query(_) | Statement::Explain(_) | Statement::Delete(_) => {
// TODO: remove this when format is supported in datafusion
if let Statement::Explain(explain) = &stmt {
if let Some(format) = explain.format() {
query_ctx.set_explain_format(format.to_string());
}
}
self.plan_and_exec_sql(stmt, &query_ctx, query_interceptor)
.await
}
Statement::Tql(tql) => {
self.plan_and_exec_tql(&query_ctx, query_interceptor, tql)
.await
}
_ => {
query_interceptor.pre_execute(&stmt, None, query_ctx.clone())?;
self.statement_executor
.execute_sql(stmt, query_ctx)
.await
.context(TableOperationSnafu)
}
}
}
async fn plan_and_exec_sql(
&self,
stmt: Statement,
query_ctx: &QueryContextRef,
query_interceptor: Option<&SqlQueryInterceptorRef<Error>>,
) -> Result<Output> {
let stmt = QueryStatement::Sql(stmt);
let plan = self
.statement_executor
.plan(&stmt, query_ctx.clone())
.await?;
let QueryStatement::Sql(stmt) = stmt else {
unreachable!()
};
query_interceptor.pre_execute(&stmt, Some(&plan), query_ctx.clone())?;
self.statement_executor
.exec_plan(plan, query_ctx.clone())
.await
.context(TableOperationSnafu)
}
async fn plan_and_exec_tql(
&self,
query_ctx: &QueryContextRef,
query_interceptor: Option<&SqlQueryInterceptorRef<Error>>,
tql: Tql,
) -> Result<Output> {
let plan = self
.statement_executor
.plan_tql(tql.clone(), query_ctx)
.await?;
query_interceptor.pre_execute(&Statement::Tql(tql), Some(&plan), query_ctx.clone())?;
self.statement_executor
.exec_plan(plan, query_ctx.clone())
.await
.context(TableOperationSnafu)
}
}
/// If the relevant variables are set, the timeout is enforced for all PostgreSQL statements.
/// For MySQL, it applies only to read-only statements.
fn derive_timeout(stmt: &Statement, query_ctx: &QueryContextRef) -> Option<Duration> {
let query_timeout = query_ctx.query_timeout()?;
if query_timeout.is_zero() {
return None;
}
match query_ctx.channel() {
Channel::Mysql if stmt.is_readonly() => Some(query_timeout),
Channel::Postgres => Some(query_timeout),
_ => None,
}
}
fn attach_timeout(output: Output, mut timeout: Duration) -> Result<Output> {
if timeout.is_zero() {
return StatementTimeoutSnafu.fail();
}
let output = match output.data {
OutputData::AffectedRows(_) | OutputData::RecordBatches(_) => output,
OutputData::Stream(mut stream) => {
let schema = stream.schema();
let s = Box::pin(stream! {
let mut start = tokio::time::Instant::now();
while let Some(item) = tokio::time::timeout(timeout, stream.next()).await.map_err(|_| StreamTimeoutSnafu.build())? {
yield item;
let now = tokio::time::Instant::now();
timeout = timeout.checked_sub(now - start).unwrap_or(Duration::ZERO);
start = now;
// tokio::time::timeout may not return an error immediately when timeout is 0.
if timeout.is_zero() {
StreamTimeoutSnafu.fail()?;
}
}
}) as Pin<Box<dyn Stream<Item = _> + Send>>;
let stream = RecordBatchStreamWrapper {
schema,
stream: s,
output_ordering: None,
metrics: Default::default(),
};
Output::new(OutputData::Stream(Box::pin(stream)), output.meta)
}
};
Ok(output)
}
#[async_trait]

View File
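`attach_timeout` above deducts the elapsed time from the remaining budget after every batch, so a streaming result cannot outlive the statement timeout even though each `tokio::time::timeout` call only guards a single `next()`. A self-contained sketch of the same budgeted-stream idea, assuming the `tokio`, `futures` and `async-stream` crates (the helper name `with_timeout_budget` and the string error are made up for illustration):

use std::time::Duration;

use async_stream::stream;
use futures::{Stream, StreamExt};

// Wrap a stream so that the total time spent waiting for items is bounded by `budget`,
// decrementing the budget after every item like `attach_timeout` above.
fn with_timeout_budget<S>(
    mut inner: S,
    mut budget: Duration,
) -> impl Stream<Item = Result<S::Item, &'static str>>
where
    S: Stream + Unpin,
{
    stream! {
        let mut start = tokio::time::Instant::now();
        loop {
            match tokio::time::timeout(budget, inner.next()).await {
                Err(_) => {
                    yield Err("stream timed out");
                    break;
                }
                Ok(None) => break, // inner stream finished in time
                Ok(Some(item)) => {
                    yield Ok(item);
                    let now = tokio::time::Instant::now();
                    budget = budget.checked_sub(now - start).unwrap_or(Duration::ZERO);
                    start = now;
                    // timeout(Duration::ZERO, ..) may still poll once, so bail out explicitly.
                    if budget.is_zero() {
                        yield Err("stream timed out");
                        break;
                    }
                }
            }
        }
    }
}

#[tokio::main]
async fn main() {
    let batches = futures::stream::iter(vec![1, 2, 3]);
    let mut limited = Box::pin(with_timeout_budget(batches, Duration::from_secs(1)));
    while let Some(batch) = limited.next().await {
        println!("{batch:?}");
    }
}

In the real code the wrapped stream is re-packed into a `RecordBatchStreamWrapper`, so the schema and output metadata are preserved for the caller.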

@@ -218,6 +218,7 @@ mod tests {
let mut writer = Cursor::new(Vec::new());
let mut creator = BloomFilterCreator::new(
4,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,

View File

@@ -30,9 +30,6 @@ use crate::bloom_filter::SEED;
use crate::external_provider::ExternalTempFileProvider;
use crate::Bytes;
/// The false positive rate of the Bloom filter.
pub const FALSE_POSITIVE_RATE: f64 = 0.01;
/// `BloomFilterCreator` is responsible for creating and managing bloom filters
/// for a set of elements. It divides the rows into segments and creates
/// bloom filters for each segment.
@@ -79,6 +76,7 @@ impl BloomFilterCreator {
/// `rows_per_segment` <= 0
pub fn new(
rows_per_segment: usize,
false_positive_rate: f64,
intermediate_provider: Arc<dyn ExternalTempFileProvider>,
global_memory_usage: Arc<AtomicUsize>,
global_memory_usage_threshold: Option<usize>,
@@ -95,6 +93,7 @@ impl BloomFilterCreator {
cur_seg_distinct_elems_mem_usage: 0,
global_memory_usage: global_memory_usage.clone(),
finalized_bloom_filters: FinalizedBloomFilterStorage::new(
false_positive_rate,
intermediate_provider,
global_memory_usage,
global_memory_usage_threshold,
@@ -263,6 +262,7 @@ mod tests {
let mut writer = Cursor::new(Vec::new());
let mut creator = BloomFilterCreator::new(
2,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,
@@ -337,6 +337,7 @@ mod tests {
let mut writer = Cursor::new(Vec::new());
let mut creator: BloomFilterCreator = BloomFilterCreator::new(
2,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,
@@ -418,6 +419,7 @@ mod tests {
let mut writer = Cursor::new(Vec::new());
let mut creator = BloomFilterCreator::new(
2,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,

View File

@@ -23,7 +23,7 @@ use futures::{stream, AsyncWriteExt, Stream};
use snafu::ResultExt;
use crate::bloom_filter::creator::intermediate_codec::IntermediateBloomFilterCodecV1;
use crate::bloom_filter::creator::{FALSE_POSITIVE_RATE, SEED};
use crate::bloom_filter::creator::SEED;
use crate::bloom_filter::error::{IntermediateSnafu, IoSnafu, Result};
use crate::external_provider::ExternalTempFileProvider;
use crate::Bytes;
@@ -33,6 +33,9 @@ const MIN_MEMORY_USAGE_THRESHOLD: usize = 1024 * 1024; // 1MB
/// Storage for finalized Bloom filters.
pub struct FinalizedBloomFilterStorage {
/// The false positive rate of the Bloom filter.
false_positive_rate: f64,
/// Indices of the segments in the sequence of finalized Bloom filters.
segment_indices: Vec<usize>,
@@ -65,12 +68,14 @@ pub struct FinalizedBloomFilterStorage {
impl FinalizedBloomFilterStorage {
/// Creates a new `FinalizedBloomFilterStorage`.
pub fn new(
false_positive_rate: f64,
intermediate_provider: Arc<dyn ExternalTempFileProvider>,
global_memory_usage: Arc<AtomicUsize>,
global_memory_usage_threshold: Option<usize>,
) -> Self {
let external_prefix = format!("intm-bloom-filters-{}", uuid::Uuid::new_v4());
Self {
false_positive_rate,
segment_indices: Vec::new(),
in_memory: Vec::new(),
intermediate_file_id_counter: 0,
@@ -96,7 +101,7 @@ impl FinalizedBloomFilterStorage {
elems: impl IntoIterator<Item = Bytes>,
element_count: usize,
) -> Result<()> {
let mut bf = BloomFilter::with_false_pos(FALSE_POSITIVE_RATE)
let mut bf = BloomFilter::with_false_pos(self.false_positive_rate)
.seed(&SEED)
.expected_items(element_count);
for elem in elems.into_iter() {
@@ -284,6 +289,7 @@ mod tests {
let global_memory_usage_threshold = Some(1024 * 1024); // 1MB
let provider = Arc::new(mock_provider);
let mut storage = FinalizedBloomFilterStorage::new(
0.01,
provider,
global_memory_usage.clone(),
global_memory_usage_threshold,
@@ -340,6 +346,7 @@ mod tests {
let global_memory_usage_threshold = Some(1024 * 1024); // 1MB
let provider = Arc::new(mock_provider);
let mut storage = FinalizedBloomFilterStorage::new(
0.01,
provider,
global_memory_usage.clone(),
global_memory_usage_threshold,

View File
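Threading `false_positive_rate` down to `BloomFilter::with_false_pos` matters because the target rate, together with the expected item count, determines how many bits and hash functions each per-segment filter needs. The sketch below shows the standard Bloom filter sizing formulas; it illustrates the trade-off only and is not necessarily the exact parameter choice made by the underlying bloom filter crate:

// Standard Bloom filter sizing: for n expected items and false positive rate p,
//   bits   m = ceil(-n * ln(p) / (ln 2)^2)
//   hashes k = round((m / n) * ln 2)
fn bloom_filter_params(expected_items: usize, false_positive_rate: f64) -> (usize, usize) {
    assert!(false_positive_rate > 0.0 && false_positive_rate < 1.0);
    let n = expected_items as f64;
    let ln2 = std::f64::consts::LN_2;
    let m = (-n * false_positive_rate.ln() / (ln2 * ln2)).ceil();
    let k = ((m / n) * ln2).round().max(1.0);
    (m as usize, k as usize)
}

fn main() {
    // Compare the default rate used above (0.01) with a stricter 0.001 for an
    // illustrative segment of 10,000 distinct elements.
    for rate in [0.01, 0.001] {
        let (bits, hashes) = bloom_filter_params(10_000, rate);
        println!(
            "p = {:<5} -> {} bits total, {} hash functions ({:.1} bits per item)",
            rate,
            bits,
            hashes,
            bits as f64 / 10_000.0
        );
    }
}

Tightening the rate from 0.01 to 0.001 costs roughly 50% more bits per item, which is presumably why the rate is now a per-index option instead of a hard-coded constant.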

@@ -222,6 +222,7 @@ mod tests {
let mut writer = Cursor::new(vec![]);
let mut creator = BloomFilterCreator::new(
2,
0.01,
Arc::new(MockExternalTempFileProvider::new()),
Arc::new(AtomicUsize::new(0)),
None,

View File

@@ -45,6 +45,7 @@ impl BloomFilterFulltextIndexCreator {
pub fn new(
config: Config,
rows_per_segment: usize,
false_positive_rate: f64,
intermediate_provider: Arc<dyn ExternalTempFileProvider>,
global_memory_usage: Arc<AtomicUsize>,
global_memory_usage_threshold: Option<usize>,
@@ -57,6 +58,7 @@ impl BloomFilterFulltextIndexCreator {
let inner = BloomFilterCreator::new(
rows_per_segment,
false_positive_rate,
intermediate_provider,
global_memory_usage,
global_memory_usage_threshold,

View File

@@ -12,12 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use common_error::define_from_tonic_status;
use common_error::ext::ErrorExt;
use common_error::status_code::{convert_tonic_code_to_status_code, StatusCode};
use common_error::{GREPTIME_DB_HEADER_ERROR_CODE, GREPTIME_DB_HEADER_ERROR_MSG};
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use snafu::{location, Location, Snafu};
use tonic::Status;
#[derive(Snafu)]
#[snafu(visibility(pub))]
@@ -161,33 +160,4 @@ impl Error {
}
}
// FIXME(dennis): partial duplicated with src/client/src/error.rs
impl From<Status> for Error {
fn from(e: Status) -> Self {
fn get_metadata_value(s: &Status, key: &str) -> Option<String> {
s.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
let code = get_metadata_value(&e, GREPTIME_DB_HEADER_ERROR_CODE).and_then(|s| {
if let Ok(code) = s.parse::<u32>() {
StatusCode::from_u32(code)
} else {
None
}
});
let tonic_code = e.code();
let code = code.unwrap_or_else(|| convert_tonic_code_to_status_code(tonic_code));
let msg = get_metadata_value(&e, GREPTIME_DB_HEADER_ERROR_MSG)
.unwrap_or_else(|| e.message().to_string());
Self::MetaServer {
code,
msg,
tonic_code,
location: location!(),
}
}
}
define_from_tonic_status!(Error, MetaServer);

View File

@@ -374,6 +374,7 @@ pub enum Error {
location: Location,
},
#[cfg(feature = "mysql_kvbackend")]
#[snafu(display("Failed to decode sql value"))]
DecodeSqlValue {
#[snafu(source)]
@@ -687,8 +688,8 @@ pub enum Error {
location: Location,
},
#[snafu(display("Maintenance mode manager error"))]
MaintenanceModeManager {
#[snafu(display("Runtime switch manager error"))]
RuntimeSwitchManager {
source: common_meta::error::Error,
#[snafu(implicit)]
location: Location,
@@ -1015,7 +1016,7 @@ impl ErrorExt for Error {
Error::SubmitDdlTask { source, .. } => source.status_code(),
Error::ConvertProtoData { source, .. }
| Error::TableMetadataManager { source, .. }
| Error::MaintenanceModeManager { source, .. }
| Error::RuntimeSwitchManager { source, .. }
| Error::KvBackend { source, .. }
| Error::UnexpectedLogicalRouteTable { source, .. }
| Error::UpdateTopicNameValue { source, .. } => source.status_code(),

View File

@@ -27,7 +27,7 @@ use common_greptimedb_telemetry::GreptimeDBTelemetryTask;
use common_meta::cache_invalidator::CacheInvalidatorRef;
use common_meta::ddl::ProcedureExecutorRef;
use common_meta::distributed_time_constants;
use common_meta::key::maintenance::MaintenanceModeManagerRef;
use common_meta::key::runtime_switch::RuntimeSwitchManagerRef;
use common_meta::key::TableMetadataManagerRef;
use common_meta::kv_backend::{KvBackendRef, ResettableKvBackend, ResettableKvBackendRef};
use common_meta::leadership_notifier::{
@@ -110,12 +110,9 @@ pub struct MetasrvOptions {
pub use_memory_store: bool,
/// Whether to enable region failover.
pub enable_region_failover: bool,
/// Delay before initializing region failure detectors.
///
/// This delay helps prevent premature initialization of region failure detectors in cases where
/// cluster maintenance mode is enabled right after metasrv starts, especially when the cluster
/// is not deployed via the recommended GreptimeDB Operator. Without this delay, early detector registration
/// may trigger unnecessary region failovers during datanode startup.
/// The delay before starting region failure detection.
/// This delay helps prevent Metasrv from triggering unnecessary region failovers before all Datanodes are fully started.
/// Especially useful when the cluster is not deployed with GreptimeDB Operator and maintenance mode is not enabled.
#[serde(with = "humantime_serde")]
pub region_failure_detector_initialization_delay: Duration,
/// Whether to allow region failover on local WAL.
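The new `region_failure_detector_initialization_delay` option is (de)serialized through `humantime_serde`, so the delay can be written as a human-readable duration in the metasrv configuration. A stripped-down sketch of how that attribute behaves; the struct, the `toml` dependency, and the `"10m"` value are assumptions for illustration:

```rust
use std::time::Duration;

use serde::Deserialize;

#[derive(Deserialize)]
struct Options {
    // Hypothetical stand-in for MetasrvOptions; only the serde attribute matters here.
    #[serde(with = "humantime_serde")]
    region_failure_detector_initialization_delay: Duration,
}

fn main() {
    // Operators can write durations such as "30s", "10m", or "1h" in the config file.
    let opts: Options =
        toml::from_str(r#"region_failure_detector_initialization_delay = "10m""#).unwrap();
    assert_eq!(
        opts.region_failure_detector_initialization_delay,
        Duration::from_secs(600)
    );
}
```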
@@ -437,7 +434,7 @@ pub struct Metasrv {
procedure_executor: ProcedureExecutorRef,
wal_options_allocator: WalOptionsAllocatorRef,
table_metadata_manager: TableMetadataManagerRef,
maintenance_mode_manager: MaintenanceModeManagerRef,
runtime_switch_manager: RuntimeSwitchManagerRef,
memory_region_keeper: MemoryRegionKeeperRef,
greptimedb_telemetry_task: Arc<GreptimeDBTelemetryTask>,
region_migration_manager: RegionMigrationManagerRef,
@@ -696,8 +693,8 @@ impl Metasrv {
&self.table_metadata_manager
}
pub fn maintenance_mode_manager(&self) -> &MaintenanceModeManagerRef {
&self.maintenance_mode_manager
pub fn runtime_switch_manager(&self) -> &RuntimeSwitchManagerRef {
&self.runtime_switch_manager
}
pub fn memory_region_keeper(&self) -> &MemoryRegionKeeperRef {


@@ -29,7 +29,7 @@ use common_meta::ddl_manager::DdlManager;
use common_meta::distributed_time_constants;
use common_meta::key::flow::flow_state::FlowStateManager;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::maintenance::MaintenanceModeManager;
use common_meta::key::runtime_switch::{RuntimeSwitchManager, RuntimeSwitchManagerRef};
use common_meta::key::TableMetadataManager;
use common_meta::kv_backend::memory::MemoryKvBackend;
use common_meta::kv_backend::{KvBackendRef, ResettableKvBackendRef};
@@ -193,7 +193,9 @@ impl MetasrvBuilder {
let selector = selector.unwrap_or_else(|| Arc::new(LeaseBasedSelector::default()));
let pushers = Pushers::default();
let mailbox = build_mailbox(&kv_backend, &pushers);
let procedure_manager = build_procedure_manager(&options, &kv_backend);
let runtime_switch_manager = Arc::new(RuntimeSwitchManager::new(kv_backend.clone()));
let procedure_manager =
build_procedure_manager(&options, &kv_backend, &runtime_switch_manager);
let table_metadata_manager = Arc::new(TableMetadataManager::new(
leader_cached_kv_backend.clone() as _,
@@ -201,7 +203,7 @@ impl MetasrvBuilder {
let flow_metadata_manager = Arc::new(FlowMetadataManager::new(
leader_cached_kv_backend.clone() as _,
));
let maintenance_mode_manager = Arc::new(MaintenanceModeManager::new(kv_backend.clone()));
let selector_ctx = SelectorContext {
server_addr: options.grpc.server_addr.clone(),
datanode_lease_secs: distributed_time_constants::DATANODE_LEASE_SECS,
@@ -341,7 +343,7 @@ impl MetasrvBuilder {
selector_ctx.clone(),
supervisor_selector,
region_migration_manager.clone(),
maintenance_mode_manager.clone(),
runtime_switch_manager.clone(),
peer_lookup_service.clone(),
leader_cached_kv_backend.clone(),
);
@@ -464,7 +466,7 @@ impl MetasrvBuilder {
procedure_executor: ddl_manager,
wal_options_allocator,
table_metadata_manager,
maintenance_mode_manager,
runtime_switch_manager,
greptimedb_telemetry_task: get_greptimedb_telemetry_task(
Some(metasrv_home),
meta_peer_client,
@@ -507,6 +509,7 @@ fn build_mailbox(kv_backend: &KvBackendRef, pushers: &Pushers) -> MailboxRef {
fn build_procedure_manager(
options: &MetasrvOptions,
kv_backend: &KvBackendRef,
runtime_switch_manager: &RuntimeSwitchManagerRef,
) -> ProcedureManagerRef {
let manager_config = ManagerConfig {
max_retry_times: options.procedure.max_retry_times,
@@ -527,6 +530,7 @@ fn build_procedure_manager(
manager_config,
kv_state_store.clone(),
kv_state_store,
Some(runtime_switch_manager.clone()),
))
}
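`build_procedure_manager` now receives the `RuntimeSwitchManagerRef` and forwards it as an `Option` into the procedure manager, replacing the standalone `MaintenanceModeManager` construction. The diff only shows this wiring; the sketch below merely illustrates the general pattern of gating submissions behind one shared, runtime-toggleable switch, and every type and field name in it is hypothetical rather than GreptimeDB API:

```rust
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

// Illustrative stand-in for a KV-backed runtime switch.
struct RuntimeSwitch {
    pause_procedures: AtomicBool,
}

struct ProcedureManager {
    runtime_switch: Option<Arc<RuntimeSwitch>>,
}

impl ProcedureManager {
    fn new(runtime_switch: Option<Arc<RuntimeSwitch>>) -> Self {
        Self { runtime_switch }
    }

    fn submit(&self, name: &str) {
        // Handing the switch over at construction time lets every submission consult
        // the same flag without re-plumbing it through each call site.
        let paused = self
            .runtime_switch
            .as_ref()
            .map(|s| s.pause_procedures.load(Ordering::Relaxed))
            .unwrap_or(false);
        if paused {
            println!("procedure {name} rejected: procedures are paused");
        } else {
            println!("procedure {name} submitted");
        }
    }
}

fn main() {
    let switch = Arc::new(RuntimeSwitch {
        pause_procedures: AtomicBool::new(false),
    });
    let manager = ProcedureManager::new(Some(switch.clone()));
    manager.submit("region-migration");
    switch.pause_procedures.store(true, Ordering::Relaxed);
    manager.submit("region-migration");
}
```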


@@ -49,6 +49,7 @@ use common_telemetry::{error, info};
use manager::RegionMigrationProcedureGuard;
pub use manager::{
RegionMigrationManagerRef, RegionMigrationProcedureTask, RegionMigrationProcedureTracker,
RegionMigrationTriggerReason,
};
use serde::{Deserialize, Serialize};
use snafu::{OptionExt, ResultExt};
@@ -86,6 +87,9 @@ pub struct PersistentContext {
/// The timeout for downgrading leader region and upgrading candidate region operations.
#[serde(with = "humantime_serde", default = "default_timeout")]
timeout: Duration,
/// The trigger reason of region migration.
#[serde(default)]
trigger_reason: RegionMigrationTriggerReason,
}
fn default_timeout() -> Duration {
@@ -617,6 +621,7 @@ impl RegionMigrationProcedure {
from_peer: persistent_ctx.from_peer.clone(),
to_peer: persistent_ctx.to_peer.clone(),
timeout: persistent_ctx.timeout,
trigger_reason: persistent_ctx.trigger_reason,
});
let context = context_factory.new_context(persistent_ctx);
@@ -793,7 +798,7 @@ mod tests {
let procedure = RegionMigrationProcedure::new(persistent_context, context, None);
let serialized = procedure.dump().unwrap();
let expected = r#"{"persistent_ctx":{"catalog":"greptime","schema":"public","from_peer":{"id":1,"addr":""},"to_peer":{"id":2,"addr":""},"region_id":4398046511105,"timeout":"10s"},"state":{"region_migration_state":"RegionMigrationStart"}}"#;
let expected = r#"{"persistent_ctx":{"catalog":"greptime","schema":"public","from_peer":{"id":1,"addr":""},"to_peer":{"id":2,"addr":""},"region_id":4398046511105,"timeout":"10s","trigger_reason":"Unknown"},"state":{"region_migration_state":"RegionMigrationStart"}}"#;
assert_eq!(expected, serialized);
}
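`PersistentContext` gains a `trigger_reason` field guarded by `#[serde(default)]`, which is what keeps procedures persisted by older versions deserializable: when the field is missing, it falls back to the `#[default]` variant, matching the updated dump test's `"trigger_reason":"Unknown"`. A self-contained sketch of that behavior with simplified names (this is not the real `PersistentContext`):

```rust
use serde::{Deserialize, Serialize};

#[derive(Default, Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize)]
enum TriggerReason {
    #[default]
    Unknown,
    Manual,
    AutoRebalance,
    Failover,
}

#[derive(Serialize, Deserialize)]
struct Ctx {
    region_id: u64,
    // `#[serde(default)]` makes the field optional on deserialization, so contexts
    // written before the field existed still load, defaulting to Unknown.
    #[serde(default)]
    trigger_reason: TriggerReason,
}

fn main() {
    // JSON persisted by an older version, before the field existed.
    let old = r#"{"region_id":4398046511105}"#;
    let ctx: Ctx = serde_json::from_str(old).unwrap();
    assert_eq!(ctx.trigger_reason, TriggerReason::Unknown);
    println!("region {} migrated, reason {:?}", ctx.region_id, ctx.trigger_reason);
}
```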


@@ -51,10 +51,11 @@ impl State for CloseDowngradedRegion {
warn!(err; "Failed to close downgraded leader region: {region_id} on datanode {:?}", downgrade_leader_datanode);
}
info!(
"Region migration is finished: region_id: {}, from_peer: {}, to_peer: {}, {}",
"Region migration is finished: region_id: {}, from_peer: {}, to_peer: {}, trigger_reason: {}, {}",
ctx.region_id(),
ctx.persistent_ctx.from_peer,
ctx.persistent_ctx.to_peer,
ctx.persistent_ctx.trigger_reason,
ctx.volatile_ctx.metrics,
);
Ok((Box::new(RegionMigrationEnd), Status::done()))


@@ -352,6 +352,7 @@ mod tests {
use super::*;
use crate::error::Error;
use crate::procedure::region_migration::manager::RegionMigrationTriggerReason;
use crate::procedure::region_migration::test_util::{new_procedure_context, TestingEnv};
use crate::procedure::region_migration::{ContextFactory, PersistentContext};
use crate::procedure::test_util::{
@@ -366,6 +367,7 @@ mod tests {
to_peer: Peer::empty(2),
region_id: RegionId::new(1024, 1),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
}
}


@@ -24,6 +24,7 @@ use common_meta::peer::Peer;
use common_meta::rpc::router::RegionRoute;
use common_procedure::{watcher, ProcedureId, ProcedureManagerRef, ProcedureWithId};
use common_telemetry::{error, info, warn};
use serde::{Deserialize, Serialize};
use snafu::{ensure, OptionExt, ResultExt};
use store_api::storage::RegionId;
use table::table_name::TableName;
@@ -104,15 +105,38 @@ pub struct RegionMigrationProcedureTask {
pub(crate) from_peer: Peer,
pub(crate) to_peer: Peer,
pub(crate) timeout: Duration,
pub(crate) trigger_reason: RegionMigrationTriggerReason,
}
/// The reason why the region migration procedure is triggered.
#[derive(Default, Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, strum::Display)]
#[strum(serialize_all = "PascalCase")]
pub enum RegionMigrationTriggerReason {
#[default]
/// The region migration procedure is triggered by unknown reason.
Unknown,
/// The region migration procedure is triggered by administrator.
Manual,
/// The region migration procedure is triggered by auto rebalance.
AutoRebalance,
/// The region migration procedure is triggered by failover.
Failover,
}
impl RegionMigrationProcedureTask {
pub fn new(region_id: RegionId, from_peer: Peer, to_peer: Peer, timeout: Duration) -> Self {
pub fn new(
region_id: RegionId,
from_peer: Peer,
to_peer: Peer,
timeout: Duration,
trigger_reason: RegionMigrationTriggerReason,
) -> Self {
Self {
region_id,
from_peer,
to_peer,
timeout,
trigger_reason,
}
}
}
@@ -121,8 +145,8 @@ impl Display for RegionMigrationProcedureTask {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"region: {}, from_peer: {}, to_peer: {}",
self.region_id, self.from_peer, self.to_peer
"region: {}, from_peer: {}, to_peer: {}, trigger_reason: {}",
self.region_id, self.from_peer, self.to_peer, self.trigger_reason
)
}
}
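`RegionMigrationTriggerReason` derives both the serde traits and `strum::Display` with `serialize_all = "PascalCase"`, so the reason appears verbatim in the new `Display`/log output and as a plain string in serialized procedures. A small sketch, assuming `strum` (with its derive feature), `serde`, and `serde_json` as dependencies:

```rust
use serde::{Deserialize, Serialize};

// Mirrors the shape of RegionMigrationTriggerReason from the hunk above.
#[derive(Default, Debug, Clone, Copy, PartialEq, Eq, Serialize, Deserialize, strum::Display)]
#[strum(serialize_all = "PascalCase")]
enum Reason {
    #[default]
    Unknown,
    Manual,
    AutoRebalance,
    Failover,
}

fn main() {
    // strum::Display drives the human-readable form used in the Display impl and log lines.
    assert_eq!(Reason::AutoRebalance.to_string(), "AutoRebalance");
    println!("trigger_reason: {}", Reason::Manual);
    // serde writes the variant name, which is what the updated dump test expects by default.
    assert_eq!(serde_json::to_string(&Reason::default()).unwrap(), r#""Unknown""#);
}
```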
@@ -357,6 +381,7 @@ impl RegionMigrationManager {
from_peer,
to_peer,
timeout,
trigger_reason,
} = task.clone();
let procedure = RegionMigrationProcedure::new(
PersistentContext {
@@ -366,6 +391,7 @@ impl RegionMigrationManager {
from_peer,
to_peer,
timeout,
trigger_reason,
},
self.context_factory.clone(),
Some(guard),
@@ -424,6 +450,7 @@ mod test {
from_peer: Peer::empty(2),
to_peer: Peer::empty(1),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
};
// Inserts one
manager
@@ -448,6 +475,7 @@ mod test {
from_peer: Peer::empty(1),
to_peer: Peer::empty(1),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
};
let err = manager.submit_procedure(task).await.unwrap_err();
@@ -465,6 +493,7 @@ mod test {
from_peer: Peer::empty(1),
to_peer: Peer::empty(2),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
};
let err = manager.submit_procedure(task).await.unwrap_err();
@@ -482,6 +511,7 @@ mod test {
from_peer: Peer::empty(1),
to_peer: Peer::empty(2),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
};
let table_info = new_test_table_info(1024, vec![1]).into();
@@ -509,6 +539,7 @@ mod test {
from_peer: Peer::empty(1),
to_peer: Peer::empty(2),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
};
let table_info = new_test_table_info(1024, vec![1]).into();
@@ -537,6 +568,7 @@ mod test {
from_peer: Peer::empty(3),
to_peer: Peer::empty(2),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
};
let table_info = new_test_table_info(1024, vec![1]).into();
@@ -570,6 +602,7 @@ mod test {
from_peer: Peer::empty(1),
to_peer: Peer::empty(2),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
};
let table_info = new_test_table_info(1024, vec![1]).into();
@@ -597,6 +630,7 @@ mod test {
from_peer: Peer::empty(1),
to_peer: Peer::empty(2),
timeout: Duration::from_millis(1000),
trigger_reason: RegionMigrationTriggerReason::Manual,
};
let err = manager


@@ -44,11 +44,12 @@ impl State for RegionMigrationAbort {
_procedure_ctx: &ProcedureContext,
) -> Result<(Box<dyn State>, Status)> {
warn!(
"Region migration is aborted: {}, region_id: {}, from_peer: {}, to_peer: {}, {}",
"Region migration is aborted: {}, region_id: {}, from_peer: {}, to_peer: {}, trigger_reason: {}, {}",
self.reason,
ctx.region_id(),
ctx.persistent_ctx.from_peer,
ctx.persistent_ctx.to_peer,
ctx.persistent_ctx.trigger_reason,
ctx.volatile_ctx.metrics,
);
error::MigrationAbortSnafu {

Some files were not shown because too many files have changed in this diff.