Compare commits

...

238 Commits

Author SHA1 Message Date
Ning Sun
62d109c1f4 fix: allow case-insensitive timezone settings (#7207) 2025-11-08 15:56:27 +00:00
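A minimal sketch of what case-insensitive timezone matching can look like, assuming chrono-tz is the underlying zone database (an assumption for illustration; the actual parser in #7207 may differ): try the exact name first, then fall back to a case-insensitive scan of the known zone names.

```rust
use chrono_tz::{Tz, TZ_VARIANTS};

/// Parse a timezone name, accepting any casing (e.g. "asia/shanghai").
fn parse_timezone(name: &str) -> Option<Tz> {
    // Exact (case-sensitive) parse first, then a case-insensitive fallback.
    name.parse::<Tz>().ok().or_else(|| {
        TZ_VARIANTS
            .iter()
            .find(|tz| tz.name().eq_ignore_ascii_case(name))
            .copied()
    })
}

fn main() {
    let tz = parse_timezone("asia/shanghai").expect("known zone");
    assert_eq!(tz.name(), "Asia/Shanghai");
}
```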
Alan Tang
910a383420 feat(expr): support avg functions on vector (#7146)
* feat(expr): support vec_elem_avg function

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: support vec_avg function

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* test: add more query test for avg aggregator

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* fix: fix the merge batch mode

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* refactor: use sum and count as state for avg function

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* refactor: refactor merge batch mode for avg function

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: add additional vector restrictions for validation

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

---------

Signed-off-by: Alan Tang <jmtangcs@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-11-07 13:42:14 +00:00
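A minimal sketch of the (sum, count) accumulator state that #7146 describes for the avg function; the real vec_avg operates on vector-typed columns, and all names here are illustrative. Keeping (sum, count) as the state, rather than a running average, is what makes partial states mergeable across batches.

```rust
/// Simplified avg accumulator: state is (sum, count), not the average itself.
#[derive(Default)]
struct AvgState {
    sum: f64,
    count: u64,
}

impl AvgState {
    fn update(&mut self, value: f64) {
        self.sum += value;
        self.count += 1;
    }

    /// Merging partial states is what "merge batch mode" needs: each batch
    /// keeps its own (sum, count), and averages are never averaged directly.
    fn merge(&mut self, other: &AvgState) {
        self.sum += other.sum;
        self.count += other.count;
    }

    fn finish(&self) -> Option<f64> {
        (self.count > 0).then(|| self.sum / self.count as f64)
    }
}

fn main() {
    let (mut a, mut b) = (AvgState::default(), AvgState::default());
    [1.0, 2.0].iter().for_each(|v| a.update(*v));
    [3.0, 4.0].iter().for_each(|v| b.update(*v));
    a.merge(&b);
    assert_eq!(a.finish(), Some(2.5));
}
```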
Weny Xu
af6bbacc8c fix: add serde defaults for MetasrvNodeInfo (#7204)
* fix: add serde defaults for `MetasrvNodeInfo`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fmt

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-07 09:50:09 +00:00
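A sketch of the serde-defaults pattern from #7204, assuming serde/serde_json; the field names here are illustrative stand-ins for `MetasrvNodeInfo`. Marking newer fields `#[serde(default)]` lets payloads produced by older nodes, which lack those keys, still deserialize instead of failing.

```rust
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct NodeInfo {
    addr: String,
    // Newer fields: absent keys fall back to Default::default().
    #[serde(default)]
    version: String,
    #[serde(default)]
    start_time_ms: u64,
}

fn main() {
    // An "old" payload without the newer fields still parses.
    let info: NodeInfo =
        serde_json::from_str(r#"{"addr":"127.0.0.1:3002"}"#).unwrap();
    assert_eq!(info.version, "");
    assert_eq!(info.start_time_ms, 0);
    println!("{info:?}");
}
```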
Yingwen
7616ffcb35 test: only set ttl to forever in fuzz alter test (#7202)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-07 07:32:53 +00:00
shuiyisong
a3dbd029c5 chore: remove ttl option if present in trace meta table (#7197)
* chore: remove ttl option if present in trace meta table


Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-06 11:51:45 +00:00
Yingwen
9caeae391e chore: print root cause in opendal logging interceptor (#7183)
* chore: print root cause in opendal

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract a function root_source() to get the cause

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-06 08:48:59 +00:00
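The `root_source()` idea from #7183, in miniature: walk the `std::error::Error::source()` chain to the innermost error so the log line shows the actual cause rather than the top-level wrapper. A sketch, not the exact GreptimeDB helper.

```rust
use std::error::Error;

/// Follow the source() chain to the root cause.
fn root_source(err: &(dyn Error + 'static)) -> &(dyn Error + 'static) {
    let mut current = err;
    while let Some(source) = current.source() {
        current = source;
    }
    current
}

fn main() {
    use std::io;
    // Wrap one io::Error in another to form a two-level chain.
    let inner = io::Error::new(io::ErrorKind::Other, "disk quota exceeded");
    let outer = io::Error::new(io::ErrorKind::Other, inner);
    // Logs the innermost cause, not the wrapper.
    println!("root cause: {}", root_source(&outer));
}
```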
fys
35951afff9 chore: remove unnecessary code related to triggers (#7192)
* chore: remove unused triggers memory tables

* fix: cargo clippy

* fix: sqlness
2025-11-06 08:09:14 +00:00
Ruihang Xia
a049b68c26 feat: import backup data from local files (#7180)
* feat: import backup data from local files

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add unit tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-06 07:33:33 +00:00
Lei, HUANG
c2ff563ac6 fix(mito): avoid shortcut in picking multi window files (#7174)
* fix/pick-continue:
 ### Add Tests for TWCS Compaction Logic

 - **`twcs.rs`**:
   - Modified the logic in `TwcsPicker` to handle cases with zero runs by using `continue` instead of `return`.
   - Added two new test cases: `test_build_output_multiple_windows_with_zero_runs` and `test_build_output_single_window_zero_runs` to verify the behavior of the compaction logic when there are zero runs in the windows.

 - **`memtable_util.rs`**:
   - Removed unused import `PredicateGroup`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/pick-continue:
 ### Commit Message

 Enhance Compaction Process with Expired SST Handling and Testing

 - **`compactor.rs`**:
   - Introduced handling for expired SSTs by updating the manifest immediately upon task completion.
   - Added new test cases to verify the handling of expired SSTs and manifest updates.

 - **`task.rs`**:
   - Implemented `remove_expired` function to handle expired SSTs by updating the manifest and notifying the region worker loop.
   - Refactored `handle_compaction` to `handle_expiration_and_compaction` to integrate expired SST removal before merging inputs.
   - Added logging and error handling for expired SST removal process.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/progressive-compaction:
 **Enhance Compaction Task Error Handling**

 - Updated `task.rs` to conditionally execute the removal of expired SST files only when they exist, improving error handling and performance.
 - Added a check for non-empty `expired_ssts` before initiating the removal process, ensuring unnecessary operations are avoided.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/progressive-compaction:
 ### Refactor `DefaultCompactor` to Extract `merge_single_output` Method

 - **File**: `src/mito2/src/compaction/compactor.rs`
   - Extracted the logic for merging a single compaction output into SST files into a new method `merge_single_output` within the `DefaultCompactor` struct.
   - Simplified the `merge_ssts` method by utilizing the new `merge_single_output` method, reducing code duplication and improving maintainability.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/progressive-compaction:
 ### Add Max Background Compaction Tasks Configuration

 - **`compaction.rs`**: Added `max_background_compactions` to the compaction scheduler to limit background tasks.
 - **`compaction/compactor.rs`**: Removed immediate manifest update logic after task completion.
 - **`compaction/picker.rs`**: Introduced `max_background_tasks` parameter in `new_picker` to control task limits.
 - **`compaction/twcs.rs`**: Updated `TwcsPicker` to include `max_background_tasks` and truncate inputs exceeding this limit. Added related test cases to ensure functionality.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/pick-continue:
 ### Improve Error Handling and Task Management in Compaction

 - **`task.rs`**: Enhanced error handling in `remove_expired` function by logging errors without halting the compaction process. Removed the return of `Result` type and added detailed logging for various failure scenarios.
 - **`twcs.rs`**: Adjusted task management logic by removing input truncation based on `max_background_tasks` and instead discarding remaining tasks if the output size exceeds the limit. This ensures better control over task execution and resource management.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/pick-continue:
 ### Add Unit Tests for Compaction Task and TWCS Picker

 - **`task.rs`**: Added unit tests to verify the behavior of `PickerOutput` with and without expired SSTs.
 - **`twcs.rs`**: Introduced tests for `TwcsPicker` to ensure correct handling of `max_background_tasks` during compaction, including scenarios with and without task truncation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/pick-continue:
 **Improve Error Handling and Notification in Compaction Task**

 - **File:** `task.rs`
   - Changed log level from `warn` to `error` for manifest update failures to enhance error visibility.
   - Refactored the notification mechanism for expired file removal by using `BackgroundNotify::RegionEdit` with `RegionEditResult` to streamline the process.
   - Simplified error handling by consolidating match cases into a single `if let Err` block for better readability and maintainability.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-11-06 06:27:17 +00:00
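The shape of the shortcut bug #7174 fixes, in miniature: when iterating over time windows, a window with zero sorted runs must be skipped with `continue`, not abort the whole pick with `return`, or later windows never get compacted. Types and fields below are stand-ins, not the actual `TwcsPicker` API.

```rust
struct Window {
    runs: usize,
}

fn pick_outputs(windows: &[Window]) -> Vec<usize> {
    let mut outputs = Vec::new();
    for (i, w) in windows.iter().enumerate() {
        if w.runs == 0 {
            // Before the fix this was effectively a `return`, which
            // short-circuited every remaining window.
            continue;
        }
        outputs.push(i);
    }
    outputs
}

fn main() {
    let windows = [Window { runs: 0 }, Window { runs: 2 }, Window { runs: 3 }];
    // Windows after the empty one are still picked.
    assert_eq!(pick_outputs(&windows), vec![1, 2]);
}
```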
Yingwen
82812ff19e test: add a unit test to scan data from memtable in append mode (#7193)
* test: add tests for scanning append mode before flush

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract a function maybe_dedup_one

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: add flat format to docs.yml so we can make it required later

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-06 06:11:58 +00:00
Ning Sun
4a77167138 chore: update readme (#7187) 2025-11-06 03:21:01 +00:00
Lei, HUANG
934df46f53 fix(mito): append mode in flat format not working (#7186)
* mito2: add unit test for flat single-range append_mode dedup behavior

Verify memtable_flat_sources skips dedup when append_mode is true and
performs dedup otherwise for single-range flat memtables, preventing
regressions in the new append_mode path.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/flat-source-merge:
 ### Improve Column Metadata Extraction Logic

 - **File**: `src/common/meta/src/ddl/utils.rs`
   - Modified the `extract_column_metadatas` function to use `swap_remove` for extracting the first schema and decode column metadata for comparison instead of raw bytes. This ensures that the extension map is considered during verification, enhancing the robustness of metadata consistency checks across datanodes.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-11-06 03:19:39 +00:00
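A sketch of the append_mode gate that the #7186 test verifies: in append mode every row is kept, otherwise rows sharing a key keep only the latest entry. `Row` and the dedup rule below are simplified stand-ins for the flat memtable sources.

```rust
#[derive(Debug, Clone, PartialEq)]
struct Row {
    key: u64,
    seq: u64,
}

fn maybe_dedup(mut rows: Vec<Row>, append_mode: bool) -> Vec<Row> {
    if append_mode {
        // Append mode: duplicates are kept by design, skip the dedup pass.
        return rows;
    }
    // Sort by key, newest sequence first, then keep the first row per key.
    rows.sort_by(|a, b| a.key.cmp(&b.key).then(b.seq.cmp(&a.seq)));
    rows.dedup_by_key(|r| r.key);
    rows
}

fn main() {
    let rows = vec![Row { key: 1, seq: 1 }, Row { key: 1, seq: 2 }];
    assert_eq!(maybe_dedup(rows.clone(), true).len(), 2);
    assert_eq!(maybe_dedup(rows, false), vec![Row { key: 1, seq: 2 }]);
}
```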
Ning Sun
fb92e4d0b2 feat: add greptime's arrow json extension type (#7168)
* feat: add arrow json extension type

* feat: add json structure settings to extension type

* refactor: store json structure settings as extension metadata

* chore: make binary an acceptable type for extension
2025-11-05 18:34:57 +00:00
Yingwen
0939dc1d32 test: run sqlness for flat format (#7178)
* test: support flat format in sqlness

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: replace region stats test result with NUM

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: add flat format to sqlness ci

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-05 11:23:12 +00:00
shuiyisong
50c9600ef8 fix: stabilize test results (#7182)
* fix: stabilize test results

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-05 09:19:23 +00:00
Lei, HUANG
abcfbd7f41 chore(metrics): add region server requests failures count metrics (#7173)
* chore/add-region-insert-failure-metric: Add metric for failed insert requests to region server in datanode module

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/add-region-insert-failure-metric:
 Add metric for tracking failed region server requests

 - Introduce a new metric `REGION_SERVER_REQUEST_FAILURE_COUNT` to count failed region server requests.
 - Update `REGION_SERVER_INSERT_FAIL_COUNT` metric description for consistency.
 - Implement error handling in `RegionServerHandler` to increment the new failure metric on request errors.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-11-05 07:23:40 +00:00
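The failure-count pattern #7173 describes, sketched with the prometheus crate: register a counter once, bump it whenever a region server request errors. The metric name matches the one mentioned above; the surrounding handler is illustrative.

```rust
use prometheus::{register_int_counter, IntCounter};
use std::sync::LazyLock;

static REGION_SERVER_REQUEST_FAILURE_COUNT: LazyLock<IntCounter> =
    LazyLock::new(|| {
        register_int_counter!(
            "region_server_request_failure_count",
            "Count of failed region server requests"
        )
        .unwrap()
    });

fn handle_request() -> Result<(), String> {
    Err("simulated failure".to_string())
}

fn main() {
    if let Err(e) = handle_request() {
        // Increment the failure metric on request errors.
        REGION_SERVER_REQUEST_FAILURE_COUNT.inc();
        eprintln!("request failed: {e}");
    }
    println!("failures so far: {}", REGION_SERVER_REQUEST_FAILURE_COUNT.get());
}
```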
Ruihang Xia
aac3ede261 feat: allow creating logical table with the same partition rule as the physical table's (#7177)
* feat: allow creating logical table with the same partition rule as the physical table's

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-05 06:37:17 +00:00
Yingwen
3001c2d719 feat: BulkMemtable stores small fragments in another buffer (#7164)
* feat: buffer small parts in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use assert_eq instead of assert

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: collect bulk memtable scan metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: report metrics early

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-05 06:35:32 +00:00
shuiyisong
6caff50d01 chore: improve search traces and jaeger resp (#7166)
* chore: add jaeger field in trace query

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update search v1 with tags

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update col matching using col names

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minify code with macro

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change macro to inline function

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix filter with tags & add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-04 05:49:08 +00:00
ZonaHe
421f4eec05 feat: update dashboard to v0.11.7 (#7170)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
Co-authored-by: Ning Sun <sunng@protonmail.com>
2025-11-04 02:52:26 +00:00
Yingwen
d944e5c6b8 test: add sqlness for delete and filter (#7171)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-04 02:13:47 +00:00
fys
013d61acbb chore(deps): remove sqlx pg feature in greptimedb build (#7172)
* chore(deps): remove sqlx pg feature in greptimedb build

* fix: ci
2025-11-03 18:49:00 +00:00
LFC
b7e834ab92 refactor: convert to influxdb values directly from arrow (#7163)
* refactor: convert to influxdb values directly from arrow

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-03 07:52:37 +00:00
LFC
5eab9a1be3 feat: json vector builder (#7151)
* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

Update src/datatypes/src/vectors/json/builder.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

feat: json vector builder

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-03 06:06:54 +00:00
Weny Xu
9de680f456 refactor: add support for batch region upgrade operations part2 (#7160)
* add tests for metric engines

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: catchup in background

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: replace sequential catchup with batch processing

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* remove single catchup

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remove unused error

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: refine catchup tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-03 06:01:38 +00:00
Ning Sun
5deaaa59ec chore: fix typo (#7169) 2025-11-03 02:22:34 +00:00
dennis zhuang
61724386ef fix: potential failure in tests (#7167)
* fix: potential failure in the test_index_build_type_compact test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: relax timestamp checking in test_timestamp_default_now

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-10-31 22:08:59 +00:00
Weny Xu
6960a0183a refactor: add support for batch region upgrade operations part1 (#7155)
* refactor: convert UpgradeRegion instruction to batch operation

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce `handle_batch_catchup_requests` fn for mito engine

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce `handle_batch_catchup_requests` fn for metric engine

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: suggestion and add ser/de tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-31 03:08:38 +00:00
Sicong Hu
30894d7599 feat(mito): Optimize async index building with priority-based batching (#7034)
* feat: add priority-based batching to IndexBuildScheduler

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: clean old puffin-related cache

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add test for IndexBuildScheduler

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: different index file id for read and async write

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: different index file id for delete

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: apply suggestions

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: apply comments

Signed-off-by: SNC123 <sinhco@outlook.com>

* combine files and index files

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: add index_file_id into ManifestSstEntry

Signed-off-by: SNC123 <sinhco@outlook.com>

* Update src/mito2/src/gc.rs

Signed-off-by: SNC123 <sinhco@outlook.com>

* resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: sqlness

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: fmt

Signed-off-by: SNC123 <sinhco@outlook.com>

---------

Signed-off-by: SNC123 <sinhco@outlook.com>
2025-10-31 02:13:17 +00:00
Yingwen
acf38a7091 fix: avoid filtering rows with delete op by fields under merge mode (#7154)
* chore: clear allow dead_code for flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: pass exprs to build appliers

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: split field filters and index appliers

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support skip filtering fields in RowGroupPruningStats

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add PreFilterMode to config whether to skip filtering fields

Adds the PreFilterMode to the RangeBase and sets it in
ParquetReaderBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support skipping fields in prune reader

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support pre filter mode in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: pass PreFilterMode to memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test mito filter delete

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove commented code

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: move predicate and sequence to RangesOptions

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: skip cargo gc

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix cargo build warning

Signed-off-by: evenyag <realevenyag@gmail.com>

* Revert "ci: skip cargo gc"

This reverts commit 1ec9594a6d.

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-30 12:14:45 +00:00
LFC
109b70750a refactor: convert to prometheus values directly from arrow (#7153)
* refactor: convert to prometheus values directly from arrow

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-30 10:24:12 +00:00
shuiyisong
ee5b7ff3c8 chore: unify initialization of channel manager (#7159)
* chore: unify initialization of channel manager and extract loading tls

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-30 04:26:02 +00:00
liyang
5d0ef376de fix: initializer container not working (#7152)
* fix: initializer not working

Signed-off-by: liyang <daviderli614@gmail.com>

* use one version of operator

Signed-off-by: liyang <daviderli614@gmail.com>

---------

Signed-off-by: liyang <daviderli614@gmail.com>
2025-10-29 18:11:55 +00:00
shuiyisong
11c0381fc1 chore: set default catalog using build env (#7156)
* chore: update reference to const

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: use option_env to set default catalog

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: use const_format

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update reference in cli

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: introduce a build.rs to set default catalog

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove unused feature gate

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-29 18:10:58 +00:00
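A sketch of the build-env mechanism #7156 describes: a `build.rs` forwards an environment variable to rustc, and the code reads it with `option_env!`, falling back to a constant at compile time. The variable name below is assumed for illustration; the commit also mentions const_format, which this sketch does without.

```rust
// build.rs (sketch):
// fn main() {
//     if let Ok(catalog) = std::env::var("DEFAULT_CATALOG") {
//         println!("cargo:rustc-env=DEFAULT_CATALOG={catalog}");
//     }
// }

// option_env! resolves at compile time; a const match picks the fallback.
const DEFAULT_CATALOG_NAME: &str = match option_env!("DEFAULT_CATALOG") {
    Some(name) => name,
    None => "greptime",
};

fn main() {
    println!("default catalog: {DEFAULT_CATALOG_NAME}");
}
```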
LFC
e8b7b0ad16 fix: memtable value push result was ignored (#7136)
* fix: memtable value push result was ignored

Signed-off-by: luofucong <luofc@foxmail.com>

* chore: apply suggestion

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-29 13:44:36 +00:00
Weny Xu
6efffa427d fix: missing flamegraph feature in pprof dependency (#7158)
fix: fix pprof deps

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-29 11:41:21 +00:00
Ruihang Xia
6576e3555d fix: cache estimate methods (#7157)
* fix: cache estimate methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert page value change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestion from @evenyag

Co-authored-by: Yingwen <realevenyag@gmail.com>

* update test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-29 09:57:28 +00:00
Lei, HUANG
f0afd675e3 feat: objbench sub command for datanode (#7114)
* feat/objbench-subcmd:
 ### Add Object Storage Benchmark Tool and Update Dependencies

 - **`Cargo.lock` & `Cargo.toml`**: Added dependencies for `colored`, `parquet`, and `pprof` to support new features.
 - **`datanode.rs`**: Introduced `ObjbenchCommand` for benchmarking object storage, including command-line options for configuration and execution. Added `StorageConfig` and `StorageConfigWrapper` for storage engine configuration.
 - **`datanode.rs`**: Implemented a stub for `build_object_store` function to initialize object storage.

 These changes introduce a new subcommand for object storage benchmarking and update dependencies to support additional functionality.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* init

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: code style and clippy

* feat/objbench-subcmd:
 Improve error handling in `objbench.rs`

 - Enhanced error handling in `parse_config` and `parse_file_dir_components` functions by replacing `unwrap` with `OptionExt` and `context` for better error messages.
 - Updated `build_access_layer_simple` and `build_cache_manager` functions to use `map_err` for more descriptive error handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore: rebase main

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-10-29 05:26:29 +00:00
discord9
37bc2e6b07 feat: gc worker heartbeat instruction (#7118)
* again
* false by default
* test: config api
* refactor: per code review
* less info!
* even less info!!
* docs: gc regions instr
* refactor: grp by region id
* per code review
* per review
* error handling?
* test: fix
* todos
* aft rebase fix
* after refactor

Signed-off-by: discord9 <discord9@163.com>
2025-10-29 02:59:36 +00:00
Ning Sun
a9d1d33138 feat: update datafusion-pg-catalog for better dbeaver support (#7143)
* chore: update datafusion-pg-catalog to 0.12.1

* feat: import more udfs
2025-10-28 18:42:03 +00:00
discord9
22d9eb6930 feat: part sort provide dyn filter (#7140)
* feat: part sort provide dyn filter

Signed-off-by: discord9 <discord9@163.com>

* fix: reset_state reset dynamic filter

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-28 02:44:29 +00:00
shuiyisong
da976e534d refactor: add test feature gate to numbers table (#7148)
* refactor: add test feature gate to numbers table

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add debug_assertions

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: extract numbers table provider

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: address CR issues

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-27 10:16:00 +00:00
discord9
f2bc92b9e6 refactor: use generic for heartbeat instruction handler (#7149)
* refactor: use generic

Signed-off-by: discord9 <discord9@163.com>

* w

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-27 09:09:48 +00:00
Weny Xu
785f9d7fd7 fix: add delays in reconcile tests for async cache invalidation (#7147)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-27 08:07:51 +00:00
shuiyisong
a20ac4f9e5 feat: prefix option for timestamp index and value column (#7125)
* refactor: use GREPTIME_TIMESTAMP const

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* feat: add config for default ts col name

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: replace GREPTIME_TIMESTAMP with function get

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update config doc

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove opts on flownode and metasrv

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add validation for ts column name

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: use get_or_init to avoid test error

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fmt

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update docs

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: using empty string to disable prefix

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update comment

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: address CR issues

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-27 08:00:03 +00:00
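A sketch of the `get_or_init` pattern #7125 mentions for the default timestamp column name: a process-wide `OnceLock` that is initialized exactly once, which also keeps tests from racing. The env var name is illustrative; the `greptime_timestamp` fallback matches the GREPTIME_TIMESTAMP constant referenced above.

```rust
use std::sync::OnceLock;

static DEFAULT_TS_COLUMN: OnceLock<String> = OnceLock::new();

/// Resolve the default timestamp-index column name once per process.
fn default_ts_column() -> &'static str {
    DEFAULT_TS_COLUMN.get_or_init(|| {
        std::env::var("GREPTIME_DEFAULT_TS_COLUMN")
            .unwrap_or_else(|_| "greptime_timestamp".to_string())
    })
}

fn main() {
    println!("timestamp index column: {}", default_ts_column());
}
```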
zyy17
0a3961927d refactor!: add an opentelemetry_traces_operations table to aggregate (service_name, span_name, span_kind) to improve query performance (#7144)
refactor: add an `*_operations` table to aggregate `(service_name, span_name, span_kind)` to improve query performance

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-10-27 03:36:22 +00:00
LFC
d7ed6a69ab feat: merge json datatype (#7142)
* feat: merge json datatype

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-27 03:30:52 +00:00
discord9
68247fc9b1 fix: count_state use stat to eval&predicate w/out region (#7116)
* fix: count_state use stat to eval

Signed-off-by: discord9 <discord9@163.com>

* cleanup

Signed-off-by: discord9 <discord9@163.com>

* fix: use predicate without region

Signed-off-by: discord9 <discord9@163.com>

* test: diverge standalone/dist impl

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-27 02:14:45 +00:00
Lei, HUANG
e386a366d0 feat: add HTTP endpoint to control prof.gdump feature (#6999)
* feat/gdump:
 ### Add Support for Jemalloc Gdump Flag

 - **`jemalloc.rs`**: Introduced `PROF_GDUMP` constant and added functions `set_gdump_active` and `is_gdump_active` to manage the gdump flag.
 - **`error.rs`**: Added error handling for reading and updating the jemalloc gdump flag with `ReadGdump` and `UpdateGdump` errors.
 - **`lib.rs`**: Exposed `is_gdump_active` and `set_gdump_active` functions for non-Windows platforms.
 - **`http.rs`**: Added HTTP routes for checking and toggling the jemalloc gdump flag status.
 - **`mem_prof.rs`**: Implemented handlers `gdump_toggle_handler` and `gdump_status_handler` for managing gdump flag via HTTP requests.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* Update docs/how-to/how-to-profile-memory.md

Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>

* fix: typo in docs

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>
2025-10-27 01:41:19 +00:00
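An outline of toggling jemalloc's prof.gdump flag via the raw mallctl interface (tikv-jemalloc-ctl), which is the kind of thing the `set_gdump_active`/`is_gdump_active` helpers above wrap before the HTTP handlers call them. It requires a jemalloc allocator built with profiling enabled, so treat this as a hedged sketch rather than a drop-in.

```rust
use tikv_jemalloc_ctl::raw;

const PROF_GDUMP: &[u8] = b"prof.gdump\0"; // mallctl names are NUL-terminated

fn set_gdump_active(active: bool) -> Result<bool, tikv_jemalloc_ctl::Error> {
    // `update` writes the new value and returns the previous one.
    unsafe { raw::update(PROF_GDUMP, active) }
}

fn is_gdump_active() -> Result<bool, tikv_jemalloc_ctl::Error> {
    unsafe { raw::read(PROF_GDUMP) }
}

fn main() {
    match set_gdump_active(true) {
        Ok(prev) => {
            println!("gdump enabled (was {prev}), now {:?}", is_gdump_active())
        }
        Err(e) => eprintln!("jemalloc profiling unavailable: {e}"),
    }
}
```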
dennis zhuang
d8563ba56d feat: adds regex_extract function and more type tests (#7107)
* feat: adds format, regex_extract function and more type tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: forgot functions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: forgot null type

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: forgot date type

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: remove format function

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: update results after upgrading datafusion

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-10-25 08:41:49 +00:00
Weny Xu
7da2f5ed12 refactor: refactor instruction handler and adds support for batch region downgrade operations (#7130)
* refactor: refactor instruction handler

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: support batch downgrade region instructions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix compat

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

* add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-24 09:11:42 +00:00
Yingwen
4c70b4c31d feat: store estimated series num in file meta (#7126)
* feat: add num_series to FileMeta

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add SeriesEstimator to collect num_series

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: set num_series in compactor

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: print num_series in Debug for FileMeta

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: increase series count when next ts <= last

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for SeriesEstimator

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add num_series to ssts_manifest table

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update sqlness tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix metric engine list entry test

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-24 05:53:48 +00:00
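The heuristic behind "increase series count when next ts <= last" in #7126, sketched: rows arrive sorted by primary key and then timestamp, so a timestamp that fails to increase signals the start of a new series. A simplified stand-in, not the mito2 `SeriesEstimator`.

```rust
#[derive(Default)]
struct SeriesEstimator {
    num_series: u64,
    last_ts: Option<i64>,
}

impl SeriesEstimator {
    fn observe(&mut self, ts: i64) {
        match self.last_ts {
            // Strictly increasing timestamp: still inside the same series.
            Some(last) if ts > last => {}
            // First row, or a non-increasing timestamp: a series boundary.
            _ => self.num_series += 1,
        }
        self.last_ts = Some(ts);
    }
}

fn main() {
    let mut est = SeriesEstimator::default();
    // Two series: timestamps [10, 20, 30] followed by [5, 15].
    for ts in [10, 20, 30, 5, 15] {
        est.observe(ts);
    }
    assert_eq!(est.num_series, 2);
}
```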
Ning Sun
b78ee1743c feat: add a missing pg_catalog function current_database (#7138)
feat: add a missing function current_database
2025-10-24 03:36:07 +00:00
LFC
6ad23bc9b4 refactor: convert to postgres values directly from arrow (#7131)
* refactor: convert to pg values directly from arrow

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-24 03:28:04 +00:00
Sicong Hu
03a29c6591 fix: correct test_index_build_type_compact (#7137)
Signed-off-by: SNC123 <sinhco@outlook.com>
2025-10-24 03:24:13 +00:00
zyy17
a0e6bcbeb3 feat: add cpu_usage_millicores and memory_usage_bytes in information_schema.cluster_info table. (#7051)
* refactor: add `hostname` in cluster_info table

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: update information schema result

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* feat: enable zstd for bulk memtable encoded parts (#7045)

feat: enable zstd in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: add `get_total_cpu_millicores()` / `get_total_cpu_cores()` / `get_total_memory_bytes()` / `get_total_memory_readable()` in common-stat

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* feat: add `cpu_usage_millicores` and `memory_usage_bytes` in `information_schema.cluster_info` table

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: compile warning and integration test failed

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: integration test failed

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `ResourceStat`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: apply code review comments

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: update greptime-proto

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-24 03:12:45 +00:00
LFC
b53a0b86fb feat: create table with new json datatype (#7128)
* feat: create table with new json datatype

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-24 02:16:49 +00:00
LFC
2f637a262e chore: update datafusion to 50 (#7076)
* chore: update datafusion to 50

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

* fix: update datafusion_pg_catalog import

* chore: fix toml format

* chore: fix toml format again

* fix nextest

Signed-off-by: luofucong <luofc@foxmail.com>

* fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

* chore: switch datafusion-orc to upstream tag

* fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-10-23 07:18:36 +00:00
Yingwen
f388dbdbb8 fix: fix index and tag filtering for flat format (#7121)
* perf: only decode primary keys in the batch

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: don't push none to creator

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: implement method to filter __table_id for sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: filter table id for sparse encoding separately

The __table_id isn't present in the projection, so we have to filter it manually

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: decode tags for sparse encoding when building bloom filter

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support inverted index for tags under sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: skip tag columns in fulltext index

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix warnings

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix list index metadata test

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: decode primary key columns to filter

When primary key columns are not in projection but in filters, we need
to decode them in compute_filter_mask_flat

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: reuse filter method

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: only use dictionary for string type in compat

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: safe to get column by creator's column id

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-23 06:43:46 +00:00
jeremyhi
136b9eef7a feat: pr review reminder frequency (#7129)
* feat: run at 9:00 am on monday, wednesday, friday

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: remove unused method

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

---------

Signed-off-by: jeremyhi <fengjiachun@gmail.com>
2025-10-23 06:22:02 +00:00
fys
e8f39cbc4f fix: unit test about trigger parser (#7132)
* fix: unit test about trigger parser

* fix: cargo clippy
2025-10-23 03:47:25 +00:00
jeremyhi
62b51c6736 feat: writer mem limiter for http and grpc service (#7092)
* feat: writer mem limiter for http and grpc service

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: docs

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: add metrics for limiter

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* chore: refactor try_acquire

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: make size human readable

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

---------

Signed-off-by: jeremyhi <fengjiachun@gmail.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-10-22 09:30:36 +00:00
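A sketch of the writer memory limiter pattern from #7092: an atomic byte budget with a `try_acquire` that rejects a request once in-flight writes would exceed the limit, and a permit that releases its reservation on drop. The real limiter also exports metrics; all names here are illustrative.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

struct WriteLimiter {
    limit: usize,
    in_flight: AtomicUsize,
}

struct Permit<'a> {
    limiter: &'a WriteLimiter,
    bytes: usize,
}

impl WriteLimiter {
    fn try_acquire(&self, bytes: usize) -> Option<Permit<'_>> {
        let prev = self.in_flight.fetch_add(bytes, Ordering::AcqRel);
        if prev + bytes > self.limit {
            // Over budget: undo the reservation and reject the write.
            self.in_flight.fetch_sub(bytes, Ordering::AcqRel);
            return None;
        }
        Some(Permit { limiter: self, bytes })
    }
}

impl Drop for Permit<'_> {
    fn drop(&mut self) {
        // Release the reserved bytes when the request finishes.
        self.limiter.in_flight.fetch_sub(self.bytes, Ordering::AcqRel);
    }
}

fn main() {
    let limiter = WriteLimiter { limit: 100, in_flight: AtomicUsize::new(0) };
    let p1 = limiter.try_acquire(60);
    assert!(p1.is_some());
    assert!(limiter.try_acquire(60).is_none()); // would exceed the budget
    drop(p1);
    assert!(limiter.try_acquire(60).is_some()); // budget freed again
}
```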
discord9
a9a3e0b121 fix: prom ql logical plan use column index not name (#7109)
* feat: use index not col name

Signed-off-by: discord9 <discord9@163.com>

* fix: use name without qualifier&output schema fix

Signed-off-by: discord9 <discord9@163.com>

* proto

Signed-off-by: discord9 <discord9@163.com>

* refactor: resolve column name/index

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

* chore: update proto

Signed-off-by: discord9 <discord9@163.com>

* chore: update proto

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-22 09:04:09 +00:00
Ning Sun
41ce100624 feat: update pgwire to 0.34 for a critical issue on accepting connections (#7127)
feat: update pgwire to 0.34
2025-10-22 07:25:04 +00:00
Weny Xu
328ec56b63 feat: introduce OpenRegions and CloseRegions instructions to support batch region operations (#7122)
* feat: introduce `OpenRegions` and `CloseRegions` instructions to support batch region operations

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: merge instructions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-22 03:43:47 +00:00
Ning Sun
bfa00df9f2 fix: list inner type for json and valueref, refactor type to ref for struct/list (#7113)
* refactor: use arc for struct type

* fix: inner type of list value and ref
2025-10-21 12:46:18 +00:00
jeremyhi
2e7b3951fb feat: 14-day PR review reminder (#7123) 2025-10-21 08:53:38 +00:00
Yingwen
1054c63503 test: run engine unit tests for flat format (#7119)
* test: support flat in basic_test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: support flat in alter_test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: support flat for append_mode_test

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update bump_committed_sequence_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update close_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update compaction_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update create_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update edit_region_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update merge_mode_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update parallel_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update projection_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update prune_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update row_selector_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update scan_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update drop_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update filter_deleted_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update sync_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update set_role_state_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update staging_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update truncate_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update catchup_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update flush_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update open_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update batch_open_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix all flat format tests

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-21 08:24:12 +00:00
Sicong Hu
a1af4dce0c feat: implement three build types for async index build (#7029)
* feat: impl four types index build

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add tests for four types index build

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add sqlness test for manual index build

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: add region request support and correct sqlness

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: update cargo.toml for proto and resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: rebase

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: toml fmt and correct sqlness

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: correct sqlness result

Signed-off-by: SNC123 <sinhco@outlook.com>

* refactor: extract manual build logic

Signed-off-by: SNC123 <sinhco@outlook.com>

* apply suggestions

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: abort index build process

Signed-off-by: SNC123 <sinhco@outlook.com>

* clippy

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: wrap `should_abort_index`

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy

Signed-off-by: SNC123 <sinhco@outlook.com>

---------

Signed-off-by: SNC123 <sinhco@outlook.com>
2025-10-21 02:48:28 +00:00
jeremyhi
27268cf424 chore: pr review reminder (#7120)
* chore: pr review reminder

* chore: for test

* chore: vars

* fix: gracefully handle missing webhook URL

* test: allow workflow to run in fork for testing

* test: add environment variable logging

* chore: minor change

* feat: filter draft pr
2025-10-21 02:44:04 +00:00
Zhenchi
938d757523 feat: expose SST index metadata via information schema (#7044)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-10-20 11:59:16 +00:00
LFC
855eb54ded refactor: convert to mysql values directly from arrow (#7096)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-20 11:09:24 +00:00
Weny Xu
3119464ff9 feat: introduce the Noop WAL provider for datanode (#7105)
* feat: introduce noop log store

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update config example

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add noop wal tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-20 06:13:27 +00:00
fys
20b5b9bee4 chore: remove unused deps (#7108) 2025-10-17 11:53:19 +00:00
Zhenchi
7b396bb290 feat(mito2): expose puffin index metadata (#7042)
* Add encode/decode helpers for IndexTarget

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Use IndexTarget encode for puffin index blob keys

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Normalize puffin index blobs to use IndexTarget keys

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat(mito2): expose puffin index metadata

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* target json polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix header

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add index path

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address copilot comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* reuse cached index metadata

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* parallelism for reading index meta

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-10-17 06:22:07 +00:00
LFC
21532abf94 feat: new create table syntax for new json datatype (#7103)
* feat: new create table syntax for new json datatype

Signed-off-by: luofucong <luofc@foxmail.com>

* refactor: extract consts

* refactor: remove unused error variant

* fix tests

Signed-off-by: luofucong <luofc@foxmail.com>

* fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-10-17 05:22:29 +00:00
fys
331c64c6fd feat(trigger): support "for" and "keep_firing_for" (#7087)
* feat: support "for" and "keep_firing_for" options in create trigger

* upgrade greptime-proto
2025-10-17 04:31:56 +00:00
Zhenchi
82e4600d1b feat: add index cache eviction support (#7064)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-10-17 03:30:02 +00:00
dennis zhuang
8a2371a05c feat: supports large string (#7097)
* feat: supports large string

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: add doc for extract_string_vector_values

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: refactor by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: changes by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: extract_string_vector_values

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: remove large string type and refactor string vector

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: revert some changes

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: adds large string type

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: impl default for StringSizeType

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: tests and test compatibility

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: update sqlness tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove panic

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-17 01:46:11 +00:00
zyy17
cf1b8392af refactor!: unify the API of getting total cpu and memory (#7049)
* refactor: add `get_total_cpu_millicores()` / `get_total_cpu_cores()` / `get_total_memory_bytes()` / `get_total_memory_readable()` in common-stat

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* tests: update sqlness test cases

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-10-16 12:41:34 +00:00
Ning Sun
2e6ea1167f refactor: update valueref coerce function name based on its semantics (#7098) 2025-10-16 09:11:40 +00:00
Lei, HUANG
50386fda97 chore: pub route_prometheus function (#7101)
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-10-16 08:11:06 +00:00
Weny Xu
873555feb2 fix: fix build warnings (#7099)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-16 07:57:16 +00:00
zyy17
6ab4672866 refactor: add peer_hostname field in information_schema.cluster_info table (#7050)
* refactor: add `hostname` in cluster_info table

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: update information schema result

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: apply code review comments

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: update greptime-proto

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add the compatibility for old proto

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-10-16 06:02:47 +00:00
discord9
ac65ede033 feat: memtable seq range read (#6950)
* feat: seq range memtable read

Signed-off-by: discord9 <discord9@163.com>

* test: from&range

Signed-off-by: discord9 <discord9@163.com>

* wt

Signed-off-by: discord9 <discord9@163.com>

* after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* docs: better naming & emphasis

Signed-off-by: discord9 <discord9@163.com>

* refactor: use filter method

Signed-off-by: discord9 <discord9@163.com>

* tests: unwrap

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-16 05:30:56 +00:00
Lei, HUANG
552c502620 feat: manual compaction parallelism (#7086)
* feat/manual-compaction-parallelism:
 ### Add Parallelism Support to Compaction Requests

 - **`Cargo.lock` & `Cargo.toml`**: Updated `greptime-proto` dependency to a new revision.
 - **`flush_compact_table.rs`**: Enhanced `parse_compact_params` to support a new `parallelism` parameter, allowing users to specify the level of parallelism for table compaction.
 - **`handle_compaction.rs`**: Integrated `parallelism` into the compaction scheduling process, defaulting to 1 if not specified.
 - **`request.rs` & `region_request.rs`**: Modified `CompactRequest` to include `parallelism`, with logic to handle unspecified values.
 - **`requests.rs`**: Updated `CompactTableRequest` structure to include an optional `parallelism` field.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/manual-compaction-parallelism:
 ### Commit Message

 Enhance Compaction Request Handling

 - **`flush_compact_table.rs`**:
   - Renamed `parse_compact_params` to `parse_compact_request`.
   - Introduced `DEFAULT_COMPACTION_PARALLELISM` constant.
   - Updated parsing logic to handle keyword arguments for `strict_window` and `regular` compaction types, including `parallelism` and `window`.
   - Modified tests to reflect changes in parsing logic and default parallelism handling.

 - **`request.rs`**:
   - Updated `parallelism` handling in `RegionRequestBody::Compact` to use the new default value.

 - **`requests.rs`**:
   - Changed `CompactTableRequest` to use a non-optional `parallelism` field with a default value of 1.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/manual-compaction-parallelism:
 ### Update `flush_compact_table.rs` Parameter Validation

 - Modified parameter validation in `flush_compact_table.rs` to restrict the maximum number of parameters from 4 to 3 in the `parse_compact_request` function.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/manual-compaction-parallelism:
 Update `greptime-proto` dependency

 - Updated the `greptime-proto` dependency to a new revision in both `Cargo.lock` and `Cargo.toml`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-10-16 03:47:01 +00:00
discord9
9aca7c97d7 fix: part cols not in projection (#7090)
* fix: part cols not in projection

Signed-off-by: discord9 <discord9@163.com>

* test: table scan with projection

Signed-off-by: discord9 <discord9@163.com>

* Update src/query/src/dist_plan/analyzer.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-16 03:44:30 +00:00
Ning Sun
145c1024d1 feat: add Value::Json value type (#7083)
* feat: struct value

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: update for proto module

* feat: wip struct type

* feat: implement more vector operations

* feat: make datatype and api

* feat: resolve some compilation issues

* feat: resolve all compilation issues

* chore: format update

* test: resolve tests

* test: test and refactor value-to-pb

* feat: add more tests and fix for value types

* chore: remove dbg

* feat: test and fix iterator

* fix: resolve struct_type issue

* feat: pgwire 0.33 update

* refactor: use vec for struct items

* feat: conversion from json to value

* feat: add decode function

* fix: lint issue

* feat: update how we encode raw data

* feat: add conversion to fully structured StructValue

* refactor: take owned value in all encode/decode functions

* feat: add pg serialization of structvalue

* chore: toml format

* refactor: adopt new and try_new from struct value

* chore: cleanup residual issues

* docs: docs up

* fix lint issue

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* chore: address review comment especially collection capacity

* refactor: remove unneeded processed keys collection

* feat: Value::Json type

* chore: add some work in progress changes

* feat: adopt new json type

* refactor: limit scope json conversion functions

* fix: self review update

* test: provide tests for value::json

* test: add tests for api/helper

* switch proto to main branch

* fix: implement is_null for ValueRef::Json

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-10-15 20:13:12 +00:00
Alan Tang
8073e552df feat: add updated_on to tablemeta with a default of created_on (#7072)
* feat: add updated_on to tablemeta with a default of created_on

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: support the update_on on alter procedure

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: add updated_on into information_schema.tables

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* fix: make sqlness happy

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* test: add test case for tablemeta update

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* fix: fix failing test for ALTER TABLE

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: use created_on as default for updated_on when missing

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

---------

Signed-off-by: Alan Tang <jmtangcs@gmail.com>
2025-10-15 11:12:27 +00:00
Ruihang Xia
aa98033e85 feat(parser): ALTER TABLE ... REPARTITION ... (#7082)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tidy up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-10-15 03:54:36 +00:00
Ning Sun
9606a6fda8 feat: conversion between struct, value and json (#7052)
* feat: struct value

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: update for proto module

* feat: wip struct type

* feat: implement more vector operations

* feat: make datatype and api

* feat: resolve some compilation issues

* feat: resolve all compilation issues

* chore: format update

* test: resolve tests

* test: test and refactor value-to-pb

* feat: add more tests and fix for value types

* chore: remove dbg

* feat: test and fix iterator

* fix: resolve struct_type issue

* feat: pgwire 0.33 update

* refactor: use vec for struct items

* feat: conversion from json to value

* feat: add decode function

* fix: lint issue

* feat: update how we encode raw data

* feat: add conversion to fully structured StructValue

* refactor: take owned value in all encode/decode functions

* feat: add pg serialization of structvalue

* chore: toml format

* refactor: adopt new and try_new from struct value

* chore: cleanup residual issues

* docs: docs up

* fix lint issue

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* chore: address review comments, especially collection capacity

* refactor: remove unneeded processed keys collection

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-10-14 07:22:37 +00:00
github-actions[bot]
9cc0bcb449 ci: update dev-builder image tag (#7073)
Signed-off-by: greptimedb-ci <greptimedb-ci@greptime.com>
Co-authored-by: greptimedb-ci <greptimedb-ci@greptime.com>
2025-10-14 06:53:02 +00:00
Ning Sun
5ad1eac924 refactor: remove unused grpc-expr module and pb conversions (#7085)
* refactor: remove unused grpc-expr module and pb conversions

* chore: remove unused snafu
2025-10-14 04:11:48 +00:00
shuiyisong
a027b824a2 chore: add information extension to the plugins in standalone (#7079)
chore: add information extension to the plugins

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-14 02:30:42 +00:00
Lei, HUANG
44d46a6702 fix: correct impl Clear for &[u8] (#7081)
* fix: correct impl Clear for &[u8]

* fix: clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-10-13 11:59:26 +00:00
Yingwen
a9c342b0f7 feat: support setting sst_format in table options (#7068)
* feat: add FormatType to support multi format in the future

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add sst_format to RegionOptions

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: sets the sst_format based on RegionOptions

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add sst_format to mito table options

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix RegionManifest deserialization without sst_format

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove Parquet suffix from FormatType

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: prefer RegionOptions::sst_format in compactor/memtable builder

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename enable_experimental_flat_format to
default_experimental_flat_format

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update config.md

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update manifest test

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors, handle sst_format in remap_manifest

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-13 08:38:37 +00:00
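
A minimal sketch of the new table option; the option key comes from the PR, while the `'flat'` value is an assumption based on the flat-format work elsewhere in this log:

```sql
CREATE TABLE sensor_data (
  ts TIMESTAMP TIME INDEX,
  host STRING PRIMARY KEY,
  cpu DOUBLE
) WITH ('sst_format' = 'flat');  -- option name per the PR; value assumed
```
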
Ruihang Xia
1a73b485fe feat: apply region partition expr to region scan (#7067)
* handle null in partition expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply region partition expr on scanning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tidy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix gt/gteq

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-10-13 07:38:19 +00:00
Ruihang Xia
ab46127414 feat: remap SST files for partition change (#7071)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* immutable file meta

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tidy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reduce state

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* inherit manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix rebase error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* log new exprs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-10-13 03:13:14 +00:00
LFC
8fe17d43d5 chore: update rust to nightly 2025-10-01 (#7069)
* chore: update rust to nightly 2025-10-01

Signed-off-by: luofucong <luofc@foxmail.com>

* chore: nix update

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-10-11 07:30:52 +00:00
Weny Xu
40e9ce90a7 refactor: restructure sqlness to support multiple envs and extract common utils (#7066)
* refactor: restructure sqlness to support multiple envs and extract common utils

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore(ci): update sqlness cmd

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: error fmt

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: only reconnect mysql and pg client

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-11 06:34:17 +00:00
discord9
ba034c5a9e feat: explain custom statement (#7058)
* feat: explain tql cte

Signed-off-by: discord9 <discord9@163.com>

* chore: unused

Signed-off-by: discord9 <discord9@163.com>

* fix: analyze format

Signed-off-by: discord9 <discord9@163.com>

* Update src/sql/src/statements/statement.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* test: sqlness

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-11 06:27:51 +00:00
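
A sketch of the behavior this enables, assuming standard TQL syntax (the query itself is illustrative):

```sql
-- EXPLAIN (and EXPLAIN ANALYZE, per the "analyze format" fix above) now
-- works over custom statements such as TQL.
EXPLAIN TQL EVAL (0, 30, '10s') up;
```
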
Ruihang Xia
e46ce7c6da feat: divide subtasks from old/new partition rules (#7003)
* feat: divide subtasks from old/new partition rules

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change copyright year

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify filter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/partition/src/subtask.rs

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2025-10-11 06:17:25 +00:00
dennis zhuang
57d84b9de5 feat: supports value aliasing in TQL (#7041)
* feat: supports value aliasing in TQL

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: invalid checking

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove invalid checking

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add explain test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: improve parser

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add explain TQL-CTE

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-10-11 02:49:09 +00:00
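
A sketch of value aliasing; the `AS` placement is an assumption inferred from the PR title:

```sql
-- Alias the computed value column of a TQL query (placement assumed).
TQL EVAL (0, 600, '30s') rate(http_requests_total[5m]) AS qps;
```
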
Ning Sun
749a5ab165 feat: struct value and vector (#7033)
* feat: struct value

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: update for proto module

* feat: wip struct type

* feat: implement more vector operations

* feat: make datatype and api

* feat: resolve some compilation issues

* feat: resolve all compilation issues

* chore: format update

* test: resolve tests

* test: test and refactor value-to-pb

* feat: add more tests and fix for value types

* chore: remove dbg

* feat: test and fix iterator

* fix: resolve struct_type issue

* refactor: use vec for struct items

* chore: update proto to main branch

* refactor: address some of review issues

* refactor: update for further review

* Add validation on new methods

* feat: update struct/list json serialization

* refactor: reimplement get in struct_vector

* refactor: struct vector functions

* refactor: fix lint issue

* refactor: address review comments

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-10-10 21:49:51 +00:00
LFC
3738440753 feat: align influxdb line timestamp with table time index (#7057)
* feat: align influxdb line timestamp with table time index

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-10 07:37:52 +00:00
Ning Sun
aa84642afc refactor!: remove pb_value to json conversion, keep json output consistent (#7063)
* refactor: remove pb_value to json

* chore: remove unused module
2025-10-10 07:09:20 +00:00
Ning Sun
af213be403 refactor: remove duplicated valueref to json (#7062) 2025-10-10 07:08:26 +00:00
Sicong Hu
779865d389 feat: introduce IndexBuildTask for async index build (#6927)
* feat: add framework for asynchronous index building

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add unit tests for IndexBuildTask

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy,format,fix-udeps

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: correct write cache logic in IndexBuildTask

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy, resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: apply review suggestions

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: clean up index files in aborted case

Signed-off-by: SNC123 <sinhco@outlook.com>

* refactor: move manifest update logic into IndexBuildTask

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: enhance check file logic and error handling

Signed-off-by: SNC123 <sinhco@outlook.com>

---------

Signed-off-by: SNC123 <sinhco@outlook.com>
2025-10-10 03:29:32 +00:00
Yingwen
47c1ef672a fix: support dictionary in regex match (#7055)
* fix: support dictionary in regex match

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: get key from keys buffer directly

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-10 03:03:34 +00:00
shyam
591b9f3e81 fix: show proper error msg when executing non-admin functions as admin functions (#7061)
Signed-off-by: Shyamnatesan <shyamnatesan21@gmail.com>
2025-10-10 01:25:49 +00:00
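
A sketch of the fixed behavior, assuming `flush_table` remains an admin function (the second call is illustrative):

```sql
ADMIN flush_table('monitor');  -- admin function: executes normally
ADMIN abs(-1);                 -- non-admin function: now reports a proper error
```
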
LFC
979c8be51b feat: able to pass external service for sqlness test (#7032)
feat: allow passing an external service instead of creating one internally for sqlness tests

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-09 07:02:19 +00:00
Yingwen
45b1458254 fix: only skips auto convert when encoding is sparse (#7056)
* fix: only skips auto convert when encoding is sparse

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment and add tests

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-09 06:50:48 +00:00
fys
4cdcf2ef39 chore: add trigger querier factory trait (#7053)
feat: add trigger-querier-factory-ent
2025-10-09 02:16:50 +00:00
shuiyisong
b24a55cea4 chore: rename the default ts column name to greptime_timestamp for influxdb line protocol (#7046)
* chore: rename influxdb ts column name to greptime_timestamp

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-09 02:14:11 +00:00
Ning Sun
1aa4f346a0 fix: build_grpc_server visibility (#7054) 2025-10-06 03:16:48 +00:00
Ning Sun
f7202bc176 feat: pgwire 0.33 update (#7048) 2025-10-03 08:06:05 +00:00
Yingwen
b7045e57a5 feat: enable zstd for bulk memtable encoded parts (#7045)
feat: enable zstd in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-02 16:05:33 +00:00
Ning Sun
660790148d fix: various typos reported by CI (#7047)
* fix: various typos reported by CI

* fix: additional typo
2025-10-02 15:11:09 +00:00
zyy17
d777e8c52f refactor: add cgroup metrics collector (#7038)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-30 02:26:02 +00:00
zyy17
efa616ce44 fix: use instance labels to fetch greptime_memory_limit_in_bytes and greptime_cpu_limit_in_millicores metrics (#7043)
fix: remove unnecessary labels of standalone dashboard.json

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-29 11:43:35 +00:00
LFC
5b13fba65b refactor: make Function trait a simple shim of DataFusion UDF (#7036)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-29 09:07:39 +00:00
LFC
aa05b3b993 feat: add max_connection_age config to grpc server (#7031)
* feat: add `max_connection_age` config to grpc server

Signed-off-by: luofucong <luofc@foxmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-09-29 07:32:43 +00:00
fys
c4a7cc0adb chore: improve create trigger display (#7027)
* chore: improve create_trigger_statement display

* improve display of create trigger

* add components for frontend

* Revert "add components for frontend"

This reverts commit 8d71540a72.
2025-09-29 02:22:43 +00:00
Yingwen
90d37cb10e fix: fix panic and limit concurrency in flat format (#7035)
* feat: add a semaphore to control flush concurrency

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: build FlatSchemaOptions from encoding in FlatWriteFormat

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove allow dead_code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle sparse encoding in FlatCompatBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: add time index column in try_new_compact_sparse

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test for compaction and sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-29 02:20:06 +00:00
localhost
4a3c5f85e5 fix: fix test_resolve_relative_path_relative on windows (#7039) 2025-09-28 13:03:57 +00:00
discord9
3ca5c77d91 chore: not warning (#7037)
Signed-off-by: discord9 <discord9@163.com>
2025-09-28 08:11:27 +00:00
discord9
8bcf4a8ab5 test: update unit test by passing extra sort columns (#7030)
* tests: fix unit test by passing one sort column

Signed-off-by: discord9 <discord9@163.com>

* chore: per copilot

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-28 03:22:43 +00:00
zyy17
0717773f62 refactor!: add enable_read_cache config to support disabling read cache explicitly (#6834)
* refactor: add `enable_read_cache` config to support disabling read cache explicitly

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: if `cache_path` is empty and `enable_read_cache` is true, set the default cache dir

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: remove the unnecessary Option type for `ObjectStorageCacheConfig`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: sanitize cache config in `DatanodeOptions` and `StandaloneOptions`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: code review comment

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: apply code review comments

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-26 09:44:12 +00:00
shuiyisong
195ed73448 chore: disable file not exist on watch_file_user_provider (#7028)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-26 09:29:59 +00:00
LFC
243dbde3d5 refactor: rewrite some UDFs to DataFusion style (final part) (#7023)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-26 09:24:29 +00:00
discord9
aca8b690d1 fix: step aggr merge phase neither orders nor filters (#6998)
* fix: not order

Signed-off-by: discord9 <discord9@163.com>

* test: redacted

Signed-off-by: discord9 <discord9@163.com>

* feat: fix up state wrapper

Signed-off-by: discord9 <discord9@163.com>

* df last_value state not as promised!

Signed-off-by: discord9 <discord9@163.com>

* fix?: could fix better

Signed-off-by: discord9 <discord9@163.com>

* test: unstable result

Signed-off-by: discord9 <discord9@163.com>

* fix: work around by fixing state

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* chore: finish some todo

Signed-off-by: discord9 <discord9@163.com>

* chore: per copilot

Signed-off-by: discord9 <discord9@163.com>

* refactor: not fix but just notify mismatch

Signed-off-by: discord9 <discord9@163.com>

* chore: warn -> debug state mismatch

Signed-off-by: discord9 <discord9@163.com>

* chore: refine error msg

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness add last_value date_bin test

Signed-off-by: discord9 <discord9@163.com>

* ?: substrait order by decode failure

Signed-off-by: discord9 <discord9@163.com>

* unit test reproduce that

Signed-off-by: discord9 <discord9@163.com>

* feat: support state wrapper's order serde in substrait

Signed-off-by: discord9 <discord9@163.com>

* refactor: stuff

Signed-off-by: discord9 <discord9@163.com>

* test: standalone/distributed different exec

Signed-off-by: discord9 <discord9@163.com>

* fmt

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: closure

Signed-off-by: discord9 <discord9@163.com>

* test: first value order by

Signed-off-by: discord9 <discord9@163.com>

* refactor: per cr

Signed-off-by: discord9 <discord9@163.com>

* feat: ScanHint last_value last row selector

Signed-off-by: discord9 <discord9@163.com>

* docs: per cr

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-26 09:12:45 +00:00
ZonaHe
9564180a6a feat: update dashboard to v0.11.6 (#7026)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-09-26 06:35:37 +00:00
dennis zhuang
17d16da483 feat: supports expression in TQL params (#7014)
* feat: supports expression in TQL params

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: comment

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-26 03:43:50 +00:00
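
A sketch of expressions in the TQL parameter positions; exactly which expressions are accepted is an assumption:

```sql
-- Previously only literals were allowed in (start, end, step).
TQL EVAL (now() - '1 hour'::INTERVAL, now(), '1m') up;
```
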
Ning Sun
c1acce9943 refactor: cleanup datafusion-pg-catalog dependencies (#7025)
* refactor: cleanup datafusion-pg-catalog dependencies

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: toml format

* feat: update upstream

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-09-26 03:07:00 +00:00
Ruihang Xia
0790835c77 feat!: improve greptime_identity pipeline behavior (#6932)
* flat by default, store array in string

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* expose max_nested_levels param, store string instead of error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove flatten option

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unused errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-25 15:28:28 +00:00
zyy17
280df064c7 chore: add some trace logs in fetching data from cache and object store (#6877)
* chore: add some important debug logs

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add traces logs in `fetch_byte_ranges()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-25 11:54:22 +00:00
discord9
11a08d1381 fix: do not step when aggr has order by/filter (#7015)
* fix: not applied

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* test: confirm order by not push down

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-25 08:43:18 +00:00
shuiyisong
06a4f0abea chore: add function for getting started on metasrv (#7022)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-25 08:24:23 +00:00
dennis zhuang
c6e5552f05 test: migrate aggregation tests from duckdb, part4 (#6965)
* test: migrate aggregation tests from duckdb, part4

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: rename tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: ignore zero weights test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove duplicated sql

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-25 08:00:17 +00:00
discord9
9c8ff1d8a0 fix: skip placeholder when partition columns are present (#7020)
Signed-off-by: discord9 <discord9@163.com>
2025-09-25 07:01:49 +00:00
Yingwen
cff9cb6327 feat: converts batches in old format to the flat format in query time (#6987)
* feat: use correct projection index for old format

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove allow dead_code from format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: check and convert old format to flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: sub primary key num from projection

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: always convert the batch in FlatRowGroupReader

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: Change &Option<&[]> to Option<&[]>

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: only build arrow schema once

adds a method flat_sst_arrow_schema_column_num() to get the field num

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Handle flat format and old format separately

Adds two structs ParquetFlat and ParquetPrimaryKeyToFlat.
ParquetPrimaryKeyToFlat delegates stats and projection to the
PrimaryKeyReadFormat.

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle non string tag correctly

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: do not register file cache twice

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: clean temp files

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add rows and bytes to flush success log

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: convert format in memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: add compaction flag to ScanInput

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: compaction should use old format for sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: merge schema use old format in sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: reads legacy format but does not convert if skip_auto_convert

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: support sparse encoding in bulk parts

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-25 06:42:22 +00:00
Ning Sun
964dc254aa feat: upgraded pg_catalog support (#6918)
* refactor: add datafusion-postgres dependency

* refactor: move and include pg_catalog udfs

* chore: update upstream

* feat: register table function pg_get_keywords

* feat: bridge CatalogInfo for our CatalogManager

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: convert pg_catalog table to our system table

* feat: bridge system catalog with datafusion-postgres

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: add more udfs

* feat: add compatibility rewriter to postgres handler

* fix: various fix

* fmt: fix

* fix: use functions from pg_catalog library

* fmt

* fix: sqlness runner

Signed-off-by: Ning Sun <sunning@greptime.com>

* test: adopt arrow 56.0 to 56.1 memory size change

* fix: add additional udfs

* chore: format

* refactor: return None when creating system table failed

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: provide safety comments about expect usage

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-09-25 04:05:34 +00:00
dennis zhuang
91a727790d feat: supports permission mode for static user provider (#7017)
* feat: supports permission mode for static user provider

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: comment

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-25 03:45:31 +00:00
Weny Xu
07b9de620e fix(cli): fix FS object store handling of absolute paths (#7018)
* fix(cli): fix FS object store handling of absolute paths

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/cli/src/utils.rs

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-09-25 03:38:33 +00:00
LFC
6d0dd2540e refactor: rewrite some UDFs to DataFusion style (part 4) (#7011)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-24 19:50:58 +00:00
fys
a14c01a807 feat: sql parse about show create trigger (#7016)
* feat: sql parse for show create trigger

* fix: build

* remove unused comment

* chore: add tests for parsing complete SQL
2025-09-24 15:52:15 +00:00
discord9
238ed003df fix: group by expr not as column in step aggr (#7008)
* fix: group by expr not as column

Signed-off-by: discord9 <discord9@163.com>

* test: dist analyzer date_bin

Signed-off-by: discord9 <discord9@163.com>

* ???fix wip

Signed-off-by: discord9 <discord9@163.com>

* fix: deduce using correct input fields

Signed-off-by: discord9 <discord9@163.com>

* refactor: clearer wrapper

Signed-off-by: discord9 <discord9@163.com>

* chore: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: rm todo

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-24 06:57:01 +00:00
Weny Xu
0c038f755f refactor(cli): refactor object storage config (#7009)
* refactor: refactor object storage config

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: public common config

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-24 06:50:47 +00:00
Lin Yihai
b5a8725582 feat(copy_to_csv): add date_format/timestamp_format/time_format. (#6995)
feat(copy_to_csv): add `date_format` and so on to `Copy ... to with` syntax

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>
2025-09-24 06:22:53 +00:00
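
A sketch of the new `COPY ... TO ... WITH` options; the chrono-style format strings shown are assumptions:

```sql
COPY monitor TO '/tmp/monitor.csv' WITH (
  format = 'csv',
  date_format = '%Y-%m-%d',               -- new in this PR
  timestamp_format = '%Y-%m-%d %H:%M:%S', -- new in this PR
  time_format = '%H:%M:%S'                -- new in this PR
);
```
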
Ruihang Xia
c7050831db fix: match promql column reference in case sensitive way (#7013)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-24 03:28:09 +00:00
Ruihang Xia
f65dcd12cc feat: refine failure detector (#7005)
* feat: refine failure detector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert back default value

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert change of test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-24 01:43:22 +00:00
Zhenchi
80c8ab42b0 feat: add ssts related system table (#6924)
* feat: add InformationExtension.inspect_datanode for datanode inspection

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* aggregate results from all datanodes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add ssts related system table

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update sst entry

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix sqlness

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix sqlness

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-23 11:06:00 +00:00
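
A sketch of inspecting the new table; the name `information_schema.ssts` is hypothetical, since the PR title only says "ssts related system table":

```sql
SELECT * FROM information_schema.ssts LIMIT 10;  -- table name assumed
```
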
discord9
4507736528 docs: laminar flow rfc (#6928)
* docs: laminar flow

Signed-off-by: discord9 <discord9@163.com>

* chore: wording details

Signed-off-by: discord9 <discord9@163.com>

* more details

Signed-off-by: discord9 <discord9@163.com>

* rearrange phases

Signed-off-by: discord9 <discord9@163.com>

* refactor: use embed frontend per review

Signed-off-by: discord9 <discord9@163.com>

* todo per review

Signed-off-by: discord9 <discord9@163.com>

* docs: seq read impl for laminar flow

Signed-off-by: discord9 <discord9@163.com>

* rename

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-23 08:16:06 +00:00
LFC
2712c5cd7a refactor: rewrite some UDFs to DataFusion style (part 3) (#6990)
* refactor: rewrite some UDFs to DataFusion style (part 3)

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-23 07:53:51 +00:00
Ruihang Xia
078379816c fix: incorrect timestamp resolution in information_schema.partitions table (#7004)
* fix: incorrect timestamp resolution in information_schema.partitions table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use second for all fields in partitions table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-23 06:13:10 +00:00
Ruihang Xia
6dc5fbe9a1 fix: promql range function has incorrect timestamps (#7006)
* fix: promql range function has incorrect timestamps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-23 05:16:54 +00:00
ZonaHe
cd3fb5fd3e feat: update dashboard to v0.11.5 (#7001)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-09-23 05:05:17 +00:00
shyam
5fcca4eeab fix: make EXPIRE (keyword) parsing case-insensitive when creating flow (#6997)
fix: make EXPIRE keyword case-insensitive in CREATE FLOW parser

Signed-off-by: Shyamnatesan <shyamnatesan21@gmail.com>
2025-09-22 12:07:04 +00:00
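
A sketch of the fix in effect: lowercase `expire` is now accepted in `CREATE FLOW` (the flow body is illustrative):

```sql
CREATE FLOW calc_avg
SINK TO avg_table
expire after '1 hour'::INTERVAL  -- previously only uppercase EXPIRE parsed
AS SELECT host, avg(cpu) FROM monitor GROUP BY host;
```
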
Weny Xu
b3d413258d feat: extract standalone functionality and introduce plugin-based router configuration (#7002)
* feat: extract standalone functionality and introduce plugin-based router configuration

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: ensure dump file does not exist

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: introduce `External` error

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-22 11:21:04 +00:00
discord9
03954e8b3b chore: update proto (#6992)
* chore: update proto

Signed-off-by: discord9 <discord9@163.com>

* update lockfile

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-19 09:01:46 +00:00
Yingwen
bd8f5d2b71 fix: print the output message of the error in admin fn macro (#6994)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-19 08:11:19 +00:00
Weny Xu
74721a06ba chore: improve error logging in WAL prune manager (#6993)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-19 07:08:28 +00:00
discord9
18e4839a17 feat: datanode side local gc worker (#6940)
* wip

Signed-off-by: discord9 <discord9@163.com>

* docs for behavior

Signed-off-by: discord9 <discord9@163.com>

* wip: handle outdated version

Signed-off-by: discord9 <discord9@163.com>

* feat: just retry

Signed-off-by: discord9 <discord9@163.com>

* feat: smaller lingering time

Signed-off-by: discord9 <discord9@163.com>

* refactor: partial per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: rm tmp file cnt

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: opt partial

Signed-off-by: discord9 <discord9@163.com>

* chore: rebase fix

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-19 03:20:13 +00:00
LFC
cbe0cf4a74 refactor: rewrite some UDFs to DataFusion style (part 2) (#6967)
* refactor: rewrite some UDFs to DataFusion style (part 2)

Signed-off-by: luofucong <luofc@foxmail.com>

* deal with the vector UDFs `(scalar, scalar)` situation, and try getting the scalar value reference every time

Signed-off-by: luofucong <luofc@foxmail.com>

* reduce some vector literal parsing

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-18 06:37:27 +00:00
discord9
e26b98f452 refactor: put FileId to store-api (#6988)
* refactor: put FileId to store-api

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

* chore: lock file

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-18 03:20:42 +00:00
localhost
d8b967408e chore: modify LogExpr AggrFunc (#6948)
* chore: modify LogExpr AggrFunc

* chore: change AggrFunc range field

* chore: remove range from aggrfunc
2025-09-17 12:19:48 +00:00
Weny Xu
c35407fdce refactor: region follower management with unified interface (#6986)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-17 10:01:03 +00:00
Lei, HUANG
edf4b3f7f8 chore: unset tz env in test (#6984)
chore/unset-tz-env-in-test:
 ### Commit Message

 Add environment variable cleanup in timezone tests

 - Updated `timezone.rs` to include removal of the `TZ` environment variable in the `test_from_tz_string` function to ensure a clean test environment.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-17 08:48:38 +00:00
Yingwen
14550429e9 chore: reduce SeriesScan sender timeout (#6983)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-17 07:02:47 +00:00
shuiyisong
ff2da4903e fix: OTel metrics naming with Prometheus style (#6982)
* fix: otel metrics naming

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: otel metrics naming & add some tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-17 06:11:38 +00:00
Lei, HUANG
c92ab4217f fix: avoid truncating SST statistics during flush (#6977)
fix/disable-parquet-stats-truncate:
 - **Update `memcomparable` Dependency**: Switched from crates.io to a Git repository for `memcomparable` in `Cargo.lock`, `mito-codec/Cargo.toml`, and removed it from `mito2/Cargo.toml`.
 - **Enhance Parquet Writer Properties**: Added `set_statistics_truncate_length` and `set_column_index_truncate_length` to `WriterProperties` in `parquet.rs`, `bulk/part.rs`, `partition_tree/data.rs`, and `writer.rs`.
 - **Add Test for Corrupt Scan**: Introduced a new test module `scan_corrupt.rs` in `mito2/src/engine` to verify handling of corrupt data.
 - **Update Test Data**: Modified test data in `flush.rs` to reflect changes in file sizes and sequences.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-17 03:02:52 +00:00
Zhenchi
77981a7de5 fix: clean intm ignore notfound (#6971)
* fix: clean intm ignore notfound

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-17 02:58:03 +00:00
Lei, HUANG
9096c5ebbf chore: bump sequence on region edit (#6947)
* chore/update-sequence-on-region-edit:
 ### Commit Message

 Refactor `get_last_seq_num` Method Across Engines

 - **Change Return Type**: Updated the `get_last_seq_num` method to return `Result<SequenceNumber, BoxedError>` instead of `Result<Option<SequenceNumber>, BoxedError>` in the following files:
   - `src/datanode/src/tests.rs`
   - `src/file-engine/src/engine.rs`
   - `src/metric-engine/src/engine.rs`
   - `src/metric-engine/src/engine/read.rs`
   - `src/mito2/src/engine.rs`
   - `src/query/src/optimizer/test_util.rs`
   - `src/store-api/src/region_engine.rs`

 - **Enhance Region Edit Handling**: Modified `RegionWorkerLoop` in `src/mito2/src/worker/handle_manifest.rs` to update file sequences during region edits.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* add committed_sequence to RegionEdit

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 ### Commit Message

 Refactor sequence retrieval method

 - **Renamed Method**: Changed `get_last_seq_num` to `get_committed_sequence` across multiple files to better reflect its purpose of retrieving the latest committed sequence.
   - Affected files: `tests.rs`, `engine.rs` in `file-engine`, `metric-engine`, `mito2`, `test_util.rs`, and `region_engine.rs`.
 - **Removed Unused Struct**: Deleted `RegionSequencesRequest` struct from `region_request.rs` as it is no longer needed.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 **Add Committed Sequence Handling in Region Engine**

 - **`engine.rs`**: Introduced a new test module `bump_committed_sequence_test` to verify committed sequence handling.
 - **`bump_committed_sequence_test.rs`**: Added a test to ensure the committed sequence is correctly updated and persisted across region reopenings.
 - **`action.rs`**: Updated `RegionManifest` and `RegionManifestBuilder` to include `committed_sequence` for tracking.
 - **`manager.rs`**: Adjusted manifest size assertion to accommodate new committed sequence data.
 - **`opener.rs`**: Implemented logic to override committed sequence during region opening.
 - **`version.rs`**: Added `set_committed_sequence` method to update the committed sequence in `VersionControl`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 **Enhance `test_bump_committed_sequence` in `bump_committed_sequence_test.rs`**

 - Updated the test to include row operations using `build_rows`, `put_rows`, and `rows_schema` to verify the committed sequence behavior.
 - Adjusted assertions to reflect changes in committed sequence after row operations and region edits.
 - Added comments to clarify the expected behavior of committed sequence after reopening the region and replaying the WAL.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 **Enhance Region Sequence Management**

 - **`bump_committed_sequence_test.rs`**: Updated test to handle region reopening and sequence management, ensuring committed sequences are correctly set and verified after edits.
 - **`opener.rs`**: Improved committed sequence handling by overriding it only if the manifest's sequence is greater than the replayed sequence. Added logging for mutation sequence replay.
 - **`region_write_ctx.rs`**: Modified `push_mutation` and `push_bulk` methods to adopt sequence numbers from parameters, enhancing sequence management during write operations.
 - **`handle_write.rs`**: Updated `RegionWorkerLoop` to pass sequence numbers in `push_bulk` and `push_mutation` methods, ensuring consistent sequence handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-sequence-on-region-edit:
 ### Remove Debug Logging from `opener.rs`

 - Removed debug logging for mutation sequences in `opener.rs` to clean up the output and improve performance.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-16 16:22:25 +00:00
Weny Xu
0a959f9920 feat: add TLS support for mysql backend (#6979)
* refactor: move etcd tls code to `common-meta`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: move postgre pool logic to `utils::postgre`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: setup mysql ssl options

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add test for mysql backend with tls

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: simplify certs generation

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-16 13:46:37 +00:00
discord9
85c1a91bae feat: support SubqueryAlias pushdown (#6963)
* wip enforce dist requirement rewriter

Signed-off-by: discord9 <discord9@163.com>

* feat: enforce dist req

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness result

Signed-off-by: discord9 <discord9@163.com>

* fix: double projection

Signed-off-by: discord9 <discord9@163.com>

* test: fix sqlness

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* docs: use btree map

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness explain&comment

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-16 13:27:35 +00:00
Weny Xu
7aba9a18fd chore: add tests for postgre backend with tls (#6973)
* chore: add tests for postgre backend with tls

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: minor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-16 11:03:11 +00:00
shuiyisong
4c18d140b4 fix: deadlock in dashmap (#6978)
* fix: deadlock in dashmap

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* Update src/frontend/src/instance.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: extract fast cache check and add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-09-16 10:49:28 +00:00
Yingwen
b8e0c49cb4 feat: add a flag to enable the experimental flat format (#6976)
* feat: add enable_experimental_flat_format flag to enable flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract build_scan_input for CompactionSstReaderBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add compact memtable cost to flush metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Sets compact dispatcher for bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Cast dictionary to target type in FlatProjectionMapper

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: add time index to FlatProjectionMapper::batch_schema

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update config toml

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: pass flat_format to ProjectionMapper in CompactionSstReaderBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-16 09:33:12 +00:00
Zhenchi
db42ad42dc feat: add visible to sst entry for staging mode (#6964)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-15 09:05:54 +00:00
shuiyisong
8ce963f63e fix: shorten lock time (#6968) 2025-09-15 03:37:36 +00:00
Yingwen
b3aabb6706 feat: support flush and compact flat format files (#6949)
* feat: basic functions for flush/compact flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: bridge flush and compaction for flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add write cache support

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: change log level to debug

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: wrap duplicated code to merge and dedup iter

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: wrap some code into flush_flat_mem_ranges

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract logic into do_flush_memtables

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-14 13:36:24 +00:00
Ning Sun
028effe952 docs: update memory profiling description doc (#6960)
doc: update memory profiling description doc
2025-09-12 08:30:22 +00:00
Ruihang Xia
d86f489a74 fix: staging mode with proper region edit operations (#6962)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-12 04:39:42 +00:00
dennis zhuang
6c066c1a4a test: migrate join tests from duckdb, part3 (#6881)
* test: migrate join tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: update test results after rebasing main branch

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: unstable query sort results and natural_join test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: count(*) with joining

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: unstable query sort results and style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-12 04:20:00 +00:00
LFC
9ab87e11a4 refactor: rewrite h3 functions to DataFusion style (#6942)
* refactor: rewrite h3 functions to DataFusion style

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-12 02:27:24 +00:00
Weny Xu
9fe7069146 feat: add postgres tls support for CLI (#6941)
* feat: add postgres tls support for cli

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-11 12:18:13 +00:00
fys
733a1afcd1 fix: correct jemalloc metrics (#6959)
The allocated and resident metrics were swapped in the set calls. This commit
fixes the issue by ensuring each metric receives its corresponding value.
2025-09-11 06:37:19 +00:00
Yingwen
5e65581f94 feat: support flat format for SeriesScan (#6938)
* feat: Support flat format for SeriesScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: simplify tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: only accumulate fetch time to scan_cost in SeriesDistributor of
the SeriesScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-11 06:12:53 +00:00
ZonaHe
e75e5baa63 feat: update dashboard to v0.11.4 (#6956)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-09-11 04:34:25 +00:00
zyy17
c4b89df523 fix: use pull_request_target to fix 403 error when adding labels (#6958)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-11 03:53:14 +00:00
Weny Xu
6a15e62719 feat: expose workload filter to selector options (#6951)
* feat: add workload filtering support to selector options

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-11 03:11:13 +00:00
discord9
2bddbe8c47 feat(query): better alias tracker (#6909)
* better resolve

Signed-off-by: discord9 <discord9@163.com>

feat: layered alias tracker

Signed-off-by: discord9 <discord9@163.com>

refactor

Signed-off-by: discord9 <discord9@163.com>

docs: explain why there is no off-by-one

Signed-off-by: discord9 <discord9@163.com>

test: more

Signed-off-by: discord9 <discord9@163.com>

simplify api

Signed-off-by: discord9 <discord9@163.com>

wip

Signed-off-by: discord9 <discord9@163.com>

fix: filter non-existent columns

Signed-off-by: discord9 <discord9@163.com>

feat: stuff

Signed-off-by: discord9 <discord9@163.com>

feat: cache partition columns

Signed-off-by: discord9 <discord9@163.com>

refactor: rm unused fn

Signed-off-by: discord9 <discord9@163.com>

no need res

Signed-off-by: discord9 <discord9@163.com>

chore: rm unwrap&docs update

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* fix: unsupported part

Signed-off-by: discord9 <discord9@163.com>

* err msg

Signed-off-by: discord9 <discord9@163.com>

* fix: pass correct partition cols

Signed-off-by: discord9 <discord9@163.com>

* fix? use column name only

Signed-off-by: discord9 <discord9@163.com>

* fix: merge scan has partition columns no alias/no partition diff

Signed-off-by: discord9 <discord9@163.com>

* refactor: loop instead of recursive

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* feat: overlaps

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-11 02:30:51 +00:00
discord9
ea8125aafb fix: use count(1) instead of count(ts) when there is more than one input (#6952)
Signed-off-by: discord9 <discord9@163.com>
2025-09-10 21:30:43 +00:00
dennis zhuang
49722951c6 fix: unstable query sort results (#6944)
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-10 20:41:10 +00:00
Cason Kervis
c431b036ec fix(path): fix program lookup failure on Windows CI (#6946)
* fix(path): fix program lookup failure on Windows CI

Signed-off-by: Cason Kervis <cscnk52@outlook.com>

* fix(path): fix program exec name

Signed-off-by: Cason Kervis <cscnk52@outlook.com>

* fix(path): using absolute path

Signed-off-by: Cason Kervis <cscnk52@outlook.com>

* style: using fmt

Signed-off-by: Cason Kervis <cscnk52@outlook.com>

---------

Signed-off-by: Cason Kervis <cscnk52@outlook.com>
2025-09-10 13:38:54 +00:00
Ruihang Xia
c797d87210 fix: handle hash distribution properly (#6943)
* fix: handle hash distribution properly

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/optimizer/pass_distribution.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2025-09-10 06:35:10 +00:00
Weny Xu
e0ce0a6446 refactor: refactor PeerLookupService and simplify Selector implementations (#6939)
* refactor: move `lease` into `discovery` dir

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce discovery components

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor:  simplify selector

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: remove  duplicate peer allocator trait

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: minor refactor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: refine comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-10 03:43:46 +00:00
Ruihang Xia
2f6da3b718 feat: exiting staging mode on success case (#6913)
* clear staging directories on entering staging mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test and merger

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* storage accessor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* exit on success

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use remove_all

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-10 01:52:55 +00:00
Ruihang Xia
83290be3ba feat: store partition expr per file in region manifest (#6849)
* feat: store partition expr per file in region manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use deserialized form in memory

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test non-existing partition expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* try remove partition expr dup

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Revert "try remove partition expr dup"

This reverts commit a0f2c8ca6a127ef356a5510466e84513cf4b13d8.

* fix merge test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format doc

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-09 23:37:29 +00:00
Zhenchi
21ee981b49 feat: add origin_region_id and node_id to sst entry (#6937)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-09 09:45:15 +00:00
LFC
44c2aa4c23 refactor: use DataFusion's return_type in our function trait directly (#6935)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-09 09:30:31 +00:00
Yingwen
9c22092189 feat: Implements compaction for bulk memtable (#6923)
* feat: initial bulk memtable compaction implementation

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement compact for memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for bulk memtable compaction

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address review comments

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-09 09:15:25 +00:00
discord9
ad690e14d0 chore: clean up FlowEngine trait (#6934)
Signed-off-by: discord9 <discord9@163.com>
2025-09-09 07:14:52 +00:00
Zhenchi
264d05d20e feat: add InformationExtension.inspect_datanode for datanode inspection (#6921)
* feat: add InformationExtension.inspect_datanode for datanode inspection

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* aggregate results from all datanodes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix unreleased mito engine

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-09 03:29:04 +00:00
Zhenchi
9fe84f6fbd feat(mito): backfill partition expr on region open (#6862)
* feat(mito): backfill partition expr on region open

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* tiny polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* only writable leader persists

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* catchup needs backfill

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add log

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* handle distribute mode

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* only when transfer to set gracefully

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-09 02:50:01 +00:00
shuiyisong
ac051af201 feat(pipeline): generate create table sql from pipeline config (#6930)
* chore: add expr_to_create function using cc and polish

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* feat: add get pipeline create table

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update test_expr_to_create

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: column extension

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add message hint for variant table name

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add message hint for existing table

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* test: create table for pipeline

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update comment

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: rename to query pipeline ddl

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-09 02:47:51 +00:00
Ning Sun
948a6578fa feat: add udtf (table function) registration (#6922)
* feat: work-in-progress udtf support

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: add table function support

Signed-off-by: Ning Sun <sunning@greptime.com>

* test: resolve test error

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-09-09 01:26:38 +00:00
Weny Xu
16febbd4c2 feat: add CPU, memory and node status info to cluster_info (#6897)
* feat: add CPU and memory info to `cluster_info`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add `node_status` to `cluster_info` table

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: update sqlness

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-08 08:59:34 +00:00
LFC
47384c7701 feat: support function alias (#6917)
* feat: udf alias

Signed-off-by: luofucong <luofc@foxmail.com>

* trying to fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

* x

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-08 08:57:24 +00:00
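
To picture what a function alias does here (all names below are invented for illustration): the registry resolves an alias to its canonical function name before the normal lookup, so both spellings reach the same implementation.

```rust
use std::collections::HashMap;

struct FunctionRegistry {
    // alias name -> canonical function name
    aliases: HashMap<&'static str, &'static str>,
}

impl FunctionRegistry {
    /// Resolve an alias to its canonical name; unknown names pass through.
    fn canonical<'a>(&'a self, name: &'a str) -> &'a str {
        match self.aliases.get(name) {
            Some(canonical) => canonical,
            None => name,
        }
    }
}

fn main() {
    let mut aliases = HashMap::new();
    aliases.insert("mean", "avg");
    let registry = FunctionRegistry { aliases };
    assert_eq!(registry.canonical("mean"), "avg");
    assert_eq!(registry.canonical("avg"), "avg");
}
```
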
Ruihang Xia
c9377e7c5a build: bump rust edition to 2024 (#6920)
* bump edition

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* gen keyword

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* lifetime and env var

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* one more gen fix

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* lifetime of temporaries in tail expressions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clippy nested if

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clippy let and return

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-08 02:37:18 +00:00
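
One of the mechanical migrations the commit trail hints at ("gen keyword"): Rust 2024 reserves `gen`, so pre-existing items with that name must switch to raw identifiers. A tiny illustration:

```rust
// `gen` is a reserved keyword in edition 2024; the raw-identifier spelling
// keeps old APIs compiling after the bump.
fn r#gen(seed: u64) -> u64 {
    // Purely illustrative LCG step.
    seed.wrapping_mul(6364136223846793005).wrapping_add(1442695040888963407)
}

fn main() {
    println!("{}", r#gen(42));
}
```
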
Weny Xu
658d07bfc8 feat: add written_bytes_since_open column to region_statistics table (#6904)
* feat: add `write_bytes` column to `region_statistics` table

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename `write_bytes` to `written_bytes`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename `written_bytes` to `written_bytes_since_open`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-05 07:27:30 +00:00
dennis zhuang
24e5c9f6da test: migrate duckdb tests, part 1 (#6870)
* test: migrate duckdb tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add more duckdb tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: stable order

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: simplify comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove tests/cases/standalone/common/DUCKDB_MIGRATION_GUIDE.md

* fix: incorrect_sql.sql

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: integer flow test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: integer flow test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* docs: add todo

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-05 06:10:14 +00:00
Ruihang Xia
4158afa618 fix: wrap tql cte in a subquery alias (#6910)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-05 06:07:20 +00:00
LFC
48f5db3f5f refactor: use DataFusion's Signature directly in UDF (#6908)
* refactor: use DataFusion's Signature directly in UDF

Signed-off-by: luofucong <luofc@foxmail.com>

* fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-05 05:04:56 +00:00
discord9
f7c8d86ebb feat: file ref mgr (#6844)
* feat: file ref manager

Signed-off-by: discord9 <discord9@163.com>

* refactor: put file ref mgr to sep file

Signed-off-by: discord9 <discord9@163.com>

* chore: rename path

Signed-off-by: discord9 <discord9@163.com>

* test: ref path

Signed-off-by: discord9 <discord9@163.com>

* fix: pass node id

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* feat: exclude files already in the manifest

Signed-off-by: discord9 <discord9@163.com>

* docs: explain why it can work

Signed-off-by: discord9 <discord9@163.com>

* feat: also include manifest versions

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* rename func to imply what it's actually doing

Signed-off-by: discord9 <discord9@163.com>

* more docs

Signed-off-by: discord9 <discord9@163.com>

* docs: expect gc worker to do the job right

Signed-off-by: discord9 <discord9@163.com>

* refactor: partially per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: unused

Signed-off-by: discord9 <discord9@163.com>

* metrics: change to per datanode instead

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-05 03:45:01 +00:00
Ruihang Xia
faae1db066 feat: humanize analyze numbers (#6889)
* feat: humanize analyze numbers

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update cargo lock

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* trailing space

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use readable size

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-05 03:42:38 +00:00
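
The idea, sketched with an invented helper (the actual change reuses an existing readable-size formatter): large row and byte counts in `EXPLAIN ANALYZE` output are shortened into human-readable strings.

```rust
// Hypothetical helper in the spirit of the change: shorten large counts
// into short human-readable strings for analyze output.
fn humanize_count(n: u64) -> String {
    const UNITS: [(u64, &str); 3] = [(1_000_000_000, "B"), (1_000_000, "M"), (1_000, "K")];
    for (base, suffix) in UNITS {
        if n >= base {
            return format!("{:.2}{}", n as f64 / base as f64, suffix);
        }
    }
    n.to_string()
}

fn main() {
    assert_eq!(humanize_count(1_234_567), "1.23M");
    assert_eq!(humanize_count(512), "512");
}
```
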
Ruihang Xia
ba105e18b0 feat: put sqlness into a separated dir (#6911)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-05 01:39:29 +00:00
Yingwen
8bbb396506 feat: Supports flat format in SeqScan and UnorderedScan (#6905)
* feat: support flat format in SeqScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support flat format in unordered scan

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support parallel read for flat format in SeqScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename flat DedupReader to FlatDedupReader

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address review comments

It also precomputes the input arrow schema

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-04 13:12:24 +00:00
dennis zhuang
8e7f2e92cc test: adds approx_percentile_cont to range query test (#6903)
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-04 12:18:27 +00:00
Ruihang Xia
493a1efe65 feat: skip compaction on large file on append only mode (#6838)
* feat: skip compaction on large file on append only mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* log ignored files

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* only ignore level 1 files

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* early exit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-04 12:01:09 +00:00
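
The resulting picking rule, as a sketch with hypothetical names: in append-only mode, level-1 files past a size threshold are dropped from the compaction candidate set, and the real code logs which files it ignored.

```rust
struct FileMeta {
    level: u8,
    file_size: u64,
}

/// Keep a file as a compaction input unless we are in append-only mode and
/// the file is a large level-1 file (the real implementation logs the skips).
fn compaction_inputs(files: Vec<FileMeta>, append_mode: bool, max_input_size: u64) -> Vec<FileMeta> {
    files
        .into_iter()
        .filter(|f| !(append_mode && f.level == 1 && f.file_size > max_input_size))
        .collect()
}

fn main() {
    let files = vec![
        FileMeta { level: 1, file_size: 4 << 30 },
        FileMeta { level: 0, file_size: 4 << 30 },
    ];
    let picked = compaction_inputs(files, true, 1 << 30);
    assert_eq!(picked.len(), 1); // only the level-0 file survives
}
```
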
Yingwen
40a49ddc82 feat: implement basic write/read methods for bulk memtable (#6888)
* feat: implement basic write/read methods for bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: change update stats atomic ordering

We have already acquired the write lock so Relaxed ordering is fine

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-04 08:42:05 +00:00
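
The ordering note in the second commit is a nice micro-lesson; here is a minimal sketch (type names invented) of why `Relaxed` suffices when the counter is only updated while the write lock is held:

```rust
use std::sync::RwLock;
use std::sync::atomic::{AtomicUsize, Ordering};

struct BulkMemtable {
    rows: RwLock<Vec<u64>>,
    estimated_bytes: AtomicUsize,
}

impl BulkMemtable {
    fn write(&self, value: u64) {
        let mut rows = self.rows.write().unwrap();
        rows.push(value);
        // Relaxed is fine here: the write lock already orders these updates,
        // and readers of the statistic only need a rough estimate.
        self.estimated_bytes
            .fetch_add(std::mem::size_of::<u64>(), Ordering::Relaxed);
    }
}

fn main() {
    let m = BulkMemtable {
        rows: RwLock::new(Vec::new()),
        estimated_bytes: AtomicUsize::new(0),
    };
    m.write(1);
    assert_eq!(m.estimated_bytes.load(Ordering::Relaxed), 8);
}
```
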
Alex Araujo
2c019965be feat: unify FlushRegions instructions (#6819)
* feat: add FlushRegionsV2 instruction with unified semantics

- Add FlushRegionsV2 struct supporting both single and batch operations
- Preserve original FlushRegion(RegionId) API for backward compatibility
- Support configurable FlushStrategy (Sync/Async) and FlushErrorStrategy (FailFast/TryAll)
- Add detailed per-region error reporting in FlushRegionReply
- Update datanode handlers to support both legacy and enhanced flush instructions
- Maintain zero breaking changes through automatic conversion of legacy formats

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* chore: run make fmt

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* Apply suggestions from code review

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* refactor: extract shared perform_region_flush fn

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* refactor: use consistent error type across similar methods

see gh copilot suggestion: https://github.com/GreptimeTeam/greptimedb/pull/6819#discussion_r2299603698

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* chore: make fmt

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* refactor: consolidate FlushRegion instructions

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

---------

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>
2025-09-04 01:18:54 +00:00
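
The consolidated instruction can be pictured roughly as follows. Enum and field names follow the commit message; the exact shapes, including the `RegionId` alias, are assumptions rather than GreptimeDB's actual definitions.

```rust
use serde::{Deserialize, Serialize};

// Stand-in for GreptimeDB's real RegionId type.
type RegionId = u64;

#[derive(Debug, Serialize, Deserialize)]
enum FlushStrategy {
    Sync,
    Async,
}

#[derive(Debug, Serialize, Deserialize)]
enum FlushErrorStrategy {
    FailFast,
    TryAll,
}

#[derive(Debug, Serialize, Deserialize)]
struct FlushRegions {
    region_ids: Vec<RegionId>,
    strategy: FlushStrategy,
    error_strategy: FlushErrorStrategy,
}

#[derive(Debug, Serialize, Deserialize)]
struct FlushRegionReply {
    // Per-region results enable detailed error reporting under TryAll.
    results: Vec<(RegionId, Result<(), String>)>,
}

// The legacy single-region form converts losslessly into the batch form,
// which is how backward compatibility is kept without breaking changes.
impl From<RegionId> for FlushRegions {
    fn from(region_id: RegionId) -> Self {
        Self {
            region_ids: vec![region_id],
            strategy: FlushStrategy::Sync,
            error_strategy: FlushErrorStrategy::FailFast,
        }
    }
}

fn main() {
    let legacy: FlushRegions = 42.into();
    assert_eq!(legacy.region_ids, vec![42]);
}
```
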
localhost
756856799f refactor(pipeline)!: change dispatch table name format (#6901)
* chore(breaking): change dispatch table name format

* chore: fix test
2025-09-03 13:47:40 +00:00
Weny Xu
dbb76483e8 chore: disable stats persistence by default (#6900)
* chore: disable stats persistence by default

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-03 12:39:43 +00:00
discord9
a65db7121e feat: flow full aggr only triggers on new data (#6880)
* fix: flow full aggr only triggers on new data

Signed-off-by: discord9 <discord9@163.com>

* chore: better debug msg

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-03 08:45:24 +00:00
liyang
ee4b8708d7 fix: fix deploy greptimedb in sqlness-test (#6894)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-09-03 08:11:18 +00:00
Weny Xu
2512f9456d chore: bump main branch version to 0.18 (#6890)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-03 07:50:24 +00:00
Zhenchi
deb728c38a fix: move prune_region_dir to region drop (#6891)
* fix: move prune_region_dir to region drop

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-03 07:11:05 +00:00
dennis zhuang
9dbf6dd8d0 test: migrate duckdb tests part2, window functions (#6875)
* test: migrate window tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: blank line at the end

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-03 06:55:47 +00:00
Weny Xu
7cf47ccf54 fix: initialize remote WAL regions with correct flushed entry IDs (#6856)
* fix: initialize remote WAL regions with correct flushed entry IDs

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add logs

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: correct latest offset

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: update sqlness

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add replay checkpoint to catchup request

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: logs

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-03 04:00:02 +00:00
Weny Xu
c3c79e4c79 chore: fix typo (#6887)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-03 03:45:55 +00:00
Ruihang Xia
56db1d468a chore: fix typo (#6885)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-03 02:43:25 +00:00
WaterWhisperer
7fbcae1ef8 test: add upgrade compatibility tests (#6863)
test: add upgrade compatibility tests (#5188)

Signed-off-by: WaterWhisperer <waterwhisperer24@qq.com>
2025-09-02 21:16:00 +00:00
Weny Xu
b7f507cb37 chore: update dashboard (#6883)
* chore: update dashboard

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update json

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update json

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add desc

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-02 14:54:22 +00:00
Zhenchi
9b8f04b065 fix: prune intermediate dirs on index finish and region purge (#6878)
* fix: prune intermediate dirs on index finish and region purge

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-02 13:57:16 +00:00
Yingwen
8fc42aeb27 feat: Update parquet writer and indexer to support the flat format (#6866)
* feat: implements method to write flat batch for ParquetWriter

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add update method for flat RecordBatch in Indexer

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: calls indexer to write flat batch in ParquetWriter

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle empty projection for flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: eval array in precise_filter_flat

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: cache column lookup result in inverted indexer

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support dict type in dense codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: remove read part in test as it needs modifying the reader

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support dictionary type in other methods for dense codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: fulltext use string array directly

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-02 12:48:34 +00:00
Weny Xu
d394f38d18 fix: ignore reserved column IDs and prevent panic when chunk_size is zero (#6882)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-02 12:37:16 +00:00
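
Context for the `chunk_size` half of the fix: Rust's `slice::chunks` panics when given zero, so a zero value must be rejected before iteration. A generic guard, with an illustrative function name:

```rust
/// Sum each chunk of `values`. `slice::chunks` panics on a zero chunk size,
/// so that case is rejected up front instead of panicking at runtime.
fn chunked_sums(values: &[u64], chunk_size: usize) -> Result<Vec<u64>, String> {
    if chunk_size == 0 {
        return Err("chunk_size must be positive".to_string());
    }
    Ok(values.chunks(chunk_size).map(|c| c.iter().sum()).collect())
}

fn main() {
    assert_eq!(chunked_sums(&[1, 2, 3, 4, 5], 2), Ok(vec![3, 7, 5]));
    assert!(chunked_sums(&[1], 0).is_err());
}
```
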
discord9
6a3791ab31 fix(flow): promql auto create table (#6867)
* fix: non aggr prom ql auto create table

Signed-off-by: discord9 <discord9@163.com>

* feat: val column use any name

Signed-off-by: discord9 <discord9@163.com>

* feat: check if it's tql src table

Signed-off-by: discord9 <discord9@163.com>

* test: check sink table is tql-able

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness redacted

Signed-off-by: discord9 <discord9@163.com>

* fix: sql also handle no aggr case

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-02 08:54:16 +00:00
1721 changed files with 102693 additions and 33636 deletions


@@ -2,7 +2,7 @@
linker = "aarch64-linux-gnu-gcc"
[alias]
sqlness = "run --bin sqlness-runner --"
sqlness = "run --bin sqlness-runner --target-dir target/sqlness --"
[unstable.git]
shallow_index = true


@@ -1,3 +1,8 @@
logging:
level: "info"
format: "json"
filters:
- log_store=debug
meta:
configData: |-
[runtime]


@@ -3,12 +3,16 @@
set -e
set -o pipefail
KUBERNETES_VERSION="${KUBERNETES_VERSION:-v1.24.0}"
KUBERNETES_VERSION="${KUBERNETES_VERSION:-v1.32.0}"
ENABLE_STANDALONE_MODE="${ENABLE_STANDALONE_MODE:-true}"
DEFAULT_INSTALL_NAMESPACE=${DEFAULT_INSTALL_NAMESPACE:-default}
GREPTIMEDB_IMAGE_TAG=${GREPTIMEDB_IMAGE_TAG:-latest}
ETCD_CHART="oci://registry-1.docker.io/bitnamicharts/etcd"
GREPTIMEDB_OPERATOR_IMAGE_TAG=${GREPTIMEDB_OPERATOR_IMAGE_TAG:-v0.5.1}
GREPTIMEDB_INITIALIZER_IMAGE_TAG="${GREPTIMEDB_OPERATOR_IMAGE_TAG}"
GREPTIME_CHART="https://greptimeteam.github.io/helm-charts/"
ETCD_CHART="oci://registry-1.docker.io/bitnamicharts/etcd"
ETCD_CHART_VERSION="${ETCD_CHART_VERSION:-12.0.8}"
ETCD_IMAGE_TAG="${ETCD_IMAGE_TAG:-3.6.1-debian-12-r3}"
# Create a cluster with 1 control-plane node and 5 workers.
function create_kind_cluster() {
@@ -35,10 +39,16 @@ function add_greptime_chart() {
function deploy_etcd_cluster() {
local namespace="$1"
helm install etcd "$ETCD_CHART" \
helm upgrade --install etcd "$ETCD_CHART" \
--version "$ETCD_CHART_VERSION" \
--create-namespace \
--set replicaCount=3 \
--set auth.rbac.create=false \
--set auth.rbac.token.enabled=false \
--set global.security.allowInsecureImages=true \
--set image.registry=docker.io \
--set image.repository=greptime/etcd \
--set image.tag="$ETCD_IMAGE_TAG" \
-n "$namespace"
# Wait for etcd cluster to be ready.
@@ -48,8 +58,9 @@ function deploy_etcd_cluster() {
# Deploy greptimedb-operator.
function deploy_greptimedb_operator() {
# Use the latest chart and image.
helm install greptimedb-operator greptime/greptimedb-operator \
--set image.tag=latest \
helm upgrade --install greptimedb-operator greptime/greptimedb-operator \
--create-namespace \
--set image.tag="$GREPTIMEDB_OPERATOR_IMAGE_TAG" \
-n "$DEFAULT_INSTALL_NAMESPACE"
# Wait for greptimedb-operator to be ready.
@@ -66,9 +77,12 @@ function deploy_greptimedb_cluster() {
deploy_etcd_cluster "$install_namespace"
helm install "$cluster_name" greptime/greptimedb-cluster \
helm upgrade --install "$cluster_name" greptime/greptimedb-cluster \
--create-namespace \
--set image.tag="$GREPTIMEDB_IMAGE_TAG" \
--set initializer.tag="$GREPTIMEDB_INITIALIZER_IMAGE_TAG" \
--set meta.backendStorage.etcd.endpoints="etcd.$install_namespace:2379" \
--set meta.backendStorage.etcd.storeKeyPrefix="$cluster_name" \
-n "$install_namespace"
# Wait for greptimedb cluster to be ready.
@@ -101,15 +115,18 @@ function deploy_greptimedb_cluster_with_s3_storage() {
deploy_etcd_cluster "$install_namespace"
helm install "$cluster_name" greptime/greptimedb-cluster -n "$install_namespace" \
helm upgrade --install "$cluster_name" greptime/greptimedb-cluster -n "$install_namespace" \
--create-namespace \
--set image.tag="$GREPTIMEDB_IMAGE_TAG" \
--set initializer.tag="$GREPTIMEDB_INITIALIZER_IMAGE_TAG" \
--set meta.backendStorage.etcd.endpoints="etcd.$install_namespace:2379" \
--set storage.s3.bucket="$AWS_CI_TEST_BUCKET" \
--set storage.s3.region="$AWS_REGION" \
--set storage.s3.root="$DATA_ROOT" \
--set storage.credentials.secretName=s3-credentials \
--set storage.credentials.accessKeyId="$AWS_ACCESS_KEY_ID" \
--set storage.credentials.secretAccessKey="$AWS_SECRET_ACCESS_KEY"
--set meta.backendStorage.etcd.storeKeyPrefix="$cluster_name" \
--set objectStorage.s3.bucket="$AWS_CI_TEST_BUCKET" \
--set objectStorage.s3.region="$AWS_REGION" \
--set objectStorage.s3.root="$DATA_ROOT" \
--set objectStorage.credentials.secretName=s3-credentials \
--set objectStorage.credentials.accessKeyId="$AWS_ACCESS_KEY_ID" \
--set objectStorage.credentials.secretAccessKey="$AWS_SECRET_ACCESS_KEY"
# Wait for greptimedb cluster to be ready.
while true; do
@@ -134,7 +151,8 @@ function deploy_greptimedb_cluster_with_s3_storage() {
# Deploy standalone greptimedb.
# It will expose cluster service ports as '34000', '34001', '34002', '34003' to local access.
function deploy_standalone_greptimedb() {
helm install greptimedb-standalone greptime/greptimedb-standalone \
helm upgrade --install greptimedb-standalone greptime/greptimedb-standalone \
--create-namespace \
--set image.tag="$GREPTIMEDB_IMAGE_TAG" \
-n "$DEFAULT_INSTALL_NAMESPACE"

507 .github/scripts/package-lock.json generated vendored Normal file

@@ -0,0 +1,507 @@
{
"name": "greptimedb-github-scripts",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "greptimedb-github-scripts",
"version": "1.0.0",
"dependencies": {
"@octokit/rest": "^21.0.0",
"axios": "^1.7.0"
}
},
"node_modules/@octokit/auth-token": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/@octokit/auth-token/-/auth-token-5.1.2.tgz",
"integrity": "sha512-JcQDsBdg49Yky2w2ld20IHAlwr8d/d8N6NiOXbtuoPCqzbsiJgF633mVUw3x4mo0H5ypataQIX7SFu3yy44Mpw==",
"license": "MIT",
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/core": {
"version": "6.1.6",
"resolved": "https://registry.npmjs.org/@octokit/core/-/core-6.1.6.tgz",
"integrity": "sha512-kIU8SLQkYWGp3pVKiYzA5OSaNF5EE03P/R8zEmmrG6XwOg5oBjXyQVVIauQ0dgau4zYhpZEhJrvIYt6oM+zZZA==",
"license": "MIT",
"dependencies": {
"@octokit/auth-token": "^5.0.0",
"@octokit/graphql": "^8.2.2",
"@octokit/request": "^9.2.3",
"@octokit/request-error": "^6.1.8",
"@octokit/types": "^14.0.0",
"before-after-hook": "^3.0.2",
"universal-user-agent": "^7.0.0"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/endpoint": {
"version": "10.1.4",
"resolved": "https://registry.npmjs.org/@octokit/endpoint/-/endpoint-10.1.4.tgz",
"integrity": "sha512-OlYOlZIsfEVZm5HCSR8aSg02T2lbUWOsCQoPKfTXJwDzcHQBrVBGdGXb89dv2Kw2ToZaRtudp8O3ZIYoaOjKlA==",
"license": "MIT",
"dependencies": {
"@octokit/types": "^14.0.0",
"universal-user-agent": "^7.0.2"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/graphql": {
"version": "8.2.2",
"resolved": "https://registry.npmjs.org/@octokit/graphql/-/graphql-8.2.2.tgz",
"integrity": "sha512-Yi8hcoqsrXGdt0yObxbebHXFOiUA+2v3n53epuOg1QUgOB6c4XzvisBNVXJSl8RYA5KrDuSL2yq9Qmqe5N0ryA==",
"license": "MIT",
"dependencies": {
"@octokit/request": "^9.2.3",
"@octokit/types": "^14.0.0",
"universal-user-agent": "^7.0.0"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/openapi-types": {
"version": "25.1.0",
"resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-25.1.0.tgz",
"integrity": "sha512-idsIggNXUKkk0+BExUn1dQ92sfysJrje03Q0bv0e+KPLrvyqZF8MnBpFz8UNfYDwB3Ie7Z0TByjWfzxt7vseaA==",
"license": "MIT"
},
"node_modules/@octokit/plugin-paginate-rest": {
"version": "11.6.0",
"resolved": "https://registry.npmjs.org/@octokit/plugin-paginate-rest/-/plugin-paginate-rest-11.6.0.tgz",
"integrity": "sha512-n5KPteiF7pWKgBIBJSk8qzoZWcUkza2O6A0za97pMGVrGfPdltxrfmfF5GucHYvHGZD8BdaZmmHGz5cX/3gdpw==",
"license": "MIT",
"dependencies": {
"@octokit/types": "^13.10.0"
},
"engines": {
"node": ">= 18"
},
"peerDependencies": {
"@octokit/core": ">=6"
}
},
"node_modules/@octokit/plugin-paginate-rest/node_modules/@octokit/openapi-types": {
"version": "24.2.0",
"resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-24.2.0.tgz",
"integrity": "sha512-9sIH3nSUttelJSXUrmGzl7QUBFul0/mB8HRYl3fOlgHbIWG+WnYDXU3v/2zMtAvuzZ/ed00Ei6on975FhBfzrg==",
"license": "MIT"
},
"node_modules/@octokit/plugin-paginate-rest/node_modules/@octokit/types": {
"version": "13.10.0",
"resolved": "https://registry.npmjs.org/@octokit/types/-/types-13.10.0.tgz",
"integrity": "sha512-ifLaO34EbbPj0Xgro4G5lP5asESjwHracYJvVaPIyXMuiuXLlhic3S47cBdTb+jfODkTE5YtGCLt3Ay3+J97sA==",
"license": "MIT",
"dependencies": {
"@octokit/openapi-types": "^24.2.0"
}
},
"node_modules/@octokit/plugin-request-log": {
"version": "5.3.1",
"resolved": "https://registry.npmjs.org/@octokit/plugin-request-log/-/plugin-request-log-5.3.1.tgz",
"integrity": "sha512-n/lNeCtq+9ofhC15xzmJCNKP2BWTv8Ih2TTy+jatNCCq/gQP/V7rK3fjIfuz0pDWDALO/o/4QY4hyOF6TQQFUw==",
"license": "MIT",
"engines": {
"node": ">= 18"
},
"peerDependencies": {
"@octokit/core": ">=6"
}
},
"node_modules/@octokit/plugin-rest-endpoint-methods": {
"version": "13.5.0",
"resolved": "https://registry.npmjs.org/@octokit/plugin-rest-endpoint-methods/-/plugin-rest-endpoint-methods-13.5.0.tgz",
"integrity": "sha512-9Pas60Iv9ejO3WlAX3maE1+38c5nqbJXV5GrncEfkndIpZrJ/WPMRd2xYDcPPEt5yzpxcjw9fWNoPhsSGzqKqw==",
"license": "MIT",
"dependencies": {
"@octokit/types": "^13.10.0"
},
"engines": {
"node": ">= 18"
},
"peerDependencies": {
"@octokit/core": ">=6"
}
},
"node_modules/@octokit/plugin-rest-endpoint-methods/node_modules/@octokit/openapi-types": {
"version": "24.2.0",
"resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-24.2.0.tgz",
"integrity": "sha512-9sIH3nSUttelJSXUrmGzl7QUBFul0/mB8HRYl3fOlgHbIWG+WnYDXU3v/2zMtAvuzZ/ed00Ei6on975FhBfzrg==",
"license": "MIT"
},
"node_modules/@octokit/plugin-rest-endpoint-methods/node_modules/@octokit/types": {
"version": "13.10.0",
"resolved": "https://registry.npmjs.org/@octokit/types/-/types-13.10.0.tgz",
"integrity": "sha512-ifLaO34EbbPj0Xgro4G5lP5asESjwHracYJvVaPIyXMuiuXLlhic3S47cBdTb+jfODkTE5YtGCLt3Ay3+J97sA==",
"license": "MIT",
"dependencies": {
"@octokit/openapi-types": "^24.2.0"
}
},
"node_modules/@octokit/request": {
"version": "9.2.4",
"resolved": "https://registry.npmjs.org/@octokit/request/-/request-9.2.4.tgz",
"integrity": "sha512-q8ybdytBmxa6KogWlNa818r0k1wlqzNC+yNkcQDECHvQo8Vmstrg18JwqJHdJdUiHD2sjlwBgSm9kHkOKe2iyA==",
"license": "MIT",
"dependencies": {
"@octokit/endpoint": "^10.1.4",
"@octokit/request-error": "^6.1.8",
"@octokit/types": "^14.0.0",
"fast-content-type-parse": "^2.0.0",
"universal-user-agent": "^7.0.2"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/request-error": {
"version": "6.1.8",
"resolved": "https://registry.npmjs.org/@octokit/request-error/-/request-error-6.1.8.tgz",
"integrity": "sha512-WEi/R0Jmq+IJKydWlKDmryPcmdYSVjL3ekaiEL1L9eo1sUnqMJ+grqmC9cjk7CA7+b2/T397tO5d8YLOH3qYpQ==",
"license": "MIT",
"dependencies": {
"@octokit/types": "^14.0.0"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/rest": {
"version": "21.1.1",
"resolved": "https://registry.npmjs.org/@octokit/rest/-/rest-21.1.1.tgz",
"integrity": "sha512-sTQV7va0IUVZcntzy1q3QqPm/r8rWtDCqpRAmb8eXXnKkjoQEtFe3Nt5GTVsHft+R6jJoHeSiVLcgcvhtue/rg==",
"license": "MIT",
"dependencies": {
"@octokit/core": "^6.1.4",
"@octokit/plugin-paginate-rest": "^11.4.2",
"@octokit/plugin-request-log": "^5.3.1",
"@octokit/plugin-rest-endpoint-methods": "^13.3.0"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/types": {
"version": "14.1.0",
"resolved": "https://registry.npmjs.org/@octokit/types/-/types-14.1.0.tgz",
"integrity": "sha512-1y6DgTy8Jomcpu33N+p5w58l6xyt55Ar2I91RPiIA0xCJBXyUAhXCcmZaDWSANiha7R9a6qJJ2CRomGPZ6f46g==",
"license": "MIT",
"dependencies": {
"@octokit/openapi-types": "^25.1.0"
}
},
"node_modules/asynckit": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz",
"integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==",
"license": "MIT"
},
"node_modules/axios": {
"version": "1.12.2",
"resolved": "https://registry.npmjs.org/axios/-/axios-1.12.2.tgz",
"integrity": "sha512-vMJzPewAlRyOgxV2dU0Cuz2O8zzzx9VYtbJOaBgXFeLc4IV/Eg50n4LowmehOOR61S8ZMpc2K5Sa7g6A4jfkUw==",
"license": "MIT",
"dependencies": {
"follow-redirects": "^1.15.6",
"form-data": "^4.0.4",
"proxy-from-env": "^1.1.0"
}
},
"node_modules/before-after-hook": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/before-after-hook/-/before-after-hook-3.0.2.tgz",
"integrity": "sha512-Nik3Sc0ncrMK4UUdXQmAnRtzmNQTAAXmXIopizwZ1W1t8QmfJj+zL4OA2I7XPTPW5z5TDqv4hRo/JzouDJnX3A==",
"license": "Apache-2.0"
},
"node_modules/call-bind-apply-helpers": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz",
"integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/combined-stream": {
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz",
"integrity": "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==",
"license": "MIT",
"dependencies": {
"delayed-stream": "~1.0.0"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/delayed-stream": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
"integrity": "sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==",
"license": "MIT",
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
"integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"es-errors": "^1.3.0",
"gopd": "^1.2.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-define-property": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-errors": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
"integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-object-atoms": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz",
"integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-set-tostringtag": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz",
"integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.6",
"has-tostringtag": "^1.0.2",
"hasown": "^2.0.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/fast-content-type-parse": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/fast-content-type-parse/-/fast-content-type-parse-2.0.1.tgz",
"integrity": "sha512-nGqtvLrj5w0naR6tDPfB4cUmYCqouzyQiz6C5y/LtcDllJdrcc6WaWW6iXyIIOErTa/XRybj28aasdn4LkVk6Q==",
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/fastify"
},
{
"type": "opencollective",
"url": "https://opencollective.com/fastify"
}
],
"license": "MIT"
},
"node_modules/follow-redirects": {
"version": "1.15.11",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.11.tgz",
"integrity": "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==",
"funding": [
{
"type": "individual",
"url": "https://github.com/sponsors/RubenVerborgh"
}
],
"license": "MIT",
"engines": {
"node": ">=4.0"
},
"peerDependenciesMeta": {
"debug": {
"optional": true
}
}
},
"node_modules/form-data": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
"integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
"license": "MIT",
"dependencies": {
"asynckit": "^0.4.0",
"combined-stream": "^1.0.8",
"es-set-tostringtag": "^2.1.0",
"hasown": "^2.0.2",
"mime-types": "^2.1.12"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/function-bind": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz",
"integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==",
"license": "MIT",
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-intrinsic": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz",
"integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.2",
"es-define-property": "^1.0.1",
"es-errors": "^1.3.0",
"es-object-atoms": "^1.1.1",
"function-bind": "^1.1.2",
"get-proto": "^1.0.1",
"gopd": "^1.2.0",
"has-symbols": "^1.1.0",
"hasown": "^2.0.2",
"math-intrinsics": "^1.1.0"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz",
"integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==",
"license": "MIT",
"dependencies": {
"dunder-proto": "^1.0.1",
"es-object-atoms": "^1.0.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/gopd": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
"integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-symbols": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
"integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-tostringtag": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz",
"integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==",
"license": "MIT",
"dependencies": {
"has-symbols": "^1.0.3"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/hasown": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
"license": "MIT",
"dependencies": {
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/math-intrinsics": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
"integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/mime-db": {
"version": "1.52.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
"integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/mime-types": {
"version": "2.1.35",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz",
"integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==",
"license": "MIT",
"dependencies": {
"mime-db": "1.52.0"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/proxy-from-env": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz",
"integrity": "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==",
"license": "MIT"
},
"node_modules/universal-user-agent": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/universal-user-agent/-/universal-user-agent-7.0.3.tgz",
"integrity": "sha512-TmnEAEAsBJVZM/AADELsK76llnwcf9vMKuPz8JflO1frO8Lchitr0fNaN9d+Ap0BjKtqWqd/J17qeDnXh8CL2A==",
"license": "ISC"
}
}
}

10 .github/scripts/package.json vendored Normal file

@@ -0,0 +1,10 @@
{
  "name": "greptimedb-github-scripts",
  "version": "1.0.0",
  "type": "module",
  "description": "GitHub automation scripts for GreptimeDB",
  "dependencies": {
    "@octokit/rest": "^21.0.0",
    "axios": "^1.7.0"
  }
}

152 .github/scripts/pr-review-reminder.js vendored Normal file

@@ -0,0 +1,152 @@
// Daily PR Review Reminder Script
// Fetches open PRs from GreptimeDB repository and sends Slack notifications
// to PR owners and assigned reviewers to keep review process moving.

(async () => {
  const { Octokit } = await import("@octokit/rest");
  const { default: axios } = await import('axios');

  // Configuration
  const GITHUB_TOKEN = process.env.GITHUB_TOKEN;
  const SLACK_WEBHOOK_URL = process.env.SLACK_PR_REVIEW_WEBHOOK_URL;
  const REPO_OWNER = "GreptimeTeam";
  const REPO_NAME = "greptimedb";
  const GITHUB_TO_SLACK = JSON.parse(process.env.GITHUBID_SLACKID_MAPPING || '{}');

  // Debug: Print environment variable status
  console.log("=== Environment Variables Debug ===");
  console.log(`GITHUB_TOKEN: ${GITHUB_TOKEN ? 'Set ✓' : 'NOT SET ✗'}`);
  console.log(`SLACK_PR_REVIEW_WEBHOOK_URL: ${SLACK_WEBHOOK_URL ? 'Set ✓' : 'NOT SET ✗'}`);
  console.log(`GITHUBID_SLACKID_MAPPING: ${process.env.GITHUBID_SLACKID_MAPPING ? `Set ✓ (${Object.keys(GITHUB_TO_SLACK).length} mappings)` : 'NOT SET ✗'}`);
  console.log("===================================\n");

  const octokit = new Octokit({
    auth: GITHUB_TOKEN
  });

  // Fetch all open PRs from the repository
  async function fetchOpenPRs() {
    try {
      const prs = await octokit.pulls.list({
        owner: REPO_OWNER,
        repo: REPO_NAME,
        state: "open",
        per_page: 100,
        sort: "created",
        direction: "asc"
      });
      return prs.data.filter((pr) => !pr.draft);
    } catch (error) {
      console.error("Error fetching PRs:", error);
      return [];
    }
  }

  // Convert GitHub username to Slack mention or fallback to GitHub username
  function toSlackMention(githubUser) {
    const slackUserId = GITHUB_TO_SLACK[githubUser];
    return slackUserId ? `<@${slackUserId}>` : `@${githubUser}`;
  }

  // Calculate days since PR was opened
  function getDaysOpen(createdAt) {
    const created = new Date(createdAt);
    const now = new Date();
    const diffMs = now - created;
    const days = Math.floor(diffMs / (1000 * 60 * 60 * 24));
    return days;
  }

  // Build Slack notification message from PR list
  function buildSlackMessage(prs) {
    if (prs.length === 0) {
      return "*🎉 Great job! No pending PRs for review.*";
    }

    // Separate PRs by age threshold (14 days)
    const criticalPRs = [];
    const recentPRs = [];
    prs.forEach(pr => {
      const daysOpen = getDaysOpen(pr.created_at);
      if (daysOpen >= 14) {
        criticalPRs.push(pr);
      } else {
        recentPRs.push(pr);
      }
    });

    const lines = [
      `*🔍 Daily PR Review Reminder 🔍*`,
      `Found *${criticalPRs.length}* critical PR(s) (14+ days old)\n`
    ];

    // Show critical PRs (14+ days) in detail
    if (criticalPRs.length > 0) {
      criticalPRs.forEach((pr, index) => {
        const owner = toSlackMention(pr.user.login);
        const reviewers = pr.requested_reviewers || [];
        const reviewerMentions = reviewers.map(r => toSlackMention(r.login)).join(", ");
        const daysOpen = getDaysOpen(pr.created_at);
        const prInfo = `${index + 1}. <${pr.html_url}|#${pr.number}: ${pr.title}>`;
        const ageInfo = ` 🔴 Opened *${daysOpen}* day(s) ago`;
        const ownerInfo = ` 👤 Owner: ${owner}`;
        const reviewerInfo = reviewers.length > 0
          ? ` 👁️ Reviewers: ${reviewerMentions}`
          : ` 👁️ Reviewers: _Not assigned yet_`;
        lines.push(prInfo);
        lines.push(ageInfo);
        lines.push(ownerInfo);
        lines.push(reviewerInfo);
        lines.push(""); // Empty line between PRs
      });
    }

    lines.push("_Let's keep the code review process moving! 🚀_");
    return lines.join("\n");
  }

  // Send notification to Slack webhook
  async function sendSlackNotification(message) {
    if (!SLACK_WEBHOOK_URL) {
      console.log("⚠️ SLACK_PR_REVIEW_WEBHOOK_URL not configured. Message preview:");
      console.log("=".repeat(60));
      console.log(message);
      console.log("=".repeat(60));
      return;
    }
    try {
      const response = await axios.post(SLACK_WEBHOOK_URL, {
        text: message
      });
      if (response.status !== 200) {
        throw new Error(`Slack API returned status ${response.status}`);
      }
      console.log("Slack notification sent successfully.");
    } catch (error) {
      console.error("Error sending Slack notification:", error);
      throw error;
    }
  }

  // Main execution flow
  async function run() {
    console.log(`Fetching open PRs from ${REPO_OWNER}/${REPO_NAME}...`);
    const prs = await fetchOpenPRs();
    console.log(`Found ${prs.length} open PR(s).`);
    const message = buildSlackMessage(prs);
    console.log("Sending Slack notification...");
    await sendSlackNotification(message);
  }

  run().catch(error => {
    console.error("Script execution failed:", error);
    process.exit(1);
  });
})();


@@ -613,6 +613,9 @@ jobs:
- name: "MySQL Kvbackend"
opts: "--setup-mysql"
kafka: false
- name: "Flat format"
opts: "--enable-flat-format"
kafka: false
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
@@ -632,7 +635,7 @@ jobs:
- name: Unzip binaries
run: tar -xvf ./bins.tar.gz
- name: Run sqlness
run: RUST_BACKTRACE=1 ./bins/sqlness-runner ${{ matrix.mode.opts }} -c ./tests/cases --bins-dir ./bins --preserve-state
run: RUST_BACKTRACE=1 ./bins/sqlness-runner bare ${{ matrix.mode.opts }} -c ./tests/cases --bins-dir ./bins --preserve-state
- name: Upload sqlness logs
if: failure()
uses: actions/upload-artifact@v4
@@ -808,7 +811,7 @@ jobs:
- name: Setup external services
working-directory: tests-integration/fixtures
run: ../../.github/scripts/pull-test-deps-images.sh && docker compose up -d --wait
- name: Run nextest cases
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F dashboard -F pg_kvbackend -F mysql_kvbackend
env:


@@ -92,5 +92,6 @@ jobs:
mode:
- name: "Basic"
- name: "Remote WAL"
- name: "Flat format"
steps:
- run: 'echo "No action required"'


@@ -0,0 +1,36 @@
name: PR Review Reminder

on:
  schedule:
    # Run at 9:00 AM UTC+8 (01:00 AM UTC) on Monday, Wednesday, Friday
    - cron: '0 1 * * 1,3,5'
  workflow_dispatch:

jobs:
  pr-review-reminder:
    name: Send PR Review Reminders
    runs-on: ubuntu-latest
    permissions:
      contents: read
      pull-requests: read
    if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '20'
      - name: Install dependencies
        working-directory: .github/scripts
        run: npm ci
      - name: Run PR review reminder
        working-directory: .github/scripts
        run: node pr-review-reminder.js
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          SLACK_PR_REVIEW_WEBHOOK_URL: ${{ vars.SLACK_PR_REVIEW_WEBHOOK_URL }}
          GITHUBID_SLACKID_MAPPING: ${{ vars.GITHUBID_SLACKID_MAPPING }}


@@ -1,7 +1,7 @@
name: "Semantic Pull Request"
on:
pull_request:
pull_request_target:
types:
- opened
- reopened
@@ -12,9 +12,9 @@ concurrency:
cancel-in-progress: true
permissions:
issues: write
contents: write
contents: read
pull-requests: write
issues: write
jobs:
check:

1331 Cargo.lock generated

File diff suppressed because it is too large


@@ -61,6 +61,7 @@ members = [
"src/promql",
"src/puffin",
"src/query",
"src/standalone",
"src/servers",
"src/session",
"src/sql",
@@ -73,8 +74,8 @@ members = [
resolver = "2"
[workspace.package]
version = "0.17.0"
edition = "2021"
version = "0.18.0"
edition = "2024"
license = "Apache-2.0"
[workspace.lints]
@@ -98,12 +99,12 @@ rust.unexpected_cfgs = { level = "warn", check-cfg = ['cfg(tokio_unstable)'] }
# See for more details: https://github.com/rust-lang/cargo/issues/11329
ahash = { version = "0.8", features = ["compile-time-rng"] }
aquamarine = "0.6"
arrow = { version = "56.0", features = ["prettyprint"] }
arrow-array = { version = "56.0", default-features = false, features = ["chrono-tz"] }
arrow-buffer = "56.0"
arrow-flight = "56.0"
arrow-ipc = { version = "56.0", default-features = false, features = ["lz4", "zstd"] }
arrow-schema = { version = "56.0", features = ["serde"] }
arrow = { version = "56.2", features = ["prettyprint"] }
arrow-array = { version = "56.2", default-features = false, features = ["chrono-tz"] }
arrow-buffer = "56.2"
arrow-flight = "56.2"
arrow-ipc = { version = "56.2", default-features = false, features = ["lz4", "zstd"] }
arrow-schema = { version = "56.2", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
# Remember to update axum-extra, axum-macros when updating axum
@@ -117,22 +118,24 @@ bitflags = "2.4.1"
bytemuck = "1.12"
bytes = { version = "1.7", features = ["serde"] }
chrono = { version = "0.4", features = ["serde"] }
chrono-tz = "0.10.1"
chrono-tz = { version = "0.10.1", features = ["case-insensitive"] }
clap = { version = "4.4", features = ["derive"] }
config = "0.13.0"
const_format = "0.2"
crossbeam-utils = "0.8"
dashmap = "6.1"
datafusion = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-functions = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-functions-aggregate-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-optimizer = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-orc = { git = "https://github.com/GreptimeTeam/datafusion-orc", rev = "a0a5f902158f153119316eaeec868cff3fc8a99d" }
datafusion-physical-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-physical-plan = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-sql = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion-substrait = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "7d5214512740b4dfb742b6b3d91ed9affcc2c9d0" }
datafusion = "50"
datafusion-common = "50"
datafusion-expr = "50"
datafusion-functions = "50"
datafusion-functions-aggregate-common = "50"
datafusion-optimizer = "50"
datafusion-orc = "0.5"
datafusion-pg-catalog = "0.12.1"
datafusion-physical-expr = "50"
datafusion-physical-plan = "50"
datafusion-sql = "50"
datafusion-substrait = "50"
deadpool = "0.12"
deadpool-postgres = "0.14"
derive_builder = "0.20"
@@ -145,7 +148,7 @@ etcd-client = { git = "https://github.com/GreptimeTeam/etcd-client", rev = "f62d
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "66eb089afa6baaa3ddfafabd0a4abbe317d012c3" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "14b9dc40bdc8288742b0cefc7bb024303b7429ef" }
hex = "0.4"
http = "1"
humantime = "2.1"
@@ -174,8 +177,11 @@ opentelemetry-proto = { version = "0.30", features = [
"logs",
] }
ordered-float = { version = "4.3", features = ["serde"] }
otel-arrow-rust = { git = "https://github.com/GreptimeTeam/otel-arrow", rev = "2d64b7c0fa95642028a8205b36fe9ea0b023ec59", features = [
"server",
] }
parking_lot = "0.12"
parquet = { version = "56.0", default-features = false, features = ["arrow", "async", "object_store"] }
parquet = { version = "56.2", default-features = false, features = ["arrow", "async", "object_store"] }
paste = "1.0"
pin-project = "1.0"
pretty_assertions = "1.4.0"
@@ -186,7 +192,7 @@ prost-types = "0.13"
raft-engine = { version = "0.4.1", default-features = false }
rand = "0.9"
ratelimit = "0.10"
regex = "1.8"
regex = "1.12"
regex-automata = "0.4"
reqwest = { version = "0.12", default-features = false, features = [
"json",
@@ -202,6 +208,7 @@ rstest_reuse = "0.7"
rust_decimal = "1.33"
rustc-hash = "2.0"
# It is worth noting that we should try to avoid using aws-lc-rs until it can be compiled on various platforms.
hostname = "0.4.0"
rustls = { version = "0.23.25", default-features = false }
sea-query = "0.32"
serde = { version = "1.0", features = ["derive"] }
@@ -211,16 +218,8 @@ simd-json = "0.15"
similar-asserts = "1.6.0"
smallvec = { version = "1", features = ["serde"] }
snafu = "0.8"
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "39e4fc94c3c741981f77e9d63b5ce8c02e0a27ea", features = [
"visitor",
"serde",
] } # branch = "v0.55.x"
sqlx = { version = "0.8", features = [
"runtime-tokio-rustls",
"mysql",
"postgres",
"chrono",
] }
sqlparser = { version = "0.58.0", default-features = false, features = ["std", "visitor", "serde"] }
sqlx = { version = "0.8", default-features = false, features = ["any", "macros", "json", "runtime-tokio-rustls"] }
strum = { version = "0.27", features = ["derive"] }
sysinfo = "0.33"
tempfile = "3"
@@ -275,6 +274,7 @@ common-recordbatch = { path = "src/common/recordbatch" }
common-runtime = { path = "src/common/runtime" }
common-session = { path = "src/common/session" }
common-sql = { path = "src/common/sql" }
common-stat = { path = "src/common/stat" }
common-telemetry = { path = "src/common/telemetry" }
common-test-util = { path = "src/common/test-util" }
common-time = { path = "src/common/time" }
@@ -296,9 +296,6 @@ mito-codec = { path = "src/mito-codec" }
mito2 = { path = "src/mito2" }
object-store = { path = "src/object-store" }
operator = { path = "src/operator" }
otel-arrow-rust = { git = "https://github.com/GreptimeTeam/otel-arrow", rev = "2d64b7c0fa95642028a8205b36fe9ea0b023ec59", features = [
"server",
] }
partition = { path = "src/partition" }
pipeline = { path = "src/pipeline" }
plugins = { path = "src/plugins" }
@@ -308,7 +305,7 @@ query = { path = "src/query" }
servers = { path = "src/servers" }
session = { path = "src/session" }
sql = { path = "src/sql" }
stat = { path = "src/common/stat" }
standalone = { path = "src/standalone" }
store-api = { path = "src/store-api" }
substrait = { path = "src/common/substrait" }
table = { path = "src/table" }
@@ -317,6 +314,21 @@ table = { path = "src/table" }
git = "https://github.com/GreptimeTeam/greptime-meter.git"
rev = "5618e779cf2bb4755b499c630fba4c35e91898cb"
[patch.crates-io]
datafusion = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-functions = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-functions-aggregate-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-optimizer = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-physical-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-physical-expr-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-physical-plan = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-datasource = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-sql = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-substrait = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "4b519a5caa95472cc3988f5556813a583dd35af1" } # branch = "v0.58.x"
[profile.release]
debug = 1


@@ -8,7 +8,7 @@ CARGO_BUILD_OPTS := --locked
IMAGE_REGISTRY ?= docker.io
IMAGE_NAMESPACE ?= greptime
IMAGE_TAG ?= latest
DEV_BUILDER_IMAGE_TAG ?= 2025-05-19-f55023f3-20250829091211
DEV_BUILDER_IMAGE_TAG ?= 2025-10-01-8fe17d43-20251011080129
BUILDX_MULTI_PLATFORM_BUILD ?= false
BUILDX_BUILDER_NAME ?= gtbuilder
BASE_IMAGE ?= ubuntu
@@ -169,7 +169,7 @@ nextest: ## Install nextest tools.
.PHONY: sqlness-test
sqlness-test: ## Run sqlness test.
cargo sqlness ${SQLNESS_OPTS}
cargo sqlness bare ${SQLNESS_OPTS}
RUNS ?= 1
FUZZ_TARGET ?= fuzz_alter_table


@@ -12,7 +12,6 @@
<div align="center">
<h3 align="center">
<a href="https://greptime.com/product/cloud">GreptimeCloud</a> |
<a href="https://docs.greptime.com/">User Guide</a> |
<a href="https://greptimedb.rs/">API Docs</a> |
<a href="https://github.com/GreptimeTeam/greptimedb/issues/5446">Roadmap 2025</a>
@@ -105,16 +104,6 @@ Read [more benchmark reports](https://docs.greptime.com/user-guide/concepts/feat
## Try GreptimeDB
### 1. [Live Demo](https://greptime.com/playground)
Experience GreptimeDB directly in your browser.
### 2. [GreptimeCloud](https://console.greptime.cloud/)
Start instantly with a free cluster.
### 3. Docker (Local Quickstart)
```shell
docker pull greptime/greptimedb
```


@@ -13,6 +13,7 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `default_timezone` | String | Unset | The default timezone of the server. |
| `default_column_prefix` | String | Unset | The default column prefix for auto-created time index and value columns. |
| `init_regions_in_background` | Bool | `false` | Initialize all regions in the background during the startup.<br/>By default, it provides services after all regions have been initialized. |
| `init_regions_parallelism` | Integer | `16` | Parallelism of initializing regions. |
| `max_concurrent_queries` | Integer | `0` | The maximum number of concurrent queries allowed to be executed. Zero means unlimited. |
@@ -25,12 +26,15 @@
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
| `http.timeout` | String | `0s` | HTTP request timeout. Set to 0 to disable timeout. |
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.<br/>Set to 0 to disable limit. |
| `http.max_total_body_memory` | String | Unset | Maximum total memory for all concurrent HTTP request bodies.<br/>Set to 0 to disable the limit. Default: "0" (unlimited) |
| `http.enable_cors` | Bool | `true` | HTTP CORS support, turned on by default.<br/>This allows browsers to access HTTP APIs without CORS restrictions. |
| `http.cors_allowed_origins` | Array | Unset | Customize allowed origins for HTTP CORS. |
| `http.prom_validation_mode` | String | `strict` | Whether to enable validation for Prometheus remote write requests.<br/>Available options:<br/>- strict: deny invalid UTF-8 strings (default).<br/>- lossy: allow invalid UTF-8 strings, replace invalid characters with REPLACEMENT_CHARACTER(U+FFFD).<br/>- unchecked: do not validate strings. |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.max_total_message_memory` | String | Unset | Maximum total memory for all concurrent gRPC request messages.<br/>Set to 0 to disable the limit. Default: "0" (unlimited) |
| `grpc.max_connection_age` | String | Unset | The maximum connection age for gRPC connection.<br/>The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.<br/>Refer to https://grpc.io/docs/guides/keepalive/ for more details. |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
@@ -103,6 +107,7 @@
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `./greptimedb_data` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
| `storage.enable_read_cache` | Bool | `true` | Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage. |
| `storage.cache_path` | String | Unset | Read cache configuration for object storage such as 'S3'. It's configured by default when using object storage, and configuring it is recommended for better performance.<br/>A local file directory, defaults to `{data_home}`. An empty string means disabling. |
| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger. |
| `storage.bucket` | String | Unset | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
@@ -151,6 +156,7 @@
| `region_engine.mito.max_concurrent_scan_files` | Integer | `384` | Maximum number of SST files to scan concurrently. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
| `region_engine.mito.default_experimental_flat_format` | Bool | `false` | Whether to enable experimental flat format as the default format. |
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
| `region_engine.mito.index.aux_path` | String | `""` | Auxiliary directory path for the index in filesystem, used to store intermediate files for<br/>creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.<br/>The default name for this directory is `index_intermediate` for backward compatibility.<br/><br/>This path contains two subdirectories:<br/>- `__intm`: for storing intermediate files used during creating index.<br/>- `staging`: for storing staging files used during searching index. |
| `region_engine.mito.index.staging_size` | String | `2GB` | The max capacity of the staging directory. |
@@ -221,6 +227,7 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `default_timezone` | String | Unset | The default timezone of the server. |
| `default_column_prefix` | String | Unset | The default column prefix for auto-created time index and value columns. |
| `max_in_flight_write_bytes` | String | Unset | The maximum in-flight write bytes. |
| `runtime` | -- | -- | The runtime options. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
@@ -232,6 +239,7 @@
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
| `http.timeout` | String | `0s` | HTTP request timeout. Set to 0 to disable timeout. |
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.<br/>Set to 0 to disable limit. |
| `http.max_total_body_memory` | String | Unset | Maximum total memory for all concurrent HTTP request bodies.<br/>Set to 0 to disable the limit. Default: "0" (unlimited) |
| `http.enable_cors` | Bool | `true` | HTTP CORS support; it's turned on by default.<br/>This allows browsers to access HTTP APIs without CORS restrictions. |
| `http.cors_allowed_origins` | Array | Unset | Customize allowed origins for HTTP CORS. |
| `http.prom_validation_mode` | String | `strict` | Whether to enable validation for Prometheus remote write requests.<br/>Available options:<br/>- strict: deny invalid UTF-8 strings (default).<br/>- lossy: allow invalid UTF-8 strings, replace invalid characters with REPLACEMENT_CHARACTER (U+FFFD).<br/>- unchecked: do not validate strings. |
@@ -239,7 +247,9 @@
| `grpc.bind_addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.server_addr` | String | `127.0.0.1:4001` | The address advertised to the metasrv, and used for connections from outside the host.<br/>If left empty or unset, the server will automatically use the IP address of the first network interface<br/>on the host, with the same port number as the one specified in `grpc.bind_addr`. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.max_total_message_memory` | String | Unset | Maximum total memory for all concurrent gRPC request messages.<br/>Set to 0 to disable the limit. Default: "0" (unlimited) |
| `grpc.flight_compression` | String | `arrow_ipc` | Compression mode for frontend side Arrow IPC service. Available options:<br/>- `none`: disable all compression<br/>- `transport`: only enable gRPC transport compression (zstd)<br/>- `arrow_ipc`: only enable Arrow IPC compression (lz4)<br/>- `all`: enable all compression.<br/>Defaults to `arrow_ipc`. |
| `grpc.max_connection_age` | String | Unset | The maximum connection age for gRPC connection.<br/>The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.<br/>Refer to https://grpc.io/docs/guides/keepalive/ for more details. |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
@@ -377,10 +387,9 @@
| `procedure.max_metadata_value_size` | String | `1500KiB` | Automatically split large values.<br/>GreptimeDB procedure uses etcd as the default metadata storage backend.<br/>etcd's maximum request size is 1.5 MiB.<br/>1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key).<br/>Comment out `max_metadata_value_size` to disable splitting large values (no limit). |
| `procedure.max_running_procedures` | Integer | `128` | Max running procedures.<br/>The maximum number of procedures that can be running at the same time.<br/>If the number of running procedures exceeds this limit, the procedure will be rejected. |
| `failure_detector` | -- | -- | -- |
| `failure_detector.threshold` | Float | `8.0` | The threshold value used by the failure detector to determine failure conditions. |
| `failure_detector.min_std_deviation` | String | `100ms` | The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations. |
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable. |
| `failure_detector.first_heartbeat_estimate` | String | `1000ms` | The initial estimate of the heartbeat interval used by the failure detector. |
| `failure_detector.threshold` | Float | `8.0` | Maximum acceptable φ before the peer is treated as failed.<br/>Lower values react faster but yield more false positives. |
| `failure_detector.min_std_deviation` | String | `100ms` | The minimum standard deviation of the heartbeat intervals,<br/>so tiny variations don't make φ explode. Prevents hypersensitivity when heartbeat intervals barely vary. |
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | The acceptable pause duration between heartbeats.<br/>An extra grace period added to the learned mean interval before φ rises, absorbing temporary network hiccups or GC pauses. |
| `datanode` | -- | -- | Datanode options. |
| `datanode.client` | -- | -- | Datanode client options. |
| `datanode.client.timeout` | String | `10s` | Operation timeout. |
@@ -402,8 +411,8 @@
| `event_recorder` | -- | -- | Configuration options for the event recorder. |
| `event_recorder.ttl` | String | `90d` | TTL for the events table that will be used to store the events. Default is `90d`. |
| `stats_persistence` | -- | -- | Configuration options for the stats persistence. |
| `stats_persistence.ttl` | String | `30d` | TTL for the stats table that will be used to store the stats. Default is `30d`.<br/>Set to `0s` to disable stats persistence. |
| `stats_persistence.interval` | String | `60s` | The interval to persist the stats. Default is `60s`.<br/>The minimum value is `60s`, if the value is less than `60s`, it will be overridden to `60s`. |
| `stats_persistence.ttl` | String | `0s` | TTL for the stats table that will be used to store the stats.<br/>Set to `0s` to disable stats persistence.<br/>Default is `0s`.<br/>If you want to enable stats persistence, set the TTL to a value greater than 0.<br/>It is recommended to set a small value, e.g., `3h`. |
| `stats_persistence.interval` | String | `10m` | The interval to persist the stats. Default is `10m`.<br/>The minimum value is `10m`, if the value is less than `10m`, it will be overridden to `10m`. |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
@@ -433,6 +442,7 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `node_id` | Integer | Unset | The datanode identifier; it should be unique in the cluster. |
| `default_column_prefix` | String | Unset | The default column prefix for auto-created time index and value columns. |
| `require_lease_before_startup` | Bool | `false` | Start services after regions have obtained leases.<br/>It will block the datanode start if it can't receive leases in the heartbeat from metasrv. |
| `init_regions_in_background` | Bool | `false` | Initialize all regions in the background during the startup.<br/>By default, it provides services after all regions have been initialized. |
| `init_regions_parallelism` | Integer | `16` | Parallelism of initializing regions. |
@@ -471,7 +481,7 @@
| `meta_client.metadata_cache_ttl` | String | `10m` | TTL of the metadata cache. |
| `meta_client.metadata_cache_tti` | String | `5m` | -- |
| `wal` | -- | -- | The WAL options. |
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the wal is stored in the local file system by raft-engine.<br/>- `kafka`: it's remote wal that data is stored in Kafka. |
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the wal is stored in the local file system by raft-engine.<br/>- `kafka`: it's remote wal that data is stored in Kafka.<br/>- `noop`: it's a no-op WAL provider that does not store any WAL data.<br/>**Notes: any unflushed data will be lost when the datanode is shut down.** |
| `wal.dir` | String | Unset | The directory to store the WAL files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.file_size` | String | `128MB` | The size of the WAL segment file.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_threshold` | String | `1GB` | The threshold of the WAL size to trigger a purge.<br/>**It's only used when the provider is `raft_engine`**. |
@@ -494,6 +504,7 @@
| `storage.data_home` | String | `./greptimedb_data` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
| `storage.cache_path` | String | Unset | Read cache configuration for object storage such as S3; it is configured by default when using object storage, and configuring it is recommended for better performance.<br/>A local file directory, defaults to `{data_home}`. An empty string means disabling. |
| `storage.enable_read_cache` | Bool | `true` | Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage. |
| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger. |
| `storage.bucket` | String | Unset | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
| `storage.root` | String | Unset | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.<br/>**It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
@@ -543,6 +554,7 @@
| `region_engine.mito.max_concurrent_scan_files` | Integer | `384` | Maximum number of SST files to scan concurrently. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
| `region_engine.mito.default_experimental_flat_format` | Bool | `false` | Whether to enable experimental flat format as the default format. |
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
| `region_engine.mito.index.aux_path` | String | `""` | Auxiliary directory path for the index in filesystem, used to store intermediate files for<br/>creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.<br/>The default name for this directory is `index_intermediate` for backward compatibility.<br/><br/>This path contains two subdirectories:<br/>- `__intm`: for storing intermediate files used during creating index.<br/>- `staging`: for storing staging files used during searching index. |
| `region_engine.mito.index.staging_size` | String | `2GB` | The max capacity of the staging directory. |


@@ -2,6 +2,10 @@
## @toml2docs:none-default
node_id = 42
## The default column prefix for auto-created time index and value columns.
## @toml2docs:none-default
default_column_prefix = "greptime"
## Start services after regions have obtained leases.
## It will block the datanode start if it can't receive leases in the heartbeat from metasrv.
require_lease_before_startup = false
@@ -118,6 +122,7 @@ metadata_cache_tti = "5m"
## The provider of the WAL.
## - `raft_engine`: the wal is stored in the local file system by raft-engine.
## - `kafka`: it's remote wal that data is stored in Kafka.
## - `noop`: it's a no-op WAL provider that does not store any WAL data.<br/>**Notes: any unflushed data will be lost when the datanode is shut down.**
provider = "raft_engine"
## The directory to store the WAL files.
@@ -274,6 +279,9 @@ type = "File"
## @toml2docs:none-default
#+ cache_path = ""
## Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage.
#+ enable_read_cache = true
## The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger.
## @toml2docs:none-default
cache_capacity = "5GiB"
@@ -497,6 +505,9 @@ allow_stale_entries = false
## To align with the old behavior, the default value is 0 (no restrictions).
min_compaction_interval = "0m"
## Whether to enable experimental flat format as the default format.
default_experimental_flat_format = false
## The options for index in Mito engine.
[region_engine.mito.index]


@@ -2,6 +2,10 @@
## @toml2docs:none-default
default_timezone = "UTC"
## The default column prefix for auto-created time index and value columns.
## @toml2docs:none-default
default_column_prefix = "greptime"
## The maximum in-flight write bytes.
## @toml2docs:none-default
#+ max_in_flight_write_bytes = "500MB"
@@ -31,6 +35,10 @@ timeout = "0s"
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
## Set to 0 to disable limit.
body_limit = "64MB"
## Maximum total memory for all concurrent HTTP request bodies.
## Set to 0 to disable the limit. Default: "0" (unlimited)
## @toml2docs:none-default
#+ max_total_body_memory = "1GB"
## HTTP CORS support; it's turned on by default.
## This allows browsers to access HTTP APIs without CORS restrictions.
enable_cors = true
@@ -54,6 +62,10 @@ bind_addr = "127.0.0.1:4001"
server_addr = "127.0.0.1:4001"
## The number of server worker threads.
runtime_size = 8
## Maximum total memory for all concurrent gRPC request messages.
## Set to 0 to disable the limit. Default: "0" (unlimited)
## @toml2docs:none-default
#+ max_total_message_memory = "1GB"
## Compression mode for frontend side Arrow IPC service. Available options:
## - `none`: disable all compression
## - `transport`: only enable gRPC transport compression (zstd)
@@ -61,6 +73,11 @@ runtime_size = 8
## - `all`: enable all compression.
## Defaults to `arrow_ipc`.
flight_compression = "arrow_ipc"
## The maximum connection age for gRPC connection.
## The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.
## Refer to https://grpc.io/docs/guides/keepalive/ for more details.
## @toml2docs:none-default
#+ max_connection_age = "10m"
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]


@@ -149,20 +149,18 @@ max_metadata_value_size = "1500KiB"
max_running_procedures = 128
# Failure detectors options.
# GreptimeDB uses the Phi Accrual Failure Detector algorithm to detect datanode failures.
[failure_detector]
## The threshold value used by the failure detector to determine failure conditions.
## Maximum acceptable φ before the peer is treated as failed.
## Lower values react faster but yield more false positives.
threshold = 8.0
## The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations.
## The minimum standard deviation of the heartbeat intervals,
## so tiny variations don't make φ explode. Prevents hypersensitivity when heartbeat intervals barely vary.
min_std_deviation = "100ms"
## The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable.
## The acceptable pause duration between heartbeats.
## An extra grace period added to the learned mean interval before φ rises, absorbing temporary network hiccups or GC pauses.
acceptable_heartbeat_pause = "10000ms"
## The initial estimate of the heartbeat interval used by the failure detector.
first_heartbeat_estimate = "1000ms"
## Datanode options.
[datanode]
@@ -274,12 +272,15 @@ ttl = "90d"
## Configuration options for the stats persistence.
[stats_persistence]
## TTL for the stats table that will be used to store the stats. Default is `30d`.
## TTL for the stats table that will be used to store the stats.
## Set to `0s` to disable stats persistence.
ttl = "30d"
## The interval to persist the stats. Default is `60s`.
## The minimum value is `60s`, if the value is less than `60s`, it will be overridden to `60s`.
interval = "60s"
## Default is `0s`.
## If you want to enable stats persistence, set the TTL to a value greater than 0.
## It is recommended to set a small value, e.g., `3h`.
ttl = "0s"
## The interval to persist the stats. Default is `10m`.
## The minimum value is `10m`, if the value is less than `10m`, it will be overridden to `10m`.
interval = "10m"
## The logging options.
[logging]


@@ -2,6 +2,10 @@
## @toml2docs:none-default
default_timezone = "UTC"
## The default column prefix for auto-created time index and value columns.
## @toml2docs:none-default
default_column_prefix = "greptime"
## Initialize all regions in the background during the startup.
## By default, it provides services after all regions have been initialized.
init_regions_in_background = false
@@ -36,6 +40,10 @@ timeout = "0s"
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
## Set to 0 to disable limit.
body_limit = "64MB"
## Maximum total memory for all concurrent HTTP request bodies.
## Set to 0 to disable the limit. Default: "0" (unlimited)
## @toml2docs:none-default
#+ max_total_body_memory = "1GB"
## HTTP CORS support; it's turned on by default.
## This allows browsers to access HTTP APIs without CORS restrictions.
enable_cors = true
@@ -56,6 +64,15 @@ prom_validation_mode = "strict"
bind_addr = "127.0.0.1:4001"
## The number of server worker threads.
runtime_size = 8
## Maximum total memory for all concurrent gRPC request messages.
## Set to 0 to disable the limit. Default: "0" (unlimited)
## @toml2docs:none-default
#+ max_total_message_memory = "1GB"
## The maximum connection age for gRPC connection.
## The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.
## Refer to https://grpc.io/docs/guides/keepalive/ for more details.
## @toml2docs:none-default
#+ max_connection_age = "10m"
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]
@@ -361,6 +378,9 @@ data_home = "./greptimedb_data"
## - `Oss`: the data is stored in the Aliyun OSS.
type = "File"
## Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage.
#+ enable_read_cache = true
## Read cache configuration for object storage such as S3; it is configured by default when using object storage, and configuring it is recommended for better performance.
## A local file directory, defaults to `{data_home}`. An empty string means disabling.
## @toml2docs:none-default
@@ -576,6 +596,9 @@ allow_stale_entries = false
## To align with the old behavior, the default value is 0 (no restrictions).
min_compaction_interval = "0m"
## Whether to enable experimental flat format as the default format.
default_experimental_flat_format = false
## The options for index in Mito engine.
[region_engine.mito.index]


@@ -30,22 +30,7 @@ curl https://raw.githubusercontent.com/brendangregg/FlameGraph/master/flamegraph
## Profiling
### Configuration
You can control heap profiling activation through configuration. Add the following to your configuration file:
```toml
[memory]
# Whether to enable heap profiling activation during startup.
# When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
# is set to "prof:true,prof_active:false". The official image adds this env variable.
# Default is true.
enable_heap_profiling = true
```
By default, if you set `MALLOC_CONF=prof:true,prof_active:false`, the database will enable profiling during startup. You can disable this behavior by setting `enable_heap_profiling = false` in the configuration.
### Starting with environment variables
### Enable memory profiling for greptimedb binary
Start GreptimeDB instance with environment variables:
@@ -57,6 +42,22 @@ MALLOC_CONF=prof:true ./target/debug/greptime standalone start
_RJEM_MALLOC_CONF=prof:true ./target/debug/greptime standalone start
```
### Memory profiling for greptimedb docker image
We have memory profiling enabled and activated by default in our official docker
image.
This behavior is controlled by the `enable_heap_profiling` configuration option:
```toml
[memory]
# Whether to enable heap profiling activation during startup.
# Default is true.
enable_heap_profiling = true
```
To disable memory profiling, set `enable_heap_profiling` to `false`.
### Memory profiling control
You can control heap profiling activation using the new HTTP APIs:
@@ -70,6 +71,15 @@ curl -X POST localhost:4000/debug/prof/mem/activate
# Deactivate heap profiling
curl -X POST localhost:4000/debug/prof/mem/deactivate
# Activate the gdump feature, which dumps memory profiling data every time virtual memory usage exceeds the previous maximum value.
curl -X POST localhost:4000/debug/prof/mem/gdump -d 'activate=true'
# Deactivate gdump.
curl -X POST localhost:4000/debug/prof/mem/gdump -d 'activate=false'
# Retrieve current gdump status.
curl -X GET localhost:4000/debug/prof/mem/gdump
```
### Dump memory profiling data


@@ -0,0 +1,463 @@
---
Feature Name: "laminar-flow"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/TBD
Date: 2025-09-08
Author: "discord9 <discord9@163.com>"
---
# laminar Flow
## Summary
This RFC proposes a redesign of the flow architecture where flownode becomes a lightweight in-memory state management node with an embedded frontend for direct computation. This approach optimizes resource utilization and improves scalability by eliminating network hops while maintaining clear separation between coordination and computation tasks.
## Motivation
The current flow architecture has several limitations:
1. **Resource Inefficiency**: Flownodes perform both state management and computation, leading to resource duplication and inefficient utilization.
2. **Scalability Constraints**: Computation resources are tied to flownode instances, limiting horizontal scaling capabilities.
3. **State Management Complexity**: Mixing computation with state management makes the system harder to maintain and debug.
4. **Network Overhead**: Additional network hops between flownode and separate frontend nodes add latency.
The laminar Flow architecture addresses these issues by:
- Consolidating computation within flownode through embedded frontend
- Eliminating network overhead by removing separate frontend node communication
- Simplifying state management by focusing flownode on its core responsibility
- Improving system scalability and maintainability
## Details
### Architecture Overview
The laminar Flow architecture transforms flownode into a lightweight coordinator that maintains flow state with an embedded frontend for computation. The key components involved are:
1. **Flownode**: Maintains in-memory state, coordinates computation, and includes an embedded frontend for query execution
2. **Embedded Frontend**: Executes **incremental** computations within the flownode
3. **Datanode**: Stores final results and source data
```mermaid
graph TB
subgraph "laminar Flow Architecture"
subgraph Flownode["Flownode (State Manager + Embedded Frontend)"]
StateMap["Flow State Map<br/>Map<Timestamp, (Map<Key, Value>, Sequence)>"]
Coordinator["Computation Coordinator"]
subgraph EmbeddedFrontend["Embedded Frontend"]
QueryEngine["Query Engine"]
AggrState["__aggr_state Executor"]
end
end
subgraph Datanode["Datanode"]
Storage["Data Storage"]
Results["Result Tables"]
end
end
Coordinator -->|Internal Query| EmbeddedFrontend
EmbeddedFrontend -->|Incremental States| Coordinator
Flownode -->|Incremental Results| Datanode
EmbeddedFrontend -.->|Read Data| Datanode
```
### Core Components
#### 1. Flow State Management
Flownode maintains a state map for each flow:
```rust
type FlowState = Map<Timestamp, (Map<Key, Value>, Sequence)>;
```
Where:
- **Timestamp**: Time window identifier for aggregation groups
- **Key**: Aggregation group expressions (`group_exprs`)
- **Value**: Aggregation expressions results (`aggr_exprs`)
- **Sequence**: Computation progress marker for incremental updates
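For concreteness, here is a minimal Rust sketch of this state map; the concrete `Timestamp`, `Key`, `Value`, and `Sequence` definitions are illustrative assumptions, not the engine's actual types:
```rust
use std::collections::BTreeMap;

// Illustrative placeholder types; the real flow engine defines richer ones.
type Timestamp = i64;   // time window identifier (window start)
type Key = Vec<String>; // evaluated group_exprs
type Value = Vec<f64>;  // partial aggregation state for aggr_exprs
type Sequence = u64;    // computation progress marker

/// Per-flow state: one entry per active time window.
type FlowState = BTreeMap<Timestamp, (BTreeMap<Key, Value>, Sequence)>;

fn main() {
    let mut state: FlowState = BTreeMap::new();
    // The window starting at t = 1_700_000_000 holds one group's partial
    // state, computed up to sequence 42.
    let mut groups = BTreeMap::new();
    groups.insert(vec!["host-1".to_string()], vec![3.14, 2.0]);
    state.insert(1_700_000_000, (groups, 42));
    assert_eq!(state[&1_700_000_000].1, 42);
}
```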
#### 2. Incremental Computation Process
The computation process follows these steps:
1. **Trigger Evaluation**: Flownode determines when to trigger computation based on:
- Time intervals (periodic updates)
- Data volume thresholds
- Sequence progress requirements
2. **Query Execution**: Flownode executes `__aggr_state` queries using its embedded frontend with:
- Time window filters
- Sequence range constraints
3. **State Update**: Flownode receives partial state results and updates its internal state:
- Merges new values with existing aggregation state
- Updates sequence markers to track progress
- Identifies changed time windows for result computation
4. **Result Materialization**: Flownode computes final results using `__aggr_merge` operations:
- Processes only updated time windows (and time series) for efficiency
- Writes results back to datanode directly through its embedded frontend
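As a hedged illustration of the state-update step (step 3), assuming `avg` keeps its partial state as a `(sum, count)` pair that merges by pairwise addition:
```rust
use std::collections::BTreeMap;

type Key = Vec<String>;
type Sequence = u64;
/// Assumed `avg` partial state: (sum, count).
type AvgState = (f64, u64);
/// One time window's state: per-group partial states plus a progress marker.
type WindowState = (BTreeMap<Key, AvgState>, Sequence);

/// Merge a batch of partial states returned by an `__aggr_state` query into
/// the window's existing state, then advance the sequence marker so the next
/// incremental query only scans newer data.
fn merge_window(window: &mut WindowState, partial: BTreeMap<Key, AvgState>, new_seq: Sequence) {
    for (key, (sum, count)) in partial {
        let entry = window.0.entry(key).or_insert((0.0, 0));
        entry.0 += sum;
        entry.1 += count;
    }
    window.1 = window.1.max(new_seq);
}
```
The final `avg` for a group would then be materialized as `sum / count` during the `__aggr_merge` step.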
### Detailed Workflow
#### Incremental State Query
```sql
-- Example incremental state query executed by embedded frontend
SELECT
__aggr_state(avg(value)) as state,
time_window,
group_key
FROM source_table
WHERE
timestamp >= :window_start
AND timestamp < :window_end
AND __sequence >= :last_sequence
AND __sequence < :current_sequence
-- sequence range is actually written in grpc header, but shown here for clarity
GROUP BY time_window, group_key;
```
#### State Merge Process
```mermaid
sequenceDiagram
participant F as Flownode (Coordinator)
participant EF as Embedded Frontend (Lightweight)
participant DN as Datanode (Heavy Computation)
F->>F: Evaluate trigger conditions
F->>EF: Execute __aggr_state query with sequence range
EF->>DN: Send query to datanode (Heavy scan & aggregation)
DN->>DN: Scan data and compute partial aggregation state (Heavy CPU/I/O)
DN->>EF: Return aggregated state results
EF->>F: Forward state results (Lightweight merge)
F->>F: Merge with existing state
F->>F: Update sequence markers (Lightweight)
F->>EF: Compute incremental results with __aggr_merge
EF->>DN: Write incremental results to datanode
```
### Refill Implementation and State Management
#### Refill Process
Refill is implemented as a straightforward `__aggr_state` query with time and sequence constraints:
```sql
-- Refill query for flow state recovery
SELECT
__aggr_state(aggregation_functions) as state,
time_window,
group_keys
FROM source_table
WHERE
timestamp >= :refill_start_time
AND timestamp < :refill_end_time
AND __sequence >= :start_sequence
AND __sequence < :end_sequence
-- sequence range is actually written in grpc header, but shown here for clarity
GROUP BY time_window, group_keys;
```
#### State Recovery Strategy
1. **Recent Data (Stream Mode)**: For recent time windows, flownode refills state using incremental queries
2. **Historical Data (Batch Mode)**: For older time windows, flownode triggers batch computation directly, with no need to refill state
3. **Hybrid Approach**: Combines stream and batch processing based on data age and availability
#### Mirror Write Optimization
Mirror writes are simplified to only transmit timestamps to flownode:
```rust
struct MirrorWrite {
timestamps: Vec<Timestamp>,
// Removed: actual data payload
}
```
This optimization:
- Eliminates network overhead by using embedded frontend
- Enables flownode to track pending time windows efficiently
- Allows flownode to decide processing mode (stream vs batch) based on timestamp age
Another optimization could be to send just the dirty time window ranges for each flow to the flownode directly, instead of sending timestamps one by one.
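A sketch of what this could look like on the flownode side; the window size, the stream/batch cutoff, and the function names are illustrative assumptions:
```rust
use std::collections::BTreeSet;

type Timestamp = i64; // seconds since epoch

const WINDOW_SIZE: i64 = 300;     // assumed 5-minute windows
const STREAM_HORIZON: i64 = 3600; // assumed cutoff: older windows go to batch mode

#[derive(Debug, PartialEq)]
enum Mode {
    Stream, // recent window: incremental state refresh
    Batch,  // old window: recompute directly, no state refill
}

/// Record the windows touched by a timestamp-only mirror write.
fn mark_dirty(dirty: &mut BTreeSet<Timestamp>, timestamps: &[Timestamp]) {
    for ts in timestamps {
        dirty.insert(ts - ts.rem_euclid(WINDOW_SIZE)); // align to window start
    }
}

/// Decide how to process a dirty window based on its age.
fn pick_mode(window_start: Timestamp, now: Timestamp) -> Mode {
    if now - window_start <= STREAM_HORIZON {
        Mode::Stream
    } else {
        Mode::Batch
    }
}
```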
### Query Optimization Strategies
#### Sequence-Based Incremental Processing
The core optimization relies on sequence-constrained queries:
```sql
-- Optimized incremental query
SELECT __aggr_state(expr)
FROM table
WHERE time_range AND sequence_range
```
Benefits:
- **Reduced Scan Volume**: Only processes data since last computation
- **Efficient Resource Usage**: Minimizes CPU and I/O overhead
- **Predictable Performance**: Query cost scales with incremental data size
#### Time Window Partitioning
```mermaid
graph LR
subgraph "Time Windows"
W1["Window 1<br/>09:00-09:05"]
W2["Window 2<br/>09:05-09:10"]
W3["Window 3<br/>09:10-09:15"]
end
subgraph "Processing Strategy"
W1 --> Batch["Batch Mode<br/>(Old Data)"]
W2 --> Stream["Stream Mode<br/>(Recent Data)"]
W3 --> Stream2["Stream Mode<br/>(Current Data)"]
end
```
### Performance Characteristics
#### Memory Usage
- **Flownode**: O(active_time_windows × group_cardinality) for state storage
- **Embedded Frontend**: O(query_batch_size) for temporary computation
- **Overall**: Significantly reduced compared to current architecture
#### Computation Distribution
- **Direct Processing**: Queries processed directly within flownode's embedded frontend
- **Fault Tolerance**: Simplified error handling with fewer distributed components
- **Scalability**: Computation capacity scales with flownode instances
#### Network Optimization
- **Reduced Payload**: Mirror writes only contain timestamps
- **Efficient Queries**: Sequence constraints minimize data transfer
- **Result Caching**: State results cached in flownode memory
### Sequential Read Implementation for Incremental Queries
#### Sequence Management
Flow maintains two critical sequences to track incremental query progress for each region:
- **`memtable_last_seq`**: Tracks the latest sequence number read from the memtable
- **`sst_last_seq`**: Tracks the latest sequence number read from SST files
These sequences enable precise incremental data processing by defining the exact range of data to query in subsequent iterations.
#### Query Protocol
When executing incremental queries, flownode provides both sequence parameters to datanode:
```rust
struct GrpcHeader {
...
// Sequence tracking for incremental reads
memtable_last_seqs: HashMap<RegionId, SequenceNumber>,
sst_last_seqs: HashMap<RegionId, SequenceNumber>,
}
```
The datanode processes these parameters to return only the data within the specified sequence ranges, ensuring efficient incremental processing.
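To make that filtering concrete, here is a hedged sketch of the per-row predicate, assuming each row exposes its sequence number and the storage layer it came from:
```rust
type SequenceNumber = u64;

/// Which storage layer a row was read from (assumed to be known per row).
enum Layer {
    Memtable,
    Sst,
}

/// Include a row only if it is newer than the last sequence the flownode has
/// already consumed for that layer (exclusive lower bound).
fn in_incremental_range(
    layer: Layer,
    row_seq: SequenceNumber,
    memtable_last_seq: SequenceNumber,
    sst_last_seq: SequenceNumber,
) -> bool {
    match layer {
        Layer::Memtable => row_seq > memtable_last_seq,
        Layer::Sst => row_seq > sst_last_seq,
    }
}
```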
#### Sequence Invalidation and Refill Mechanism
A critical challenge occurs when data referenced by `memtable_last_seq` gets flushed from memory to disk. Since SST files only maintain a single maximum sequence number for the entire file (rather than per-record sequence tracking), precise incremental queries become impossible for the affected time ranges.
**Detection of Invalidation:**
```rust
// A sketch: an incremental read becomes infeasible once the memtable entries
// up to `memtable_last_seq` have been flushed into an SST, because SST files
// only keep a single max sequence per file. `flushed_seq` is an assumed input.
type SequenceNumber = u64;

fn needs_refill(memtable_last_seq: SequenceNumber, flushed_seq: SequenceNumber) -> bool {
    // Flush overtook the last-read memtable sequence: a precise incremental
    // query is no longer possible; refill the affected time ranges instead.
    memtable_last_seq <= flushed_seq
}
```
**Refill Process:**
1. **Identify Affected Time Range**: Query the time range corresponding to the flushed `memtable_last_seq` data
2. **Full Recomputation**: Execute a complete aggregation query for the affected time windows
3. **State Replacement**: Replace the existing flow state for these time ranges with newly computed values
4. **Sequence Update**: Update `memtable_last_seq` to the current latest sequence, while `sst_last_seq` continues normal incremental updates
```sql
-- Refill query when memtable data has been flushed
SELECT
__aggr_state(aggregation_functions) as state,
time_window,
group_keys
FROM source_table
WHERE
timestamp >= :affected_time_start
AND timestamp < :affected_time_end
-- Full scan required since sequence precision is lost in SST
GROUP BY time_window, group_keys;
```
#### Datanode Implementation Requirements
Datanode must implement enhanced query processing capabilities to support sequence-based incremental reads:
**Input Processing:**
- Accept `memtable_last_seq` and `sst_last_seq` parameters in query requests
- Filter data based on sequence ranges across both memtable and SST storage layers
**Output Enhancement:**
```rust
struct OutputMeta {
pub plan: Option<Arc<dyn ExecutionPlan>>,
pub cost: OutputCost,
pub sequence_info: HashMap<RegionId, SequenceInfo>, // New field for sequence tracking, one entry per region involved in the query
}
struct SequenceInfo {
// Sequence tracking for next iteration
max_memtable_seq: SequenceNumber, // Highest sequence from memtable in this result
max_sst_seq: SequenceNumber, // Highest sequence from SST in this result
}
```
**Sequence Tracking Logic:**
The datanode already implements `max_sst_seq` tracking in leader range reads; similar logic can be reused for `max_memtable_seq`.
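Assuming the `SequenceInfo` shape above, a minimal sketch of how the flow side could fold the returned maxima into its per-region trackers after a successful incremental query (the refill scenario described below would instead reset `memtable_last_seq`):
```rust
use std::collections::HashMap;

type RegionId = u64;
type SequenceNumber = u64;

/// Mirrors the per-region `SequenceInfo` sketched above.
struct SequenceInfo {
    max_memtable_seq: SequenceNumber,
    max_sst_seq: SequenceNumber,
}

/// Flow-side progress markers for one region.
#[derive(Default)]
struct RegionProgress {
    memtable_last_seq: SequenceNumber,
    sst_last_seq: SequenceNumber,
}

/// Normal incremental update: advance both trackers from the query result.
fn apply_sequence_info(
    progress: &mut HashMap<RegionId, RegionProgress>,
    info: &HashMap<RegionId, SequenceInfo>,
) {
    for (region, seq) in info {
        let p = progress.entry(*region).or_default();
        p.memtable_last_seq = p.memtable_last_seq.max(seq.max_memtable_seq);
        p.sst_last_seq = p.sst_last_seq.max(seq.max_sst_seq);
    }
}
```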
#### Sequence Update Strategy
**Normal Incremental Updates:**
- Update both `memtable_last_seq` and `sst_last_seq` after successful query execution
- Use returned `max_memtable_seq` and `max_sst_seq` values for next iteration
**Refill Scenario:**
- Reset `memtable_last_seq` to current maximum after refill completion
- Continue normal `sst_last_seq` updates based on successful query responses
- Maintain separate tracking to detect future flush events
#### Performance Considerations
**Sequence Range Optimization:**
- Minimize sequence range spans to reduce scan overhead
- Batch multiple small incremental updates when beneficial
- Balance between query frequency and processing efficiency
**Memory Management:**
- Monitor memtable flush frequency to predict refill requirements
- Implement adaptive query scheduling based on flush patterns
- Optimize state storage to handle frequent updates efficiently
This sequential read implementation ensures reliable incremental processing while gracefully handling the complexities of storage architecture, maintaining both correctness and performance in the face of background compaction and flush operations.
## Implementation Plan
### Phase 1: Core Infrastructure
1. **State Management**: Implement in-memory state map in flownode
2. **Query Interface**: Integrate the `__aggr_state` query interface in the embedded frontend (already done in previous query pushdown optimizer work)
3. **Basic Coordination**: Implement query dispatch and result collection
4. **Sequence Tracking**: Implement sequence-based incremental processing (can reuse a similar interface to the one used by leader range reads)
After phase 1, the system should support basic flow operations with incremental updates.
### Phase 2: Optimization Features
1. **Refill Logic**: Develop state recovery mechanisms
2. **Mirror Write Optimization**: Simplify mirror write protocol
### Phase 3: Advanced Features
1. **Load Balancing**: Implement intelligent resource allocation for partitioned flows (a flow distributed and executed across multiple flownodes)
2. **Fault Tolerance**: Add retry mechanisms and error handling
3. **Performance Tuning**: Optimize query batching and state management
## Benefits and Drawbacks
### Reduced Network Communication
- **Eliminated Hops**: Direct communication between flownode and datanode through embedded frontend
- **Reduced Latency**: No separate frontend node communication overhead
- **Simplified Network Topology**: Fewer network dependencies and failure points
### Complexity in Error Handling
- **Distributed Failures**: Need to handle failures across multiple components
- **State Consistency**: Ensuring state consistency during partial failures
- **Recovery Complexity**: More complex recovery procedures
### Datanode Resource Requirements
- **Computation Load**: Datanode handles the heavy computational workload for flow queries
- **Query Interference**: Flow queries may impact regular query performance on datanode
- **Resource Contention**: Need careful resource management and isolation on datanode
## Alternatives
### Alternative 1: Enhanced Current Architecture
Keep computation in flownode but optimize through:
- Better resource management
- Improved query optimization
- Enhanced state persistence
**Pros:**
- Simpler architecture
- Fewer network hops
- Easier debugging
**Cons:**
- Limited scalability
- Resource inefficiency
- Harder to optimize computation distribution
### Alternative 2: Embedded Computation
Embed lightweight computation engines within flownode:
**Pros:**
- Reduced network communication
- Better performance for simple queries
- Simpler deployment
**Cons:**
- Limited scalability
- Resource constraints
- Harder to leverage existing frontend optimizations
## Future Work
### Advanced Query Optimization
- **Parallel Processing**: Enable parallel execution of flow queries
- **Query Caching**: Cache frequently executed query patterns
### Enhanced State Management
- **State Compression**: Implement efficient state serialization
- **Distributed State**: Support state distribution across multiple flownodes
- **State Persistence**: Add optional state persistence for durability
### Monitoring and Observability
- **Performance Metrics**: Track query execution times and resource usage
- **State Visualization**: Provide tools for state inspection and debugging
- **Health Monitoring**: Monitor system health and performance characteristics
### Integration Improvements
- **Embedded Frontend Optimization**: Optimize embedded frontend query planning and execution
- **Datanode Optimization**: Optimize result writing from flownode
- **Metasrv Coordination**: Enhanced metadata management and coordination
## Conclusion
The laminar Flow architecture represents a significant improvement over the current flow system by separating state management from computation execution. This design enables better resource utilization, improved scalability, and simplified maintenance while maintaining the core functionality of continuous aggregation.
The key benefits include:
1. **Improved Scalability**: Computation can scale independently of state management
2. **Better Resource Utilization**: Eliminates network overhead and leverages embedded frontend infrastructure
3. **Simplified Architecture**: Clear separation of concerns between components
4. **Enhanced Performance**: Sequence-based incremental processing reduces computational overhead
While the architecture introduces some complexity in terms of distributed coordination and error handling, the benefits significantly outweigh the drawbacks, making it a compelling evolution of the flow system.

flake.lock (generated)

@@ -8,11 +8,11 @@
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1745735608,
"narHash": "sha256-L0jzm815XBFfF2wCFmR+M1CF+beIEFj6SxlqVKF59Ec=",
"lastModified": 1760078406,
"narHash": "sha256-JeJK0ZA845PtkCHkfo4KjeI1mYrsr2s3cxBYKhF4BoE=",
"owner": "nix-community",
"repo": "fenix",
"rev": "c39a78eba6ed2a022cc3218db90d485077101496",
"rev": "351277c60d104944122ee389cdf581c5ce2c6732",
"type": "github"
},
"original": {
@@ -41,11 +41,11 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1748162331,
"narHash": "sha256-rqc2RKYTxP3tbjA+PB3VMRQNnjesrT0pEofXQTrMsS8=",
"lastModified": 1759994382,
"narHash": "sha256-wSK+3UkalDZRVHGCRikZ//CyZUJWDJkBDTQX1+G77Ow=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "7c43f080a7f28b2774f3b3f43234ca11661bf334",
"rev": "5da4a26309e796daa7ffca72df93dbe53b8164c7",
"type": "github"
},
"original": {
@@ -65,11 +65,11 @@
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1745694049,
"narHash": "sha256-fxvRYH/tS7hGQeg9zCVh5RBcSWT+JGJet7RA8Ss+rC0=",
"lastModified": 1760014945,
"narHash": "sha256-ySdl7F9+oeWNHVrg3QL/brazqmJvYFEdpGnF3pyoDH8=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "d8887c0758bbd2d5f752d5bd405d4491e90e7ed6",
"rev": "90d2e1ce4dfe7dc49250a8b88a0f08ffdb9cb23f",
"type": "github"
},
"original": {


@@ -19,7 +19,7 @@
lib = nixpkgs.lib;
rustToolchain = fenix.packages.${system}.fromToolchainName {
name = (lib.importTOML ./rust-toolchain.toml).toolchain.channel;
sha256 = "sha256-tJJr8oqX3YD+ohhPK7jlt/7kvKBnBqJVjYtoFr520d4=";
sha256 = "sha256-GCGEXGZeJySLND0KU5TdtTrqFV76TF3UdvAHSUegSsk=";
};
in
{

File diff suppressed because it is too large.


@@ -21,14 +21,14 @@
# Resources
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$frontend"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$metasrv"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$flownode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$frontend"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$metasrv"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$flownode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
# Frontend Requests
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -87,6 +87,13 @@
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read\|write\|list\|Writer::write\|Writer::close\|Reader::read"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Opendal traffic | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_bytes_sum{instance=~"$datanode"}[$__rate_interval]))` | `timeseries` | Total traffic as in bytes by instance and operation | `prometheus` | `decbytes` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| OpenDAL errors per Instance | `sum by(instance, pod, scheme, operation, error) (rate(opendal_operation_errors_total{instance=~"$datanode", error!="NotFound"}[$__rate_interval]))` | `timeseries` | OpenDAL error counts per Instance. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]` |
# Remote WAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Triggered region flush total | `meta_triggered_region_flush_total` | `timeseries` | Triggered region flush total | `prometheus` | `none` | `{{pod}}-{{topic_name}}` |
| Triggered region checkpoint total | `meta_triggered_region_checkpoint_total` | `timeseries` | Triggered region checkpoint total | `prometheus` | `none` | `{{pod}}-{{topic_name}}` |
| Topic estimated replay size | `meta_topic_estimated_replay_size` | `timeseries` | Topic estimated max replay size | `prometheus` | `bytes` | `{{pod}}-{{topic_name}}` |
| Kafka logstore's bytes traffic | `rate(greptime_logstore_kafka_client_bytes_total[$__rate_interval])` | `timeseries` | Kafka logstore's bytes traffic | `prometheus` | `bytes` | `{{pod}}-{{logstore}}` |
# Metasrv
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -103,6 +110,8 @@
| Meta KV Ops Latency | `histogram_quantile(0.99, sum by(pod, le, op, target) (greptime_meta_kv_request_elapsed_bucket))` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `s` | `{{pod}}-{{op}} p99` |
| Rate of meta KV Ops | `rate(greptime_meta_kv_request_elapsed_count[$__rate_interval])` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `none` | `{{pod}}-{{op}} p99` |
| DDL Latency | `histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_tables_bucket))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_table))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_view))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_flow))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_drop_table))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_alter_table))` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `s` | `CreateLogicalTables-{{step}} p90` |
| Reconciliation stats | `greptime_meta_reconciliation_stats` | `timeseries` | Reconciliation stats | `prometheus` | `s` | `{{pod}}-{{table_type}}-{{type}}` |
| Reconciliation steps | `histogram_quantile(0.9, greptime_meta_reconciliation_procedure_bucket)` | `timeseries` | Elapsed time of reconciliation steps | `prometheus` | `s` | `{{procedure_name}}-{{step}}-P90` |
# Flownode
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |


@@ -187,7 +187,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-datanode"})
- expr: max(greptime_memory_limit_in_bytes{instance=~"$datanode"})
datasource:
type: prometheus
uid: ${metrics}
@@ -202,7 +202,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$datanode"})
datasource:
type: prometheus
uid: ${metrics}
@@ -217,7 +217,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-frontend"})
- expr: max(greptime_memory_limit_in_bytes{instance=~"$frontend"})
datasource:
type: prometheus
uid: ${metrics}
@@ -232,7 +232,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-cpu'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$frontend"})
datasource:
type: prometheus
uid: ${metrics}
@@ -247,7 +247,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-resident'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})
- expr: max(greptime_memory_limit_in_bytes{instance=~"$metasrv"})
datasource:
type: prometheus
uid: ${metrics}
@@ -262,7 +262,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$metasrv"})
datasource:
type: prometheus
uid: ${metrics}
@@ -277,7 +277,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-flownode"})
- expr: max(greptime_memory_limit_in_bytes{instance=~"$flownode"})
datasource:
type: prometheus
uid: ${metrics}
@@ -292,7 +292,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$flownode"})
datasource:
type: prometheus
uid: ${metrics}
@@ -802,6 +802,48 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]'
- title: Remote WAL
panels:
- title: Triggered region flush total
type: timeseries
description: Triggered region flush total
unit: none
queries:
- expr: meta_triggered_region_flush_total
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Triggered region checkpoint total
type: timeseries
description: Triggered region checkpoint total
unit: none
queries:
- expr: meta_triggered_region_checkpoint_total
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Topic estimated replay size
type: timeseries
description: Topic estimated max replay size
unit: bytes
queries:
- expr: meta_topic_estimated_replay_size
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Kafka logstore's bytes traffic
type: timeseries
description: Kafka logstore's bytes traffic
unit: bytes
queries:
- expr: rate(greptime_logstore_kafka_client_bytes_total[$__rate_interval])
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{logstore}}'
- title: Metasrv
panels:
- title: Region migration datanode
@@ -948,6 +990,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: AlterTable-{{step}} p90
- title: Reconciliation stats
type: timeseries
description: Reconciliation stats
unit: s
queries:
- expr: greptime_meta_reconciliation_stats
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{table_type}}-{{type}}'
- title: Reconciliation steps
type: timeseries
description: Elapsed time of reconciliation steps
unit: s
queries:
- expr: histogram_quantile(0.9, greptime_meta_reconciliation_procedure_bucket)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{procedure_name}}-{{step}}-P90'
- title: Flownode
panels:
- title: Flow Ingest / Output Rate

File diff suppressed because it is too large.


@@ -21,14 +21,14 @@
# Resources
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{app="greptime-flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
# Frontend Requests
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -87,6 +87,13 @@
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read\|write\|list\|Writer::write\|Writer::close\|Reader::read"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Opendal traffic | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_bytes_sum{}[$__rate_interval]))` | `timeseries` | Total traffic as in bytes by instance and operation | `prometheus` | `decbytes` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| OpenDAL errors per Instance | `sum by(instance, pod, scheme, operation, error) (rate(opendal_operation_errors_total{ error!="NotFound"}[$__rate_interval]))` | `timeseries` | OpenDAL error counts per Instance. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]` |
# Remote WAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Triggered region flush total | `meta_triggered_region_flush_total` | `timeseries` | Triggered region flush total | `prometheus` | `none` | `{{pod}}-{{topic_name}}` |
| Triggered region checkpoint total | `meta_triggered_region_checkpoint_total` | `timeseries` | Triggered region checkpoint total | `prometheus` | `none` | `{{pod}}-{{topic_name}}` |
| Topic estimated replay size | `meta_topic_estimated_replay_size` | `timeseries` | Topic estimated max replay size | `prometheus` | `bytes` | `{{pod}}-{{topic_name}}` |
| Kafka logstore's bytes traffic | `rate(greptime_logstore_kafka_client_bytes_total[$__rate_interval])` | `timeseries` | Kafka logstore's bytes traffic | `prometheus` | `bytes` | `{{pod}}-{{logstore}}` |
# Metasrv
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -103,6 +110,8 @@
| Meta KV Ops Latency | `histogram_quantile(0.99, sum by(pod, le, op, target) (greptime_meta_kv_request_elapsed_bucket))` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `s` | `{{pod}}-{{op}} p99` |
| Rate of meta KV Ops | `rate(greptime_meta_kv_request_elapsed_count[$__rate_interval])` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `none` | `{{pod}}-{{op}} p99` |
| DDL Latency | `histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_tables_bucket))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_table))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_view))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_flow))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_drop_table))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_alter_table))` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `s` | `CreateLogicalTables-{{step}} p90` |
| Reconciliation stats | `greptime_meta_reconciliation_stats` | `timeseries` | Reconciliation stats | `prometheus` | `s` | `{{pod}}-{{table_type}}-{{type}}` |
| Reconciliation steps | `histogram_quantile(0.9, greptime_meta_reconciliation_procedure_bucket)` | `timeseries` | Elapsed time of reconciliation steps | `prometheus` | `s` | `{{procedure_name}}-{{step}}-P90` |
# Flownode
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |


@@ -187,7 +187,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-datanode"})
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
@@ -202,7 +202,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-datanode"})
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
@@ -217,7 +217,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-frontend"})
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
@@ -232,7 +232,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-cpu'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-frontend"})
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
@@ -247,7 +247,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-resident'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-metasrv"})
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
@@ -262,7 +262,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-metasrv"})
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
@@ -277,7 +277,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{app="greptime-flownode"})
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
@@ -292,7 +292,7 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{app="greptime-flownode"})
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
@@ -802,6 +802,48 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]'
- title: Remote WAL
panels:
- title: Triggered region flush total
type: timeseries
description: Triggered region flush total
unit: none
queries:
- expr: meta_triggered_region_flush_total
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Triggered region checkpoint total
type: timeseries
description: Triggered region checkpoint total
unit: none
queries:
- expr: meta_triggered_region_checkpoint_total
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Topic estimated replay size
type: timeseries
description: Topic estimated max replay size
unit: bytes
queries:
- expr: meta_topic_estimated_replay_size
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Kafka logstore's bytes traffic
type: timeseries
description: Kafka logstore's bytes traffic
unit: bytes
queries:
- expr: rate(greptime_logstore_kafka_client_bytes_total[$__rate_interval])
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{logstore}}'
- title: Metasrv
panels:
- title: Region migration datanode
@@ -948,6 +990,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: AlterTable-{{step}} p90
- title: Reconciliation stats
type: timeseries
description: Reconciliation stats
unit: s
queries:
- expr: greptime_meta_reconciliation_stats
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{table_type}}-{{type}}'
- title: Reconciliation steps
type: timeseries
description: Elapsed time of reconciliation steps
unit: s
queries:
- expr: histogram_quantile(0.9, greptime_meta_reconciliation_procedure_bucket)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{procedure_name}}-{{step}}-P90'
- title: Flownode
panels:
- title: Flow Ingest / Output Rate


@@ -1,2 +1,2 @@
[toolchain]
channel = "nightly-2025-05-19"
channel = "nightly-2025-10-01"

scripts/generate_certs.sh Executable file

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -euo pipefail
CERT_DIR="${1:-$(dirname "$0")/../tests-integration/fixtures/certs}"
DAYS="${2:-365}"
mkdir -p "${CERT_DIR}"
cd "${CERT_DIR}"
echo "Generating CA certificate..."
openssl req -new -x509 -days "${DAYS}" -nodes -text \
-out root.crt -keyout root.key \
-subj "/CN=GreptimeDBRootCA"
echo "Generating server certificate..."
openssl req -new -nodes -text \
-out server.csr -keyout server.key \
-subj "/CN=greptime"
openssl x509 -req -in server.csr -text -days "${DAYS}" \
-CA root.crt -CAkey root.key -CAcreateserial \
-out server.crt \
-extensions v3_req -extfile <(printf "[v3_req]\nsubjectAltName=DNS:localhost,IP:127.0.0.1")
echo "Generating client certificate..."
# Make sure the client certificate is for the greptimedb user
openssl req -new -nodes -text \
-out client.csr -keyout client.key \
-subj "/CN=greptimedb"
openssl x509 -req -in client.csr -CA root.crt -CAkey root.key -CAcreateserial \
-out client.crt -days "${DAYS}" -extensions v3_req -extfile <(printf "[v3_req]\nsubjectAltName=DNS:localhost")
rm -f *.csr
echo "TLS certificates generated successfully in ${CERT_DIR}"
chmod 644 root.key
chmod 644 client.key
chmod 644 server.key
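# Example invocation (paths illustrative): write 30-day certs into /tmp/certs:
#   ./scripts/generate_certs.sh /tmp/certs 30
# With no arguments, certificates are valid for 365 days and are written to
# tests-integration/fixtures/certs relative to the script's directory.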


@@ -8,6 +8,7 @@ license.workspace = true
workspace = true
[dependencies]
arrow-schema.workspace = true
common-base.workspace = true
common-decimal.workspace = true
common-error.workspace = true


@@ -19,8 +19,8 @@ use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use common_time::timestamp::TimeUnit;
use datatypes::prelude::ConcreteDataType;
use snafu::prelude::*;
use snafu::Location;
use snafu::prelude::*;
pub type Result<T> = std::result::Result<T, Error>;

File diff suppressed because it is too large


@@ -12,8 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(let_chains)]
pub mod error;
pub mod helper;


@@ -14,10 +14,11 @@
use std::collections::HashMap;
use arrow_schema::extension::{EXTENSION_TYPE_METADATA_KEY, EXTENSION_TYPE_NAME_KEY};
use datatypes::schema::{
ColumnDefaultConstraint, ColumnSchema, FulltextAnalyzer, FulltextBackend, FulltextOptions,
SkippingIndexOptions, SkippingIndexType, COMMENT_KEY, FULLTEXT_KEY, INVERTED_INDEX_KEY,
SKIPPING_INDEX_KEY,
COMMENT_KEY, ColumnDefaultConstraint, ColumnSchema, FULLTEXT_KEY, FulltextAnalyzer,
FulltextBackend, FulltextOptions, INVERTED_INDEX_KEY, SKIPPING_INDEX_KEY, SkippingIndexOptions,
SkippingIndexType,
};
use greptime_proto::v1::{
Analyzer, FulltextBackend as PbFulltextBackend, SkippingIndexType as PbSkippingIndexType,
@@ -37,8 +38,10 @@ const SKIPPING_INDEX_GRPC_KEY: &str = "skipping_index";
/// Tries to construct a `ColumnSchema` from the given `ColumnDef`.
pub fn try_as_column_schema(column_def: &ColumnDef) -> Result<ColumnSchema> {
let data_type =
ColumnDataTypeWrapper::try_new(column_def.data_type, column_def.datatype_extension)?;
let data_type = ColumnDataTypeWrapper::try_new(
column_def.data_type,
column_def.datatype_extension.clone(),
)?;
let constraint = if column_def.default_constraint.is_empty() {
None
@@ -66,6 +69,15 @@ pub fn try_as_column_schema(column_def: &ColumnDef) -> Result<ColumnSchema> {
if let Some(skipping_index) = options.options.get(SKIPPING_INDEX_GRPC_KEY) {
metadata.insert(SKIPPING_INDEX_KEY.to_string(), skipping_index.to_owned());
}
if let Some(extension_name) = options.options.get(EXTENSION_TYPE_NAME_KEY) {
metadata.insert(EXTENSION_TYPE_NAME_KEY.to_string(), extension_name.clone());
}
if let Some(extension_metadata) = options.options.get(EXTENSION_TYPE_METADATA_KEY) {
metadata.insert(
EXTENSION_TYPE_METADATA_KEY.to_string(),
extension_metadata.clone(),
);
}
}
ColumnSchema::new(&column_def.name, data_type.into(), column_def.is_nullable)
@@ -137,6 +149,17 @@ pub fn options_from_column_schema(column_schema: &ColumnSchema) -> Option<Column
.options
.insert(SKIPPING_INDEX_GRPC_KEY.to_string(), skipping_index.clone());
}
if let Some(extension_name) = column_schema.metadata().get(EXTENSION_TYPE_NAME_KEY) {
options
.options
.insert(EXTENSION_TYPE_NAME_KEY.to_string(), extension_name.clone());
}
if let Some(extension_metadata) = column_schema.metadata().get(EXTENSION_TYPE_METADATA_KEY) {
options.options.insert(
EXTENSION_TYPE_METADATA_KEY.to_string(),
extension_metadata.clone(),
);
}
(!options.options.is_empty()).then_some(options)
}


@@ -17,13 +17,13 @@ use std::sync::Arc;
use common_base::secrets::SecretString;
use digest::Digest;
use sha1::Sha1;
use snafu::{ensure, OptionExt};
use snafu::{OptionExt, ensure};
use crate::error::{IllegalParamSnafu, InvalidConfigSnafu, Result, UserPasswordMismatchSnafu};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::static_user_provider::{StaticUserProvider, STATIC_USER_PROVIDER};
use crate::user_provider::static_user_provider::{STATIC_USER_PROVIDER, StaticUserProvider};
use crate::user_provider::watch_file_user_provider::{
WatchFileUserProvider, WATCH_FILE_USER_PROVIDER,
WATCH_FILE_USER_PROVIDER, WatchFileUserProvider,
};
use crate::{UserInfoRef, UserProviderRef};
@@ -35,7 +35,7 @@ pub fn userinfo_by_name(username: Option<String>) -> UserInfoRef {
DefaultUserInfo::with_name(username.unwrap_or_else(|| DEFAULT_USERNAME.to_string()))
}
pub fn user_provider_from_option(opt: &String) -> Result<UserProviderRef> {
pub fn user_provider_from_option(opt: &str) -> Result<UserProviderRef> {
let (name, content) = opt.split_once(':').with_context(|| InvalidConfigSnafu {
value: opt.to_string(),
msg: "UserProviderOption must be in format `<option>:<value>`",
@@ -57,7 +57,7 @@ pub fn user_provider_from_option(opt: &String) -> Result<UserProviderRef> {
}
}
pub fn static_user_provider_from_option(opt: &String) -> Result<StaticUserProvider> {
pub fn static_user_provider_from_option(opt: &str) -> Result<StaticUserProvider> {
let (name, content) = opt.split_once(':').with_context(|| InvalidConfigSnafu {
value: opt.to_string(),
msg: "UserProviderOption must be in format `<option>:<value>`",


@@ -22,13 +22,13 @@ mod user_provider;
pub mod tests;
pub use common::{
auth_mysql, static_user_provider_from_option, user_provider_from_option, userinfo_by_name,
HashedPassword, Identity, Password,
HashedPassword, Identity, Password, auth_mysql, static_user_provider_from_option,
user_provider_from_option, userinfo_by_name,
};
pub use permission::{PermissionChecker, PermissionReq, PermissionResp};
pub use permission::{DefaultPermissionChecker, PermissionChecker, PermissionReq, PermissionResp};
pub use user_info::UserInfo;
pub use user_provider::static_user_provider::StaticUserProvider;
pub use user_provider::UserProvider;
pub use user_provider::static_user_provider::StaticUserProvider;
/// pub type alias
pub type UserInfoRef = std::sync::Arc<dyn UserInfo>;


@@ -13,12 +13,15 @@
// limitations under the License.
use std::fmt::Debug;
use std::sync::Arc;
use api::v1::greptime_request::Request;
use common_telemetry::debug;
use sql::statements::statement::Statement;
use crate::error::{PermissionDeniedSnafu, Result};
use crate::{PermissionCheckerRef, UserInfoRef};
use crate::user_info::DefaultUserInfo;
use crate::{PermissionCheckerRef, UserInfo, UserInfoRef};
#[derive(Debug, Clone)]
pub enum PermissionReq<'a> {
@@ -35,6 +38,32 @@ pub enum PermissionReq<'a> {
BulkInsert,
}
impl<'a> PermissionReq<'a> {
/// Returns true if the permission request is for read operations.
pub fn is_readonly(&self) -> bool {
match self {
PermissionReq::GrpcRequest(Request::Query(_))
| PermissionReq::PromQuery
| PermissionReq::LogQuery
| PermissionReq::PromStoreRead => true,
PermissionReq::SqlStatement(stmt) => stmt.is_readonly(),
PermissionReq::GrpcRequest(_)
| PermissionReq::Opentsdb
| PermissionReq::LineProtocol
| PermissionReq::PromStoreWrite
| PermissionReq::Otlp
| PermissionReq::LogWrite
| PermissionReq::BulkInsert => false,
}
}
/// Returns true if the permission request is for write operations.
pub fn is_write(&self) -> bool {
!self.is_readonly()
}
}
#[derive(Debug)]
pub enum PermissionResp {
Allow,
@@ -65,3 +94,106 @@ impl PermissionChecker for Option<&PermissionCheckerRef> {
}
}
}
/// The default permission checker implementation.
/// It checks the permission mode of [DefaultUserInfo].
pub struct DefaultPermissionChecker;
impl DefaultPermissionChecker {
/// Returns a new [PermissionCheckerRef] instance.
pub fn arc() -> PermissionCheckerRef {
Arc::new(DefaultPermissionChecker)
}
}
impl PermissionChecker for DefaultPermissionChecker {
fn check_permission(
&self,
user_info: UserInfoRef,
req: PermissionReq,
) -> Result<PermissionResp> {
if let Some(default_user) = user_info.as_any().downcast_ref::<DefaultUserInfo>() {
let permission_mode = default_user.permission_mode();
if req.is_readonly() && !permission_mode.can_read() {
debug!(
"Permission denied: read operation not allowed, user = {}, permission = {}",
default_user.username(),
permission_mode.as_str()
);
return Ok(PermissionResp::Reject);
}
if req.is_write() && !permission_mode.can_write() {
debug!(
"Permission denied: write operation not allowed, user = {}, permission = {}",
default_user.username(),
permission_mode.as_str()
);
return Ok(PermissionResp::Reject);
}
}
// default allow all
Ok(PermissionResp::Allow)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::user_info::PermissionMode;
#[test]
fn test_default_permission_checker_allow_all_operations() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("test_user", PermissionMode::ReadWrite);
let read_req = PermissionReq::PromQuery;
let write_req = PermissionReq::PromStoreWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Allow));
assert!(matches!(write_result, PermissionResp::Allow));
}
#[test]
fn test_default_permission_checker_readonly_user() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("readonly_user", PermissionMode::ReadOnly);
let read_req = PermissionReq::PromQuery;
let write_req = PermissionReq::PromStoreWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Allow));
assert!(matches!(write_result, PermissionResp::Reject));
}
#[test]
fn test_default_permission_checker_writeonly_user() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("writeonly_user", PermissionMode::WriteOnly);
let read_req = PermissionReq::LogQuery;
let write_req = PermissionReq::LogWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Reject));
assert!(matches!(write_result, PermissionResp::Allow));
}
}


@@ -21,7 +21,7 @@ use crate::error::{
UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, Identity, Password, UserInfoRef, UserProvider};
use crate::{Identity, Password, UserInfoRef, UserProvider, auth_mysql};
pub struct DatabaseAuthInfo<'a> {
pub catalog: &'a str,


@@ -23,17 +23,86 @@ pub trait UserInfo: Debug + Sync + Send {
fn username(&self) -> &str;
}
/// The user permission mode
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum PermissionMode {
#[default]
ReadWrite,
ReadOnly,
WriteOnly,
}
impl PermissionMode {
/// Parse permission mode from string.
/// Supported values are:
/// - "rw", "readwrite", "read_write" => ReadWrite
/// - "ro", "readonly", "read_only" => ReadOnly
/// - "wo", "writeonly", "write_only" => WriteOnly
/// Falls back to `ReadWrite` if the input string is not a recognized permission mode.
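/// Matching is case-insensitive, e.g. `PermissionMode::from_str("RO")`
/// yields `ReadOnly` and `from_str("bogus")` yields `ReadWrite`.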
pub fn from_str(s: &str) -> Self {
match s.to_lowercase().as_str() {
"readwrite" | "read_write" | "rw" => PermissionMode::ReadWrite,
"readonly" | "read_only" | "ro" => PermissionMode::ReadOnly,
"writeonly" | "write_only" | "wo" => PermissionMode::WriteOnly,
_ => PermissionMode::ReadWrite,
}
}
/// Convert permission mode to string.
/// - ReadWrite => "rw"
/// - ReadOnly => "ro"
/// - WriteOnly => "wo"
/// The returned string is a static string slice.
pub fn as_str(&self) -> &'static str {
match self {
PermissionMode::ReadWrite => "rw",
PermissionMode::ReadOnly => "ro",
PermissionMode::WriteOnly => "wo",
}
}
/// Returns true if the permission mode allows read operations.
pub fn can_read(&self) -> bool {
matches!(self, PermissionMode::ReadWrite | PermissionMode::ReadOnly)
}
/// Returns true if the permission mode allows write operations.
pub fn can_write(&self) -> bool {
matches!(self, PermissionMode::ReadWrite | PermissionMode::WriteOnly)
}
}
impl std::fmt::Display for PermissionMode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
#[derive(Debug)]
pub(crate) struct DefaultUserInfo {
username: String,
permission_mode: PermissionMode,
}
impl DefaultUserInfo {
pub(crate) fn with_name(username: impl Into<String>) -> UserInfoRef {
Self::with_name_and_permission(username, PermissionMode::default())
}
/// Create a UserInfo with specified permission mode.
pub(crate) fn with_name_and_permission(
username: impl Into<String>,
permission_mode: PermissionMode,
) -> UserInfoRef {
Arc::new(Self {
username: username.into(),
permission_mode,
})
}
pub(crate) fn permission_mode(&self) -> &PermissionMode {
&self.permission_mode
}
}
impl UserInfo for DefaultUserInfo {
@@ -45,3 +114,120 @@ impl UserInfo for DefaultUserInfo {
self.username.as_str()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_permission_mode_from_str() {
// Test ReadWrite variants
assert_eq!(
PermissionMode::from_str("readwrite"),
PermissionMode::ReadWrite
);
assert_eq!(
PermissionMode::from_str("read_write"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str("rw"), PermissionMode::ReadWrite);
assert_eq!(
PermissionMode::from_str("ReadWrite"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str("RW"), PermissionMode::ReadWrite);
// Test ReadOnly variants
assert_eq!(
PermissionMode::from_str("readonly"),
PermissionMode::ReadOnly
);
assert_eq!(
PermissionMode::from_str("read_only"),
PermissionMode::ReadOnly
);
assert_eq!(PermissionMode::from_str("ro"), PermissionMode::ReadOnly);
assert_eq!(
PermissionMode::from_str("ReadOnly"),
PermissionMode::ReadOnly
);
assert_eq!(PermissionMode::from_str("RO"), PermissionMode::ReadOnly);
// Test WriteOnly variants
assert_eq!(
PermissionMode::from_str("writeonly"),
PermissionMode::WriteOnly
);
assert_eq!(
PermissionMode::from_str("write_only"),
PermissionMode::WriteOnly
);
assert_eq!(PermissionMode::from_str("wo"), PermissionMode::WriteOnly);
assert_eq!(
PermissionMode::from_str("WriteOnly"),
PermissionMode::WriteOnly
);
assert_eq!(PermissionMode::from_str("WO"), PermissionMode::WriteOnly);
// Test invalid inputs default to ReadWrite
assert_eq!(
PermissionMode::from_str("invalid"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str(""), PermissionMode::ReadWrite);
assert_eq!(PermissionMode::from_str("xyz"), PermissionMode::ReadWrite);
}
#[test]
fn test_permission_mode_as_str() {
assert_eq!(PermissionMode::ReadWrite.as_str(), "rw");
assert_eq!(PermissionMode::ReadOnly.as_str(), "ro");
assert_eq!(PermissionMode::WriteOnly.as_str(), "wo");
}
#[test]
fn test_permission_mode_default() {
assert_eq!(PermissionMode::default(), PermissionMode::ReadWrite);
}
#[test]
fn test_permission_mode_round_trip() {
let modes = [
PermissionMode::ReadWrite,
PermissionMode::ReadOnly,
PermissionMode::WriteOnly,
];
for mode in modes {
let str_repr = mode.as_str();
let parsed = PermissionMode::from_str(str_repr);
assert_eq!(mode, parsed);
}
}
#[test]
fn test_default_user_info_with_name() {
let user_info = DefaultUserInfo::with_name("test_user");
assert_eq!(user_info.username(), "test_user");
}
#[test]
fn test_default_user_info_with_name_and_permission() {
let user_info =
DefaultUserInfo::with_name_and_permission("test_user", PermissionMode::ReadOnly);
assert_eq!(user_info.username(), "test_user");
// Cast to DefaultUserInfo to access permission_mode
let default_user = user_info
.as_any()
.downcast_ref::<DefaultUserInfo>()
.unwrap();
assert_eq!(default_user.permission_mode, PermissionMode::ReadOnly);
}
#[test]
fn test_user_info_as_any() {
let user_info = DefaultUserInfo::with_name("test_user");
let any_ref = user_info.as_any();
assert!(any_ref.downcast_ref::<DefaultUserInfo>().is_some());
}
}


@@ -22,15 +22,15 @@ use std::io::BufRead;
use std::path::Path;
use common_base::secrets::ExposeSecret;
use snafu::{ensure, OptionExt, ResultExt};
use snafu::{OptionExt, ResultExt, ensure};
use crate::common::{Identity, Password};
use crate::error::{
IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Result, UnsupportedPasswordTypeSnafu,
UserNotFoundSnafu, UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, UserInfoRef};
use crate::user_info::{DefaultUserInfo, PermissionMode};
use crate::{UserInfoRef, auth_mysql};
#[async_trait::async_trait]
pub trait UserProvider: Send + Sync {
@@ -64,11 +64,19 @@ pub trait UserProvider: Send + Sync {
}
}
fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Vec<u8>>>> {
/// Type alias for user info map
/// Key is username, value is (password, permission_mode)
pub type UserInfoMap = HashMap<String, (Vec<u8>, PermissionMode)>;
fn load_credential_from_file(filepath: &str) -> Result<UserInfoMap> {
// check valid path
let path = Path::new(filepath);
if !path.exists() {
return Ok(None);
return InvalidConfigSnafu {
value: filepath.to_string(),
msg: "UserProvider file must exist",
}
.fail();
}
ensure!(
@@ -83,13 +91,19 @@ fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Ve
.lines()
.map_while(std::result::Result::ok)
.filter_map(|line| {
if let Some((k, v)) = line.split_once('=') {
Some((k.to_string(), v.as_bytes().to_vec()))
} else {
None
// The line format is:
// - `username=password` - Basic user with default permissions
// - `username:permission_mode=password` - User with specific permission mode
// - Lines starting with '#' are treated as comments and ignored
// - Empty lines are ignored
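// An illustrative file (accounts and passwords are made up):
//   # service accounts
//   admin=s3cr3t        -> ("admin", ReadWrite)
//   viewer:ro=v13w      -> ("viewer", ReadOnly)
//   ingester:wo=wr1t3   -> ("ingester", WriteOnly)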
let line = line.trim();
if line.is_empty() || line.starts_with('#') {
return None;
}
parse_credential_line(line)
})
.collect::<HashMap<String, Vec<u8>>>();
.collect::<HashMap<String, _>>();
ensure!(
!credential.is_empty(),
@@ -99,11 +113,31 @@ fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Ve
}
);
Ok(Some(credential))
Ok(credential)
}
/// Parse a line of credential in the format of `username=password` or `username:permission_mode=password`
pub(crate) fn parse_credential_line(line: &str) -> Option<(String, (Vec<u8>, PermissionMode))> {
let parts = line.split('=').collect::<Vec<&str>>();
if parts.len() != 2 {
return None;
}
let (username_part, password) = (parts[0], parts[1]);
let (username, permission_mode) = if let Some((user, perm)) = username_part.split_once(':') {
(user, PermissionMode::from_str(perm))
} else {
(username_part, PermissionMode::default())
};
Some((
username.to_string(),
(password.as_bytes().to_vec(), permission_mode),
))
}
fn authenticate_with_credential(
users: &HashMap<String, Vec<u8>>,
users: &UserInfoMap,
input_id: Identity<'_>,
input_pwd: Password<'_>,
) -> Result<UserInfoRef> {
@@ -115,7 +149,7 @@ fn authenticate_with_credential(
msg: "blank username"
}
);
let save_pwd = users.get(username).context(UserNotFoundSnafu {
let (save_pwd, permission_mode) = users.get(username).context(UserNotFoundSnafu {
username: username.to_string(),
})?;
@@ -128,7 +162,10 @@ fn authenticate_with_credential(
}
);
if save_pwd == pwd.expose_secret().as_bytes() {
Ok(DefaultUserInfo::with_name(username))
Ok(DefaultUserInfo::with_name_and_permission(
username,
*permission_mode,
))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
@@ -137,8 +174,9 @@ fn authenticate_with_credential(
}
}
Password::MysqlNativePassword(auth_data, salt) => {
auth_mysql(auth_data, salt, username, save_pwd)
.map(|_| DefaultUserInfo::with_name(username))
auth_mysql(auth_data, salt, username, save_pwd).map(|_| {
DefaultUserInfo::with_name_and_permission(username, *permission_mode)
})
}
Password::PgMD5(_, _) => UnsupportedPasswordTypeSnafu {
password_type: "pg_md5",
@@ -148,3 +186,108 @@ fn authenticate_with_credential(
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_credential_line() {
// Basic username=password format
let result = parse_credential_line("admin=password123");
assert_eq!(
result,
Some((
"admin".to_string(),
("password123".as_bytes().to_vec(), PermissionMode::default())
))
);
// Username with permission mode
let result = parse_credential_line("user:ReadOnly=secret");
assert_eq!(
result,
Some((
"user".to_string(),
("secret".as_bytes().to_vec(), PermissionMode::ReadOnly)
))
);
let result = parse_credential_line("user:ro=secret");
assert_eq!(
result,
Some((
"user".to_string(),
("secret".as_bytes().to_vec(), PermissionMode::ReadOnly)
))
);
// Username with WriteOnly permission mode
let result = parse_credential_line("writer:WriteOnly=mypass");
assert_eq!(
result,
Some((
"writer".to_string(),
("mypass".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Username with 'wo' as WriteOnly permission shorthand
let result = parse_credential_line("writer:wo=mypass");
assert_eq!(
result,
Some((
"writer".to_string(),
("mypass".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Username with complex password containing special characters
let result = parse_credential_line("admin:rw=p@ssw0rd!123");
assert_eq!(
result,
Some((
"admin".to_string(),
(
"p@ssw0rd!123".as_bytes().to_vec(),
PermissionMode::ReadWrite
)
))
);
// Username with spaces should be preserved
let result = parse_credential_line("user name:WriteOnly=password");
assert_eq!(
result,
Some((
"user name".to_string(),
("password".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Invalid format - no equals sign
let result = parse_credential_line("invalid_line");
assert_eq!(result, None);
// Invalid format - multiple equals signs
let result = parse_credential_line("user=pass=word");
assert_eq!(result, None);
// Empty password
let result = parse_credential_line("user=");
assert_eq!(
result,
Some((
"user".to_string(),
("".as_bytes().to_vec(), PermissionMode::default())
))
);
// Empty username
let result = parse_credential_line("=password");
assert_eq!(
result,
Some((
"".to_string(),
("password".as_bytes().to_vec(), PermissionMode::default())
))
);
}
}


@@ -12,19 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use async_trait::async_trait;
use snafu::{OptionExt, ResultExt};
use crate::error::{FromUtf8Snafu, InvalidConfigSnafu, Result};
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::user_provider::{
UserInfoMap, authenticate_with_credential, load_credential_from_file, parse_credential_line,
};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const STATIC_USER_PROVIDER: &str = "static_user_provider";
pub struct StaticUserProvider {
users: HashMap<String, Vec<u8>>,
users: UserInfoMap,
}
impl StaticUserProvider {
@@ -35,23 +35,18 @@ impl StaticUserProvider {
})?;
match mode {
"file" => {
let users = load_credential_from_file(content)?
.context(InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticFileUserProvider must be a valid file path",
})?;
let users = load_credential_from_file(content)?;
Ok(StaticUserProvider { users })
}
"cmd" => content
.split(',')
.map(|kv| {
let (k, v) = kv.split_once('=').context(InvalidConfigSnafu {
parse_credential_line(kv).context(InvalidConfigSnafu {
value: kv.to_string(),
msg: "StaticUserProviderOption cmd values must be in format `user=pwd[,user=pwd]`",
})?;
Ok((k.to_string(), v.as_bytes().to_vec()))
})
})
.collect::<Result<HashMap<String, Vec<u8>>>>()
.collect::<Result<UserInfoMap>>()
.map(|users| StaticUserProvider { users }),
_ => InvalidConfigSnafu {
value: mode.to_string(),
@@ -69,7 +64,7 @@ impl StaticUserProvider {
msg: "Expect at least one pair of username and password",
})?;
let username = kv.0;
let pwd = String::from_utf8(kv.1.clone()).context(FromUtf8Snafu)?;
let pwd = String::from_utf8(kv.1.0.clone()).context(FromUtf8Snafu)?;
Ok((username.clone(), pwd))
}
}
@@ -102,10 +97,10 @@ pub mod test {
use common_test_util::temp_dir::create_temp_dir;
use crate::UserProvider;
use crate::user_info::DefaultUserInfo;
use crate::user_provider::static_user_provider::StaticUserProvider;
use crate::user_provider::{Identity, Password};
use crate::UserProvider;
async fn test_authenticate(provider: &dyn UserProvider, username: &str, password: &str) {
let re = provider
@@ -143,12 +138,13 @@ pub mod test {
let file = File::create(&file_path);
let file = file.unwrap();
let mut lw = LineWriter::new(file);
assert!(lw
.write_all(
assert!(
lw.write_all(
b"root=123456
admin=654321",
)
.is_ok());
.is_ok()
);
lw.flush().unwrap();
}


@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::path::Path;
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
@@ -20,20 +19,20 @@ use std::sync::{Arc, Mutex};
use async_trait::async_trait;
use common_telemetry::{info, warn};
use notify::{EventKind, RecursiveMode, Watcher};
use snafu::{ensure, ResultExt};
use snafu::{ResultExt, ensure};
use crate::error::{FileWatchSnafu, InvalidConfigSnafu, Result};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::user_provider::{UserInfoMap, authenticate_with_credential, load_credential_from_file};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const WATCH_FILE_USER_PROVIDER: &str = "watch_file_user_provider";
type WatchedCredentialRef = Arc<Mutex<Option<HashMap<String, Vec<u8>>>>>;
type WatchedCredentialRef = Arc<Mutex<UserInfoMap>>;
/// A user provider that reads user credential from a file and watches the file for changes.
///
/// Empty file is invalid; but file not exist means every user can be authenticated.
/// Both empty file and non-existent file are invalid and will cause initialization to fail.
#[derive(Debug)]
pub(crate) struct WatchFileUserProvider {
users: WatchedCredentialRef,
}
@@ -108,16 +107,7 @@ impl UserProvider for WatchFileUserProvider {
async fn authenticate(&self, id: Identity<'_>, password: Password<'_>) -> Result<UserInfoRef> {
let users = self.users.lock().expect("users credential must be valid");
if let Some(users) = users.as_ref() {
authenticate_with_credential(users, id, password)
} else {
match id {
Identity::UserId(id, _) => {
warn!(id, "User provider file not exist, allow all users");
Ok(DefaultUserInfo::with_name(id))
}
}
}
authenticate_with_credential(&users, id, password)
}
async fn authorize(&self, _: &str, _: &str, _: &UserInfoRef) -> Result<()> {
@@ -133,9 +123,9 @@ pub mod test {
use common_test_util::temp_dir::create_temp_dir;
use tokio::time::sleep;
use crate::UserProvider;
use crate::user_provider::watch_file_user_provider::WatchFileUserProvider;
use crate::user_provider::{Identity, Password};
use crate::UserProvider;
async fn test_authenticate(
provider: &dyn UserProvider,
@@ -178,6 +168,21 @@ pub mod test {
}
}
#[tokio::test]
async fn test_file_provider_initialization_with_missing_file() {
common_telemetry::init_default_ut_logging();
let dir = create_temp_dir("test_missing_file");
let file_path = format!("{}/non_existent_file", dir.path().to_str().unwrap());
// Try to create provider with non-existent file should fail
let result = WatchFileUserProvider::new(file_path.as_str());
assert!(result.is_err());
let error = result.unwrap_err();
assert!(error.to_string().contains("UserProvider file must exist"));
}
#[tokio::test]
async fn test_file_provider() {
common_telemetry::init_default_ut_logging();
@@ -202,9 +207,10 @@ pub mod test {
// remove the tmp file
assert!(std::fs::remove_file(&file_path).is_ok());
test_authenticate(&provider, "root", "123456", true, Some(timeout)).await;
// When file is deleted during runtime, keep the last known good credentials
test_authenticate(&provider, "root", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "root", "123456", false, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", false, Some(timeout)).await;
// recreate the tmp file
assert!(std::fs::write(&file_path, "root=123456\n").is_ok());


@@ -19,9 +19,9 @@ use std::time::Duration;
use catalog::kvbackend::new_table_cache;
use common_meta::cache::{
new_schema_cache, new_table_flownode_set_cache, new_table_info_cache, new_table_name_cache,
new_table_route_cache, new_table_schema_cache, new_view_info_cache, CacheRegistry,
CacheRegistryBuilder, LayeredCacheRegistryBuilder,
CacheRegistry, CacheRegistryBuilder, LayeredCacheRegistryBuilder, new_schema_cache,
new_table_flownode_set_cache, new_table_info_cache, new_table_name_cache,
new_table_route_cache, new_table_schema_cache, new_view_info_cache,
};
use common_meta::kv_backend::KvBackendRef;
use moka::future::CacheBuilder;


@@ -32,8 +32,10 @@ common-runtime.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
common-version.workspace = true
common-workload.workspace = true
dashmap.workspace = true
datafusion.workspace = true
datafusion-pg-catalog.workspace = true
datatypes.workspace = true
futures.workspace = true
futures-util.workspace = true
@@ -47,7 +49,7 @@ paste.workspace = true
prometheus.workspace = true
promql-parser.workspace = true
rand.workspace = true
rustc-hash.workspace = true
serde.workspace = true
serde_json.workspace = true
session.workspace = true
snafu.workspace = true


@@ -297,6 +297,20 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to handle query"))]
HandleQuery {
source: common_meta::error::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Failed to project schema"))]
ProjectSchema {
source: datatypes::error::Error,
#[snafu(implicit)]
location: Location,
},
}
impl Error {
@@ -369,6 +383,8 @@ impl ErrorExt for Error {
Error::FrontendNotFound { .. } | Error::MetaClientMissing { .. } => {
StatusCode::Unexpected
}
Error::HandleQuery { source, .. } => source.status_code(),
Error::ProjectSchema { source, .. } => source.status_code(),
}
}


@@ -14,25 +14,34 @@
use api::v1::meta::ProcedureStatus;
use common_error::ext::BoxedError;
use common_meta::cluster::{ClusterInfo, NodeInfo};
use common_meta::cluster::{ClusterInfo, NodeInfo, Role};
use common_meta::datanode::RegionStat;
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::node_manager::DatanodeManagerRef;
use common_meta::procedure_executor::{ExecutorContext, ProcedureExecutor};
use common_meta::rpc::procedure;
use common_procedure::{ProcedureInfo, ProcedureState};
use common_query::request::QueryRequest;
use common_recordbatch::SendableRecordBatchStream;
use common_recordbatch::util::ChainedRecordBatchStream;
use meta_client::MetaClientRef;
use snafu::ResultExt;
use store_api::storage::RegionId;
use crate::error;
use crate::information_schema::InformationExtension;
use crate::information_schema::{DatanodeInspectRequest, InformationExtension};
pub struct DistributedInformationExtension {
meta_client: MetaClientRef,
datanode_manager: DatanodeManagerRef,
}
impl DistributedInformationExtension {
pub fn new(meta_client: MetaClientRef) -> Self {
Self { meta_client }
pub fn new(meta_client: MetaClientRef, datanode_manager: DatanodeManagerRef) -> Self {
Self {
meta_client,
datanode_manager,
}
}
}
@@ -98,4 +107,39 @@ impl InformationExtension for DistributedInformationExtension {
.map_err(BoxedError::new)
.context(crate::error::ListFlowStatsSnafu)
}
async fn inspect_datanode(
&self,
request: DatanodeInspectRequest,
) -> std::result::Result<SendableRecordBatchStream, Self::Error> {
// Aggregate results from all datanodes
let nodes = self
.meta_client
.list_nodes(Some(Role::Datanode))
.await
.map_err(BoxedError::new)
.context(crate::error::ListNodesSnafu)?;
let plan = request
.build_plan()
.context(crate::error::DatafusionSnafu)?;
let mut streams = Vec::with_capacity(nodes.len());
for node in nodes {
let client = self.datanode_manager.datanode(&node.peer).await;
let stream = client
.handle_query(QueryRequest {
plan: plan.clone(),
region_id: RegionId::default(),
header: None,
})
.await
.context(crate::error::HandleQuerySnafu)?;
streams.push(stream);
}
let chained =
ChainedRecordBatchStream::new(streams).context(crate::error::CreateRecordBatchSnafu)?;
Ok(Box::pin(chained))
}
}


@@ -21,4 +21,4 @@ mod table_cache;
pub use builder::KvBackendCatalogManagerBuilder;
pub use manager::KvBackendCatalogManager;
pub use table_cache::{new_table_cache, TableCache, TableCacheRef};
pub use table_cache::{TableCache, TableCacheRef, new_table_cache};


@@ -16,8 +16,8 @@ use std::sync::Arc;
use common_catalog::consts::DEFAULT_CATALOG_NAME;
use common_meta::cache::LayeredCacheRegistryRef;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::TableMetadataManager;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::kv_backend::KvBackendRef;
use common_procedure::ProcedureManagerRef;
use moka::sync::Cache;
@@ -26,9 +26,10 @@ use partition::manager::PartitionRuleManager;
#[cfg(feature = "enterprise")]
use crate::information_schema::InformationSchemaTableFactoryRef;
use crate::information_schema::{InformationExtensionRef, InformationSchemaProvider};
use crate::kvbackend::manager::{SystemCatalog, CATALOG_CACHE_MAX_CAPACITY};
use crate::kvbackend::KvBackendCatalogManager;
use crate::kvbackend::manager::{CATALOG_CACHE_MAX_CAPACITY, SystemCatalog};
use crate::process_manager::ProcessManagerRef;
use crate::system_schema::numbers_table_provider::NumbersTableProvider;
use crate::system_schema::pg_catalog::PGCatalogProvider;
pub struct KvBackendCatalogManagerBuilder {
@@ -119,6 +120,7 @@ impl KvBackendCatalogManagerBuilder {
DEFAULT_CATALOG_NAME.to_string(),
me.clone(),
)),
numbers_table_provider: NumbersTableProvider,
backend,
process_manager,
#[cfg(feature = "enterprise")]


@@ -24,12 +24,12 @@ use common_meta::error::Error::CacheNotGet;
use common_meta::error::{CacheNotGetSnafu, Error, ExternalSnafu, GetKvCacheSnafu, Result};
use common_meta::kv_backend::txn::{Txn, TxnResponse};
use common_meta::kv_backend::{KvBackend, KvBackendRef, TxnService};
use common_meta::rpc::KeyValue;
use common_meta::rpc::store::{
BatchDeleteRequest, BatchDeleteResponse, BatchGetRequest, BatchGetResponse, BatchPutRequest,
BatchPutResponse, CompareAndPutRequest, CompareAndPutResponse, DeleteRangeRequest,
DeleteRangeResponse, PutRequest, PutResponse, RangeRequest, RangeResponse,
};
use common_meta::rpc::KeyValue;
use common_telemetry::debug;
use meta_client::client::MetaClient;
use moka::future::{Cache, CacheBuilder};
@@ -461,17 +461,17 @@ impl KvBackend for MetaKvBackend {
#[cfg(test)]
mod tests {
use std::any::Any;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use std::sync::atomic::{AtomicU32, Ordering};
use async_trait::async_trait;
use common_meta::kv_backend::{KvBackend, TxnService};
use common_meta::rpc::KeyValue;
use common_meta::rpc::store::{
BatchDeleteRequest, BatchDeleteResponse, BatchGetRequest, BatchGetResponse,
BatchPutRequest, BatchPutResponse, DeleteRangeRequest, DeleteRangeResponse, PutRequest,
PutResponse, RangeRequest, RangeResponse,
};
use common_meta::rpc::KeyValue;
use dashmap::DashMap;
use super::CachedKvBackend;


@@ -18,20 +18,19 @@ use std::sync::{Arc, Weak};
use async_stream::try_stream;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, NUMBERS_TABLE_ID,
PG_CATALOG_NAME,
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, PG_CATALOG_NAME,
};
use common_error::ext::BoxedError;
use common_meta::cache::{
LayeredCacheRegistryRef, TableInfoCacheRef, TableNameCacheRef, TableRoute, TableRouteCacheRef,
ViewInfoCacheRef,
};
use common_meta::key::TableMetadataManagerRef;
use common_meta::key::catalog_name::CatalogNameKey;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::schema_name::SchemaNameKey;
use common_meta::key::table_info::{TableInfoManager, TableInfoValue};
use common_meta::key::table_name::TableNameKey;
use common_meta::key::TableMetadataManagerRef;
use common_meta::kv_backend::KvBackendRef;
use common_procedure::ProcedureManagerRef;
use futures_util::stream::BoxStream;
@@ -41,15 +40,15 @@ use partition::manager::PartitionRuleManagerRef;
use session::context::{Channel, QueryContext};
use snafu::prelude::*;
use store_api::metric_engine_consts::METRIC_ENGINE_NAME;
use table::TableRef;
use table::dist_table::DistTable;
use table::metadata::{TableId, TableInfoRef};
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
use table::table::PartitionRules;
use table::table_name::TableName;
use table::TableRef;
use tokio::sync::Semaphore;
use tokio_stream::wrappers::ReceiverStream;
use crate::CatalogManager;
use crate::error::{
CacheNotFoundSnafu, GetTableCacheSnafu, InvalidTableInfoInCatalogSnafu, ListCatalogsSnafu,
ListSchemasSnafu, ListTablesSnafu, Result, TableMetadataManagerSnafu,
@@ -59,9 +58,9 @@ use crate::information_schema::InformationSchemaTableFactoryRef;
use crate::information_schema::{InformationExtensionRef, InformationSchemaProvider};
use crate::kvbackend::TableCacheRef;
use crate::process_manager::ProcessManagerRef;
use crate::system_schema::pg_catalog::PGCatalogProvider;
use crate::system_schema::SystemSchemaProvider;
use crate::CatalogManager;
use crate::system_schema::numbers_table_provider::NumbersTableProvider;
use crate::system_schema::pg_catalog::PGCatalogProvider;
/// Access all existing catalog, schema and tables.
///
@@ -555,6 +554,7 @@ pub(super) struct SystemCatalog {
// system_schema_provider for default catalog
pub(super) information_schema_provider: Arc<InformationSchemaProvider>,
pub(super) pg_catalog_provider: Arc<PGCatalogProvider>,
pub(super) numbers_table_provider: NumbersTableProvider,
pub(super) backend: KvBackendRef,
pub(super) process_manager: Option<ProcessManagerRef>,
#[cfg(feature = "enterprise")]
@@ -584,9 +584,7 @@ impl SystemCatalog {
PG_CATALOG_NAME if channel == Channel::Postgres => {
self.pg_catalog_provider.table_names()
}
DEFAULT_SCHEMA_NAME => {
vec![NUMBERS_TABLE_NAME.to_string()]
}
DEFAULT_SCHEMA_NAME => self.numbers_table_provider.table_names(),
_ => vec![],
}
}
@@ -604,7 +602,7 @@ impl SystemCatalog {
if schema == INFORMATION_SCHEMA_NAME {
self.information_schema_provider.table(table).is_some()
} else if schema == DEFAULT_SCHEMA_NAME {
table == NUMBERS_TABLE_NAME
self.numbers_table_provider.table_exists(table)
} else if schema == PG_CATALOG_NAME && channel == Channel::Postgres {
self.pg_catalog_provider.table(table).is_some()
} else {
@@ -649,8 +647,8 @@ impl SystemCatalog {
});
pg_catalog_provider.table(table_name)
}
} else if schema == DEFAULT_SCHEMA_NAME && table_name == NUMBERS_TABLE_NAME {
Some(NumbersTable::table(NUMBERS_TABLE_ID))
} else if schema == DEFAULT_SCHEMA_NAME {
self.numbers_table_provider.table(table_name)
} else {
None
}


@@ -20,9 +20,9 @@ use common_meta::instruction::CacheIdent;
use futures::future::BoxFuture;
use moka::future::Cache;
use snafu::OptionExt;
use table::TableRef;
use table::dist_table::DistTable;
use table::table_name::TableName;
use table::TableRef;
pub type TableCacheRef = Arc<TableCache>;


@@ -14,7 +14,6 @@
#![feature(assert_matches)]
#![feature(try_blocks)]
#![feature(let_chains)]
use std::any::Any;
use std::fmt::{Debug, Formatter};
@@ -25,8 +24,8 @@ use common_catalog::consts::{INFORMATION_SCHEMA_NAME, PG_CATALOG_NAME};
use futures::future::BoxFuture;
use futures_util::stream::BoxStream;
use session::context::QueryContext;
use table::metadata::{TableId, TableInfoRef};
use table::TableRef;
use table::metadata::{TableId, TableInfoRef};
use crate::error::Result;


@@ -14,4 +14,4 @@
pub mod manager;
pub use manager::{new_memory_catalog_manager, MemoryCatalogManager};
pub use manager::{MemoryCatalogManager, new_memory_catalog_manager};


@@ -28,8 +28,8 @@ use common_meta::kv_backend::memory::MemoryKvBackend;
use futures_util::stream::BoxStream;
use session::context::QueryContext;
use snafu::OptionExt;
use table::metadata::{TableId, TableInfoRef};
use table::TableRef;
use table::metadata::{TableId, TableInfoRef};
use crate::error::{CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu};
use crate::information_schema::InformationSchemaProvider;
@@ -392,15 +392,15 @@ impl MemoryCatalogManager {
if !manager.schema_exist_sync(catalog, schema).unwrap() {
manager
.register_schema_sync(RegisterSchemaRequest {
catalog: catalog.to_string(),
schema: schema.to_string(),
catalog: catalog.clone(),
schema: schema.clone(),
})
.unwrap();
}
let request = RegisterTableRequest {
catalog: catalog.to_string(),
schema: schema.to_string(),
catalog: catalog.clone(),
schema: schema.clone(),
table_name: table.table_info().name.clone(),
table_id: table.table_info().ident.table_id,
table,
@@ -419,7 +419,7 @@ pub fn new_memory_catalog_manager() -> Result<Arc<MemoryCatalogManager>> {
mod tests {
use common_catalog::consts::*;
use futures_util::TryStreamExt;
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
use table::table::numbers::{NUMBERS_TABLE_NAME, NumbersTable};
use super::*;
@@ -454,16 +454,18 @@ mod tests {
tables[0].table_info().table_id()
);
assert!(catalog_list
.table(
DEFAULT_CATALOG_NAME,
DEFAULT_SCHEMA_NAME,
"not_exists",
None
)
.await
.unwrap()
.is_none());
assert!(
catalog_list
.table(
DEFAULT_CATALOG_NAME,
DEFAULT_SCHEMA_NAME,
"not_exists",
None
)
.await
.unwrap()
.is_none()
);
}
#[test]
@@ -486,11 +488,13 @@ mod tests {
table: NumbersTable::table(2333),
};
catalog.register_table_sync(register_table_req).unwrap();
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name, None)
.await
.unwrap()
.is_some());
assert!(
catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name, None)
.await
.unwrap()
.is_some()
);
let deregister_table_req = DeregisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
@@ -498,10 +502,12 @@ mod tests {
table_name: table_name.to_string(),
};
catalog.deregister_table_sync(deregister_table_req).unwrap();
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name, None)
.await
.unwrap()
.is_none());
assert!(
catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name, None)
.await
.unwrap()
.is_none()
);
}
}


@@ -12,8 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::collections::hash_map::Entry;
use std::fmt::{Debug, Display, Formatter};
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, RwLock};
@@ -30,7 +30,7 @@ use common_time::util::current_time_millis;
use meta_client::MetaClientRef;
use promql_parser::parser::EvalStmt;
use rand::random;
use snafu::{ensure, OptionExt, ResultExt};
use snafu::{OptionExt, ResultExt, ensure};
use sql::statements::statement::Statement;
use crate::error;
@@ -56,14 +56,21 @@ pub struct ProcessManager {
#[derive(Debug, Clone)]
pub enum QueryStatement {
Sql(Statement),
Promql(EvalStmt),
// The optional string is the alias of the PromQL query.
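// e.g. with alias Some("my_query") (name illustrative), the Display impl below
// renders `<promql expr> AS my_query`; with None it renders the bare expression.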
Promql(EvalStmt, Option<String>),
}
impl Display for QueryStatement {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
match self {
QueryStatement::Sql(stmt) => write!(f, "{}", stmt),
QueryStatement::Promql(eval_stmt) => write!(f, "{}", eval_stmt),
QueryStatement::Promql(eval_stmt, alias) => {
if let Some(alias) = alias {
write!(f, "{} AS {}", eval_stmt, alias)
} else {
write!(f, "{}", eval_stmt)
}
}
}
}
}
@@ -338,9 +345,9 @@ impl SlowQueryTimer {
};
match &self.stmt {
QueryStatement::Promql(stmt) => {
QueryStatement::Promql(stmt, _alias) => {
slow_query_event.is_promql = true;
slow_query_event.query = stmt.expr.to_string();
slow_query_event.query = self.stmt.to_string();
slow_query_event.promql_step = Some(stmt.interval.as_millis() as u64);
let start = stmt


@@ -14,6 +14,7 @@
pub mod information_schema;
mod memory_table;
pub mod numbers_table_provider;
pub mod pg_catalog;
pub mod predicate;
mod utils;
@@ -137,21 +138,24 @@ impl DataSource for SystemTableDataSource {
&self,
request: ScanRequest,
) -> std::result::Result<SendableRecordBatchStream, BoxedError> {
let projection = request.projection.clone();
let projected_schema = match &projection {
let projected_schema = match &request.projection {
Some(projection) => self.try_project(projection)?,
None => self.table.schema(),
};
let projection = request.projection.clone();
let stream = self
.table
.to_stream(request)
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)
.map_err(BoxedError::new)?
.map(move |batch| match &projection {
Some(p) => batch.and_then(|b| b.try_project(p)),
None => batch,
.map(move |batch| match (&projection, batch) {
// Some tables (e.g., inspect tables) already honor projection in their inner stream;
// others ignore it and return full rows. We will only apply projection here if the
// inner batch width doesn't match the projection size.
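// e.g. with projection [0, 2] over a 5-column table: a 5-column batch still
// gets try_project([0, 2]), while a 2-column batch is passed through as-is.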
(Some(p), Ok(b)) if b.num_columns() != p.len() => b.try_project(p),
(_, res) => res,
});
let stream = RecordBatchStreamWrapper {


@@ -24,6 +24,7 @@ pub mod region_peers;
mod region_statistics;
mod runtime_metrics;
pub mod schemata;
mod ssts;
mod table_constraints;
mod table_names;
pub mod tables;
@@ -36,22 +37,26 @@ use common_catalog::consts::{self, DEFAULT_CATALOG_NAME, INFORMATION_SCHEMA_NAME
use common_error::ext::ErrorExt;
use common_meta::cluster::NodeInfo;
use common_meta::datanode::RegionStat;
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::kv_backend::KvBackendRef;
use common_procedure::ProcedureInfo;
use common_recordbatch::SendableRecordBatchStream;
use datafusion::error::DataFusionError;
use datafusion::logical_expr::LogicalPlan;
use datatypes::schema::SchemaRef;
use lazy_static::lazy_static;
use paste::paste;
use process_list::InformationSchemaProcessList;
use store_api::sst_entry::{ManifestSstEntry, PuffinIndexMetaEntry, StorageSstEntry};
use store_api::storage::{ScanRequest, TableId};
use table::metadata::TableType;
use table::TableRef;
use table::metadata::TableType;
pub use table_names::*;
use views::InformationSchemaViews;
use self::columns::InformationSchemaColumns;
use crate::CatalogManager;
use crate::error::{Error, Result};
use crate::process_manager::ProcessManagerRef;
use crate::system_schema::information_schema::cluster_info::InformationSchemaClusterInfo;
@@ -62,6 +67,9 @@ use crate::system_schema::information_schema::partitions::InformationSchemaParti
use crate::system_schema::information_schema::region_peers::InformationSchemaRegionPeers;
use crate::system_schema::information_schema::runtime_metrics::InformationSchemaMetrics;
use crate::system_schema::information_schema::schemata::InformationSchemaSchemata;
use crate::system_schema::information_schema::ssts::{
InformationSchemaSstsIndexMeta, InformationSchemaSstsManifest, InformationSchemaSstsStorage,
};
use crate::system_schema::information_schema::table_constraints::InformationSchemaTableConstraints;
use crate::system_schema::information_schema::tables::InformationSchemaTables;
use crate::system_schema::memory_table::MemoryTable;
@@ -69,7 +77,6 @@ pub(crate) use crate::system_schema::predicate::Predicates;
use crate::system_schema::{
SystemSchemaProvider, SystemSchemaProviderInner, SystemTable, SystemTableRef,
};
use crate::CatalogManager;
lazy_static! {
// Memory tables in `information_schema`.
@@ -90,7 +97,6 @@ lazy_static! {
ROUTINES,
SCHEMA_PRIVILEGES,
TABLE_PRIVILEGES,
TRIGGERS,
GLOBAL_STATUS,
SESSION_STATUS,
PARTITIONS,
@@ -200,7 +206,6 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
ROUTINES => setup_memory_table!(ROUTINES),
SCHEMA_PRIVILEGES => setup_memory_table!(SCHEMA_PRIVILEGES),
TABLE_PRIVILEGES => setup_memory_table!(TABLE_PRIVILEGES),
TRIGGERS => setup_memory_table!(TRIGGERS),
GLOBAL_STATUS => setup_memory_table!(GLOBAL_STATUS),
SESSION_STATUS => setup_memory_table!(SESSION_STATUS),
KEY_COLUMN_USAGE => Some(Arc::new(InformationSchemaKeyColumnUsage::new(
@@ -250,6 +255,15 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
.process_manager
.as_ref()
.map(|p| Arc::new(InformationSchemaProcessList::new(p.clone())) as _),
SSTS_MANIFEST => Some(Arc::new(InformationSchemaSstsManifest::new(
self.catalog_manager.clone(),
)) as _),
SSTS_STORAGE => Some(Arc::new(InformationSchemaSstsStorage::new(
self.catalog_manager.clone(),
)) as _),
SSTS_INDEX_META => Some(Arc::new(InformationSchemaSstsIndexMeta::new(
self.catalog_manager.clone(),
)) as _),
_ => None,
}
}
@@ -321,6 +335,18 @@ impl InformationSchemaProvider {
REGION_STATISTICS.to_string(),
self.build_table(REGION_STATISTICS).unwrap(),
);
tables.insert(
SSTS_MANIFEST.to_string(),
self.build_table(SSTS_MANIFEST).unwrap(),
);
tables.insert(
SSTS_STORAGE.to_string(),
self.build_table(SSTS_STORAGE).unwrap(),
);
tables.insert(
SSTS_INDEX_META.to_string(),
self.build_table(SSTS_INDEX_META).unwrap(),
);
}
tables.insert(TABLES.to_string(), self.build_table(TABLES).unwrap());
@@ -341,7 +367,7 @@ impl InformationSchemaProvider {
}
#[cfg(feature = "enterprise")]
for name in self.extra_table_factories.keys() {
tables.insert(name.to_string(), self.build_table(name).expect(name));
tables.insert(name.clone(), self.build_table(name).expect(name));
}
// Add memory tables
for name in MEMORY_TABLES.iter() {
@@ -409,8 +435,46 @@ pub trait InformationExtension {
/// Get the flow statistics. If no flownode is available, return `None`.
async fn flow_stats(&self) -> std::result::Result<Option<FlowStat>, Self::Error>;
/// Inspects the datanode.
async fn inspect_datanode(
&self,
request: DatanodeInspectRequest,
) -> std::result::Result<SendableRecordBatchStream, Self::Error>;
}
/// The request to inspect the datanode.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct DatanodeInspectRequest {
/// Kind to fetch from datanode.
pub kind: DatanodeInspectKind,
/// Pushdown scan configuration (projection/predicate/limit) for the returned stream.
/// This allows server-side filtering to reduce I/O and network costs.
pub scan: ScanRequest,
}
/// The kind of the datanode inspect request.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub enum DatanodeInspectKind {
/// List SST entries recorded in the manifest.
SstManifest,
/// List SST entries discovered in the storage layer.
SstStorage,
/// List index metadata collected from the manifest.
SstIndexMeta,
}
impl DatanodeInspectRequest {
/// Builds a logical plan for the datanode inspect request.
pub fn build_plan(self) -> std::result::Result<LogicalPlan, DataFusionError> {
match self.kind {
DatanodeInspectKind::SstManifest => ManifestSstEntry::build_plan(self.scan),
DatanodeInspectKind::SstStorage => StorageSstEntry::build_plan(self.scan),
DatanodeInspectKind::SstIndexMeta => PuffinIndexMetaEntry::build_plan(self.scan),
}
}
}
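For context, a caller-side sketch of how these pieces compose, assuming the new types above are in scope and that ScanRequest implements Default with the projection/limit fields used elsewhere in this diff:

    use datafusion::error::DataFusionError;
    use datafusion::logical_expr::LogicalPlan;
    use store_api::storage::ScanRequest;

    // Build a plan that lists manifest SST entries, pushing projection and
    // limit down so the datanode can filter before anything crosses the wire.
    fn manifest_inspect_plan() -> Result<LogicalPlan, DataFusionError> {
        let scan = ScanRequest {
            projection: Some(vec![0, 1]), // hypothetical: first two columns only
            limit: Some(100),
            ..Default::default()
        };
        DatanodeInspectRequest {
            kind: DatanodeInspectKind::SstManifest,
            scan,
        }
        .build_plan()
    }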
pub struct NoopInformationExtension;
#[async_trait::async_trait]
@@ -432,4 +496,11 @@ impl InformationExtension for NoopInformationExtension {
async fn flow_stats(&self) -> std::result::Result<Option<FlowStat>, Self::Error> {
Ok(None)
}
async fn inspect_datanode(
&self,
_request: DatanodeInspectRequest,
) -> std::result::Result<SendableRecordBatchStream, Self::Error> {
Ok(common_recordbatch::RecordBatches::empty().as_stream())
}
}

View File

@@ -18,14 +18,15 @@ use std::time::Duration;
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_CLUSTER_INFO_TABLE_ID;
use common_error::ext::BoxedError;
use common_meta::cluster::NodeInfo;
use common_meta::cluster::{DatanodeStatus, NodeInfo, NodeStatus};
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_time::timestamp::Timestamp;
use common_workload::DatanodeWorkloadType;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::timestamp::TimestampMillisecond;
@@ -33,22 +34,32 @@ use datatypes::value::Value;
use datatypes::vectors::{
Int64VectorBuilder, StringVectorBuilder, TimestampMillisecondVectorBuilder,
};
use serde::Serialize;
use snafu::ResultExt;
use store_api::storage::{ScanRequest, TableId};
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::system_schema::information_schema::{InformationTable, Predicates, CLUSTER_INFO};
use crate::system_schema::utils;
use crate::CatalogManager;
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::system_schema::information_schema::{CLUSTER_INFO, InformationTable, Predicates};
use crate::system_schema::utils;
const PEER_TYPE_FRONTEND: &str = "FRONTEND";
const PEER_TYPE_METASRV: &str = "METASRV";
const PEER_ID: &str = "peer_id";
const PEER_TYPE: &str = "peer_type";
const PEER_ADDR: &str = "peer_addr";
const PEER_HOSTNAME: &str = "peer_hostname";
const TOTAL_CPU_MILLICORES: &str = "total_cpu_millicores";
const TOTAL_MEMORY_BYTES: &str = "total_memory_bytes";
const CPU_USAGE_MILLICORES: &str = "cpu_usage_millicores";
const MEMORY_USAGE_BYTES: &str = "memory_usage_bytes";
const VERSION: &str = "version";
const GIT_COMMIT: &str = "git_commit";
const START_TIME: &str = "start_time";
const UPTIME: &str = "uptime";
const ACTIVE_TIME: &str = "active_time";
const NODE_STATUS: &str = "node_status";
const INIT_CAPACITY: usize = 42;
@@ -57,11 +68,17 @@ const INIT_CAPACITY: usize = 42;
/// - `peer_id`: the peer server id.
/// - `peer_type`: the peer type, such as `datanode`, `frontend`, `metasrv` etc.
/// - `peer_addr`: the peer gRPC address.
/// - `peer_hostname`: the hostname of the peer.
/// - `total_cpu_millicores`: the total CPU millicores of the peer.
/// - `total_memory_bytes`: the total memory bytes of the peer.
/// - `cpu_usage_millicores`: the CPU usage millicores of the peer.
/// - `memory_usage_bytes`: the memory usage bytes of the peer.
/// - `version`: the build package version of the peer.
/// - `git_commit`: the build git commit hash of the peer.
/// - `start_time`: the starting time of the peer.
/// - `uptime`: the uptime of the peer.
/// - `active_time`: the time since the last activity of the peer.
/// - `node_status`: the status info of the peer.
///
#[derive(Debug)]
pub(super) struct InformationSchemaClusterInfo {
@@ -82,6 +99,27 @@ impl InformationSchemaClusterInfo {
ColumnSchema::new(PEER_ID, ConcreteDataType::int64_datatype(), false),
ColumnSchema::new(PEER_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(PEER_ADDR, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(PEER_HOSTNAME, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(
TOTAL_CPU_MILLICORES,
ConcreteDataType::int64_datatype(),
false,
),
ColumnSchema::new(
TOTAL_MEMORY_BYTES,
ConcreteDataType::int64_datatype(),
false,
),
ColumnSchema::new(
CPU_USAGE_MILLICORES,
ConcreteDataType::int64_datatype(),
false,
),
ColumnSchema::new(
MEMORY_USAGE_BYTES,
ConcreteDataType::int64_datatype(),
false,
),
ColumnSchema::new(VERSION, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(GIT_COMMIT, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(
@@ -91,6 +129,7 @@ impl InformationSchemaClusterInfo {
),
ColumnSchema::new(UPTIME, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(ACTIVE_TIME, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(NODE_STATUS, ConcreteDataType::string_datatype(), true),
]))
}
@@ -140,11 +179,17 @@ struct InformationSchemaClusterInfoBuilder {
peer_ids: Int64VectorBuilder,
peer_types: StringVectorBuilder,
peer_addrs: StringVectorBuilder,
peer_hostnames: StringVectorBuilder,
total_cpu_millicores: Int64VectorBuilder,
total_memory_bytes: Int64VectorBuilder,
cpu_usage_millicores: Int64VectorBuilder,
memory_usage_bytes: Int64VectorBuilder,
versions: StringVectorBuilder,
git_commits: StringVectorBuilder,
start_times: TimestampMillisecondVectorBuilder,
uptimes: StringVectorBuilder,
active_times: StringVectorBuilder,
node_status: StringVectorBuilder,
}
impl InformationSchemaClusterInfoBuilder {
@@ -155,11 +200,17 @@ impl InformationSchemaClusterInfoBuilder {
peer_ids: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
peer_types: StringVectorBuilder::with_capacity(INIT_CAPACITY),
peer_addrs: StringVectorBuilder::with_capacity(INIT_CAPACITY),
peer_hostnames: StringVectorBuilder::with_capacity(INIT_CAPACITY),
total_cpu_millicores: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
total_memory_bytes: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
cpu_usage_millicores: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
memory_usage_bytes: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
versions: StringVectorBuilder::with_capacity(INIT_CAPACITY),
git_commits: StringVectorBuilder::with_capacity(INIT_CAPACITY),
start_times: TimestampMillisecondVectorBuilder::with_capacity(INIT_CAPACITY),
uptimes: StringVectorBuilder::with_capacity(INIT_CAPACITY),
active_times: StringVectorBuilder::with_capacity(INIT_CAPACITY),
node_status: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
@@ -176,11 +227,13 @@ impl InformationSchemaClusterInfoBuilder {
fn add_node_info(&mut self, predicates: &Predicates, node_info: NodeInfo) {
let peer_type = node_info.status.role_name();
let peer_id = peer_id(peer_type, node_info.peer.id);
let row = [
(PEER_ID, &Value::from(node_info.peer.id)),
(PEER_ID, &Value::from(peer_id)),
(PEER_TYPE, &Value::from(peer_type)),
(PEER_ADDR, &Value::from(node_info.peer.addr.as_str())),
(PEER_HOSTNAME, &Value::from(node_info.hostname.as_str())),
(VERSION, &Value::from(node_info.version.as_str())),
(GIT_COMMIT, &Value::from(node_info.git_commit.as_str())),
];
@@ -189,15 +242,10 @@ impl InformationSchemaClusterInfoBuilder {
return;
}
if peer_type == "FRONTEND" || peer_type == "METASRV" {
// Always set peer_id to be -1 for frontends and metasrvs
self.peer_ids.push(Some(-1));
} else {
self.peer_ids.push(Some(node_info.peer.id as i64));
}
self.peer_ids.push(Some(peer_id));
self.peer_types.push(Some(peer_type));
self.peer_addrs.push(Some(&node_info.peer.addr));
self.peer_hostnames.push(Some(&node_info.hostname));
self.versions.push(Some(&node_info.version));
self.git_commits.push(Some(&node_info.git_commit));
if node_info.start_time_ms > 0 {
@@ -212,6 +260,14 @@ impl InformationSchemaClusterInfoBuilder {
self.start_times.push(None);
self.uptimes.push(None);
}
self.total_cpu_millicores
.push(Some(node_info.total_cpu_millicores));
self.total_memory_bytes
.push(Some(node_info.total_memory_bytes));
self.cpu_usage_millicores
.push(Some(node_info.cpu_usage_millicores));
self.memory_usage_bytes
.push(Some(node_info.memory_usage_bytes));
if node_info.last_activity_ts > 0 {
self.active_times.push(Some(
@@ -220,6 +276,8 @@ impl InformationSchemaClusterInfoBuilder {
} else {
self.active_times.push(None);
}
self.node_status
.push(format_node_status(&node_info).as_deref());
}
fn format_duration_since(ts: u64) -> String {
@@ -233,11 +291,17 @@ impl InformationSchemaClusterInfoBuilder {
Arc::new(self.peer_ids.finish()),
Arc::new(self.peer_types.finish()),
Arc::new(self.peer_addrs.finish()),
Arc::new(self.peer_hostnames.finish()),
Arc::new(self.total_cpu_millicores.finish()),
Arc::new(self.total_memory_bytes.finish()),
Arc::new(self.cpu_usage_millicores.finish()),
Arc::new(self.memory_usage_bytes.finish()),
Arc::new(self.versions.finish()),
Arc::new(self.git_commits.finish()),
Arc::new(self.start_times.finish()),
Arc::new(self.uptimes.finish()),
Arc::new(self.active_times.finish()),
Arc::new(self.node_status.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
@@ -263,3 +327,56 @@ impl DfPartitionStream for InformationSchemaClusterInfo {
))
}
}
fn peer_id(peer_type: &str, peer_id: u64) -> i64 {
if peer_type == PEER_TYPE_FRONTEND || peer_type == PEER_TYPE_METASRV {
-1
} else {
peer_id as i64
}
}
#[derive(Serialize)]
struct DisplayMetasrvStatus {
is_leader: bool,
}
#[derive(Serialize)]
struct DisplayDatanodeStatus {
workloads: Vec<DatanodeWorkloadType>,
leader_regions: usize,
follower_regions: usize,
}
impl From<&DatanodeStatus> for DisplayDatanodeStatus {
fn from(status: &DatanodeStatus) -> Self {
Self {
workloads: status
.workloads
.types
.iter()
.flat_map(|w| DatanodeWorkloadType::from_i32(*w))
.collect(),
leader_regions: status.leader_regions,
follower_regions: status.follower_regions,
}
}
}
fn format_node_status(node_info: &NodeInfo) -> Option<String> {
match &node_info.status {
NodeStatus::Datanode(datanode_status) => {
serde_json::to_string(&DisplayDatanodeStatus::from(datanode_status)).ok()
}
NodeStatus::Frontend(_) => None,
NodeStatus::Flownode(_) => None,
NodeStatus::Metasrv(metasrv_status) => {
if metasrv_status.is_leader {
serde_json::to_string(&DisplayMetasrvStatus { is_leader: true }).ok()
} else {
None
}
}
NodeStatus::Standalone => None,
}
}
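The node_status column therefore carries role-specific JSON, or NULL. A minimal sketch of the datanode shape, with the workloads field simplified to raw i32s since DatanodeWorkloadType is defined elsewhere (values are illustrative):

    use serde::Serialize;

    #[derive(Serialize)]
    struct DisplayDatanodeStatus {
        workloads: Vec<i32>, // simplified; the real field is Vec<DatanodeWorkloadType>
        leader_regions: usize,
        follower_regions: usize,
    }

    fn main() {
        let status = DisplayDatanodeStatus {
            workloads: vec![0],
            leader_regions: 3,
            follower_regions: 1,
        };
        // The string stored in the nullable node_status column looks like:
        // {"workloads":[0],"leader_regions":3,"follower_regions":1}
        println!("{}", serde_json::to_string(&status).unwrap());
    }

Frontends, flownodes, metasrv followers, and standalone nodes all store NULL; only a metasrv leader reports {"is_leader":true}.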

View File

@@ -23,9 +23,9 @@ use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, DataType, MutableVector};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
@@ -38,12 +38,12 @@ use snafu::{OptionExt, ResultExt};
use sql::statements;
use store_api::storage::{ScanRequest, TableId};
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::information_schema::{InformationTable, COLUMNS};
use crate::CatalogManager;
use crate::system_schema::information_schema::{COLUMNS, InformationTable};
#[derive(Debug)]
pub(super) struct InformationSchemaColumns {

View File

@@ -16,10 +16,10 @@ use std::sync::{Arc, Weak};
use common_catalog::consts::INFORMATION_SCHEMA_FLOW_TABLE_ID;
use common_error::ext::BoxedError;
use common_meta::key::FlowId;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::flow::flow_info::FlowInfoValue;
use common_meta::key::flow::flow_state::FlowStat;
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::FlowId;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
@@ -38,14 +38,14 @@ use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, FlowInfoNotFoundSnafu, InternalSnafu, JsonSnafu, ListFlowsSnafu,
Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::{Predicates, FLOWS};
use crate::information_schema::{FLOWS, Predicates};
use crate::system_schema::information_schema::InformationTable;
use crate::system_schema::utils;
use crate::CatalogManager;
const INIT_CAPACITY: usize = 42;
@@ -254,9 +254,9 @@ impl InformationSchemaFlowsBuilder {
.await
.map_err(BoxedError::new)
.context(InternalSnafu)?
.context(FlowInfoNotFoundSnafu {
catalog_name: catalog_name.to_string(),
flow_name: flow_name.to_string(),
.with_context(|| FlowInfoNotFoundSnafu {
catalog_name: catalog_name.clone(),
flow_name: flow_name.clone(),
})?;
self.add_flow(&predicates, flow_id.flow_id(), flow_info, &flow_stat)
.await?;
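The context → with_context switch above is a small allocation fix: with_context takes a closure, so the owned catalog and flow names are only built on the error path. A self-contained sketch of the idiom, with a hypothetical error type in place of the real error enum:

    use snafu::{OptionExt, Snafu};

    #[derive(Debug, Snafu)]
    #[snafu(display("flow `{flow_name}` not found in catalog `{catalog_name}`"))]
    struct FlowInfoNotFound {
        catalog_name: String,
        flow_name: String,
    }

    fn lookup(found: Option<u32>, catalog: &str, flow: &str) -> Result<u32, FlowInfoNotFound> {
        // The closure runs only when `found` is None, so the success path
        // pays for no String allocations.
        found.with_context(|| FlowInfoNotFoundSnafu {
            catalog_name: catalog.to_string(),
            flow_name: flow.to_string(),
        })
    }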
@@ -273,11 +273,11 @@ impl InformationSchemaFlowsBuilder {
flow_stat: &Option<FlowStat>,
) -> Result<()> {
let row = [
(FLOW_NAME, &Value::from(flow_info.flow_name().to_string())),
(FLOW_NAME, &Value::from(flow_info.flow_name().clone())),
(FLOW_ID, &Value::from(flow_id)),
(
TABLE_CATALOG,
&Value::from(flow_info.catalog_name().to_string()),
&Value::from(flow_info.catalog_name().clone()),
),
];
if !predicates.eval(&row) {

View File

@@ -15,8 +15,7 @@
use std::sync::Arc;
use common_catalog::consts::{METRIC_ENGINE, MITO_ENGINE};
use datatypes::data_type::ConcreteDataType;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::schema::{Schema, SchemaRef};
use datatypes::vectors::{Int64Vector, StringVector, VectorRef};
use crate::system_schema::information_schema::table_names::*;
@@ -89,9 +88,9 @@ pub(super) fn get_schema_columns(table_name: &str) -> (SchemaRef, Vec<VectorRef>
vec![
Arc::new(StringVector::from(vec![build_info.branch.to_string()])),
Arc::new(StringVector::from(vec![build_info.commit.to_string()])),
Arc::new(StringVector::from(vec![build_info
.commit_short
.to_string()])),
Arc::new(StringVector::from(vec![
build_info.commit_short.to_string(),
])),
Arc::new(StringVector::from(vec![build_info.clean.to_string()])),
Arc::new(StringVector::from(vec![build_info.version.to_string()])),
],
@@ -366,24 +365,6 @@ pub(super) fn get_schema_columns(table_name: &str) -> (SchemaRef, Vec<VectorRef>
vec![],
),
TRIGGERS => (
vec![
string_column("TRIGGER_NAME"),
ColumnSchema::new(
"trigger_id",
ConcreteDataType::uint64_datatype(),
false,
),
string_column("TRIGGER_DEFINITION"),
ColumnSchema::new(
"flownode_id",
ConcreteDataType::uint64_datatype(),
true,
),
],
vec![],
),
// TODO: Consider storing internal metrics in `global_status` and
// `session_status` tables.
GLOBAL_STATUS => (

View File

@@ -20,9 +20,9 @@ use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, MutableVector, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, FulltextBackend, Schema, SchemaRef};
use datatypes::value::Value;
@@ -31,11 +31,11 @@ use futures_util::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::system_schema::information_schema::{InformationTable, Predicates, KEY_COLUMN_USAGE};
use crate::CatalogManager;
use crate::system_schema::information_schema::{InformationTable, KEY_COLUMN_USAGE, Predicates};
pub const CONSTRAINT_SCHEMA: &str = "constraint_schema";
pub const CONSTRAINT_NAME: &str = "constraint_name";
@@ -277,15 +277,15 @@ impl InformationSchemaKeyColumnUsageBuilder {
constraints.push(CONSTRAINT_NAME_INVERTED_INDEX);
greptime_index_type.push(INDEX_TYPE_INVERTED_INDEX);
}
if let Ok(Some(options)) = column.fulltext_options() {
if options.enable {
constraints.push(CONSTRAINT_NAME_FULLTEXT_INDEX);
let index_type = match options.backend {
FulltextBackend::Bloom => INDEX_TYPE_FULLTEXT_BLOOM,
FulltextBackend::Tantivy => INDEX_TYPE_FULLTEXT_TANTIVY,
};
greptime_index_type.push(index_type);
}
if let Ok(Some(options)) = column.fulltext_options()
&& options.enable
{
constraints.push(CONSTRAINT_NAME_FULLTEXT_INDEX);
let index_type = match options.backend {
FulltextBackend::Bloom => INDEX_TYPE_FULLTEXT_BLOOM,
FulltextBackend::Tantivy => INDEX_TYPE_FULLTEXT_TANTIVY,
};
greptime_index_type.push(index_type);
}
if column.is_skipping_indexed() {
constraints.push(CONSTRAINT_NAME_SKIPPING_INDEX);

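The rewrite above collapses a nested if into a let-chain, stable under the Rust 2024 edition (likely the same edition/style bump behind the import reorderings recurring throughout this diff). A standalone sketch of the same shape, with stand-in types for the real fulltext options:

    enum Backend {
        Bloom,
        Tantivy,
    }

    struct FulltextOptions {
        enable: bool,
        backend: Backend,
    }

    // Requires edition 2024: the `if let` pattern and the boolean guard are
    // chained with `&&` instead of nesting a second `if`.
    fn index_type(options: Result<Option<FulltextOptions>, ()>) -> Option<&'static str> {
        if let Ok(Some(options)) = options
            && options.enable
        {
            Some(match options.backend {
                Backend::Bloom => "FULLTEXT_INDEX_BLOOM",
                Backend::Tantivy => "FULLTEXT_INDEX_TANTIVY",
            })
        } else {
            None
        }
    }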
View File

@@ -21,17 +21,16 @@ use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::timestamp::TimestampMicrosecond;
use datatypes::timestamp::TimestampSecond;
use datatypes::value::Value;
use datatypes::vectors::{
ConstantVector, Int64Vector, Int64VectorBuilder, MutableVector, StringVector,
StringVectorBuilder, TimestampMicrosecondVector, TimestampMicrosecondVectorBuilder,
UInt64VectorBuilder,
StringVectorBuilder, TimestampSecondVector, TimestampSecondVectorBuilder, UInt64VectorBuilder,
};
use futures::{StreamExt, TryStreamExt};
use partition::manager::PartitionInfo;
@@ -39,13 +38,13 @@ use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use table::metadata::{TableInfo, TableType};
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, FindPartitionsSnafu, InternalSnafu, PartitionManagerNotFoundSnafu,
Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::kvbackend::KvBackendCatalogManager;
use crate::system_schema::information_schema::{InformationTable, Predicates, PARTITIONS};
use crate::CatalogManager;
use crate::system_schema::information_schema::{InformationTable, PARTITIONS, Predicates};
const TABLE_CATALOG: &str = "table_catalog";
const TABLE_SCHEMA: &str = "table_schema";
@@ -129,17 +128,17 @@ impl InformationSchemaPartitions {
ColumnSchema::new("data_free", ConcreteDataType::int64_datatype(), true),
ColumnSchema::new(
"create_time",
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_second_datatype(),
true,
),
ColumnSchema::new(
"update_time",
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_second_datatype(),
true,
),
ColumnSchema::new(
"check_time",
ConcreteDataType::timestamp_microsecond_datatype(),
ConcreteDataType::timestamp_second_datatype(),
true,
),
ColumnSchema::new("checksum", ConcreteDataType::int64_datatype(), true),
@@ -212,7 +211,7 @@ struct InformationSchemaPartitionsBuilder {
partition_names: StringVectorBuilder,
partition_ordinal_positions: Int64VectorBuilder,
partition_expressions: StringVectorBuilder,
create_times: TimestampMicrosecondVectorBuilder,
create_times: TimestampSecondVectorBuilder,
partition_ids: UInt64VectorBuilder,
}
@@ -232,7 +231,7 @@ impl InformationSchemaPartitionsBuilder {
partition_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
partition_ordinal_positions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
partition_expressions: StringVectorBuilder::with_capacity(INIT_CAPACITY),
create_times: TimestampMicrosecondVectorBuilder::with_capacity(INIT_CAPACITY),
create_times: TimestampSecondVectorBuilder::with_capacity(INIT_CAPACITY),
partition_ids: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
}
}
@@ -331,8 +330,8 @@ impl InformationSchemaPartitionsBuilder {
.push(Some((index + 1) as i64));
let expression = partition.partition_expr.as_ref().map(|e| e.to_string());
self.partition_expressions.push(expression.as_deref());
self.create_times.push(Some(TimestampMicrosecond::from(
table_info.meta.created_on.timestamp_millis(),
self.create_times.push(Some(TimestampSecond::from(
table_info.meta.created_on.timestamp(),
)));
self.partition_ids.push(Some(partition.id.as_u64()));
}
@@ -349,8 +348,8 @@ impl InformationSchemaPartitionsBuilder {
Arc::new(Int64Vector::from(vec![None])),
rows_num,
));
let null_timestampmicrosecond_vector = Arc::new(ConstantVector::new(
Arc::new(TimestampMicrosecondVector::from(vec![None])),
let null_timestamp_second_vector = Arc::new(ConstantVector::new(
Arc::new(TimestampSecondVector::from(vec![None])),
rows_num,
));
let partition_methods = Arc::new(ConstantVector::new(
@@ -380,8 +379,8 @@ impl InformationSchemaPartitionsBuilder {
null_i64_vector.clone(),
Arc::new(self.create_times.finish()),
// TODO(dennis): supports update_time
null_timestampmicrosecond_vector.clone(),
null_timestampmicrosecond_vector,
null_timestamp_second_vector.clone(),
null_timestamp_second_vector,
null_i64_vector,
null_string_vector.clone(),
null_string_vector.clone(),

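Worth noting on the create_time change above: the old code pushed timestamp_millis() into a microsecond column, skewing values by a factor of 1000; the new second-precision column is fed chrono's timestamp(), so unit and column type now agree. A small sketch of the difference, assuming the chrono crate:

    use chrono::{TimeZone, Utc};

    fn main() {
        let created_on = Utc.with_ymd_and_hms(2025, 11, 8, 12, 0, 0).unwrap();
        // Whole seconds since the Unix epoch: matches a second-precision column.
        assert_eq!(created_on.timestamp(), 1_762_603_200);
        // Milliseconds are 1000x larger; interpreted as microseconds they read
        // as a time shortly after the epoch, off by another factor of 1000.
        assert_eq!(created_on.timestamp_millis(), 1_762_603_200_000);
    }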
View File

@@ -22,9 +22,9 @@ use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_time::timestamp::Timestamp;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::timestamp::TimestampMillisecond;
@@ -33,10 +33,10 @@ use datatypes::vectors::{StringVectorBuilder, TimestampMillisecondVectorBuilder}
use snafu::ResultExt;
use store_api::storage::{ScanRequest, TableId};
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::system_schema::information_schema::{InformationTable, Predicates, PROCEDURE_INFO};
use crate::system_schema::utils;
use crate::CatalogManager;
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::system_schema::information_schema::{InformationTable, PROCEDURE_INFO, Predicates};
use crate::system_schema::utils;
const PROCEDURE_ID: &str = "procedure_id";
const PROCEDURE_TYPE: &str = "procedure_type";

View File

@@ -135,7 +135,7 @@ async fn make_process_list(
for process in queries {
let display_id = DisplayProcessId {
server_addr: process.frontend.to_string(),
server_addr: process.frontend.clone(),
id: process.id,
}
.to_string();

View File

@@ -23,9 +23,9 @@ use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::common::HashMap;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
@@ -35,13 +35,13 @@ use snafu::{OptionExt, ResultExt};
use store_api::storage::{RegionId, ScanRequest, TableId};
use table::metadata::TableType;
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, FindRegionRoutesSnafu, InternalSnafu, Result,
UpgradeWeakCatalogManagerRefSnafu,
};
use crate::kvbackend::KvBackendCatalogManager;
use crate::system_schema::information_schema::{InformationTable, Predicates, REGION_PEERS};
use crate::CatalogManager;
pub const TABLE_CATALOG: &str = "table_catalog";
pub const TABLE_SCHEMA: &str = "table_schema";
@@ -199,10 +199,7 @@ impl InformationSchemaRegionPeersBuilder {
if table_info.table_type == TableType::Temporary {
Ok(None)
} else {
Ok(Some((
table_info.ident.table_id,
table_info.name.to_string(),
)))
Ok(Some((table_info.ident.table_id, table_info.name.clone())))
}
});

View File

@@ -30,16 +30,17 @@ use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, UInt64VectorB
use snafu::ResultExt;
use store_api::storage::{ScanRequest, TableId};
use crate::CatalogManager;
use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
use crate::information_schema::Predicates;
use crate::system_schema::information_schema::{InformationTable, REGION_STATISTICS};
use crate::system_schema::utils;
use crate::CatalogManager;
const REGION_ID: &str = "region_id";
const TABLE_ID: &str = "table_id";
const REGION_NUMBER: &str = "region_number";
const REGION_ROWS: &str = "region_rows";
const WRITTEN_BYTES: &str = "written_bytes_since_open";
const DISK_SIZE: &str = "disk_size";
const MEMTABLE_SIZE: &str = "memtable_size";
const MANIFEST_SIZE: &str = "manifest_size";
@@ -57,6 +58,7 @@ const INIT_CAPACITY: usize = 42;
/// - `table_id`: The table id.
/// - `region_number`: The region number.
/// - `region_rows`: The number of rows in region.
/// - `written_bytes_since_open`: The total bytes written to the region since it was opened.
/// - `memtable_size`: The memtable size in bytes.
/// - `disk_size`: The approximate disk size in bytes.
/// - `manifest_size`: The manifest size in bytes.
@@ -84,6 +86,7 @@ impl InformationSchemaRegionStatistics {
ColumnSchema::new(TABLE_ID, ConcreteDataType::uint32_datatype(), false),
ColumnSchema::new(REGION_NUMBER, ConcreteDataType::uint32_datatype(), false),
ColumnSchema::new(REGION_ROWS, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(WRITTEN_BYTES, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(DISK_SIZE, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(MEMTABLE_SIZE, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(MANIFEST_SIZE, ConcreteDataType::uint64_datatype(), true),
@@ -147,6 +150,7 @@ struct InformationSchemaRegionStatisticsBuilder {
table_ids: UInt32VectorBuilder,
region_numbers: UInt32VectorBuilder,
region_rows: UInt64VectorBuilder,
written_bytes: UInt64VectorBuilder,
disk_sizes: UInt64VectorBuilder,
memtable_sizes: UInt64VectorBuilder,
manifest_sizes: UInt64VectorBuilder,
@@ -166,6 +170,7 @@ impl InformationSchemaRegionStatisticsBuilder {
table_ids: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
region_numbers: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
region_rows: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
written_bytes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
disk_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
memtable_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
manifest_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
@@ -197,6 +202,7 @@ impl InformationSchemaRegionStatisticsBuilder {
(TABLE_ID, &Value::from(region_stat.id.table_id())),
(REGION_NUMBER, &Value::from(region_stat.id.region_number())),
(REGION_ROWS, &Value::from(region_stat.num_rows)),
(WRITTEN_BYTES, &Value::from(region_stat.written_bytes)),
(DISK_SIZE, &Value::from(region_stat.approximate_bytes)),
(MEMTABLE_SIZE, &Value::from(region_stat.memtable_size)),
(MANIFEST_SIZE, &Value::from(region_stat.manifest_size)),
@@ -216,6 +222,7 @@ impl InformationSchemaRegionStatisticsBuilder {
self.region_numbers
.push(Some(region_stat.id.region_number()));
self.region_rows.push(Some(region_stat.num_rows));
self.written_bytes.push(Some(region_stat.written_bytes));
self.disk_sizes.push(Some(region_stat.approximate_bytes));
self.memtable_sizes.push(Some(region_stat.memtable_size));
self.manifest_sizes.push(Some(region_stat.manifest_size));
@@ -232,6 +239,7 @@ impl InformationSchemaRegionStatisticsBuilder {
Arc::new(self.table_ids.finish()),
Arc::new(self.region_numbers.finish()),
Arc::new(self.region_rows.finish()),
Arc::new(self.written_bytes.finish()),
Arc::new(self.disk_sizes.finish()),
Arc::new(self.memtable_sizes.finish()),
Arc::new(self.manifest_sizes.finish()),

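As the written_bytes hunks show, a new information_schema column touches four places that must stay in lockstep: the ColumnSchema list, the builder struct, the per-row push, and the column order in the final vector. A condensed sketch of that invariant using the datatypes builders seen above (the two-column table is hypothetical):

    use std::sync::Arc;

    use datatypes::prelude::{ScalarVectorBuilder, VectorRef};
    use datatypes::vectors::UInt64VectorBuilder;

    struct StatsBuilder {
        region_rows: UInt64VectorBuilder,
        written_bytes: UInt64VectorBuilder, // the newly added column
    }

    impl StatsBuilder {
        fn new() -> Self {
            Self {
                region_rows: UInt64VectorBuilder::with_capacity(42),
                written_bytes: UInt64VectorBuilder::with_capacity(42),
            }
        }

        // Every builder must receive exactly one value per row, even if None.
        fn push_row(&mut self, rows: u64, written: u64) {
            self.region_rows.push(Some(rows));
            self.written_bytes.push(Some(written));
        }

        // The order here must mirror the ColumnSchema order in schema().
        fn finish(mut self) -> Vec<VectorRef> {
            vec![
                Arc::new(self.region_rows.finish()),
                Arc::new(self.written_bytes.finish()),
            ]
        }
    }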
View File

@@ -21,9 +21,9 @@ use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_time::util::current_time_millis;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, MutableVector};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};

View File

@@ -21,9 +21,9 @@ use common_meta::key::schema_name::SchemaNameKey;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
@@ -31,13 +31,13 @@ use datatypes::vectors::StringVectorBuilder;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, TableMetadataManagerSnafu,
UpgradeWeakCatalogManagerRefSnafu,
};
use crate::system_schema::information_schema::{InformationTable, Predicates, SCHEMATA};
use crate::system_schema::utils;
use crate::CatalogManager;
pub const CATALOG_NAME: &str = "catalog_name";
pub const SCHEMA_NAME: &str = "schema_name";

View File

@@ -0,0 +1,199 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use common_catalog::consts::{
INFORMATION_SCHEMA_SSTS_INDEX_META_TABLE_ID, INFORMATION_SCHEMA_SSTS_MANIFEST_TABLE_ID,
INFORMATION_SCHEMA_SSTS_STORAGE_TABLE_ID,
};
use common_error::ext::BoxedError;
use common_recordbatch::SendableRecordBatchStream;
use common_recordbatch::adapter::AsyncRecordBatchStreamAdapter;
use datatypes::schema::SchemaRef;
use snafu::ResultExt;
use store_api::sst_entry::{ManifestSstEntry, PuffinIndexMetaEntry, StorageSstEntry};
use store_api::storage::{ScanRequest, TableId};
use crate::CatalogManager;
use crate::error::{ProjectSchemaSnafu, Result};
use crate::information_schema::{
DatanodeInspectKind, DatanodeInspectRequest, InformationTable, SSTS_INDEX_META, SSTS_MANIFEST,
SSTS_STORAGE,
};
use crate::system_schema::utils;
/// Information schema table for the SST manifest.
pub struct InformationSchemaSstsManifest {
schema: SchemaRef,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaSstsManifest {
pub(super) fn new(catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: ManifestSstEntry::schema(),
catalog_manager,
}
}
}
impl InformationTable for InformationSchemaSstsManifest {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_SSTS_MANIFEST_TABLE_ID
}
fn table_name(&self) -> &'static str {
SSTS_MANIFEST
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = if let Some(p) = &request.projection {
Arc::new(self.schema.try_project(p).context(ProjectSchemaSnafu)?)
} else {
self.schema.clone()
};
let info_ext = utils::information_extension(&self.catalog_manager)?;
let req = DatanodeInspectRequest {
kind: DatanodeInspectKind::SstManifest,
scan: request,
};
let future = async move {
info_ext
.inspect_datanode(req)
.await
.map_err(BoxedError::new)
.context(common_recordbatch::error::ExternalSnafu)
};
Ok(Box::pin(AsyncRecordBatchStreamAdapter::new(
schema,
Box::pin(future),
)))
}
}
/// Information schema table for SST storage.
pub struct InformationSchemaSstsStorage {
schema: SchemaRef,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaSstsStorage {
pub(super) fn new(catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: StorageSstEntry::schema(),
catalog_manager,
}
}
}
impl InformationTable for InformationSchemaSstsStorage {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_SSTS_STORAGE_TABLE_ID
}
fn table_name(&self) -> &'static str {
SSTS_STORAGE
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = if let Some(p) = &request.projection {
Arc::new(self.schema.try_project(p).context(ProjectSchemaSnafu)?)
} else {
self.schema.clone()
};
let info_ext = utils::information_extension(&self.catalog_manager)?;
let req = DatanodeInspectRequest {
kind: DatanodeInspectKind::SstStorage,
scan: request,
};
let future = async move {
info_ext
.inspect_datanode(req)
.await
.map_err(BoxedError::new)
.context(common_recordbatch::error::ExternalSnafu)
};
Ok(Box::pin(AsyncRecordBatchStreamAdapter::new(
schema,
Box::pin(future),
)))
}
}
/// Information schema table for index metadata.
pub struct InformationSchemaSstsIndexMeta {
schema: SchemaRef,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaSstsIndexMeta {
pub(super) fn new(catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: PuffinIndexMetaEntry::schema(),
catalog_manager,
}
}
}
impl InformationTable for InformationSchemaSstsIndexMeta {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_SSTS_INDEX_META_TABLE_ID
}
fn table_name(&self) -> &'static str {
SSTS_INDEX_META
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = if let Some(p) = &request.projection {
Arc::new(self.schema.try_project(p).context(ProjectSchemaSnafu)?)
} else {
self.schema.clone()
};
let info_ext = utils::information_extension(&self.catalog_manager)?;
let req = DatanodeInspectRequest {
kind: DatanodeInspectKind::SstIndexMeta,
scan: request,
};
let future = async move {
info_ext
.inspect_datanode(req)
.await
.map_err(BoxedError::new)
.context(common_recordbatch::error::ExternalSnafu)
};
Ok(Box::pin(AsyncRecordBatchStreamAdapter::new(
schema,
Box::pin(future),
)))
}
}
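All three tables in this file share one shape: the projected schema is resolved synchronously, while the datanode round-trip is deferred into a future that AsyncRecordBatchStreamAdapter polls on demand. A type-level sketch of that split, with simplified stand-ins for the schema and stream types:

    use std::future::Future;
    use std::pin::Pin;

    // Simplified stand-ins for SchemaRef and SendableRecordBatchStream.
    type Schema = Vec<&'static str>;
    type Batches = Vec<Vec<i64>>;
    type BoxedFuture = Pin<Box<dyn Future<Output = Result<Batches, String>> + Send>>;

    // Mirrors AsyncRecordBatchStreamAdapter::new(schema, future): the schema is
    // known up front, the batches only materialize when the stream is polled.
    struct AsyncStreamAdapter {
        schema: Schema,
        pending: BoxedFuture,
    }

    fn to_stream(projection: Option<Vec<usize>>) -> AsyncStreamAdapter {
        let full: Schema = vec!["table_id", "region_id", "file_path"];
        // Resolve the projected schema eagerly so callers can plan against it.
        let schema = match &projection {
            Some(p) => p.iter().map(|i| full[*i]).collect(),
            None => full,
        };
        // Defer the remote inspect call; nothing runs until the first poll.
        let pending: BoxedFuture = Box::pin(async move { Ok(vec![vec![1, 2, 3]]) });
        AsyncStreamAdapter { schema, pending }
    }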

View File

@@ -20,9 +20,9 @@ use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, MutableVector};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
@@ -32,15 +32,15 @@ use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::information_schema::key_column_usage::{
CONSTRAINT_NAME_PRI, CONSTRAINT_NAME_TIME_INDEX,
};
use crate::information_schema::Predicates;
use crate::system_schema::information_schema::{InformationTable, TABLE_CONSTRAINTS};
use crate::CatalogManager;
/// The `TABLE_CONSTRAINTS` table describes which tables have constraints.
#[derive(Debug)]

View File

@@ -48,3 +48,6 @@ pub const FLOWS: &str = "flows";
pub const PROCEDURE_INFO: &str = "procedure_info";
pub const REGION_STATISTICS: &str = "region_statistics";
pub const PROCESS_LIST: &str = "process_list";
pub const SSTS_MANIFEST: &str = "ssts_manifest";
pub const SSTS_STORAGE: &str = "ssts_storage";
pub const SSTS_INDEX_META: &str = "ssts_index_meta";

View File

@@ -23,9 +23,9 @@ use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_telemetry::error;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
@@ -37,12 +37,12 @@ use snafu::{OptionExt, ResultExt};
use store_api::storage::{RegionId, ScanRequest, TableId};
use table::metadata::{TableInfo, TableType};
use crate::CatalogManager;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::system_schema::information_schema::{InformationTable, Predicates, TABLES};
use crate::system_schema::utils;
use crate::CatalogManager;
pub const TABLE_CATALOG: &str = "table_catalog";
pub const TABLE_SCHEMA: &str = "table_schema";
@@ -371,7 +371,8 @@ impl InformationSchemaTablesBuilder {
self.auto_increment.push(Some(0));
self.row_format.push(Some("Fixed"));
self.table_collation.push(Some("utf8_bin"));
self.update_time.push(None);
self.update_time
.push(Some(table_info.meta.updated_on.timestamp().into()));
self.check_time.push(None);
// use mariadb default table version number here
self.version.push(Some(11));

View File

@@ -20,9 +20,9 @@ use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::value::Value;
@@ -32,13 +32,13 @@ use snafu::{OptionExt, ResultExt};
use store_api::storage::{ScanRequest, TableId};
use table::metadata::TableType;
use crate::CatalogManager;
use crate::error::{
CastManagerSnafu, CreateRecordBatchSnafu, GetViewCacheSnafu, InternalSnafu, Result,
UpgradeWeakCatalogManagerRefSnafu, ViewInfoNotFoundSnafu,
};
use crate::kvbackend::KvBackendCatalogManager;
use crate::system_schema::information_schema::{InformationTable, Predicates, VIEWS};
use crate::CatalogManager;
const INIT_CAPACITY: usize = 42;
pub const TABLE_CATALOG: &str = "table_catalog";

View File

@@ -21,9 +21,9 @@ use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::schema::SchemaRef;
use datatypes::vectors::VectorRef;
use snafu::ResultExt;

View File

@@ -0,0 +1,59 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#[cfg(any(test, feature = "testing", debug_assertions))]
use common_catalog::consts::NUMBERS_TABLE_ID;
use table::TableRef;
#[cfg(any(test, feature = "testing", debug_assertions))]
use table::table::numbers::NUMBERS_TABLE_NAME;
#[cfg(any(test, feature = "testing", debug_assertions))]
use table::table::numbers::NumbersTable;
// NumbersTableProvider is a dedicated provider for feature-gating the numbers table.
#[derive(Clone)]
pub struct NumbersTableProvider;
#[cfg(any(test, feature = "testing", debug_assertions))]
impl NumbersTableProvider {
pub(crate) fn table_exists(&self, name: &str) -> bool {
name == NUMBERS_TABLE_NAME
}
pub(crate) fn table_names(&self) -> Vec<String> {
vec![NUMBERS_TABLE_NAME.to_string()]
}
pub(crate) fn table(&self, name: &str) -> Option<TableRef> {
if name == NUMBERS_TABLE_NAME {
Some(NumbersTable::table(NUMBERS_TABLE_ID))
} else {
None
}
}
}
#[cfg(not(any(test, feature = "testing", debug_assertions)))]
impl NumbersTableProvider {
pub(crate) fn table_exists(&self, _name: &str) -> bool {
false
}
pub(crate) fn table_names(&self) -> Vec<String> {
vec![]
}
pub(crate) fn table(&self, _name: &str) -> Option<TableRef> {
None
}
}
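The two impls above are selected at compile time. A trimmed sketch of the same cfg split, keeping only debug_assertions from the original gate:

    #[derive(Clone)]
    pub struct NumbersTableProvider;

    #[cfg(debug_assertions)]
    impl NumbersTableProvider {
        pub fn table_names(&self) -> Vec<String> {
            vec!["numbers".to_string()]
        }
    }

    #[cfg(not(debug_assertions))]
    impl NumbersTableProvider {
        // Release builds compile this empty variant, so the numbers table
        // cannot leak into production catalogs.
        pub fn table_names(&self) -> Vec<String> {
            vec![]
        }
    }

    fn main() {
        // Prints ["numbers"] under a debug build, [] under --release.
        println!("{:?}", NumbersTableProvider.table_names());
    }

The real gate also admits #[cfg(test)] and the testing feature, so tests behave like debug builds.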

View File

@@ -12,53 +12,42 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod pg_catalog_memory_table;
mod pg_class;
mod pg_database;
mod pg_namespace;
mod table_names;
use std::collections::HashMap;
use std::sync::{Arc, LazyLock, Weak};
use std::sync::{Arc, Weak};
use common_catalog::consts::{self, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, PG_CATALOG_NAME};
use datatypes::schema::ColumnSchema;
use lazy_static::lazy_static;
use paste::paste;
use pg_catalog_memory_table::get_schema_columns;
use pg_class::PGClass;
use pg_database::PGDatabase;
use pg_namespace::PGNamespace;
use session::context::{Channel, QueryContext};
use arrow_schema::SchemaRef;
use async_trait::async_trait;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, PG_CATALOG_NAME, PG_CATALOG_TABLE_ID_START};
use common_error::ext::BoxedError;
use common_recordbatch::SendableRecordBatchStream;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_telemetry::warn;
use datafusion::datasource::TableType;
use datafusion::error::DataFusionError;
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion_pg_catalog::pg_catalog::catalog_info::CatalogInfo;
use datafusion_pg_catalog::pg_catalog::context::EmptyContextProvider;
use datafusion_pg_catalog::pg_catalog::{
PG_CATALOG_TABLES, PgCatalogSchemaProvider, PgCatalogStaticTables, PgCatalogTable,
};
use snafu::ResultExt;
use store_api::storage::ScanRequest;
use table::TableRef;
pub use table_names::*;
use table::metadata::TableId;
use self::pg_namespace::oid_map::{PGNamespaceOidMap, PGNamespaceOidMapRef};
use crate::system_schema::memory_table::MemoryTable;
use crate::system_schema::utils::tables::u32_column;
use crate::system_schema::{SystemSchemaProvider, SystemSchemaProviderInner, SystemTableRef};
use crate::CatalogManager;
lazy_static! {
static ref MEMORY_TABLES: &'static [&'static str] = &[table_names::PG_TYPE];
}
/// The column name for the OID column.
/// The OID column is a unique identifier of type u32 for each object in the database.
const OID_COLUMN_NAME: &str = "oid";
fn oid_column() -> ColumnSchema {
u32_column(OID_COLUMN_NAME)
}
use crate::error::{InternalSnafu, ProjectSchemaSnafu, Result};
use crate::system_schema::{
SystemSchemaProvider, SystemSchemaProviderInner, SystemTable, SystemTableRef,
};
/// [`PGCatalogProvider`] is the provider for a schema named `pg_catalog`, it is not a catalog.
pub struct PGCatalogProvider {
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
inner: PgCatalogSchemaProvider<CatalogManagerWrapper, EmptyContextProvider>,
tables: HashMap<String, TableRef>,
// Workaround to store a mapping from schema_name to a numeric id
namespace_oid_map: PGNamespaceOidMapRef,
table_ids: HashMap<&'static str, u32>,
}
impl SystemSchemaProvider for PGCatalogProvider {
@@ -69,30 +58,34 @@ impl SystemSchemaProvider for PGCatalogProvider {
}
}
// TODO(j0hn50n133): Not sure whether to avoid duplication with `information_schema` or not.
macro_rules! setup_memory_table {
($name: expr) => {
paste! {
{
let (schema, columns) = get_schema_columns($name);
Some(Arc::new(MemoryTable::new(
consts::[<PG_CATALOG_ $name _TABLE_ID>],
$name,
schema,
columns
)) as _)
}
}
};
}
impl PGCatalogProvider {
pub fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
// Safe to expect/unwrap: this performs only schema reads, which is
// ensured by sqlness tests.
let static_tables =
PgCatalogStaticTables::try_new().expect("Failed to initialize static tables");
let inner = PgCatalogSchemaProvider::try_new(
CatalogManagerWrapper {
catalog_name: catalog_name.clone(),
catalog_manager,
},
Arc::new(static_tables),
EmptyContextProvider,
)
.expect("Failed to initialize PgCatalogSchemaProvider");
let mut table_ids = HashMap::new();
let mut table_id = PG_CATALOG_TABLE_ID_START;
for name in PG_CATALOG_TABLES {
table_ids.insert(*name, table_id);
table_id += 1;
}
let mut provider = Self {
catalog_name,
catalog_manager,
inner,
tables: HashMap::new(),
namespace_oid_map: Arc::new(PGNamespaceOidMap::new()),
table_ids,
};
provider.build_tables();
provider
@@ -102,23 +95,13 @@ impl PGCatalogProvider {
// SECURITY NOTE:
// Must follow the same security rules as [`InformationSchemaProvider::build_tables`].
let mut tables = HashMap::new();
// TODO(J0HN50N133): model the table_name as an enum type to get rid of expect/unwrap here.
// It's safe to unwrap here because we are sure that the constants have been handled correctly inside system_table.
for name in MEMORY_TABLES.iter() {
tables.insert(name.to_string(), self.build_table(name).expect(name));
for name in PG_CATALOG_TABLES {
if let Some(table) = self.build_table(name) {
tables.insert(name.to_string(), table);
}
}
tables.insert(
PG_NAMESPACE.to_string(),
self.build_table(PG_NAMESPACE).expect(PG_NAMESPACE),
);
tables.insert(
PG_CLASS.to_string(),
self.build_table(PG_CLASS).expect(PG_NAMESPACE),
);
tables.insert(
PG_DATABASE.to_string(),
self.build_table(PG_DATABASE).expect(PG_DATABASE),
);
self.tables = tables;
}
}
@@ -129,24 +112,26 @@ impl SystemSchemaProviderInner for PGCatalogProvider {
}
fn system_table(&self, name: &str) -> Option<SystemTableRef> {
match name {
table_names::PG_TYPE => setup_memory_table!(PG_TYPE),
table_names::PG_NAMESPACE => Some(Arc::new(PGNamespace::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
))),
table_names::PG_CLASS => Some(Arc::new(PGClass::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
))),
table_names::PG_DATABASE => Some(Arc::new(PGDatabase::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
))),
_ => None,
if let Some((table_name, table_id)) = self.table_ids.get_key_value(name) {
let table = self.inner.build_table_by_name(name).expect(name);
if let Some(table) = table {
if let Ok(system_table) = DFTableProviderAsSystemTable::try_new(
*table_id,
table_name,
table::metadata::TableType::Temporary,
table,
) {
Some(Arc::new(system_table))
} else {
warn!("failed to create pg_catalog system table {}", name);
None
}
} else {
None
}
} else {
None
}
}
@@ -155,11 +140,177 @@ impl SystemSchemaProviderInner for PGCatalogProvider {
}
}
/// Provide query context to call the [`CatalogManager`]'s method.
static PG_QUERY_CTX: LazyLock<QueryContext> = LazyLock::new(|| {
QueryContext::with_channel(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, Channel::Postgres)
});
fn query_ctx() -> Option<&'static QueryContext> {
Some(&PG_QUERY_CTX)
#[derive(Clone)]
pub struct CatalogManagerWrapper {
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
}
impl CatalogManagerWrapper {
fn catalog_manager(&self) -> std::result::Result<Arc<dyn CatalogManager>, DataFusionError> {
self.catalog_manager.upgrade().ok_or_else(|| {
DataFusionError::Internal("Failed to access catalog manager".to_string())
})
}
}
impl std::fmt::Debug for CatalogManagerWrapper {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("CatalogManagerWrapper").finish()
}
}
#[async_trait]
impl CatalogInfo for CatalogManagerWrapper {
async fn catalog_names(&self) -> std::result::Result<Vec<String>, DataFusionError> {
if self.catalog_name == DEFAULT_CATALOG_NAME {
CatalogManager::catalog_names(self.catalog_manager()?.as_ref())
.await
.map_err(|e| DataFusionError::External(Box::new(e)))
} else {
Ok(vec![self.catalog_name.clone()])
}
}
async fn schema_names(
&self,
catalog_name: &str,
) -> std::result::Result<Option<Vec<String>>, DataFusionError> {
self.catalog_manager()?
.schema_names(catalog_name, None)
.await
.map(Some)
.map_err(|e| DataFusionError::External(Box::new(e)))
}
async fn table_names(
&self,
catalog_name: &str,
schema_name: &str,
) -> std::result::Result<Option<Vec<String>>, DataFusionError> {
self.catalog_manager()?
.table_names(catalog_name, schema_name, None)
.await
.map(Some)
.map_err(|e| DataFusionError::External(Box::new(e)))
}
async fn table_schema(
&self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
) -> std::result::Result<Option<SchemaRef>, DataFusionError> {
let table = self
.catalog_manager()?
.table(catalog_name, schema_name, table_name, None)
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
Ok(table.map(|t| t.schema().arrow_schema().clone()))
}
async fn table_type(
&self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
) -> std::result::Result<Option<TableType>, DataFusionError> {
let table = self
.catalog_manager()?
.table(catalog_name, schema_name, table_name, None)
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
Ok(table.map(|t| t.table_type().into()))
}
}
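// A minimal usage sketch (illustrative; the wrapper is normally constructed by
// the pg_catalog provider, and the values below are assumptions). The wrapper
// holds only a `Weak` reference so the virtual tables do not keep the catalog
// manager alive; `catalog_manager()` upgrades it on every call and reports a
// `DataFusionError::Internal` once the manager has been dropped.
//
// let wrapper = CatalogManagerWrapper {
//     catalog_name: DEFAULT_CATALOG_NAME.to_string(),
//     catalog_manager: Arc::downgrade(&catalog_manager),
// };
// let catalogs = wrapper.catalog_names().await?;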
struct DFTableProviderAsSystemTable {
pub table_id: TableId,
pub table_name: &'static str,
pub table_type: table::metadata::TableType,
pub schema: Arc<datatypes::schema::Schema>,
pub table_provider: PgCatalogTable,
}
impl DFTableProviderAsSystemTable {
pub fn try_new(
table_id: TableId,
table_name: &'static str,
table_type: table::metadata::TableType,
table_provider: PgCatalogTable,
) -> Result<Self> {
let arrow_schema = table_provider.schema();
let schema = Arc::new(arrow_schema.try_into().context(ProjectSchemaSnafu)?);
Ok(Self {
table_id,
table_name,
table_type,
schema,
table_provider,
})
}
}
impl SystemTable for DFTableProviderAsSystemTable {
fn table_id(&self) -> TableId {
self.table_id
}
fn table_name(&self) -> &'static str {
self.table_name
}
fn schema(&self) -> Arc<datatypes::schema::Schema> {
self.schema.clone()
}
fn table_type(&self) -> table::metadata::TableType {
self.table_type
}
fn to_stream(&self, _request: ScanRequest) -> Result<SendableRecordBatchStream> {
match &self.table_provider {
PgCatalogTable::Static(table) => {
let schema = self.schema.arrow_schema().clone();
let data = table
.data()
.iter()
.map(|rb| Ok(rb.clone()))
.collect::<Vec<_>>();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::iter(data),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
PgCatalogTable::Dynamic(table) => {
let stream = table.execute(Arc::new(TaskContext::default()));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
PgCatalogTable::Empty(_) => {
let schema = self.schema.arrow_schema().clone();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::iter(vec![]),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
}
}
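// For orientation: the match arms above imply that `PgCatalogTable` is a
// three-way enum along these lines (a sketch only; the actual definition lives
// elsewhere in the crate and its variant payloads may differ):
//
// enum PgCatalogTable {
//     Static(/* pre-built record batches, served as-is */),
//     Dynamic(Arc<dyn DfPartitionStream>), // rows computed on every scan
//     Empty(SchemaRef),                    // schema only, always yields no rows
// }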

View File

@@ -1,69 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{Int16Vector, StringVector, UInt32Vector, VectorRef};
use crate::memory_table_cols;
use crate::system_schema::pg_catalog::oid_column;
use crate::system_schema::pg_catalog::table_names::PG_TYPE;
use crate::system_schema::utils::tables::{i16_column, string_column};
fn pg_type_schema_columns() -> (Vec<ColumnSchema>, Vec<VectorRef>) {
// TODO(j0hn50n133): acquire this information from `DataType` instead of hardcoding it to avoid regression.
memory_table_cols!(
[oid, typname, typlen],
[
(1, "String", -1),
(2, "Binary", -1),
(3, "Int8", 1),
(4, "Int16", 2),
(5, "Int32", 4),
(6, "Int64", 8),
(7, "UInt8", 1),
(8, "UInt16", 2),
(9, "UInt32", 4),
(10, "UInt64", 8),
(11, "Float32", 4),
(12, "Float64", 8),
(13, "Decimal", 16),
(14, "Date", 4),
(15, "DateTime", 8),
(16, "Timestamp", 8),
(17, "Time", 8),
(18, "Duration", 8),
(19, "Interval", 16),
(20, "List", -1),
]
);
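// A sketch of what `memory_table_cols!` presumably binds here (an assumption
// from usage; the macro definition is not shown in this diff): one `Vec` per
// named column, filled positionally from the tuple rows, roughly
//
// let oid: Vec<u32> = vec![1, 2, /* ... */ 20];
// let typname: Vec<&str> = vec!["String", "Binary", /* ... */ "List"];
// let typlen: Vec<i16> = vec![-1, -1, /* ... */ -1];
//
// which the code below wraps into typed vectors.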
(
// not quite identical to pg; we only follow the definition in pg
vec![oid_column(), string_column("typname"), i16_column("typlen")],
vec![
Arc::new(UInt32Vector::from_vec(oid)), // oid
Arc::new(StringVector::from(typname)),
Arc::new(Int16Vector::from_vec(typlen)), // typlen in bytes
],
)
}
pub(super) fn get_schema_columns(table_name: &str) -> (SchemaRef, Vec<VectorRef>) {
let (column_schemas, columns): (_, Vec<VectorRef>) = match table_name {
PG_TYPE => pg_type_schema_columns(),
_ => unreachable!("Unknown table in pg_catalog: {}", table_name),
};
(Arc::new(Schema::new(column_schemas)), columns)
}

View File

@@ -1,276 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::PG_CATALOG_PG_CLASS_TABLE_ID;
use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use futures::TryStreamExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use table::metadata::TableType;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::pg_catalog::pg_namespace::oid_map::PGNamespaceOidMapRef;
use crate::system_schema::pg_catalog::{query_ctx, OID_COLUMN_NAME, PG_CLASS};
use crate::system_schema::utils::tables::{string_column, u32_column};
use crate::system_schema::SystemTable;
use crate::CatalogManager;
// === column name ===
pub const RELNAME: &str = "relname";
pub const RELNAMESPACE: &str = "relnamespace";
pub const RELKIND: &str = "relkind";
pub const RELOWNER: &str = "relowner";
// === enum value of relkind ===
pub const RELKIND_TABLE: &str = "r";
pub const RELKIND_VIEW: &str = "v";
/// The initial capacity of the vector builders.
const INIT_CAPACITY: usize = 42;
/// The dummy owner id for the relation owner (we don't have a user system yet).
const DUMMY_OWNER_ID: u32 = 0;
/// The `pg_catalog.pg_class` table implementation.
pub(super) struct PGClass {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
// Workaround to convert schema_name to a numeric id
namespace_oid_map: PGNamespaceOidMapRef,
}
impl PGClass {
pub(super) fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
namespace_oid_map,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
u32_column(OID_COLUMN_NAME),
string_column(RELNAME),
u32_column(RELNAMESPACE),
string_column(RELKIND),
u32_column(RELOWNER),
]))
}
fn builder(&self) -> PGClassBuilder {
PGClassBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
)
}
}
impl fmt::Debug for PGClass {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("PGClass")
.field("schema", &self.schema)
.field("catalog_name", &self.catalog_name)
.finish()
}
}
impl SystemTable for PGClass {
fn table_id(&self) -> table::metadata::TableId {
PG_CATALOG_PG_CLASS_TABLE_ID
}
fn table_name(&self) -> &'static str {
PG_CLASS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(
&self,
request: ScanRequest,
) -> Result<common_recordbatch::SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_class(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
impl DfPartitionStream for PGClass {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_class(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
/// Builds the `pg_catalog.pg_class` table row by row.
/// TODO(J0HN50N133): `relowner` is always the [`DUMMY_OWNER_ID`] because we don't have users.
/// Once we have a user system, make it the actual owner of the table.
struct PGClassBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
oid: UInt32VectorBuilder,
relname: StringVectorBuilder,
relnamespace: UInt32VectorBuilder,
relkind: StringVectorBuilder,
relowner: UInt32VectorBuilder,
}
impl PGClassBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
namespace_oid_map,
oid: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
relname: StringVectorBuilder::with_capacity(INIT_CAPACITY),
relnamespace: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
relkind: StringVectorBuilder::with_capacity(INIT_CAPACITY),
relowner: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
}
}
async fn make_class(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager
.schema_names(&catalog_name, query_ctx())
.await?
{
let mut stream = catalog_manager.tables(&catalog_name, &schema_name, query_ctx());
while let Some(table) = stream.try_next().await? {
let table_info = table.table_info();
self.add_class(
&predicates,
table_info.table_id(),
&schema_name,
&table_info.name,
if table_info.table_type == TableType::View {
RELKIND_VIEW
} else {
RELKIND_TABLE
},
);
}
}
self.finish()
}
fn add_class(
&mut self,
predicates: &Predicates,
oid: u32,
schema: &str,
table: &str,
kind: &str,
) {
let namespace_oid = self.namespace_oid_map.get_oid(schema);
let row = [
(OID_COLUMN_NAME, &Value::from(oid)),
(RELNAMESPACE, &Value::from(schema)),
(RELNAME, &Value::from(table)),
(RELKIND, &Value::from(kind)),
(RELOWNER, &Value::from(DUMMY_OWNER_ID)),
];
if !predicates.eval(&row) {
return;
}
self.oid.push(Some(oid));
self.relnamespace.push(Some(namespace_oid));
self.relname.push(Some(table));
self.relkind.push(Some(kind));
self.relowner.push(Some(DUMMY_OWNER_ID));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.oid.finish()),
Arc::new(self.relname.finish()),
Arc::new(self.relnamespace.finish()),
Arc::new(self.relkind.finish()),
Arc::new(self.relowner.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}

View File

@@ -1,223 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::PG_CATALOG_PG_DATABASE_TABLE_ID;
use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use snafu::{OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::pg_catalog::pg_namespace::oid_map::PGNamespaceOidMapRef;
use crate::system_schema::pg_catalog::{query_ctx, OID_COLUMN_NAME, PG_DATABASE};
use crate::system_schema::utils::tables::{string_column, u32_column};
use crate::system_schema::SystemTable;
use crate::CatalogManager;
// === column name ===
pub const DATNAME: &str = "datname";
/// The initial capacity of the vector builders.
const INIT_CAPACITY: usize = 42;
/// The `pg_catalog.pg_database` table implementation.
pub(super) struct PGDatabase {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
// Workaround to convert schema_name to a numeric id
namespace_oid_map: PGNamespaceOidMapRef,
}
impl std::fmt::Debug for PGDatabase {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("PGDatabase")
.field("schema", &self.schema)
.field("catalog_name", &self.catalog_name)
.finish()
}
}
impl PGDatabase {
pub(super) fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
namespace_oid_map,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
u32_column(OID_COLUMN_NAME),
string_column(DATNAME),
]))
}
fn builder(&self) -> PGCDatabaseBuilder {
PGCDatabaseBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.namespace_oid_map.clone(),
)
}
}
impl DfPartitionStream for PGDatabase {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_database(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
impl SystemTable for PGDatabase {
fn table_id(&self) -> table::metadata::TableId {
PG_CATALOG_PG_DATABASE_TABLE_ID
}
fn table_name(&self) -> &'static str {
PG_DATABASE
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(
&self,
request: ScanRequest,
) -> Result<common_recordbatch::SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_database(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
/// Builds the `pg_catalog.pg_database` table row by row.
/// `oid` uses the schema name as a workaround since we don't have a numeric schema id.
/// `datname` is the schema name.
struct PGCDatabaseBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
oid: UInt32VectorBuilder,
datname: StringVectorBuilder,
}
impl PGCDatabaseBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
namespace_oid_map,
oid: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
datname: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
async fn make_database(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager
.schema_names(&catalog_name, query_ctx())
.await?
{
self.add_database(&predicates, &schema_name);
}
self.finish()
}
fn add_database(&mut self, predicates: &Predicates, schema_name: &str) {
let oid = self.namespace_oid_map.get_oid(schema_name);
let row: [(&str, &Value); 2] = [
(OID_COLUMN_NAME, &Value::from(oid)),
(DATNAME, &Value::from(schema_name)),
];
if !predicates.eval(&row) {
return;
}
self.oid.push(Some(oid));
self.datname.push(Some(schema_name));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> =
vec![Arc::new(self.oid.finish()), Arc::new(self.datname.finish())];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}

View File

@@ -1,221 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! The `pg_catalog.pg_namespace` table implementation.
//! A namespace corresponds to a schema in GreptimeDB.
pub(super) mod oid_map;
use std::fmt;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::PG_CATALOG_PG_NAMESPACE_TABLE_ID;
use common_error::ext::BoxedError;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{DfSendableRecordBatchStream, RecordBatch, SendableRecordBatchStream};
use datafusion::execution::TaskContext;
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{Schema, SchemaRef};
use datatypes::value::Value;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder, VectorRef};
use snafu::{OptionExt, ResultExt};
use store_api::storage::ScanRequest;
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::Predicates;
use crate::system_schema::pg_catalog::{
query_ctx, PGNamespaceOidMapRef, OID_COLUMN_NAME, PG_NAMESPACE,
};
use crate::system_schema::utils::tables::{string_column, u32_column};
use crate::system_schema::SystemTable;
use crate::CatalogManager;
const NSPNAME: &str = "nspname";
const INIT_CAPACITY: usize = 42;
pub(super) struct PGNamespace {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
// Workaround to convert schema_name to a numeric id
oid_map: PGNamespaceOidMapRef,
}
impl PGNamespace {
pub(super) fn new(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
oid_map,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
// TODO(J0HN50N133): we do not have a numeric schema id, use schema name as a workaround. Use a proper schema id once we have it.
u32_column(OID_COLUMN_NAME),
string_column(NSPNAME),
]))
}
fn builder(&self) -> PGNamespaceBuilder {
PGNamespaceBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
self.oid_map.clone(),
)
}
}
impl fmt::Debug for PGNamespace {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("PGNamespace")
.field("schema", &self.schema)
.field("catalog_name", &self.catalog_name)
.finish()
}
}
impl SystemTable for PGNamespace {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn table_id(&self) -> table::metadata::TableId {
PG_CATALOG_PG_NAMESPACE_TABLE_ID
}
fn table_name(&self) -> &'static str {
PG_NAMESPACE
}
fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_namespace(Some(request))
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
impl DfPartitionStream for PGNamespace {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_namespace(None)
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
/// Builds the `pg_catalog.pg_namespace` table row by row.
/// `oid` uses the schema name as a workaround since we don't have a numeric schema id.
/// `nspname` is the schema name.
struct PGNamespaceBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
oid: UInt32VectorBuilder,
nspname: StringVectorBuilder,
}
impl PGNamespaceBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
namespace_oid_map: PGNamespaceOidMapRef,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
namespace_oid_map,
oid: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
nspname: StringVectorBuilder::with_capacity(INIT_CAPACITY),
}
}
/// Construct the `pg_catalog.pg_namespace` virtual table
async fn make_namespace(&mut self, request: Option<ScanRequest>) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
let predicates = Predicates::from_scan_request(&request);
for schema_name in catalog_manager
.schema_names(&catalog_name, query_ctx())
.await?
{
self.add_namespace(&predicates, &schema_name);
}
self.finish()
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> =
vec![Arc::new(self.oid.finish()), Arc::new(self.nspname.finish())];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
fn add_namespace(&mut self, predicates: &Predicates, schema_name: &str) {
let oid = self.namespace_oid_map.get_oid(schema_name);
let row = [
(OID_COLUMN_NAME, &Value::from(oid)),
(NSPNAME, &Value::from(schema_name)),
];
if !predicates.eval(&row) {
return;
}
self.oid.push(Some(oid));
self.nspname.push(Some(schema_name));
}
}

View File

@@ -1,94 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::hash::BuildHasher;
use std::sync::Arc;
use dashmap::DashMap;
use rustc_hash::FxSeededState;
pub type PGNamespaceOidMapRef = Arc<PGNamespaceOidMap>;
// Workaround to convert schema_name to a numeric id;
// remove this once greptime has a numeric schema id.
pub struct PGNamespaceOidMap {
oid_map: DashMap<String, u32>,
// Rust uses SipHasher by default, which provides resistance against DOS attacks.
// It produces different hash values across greptime instances, which would make
// the sqlness tests fail. We need a deterministic hash here to provide the same
// oid for the same schema name on a best-effort basis; DOS attacks aren't a concern here.
hasher: FxSeededState,
}
impl PGNamespaceOidMap {
pub fn new() -> Self {
Self {
oid_map: DashMap::new(),
hasher: FxSeededState::with_seed(0), // PLEASE DO NOT MODIFY THIS SEED VALUE!!!
}
}
fn oid_is_used(&self, oid: u32) -> bool {
self.oid_map.iter().any(|e| *e.value() == oid)
}
pub fn get_oid(&self, schema_name: &str) -> u32 {
if let Some(oid) = self.oid_map.get(schema_name) {
*oid
} else {
let mut oid = self.hasher.hash_one(schema_name) as u32;
while self.oid_is_used(oid) {
oid = self.hasher.hash_one(oid) as u32;
}
self.oid_map.insert(schema_name.to_string(), oid);
oid
}
}
}
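// A self-contained sketch of the collision-probing scheme above, using std's
// `RandomState` purely for illustration (the real code uses `FxSeededState`
// with a fixed seed precisely so every instance derives the same oid):
//
// use std::collections::HashMap;
// use std::hash::{BuildHasher, RandomState};
//
// fn assign_oid(map: &mut HashMap<String, u32>, hasher: &RandomState, name: &str) -> u32 {
//     if let Some(&oid) = map.get(name) {
//         return oid;
//     }
//     // Truncate the 64-bit hash to u32, then keep re-hashing the candidate
//     // oid until it no longer collides with an assigned one.
//     let mut oid = hasher.hash_one(name) as u32;
//     while map.values().any(|&v| v == oid) {
//         oid = hasher.hash_one(oid) as u32;
//     }
//     map.insert(name.to_string(), oid);
//     oid
// }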
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn oid_is_stable() {
let oid_map_1 = PGNamespaceOidMap::new();
let oid_map_2 = PGNamespaceOidMap::new();
let schema = "schema";
let oid = oid_map_1.get_oid(schema);
// the oid stays stable within the same instance
assert_eq!(oid, oid_map_1.get_oid(schema));
// the oid stays stable across different instances
assert_eq!(oid, oid_map_2.get_oid(schema));
}
#[test]
fn oid_collision() {
let oid_map = PGNamespaceOidMap::new();
let key1 = "3178510";
let key2 = "4215648";
// insert them into oid_map
let oid1 = oid_map.get_oid(key1);
let oid2 = oid_map.get_oid(key2);
// they should get different oids
assert_ne!(oid1, oid2);
}
}

View File

@@ -339,18 +339,22 @@ mod tests {
assert!(!p.eval(&wrong_row).unwrap());
assert!(p.eval(&[]).is_none());
assert!(p.eval(&[("c", &a_value)]).is_none());
assert!(!p
.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &a_value)])
.unwrap());
assert!(!p
.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &b_value)])
.unwrap());
assert!(p
.eval(&[(a_col.as_ref(), &a_value), ("c", &a_value)])
.is_none());
assert!(!p
.eval(&[(a_col.as_ref(), &b_value), ("c", &a_value)])
.unwrap());
assert!(
!p.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &a_value)])
.unwrap()
);
assert!(
!p.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &b_value)])
.unwrap()
);
assert!(
p.eval(&[(a_col.as_ref(), &a_value), ("c", &a_value)])
.is_none()
);
assert!(
!p.eval(&[(a_col.as_ref(), &b_value), ("c", &a_value)])
.unwrap()
);
// Predicate::Or
let p = Predicate::Or(Box::new(p1), Box::new(p2));
@@ -358,18 +362,22 @@ mod tests {
assert!(p.eval(&wrong_row).unwrap());
assert!(p.eval(&[]).is_none());
assert!(p.eval(&[("c", &a_value)]).is_none());
assert!(!p
.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &a_value)])
.unwrap());
assert!(p
.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &b_value)])
.unwrap());
assert!(p
.eval(&[(a_col.as_ref(), &a_value), ("c", &a_value)])
.unwrap());
assert!(p
.eval(&[(a_col.as_ref(), &b_value), ("c", &a_value)])
.is_none());
assert!(
!p.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &a_value)])
.unwrap()
);
assert!(
p.eval(&[(a_col.as_str(), &b_value), (b_col.as_str(), &b_value)])
.unwrap()
);
assert!(
p.eval(&[(a_col.as_ref(), &a_value), ("c", &a_value)])
.unwrap()
);
assert!(
p.eval(&[(a_col.as_ref(), &b_value), ("c", &a_value)])
.is_none()
);
}
#[test]

View File

@@ -17,10 +17,10 @@ use std::sync::Weak;
use common_meta::key::TableMetadataManagerRef;
use snafu::OptionExt;
use crate::CatalogManager;
use crate::error::{GetInformationExtensionSnafu, Result, UpgradeWeakCatalogManagerRefSnafu};
use crate::information_schema::InformationExtensionRef;
use crate::kvbackend::KvBackendCatalogManager;
use crate::CatalogManager;
pub mod tables;

View File

@@ -27,22 +27,6 @@ pub fn string_column(name: &str) -> ColumnSchema {
)
}
pub fn u32_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),
ConcreteDataType::uint32_datatype(),
false,
)
}
pub fn i16_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),
ConcreteDataType::int16_datatype(),
false,
)
}
pub fn bigint_column(name: &str) -> ColumnSchema {
ColumnSchema::new(
str::to_lowercase(name),

View File

@@ -17,27 +17,27 @@ use std::sync::Arc;
use bytes::Bytes;
use common_catalog::format_full_table_name;
use common_query::logical_plan::{rename_logical_plan_columns, SubstraitPlanDecoderRef};
use common_query::logical_plan::{SubstraitPlanDecoderRef, rename_logical_plan_columns};
use datafusion::common::{ResolvedTableReference, TableReference};
use datafusion::datasource::view::ViewTable;
use datafusion::datasource::{provider_as_source, TableProvider};
use datafusion::datasource::{TableProvider, provider_as_source};
use datafusion::logical_expr::TableSource;
use itertools::Itertools;
use session::context::QueryContextRef;
use snafu::{ensure, OptionExt, ResultExt};
use snafu::{OptionExt, ResultExt, ensure};
use table::metadata::TableType;
use table::table::adapter::DfTableProviderAdapter;
pub mod dummy_catalog;
use dummy_catalog::DummyCatalogList;
use table::TableRef;
use crate::CatalogManagerRef;
use crate::error::{
CastManagerSnafu, DecodePlanSnafu, GetViewCacheSnafu, ProjectViewColumnsSnafu,
QueryAccessDeniedSnafu, Result, TableNotExistSnafu, ViewInfoNotFoundSnafu,
ViewPlanColumnsChangedSnafu,
};
use crate::kvbackend::KvBackendCatalogManager;
use crate::CatalogManagerRef;
pub struct DfTableSourceProvider {
catalog_manager: CatalogManagerRef,
@@ -201,7 +201,7 @@ impl DfTableSourceProvider {
Ok(Arc::new(ViewTable::new(
logical_plan,
Some(view_info.definition.to_string()),
Some(view_info.definition.clone()),
)))
}
}
@@ -272,7 +272,7 @@ mod tests {
use common_query::logical_plan::SubstraitPlanDecoder;
use datafusion::catalog::CatalogProviderList;
use datafusion::logical_expr::builder::LogicalTableSource;
use datafusion::logical_expr::{col, lit, LogicalPlan, LogicalPlanBuilder};
use datafusion::logical_expr::{LogicalPlan, LogicalPlanBuilder, col, lit};
use crate::information_schema::NoopInformationExtension;

View File

@@ -25,8 +25,8 @@ use datafusion::datasource::TableProvider;
use snafu::OptionExt;
use table::table::adapter::DfTableProviderAdapter;
use crate::error::TableNotExistSnafu;
use crate::CatalogManagerRef;
use crate::error::TableNotExistSnafu;
/// Delegate the resolving requests to the `[CatalogManager]` unconditionally.
#[derive(Clone)]

View File

@@ -51,6 +51,7 @@ meta-srv.workspace = true
nu-ansi-term = "0.46"
object-store.workspace = true
operator.workspace = true
paste.workspace = true
query.workspace = true
rand.workspace = true
reqwest.workspace = true
@@ -60,7 +61,6 @@ servers.workspace = true
session.workspace = true
snafu.workspace = true
store-api.workspace = true
substrait.workspace = true
table.workspace = true
tokio.workspace = true
tracing-appender.workspace = true

View File

@@ -157,6 +157,7 @@ fn create_table_info(table_id: TableId, table_name: TableName) -> RawTableInfo {
schema: RawSchema::new(column_schemas),
engine: "mito".to_string(),
created_on: chrono::DateTime::default(),
updated_on: chrono::DateTime::default(),
primary_key_indices: vec![],
next_column_id: columns as u32 + 1,
value_indices: vec![],

View File

@@ -14,8 +14,8 @@
use std::time::Instant;
use common_meta::key::table_route::TableRouteValue;
use common_meta::key::TableMetadataManagerRef;
use common_meta::key::table_route::TableRouteValue;
use table::table_name::TableName;
use crate::bench::{

View File

@@ -12,11 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// https://www.postgresql.org/docs/current/catalog-pg-database.html
pub const PG_DATABASE: &str = "pg_database";
// https://www.postgresql.org/docs/current/catalog-pg-namespace.html
pub const PG_NAMESPACE: &str = "pg_namespace";
// https://www.postgresql.org/docs/current/catalog-pg-class.html
pub const PG_CLASS: &str = "pg_class";
// https://www.postgresql.org/docs/current/catalog-pg-type.html
pub const PG_TYPE: &str = "pg_type";
mod object_store;
mod store;
pub use object_store::{ObjectStoreConfig, new_fs_object_store};
pub use store::StoreConfig;

View File

@@ -0,0 +1,224 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_base::secrets::SecretString;
use common_error::ext::BoxedError;
use object_store::services::{Azblob, Fs, Gcs, Oss, S3};
use object_store::util::{with_instrument_layers, with_retry_layers};
use object_store::{AzblobConnection, GcsConnection, ObjectStore, OssConnection, S3Connection};
use paste::paste;
use snafu::ResultExt;
use crate::error::{self};
macro_rules! wrap_with_clap_prefix {
(
$new_name:ident, $prefix:literal, $base:ty, {
$( $( #[doc = $doc:expr] )? $( #[alias = $alias:literal] )? $field:ident : $type:ty $( = $default:expr )? ),* $(,)?
}
) => {
paste!{
#[derive(clap::Parser, Debug, Clone, PartialEq, Default)]
pub struct $new_name {
$(
$( #[doc = $doc] )?
$( #[clap(alias = $alias)] )?
#[clap(long $(, default_value_t = $default )? )]
[<$prefix $field>]: $type,
)*
}
impl From<$new_name> for $base {
fn from(w: $new_name) -> Self {
Self {
$( $field: w.[<$prefix $field>] ),*
}
}
}
}
};
}
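// Roughly what an invocation below expands to, sketched for the S3 case. One
// assumption to note: `paste!` must yield a valid identifier, so the `"s3-"`
// prefix is taken to produce a field like `s3_bucket`, which clap then exposes
// as `--s3-bucket`:
//
// #[derive(clap::Parser, Debug, Clone, PartialEq, Default)]
// pub struct PrefixedS3Connection {
//     /// The bucket of the object store.
//     #[clap(long, default_value_t = Default::default())]
//     s3_bucket: String,
//     // ... one prefixed field per entry ...
// }
//
// impl From<PrefixedS3Connection> for S3Connection {
//     fn from(w: PrefixedS3Connection) -> Self {
//         Self { bucket: w.s3_bucket, /* ... */ }
//     }
// }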
wrap_with_clap_prefix! {
PrefixedAzblobConnection,
"azblob-",
AzblobConnection,
{
#[doc = "The container of the object store."]
container: String = Default::default(),
#[doc = "The root of the object store."]
root: String = Default::default(),
#[doc = "The account name of the object store."]
account_name: SecretString = Default::default(),
#[doc = "The account key of the object store."]
account_key: SecretString = Default::default(),
#[doc = "The endpoint of the object store."]
endpoint: String = Default::default(),
#[doc = "The SAS token of the object store."]
sas_token: Option<String>,
}
}
wrap_with_clap_prefix! {
PrefixedS3Connection,
"s3-",
S3Connection,
{
#[doc = "The bucket of the object store."]
bucket: String = Default::default(),
#[doc = "The root of the object store."]
root: String = Default::default(),
#[doc = "The access key ID of the object store."]
access_key_id: SecretString = Default::default(),
#[doc = "The secret access key of the object store."]
secret_access_key: SecretString = Default::default(),
#[doc = "The endpoint of the object store."]
endpoint: Option<String>,
#[doc = "The region of the object store."]
region: Option<String>,
#[doc = "Enable virtual host style for the object store."]
enable_virtual_host_style: bool = Default::default(),
}
}
wrap_with_clap_prefix! {
PrefixedOssConnection,
"oss-",
OssConnection,
{
#[doc = "The bucket of the object store."]
bucket: String = Default::default(),
#[doc = "The root of the object store."]
root: String = Default::default(),
#[doc = "The access key ID of the object store."]
access_key_id: SecretString = Default::default(),
#[doc = "The access key secret of the object store."]
access_key_secret: SecretString = Default::default(),
#[doc = "The endpoint of the object store."]
endpoint: String = Default::default(),
}
}
wrap_with_clap_prefix! {
PrefixedGcsConnection,
"gcs-",
GcsConnection,
{
#[doc = "The root of the object store."]
root: String = Default::default(),
#[doc = "The bucket of the object store."]
bucket: String = Default::default(),
#[doc = "The scope of the object store."]
scope: String = Default::default(),
#[doc = "The credential path of the object store."]
credential_path: SecretString = Default::default(),
#[doc = "The credential of the object store."]
credential: SecretString = Default::default(),
#[doc = "The endpoint of the object store."]
endpoint: String = Default::default(),
}
}
/// Common config for object stores.
#[derive(clap::Parser, Debug, Clone, PartialEq, Default)]
pub struct ObjectStoreConfig {
/// Whether to use S3 object store.
#[clap(long, alias = "s3")]
pub enable_s3: bool,
#[clap(flatten)]
pub s3: PrefixedS3Connection,
/// Whether to use OSS.
#[clap(long, alias = "oss")]
pub enable_oss: bool,
#[clap(flatten)]
pub oss: PrefixedOssConnection,
/// Whether to use GCS.
#[clap(long, alias = "gcs")]
pub enable_gcs: bool,
#[clap(flatten)]
pub gcs: PrefixedGcsConnection,
/// Whether to use Azure Blob.
#[clap(long, alias = "azblob")]
pub enable_azblob: bool,
#[clap(flatten)]
pub azblob: PrefixedAzblobConnection,
}
/// Creates a new file system object store.
pub fn new_fs_object_store(root: &str) -> std::result::Result<ObjectStore, BoxedError> {
let builder = Fs::default().root(root);
let object_store = ObjectStore::new(builder)
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish();
Ok(with_instrument_layers(object_store, false))
}
impl ObjectStoreConfig {
/// Builds the object store from the config.
pub fn build(&self) -> Result<Option<ObjectStore>, BoxedError> {
let object_store = if self.enable_s3 {
let s3 = S3Connection::from(self.s3.clone());
common_telemetry::info!("Building object store with s3: {:?}", s3);
Some(
ObjectStore::new(S3::from(&s3))
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish(),
)
} else if self.enable_oss {
let oss = OssConnection::from(self.oss.clone());
common_telemetry::info!("Building object store with oss: {:?}", oss);
Some(
ObjectStore::new(Oss::from(&oss))
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish(),
)
} else if self.enable_gcs {
let gcs = GcsConnection::from(self.gcs.clone());
common_telemetry::info!("Building object store with gcs: {:?}", gcs);
Some(
ObjectStore::new(Gcs::from(&gcs))
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish(),
)
} else if self.enable_azblob {
let azblob = AzblobConnection::from(self.azblob.clone());
common_telemetry::info!("Building object store with azblob: {:?}", azblob);
Some(
ObjectStore::new(Azblob::from(&azblob))
.context(error::InitBackendSnafu)
.map_err(BoxedError::new)?
.finish(),
)
} else {
None
};
let object_store = object_store
.map(|object_store| with_instrument_layers(with_retry_layers(object_store), false));
Ok(object_store)
}
}
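// A minimal usage sketch (the `Args` wrapper and flag values are hypothetical;
// flag names follow the clap attributes above):
//
// #[derive(clap::Parser)]
// struct Args {
//     #[clap(flatten)]
//     object_store: ObjectStoreConfig,
// }
//
// // e.g. `tool --enable-s3 --s3-bucket my-bucket --s3-endpoint http://localhost:9000`
// let args = Args::parse();
// if let Some(store) = args.object_store.build()? {
//     // `store` is an opendal `ObjectStore` wrapped with retry and instrument layers.
// }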

View File

@@ -16,17 +16,17 @@ use std::sync::Arc;
use clap::Parser;
use common_error::ext::BoxedError;
use common_meta::kv_backend::KvBackendRef;
use common_meta::kv_backend::chroot::ChrootKvBackend;
use common_meta::kv_backend::etcd::EtcdStore;
use common_meta::kv_backend::KvBackendRef;
use meta_srv::bootstrap::create_etcd_client_with_tls;
use meta_srv::metasrv::BackendImpl;
use meta_srv::utils::etcd::create_etcd_client_with_tls;
use servers::tls::{TlsMode, TlsOption};
use crate::error::{EmptyStoreAddrsSnafu, UnsupportedMemoryBackendSnafu};
use crate::error::EmptyStoreAddrsSnafu;
#[derive(Debug, Default, Parser)]
pub(crate) struct StoreConfig {
pub struct StoreConfig {
/// The endpoint of the store: one of etcd, postgres, or mysql.
///
/// For postgres store, the format is:
@@ -38,51 +38,65 @@ pub(crate) struct StoreConfig {
/// For mysql store, the format is:
/// "mysql://user:password@ip:port/dbname"
#[clap(long, alias = "store-addr", value_delimiter = ',', num_args = 1..)]
store_addrs: Vec<String>,
pub store_addrs: Vec<String>,
/// The maximum number of operations in a transaction. Only used with [etcd-store].
#[clap(long, default_value = "128")]
max_txn_ops: usize,
pub max_txn_ops: usize,
/// The metadata store backend.
#[clap(long, value_enum, default_value = "etcd-store")]
backend: BackendImpl,
pub backend: BackendImpl,
/// The key prefix of the metadata store.
#[clap(long, default_value = "")]
store_key_prefix: String,
pub store_key_prefix: String,
/// The table name in RDS to store metadata. Only used with [postgres-store] or [mysql-store].
#[cfg(any(feature = "pg_kvbackend", feature = "mysql_kvbackend"))]
#[clap(long, default_value = common_meta::kv_backend::DEFAULT_META_TABLE_NAME)]
meta_table_name: String,
pub meta_table_name: String,
/// Optional PostgreSQL schema for metadata table (defaults to current search_path if unset).
#[cfg(feature = "pg_kvbackend")]
#[clap(long)]
meta_schema_name: Option<String>,
pub meta_schema_name: Option<String>,
/// TLS mode for backend store connections (etcd, PostgreSQL, MySQL)
#[clap(long = "backend-tls-mode", value_enum, default_value = "disable")]
backend_tls_mode: TlsMode,
pub backend_tls_mode: TlsMode,
/// Path to TLS certificate file for backend store connections
#[clap(long = "backend-tls-cert-path", default_value = "")]
backend_tls_cert_path: String,
pub backend_tls_cert_path: String,
/// Path to TLS private key file for backend store connections
#[clap(long = "backend-tls-key-path", default_value = "")]
backend_tls_key_path: String,
pub backend_tls_key_path: String,
/// Path to TLS CA certificate file for backend store connections
#[clap(long = "backend-tls-ca-cert-path", default_value = "")]
backend_tls_ca_cert_path: String,
pub backend_tls_ca_cert_path: String,
/// Enable watching TLS certificate files for changes
#[clap(long = "backend-tls-watch")]
backend_tls_watch: bool,
pub backend_tls_watch: bool,
}
impl StoreConfig {
pub fn tls_config(&self) -> Option<TlsOption> {
if self.backend_tls_mode != TlsMode::Disable {
Some(TlsOption {
mode: self.backend_tls_mode.clone(),
cert_path: self.backend_tls_cert_path.clone(),
key_path: self.backend_tls_key_path.clone(),
ca_cert_path: self.backend_tls_ca_cert_path.clone(),
watch: self.backend_tls_watch,
})
} else {
None
}
}
/// Builds a [`KvBackendRef`] from the store configuration.
pub async fn build(&self) -> Result<KvBackendRef, BoxedError> {
let max_txn_ops = self.max_txn_ops;
@@ -90,19 +104,14 @@ impl StoreConfig {
if store_addrs.is_empty() {
EmptyStoreAddrsSnafu.fail().map_err(BoxedError::new)
} else {
common_telemetry::info!(
"Building kvbackend with store addrs: {:?}, backend: {:?}",
store_addrs,
self.backend
);
let kvbackend = match self.backend {
BackendImpl::EtcdStore => {
let tls_config = if self.backend_tls_mode != TlsMode::Disable {
Some(TlsOption {
mode: self.backend_tls_mode.clone(),
cert_path: self.backend_tls_cert_path.clone(),
key_path: self.backend_tls_key_path.clone(),
ca_cert_path: self.backend_tls_ca_cert_path.clone(),
watch: self.backend_tls_watch,
})
} else {
None
};
let tls_config = self.tls_config();
let etcd_client = create_etcd_client_with_tls(store_addrs, tls_config.as_ref())
.await
.map_err(BoxedError::new)?;
@@ -111,9 +120,14 @@ impl StoreConfig {
#[cfg(feature = "pg_kvbackend")]
BackendImpl::PostgresStore => {
let table_name = &self.meta_table_name;
let pool = meta_srv::bootstrap::create_postgres_pool(store_addrs, None)
.await
.map_err(BoxedError::new)?;
let tls_config = self.tls_config();
let pool = meta_srv::utils::postgres::create_postgres_pool(
store_addrs,
None,
tls_config,
)
.await
.map_err(BoxedError::new)?;
let schema_name = self.meta_schema_name.as_deref();
Ok(common_meta::kv_backend::rds::PgStore::with_pg_pool(
pool,
@@ -127,9 +141,11 @@ impl StoreConfig {
#[cfg(feature = "mysql_kvbackend")]
BackendImpl::MysqlStore => {
let table_name = &self.meta_table_name;
let pool = meta_srv::bootstrap::create_mysql_pool(store_addrs)
.await
.map_err(BoxedError::new)?;
let tls_config = self.tls_config();
let pool =
meta_srv::utils::mysql::create_mysql_pool(store_addrs, tls_config.as_ref())
.await
.map_err(BoxedError::new)?;
Ok(common_meta::kv_backend::rds::MySqlStore::with_mysql_pool(
pool,
table_name,
@@ -138,9 +154,20 @@ impl StoreConfig {
.await
.map_err(BoxedError::new)?)
}
BackendImpl::MemoryStore => UnsupportedMemoryBackendSnafu
.fail()
.map_err(BoxedError::new),
#[cfg(not(test))]
BackendImpl::MemoryStore => {
use crate::error::UnsupportedMemoryBackendSnafu;
UnsupportedMemoryBackendSnafu
.fail()
.map_err(BoxedError::new)
}
#[cfg(test)]
BackendImpl::MemoryStore => {
use common_meta::kv_backend::memory::MemoryKvBackend;
Ok(Arc::new(MemoryKvBackend::default()) as _)
}
};
if self.store_key_prefix.is_empty() {
kvbackend

View File

@@ -16,11 +16,14 @@ mod export;
mod import;
use clap::Subcommand;
use client::DEFAULT_CATALOG_NAME;
use common_error::ext::BoxedError;
use crate::Tool;
use crate::data::export::ExportCommand;
use crate::data::import::ImportCommand;
use crate::Tool;
pub(crate) const COPY_PATH_PLACEHOLDER: &str = "<PATH/TO/FILES>";
/// Command for data operations including exporting data from and importing data into GreptimeDB.
#[derive(Subcommand)]
@@ -37,3 +40,7 @@ impl DataCommand {
}
}
}
pub(crate) fn default_database() -> String {
format!("{DEFAULT_CATALOG_NAME}-*")
}

View File

@@ -24,18 +24,19 @@ use common_error::ext::BoxedError;
use common_telemetry::{debug, error, info};
use object_store::layers::LoggingLayer;
use object_store::services::Oss;
use object_store::{services, ObjectStore};
use object_store::{ObjectStore, services};
use serde_json::Value;
use snafu::{OptionExt, ResultExt};
use tokio::sync::Semaphore;
use tokio::time::Instant;
use crate::database::{parse_proxy_opts, DatabaseClient};
use crate::data::{COPY_PATH_PLACEHOLDER, default_database};
use crate::database::{DatabaseClient, parse_proxy_opts};
use crate::error::{
EmptyResultSnafu, Error, OpenDalSnafu, OutputDirNotSetSnafu, Result, S3ConfigNotSetSnafu,
SchemaNotFoundSnafu,
};
use crate::{database, Tool};
use crate::{Tool, database};
type TableReference = (String, String, String);
@@ -63,7 +64,7 @@ pub struct ExportCommand {
output_dir: Option<String>,
/// The name of the catalog to export.
#[clap(long, default_value = "greptime-*")]
#[clap(long, default_value_t = default_database())]
database: String,
/// Parallelism of the export.
@@ -667,10 +668,26 @@ impl Export {
);
// Create copy_from.sql file
let copy_database_from_sql = format!(
r#"COPY DATABASE "{}"."{}" FROM '{}' WITH ({}){};"#,
export_self.catalog, schema, path, with_options_clone, connection_part
);
let copy_database_from_sql = {
let command_without_connection = format!(
r#"COPY DATABASE "{}"."{}" FROM '{}' WITH ({});"#,
export_self.catalog, schema, COPY_PATH_PLACEHOLDER, with_options_clone
);
if connection_part.is_empty() {
command_without_connection
} else {
let command_with_connection = format!(
r#"COPY DATABASE "{}"."{}" FROM '{}' WITH ({}){};"#,
export_self.catalog, schema, path, with_options_clone, connection_part
);
format!(
"-- {}\n{}",
command_with_connection, command_without_connection
)
}
};
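// Illustrative contents of the emitted copy_from.sql when a CONNECTION clause
// is present (catalog, schema, path, and options here are examples; the second
// line keeps the placeholder so the import side can substitute a local path):
//
// -- COPY DATABASE "greptime"."public" FROM 's3://bucket/prefix/' WITH (format = 'parquet') CONNECTION (region = 'us-west-2');
// COPY DATABASE "greptime"."public" FROM '<PATH/TO/FILES>' WITH (format = 'parquet');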
let copy_from_path = export_self.get_file_path(&schema, "copy_from.sql");
export_self

View File

@@ -21,13 +21,14 @@ use clap::{Parser, ValueEnum};
use common_catalog::consts::DEFAULT_SCHEMA_NAME;
use common_error::ext::BoxedError;
use common_telemetry::{error, info, warn};
use snafu::{OptionExt, ResultExt};
use snafu::{OptionExt, ResultExt, ensure};
use tokio::sync::Semaphore;
use tokio::time::Instant;
use crate::database::{parse_proxy_opts, DatabaseClient};
use crate::error::{Error, FileIoSnafu, Result, SchemaNotFoundSnafu};
use crate::{database, Tool};
use crate::data::{COPY_PATH_PLACEHOLDER, default_database};
use crate::database::{DatabaseClient, parse_proxy_opts};
use crate::error::{Error, FileIoSnafu, InvalidArgumentsSnafu, Result, SchemaNotFoundSnafu};
use crate::{Tool, database};
#[derive(Debug, Default, Clone, ValueEnum)]
enum ImportTarget {
@@ -52,7 +53,7 @@ pub struct ImportCommand {
input_dir: String,
/// The name of the catalog to import.
#[clap(long, default_value = "greptime-*")]
#[clap(long, default_value_t = default_database())]
database: String,
/// Parallelism of the import.
@@ -147,12 +148,15 @@ impl Import {
let _permit = semaphore_moved.acquire().await.unwrap();
let database_input_dir = self.catalog_path().join(&schema);
let sql_file = database_input_dir.join(filename);
let sql = tokio::fs::read_to_string(sql_file)
let mut sql = tokio::fs::read_to_string(sql_file)
.await
.context(FileIoSnafu)?;
if sql.is_empty() {
if sql.trim().is_empty() {
info!("Empty `{filename}` {database_input_dir:?}");
} else {
if filename == "copy_from.sql" {
sql = self.rewrite_copy_database_sql(&schema, &sql)?;
}
let db = exec_db.unwrap_or(&schema);
self.database_client.sql(&sql, db).await?;
info!("Imported `{filename}` for database {schema}");
@@ -225,6 +229,57 @@ impl Import {
}
Ok(db_names)
}
fn rewrite_copy_database_sql(&self, schema: &str, sql: &str) -> Result<String> {
let target_location = self.build_copy_database_location(schema);
let escaped_location = target_location.replace('\'', "''");
let mut first_stmt_checked = false;
for line in sql.lines() {
let trimmed = line.trim_start();
if trimmed.is_empty() || trimmed.starts_with("--") {
continue;
}
ensure!(
trimmed.starts_with("COPY DATABASE"),
InvalidArgumentsSnafu {
msg: "Expected COPY DATABASE statement at start of copy_from.sql"
}
);
first_stmt_checked = true;
break;
}
ensure!(
first_stmt_checked,
InvalidArgumentsSnafu {
msg: "COPY DATABASE statement not found in copy_from.sql"
}
);
ensure!(
sql.contains(COPY_PATH_PLACEHOLDER),
InvalidArgumentsSnafu {
msg: format!(
"Placeholder `{}` not found in COPY DATABASE statement",
COPY_PATH_PLACEHOLDER
)
}
);
Ok(sql.replacen(COPY_PATH_PLACEHOLDER, &escaped_location, 1))
}
fn build_copy_database_location(&self, schema: &str) -> String {
let mut path = self.catalog_path();
path.push(schema);
let mut path_str = path.to_string_lossy().into_owned();
if !path_str.ends_with('/') {
path_str.push('/');
}
path_str
}
}
#[async_trait]
@@ -240,3 +295,52 @@ impl Tool for Import {
}
}
}
#[cfg(test)]
mod tests {
use std::time::Duration;
use super::*;
fn build_import(input_dir: &str) -> Import {
Import {
catalog: "catalog".to_string(),
schema: None,
database_client: DatabaseClient::new(
"127.0.0.1:4000".to_string(),
"catalog".to_string(),
None,
Duration::from_secs(0),
None,
),
input_dir: input_dir.to_string(),
parallelism: 1,
target: ImportTarget::Data,
}
}
#[test]
fn rewrite_copy_database_sql_replaces_placeholder() {
let import = build_import("/tmp/export-path");
let comment = "-- COPY DATABASE \"catalog\".\"schema\" FROM 's3://bucket/demo/' WITH (format = 'parquet') CONNECTION (region = 'us-west-2')";
let sql = format!(
"{comment}\nCOPY DATABASE \"catalog\".\"schema\" FROM '{}' WITH (format = 'parquet');",
COPY_PATH_PLACEHOLDER
);
let rewritten = import.rewrite_copy_database_sql("schema", &sql).unwrap();
let expected_location = import.build_copy_database_location("schema");
let escaped = expected_location.replace('\'', "''");
assert!(rewritten.starts_with(comment));
assert!(rewritten.contains(&format!("FROM '{escaped}'")));
assert!(!rewritten.contains(COPY_PATH_PLACEHOLDER));
}
#[test]
fn rewrite_copy_database_sql_requires_placeholder() {
let import = build_import("/tmp/export-path");
let sql = "COPY DATABASE \"catalog\".\"schema\" FROM '/tmp/export-path/catalog/schema/' WITH (format = 'parquet');";
assert!(import.rewrite_copy_database_sql("schema", sql).is_err());
}
}

View File

@@ -14,15 +14,15 @@
use std::time::Duration;
use base64::engine::general_purpose;
use base64::Engine;
use base64::engine::general_purpose;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::ext::BoxedError;
use humantime::format_duration;
use serde_json::Value;
use servers::http::GreptimeQueryOutput;
use servers::http::header::constants::GREPTIME_DB_HEADER_TIMEOUT;
use servers::http::result::greptime_result_v1::GreptimedbV1Response;
use servers::http::GreptimeQueryOutput;
use snafu::ResultExt;
use crate::error::{

View File

@@ -313,6 +313,14 @@ pub enum Error {
location: Location,
source: common_meta::error::Error,
},
#[snafu(display("Failed to get current directory"))]
GetCurrentDir {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: std::io::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -362,7 +370,9 @@ impl ErrorExt for Error {
Error::BuildRuntime { source, .. } => source.status_code(),
Error::CacheRequired { .. } | Error::BuildCacheRegistry { .. } => StatusCode::Internal,
Error::CacheRequired { .. }
| Error::BuildCacheRegistry { .. }
| Error::GetCurrentDir { .. } => StatusCode::Internal,
Error::MetaClientInit { source, .. } => source.status_code(),
Error::TableNotFound { .. } => StatusCode::TableNotFound,
Error::SchemaNotFound { .. } => StatusCode::DatabaseNotFound,

Some files were not shown because too many files have changed in this diff.