Compare commits

...

79 Commits

Author SHA1 Message Date
yihong
6ded2d267a fix: close issue #6586 make pg also show error as mysql (#6587)
* fix: close issue #6586 make pg also show error as mysql

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: drop useless debug print

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: revert wrong change

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* refactor: convert error types

* refactor: inline

* chore: minimize changes

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make clippy happy

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* refactor: convert datafusion error to ErrorExt

Signed-off-by: Ning Sun <sunning@greptime.com>

* fix: headers ?

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-07-25 09:59:26 +00:00
yihong
3465bedc7f fix: ignore target files in make fmt-check (#6560)
* fix: ignore target files in make fmt-check

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-25 09:54:38 +00:00
dennis zhuang
052033d436 feat: supports more db options (#6529)
* feat: supports more db options

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: use btree map for consistent results

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: adds compaction keys into valid db options

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-25 08:31:11 +00:00
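A minimal sketch of the "use btree map for consistent results" point in the commit above: a BTreeMap iterates its keys in sorted order, so rendering the database options is deterministic across runs (useful for tests and SHOW-style output). The option keys below are illustrative, not the exact set the commit adds.

    use std::collections::BTreeMap;

    fn main() {
        // BTreeMap iterates in key order, so the rendered option list is stable
        // across runs, unlike a HashMap whose iteration order varies.
        let mut opts: BTreeMap<&str, &str> = BTreeMap::new();
        opts.insert("ttl", "7d"); // illustrative keys only
        opts.insert("compaction.twcs.time_window", "1h");
        opts.insert("memtable.type", "time_series");

        let rendered: Vec<String> = opts
            .iter()
            .map(|(k, v)| format!("{k} = '{v}'"))
            .collect();
        println!("WITH ({})", rendered.join(", "));
    }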
Lei, HUANG
c3201c32c3 feat(mito): replace Memtable::iter with Memtable::ranges (#6549)
* bulk-multiparts-merge-reader:
 **Enhance Memtable Iteration and Flushing Logic**

 - **`flush.rs`**: Updated `RegionFlushTask` to handle multiple ranges using `MergeReaderBuilder` for improved source management during flush operations.
 - **`memtable.rs`**: Introduced `build_prune_iter` and `build_iter` methods in `MemtableRange` for flexible iteration. Added `MemtableRanges` struct to manage multiple contexts.
 - **`simple_bulk_memtable.rs`**: Refactored to use `BatchIterBuilder` and `BatchIterBuilderDeprecated` for iteration, supporting new `read_to_values` method in `Series`.
 - **`time_series.rs`**: Added `read_to_values` and `finish_cloned` methods in `Series` and `ValueBuilder` for efficient data handling.
 - **`scan_util.rs`**: Replaced `build_iter` with `build_prune_iter` for range iteration, enhancing scan utility.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 - **Add Rayon for Parallel Processing**: Introduced `rayon` for parallel processing in `simple_bulk_memtable.rs` and updated `Cargo.toml` and `Cargo.lock` to include `rayon` dependency.
 - **Enhance Benchmarking**: Added new benchmarks in `simple_bulk_memtable.rs` to compare parallel vs sequential processing, projection, sequence filtering, and write performance.
 - **Make Structs and Methods Public**: Changed visibility of several structs and methods to `pub` in `simple_bulk_memtable.rs`, `memtable.rs`, `time_series.rs`, and `test_util.rs` to facilitate testing and benchmarking.
 - **Update Criterion Features**: Modified `Cargo.toml` to include `html_reports` feature for `criterion`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 ### Commit Summary

 - **Refactor `SimpleBulkMemtable`**:
   - Moved `ranges_sequential` function to a new `test_only` module and made it a method of `SimpleBulkMemtable`.
   - Made several fields in `SimpleBulkMemtable` private and added a `region_metadata` getter.
   - Affected files: `simple_bulk_memtable.rs`, `test_only.rs`.

 - **Benchmark Adjustments**:
   - Updated benchmark functions to use the new `ranges_sequential` method.
   - Affected file: `simple_bulk_memtable.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 ### Add Test Configuration for `iter` Method in Memtable Implementations

 - **Enhancements**:
   - Added `#[cfg(any(test, feature = "test"))]` attribute to the `iter` method in various `Memtable` implementations to enable conditional compilation for testing purposes.
   - Affected files:
     - `src/mito2/src/memtable.rs`
     - `src/mito2/src/memtable/bulk.rs`
     - `src/mito2/src/memtable/partition_tree.rs`
     - `src/mito2/src/memtable/simple_bulk_memtable.rs`
     - `src/mito2/src/memtable/time_series.rs`
     - `src/mito2/src/test_util/memtable_util.rs`

 - **Benchmark Adjustments**:
   - Removed `black_box` usage in `bench_memtable_write_performance` function to streamline benchmarking.
   - Affected file: `src/mito2/benches/simple_bulk_memtable.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 **Enhance Async Support and Refactor Iteration in `mito2`**

 - **Add Async Features**: Updated `Cargo.toml` to include `async` and `async_tokio` features for `criterion`.
 - **Async Iteration**: Introduced async functions `flush` and `flush_original` in `simple_bulk_memtable.rs` to handle memtable flushing using async iterators.
 - **Refactor Iteration Logic**: Moved `create_iter` and `BatchIterBuilderDeprecated` to `test_only.rs` for better separation of concerns.
 - **Public API Change**: Made `next_batch` in `read.rs` public to support async batch processing.
 - **Benchmark Updates**: Modified benchmarks in `simple_bulk_memtable.rs` to use async runtime for performance testing.

 Files affected: `Cargo.toml`, `simple_bulk_memtable.rs`, `test_only.rs`, `read.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 **Enhance Benchmarking for Memtable**

 - Refactored `create_large_memtable` to `create_memtable_with_rows` in `simple_bulk_memtable.rs` to allow dynamic row count configuration.
 - Introduced parameterized benchmarking in `bench_ranges_parallel_vs_sequential` to test various row counts, improving the flexibility and coverage of performance tests.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 ### Enhance Memory Management and Public API

 - **`builder.rs`**: Made `next_offset` method public to allow external access to offset calculations.
 - **`simple_bulk_memtable.rs`**: Simplified the `series.extend` method by removing the iterator conversion for `fields`.
 - **`time_series.rs`**:
   - Added `can_accommodate` method to `ValueBuilder` to check if fields can be accommodated without offset overflow.
   - Modified `extend` method to use a `Vec` for `fields` instead of an iterator, improving memory management and error handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 Add License and Enhance Testing in `simple_bulk_memtable.rs`

 - Added Apache License header to `simple_bulk_memtable.rs`.
 - Modified test configuration in `simple_bulk_memtable.rs` to include `any(test, feature = "test")`.
 - Introduced a new test `test_write_read_large_string` in `simple_bulk_memtable.rs` to verify handling of large strings.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 Update `Cargo.toml` dependencies

 - Adjust features for `common-meta` and `mito-codec` to include "testing".
 - Maintain `criterion` version and features for async support.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 ### Update Predicate Type in Memtable Iterators

 - **Files Modified**:
   - `src/mito2/src/memtable.rs`
   - `src/mito2/src/memtable/bulk.rs`
   - `src/mito2/src/memtable/simple_bulk_memtable.rs`

 - **Key Changes**:
   - Updated the `iter` method in `Memtable` trait and its implementations to use `Option<table::predicate::Predicate>` instead of `Option<Predicate>`.
   - Adjusted return type in `BulkMemtable`'s `iter` method to `Result<crate::memtable::BoxedBatchIterator>`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 **Enhance Memtable Functionality**

 - **`memtable.rs`**:
   - Added `Clone` trait to `MemtableStats` and made `num_ranges` public.
   - Introduced `num_rows` field in `MemtableRange` and updated its constructor.
   - Added `num_rows` method to `MemtableRange`.

 - **`partition_tree.rs`, `simple_bulk_memtable.rs`, `time_series.rs`**:
   - Updated `MemtableRange` instantiation to include `num_rows`.

 - **`range.rs`**:
   - Refactored `MemRangeBuilder` to handle a single `MemtableRange` and `MemtableStats`.

 - **`scan_region.rs`**:
   - Enhanced memtable filtering based on time range and updated `MemRangeBuilder` usage.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 **Enhancements and Bug Fixes**

 - **Deduplication Enhancements**:
   - Introduced `DedupReader` and `LastRow` as public structs in `dedup.rs` to enhance deduplication capabilities.
   - Added `LastNonNull` deduplication strategy in `flush.rs` and `simple_bulk_memtable.rs`.

 - **Memtable Improvements**:
   - Updated `SimpleBulkMemtable` to support batch size configuration and deduplication strategies.
   - Modified `Series` struct in `time_series.rs` to include a configurable capacity.

 - **Testing Enhancements**:
   - Added new test `test_write_dedup` in `simple_bulk_memtable.rs` to verify deduplication functionality.
   - Updated existing tests to include `OpType` parameter for better operation type handling.

 - **Refactoring**:
   - Renamed `BatchIterBuilder` to `BatchRangeBuilder` in `simple_bulk_memtable.rs` for clarity.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 - **Refactor `flush.rs`:** Removed `LastNonNullIter` usage and adjusted `DedupReader` instantiation to use `LastRow::new(false)` and `LastNonNull::new(false)`.
 - **Enhance `simple_bulk_memtable.rs`:** Added logic to handle `LastNonNull` merge mode in `IterBuilder`. Introduced new tests: `test_delete_only` and `test_single_range` to verify delete operations and single range handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: tests

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-25 03:05:54 +00:00
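A rough, self-contained sketch of the iter-to-ranges shape described in the commit above: instead of handing out a single iterator, the memtable exposes ranges that each build their own iterator, and a merge step recombines them in timestamp order (roughly what the flush path does through a merge reader). All types here are simplified stand-ins, not the real mito2 API.

    // Simplified stand-ins for the real mito2 types; not the actual API.
    type Row = (i64, String); // (timestamp, value)

    #[derive(Clone)]
    struct MemtableRange {
        rows: Vec<Row>,
    }

    impl MemtableRange {
        // Each range builds its own iterator, so ranges can be read independently
        // (and, with something like rayon, in parallel).
        fn build_iter(self) -> impl Iterator<Item = Row> {
            self.rows.into_iter()
        }
    }

    trait Memtable {
        // Replaces a single `iter()` with a set of independently scannable ranges.
        fn ranges(&self) -> Vec<MemtableRange>;
    }

    struct SketchMemtable {
        parts: Vec<MemtableRange>,
    }

    impl Memtable for SketchMemtable {
        fn ranges(&self) -> Vec<MemtableRange> {
            self.parts.clone()
        }
    }

    // Roughly what a merge reader does during flush/scan: fold all range
    // iterators back into one timestamp-ordered stream.
    fn merge_ranges(ranges: Vec<MemtableRange>) -> Vec<Row> {
        let mut all: Vec<Row> = ranges.into_iter().flat_map(|r| r.build_iter()).collect();
        all.sort_by_key(|(ts, _)| *ts);
        all
    }

    fn main() {
        let memtable = SketchMemtable {
            parts: vec![
                MemtableRange { rows: vec![(3, "c".into()), (1, "a".into())] },
                MemtableRange { rows: vec![(2, "b".into())] },
            ],
        };
        let merged = merge_ranges(memtable.ranges());
        assert_eq!(merged[0].0, 1);
        println!("{merged:?}");
    }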
Zhenchi
dc01511adc refactor: explicitly accept path type as param (#6583)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-24 09:56:41 +00:00
Lanqing Yang
b01be5efc5 feat: move metasrv admin to http server while keeping tonic for backward compatibility (#6466)
* feat: move metasrv admin to http server while keeping tonic for backward compatibility

Signed-off-by: lyang24 <lanqingy93@gmail.com>

* refactor with nest method

Signed-off-by: lyang24 <lanqingy93@gmail.com>

---------

Signed-off-by: lyang24 <lanqingy93@gmail.com>
Co-authored-by: lyang24 <lanqingy@usc.edu>
2025-07-24 09:11:04 +00:00
Zhenchi
5908febd6c refactor: remove unused PartitionDef (#6573)
* refactor: remove unused PartitionDef

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix snafu

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-24 07:55:51 +00:00
Ruihang Xia
076e20081b feat: add RegionId to FileId (#6410)
* refactor: remove staled manifest structures

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add RegionId to FileId

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor: introduce RegionFileId

- FileId still only consist of an uuid
- PathProvider accepts RegionFileId and doesn't need to keep a region id
  in it
- All Index applier takes RegionFileId and respects the region id in the RegionFileId
- FileMeta can still derive Serialize/Deserialize
- Refactor the CacheManager to accept RegionFileId

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: define PathType

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: adding PathType WIP

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: add path_type to region_dir_from_table_dir

Move region_dir_from_table_dir to mito and use join_dir internally

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: set path type to ApplierBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix passing incorrect dir to access layer

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove region_dir from CompactionRegion

We can get table_dir and path_type from the access layer

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix unit tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix typo

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: correct marker path

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use AccessLayer::build_region_dir to get region dir

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: log entries in test

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: set path type in catchup

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix test_open_region_failure test

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-07-24 06:29:38 +00:00
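An illustrative sketch of the RegionFileId idea from the commit above: FileId stays a bare UUID, while path construction pairs it with a RegionId and a PathType. The struct names follow the commit message, but the path layout and helper functions are assumptions, not the actual mito2 code.

    // Illustrative only; the real types live in mito2 and differ in detail.
    #[derive(Clone, Copy, Debug, PartialEq, Eq, Hash)]
    struct RegionId(u64);

    #[derive(Clone, Debug, PartialEq, Eq, Hash)]
    struct FileId(String); // still just a UUID

    // Pairs the UUID with the region it belongs to, so path providers and
    // index appliers no longer need to carry a region id themselves.
    #[derive(Clone, Debug)]
    struct RegionFileId {
        region_id: RegionId,
        file_id: FileId,
    }

    #[derive(Clone, Copy, Debug)]
    enum PathType {
        Bare,
        Data,
    }

    fn region_dir_from_table_dir(table_dir: &str, region_id: RegionId, path_type: PathType) -> String {
        // Hypothetical layout; the real on-disk layout is defined by mito2.
        match path_type {
            PathType::Bare => format!("{table_dir}/{}", region_id.0),
            PathType::Data => format!("{table_dir}/data/{}", region_id.0),
        }
    }

    fn sst_path(table_dir: &str, file: &RegionFileId, path_type: PathType) -> String {
        let region_dir = region_dir_from_table_dir(table_dir, file.region_id, path_type);
        format!("{region_dir}/{}.parquet", file.file_id.0)
    }

    fn main() {
        let file = RegionFileId {
            region_id: RegionId(42),
            file_id: FileId("0b0e-placeholder-uuid".to_string()),
        };
        println!("{}", sst_path("my_table", &file, PathType::Data));
    }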
yihong
61d83bc36b fix: close issue #6555 return empty result (#6569)
* fix: close issue #6555 return empty result

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: only start one instance one regex sqlness test (#6570)

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* refactor: refactor partition mod to use PartitionExpr instead of PartitionDef (#6554)

* refactor: refactor partition mod to use PartitionExpr instead of PartitionDef

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix snafu

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Puts expression into PbPartition

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix compile

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update proto

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add serde test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add serde test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-24 03:30:59 +00:00
discord9
8d71213ea6 fix: aggr group by all partition cols use partial commutative (#6534)
* fix: aggr group by all partition cols use partial commutative

Signed-off-by: discord9 <discord9@163.com>

* test: bugged case

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness fix

Signed-off-by: discord9 <discord9@163.com>

* test: more redacted

Signed-off-by: discord9 <discord9@163.com>

* more cases

Signed-off-by: discord9 <discord9@163.com>

* even more test cases

Signed-off-by: discord9 <discord9@163.com>

* join testcase

Signed-off-by: discord9 <discord9@163.com>

* fix: column requirement added in correct location

Signed-off-by: discord9 <discord9@163.com>

* fix test

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* track col reqs per stack

Signed-off-by: discord9 <discord9@163.com>

* fix: continue

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* refactor: test mod

Signed-off-by: discord9 <discord9@163.com>

* test utils

Signed-off-by: discord9 <discord9@163.com>

* test: better test

Signed-off-by: discord9 <discord9@163.com>

* more testcases

Signed-off-by: discord9 <discord9@163.com>

* test limit push down

Signed-off-by: discord9 <discord9@163.com>

* more testcases

Signed-off-by: discord9 <discord9@163.com>

* more testcase

Signed-off-by: discord9 <discord9@163.com>

* more test

Signed-off-by: discord9 <discord9@163.com>

* chore: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: update comments

Signed-off-by: discord9 <discord9@163.com>

* fix: check col reqs from bottom to upper

Signed-off-by: discord9 <discord9@163.com>

* chore: more comment

Signed-off-by: discord9 <discord9@163.com>

* docs: more todo

Signed-off-by: discord9 <discord9@163.com>

* chore: comments

Signed-off-by: discord9 <discord9@163.com>

* test: a new failing test that should be fixed

Signed-off-by: discord9 <discord9@163.com>

* fix: part col alias tracking

Signed-off-by: discord9 <discord9@163.com>

* chore: unused

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* docs: comment

Signed-off-by: discord9 <discord9@163.com>

* more testcase

Signed-off-by: discord9 <discord9@163.com>

* more testcase for step/part aggr combine

Signed-off-by: discord9 <discord9@163.com>

* FIXME: a new bug

Signed-off-by: discord9 <discord9@163.com>

* literally unfixable

Signed-off-by: discord9 <discord9@163.com>

* chore: remove some debug print

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-23 09:05:34 +00:00
Zhenchi
2298227e0c refactor: refactor partition mod to use PartitionExpr instead of PartitionDef (#6554)
* refactor: refactor partition mod to use PartitionExpr instead of PartitionDef

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix snafu

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Puts expression into PbPartition

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix compile

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update proto

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add serde test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add serde test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-23 03:51:28 +00:00
yihong
b51bafe9d5 fix: only start one instance one regex sqlness test (#6570)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-23 03:14:06 +00:00
ZonaHe
6e324adf02 feat: update dashboard to v0.10.4 (#6568)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-07-22 15:38:47 +00:00
ZonaHe
02a9edef8b feat: update dashboard to v0.10.3 (#6566)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-07-22 06:05:03 +00:00
Weny Xu
71b564b4aa refactor: support multiple index operations in single alter region request (#6487)
* refactor: support multiple index operations in single alter region request

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update greptime-proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-22 03:10:42 +00:00
Ning Sun
2d67f9ae8b fix: windows path in error tests (#6564) 2025-07-21 09:07:18 +00:00
discord9
6501b0b13c feat: MergeScan print input (#6563)
* feat: MergeScan print input

Signed-off-by: discord9 <discord9@163.com>

* test: fix ut

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-21 06:40:15 +00:00
zyy17
dfe9eeb15c fix: unit test failed by adding necessary sleep function to ensure the time sequence (#6548)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-21 03:47:43 +00:00
dennis zhuang
78b1c6c554 feat: impl timestamp function for promql (#6556)
* feat: impl timestamp function for promql

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style and typo

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* docs: update comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: comment

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-21 03:46:41 +00:00
yihong
1c8e8b96c1 docs: fix all dead links using lychee (#6559)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-19 14:04:06 +00:00
discord9
73a3ac1320 fix: flow mirror cache (#6551)
* fix: invalid cache when flownode change address

Signed-off-by: discord9 <discord9@163.com>

* update comments

Signed-off-by: discord9 <discord9@163.com>

* fix

Signed-off-by: discord9 <discord9@163.com>

* refactor: add log&rename

Signed-off-by: discord9 <discord9@163.com>

* stuff

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-18 12:21:01 +00:00
Yan Tingwang
efefddbc85 test: add sqlness test for max execution time (#6517)
* add sqlness test for max_execution_time

Signed-off-by: codephage. <tingwangyan2020@163.com>

* add Pre-line comments SQLNESS PROTOCOL MYSQL

Signed-off-by: codephage. <tingwangyan2020@163.com>

* fix(mysql): support max_execution_time variable

Co-authored-by: evenyag <realevenyag@gmail.com>
Signed-off-by: codephage. <tingwangyan2020@163.com>

* fix: test::test_check & sqlness test mysql

Signed-off-by: codephage. <tingwangyan2020@163.com>

* add sqlness test for max_execution_time

Signed-off-by: codephage. <tingwangyan2020@163.com>

* add Pre-line comments SQLNESS PROTOCOL MYSQL

Signed-off-by: codephage. <tingwangyan2020@163.com>

* fix(mysql): support max_execution_time variable

Co-authored-by: evenyag <realevenyag@gmail.com>
Signed-off-by: codephage. <tingwangyan2020@163.com>

* fix: test::test_check & sqlness test mysql

Signed-off-by: codephage. <tingwangyan2020@163.com>

* chore: Unify the sql style

Signed-off-by: codephage. <tingwangyan2020@163.com>

---------

Signed-off-by: codephage. <tingwangyan2020@163.com>
Co-authored-by: evenyag <realevenyag@gmail.com>
2025-07-18 12:18:14 +00:00
localhost
58d2b7764f chore: Add explicit channels to Grpc and Prometheus query contexts (#6552) 2025-07-18 08:37:48 +00:00
fys
0dad8fc4a8 fix: estimate mem size for bulk ingester (#6550) 2025-07-18 08:03:07 +00:00
LFC
370d27587a refactor: make greptimedb's tests run as a submodule (#6544)
fix: failed to run a test when run as a submodule

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-18 08:01:30 +00:00
discord9
77b540ff68 feat: state/merge wrapper for aggr func (#6377)
* refactor: move to query crate

Signed-off-by: discord9 <discord9@163.com>

* refactor: split to multiple columns

Signed-off-by: discord9 <discord9@163.com>

* feat: aggr merge accum wrapper

Signed-off-by: discord9 <discord9@163.com>

* rename shorter

Signed-off-by: discord9 <discord9@163.com>

* feat: add all in one helper

Signed-off-by: discord9 <discord9@163.com>

* tests: sum&avg

Signed-off-by: discord9 <discord9@163.com>

* chore: allow unused

Signed-off-by: discord9 <discord9@163.com>

* chore: typos

Signed-off-by: discord9 <discord9@163.com>

* refactor: per ds

Signed-off-by: discord9 <discord9@163.com>

* chore: fix tests

Signed-off-by: discord9 <discord9@163.com>

* refactor: move to common-function

Signed-off-by: discord9 <discord9@163.com>

* WIP massive refactor

Signed-off-by: discord9 <discord9@163.com>

* typo

Signed-off-by: discord9 <discord9@163.com>

* todo: stuff

Signed-off-by: discord9 <discord9@163.com>

* refactor: state2input type

Signed-off-by: discord9 <discord9@163.com>

* chore: rm unused

Signed-off-by: discord9 <discord9@163.com>

* refactor: per bot review

Signed-off-by: discord9 <discord9@163.com>

* chore: per bot

Signed-off-by: discord9 <discord9@163.com>

* refactor: rm duplicate infer type

Signed-off-by: discord9 <discord9@163.com>

* chore: better test

Signed-off-by: discord9 <discord9@163.com>

* fix: test sum refactor&fix wrong state types

Signed-off-by: discord9 <discord9@163.com>

* test: refactor avg udaf test

Signed-off-by: discord9 <discord9@163.com>

* refactor: split files

Signed-off-by: discord9 <discord9@163.com>

* refactor: docs&dedup

Signed-off-by: discord9 <discord9@163.com>

* refactor: allow merge to carry extra info

Signed-off-by: discord9 <discord9@163.com>

* chore: rm unused

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* chore: docs&unused

Signed-off-by: discord9 <discord9@163.com>

* refactor: check fields equal

Signed-off-by: discord9 <discord9@163.com>

* test: test count_hash

Signed-off-by: discord9 <discord9@163.com>

* test: more custom udaf

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-17 17:37:40 +00:00
Lei, HUANG
0cf25f7b05 feat: add sst file num in region stat (#6537)
* feat/add-sst-file-num-in-region-stat:
 ### Add SST File Count to Region Statistics

 - **Enhancements**:
   - Added `sst_num` to track the number of SST files in region statistics across multiple modules.
   - Updated `RegionStat` and `RegionStatistic` structs in `datanode.rs` and `region_engine.rs` to include `sst_num`.
   - Modified `MitoRegion` and `SstVersion` in `region.rs` and `version.rs` to compute and return the number of SST files.
   - Adjusted test cases in `collect_leader_region_handler.rs`, `failure_handler.rs`, `region_lease_handler.rs`, and `weight_compute.rs` to initialize `sst_num`.
   - Updated `get_region_statistic` in `utils.rs` to sum `sst_num` from metadata and data statistics.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/add-sst-file-num-in-region-stat:
 Add `sst_num` to `region_statistics`

 - Updated `region_statistics.rs` to include a new constant `SST_NUM` and added it to the schema and builder structures.
 - Modified `information_schema.result` to reflect the addition of `sst_num` in the `region_statistics` table.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-17 17:36:20 +00:00
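A tiny sketch of the bookkeeping described in the commit above, with made-up struct shapes: the SST version counts files across its levels, and the region statistic sums the counts from the metadata and data parts.

    // Made-up shapes; the real SstVersion/RegionStatistic live in mito2 and store-api.
    struct FileHandle {
        file_size: u64,
    }

    struct SstVersion {
        // SST files organized by level.
        levels: Vec<Vec<FileHandle>>,
    }

    impl SstVersion {
        // Number of SST files across all levels, reported in region statistics.
        fn sst_num(&self) -> u64 {
            self.levels.iter().map(|level| level.len() as u64).sum()
        }
    }

    struct RegionStatistic {
        sst_num: u64,
    }

    fn region_statistic(metadata_ssts: &SstVersion, data_ssts: &SstVersion) -> RegionStatistic {
        RegionStatistic {
            // The total is the sum over the metadata and data parts of the region.
            sst_num: metadata_ssts.sst_num() + data_ssts.sst_num(),
        }
    }

    fn main() {
        let data = SstVersion { levels: vec![vec![FileHandle { file_size: 1 }], vec![]] };
        let meta = SstVersion { levels: vec![vec![]] };
        assert_eq!(region_statistic(&meta, &data).sst_num, 1);
    }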
Yingwen
c23b26461c feat: add metrics for request wait time and adjust stall metrics (#6540)
* feat: add metric greptime_mito_request_wait_time to observe wait time

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add worker to wait time metric

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename stall gauge to greptime_mito_write_stalling_count

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: change greptime_mito_write_stall_total to total stalled requests

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: merge lazy static blocks

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-17 17:17:51 +00:00
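A hedged sketch of the wait-time histogram added above, using the prometheus and lazy_static crates. The metric name and the worker label come from the commit message; everything around them (default buckets, the call site) is an assumption, not the actual mito2 code.

    use std::time::Instant;

    use lazy_static::lazy_static;
    use prometheus::{register_histogram_vec, HistogramVec};

    lazy_static! {
        // Metric/label names follow the commit message; registration against the
        // default registry with default buckets is an assumption.
        static ref REQUEST_WAIT_TIME: HistogramVec = register_histogram_vec!(
            "greptime_mito_request_wait_time",
            "Time a write request waits in the worker queue before being handled",
            &["worker"]
        )
        .unwrap();
    }

    fn handle_request(worker_id: usize, enqueued_at: Instant) {
        let worker = worker_id.to_string();
        // Observe how long the request sat in the queue, labelled per worker.
        REQUEST_WAIT_TIME
            .with_label_values(&[worker.as_str()])
            .observe(enqueued_at.elapsed().as_secs_f64());
        // ... actual request handling would follow here ...
    }

    fn main() {
        handle_request(0, Instant::now());
    }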
zyy17
90ababca97 fix: typo for existed -> exited (#6547)
chore: `existed` -> `exited`

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-17 12:27:29 +00:00
dennis zhuang
37dc057423 feat: adds uptime telemetry (#6545)
* feat: adds uptime telemetry

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove seconds and minutes

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-17 09:55:28 +00:00
Weny Xu
8237646055 feat: add table reconciliation utilities (#6519)
* feat: add table reconciliation utilities

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update comment

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-17 08:05:38 +00:00
Lei, HUANG
011dd51495 chore: update opendal dashboard (#6541)
* chore/update-opendal-dashboard:
 ### Update Grafana Dashboard Queries

 - **Enhanced Metrics Queries**: Updated Prometheus queries in `dashboard.json`, `dashboard.md`, and `dashboard.yaml` files for both `cluster` and `standalone` dashboards to include additional operations (`Reader::read`, `Writer::write`, `Writer::close`) in the metrics calculations.
 - **Legend Format Adjustments**: Modified legend formats to include the `operation` field for better clarity in visualizations.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-opendal-dashboard:
 Enhance Legend Format in Grafana Dashboards

 - Updated the `legendFormat` in `dashboard.json`, `dashboard.md`, and `dashboard.yaml` files for both `cluster` and `standalone` dashboards to include the `operation` field.
 - This change affects the following files:
   - `grafana/dashboards/metrics/cluster/dashboard.json`
   - `grafana/dashboards/metrics/cluster/dashboard.md`
   - `grafana/dashboards/metrics/cluster/dashboard.yaml`
   - `grafana/dashboards/metrics/standalone/dashboard.json`
   - `grafana/dashboards/metrics/standalone/dashboard.md`
   - `grafana/dashboards/metrics/standalone/dashboard.yaml`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-17 07:17:46 +00:00
Zhenchi
eb99e439c7 feat: MatchesConstTerm displays probes (#6518)
* feat: `MatchesConstTerm` displays probes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-17 07:01:55 +00:00
Zhenchi
50148b25b5 fix: row selection intersection removes trailing rows (#6539)
* fix: row selection intersection removes trailing rows

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix typos

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-16 17:00:10 +00:00
Ruihang Xia
639b3ddc3e feat: update partial execution metrics (#6499)
* feat: update partial execution metrics

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* send data with metrics in distributed mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* only send partial metrics under VERBOSE flag

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* loop to while

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-16 16:59:10 +00:00
discord9
139a36a459 fix: breaking loop when not retryable (#6538)
fix: breaking when not retryable

Signed-off-by: discord9 <discord9@163.com>
2025-07-16 09:22:41 +00:00
LFC
e6b9d09901 feat: Flight supports RecordBatch with dictionary arrays (#6521)
* feat: Flight supports RecordBatch with dictionary arrays

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-16 09:21:12 +00:00
discord9
bbc9f3ea1e refactor: expose flow batching mode constants to config (#6442)
* refactor: make flow batching mode constant to configs

Signed-off-by: discord9 <discord9@163.com>

* docs: config docs

Signed-off-by: discord9 <discord9@163.com>

* docs: update code comment

Signed-off-by: discord9 <discord9@163.com>

* test: fix test_config_api

Signed-off-by: discord9 <discord9@163.com>

* feat: more batch opts

Signed-off-by: discord9 <discord9@163.com>

* fix after rebase

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* per review experimental options

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-16 08:05:20 +00:00
zyy17
fac8c3e62c feat: introduce common event recorder (#6501)
* feat: introduce common event recorder

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: use `backon` as retry lib

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: remove unused error

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add CancellationToken

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `cancel()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: `cancel()` -> `close()`
chore: `cancel()` -> `close()` and polish some code

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-16 03:59:41 +00:00
dennis zhuang
411eb768b1 feat: supports null response format for http API (#6531)
* feat: supports null response format for http API

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: license header and assertion

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: in seconds

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-16 03:46:39 +00:00
shuiyisong
95d2549007 chore: minor update for using pipeline with prometheus (#6522)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-16 03:03:07 +00:00
Lei, HUANG
6744f5470b fix(grpc): check grpc client unavailable (#6488)
* fix/check-grpc-client-unavailable:
 Improve async handling in `greptime_handler.rs`

 - Updated the `DoPut` response handling to use `await` with `result_sender.send` for better asynchronous operation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/check-grpc-client-unavailable:
 ### Improve Error Handling in `greptime_handler.rs`

 - Enhanced error handling for the `DoPut` operation by switching from `send` to `try_send` for the `result_sender`.
 - Added specific logging for unreachable clients, including `request_id` in the warning message.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-15 14:32:45 +00:00
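A small sketch of the send-to-try_send switch described above, using a tokio mpsc channel: try_send fails immediately instead of waiting on a slow or disconnected client, so the handler can log the unreachable client together with its request_id and move on. The types and the surrounding gRPC plumbing are illustrative, not the actual handler code.

    use tokio::sync::mpsc;

    #[derive(Debug)]
    struct DoPutResponse {
        request_id: u64,
    }

    // With `send(..).await` the handler would block when the client stops reading;
    // `try_send` returns an error right away so we can log and continue.
    fn send_result(result_sender: &mpsc::Sender<DoPutResponse>, resp: DoPutResponse) {
        if let Err(e) = result_sender.try_send(resp) {
            match e {
                mpsc::error::TrySendError::Full(resp) => {
                    eprintln!("client not keeping up, dropping response, request_id: {}", resp.request_id);
                }
                mpsc::error::TrySendError::Closed(resp) => {
                    eprintln!("client unreachable, request_id: {}", resp.request_id);
                }
            }
        }
    }

    #[tokio::main]
    async fn main() {
        let (tx, mut rx) = mpsc::channel(1);
        send_result(&tx, DoPutResponse { request_id: 1 });
        println!("{:?}", rx.recv().await);
    }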
localhost
9d05c7c5fa chore: subdivision of different channels (#6526)
* chore: subdivision of different channels

* chore: add as_str for channel

* chore: fix by pr comment

* chore: fix by pr comment
2025-07-15 09:22:01 +00:00
localhost
9442284fd4 chore: add db label for greptime_table_operator_ingest_rows (#6520)
* chore: add db label for greptime_table_operator_ingest_rows

* chore: add db label for greptime_table_operator_delete_rows
2025-07-15 07:16:44 +00:00
Weny Xu
1065db9518 fix: fix state transition in create table procedure (#6523)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-15 06:06:27 +00:00
Lei, HUANG
2f9a10ab74 refactor: expose bulk symbols (#6467)
* wip

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 ### Commit Message

 Enhance DDL Module Accessibility and Refactor `verify_alter` Function

 - **`statement.rs`**: Made the `ddl` module public to enhance accessibility.
 - **`ddl.rs`**:
   - Made `NAME_PATTERN_REG` public for broader usage.
   - Refactored `verify_alter` function to be a standalone public function, improving modularity and reusability.
   - Made `parse_partitions` function public to allow external access.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 ### Add Parquet Writer and Enhance Row Modifier

 - **Add Parquet Writer Module**: Introduced a new module `parquet_writer.rs` to bridge `opendal` `Writer` with `parquet` `AsyncFileWriter`.
 - **Enhance Row Modifier**: Updated `RowModifier` to use `Default` trait and made `fill_internal_columns` a public static method in `row_modifier.rs`.
 - **Expose Internal Structures**: Made `RowsIter`, `RowIter`, `TablesBuilder`, and `TableBuilder` structs public in `row_modifier.rs` and `prom_row_builder.rs`.
 - **Update Metric Engine**: Changed `RowModifier` instantiation to use `default()` in `engine.rs`.
 - **Modify Table Options Handling**: Added `fill_table_options_for_create` function in `insert.rs` to handle table options based on `AutoCreateTableType`.
 - **Make Constants Public**: Changed `DEFAULT_ROW_GROUP_SIZE` to public in `parquet.rs`.
 - **Expose Functions**: Made `extract_add_columns_expr` public in `expr_helper.rs` and `AutoCreateTableType` public in `insert.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 ### Commit Message

 Enhance HTTP Server and Prometheus Integration

 - **`http.rs`**: Made `extractor` module public to allow external access.
 - **`prom_store.rs`**: Refactored `decode_remote_write_request` to return `TablesBuilder` and adjusted logic for processing requests based on pipeline usage.
 - **`lib.rs`**: Made `metrics` module public for broader accessibility.
 - **`prom_row_builder.rs`**: Exposed `tables` field in `TablesBuilder` for external manipulation.
 - **`proto.rs`**: Changed visibility of `table_data` in `PromWriteRequest` to `pub(crate)` for internal module access.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 ### Add Accessor Methods for Managers and Executors

 - **`src/frontend/src/instance.rs`**: Added accessor methods for `NodeManagerRef`, `PartitionRuleManagerRef`, `CacheInvalidatorRef`, and `ProcedureExecutorRef` to the `Instance` struct.
 - **`src/operator/src/insert.rs`**: Introduced methods to access `NodeManagerRef` and `PartitionRuleManagerRef` in the `Inserter` struct.
 - **`src/operator/src/statement.rs`**: Added methods to retrieve `ProcedureExecutorRef` and `CacheInvalidatorRef` in the `StatementExecutor` struct.

 ### Change HashMap Implementation

 - **`src/servers/src/prom_row_builder.rs`**: Replaced `ahash::HashMap` with `std::collections::HashMap`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 Refactor table option handling in `insert.rs`

 - Replaced `Vec` with `HashMap` for `table_options` to improve efficiency.
 - Extracted logic for filling table options into a new function `fill_table_options_for_create`.
 - Modified `fill_table_options_for_create` to return the engine name based on `create_type`.
 - Simplified the insertion of table options into `create_table_expr` by using `extend` method.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 Refactor `insert.rs` to separate engine name logic from table options

 - Updated `Inserter` implementation to determine `engine_name` separately from `fill_table_options_for_create`.
 - Modified `fill_table_options_for_create` to no longer return an engine name, focusing solely on populating table options.
 - Adjusted logic to set `engine_name` based on `AutoCreateTableType`, using `METRIC_ENGINE_NAME` for logical tables and `default_engine()` otherwise.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-14 07:32:01 +00:00
Ruihang Xia
d82bc98717 feat(parser): parse TQL in CTE position (#6456)
* naive implementation

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor to use existing tql parse logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor display logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor column list parsing logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor to remove redundant check logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* set sql cte into Query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-14 06:44:56 +00:00
shuiyisong
582bcc3b14 feat(pipeline): filter processor (#6502)
* feat: add filter processor

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* test: add tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change target to list and use `in` and `not_in`

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: rebase main and fix error

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-13 23:18:42 +00:00
zyy17
e5e10fd362 refactor: add the active_frontends() in PeerLookupService (#6504)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-11 09:42:00 +00:00
zyy17
104d607b3f refactor: support to specify ttl in open_compaction_region() (#6515)
refactor: add `ttl` in `open_compaction_region()` and `CompactionJob`

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-11 09:00:05 +00:00
zyy17
93e3a04aa8 refactor: add row_inserts() and row_inserts_with_hints(). (#6503)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-11 08:06:36 +00:00
Weny Xu
c1847e6b6a chore: change log level for region not found during lease renewal (#6513)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-11 07:51:29 +00:00
discord9
d258739c26 refactor(flow): faster time window expr (#6495)
* refactor: faster window expr

Signed-off-by: discord9 <discord9@163.com>

* docs: explain fast path

Signed-off-by: discord9 <discord9@163.com>

* chore: rm unwrap

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-11 06:43:14 +00:00
Yan Tingwang
914086668d fix: add system variable max_execution_time (#6511)
add system variable: max_execution_time

Signed-off-by: codephage. <tingwangyan2020@163.com>
2025-07-11 02:11:21 +00:00
localhost
01a8ad1304 chore: add prom store metrics (#6508)
chore: add metrics for db
2025-07-10 17:09:58 +00:00
shuiyisong
1594859957 refactor: replace pipeline::value with vrl::value (#6430)
* chore: pass compile

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: default case

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove and move code

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove serde_value to vrlvalue conversion

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: optimized vrl value related code

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: loki transform using vrl

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: remove unused error

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: use from_utf8_lossy_owned

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: CR issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-10 17:08:31 +00:00
Ruihang Xia
351a77a2e5 fix: expand on conditional commutative as well (#6484)
* fix: expand on conditional commutative as well

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* add logging to figure test failure

Signed-off-by: discord9 <discord9@163.com>

* revert

Signed-off-by: discord9 <discord9@163.com>

* feat: stream drop record metrics

Signed-off-by: discord9 <discord9@163.com>

* Revert "feat: stream drop record metrics"

This reverts commit 6a16946a5b8ea37557bbb1b600847d24274d6500.

Signed-off-by: discord9 <discord9@163.com>

* feat: stream drop record metrics

Signed-off-by: discord9 <discord9@163.com>

refactor: move logging to drop too

Signed-off-by: discord9 <discord9@163.com>

fix: drop input stream before collect metrics

Signed-off-by: discord9 <discord9@163.com>

* fix: expand differently

Signed-off-by: discord9 <discord9@163.com>

* test: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: more dbg

Signed-off-by: discord9 <discord9@163.com>

* Revert "feat: stream drop record metrics"

This reverts commit 3eda4a2257928d95cf9c1328ae44fae84cfbb017.

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness redacted

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: discord9 <discord9@163.com>
2025-07-10 15:13:52 +00:00
shuiyisong
7723cba7da chore: skip calc ts in doc 2 with transform (#6509)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-10 13:10:02 +00:00
localhost
dd7da3d2c2 chore: remove region id to reduce time series (#6506) 2025-07-10 12:33:06 +00:00
Weny Xu
ffe0da0405 fix: correctly update partition key indices during alter table operations (#6494)
* fix: correctly update partition key indices in alter table operations

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add sqlness tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-10 08:08:07 +00:00
Ning Sun
f2c7b09825 fix: add tracing dependencies (#6497) 2025-07-10 03:01:31 +00:00
Yingwen
3583b3204f feat: override batch sequence by the sequence in FileMeta (#6492)
* feat: support overriding read sequence by sequence in the FileMeta

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test for parquet reader

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update need_override_sequence to check all row groups

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-10 02:26:25 +00:00
Copilot
fea8bc5ee7 chore(comments): fix typo and grammar issues (#6496)
* Initial plan

* Fix 5 TODO comments: spelling typos and formatting issues

Co-authored-by: waynexia <15380403+waynexia@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: waynexia <15380403+waynexia@users.noreply.github.com>
2025-07-10 02:24:42 +00:00
Yingwen
40363bfc0f fix: range query returns range selector error when table not found (#6481)
* test: add sqlness test for range vector with non-existence metric

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle empty metric for matrix selector

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update sqlness result

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add newline

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-10 01:53:23 +00:00
jeremyhi
85c0136619 fix: greptime timestamp display null (#6469)
* feat: is_overflow method

* feat: check ts overflow
2025-07-10 01:53:00 +00:00
dennis zhuang
b70d998596 feat: improve install script (#6490)
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-09 17:04:20 +00:00
LFC
2f765c8fd4 refactor: remove unnecessary args (#6493)
* x

Signed-off-by: luofucong <luofc@foxmail.com>

* refactor: remove unnecessary args

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-09 13:23:15 +00:00
shuiyisong
d99cd98c01 fix: skip nan in prom remote write pipeline (#6489)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-09 11:46:07 +00:00
Weny Xu
a858f55257 refactor(meta): separate validation and execution logic in alter logical tables procedure (#6478)
* refactor(meta): separate validation and execution logic in alter logical tables procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-09 06:48:27 +00:00
Ning Sun
916967ea59 feat: allow alternative version string (#6472)
* feat: allow alternative version string

* refactor: rename original version function to verbose_version

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-07-09 06:43:01 +00:00
Weny Xu
c58d8aa94a refactor(meta): extract AlterTableExecutor from AlterTableProcedure (#6470)
* refactor(meta): extract `AlterTableExecutor` from `AlterTableProcedure`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-09 05:13:19 +00:00
Ning Sun
eeb061ca74 feat: allow float number literal in step (#6483)
* chore: allow float number literal as step

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: switch to released version of promql parser

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-07-09 03:09:09 +00:00
shuiyisong
f7282fde28 chore: sort range query return values (#6474)
* chore: sort range query return values

* chore: add comments

* chore: add is_sorted check

* fix: test
2025-07-09 02:27:12 +00:00
dennis zhuang
a4bd11fb9c fix: empty statements hang (#6480)
* fix: empty statements hang

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* tests: add cases

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-09 02:13:14 +00:00
LFC
6dc9e8ddb4 feat: display extension ranges in "explain" (#6475)
* feat: display extension ranges in "explain"

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-09 02:11:23 +00:00
discord9
af03e89139 fix: stricter win sort condition (#6477)
test: sqlness



test: fix sqlness redacted

Signed-off-by: discord9 <discord9@163.com>
2025-07-08 22:27:17 +00:00
jeremyhi
e7a64b7dc0 chore: refactor register_region method to avoid TOCTOU issues (#6468) 2025-07-08 13:26:38 +00:00
Lin Yihai
29739b556e refactor: split some convert function into sql-common crate (#6452)
refactor: split some convert function into `sql-common` crates

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>
2025-07-08 12:08:33 +00:00
Lei, HUANG
77e50d0e08 chore: expose some config (#6479)
refactor/expose-config:
 ### Make SubCommand and Fields Public in `frontend.rs`

 - Made `subcmd` field in `Command` struct public.
 - Made `SubCommand` enum public.
 - Made `config_file` and `env_prefix` fields in `StartCommand` struct public.

 These changes enhance the accessibility of command-related structures and fields, facilitating external usage and integration.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-08 11:52:23 +00:00
444 changed files with 20174 additions and 8007 deletions


@@ -12,3 +12,6 @@ fetch = true
checkout = true
list_files = true
internal_use_git2 = false
[env]
CARGO_WORKSPACE_DIR = { value = "", relative = true }
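The hunk above (apparently from a Cargo config file, since an `[env]` table with `relative = true` is Cargo's config syntax) makes Cargo export CARGO_WORKSPACE_DIR during builds; the empty value is resolved against the config file's location, which here effectively points at the workspace root. A minimal sketch of how workspace code could consume it at compile time, assuming that entry is in effect (the fixture path is hypothetical):

    use std::path::Path;

    fn main() {
        // env! reads the variable at compile time; Cargo's [env] table makes it
        // available to rustc, so this only builds where that config applies.
        let workspace_dir = Path::new(env!("CARGO_WORKSPACE_DIR"));
        let fixture = workspace_dir.join("tests").join("data"); // hypothetical path
        println!("{}", fixture.display());
    }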

.gitignore vendored

@@ -60,4 +60,7 @@ tests-fuzz/corpus/
greptimedb_data
# github
!/.github
!/.github
# Claude code
CLAUDE.md


@@ -10,12 +10,10 @@
* [NiwakaDev](https://github.com/NiwakaDev)
* [tisonkun](https://github.com/tisonkun)
## Team Members (in alphabetical order)
* [apdong2022](https://github.com/apdong2022)
* [beryl678](https://github.com/beryl678)
* [Breeze-P](https://github.com/Breeze-P)
* [daviderli614](https://github.com/daviderli614)
* [discord9](https://github.com/discord9)
* [evenyag](https://github.com/evenyag)

Cargo.lock generated

@@ -1602,6 +1602,17 @@ version = "1.0.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "acbc26382d871df4b7442e3df10a9402bf3cf5e55cbd66f12be38861425f0564"
[[package]]
name = "cargo-manifest"
version = "0.19.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1d8af896b707212cd0e99c112a78c9497dd32994192a463ed2f7419d29bd8c6"
dependencies = [
"serde",
"thiserror 2.0.12",
"toml 0.8.19",
]
[[package]]
name = "cast"
version = "0.3.0"
@@ -2291,6 +2302,27 @@ dependencies = [
"tonic 0.12.3",
]
[[package]]
name = "common-event-recorder"
version = "0.16.0"
dependencies = [
"api",
"async-trait",
"backon",
"client",
"common-error",
"common-macro",
"common-meta",
"common-telemetry",
"common-time",
"serde",
"serde_json",
"snafu 0.8.5",
"store-api",
"tokio",
"tokio-util",
]
[[package]]
name = "common-frontend"
version = "0.16.0"
@@ -2315,6 +2347,8 @@ dependencies = [
"api",
"approx 0.5.1",
"arc-swap",
"arrow 54.2.1",
"arrow-schema 54.3.1",
"async-trait",
"bincode",
"catalog",
@@ -2333,8 +2367,10 @@ dependencies = [
"datafusion-common",
"datafusion-expr",
"datafusion-functions-aggregate-common",
"datafusion-physical-expr",
"datatypes",
"derive_more",
"futures",
"geo",
"geo-types",
"geohash",
@@ -2347,6 +2383,7 @@ dependencies = [
"num-traits",
"once_cell",
"paste",
"pretty_assertions",
"s2",
"serde",
"serde_json",
@@ -2408,6 +2445,7 @@ dependencies = [
"tokio-util",
"tonic 0.12.3",
"tower 0.5.2",
"vec1",
]
[[package]]
@@ -2520,6 +2558,7 @@ dependencies = [
"tokio",
"tokio-postgres",
"tonic 0.12.3",
"tracing",
"typetag",
"uuid",
]
@@ -2668,12 +2707,31 @@ dependencies = [
"strum 0.27.1",
]
[[package]]
name = "common-sql"
version = "0.16.0"
dependencies = [
"common-base",
"common-datasource",
"common-decimal",
"common-error",
"common-macro",
"common-time",
"datafusion-sql",
"datatypes",
"hex",
"jsonb",
"snafu 0.8.5",
"sqlparser 0.54.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=0cf6c04490d59435ee965edd2078e8855bd8471e)",
]
[[package]]
name = "common-telemetry"
version = "0.16.0"
dependencies = [
"backtrace",
"common-error",
"common-version",
"console-subscriber",
"greptime-proto",
"humantime-serde",
@@ -2731,6 +2789,7 @@ name = "common-version"
version = "0.16.0"
dependencies = [
"build-data",
"cargo-manifest",
"const_format",
"serde",
"shadow-rs",
@@ -2964,9 +3023,9 @@ dependencies = [
[[package]]
name = "crc"
version = "3.2.1"
version = "3.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "69e6e4d7b33a94f0991c26729976b10ebde1d34c3ee82408fb536164fa10d636"
checksum = "9710d3b3739c2e349eb44fe848ad0b7c8cb1e42bd87ee49371df2f7acaf3e675"
dependencies = [
"crc-catalog",
]
@@ -3007,6 +3066,7 @@ dependencies = [
"ciborium",
"clap 3.2.25",
"criterion-plot",
"futures",
"itertools 0.10.5",
"lazy_static",
"num-traits",
@@ -3018,6 +3078,7 @@ dependencies = [
"serde_derive",
"serde_json",
"tinytemplate",
"tokio",
"walkdir",
]
@@ -3775,6 +3836,7 @@ dependencies = [
"tokio",
"toml 0.8.19",
"tonic 0.12.3",
"tracing",
]
[[package]]
@@ -3797,7 +3859,7 @@ dependencies = [
"jsonb",
"num",
"num-traits",
"ordered-float 3.9.2",
"ordered-float 4.3.0",
"paste",
"serde",
"serde_json",
@@ -4118,12 +4180,16 @@ dependencies = [
[[package]]
name = "domain"
version = "0.10.4"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4c84070523f8ba0f9127ff156920f27eb27b302b425efe60bf5f41ec244d1c60"
checksum = "a11dd7f04a6a6d2aea0153c6e31f5ea7af8b2efdf52cdaeea7a9a592c7fefef9"
dependencies = [
"bumpalo",
"bytes",
"domain-macros",
"futures-util",
"hashbrown 0.14.5",
"log",
"moka",
"octseq",
"rand 0.8.5",
@@ -4134,6 +4200,17 @@ dependencies = [
"tracing",
]
[[package]]
name = "domain-macros"
version = "0.11.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0e197fdfd2cdb5fdeb7f8ddcf3aed5d5d04ecde2890d448b14ffb716f7376b70"
dependencies = [
"proc-macro2",
"quote",
"syn 2.0.100",
]
[[package]]
name = "dotenv"
version = "0.15.0"
@@ -4619,6 +4696,7 @@ dependencies = [
"get-size2",
"greptime-proto",
"http 1.1.0",
"humantime-serde",
"itertools 0.14.0",
"lazy_static",
"meta-client",
@@ -4763,6 +4841,7 @@ dependencies = [
"toml 0.8.19",
"tonic 0.12.3",
"tower 0.5.2",
"tracing",
"uuid",
]
@@ -5146,7 +5225,7 @@ dependencies = [
[[package]]
name = "greptime-proto"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=ceb1af4fa9309ce65bda0367db7b384df2bb4d4f#ceb1af4fa9309ce65bda0367db7b384df2bb4d4f"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=7fcaa3e413947a7a28d9af95812af26c1939ce78#7fcaa3e413947a7a28d9af95812af26c1939ce78"
dependencies = [
"prost 0.13.5",
"serde",
@@ -6697,7 +6776,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4979f22fdb869068da03c9f7528f8297c6fd2606bc3a4affe42e6a823fdb8da4"
dependencies = [
"cfg-if",
"windows-targets 0.52.6",
"windows-targets 0.48.5",
]
[[package]]
@@ -7144,6 +7223,9 @@ version = "0.16.0"
dependencies = [
"api",
"async-trait",
"axum 0.8.1",
"axum-extra",
"axum-macros",
"bytes",
"chrono",
"clap 4.5.19",
@@ -7177,6 +7259,7 @@ dependencies = [
"http-body-util",
"humantime",
"humantime-serde",
"hyper 0.14.30",
"hyper-util",
"itertools 0.14.0",
"lazy_static",
@@ -7204,6 +7287,7 @@ dependencies = [
"toml 0.8.19",
"tonic 0.12.3",
"tower 0.5.2",
"tower-http 0.6.2",
"tracing",
"tracing-subscriber",
"typetag",
@@ -7266,6 +7350,7 @@ dependencies = [
"snafu 0.8.5",
"store-api",
"tokio",
"tracing",
]
[[package]]
@@ -7379,6 +7464,7 @@ dependencies = [
"datafusion-expr",
"datatypes",
"dotenv",
"either",
"futures",
"humantime-serde",
"index",
@@ -7396,6 +7482,7 @@ dependencies = [
"prost 0.13.5",
"puffin",
"rand 0.9.0",
"rayon",
"regex",
"rskafka",
"rstest",
@@ -7414,6 +7501,7 @@ dependencies = [
"tokio-stream",
"tokio-util",
"toml 0.8.19",
"tracing",
"uuid",
]
@@ -8457,6 +8545,7 @@ dependencies = [
"common-query",
"common-recordbatch",
"common-runtime",
"common-sql",
"common-telemetry",
"common-test-util",
"common-time",
@@ -8492,6 +8581,7 @@ dependencies = [
"tokio",
"tokio-util",
"tonic 0.12.3",
"tracing",
]
[[package]]
@@ -8528,17 +8618,6 @@ dependencies = [
"num-traits",
]
[[package]]
name = "ordered-float"
version = "3.9.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f1e1c390732d15f1d48471625cd92d154e66db2c56645e29a9cd26f4699f72dc"
dependencies = [
"num-traits",
"rand 0.8.5",
"serde",
]
[[package]]
name = "ordered-float"
version = "4.3.0"
@@ -8546,6 +8625,8 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "44d501f1a72f71d3c063a6bbc8f7271fa73aa09fe5d6283b6571e2ed176a2537"
dependencies = [
"num-traits",
"rand 0.8.5",
"serde",
]
[[package]]
@@ -9082,6 +9163,7 @@ dependencies = [
"moka",
"once_cell",
"operator",
"ordered-float 4.3.0",
"paste",
"prometheus",
"query",
@@ -9522,8 +9604,9 @@ dependencies = [
[[package]]
name = "promql-parser"
version = "0.5.1"
source = "git+https://github.com/GreptimeTeam/promql-parser.git?rev=0410e8b459dda7cb222ce9596f8bf3971bd07bd2#0410e8b459dda7cb222ce9596f8bf3971bd07bd2"
version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "328fe69c2443ec4f8e6c33ea925dde04a1026e6c95928e89ed02343944cac9bf"
dependencies = [
"cfgrammar",
"chrono",
@@ -9594,7 +9677,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "be769465445e8c1474e9c5dac2018218498557af32d9ed057325ec9a41ae81bf"
dependencies = [
"heck 0.5.0",
"itertools 0.14.0",
"itertools 0.11.0",
"log",
"multimap",
"once_cell",
@@ -9640,7 +9723,7 @@ source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8a56d757972c98b346a9b766e3f02746cde6dd1cd1d1d563472929fdd74bec4d"
dependencies = [
"anyhow",
"itertools 0.14.0",
"itertools 0.11.0",
"proc-macro2",
"quote",
"syn 2.0.100",
@@ -9892,6 +9975,7 @@ dependencies = [
"table",
"tokio",
"tokio-stream",
"tracing",
"unescaper",
"uuid",
]
@@ -11243,6 +11327,7 @@ dependencies = [
"common-recordbatch",
"common-runtime",
"common-session",
"common-sql",
"common-telemetry",
"common-test-util",
"common-time",
@@ -11324,8 +11409,10 @@ dependencies = [
"tonic-reflection",
"tower 0.5.2",
"tower-http 0.6.2",
"tracing",
"urlencoding",
"uuid",
"vrl",
"zstd 0.13.2",
]
@@ -11681,6 +11768,7 @@ dependencies = [
"common-error",
"common-macro",
"common-query",
"common-sql",
"common-time",
"datafusion",
"datafusion-common",
@@ -12439,6 +12527,7 @@ dependencies = [
"humantime",
"humantime-serde",
"lazy_static",
"once_cell",
"parquet",
"paste",
"serde",
@@ -12987,9 +13076,9 @@ checksum = "1f3ccbac311fea05f86f61904b462b55fb3df8837a366dfc601a0161d0532f20"
[[package]]
name = "tokio"
version = "1.44.2"
version = "1.45.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e6b88822cbe49de4185e3a4cbf8321dd487cf5fe0c5c65695fef6346371e9c48"
checksum = "75ef51a33ef1da925cea3e4eb122833cb377c61439ca401b770f54902b806779"
dependencies = [
"backtrace",
"bytes",
@@ -13155,6 +13244,7 @@ version = "0.8.19"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a1ed1f98e3fdc28d6d910e6737ae6ab1a93bf1985935a1193e68f93eeb68d24e"
dependencies = [
"indexmap 2.9.0",
"serde",
"serde_spanned",
"toml_datetime",
@@ -13914,6 +14004,12 @@ version = "0.2.15"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "accd4ea62f7bb7a82fe23066fb0957d48ef677f6eeb8215f372f52e48bb32426"
[[package]]
name = "vec1"
version = "1.12.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eab68b56840f69efb0fefbe3ab6661499217ffdc58e2eef7c3f6f69835386322"
[[package]]
name = "vergen"
version = "8.3.2"
@@ -13944,9 +14040,9 @@ dependencies = [
[[package]]
name = "vrl"
version = "0.24.0"
version = "0.25.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f9ceadaa40aef567a26079ff014ca7a567ba85344f1b81090b5ec7d7bb16a219"
checksum = "4f49394b948406ea1564aa00152e011d87a38ad35d277ebddda257a9ee39c419"
dependencies = [
"aes",
"aes-siv",
@@ -14266,7 +14362,7 @@ version = "0.1.9"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cf221c93e13a30d793f7645a0e7762c55d169dbb0a49671918a2319d289b10bb"
dependencies = [
"windows-sys 0.59.0",
"windows-sys 0.48.0",
]
[[package]]

View File

@@ -13,6 +13,7 @@ members = [
"src/common/datasource",
"src/common/decimal",
"src/common/error",
"src/common/event-recorder",
"src/common/frontend",
"src/common/function",
"src/common/greptimedb-telemetry",
@@ -30,6 +31,7 @@ members = [
"src/common/recordbatch",
"src/common/runtime",
"src/common/session",
"src/common/sql",
"src/common/stat",
"src/common/substrait",
"src/common/telemetry",
@@ -138,7 +140,7 @@ etcd-client = "0.14"
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "ceb1af4fa9309ce65bda0367db7b384df2bb4d4f" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "7fcaa3e413947a7a28d9af95812af26c1939ce78" }
hex = "0.4"
http = "1"
humantime = "2.1"
@@ -166,14 +168,13 @@ opentelemetry-proto = { version = "0.27", features = [
"with-serde",
"logs",
] }
ordered-float = { version = "4.3", features = ["serde"] }
parking_lot = "0.12"
parquet = { version = "54.2", default-features = false, features = ["arrow", "async", "object_store"] }
paste = "1.0"
pin-project = "1.0"
prometheus = { version = "0.13.3", features = ["process"] }
promql-parser = { git = "https://github.com/GreptimeTeam/promql-parser.git", rev = "0410e8b459dda7cb222ce9596f8bf3971bd07bd2", features = [
"ser",
] }
promql-parser = { version = "0.6", features = ["ser"] }
prost = { version = "0.13", features = ["no-recursion-limit"] }
raft-engine = { version = "0.4.1", default-features = false }
rand = "0.9"
@@ -224,10 +225,13 @@ tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.8.8"
tonic = { version = "0.12", features = ["tls", "gzip", "zstd"] }
tower = "0.5"
tower-http = "0.6"
tracing = "0.1"
tracing-appender = "0.2"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json", "fmt"] }
typetag = "0.2"
uuid = { version = "1.7", features = ["serde", "v4", "fast-rng"] }
vrl = "0.25"
zstd = "0.13"
# DO_NOT_REMOVE_THIS: END_OF_EXTERNAL_DEPENDENCIES
@@ -245,6 +249,7 @@ common-config = { path = "src/common/config" }
common-datasource = { path = "src/common/datasource" }
common-decimal = { path = "src/common/decimal" }
common-error = { path = "src/common/error" }
common-event-recorder = { path = "src/common/event-recorder" }
common-frontend = { path = "src/common/frontend" }
common-function = { path = "src/common/function" }
common-greptimedb-telemetry = { path = "src/common/greptimedb-telemetry" }
@@ -262,6 +267,7 @@ common-query = { path = "src/common/query" }
common-recordbatch = { path = "src/common/recordbatch" }
common-runtime = { path = "src/common/runtime" }
common-session = { path = "src/common/session" }
common-sql = { path = "src/common/sql" }
common-telemetry = { path = "src/common/telemetry" }
common-test-util = { path = "src/common/test-util" }
common-time = { path = "src/common/time" }

View File

@@ -562,6 +562,16 @@
| `node_id` | Integer | Unset | The flownode identifier and should be unique in the cluster. |
| `flow` | -- | -- | flow engine options. |
| `flow.num_workers` | Integer | `0` | The number of flow workers in the flownode.<br/>Not setting this value (or setting it to 0) will use the number of CPU cores divided by 2. |
| `flow.batching_mode` | -- | -- | -- |
| `flow.batching_mode.query_timeout` | String | `600s` | The default batching engine query timeout is 10 minutes. |
| `flow.batching_mode.slow_query_threshold` | String | `60s` | Outputs a warning log for any query that runs longer than this threshold |
| `flow.batching_mode.experimental_min_refresh_duration` | String | `5s` | The minimum duration between two query executions by a batching mode task |
| `flow.batching_mode.grpc_conn_timeout` | String | `5s` | The gRPC connection timeout |
| `flow.batching_mode.experimental_grpc_max_retries` | Integer | `3` | The gRPC max retry number |
| `flow.batching_mode.experimental_frontend_scan_timeout` | String | `30s` | Timeout for the flow to wait for an available frontend;<br/>if no available frontend is found after frontend_scan_timeout elapses, an error is returned,<br/>which prevents the flownode from starting |
| `flow.batching_mode.experimental_frontend_activity_timeout` | String | `60s` | Frontend activity timeout;<br/>if a frontend is down (not sending heartbeats) for more than frontend_activity_timeout,<br/>it will be removed from the list that the flownode uses to connect |
| `flow.batching_mode.experimental_max_filter_num_per_query` | Integer | `20` | Maximum number of filters allowed in a single query |
| `flow.batching_mode.experimental_time_window_merge_threshold` | Integer | `3` | Time window merge distance |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:6800` | The address to bind the gRPC server. |
| `grpc.server_addr` | String | `127.0.0.1:6800` | The address advertised to the metasrv,<br/>and used for connections from outside the host |

View File

@@ -7,6 +7,29 @@ node_id = 14
## The number of flow workers in the flownode.
## Not setting this value (or setting it to 0) will use the number of CPU cores divided by 2.
#+num_workers=0
[flow.batching_mode]
## The default batching engine query timeout is 10 minutes.
#+query_timeout="600s"
## Outputs a warning log for any query that runs longer than this threshold
#+slow_query_threshold="60s"
## The minimum duration between two query executions by a batching mode task
#+experimental_min_refresh_duration="5s"
## The gRPC connection timeout
#+grpc_conn_timeout="5s"
## The gRPC max retry number
#+experimental_grpc_max_retries=3
## Timeout for the flow to wait for an available frontend;
## if no available frontend is found after frontend_scan_timeout elapses, an error is returned,
## which prevents the flownode from starting
#+experimental_frontend_scan_timeout="30s"
## Frontend activity timeout;
## if a frontend is down (not sending heartbeats) for more than frontend_activity_timeout,
## it will be removed from the list that the flownode uses to connect
#+experimental_frontend_activity_timeout="60s"
## Maximum number of filters allowed in a single query
#+experimental_max_filter_num_per_query=20
## Time window merge distance
#+experimental_time_window_merge_threshold=3
## The gRPC server options.
[grpc]

View File

@@ -1,4 +1,4 @@
Currently, our query engine is based on DataFusion, so all aggregate functions are executed by DataFusion through its UDAF interface. You can find DataFusion's UDAF example [here](https://github.com/apache/arrow-datafusion/blob/arrow2/datafusion-examples/examples/simple_udaf.rs). Basically, we provide the same way as DataFusion to write aggregate functions: both are centered on a struct called "Accumulator" that accumulates state along the way during aggregation.
Currently, our query engine is based on DataFusion, so all aggregate functions are executed by DataFusion through its UDAF interface. You can find DataFusion's UDAF example [here](https://github.com/apache/datafusion/tree/main/datafusion-examples/examples/simple_udaf.rs). Basically, we provide the same way as DataFusion to write aggregate functions: both are centered on a struct called "Accumulator" that accumulates state along the way during aggregation.
However, DataFusion's UDAF implementation has a huge restriction: it requires the user to provide a concrete "Accumulator". Take the `Median` aggregate function, for example: to aggregate a `u32` column, you have to write a `MedianU32` and use `SELECT MEDIANU32(x)` in SQL, and `MedianU32` cannot be used to aggregate an `i32` column. Alternatively, you can use a special type that can hold all kinds of data (like our `Value` enum or Arrow's `ScalarValue`) and `match` all the way through the aggregate calculations. That works, though it is rather tedious. (But I think it's DataFusion's preferred way to write a UDAF.)
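As a rough illustration of that restriction, here is a minimal hypothetical sketch (the trait and `MedianU32` below are invented for this note and are not the actual DataFusion or GreptimeDB types): a concrete accumulator is tied to one input type, so an `i32` column would need its own `MedianI32`.

```rust
// Invented, simplified accumulator interface -- not the real DataFusion trait.
trait Accumulator {
    type Input;
    /// Feed one batch of input values into the accumulator.
    fn update_batch(&mut self, values: &[Self::Input]);
    /// Produce the final aggregated value, if any input was seen.
    fn evaluate(&mut self) -> Option<Self::Input>;
}

/// A median accumulator hard-wired to `u32` input.
#[derive(Default)]
struct MedianU32 {
    values: Vec<u32>,
}

impl Accumulator for MedianU32 {
    type Input = u32;

    fn update_batch(&mut self, values: &[u32]) {
        self.values.extend_from_slice(values);
    }

    fn evaluate(&mut self) -> Option<u32> {
        if self.values.is_empty() {
            return None;
        }
        self.values.sort_unstable();
        Some(self.values[self.values.len() / 2])
    }
}

fn main() {
    let mut acc = MedianU32::default();
    acc.update_batch(&[5, 1, 9, 3, 7]);
    assert_eq!(acc.evaluate(), Some(5));
    // Aggregating an `i32` column would require a separate `MedianI32`,
    // which is exactly the restriction discussed above.
}
```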

View File

@@ -76,7 +76,7 @@ pub trait CompactionStrategy {
```
The most suitable compaction strategy for time-series scenarios would be
a hybrid strategy that combines time window compaction with size-tiered compaction, just like [Cassandra](https://cassandra.apache.org/doc/latest/cassandra/operating/compaction/twcs.html) and [ScyllaDB](https://docs.scylladb.com/stable/architecture/compaction/compaction-strategies.html#time-window-compaction-strategy-twcs) do.
a hybrid strategy that combines time window compaction with size-tiered compaction, just like [Cassandra](https://cassandra.apache.org/doc/latest/cassandra/managing/operating/compaction/twcs.html) and [ScyllaDB](https://docs.scylladb.com/stable/architecture/compaction/compaction-strategies.html#time-window-compaction-strategy-twcs) do.
We can first group SSTs in level n into buckets according to some predefined time window. Within that window,
SSTs are compacted in a size-tiered manner (find SSTs of similar size and compact them to level n+1).
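As a sketch of the grouping step described above (using invented types, not the actual mito compaction code), SSTs can first be bucketed by a fixed time window and then sorted by size within each bucket, so size-tiered picking runs per window:

```rust
use std::collections::BTreeMap;

/// Hypothetical SST descriptor; the real SST metadata types differ.
#[derive(Debug, Clone)]
struct SstFile {
    min_ts_secs: i64,
    size_bytes: u64,
}

/// Group SSTs into buckets keyed by a fixed time window, then sort each bucket
/// by file size so similarly sized files can be picked for the next level.
fn bucket_by_time_window(files: &[SstFile], window_secs: i64) -> BTreeMap<i64, Vec<SstFile>> {
    let mut buckets: BTreeMap<i64, Vec<SstFile>> = BTreeMap::new();
    for file in files {
        // Align the file's minimum timestamp to the window boundary.
        let bucket = file.min_ts_secs.div_euclid(window_secs) * window_secs;
        buckets.entry(bucket).or_default().push(file.clone());
    }
    for files_in_window in buckets.values_mut() {
        files_in_window.sort_by_key(|f| f.size_bytes);
    }
    buckets
}

fn main() {
    let files = vec![
        SstFile { min_ts_secs: 10, size_bytes: 4 << 20 },
        SstFile { min_ts_secs: 3_650, size_bytes: 1 << 20 },
        SstFile { min_ts_secs: 20, size_bytes: 1 << 20 },
    ];
    // With one-hour windows, the first and third files share a bucket.
    let buckets = bucket_by_time_window(&files, 3_600);
    assert_eq!(buckets.len(), 2);
    assert_eq!(buckets[&0].len(), 2);
}
```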

View File

@@ -28,7 +28,7 @@ In order to do those things while maintaining a low memory footprint, you need t
- Greptime Flow is built on top of [Hydroflow](https://github.com/hydro-project/hydroflow).
- We have three choices for the dataflow/streaming processing framework for our simple continuous aggregation feature:
1. Based on the timely/differential dataflow crates that [materialize](https://github.com/MaterializeInc/materialize) is based on. This later proved too obscure for simple usage, and it is hard to customize memory usage control.
2. Based on a simple dataflow framework that we write from the ground up, like what [arroyo](https://www.arroyo.dev/) or [risingwave](https://www.risingwave.dev/) did; for example, the core streaming logic of [arroyo](https://github.com/ArroyoSystems/arroyo/blob/master/arroyo-datastream/src/lib.rs) only takes about 2000 lines of code. However, it means maintaining another dataflow framework layer, which might seem easy in the beginning, but I fear it might become too burdensome to maintain once we need more features.
2. Based on a simple dataflow framework that we write from the ground up, like what [arroyo](https://www.arroyo.dev/) or [risingwave](https://www.risingwave.dev/) did; for example, the core streaming logic of [arroyo](https://github.com/ArroyoSystems/arroyo/blob/master/crates/arroyo-datastream/src/lib.rs) only takes about 2000 lines of code. However, it means maintaining another dataflow framework layer, which might seem easy in the beginning, but I fear it might become too burdensome to maintain once we need more features.
3. Based on a simple, lower-level dataflow framework that someone else wrote, like [hydroflow](https://github.com/hydro-project/hydroflow). This approach combines the best of both worlds. Firstly, it boasts ease of comprehension and customization. Secondly, the dataflow framework offers precisely the features necessary for crafting uncomplicated single-node dataflow programs while delivering decent performance.
Hence, we choose the third option and use a simple logical plan that's agnostic to the underlying dataflow framework, as it only describes what the dataflow graph should be doing, not how it does it. We built operators in hydroflow to execute the plan, and the resulting hydroflow graph is wrapped in an engine that only supports data in/out and a tick event to flush and compute the result. This provides a thin middle layer that's easy to maintain and allows switching to another dataflow framework if necessary.
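As a rough sketch of that "thin engine" shape (all names here are invented for illustration and are not the real Greptime Flow API), the wrapped dataflow is driven only through data in, a tick that flushes and computes, and draining the output:

```rust
use std::collections::{BTreeMap, VecDeque};

#[derive(Debug, Clone)]
struct Row {
    key: String,
    value: i64,
}

/// Hypothetical wrapper around a dataflow graph: data in, tick, data out.
struct FlowEngine {
    pending: Vec<Row>,
    output: VecDeque<Row>,
}

impl FlowEngine {
    fn new() -> Self {
        Self { pending: Vec::new(), output: VecDeque::new() }
    }

    /// Data in: buffer rows until the next tick.
    fn push_rows(&mut self, rows: Vec<Row>) {
        self.pending.extend(rows);
    }

    /// Tick: run the wrapped computation (a trivial per-key sum here) and
    /// move the results to the output queue.
    fn tick(&mut self) {
        let mut sums: BTreeMap<String, i64> = BTreeMap::new();
        for row in self.pending.drain(..) {
            *sums.entry(row.key).or_insert(0) += row.value;
        }
        self.output
            .extend(sums.into_iter().map(|(key, value)| Row { key, value }));
    }

    /// Data out: drain whatever the last tick produced.
    fn drain_output(&mut self) -> Vec<Row> {
        self.output.drain(..).collect()
    }
}

fn main() {
    let mut engine = FlowEngine::new();
    engine.push_rows(vec![
        Row { key: "host1".into(), value: 1 },
        Row { key: "host1".into(), value: 2 },
    ]);
    engine.tick();
    assert_eq!(engine.drain_output()[0].value, 3);
}
```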

View File

@@ -83,7 +83,7 @@ If you use the [Helm Chart](https://github.com/GreptimeTeam/helm-charts) to depl
- `monitoring.enabled=true`: Deploys a standalone GreptimeDB instance dedicated to monitoring the cluster;
- `grafana.enabled=true`: Deploys Grafana and automatically imports the monitoring dashboard;
The standalone GreptimeDB instance will collect metrics from your cluster, and the dashboard will be available in the Grafana UI. For detailed deployment instructions, please refer to our [Kubernetes deployment guide](https://docs.greptime.com/user-guide/deployments-administration-administration/deploy-on-kubernetes/getting-started).
The standalone GreptimeDB instance will collect metrics from your cluster, and the dashboard will be available in the Grafana UI. For detailed deployment instructions, please refer to our [Kubernetes deployment guide](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/overview).
### Self-host Prometheus and import dashboards manually

View File

@@ -5954,9 +5954,9 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~\"$datanode\", operation=\"read\"}[$__rate_interval]))",
"expr": "sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~\"$datanode\", operation=~\"read|Reader::read\"}[$__rate_interval]))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]",
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,
"refId": "A"
}
@@ -6055,9 +6055,9 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~\"$datanode\",operation=\"read\"}[$__rate_interval])))",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~\"$datanode\",operation=~\"read|Reader::read\"}[$__rate_interval])))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-{{scheme}}",
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,
"refId": "A"
}
@@ -6156,9 +6156,9 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~\"$datanode\", operation=\"write\"}[$__rate_interval]))",
"expr": "sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~\"$datanode\", operation=~\"write|Writer::write|Writer::close\"}[$__rate_interval]))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-{{scheme}}",
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,
"refId": "A"
}
@@ -6257,9 +6257,9 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~\"$datanode\", operation=\"write\"}[$__rate_interval])))",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~\"$datanode\", operation =~ \"Writer::write|Writer::close|write\"}[$__rate_interval])))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]",
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,
"refId": "A"
}
@@ -6663,7 +6663,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~\"$datanode\", operation!~\"read|write|list\"}[$__rate_interval])))",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~\"$datanode\", operation!~\"read|write|list|Writer::write|Writer::close|Reader::read\"}[$__rate_interval])))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,

View File

@@ -76,14 +76,14 @@
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode"}[$__rate_interval]))` | `timeseries` | QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Read QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="read"}[$__rate_interval]))` | `timeseries` | Read QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Read P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode",operation="read"}[$__rate_interval])))` | `timeseries` | Read P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-{{scheme}}` |
| Write QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="write"}[$__rate_interval]))` | `timeseries` | Write QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-{{scheme}}` |
| Write P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation="write"}[$__rate_interval])))` | `timeseries` | Write P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Read QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation=~"read\|Reader::read"}[$__rate_interval]))` | `timeseries` | Read QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Read P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode",operation=~"read\|Reader::read"}[$__rate_interval])))` | `timeseries` | Read P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Write QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation=~"write\|Writer::write\|Writer::close"}[$__rate_interval]))` | `timeseries` | Write QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Write P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation =~ "Writer::write\|Writer::close\|write"}[$__rate_interval])))` | `timeseries` | Write P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| List QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="list"}[$__rate_interval]))` | `timeseries` | List QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| List P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation="list"}[$__rate_interval])))` | `timeseries` | List P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Other Requests per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode",operation!~"read\|write\|list\|stat"}[$__rate_interval]))` | `timeseries` | Other Requests per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read\|write\|list"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read\|write\|list\|Writer::write\|Writer::close\|Reader::read"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Opendal traffic | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_bytes_sum{instance=~"$datanode"}[$__rate_interval]))` | `timeseries` | Total traffic as in bytes by instance and operation | `prometheus` | `decbytes` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| OpenDAL errors per Instance | `sum by(instance, pod, scheme, operation, error) (rate(opendal_operation_errors_total{instance=~"$datanode", error!="NotFound"}[$__rate_interval]))` | `timeseries` | OpenDAL error counts per Instance. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]` |
# Metasrv

View File

@@ -659,41 +659,41 @@ groups:
description: Read QPS per Instance.
unit: ops
queries:
- expr: sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="read"}[$__rate_interval]))
- expr: sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation=~"read|Reader::read"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Read P99 per Instance
type: timeseries
description: Read P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode",operation="read"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode",operation=~"read|Reader::read"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-{{scheme}}'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Write QPS per Instance
type: timeseries
description: Write QPS per Instance.
unit: ops
queries:
- expr: sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="write"}[$__rate_interval]))
- expr: sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation=~"write|Writer::write|Writer::close"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-{{scheme}}'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Write P99 per Instance
type: timeseries
description: Write P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation="write"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation =~ "Writer::write|Writer::close|write"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: List QPS per Instance
type: timeseries
description: List QPS per Instance.
@@ -729,7 +729,7 @@ groups:
description: Other Request P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read|write|list"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read|write|list|Writer::write|Writer::close|Reader::read"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}

View File

@@ -5954,9 +5954,9 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation=\"read\"}[$__rate_interval]))",
"expr": "sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~\"read|Reader::read\"}[$__rate_interval]))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]",
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,
"refId": "A"
}
@@ -6055,9 +6055,9 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{operation=\"read\"}[$__rate_interval])))",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{operation=~\"read|Reader::read\"}[$__rate_interval])))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-{{scheme}}",
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,
"refId": "A"
}
@@ -6156,9 +6156,9 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation=\"write\"}[$__rate_interval]))",
"expr": "sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~\"write|Writer::write|Writer::close\"}[$__rate_interval]))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-{{scheme}}",
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,
"refId": "A"
}
@@ -6257,9 +6257,9 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{ operation=\"write\"}[$__rate_interval])))",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation =~ \"Writer::write|Writer::close|write\"}[$__rate_interval])))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]",
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,
"refId": "A"
}
@@ -6663,7 +6663,7 @@
"uid": "${metrics}"
},
"editorMode": "code",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~\"read|write|list\"}[$__rate_interval])))",
"expr": "histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~\"read|write|list|Writer::write|Writer::close|Reader::read\"}[$__rate_interval])))",
"instant": false,
"legendFormat": "[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]",
"range": true,

View File

@@ -76,14 +76,14 @@
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{}[$__rate_interval]))` | `timeseries` | QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Read QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="read"}[$__rate_interval]))` | `timeseries` | Read QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Read P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{operation="read"}[$__rate_interval])))` | `timeseries` | Read P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-{{scheme}}` |
| Write QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="write"}[$__rate_interval]))` | `timeseries` | Write QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-{{scheme}}` |
| Write P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{ operation="write"}[$__rate_interval])))` | `timeseries` | Write P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Read QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~"read\|Reader::read"}[$__rate_interval]))` | `timeseries` | Read QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Read P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{operation=~"read\|Reader::read"}[$__rate_interval])))` | `timeseries` | Read P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Write QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~"write\|Writer::write\|Writer::close"}[$__rate_interval]))` | `timeseries` | Write QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Write P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation =~ "Writer::write\|Writer::close\|write"}[$__rate_interval])))` | `timeseries` | Write P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| List QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="list"}[$__rate_interval]))` | `timeseries` | List QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| List P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{ operation="list"}[$__rate_interval])))` | `timeseries` | List P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Other Requests per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{operation!~"read\|write\|list\|stat"}[$__rate_interval]))` | `timeseries` | Other Requests per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read\|write\|list"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read\|write\|list\|Writer::write\|Writer::close\|Reader::read"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Opendal traffic | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_bytes_sum{}[$__rate_interval]))` | `timeseries` | Total traffic as in bytes by instance and operation | `prometheus` | `decbytes` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| OpenDAL errors per Instance | `sum by(instance, pod, scheme, operation, error) (rate(opendal_operation_errors_total{ error!="NotFound"}[$__rate_interval]))` | `timeseries` | OpenDAL error counts per Instance. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]` |
# Metasrv

View File

@@ -659,41 +659,41 @@ groups:
description: Read QPS per Instance.
unit: ops
queries:
- expr: sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="read"}[$__rate_interval]))
- expr: sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~"read|Reader::read"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Read P99 per Instance
type: timeseries
description: Read P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{operation="read"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{operation=~"read|Reader::read"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-{{scheme}}'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Write QPS per Instance
type: timeseries
description: Write QPS per Instance.
unit: ops
queries:
- expr: sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="write"}[$__rate_interval]))
- expr: sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~"write|Writer::write|Writer::close"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-{{scheme}}'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Write P99 per Instance
type: timeseries
description: Write P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{ operation="write"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation =~ "Writer::write|Writer::close|write"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: List QPS per Instance
type: timeseries
description: List QPS per Instance.
@@ -729,7 +729,7 @@ groups:
description: Other Request P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read|write|list"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read|write|list|Writer::write|Writer::close|Reader::read"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}

View File

@@ -13,8 +13,8 @@
# limitations under the License.
import os
import re
from multiprocessing import Pool
from pathlib import Path
def find_rust_files(directory):
@@ -24,6 +24,10 @@ def find_rust_files(directory):
if "test" in root.lower():
continue
# Skip the target directory
if "target" in Path(root).parts:
continue
for file in files:
# Skip files with "test" in the filename
if "test" in file.lower():

scripts/install.sh Executable file → Normal file
View File

@@ -53,6 +53,54 @@ get_arch_type() {
esac
}
# Verify SHA256 checksum
verify_sha256() {
file="$1"
expected_sha256="$2"
if command -v sha256sum >/dev/null 2>&1; then
actual_sha256=$(sha256sum "$file" | cut -d' ' -f1)
elif command -v shasum >/dev/null 2>&1; then
actual_sha256=$(shasum -a 256 "$file" | cut -d' ' -f1)
else
echo "Warning: No SHA256 verification tool found (sha256sum or shasum). Skipping checksum verification."
return 0
fi
if [ "$actual_sha256" = "$expected_sha256" ]; then
echo "SHA256 checksum verified successfully."
return 0
else
echo "Error: SHA256 checksum verification failed!"
echo "Expected: $expected_sha256"
echo "Actual: $actual_sha256"
return 1
fi
}
# Prompt for user confirmation (compatible with different shells)
prompt_confirmation() {
message="$1"
printf "%s (y/N): " "$message"
# Try to read user input, fallback if read fails
answer=""
if read answer </dev/tty 2>/dev/null; then
case "$answer" in
[Yy]|[Yy][Ee][Ss])
return 0
;;
*)
return 1
;;
esac
else
echo ""
echo "Cannot read user input. Defaulting to No."
return 1
fi
}
download_artifact() {
if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
# Use the latest stable released version.
@@ -71,17 +119,104 @@ download_artifact() {
fi
echo "Downloading ${BIN}, OS: ${OS_TYPE}, Arch: ${ARCH_TYPE}, Version: ${VERSION}"
PACKAGE_NAME="${BIN}-${OS_TYPE}-${ARCH_TYPE}-${VERSION}.tar.gz"
PKG_NAME="${BIN}-${OS_TYPE}-${ARCH_TYPE}-${VERSION}"
PACKAGE_NAME="${PKG_NAME}.tar.gz"
SHA256_FILE="${PKG_NAME}.sha256sum"
if [ -n "${PACKAGE_NAME}" ]; then
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${PACKAGE_NAME}"
# Check if files already exist and prompt for override
if [ -f "${PACKAGE_NAME}" ]; then
echo "File ${PACKAGE_NAME} already exists."
if prompt_confirmation "Do you want to override it?"; then
echo "Overriding existing file..."
rm -f "${PACKAGE_NAME}"
else
echo "Skipping download. Using existing file."
fi
fi
if [ -f "${BIN}" ]; then
echo "Binary ${BIN} already exists."
if prompt_confirmation "Do you want to override it?"; then
echo "Will override existing binary..."
rm -f "${BIN}"
else
echo "Installation cancelled."
exit 0
fi
fi
# Download package if not exists
if [ ! -f "${PACKAGE_NAME}" ]; then
echo "Downloading ${PACKAGE_NAME}..."
# Use curl instead of wget for better compatibility
if command -v curl >/dev/null 2>&1; then
if ! curl -L -o "${PACKAGE_NAME}" "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${PACKAGE_NAME}"; then
echo "Error: Failed to download ${PACKAGE_NAME}"
exit 1
fi
elif command -v wget >/dev/null 2>&1; then
if ! wget -O "${PACKAGE_NAME}" "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${PACKAGE_NAME}"; then
echo "Error: Failed to download ${PACKAGE_NAME}"
exit 1
fi
else
echo "Error: Neither curl nor wget is available for downloading."
exit 1
fi
fi
# Download and verify SHA256 checksum
echo "Downloading SHA256 checksum..."
sha256_download_success=0
if command -v curl >/dev/null 2>&1; then
if curl -L -s -o "${SHA256_FILE}" "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${SHA256_FILE}" 2>/dev/null; then
sha256_download_success=1
fi
elif command -v wget >/dev/null 2>&1; then
if wget -q -O "${SHA256_FILE}" "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${SHA256_FILE}" 2>/dev/null; then
sha256_download_success=1
fi
fi
if [ $sha256_download_success -eq 1 ] && [ -f "${SHA256_FILE}" ]; then
expected_sha256=$(cat "${SHA256_FILE}" | cut -d' ' -f1)
if [ -n "$expected_sha256" ]; then
if ! verify_sha256 "${PACKAGE_NAME}" "${expected_sha256}"; then
echo "SHA256 verification failed. Removing downloaded file."
rm -f "${PACKAGE_NAME}" "${SHA256_FILE}"
exit 1
fi
else
echo "Warning: Could not parse SHA256 checksum from file."
fi
rm -f "${SHA256_FILE}"
else
echo "Warning: Could not download SHA256 checksum file. Skipping verification."
fi
# Extract the binary and clean the rest.
tar xvf "${PACKAGE_NAME}" && \
mv "${PACKAGE_NAME%.tar.gz}/${BIN}" "${PWD}" && \
rm -r "${PACKAGE_NAME}" && \
rm -r "${PACKAGE_NAME%.tar.gz}" && \
echo "Run './${BIN} --help' to get started"
echo "Extracting ${PACKAGE_NAME}..."
if ! tar xf "${PACKAGE_NAME}"; then
echo "Error: Failed to extract ${PACKAGE_NAME}"
exit 1
fi
# Find the binary in the extracted directory
extracted_dir="${PACKAGE_NAME%.tar.gz}"
if [ -f "${extracted_dir}/${BIN}" ]; then
mv "${extracted_dir}/${BIN}" "${PWD}/"
rm -f "${PACKAGE_NAME}"
rm -rf "${extracted_dir}"
chmod +x "${BIN}"
echo "Installation completed successfully!"
echo "Run './${BIN} --help' to get started"
else
echo "Error: Binary ${BIN} not found in extracted archive"
rm -f "${PACKAGE_NAME}"
rm -rf "${extracted_dir}"
exit 1
fi
fi
fi
}

View File

@@ -162,6 +162,16 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
}
fn system_table(&self, name: &str) -> Option<SystemTableRef> {
#[cfg(feature = "enterprise")]
if let Some(factory) = self.extra_table_factories.get(name) {
let req = MakeInformationTableRequest {
catalog_name: self.catalog_name.clone(),
catalog_manager: self.catalog_manager.clone(),
kv_backend: self.kv_backend.clone(),
};
return Some(factory.make_information_table(req));
}
match name.to_ascii_lowercase().as_str() {
TABLES => Some(Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
@@ -240,22 +250,7 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
.process_manager
.as_ref()
.map(|p| Arc::new(InformationSchemaProcessList::new(p.clone())) as _),
table_name => {
#[cfg(feature = "enterprise")]
return self.extra_table_factories.get(table_name).map(|factory| {
let req = MakeInformationTableRequest {
catalog_name: self.catalog_name.clone(),
catalog_manager: self.catalog_manager.clone(),
kv_backend: self.kv_backend.clone(),
};
factory.make_information_table(req)
});
#[cfg(not(feature = "enterprise"))]
{
let _ = table_name;
None
}
}
_ => None,
}
}
}

View File

@@ -15,7 +15,8 @@
use std::sync::Arc;
use common_catalog::consts::{METRIC_ENGINE, MITO_ENGINE};
use datatypes::schema::{Schema, SchemaRef};
use datatypes::data_type::ConcreteDataType;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{Int64Vector, StringVector, VectorRef};
use crate::system_schema::information_schema::table_names::*;
@@ -367,28 +368,18 @@ pub(super) fn get_schema_columns(table_name: &str) -> (SchemaRef, Vec<VectorRef>
TRIGGERS => (
vec![
string_column("TRIGGER_CATALOG"),
string_column("TRIGGER_SCHEMA"),
string_column("TRIGGER_NAME"),
string_column("EVENT_MANIPULATION"),
string_column("EVENT_OBJECT_CATALOG"),
string_column("EVENT_OBJECT_SCHEMA"),
string_column("EVENT_OBJECT_TABLE"),
bigint_column("ACTION_ORDER"),
string_column("ACTION_CONDITION"),
string_column("ACTION_STATEMENT"),
string_column("ACTION_ORIENTATION"),
string_column("ACTION_TIMING"),
string_column("ACTION_REFERENCE_OLD_TABLE"),
string_column("ACTION_REFERENCE_NEW_TABLE"),
string_column("ACTION_REFERENCE_OLD_ROW"),
string_column("ACTION_REFERENCE_NEW_ROW"),
timestamp_micro_column("CREATED"),
string_column("SQL_MODE"),
string_column("DEFINER"),
string_column("CHARACTER_SET_CLIENT"),
string_column("COLLATION_CONNECTION"),
string_column("DATABASE_COLLATION"),
ColumnSchema::new(
"trigger_id",
ConcreteDataType::uint64_datatype(),
false,
),
string_column("TRIGGER_DEFINITION"),
ColumnSchema::new(
"flownode_id",
ConcreteDataType::uint64_datatype(),
true,
),
],
vec![],
),

View File

@@ -329,13 +329,8 @@ impl InformationSchemaPartitionsBuilder {
self.partition_names.push(Some(&partition_name));
self.partition_ordinal_positions
.push(Some((index + 1) as i64));
let expressions = if partition.partition.partition_columns().is_empty() {
None
} else {
Some(partition.partition.to_string())
};
self.partition_expressions.push(expressions.as_deref());
let expression = partition.partition_expr.as_ref().map(|e| e.to_string());
self.partition_expressions.push(expression.as_deref());
self.create_times.push(Some(TimestampMicrosecond::from(
table_info.meta.created_on.timestamp_millis(),
)));

View File

@@ -44,6 +44,7 @@ const DISK_SIZE: &str = "disk_size";
const MEMTABLE_SIZE: &str = "memtable_size";
const MANIFEST_SIZE: &str = "manifest_size";
const SST_SIZE: &str = "sst_size";
const SST_NUM: &str = "sst_num";
const INDEX_SIZE: &str = "index_size";
const ENGINE: &str = "engine";
const REGION_ROLE: &str = "region_role";
@@ -87,6 +88,7 @@ impl InformationSchemaRegionStatistics {
ColumnSchema::new(MEMTABLE_SIZE, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(MANIFEST_SIZE, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(SST_SIZE, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(SST_NUM, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(INDEX_SIZE, ConcreteDataType::uint64_datatype(), true),
ColumnSchema::new(ENGINE, ConcreteDataType::string_datatype(), true),
ColumnSchema::new(REGION_ROLE, ConcreteDataType::string_datatype(), true),
@@ -149,6 +151,7 @@ struct InformationSchemaRegionStatisticsBuilder {
memtable_sizes: UInt64VectorBuilder,
manifest_sizes: UInt64VectorBuilder,
sst_sizes: UInt64VectorBuilder,
sst_nums: UInt64VectorBuilder,
index_sizes: UInt64VectorBuilder,
engines: StringVectorBuilder,
region_roles: StringVectorBuilder,
@@ -167,6 +170,7 @@ impl InformationSchemaRegionStatisticsBuilder {
memtable_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
manifest_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
sst_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
sst_nums: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
index_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
engines: StringVectorBuilder::with_capacity(INIT_CAPACITY),
region_roles: StringVectorBuilder::with_capacity(INIT_CAPACITY),
@@ -197,6 +201,7 @@ impl InformationSchemaRegionStatisticsBuilder {
(MEMTABLE_SIZE, &Value::from(region_stat.memtable_size)),
(MANIFEST_SIZE, &Value::from(region_stat.manifest_size)),
(SST_SIZE, &Value::from(region_stat.sst_size)),
(SST_NUM, &Value::from(region_stat.sst_num)),
(INDEX_SIZE, &Value::from(region_stat.index_size)),
(ENGINE, &Value::from(region_stat.engine.as_str())),
(REGION_ROLE, &Value::from(region_stat.role.to_string())),
@@ -215,6 +220,7 @@ impl InformationSchemaRegionStatisticsBuilder {
self.memtable_sizes.push(Some(region_stat.memtable_size));
self.manifest_sizes.push(Some(region_stat.manifest_size));
self.sst_sizes.push(Some(region_stat.sst_size));
self.sst_nums.push(Some(region_stat.sst_num));
self.index_sizes.push(Some(region_stat.index_size));
self.engines.push(Some(&region_stat.engine));
self.region_roles.push(Some(&region_stat.role.to_string()));
@@ -230,6 +236,7 @@ impl InformationSchemaRegionStatisticsBuilder {
Arc::new(self.memtable_sizes.finish()),
Arc::new(self.manifest_sizes.finish()),
Arc::new(self.sst_sizes.finish()),
Arc::new(self.sst_nums.finish()),
Arc::new(self.index_sizes.finish()),
Arc::new(self.engines.finish()),
Arc::new(self.region_roles.finish()),

View File

@@ -48,4 +48,3 @@ pub const FLOWS: &str = "flows";
pub const PROCEDURE_INFO: &str = "procedure_info";
pub const REGION_STATISTICS: &str = "region_statistics";
pub const PROCESS_LIST: &str = "process_list";
pub const TRIGGER_LIST: &str = "trigger_list";

View File

@@ -169,7 +169,7 @@ impl DfPartitionStream for PGClass {
}
/// Builds the `pg_catalog.pg_class` table row by row
/// TODO(J0HN50N133): `relowner` is always the [`DUMMY_OWNER_ID`] cuz we don't have user.
/// TODO(J0HN50N133): `relowner` is always the [`DUMMY_OWNER_ID`] because we don't have users.
/// Once we have user system, make it the actual owner of the table.
struct PGClassBuilder {
schema: SchemaRef,

View File

@@ -188,6 +188,7 @@ fn create_region_routes(regions: Vec<RegionNumber>) -> Vec<RegionRoute> {
name: String::new(),
partition: None,
attrs: BTreeMap::new(),
partition_expr: Default::default(),
},
leader_peer: Some(Peer {
id: rng.random_range(0..10),

View File

@@ -241,7 +241,6 @@ impl RepairTool {
let alter_table_request = alter_table::make_alter_region_request_for_peer(
logical_table_id,
&alter_table_expr,
full_table_metadata.table_info.ident.version,
peer,
physical_region_routes,
)?;

View File

@@ -66,7 +66,6 @@ pub fn generate_alter_table_expr_for_all_columns(
pub fn make_alter_region_request_for_peer(
logical_table_id: TableId,
alter_table_expr: &AlterTableExpr,
schema_version: u64,
peer: &Peer,
region_routes: &[RegionRoute],
) -> Result<RegionRequest> {
@@ -74,7 +73,7 @@ pub fn make_alter_region_request_for_peer(
let mut requests = Vec::with_capacity(regions_on_this_peer.len());
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(logical_table_id, *region_number);
let request = make_alter_region_request(region_id, alter_table_expr, schema_version);
let request = make_alter_region_request(region_id, alter_table_expr);
requests.push(request);
}

View File

@@ -23,7 +23,7 @@ use api::v1::greptime_request::Request;
use api::v1::query_request::Query;
use api::v1::{
AlterTableExpr, AuthHeader, Basic, CreateTableExpr, DdlRequest, GreptimeRequest,
InsertRequests, QueryRequest, RequestHeader,
InsertRequests, QueryRequest, RequestHeader, RowInsertRequests,
};
use arrow_flight::{FlightData, Ticket};
use async_stream::stream;
@@ -42,7 +42,7 @@ use common_telemetry::{error, warn};
use futures::future;
use futures_util::{Stream, StreamExt, TryStreamExt};
use prost::Message;
use snafu::{ensure, ResultExt};
use snafu::{ensure, OptionExt, ResultExt};
use tonic::metadata::{AsciiMetadataKey, AsciiMetadataValue, MetadataMap, MetadataValue};
use tonic::transport::Channel;
@@ -118,6 +118,7 @@ impl Database {
}
}
/// Set the catalog for the database client.
pub fn set_catalog(&mut self, catalog: impl Into<String>) {
self.catalog = catalog.into();
}
@@ -130,6 +131,7 @@ impl Database {
}
}
/// Set the schema for the database client.
pub fn set_schema(&mut self, schema: impl Into<String>) {
self.schema = schema.into();
}
@@ -142,20 +144,24 @@ impl Database {
}
}
/// Set the timezone for the database client.
pub fn set_timezone(&mut self, timezone: impl Into<String>) {
self.timezone = timezone.into();
}
/// Set the auth scheme for the database client.
pub fn set_auth(&mut self, auth: AuthScheme) {
self.ctx.auth_header = Some(AuthHeader {
auth_scheme: Some(auth),
});
}
/// Make an InsertRequests request to the database.
pub async fn insert(&self, requests: InsertRequests) -> Result<u32> {
self.handle(Request::Inserts(requests)).await
}
/// Make an InsertRequests request to the database with hints.
pub async fn insert_with_hints(
&self,
requests: InsertRequests,
@@ -172,6 +178,28 @@ impl Database {
from_grpc_response(response)
}
/// Make a RowInsertRequests request to the database.
pub async fn row_inserts(&self, requests: RowInsertRequests) -> Result<u32> {
self.handle(Request::RowInserts(requests)).await
}
/// Make a RowInsertRequests request to the database with hints.
pub async fn row_inserts_with_hints(
&self,
requests: RowInsertRequests,
hints: &[(&str, &str)],
) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let request = self.to_rpc_request(Request::RowInserts(requests));
let mut request = tonic::Request::new(request);
let metadata = request.metadata_mut();
Self::put_hints(metadata, hints)?;
let response = client.handle(request).await?.into_inner();
from_grpc_response(response)
}
fn put_hints(metadata: &mut MetadataMap, hints: &[(&str, &str)]) -> Result<()> {
let Some(value) = hints
.iter()
@@ -187,6 +215,7 @@ impl Database {
Ok(())
}
/// Make a request to the database.
pub async fn handle(&self, request: Request) -> Result<u32> {
let mut client = make_database_client(&self.client)?.inner;
let request = self.to_rpc_request(request);
@@ -221,12 +250,18 @@ impl Database {
retries += 1;
warn!("Retrying {} times with error = {:?}", retries, err);
continue;
} else {
error!(
err; "Failed to send request to grpc handle, retries = {}, not retryable error, aborting",
retries
);
return Err(err.into());
}
}
(Err(err), false) => {
error!(
"Failed to send request to grpc handle after {} retries, error = {:?}",
retries, err
err; "Failed to send request to grpc handle after {} retries",
retries,
);
return Err(err.into());
}
@@ -250,6 +285,7 @@ impl Database {
}
}
/// Executes a SQL query without any hints.
pub async fn sql<S>(&self, sql: S) -> Result<Output>
where
S: AsRef<str>,
@@ -257,6 +293,7 @@ impl Database {
self.sql_with_hint(sql, &[]).await
}
/// Executes a SQL query with optional hints for query optimization.
pub async fn sql_with_hint<S>(&self, sql: S, hints: &[(&str, &str)]) -> Result<Output>
where
S: AsRef<str>,
@@ -267,6 +304,7 @@ impl Database {
self.do_get(request, hints).await
}
/// Executes a logical plan directly without SQL parsing.
pub async fn logical_plan(&self, logical_plan: Vec<u8>) -> Result<Output> {
let request = Request::Query(QueryRequest {
query: Some(Query::LogicalPlan(logical_plan)),
@@ -274,6 +312,7 @@ impl Database {
self.do_get(request, &[]).await
}
/// Creates a new table using the provided table expression.
pub async fn create(&self, expr: CreateTableExpr) -> Result<Output> {
let request = Request::Ddl(DdlRequest {
expr: Some(DdlExpr::CreateTable(expr)),
@@ -281,6 +320,7 @@ impl Database {
self.do_get(request, &[]).await
}
/// Alters an existing table using the provided alter expression.
pub async fn alter(&self, expr: AlterTableExpr) -> Result<Output> {
let request = Request::Ddl(DdlRequest {
expr: Some(DdlExpr::AlterTable(expr)),
@@ -321,7 +361,10 @@ impl Database {
let mut flight_message_stream = flight_data_stream.map(move |flight_data| {
flight_data
.map_err(Error::from)
.and_then(|data| decoder.try_decode(&data).context(ConvertFlightDataSnafu))
.and_then(|data| decoder.try_decode(&data).context(ConvertFlightDataSnafu))?
.context(IllegalFlightMessagesSnafu {
reason: "none message",
})
});
let Some(first_flight_message) = flight_message_stream.next().await else {


@@ -128,7 +128,10 @@ impl RegionRequester {
let mut flight_message_stream = flight_data_stream.map(move |flight_data| {
flight_data
.map_err(Error::from)
.and_then(|data| decoder.try_decode(&data).context(ConvertFlightDataSnafu))
.and_then(|data| decoder.try_decode(&data).context(ConvertFlightDataSnafu))?
.context(IllegalFlightMessagesSnafu {
reason: "none message",
})
});
let Some(first_flight_message) = flight_message_stream.next().await else {
@@ -157,19 +160,70 @@ impl RegionRequester {
let _span = tracing_context.attach(common_telemetry::tracing::info_span!(
"poll_flight_data_stream"
));
while let Some(flight_message) = flight_message_stream.next().await {
let flight_message = flight_message
.map_err(BoxedError::new)
.context(ExternalSnafu)?;
let mut buffered_message: Option<FlightMessage> = None;
let mut stream_ended = false;
while !stream_ended {
// Get the next message from the buffer, or read one from the flight message stream.
let flight_message_item = if let Some(msg) = buffered_message.take() {
Some(Ok(msg))
} else {
flight_message_stream.next().await
};
let flight_message = match flight_message_item {
Some(Ok(message)) => message,
Some(Err(e)) => {
yield Err(BoxedError::new(e)).context(ExternalSnafu);
break;
}
None => break,
};
match flight_message {
FlightMessage::RecordBatch(record_batch) => {
yield RecordBatch::try_from_df_record_batch(
let result_to_yield = RecordBatch::try_from_df_record_batch(
schema_cloned.clone(),
record_batch,
)
);
// Get the next message from the stream; normally it should be a metrics message.
if let Some(next_flight_message_result) = flight_message_stream.next().await
{
match next_flight_message_result {
Ok(FlightMessage::Metrics(s)) => {
let m = serde_json::from_str(&s).ok().map(Arc::new);
metrics_ref.swap(m);
}
Ok(FlightMessage::RecordBatch(rb)) => {
// for some reason it's not a metrics message, so we need to buffer this record batch
// and yield it in the next iteration.
buffered_message = Some(FlightMessage::RecordBatch(rb));
}
Ok(_) => {
yield IllegalFlightMessagesSnafu {
reason: "A RecordBatch message can only be succeeded by a Metrics message or another RecordBatch message"
}
.fail()
.map_err(BoxedError::new)
.context(ExternalSnafu);
break;
}
Err(e) => {
yield Err(BoxedError::new(e)).context(ExternalSnafu);
break;
}
}
} else {
// the stream has ended
stream_ended = true;
}
yield result_to_yield;
}
FlightMessage::Metrics(s) => {
// A separate branch in case a metrics message arrives after other messages.
let m = serde_json::from_str(&s).ok().map(Arc::new);
metrics_ref.swap(m);
break;


@@ -20,11 +20,11 @@ use cmd::error::{InitTlsProviderSnafu, Result};
use cmd::options::GlobalOptions;
use cmd::{cli, datanode, flownode, frontend, metasrv, standalone, App};
use common_base::Plugins;
use common_version::version;
use common_version::{verbose_version, version};
use servers::install_ring_crypto_provider;
#[derive(Parser)]
#[command(name = "greptime", author, version, long_version = version(), about)]
#[command(name = "greptime", author, version, long_version = verbose_version(), about)]
#[command(propagate_version = true)]
pub(crate) struct Command {
#[clap(subcommand)]
@@ -143,10 +143,8 @@ async fn start(cli: Command) -> Result<()> {
}
fn setup_human_panic() {
human_panic::setup_panic!(
human_panic::Metadata::new("GreptimeDB", env!("CARGO_PKG_VERSION"))
.homepage("https://github.com/GreptimeTeam/greptimedb/discussions")
);
human_panic::setup_panic!(human_panic::Metadata::new("GreptimeDB", version())
.homepage("https://github.com/GreptimeTeam/greptimedb/discussions"));
common_telemetry::set_panic_hook();
}


@@ -19,7 +19,7 @@ use catalog::kvbackend::MetaKvBackend;
use common_base::Plugins;
use common_meta::cache::LayeredCacheRegistryBuilder;
use common_telemetry::info;
use common_version::{short_version, version};
use common_version::{short_version, verbose_version};
use datanode::datanode::DatanodeBuilder;
use datanode::service::DatanodeServiceBuilder;
use meta_client::MetaClientType;
@@ -67,7 +67,7 @@ impl InstanceBuilder {
None,
);
log_versions(version(), short_version(), APP_NAME);
log_versions(verbose_version(), short_version(), APP_NAME);
create_resource_limit_metrics(APP_NAME);
plugins::setup_datanode_plugins(plugins, &opts.plugins, dn_opts)


@@ -32,7 +32,7 @@ use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::TableMetadataManager;
use common_telemetry::info;
use common_telemetry::logging::{TracingOptions, DEFAULT_LOGGING_DIR};
use common_version::{short_version, version};
use common_version::{short_version, verbose_version};
use flow::{
get_flow_auth_options, FlownodeBuilder, FlownodeInstance, FlownodeServiceBuilder,
FrontendClient, FrontendInvoker,
@@ -279,7 +279,7 @@ impl StartCommand {
None,
);
log_versions(version(), short_version(), APP_NAME);
log_versions(verbose_version(), short_version(), APP_NAME);
create_resource_limit_metrics(APP_NAME);
info!("Flownode start command: {:#?}", self);
@@ -374,6 +374,7 @@ impl StartCommand {
meta_client.clone(),
flow_auth_header,
opts.query.clone(),
opts.flow.batching_mode.clone(),
);
let frontend_client = Arc::new(frontend_client);
let flownode_builder = FlownodeBuilder::new(


@@ -33,7 +33,7 @@ use common_meta::heartbeat::handler::HandlerGroupExecutor;
use common_telemetry::info;
use common_telemetry::logging::{TracingOptions, DEFAULT_LOGGING_DIR};
use common_time::timezone::set_default_timezone;
use common_version::{short_version, version};
use common_version::{short_version, verbose_version};
use frontend::frontend::Frontend;
use frontend::heartbeat::HeartbeatTask;
use frontend::instance::builder::FrontendBuilder;
@@ -102,7 +102,7 @@ impl App for Instance {
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
subcmd: SubCommand,
pub subcmd: SubCommand,
}
impl Command {
@@ -116,7 +116,7 @@ impl Command {
}
#[derive(Parser)]
enum SubCommand {
pub enum SubCommand {
Start(StartCommand),
}
@@ -153,7 +153,7 @@ pub struct StartCommand {
#[clap(long)]
postgres_addr: Option<String>,
#[clap(short, long)]
config_file: Option<String>,
pub config_file: Option<String>,
#[clap(short, long)]
influxdb_enable: Option<bool>,
#[clap(long, value_delimiter = ',', num_args = 1..)]
@@ -169,7 +169,7 @@ pub struct StartCommand {
#[clap(long)]
disable_dashboard: Option<bool>,
#[clap(long, default_value = "GREPTIMEDB_FRONTEND")]
env_prefix: String,
pub env_prefix: String,
}
impl StartCommand {
@@ -282,7 +282,7 @@ impl StartCommand {
opts.component.slow_query.as_ref(),
);
log_versions(version(), short_version(), APP_NAME);
log_versions(verbose_version(), short_version(), APP_NAME);
create_resource_limit_metrics(APP_NAME);
info!("Frontend start command: {:#?}", self);


@@ -112,7 +112,7 @@ pub trait App: Send {
pub fn log_versions(version: &str, short_version: &str, app: &str) {
// Report app version as gauge.
APP_VERSION
.with_label_values(&[env!("CARGO_PKG_VERSION"), short_version, app])
.with_label_values(&[common_version::version(), short_version, app])
.inc();
// Log version and argument flags.


@@ -22,7 +22,7 @@ use common_base::Plugins;
use common_config::Configurable;
use common_telemetry::info;
use common_telemetry::logging::{TracingOptions, DEFAULT_LOGGING_DIR};
use common_version::{short_version, version};
use common_version::{short_version, verbose_version};
use meta_srv::bootstrap::MetasrvInstance;
use meta_srv::metasrv::BackendImpl;
use snafu::ResultExt;
@@ -324,7 +324,7 @@ impl StartCommand {
None,
);
log_versions(version(), short_version(), APP_NAME);
log_versions(verbose_version(), short_version(), APP_NAME);
create_resource_limit_metrics(APP_NAME);
info!("Metasrv start command: {:#?}", self);
@@ -340,12 +340,12 @@ impl StartCommand {
.await
.context(StartMetaServerSnafu)?;
let builder = meta_srv::bootstrap::metasrv_builder(&opts, plugins.clone(), None)
let builder = meta_srv::bootstrap::metasrv_builder(&opts, plugins, None)
.await
.context(error::BuildMetaServerSnafu)?;
let metasrv = builder.build().await.context(error::BuildMetaServerSnafu)?;
let instance = MetasrvInstance::new(opts, plugins, metasrv)
let instance = MetasrvInstance::new(metasrv)
.await
.context(error::BuildMetaServerSnafu)?;


@@ -51,7 +51,7 @@ use common_telemetry::logging::{
LoggingOptions, SlowQueryOptions, TracingOptions, DEFAULT_LOGGING_DIR,
};
use common_time::timezone::set_default_timezone;
use common_version::{short_version, version};
use common_version::{short_version, verbose_version};
use common_wal::config::DatanodeWalConfig;
use datanode::config::{DatanodeOptions, ProcedureConfig, RegionEngineConfig, StorageConfig};
use datanode::datanode::{Datanode, DatanodeBuilder};
@@ -485,7 +485,7 @@ impl StartCommand {
opts.component.slow_query.as_ref(),
);
log_versions(version(), short_version(), APP_NAME);
log_versions(verbose_version(), short_version(), APP_NAME);
create_resource_limit_metrics(APP_NAME);
info!("Standalone start command: {:#?}", self);
@@ -821,6 +821,7 @@ impl InformationExtension for StandaloneInformationExtension {
memtable_size: region_stat.memtable_size,
manifest_size: region_stat.manifest_size,
sst_size: region_stat.sst_size,
sst_num: region_stat.sst_num,
index_size: region_stat.index_size,
region_manifest: region_stat.manifest.into(),
data_topic_latest_entry_id: region_stat.data_topic_latest_entry_id,


@@ -104,8 +104,6 @@ pub const INFORMATION_SCHEMA_PROCEDURE_INFO_TABLE_ID: u32 = 34;
pub const INFORMATION_SCHEMA_REGION_STATISTICS_TABLE_ID: u32 = 35;
/// id for information_schema.process_list
pub const INFORMATION_SCHEMA_PROCESS_LIST_TABLE_ID: u32 = 36;
/// id for information_schema.trigger_list (for greptimedb trigger)
pub const INFORMATION_SCHEMA_TRIGGER_TABLE_ID: u32 = 37;
// ----- End of information_schema tables -----


@@ -21,6 +21,7 @@ pub mod error;
pub mod file_format;
pub mod lister;
pub mod object_store;
pub mod parquet_writer;
pub mod share_buffer;
#[cfg(test)]
pub mod test_util;


@@ -77,6 +77,11 @@ pub fn build_oss_backend(
let op = ObjectStore::new(builder)
.context(error::BuildBackendSnafu)?
.layer(
object_store::layers::RetryLayer::new()
.with_jitter()
.with_notify(object_store::util::PrintDetailedError),
)
.layer(object_store::layers::LoggingLayer::default())
.layer(object_store::layers::TracingLayer)
.layer(object_store::layers::build_prometheus_metrics_layer(true))


@@ -85,6 +85,11 @@ pub fn build_s3_backend(
// TODO(weny): Consider finding a better way to eliminate duplicate code.
Ok(ObjectStore::new(builder)
.context(error::BuildBackendSnafu)?
.layer(
object_store::layers::RetryLayer::new()
.with_jitter()
.with_notify(object_store::util::PrintDetailedError),
)
.layer(object_store::layers::LoggingLayer::new(
DefaultLoggingInterceptor,
))


@@ -0,0 +1,52 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use bytes::Bytes;
use futures::future::BoxFuture;
use object_store::Writer;
use parquet::arrow::async_writer::AsyncFileWriter;
use parquet::errors::ParquetError;
/// Bridges opendal [Writer] with parquet [AsyncFileWriter].
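///
/// A minimal usage sketch (the object store and path below are assumptions, not
/// part of this module):
///
/// ```ignore
/// let writer = object_store.writer("data/part-0.parquet").await?;
/// let async_writer = AsyncWriter::new(writer);
/// // `async_writer` implements parquet's `AsyncFileWriter` and can be handed to
/// // parquet's async Arrow writer.
/// ```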
pub struct AsyncWriter {
inner: Writer,
}
impl AsyncWriter {
/// Creates an [`AsyncWriter`] from the given [`Writer`].
pub fn new(writer: Writer) -> Self {
Self { inner: writer }
}
}
impl AsyncFileWriter for AsyncWriter {
fn write(&mut self, bs: Bytes) -> BoxFuture<'_, parquet::errors::Result<()>> {
Box::pin(async move {
self.inner
.write(bs)
.await
.map_err(|err| ParquetError::External(Box::new(err)))
})
}
fn complete(&mut self) -> BoxFuture<'_, parquet::errors::Result<()>> {
Box::pin(async move {
self.inner
.close()
.await
.map(|_| ())
.map_err(|err| ParquetError::External(Box::new(err)))
})
}
}


@@ -84,24 +84,33 @@ fn test_to_string() {
assert_eq!(result.unwrap_err().to_string(), "<root cause>");
}
fn normalize_path(s: &str) -> String {
s.replace('\\', "/")
}
#[test]
fn test_debug_format() {
let result = normal_error();
let debug_output = format!("{:?}", result.unwrap_err());
let normalized_output = debug_output.replace('\\', "/");
assert_eq!(
normalized_output,
r#"0: A normal error with "display" attribute, message "blabla", at src/common/error/tests/ext.rs:55:22
1: PlainError { msg: "<root cause>", status_code: Unexpected }"#
normalize_path(&debug_output),
format!(
r#"0: A normal error with "display" attribute, message "blabla", at {}:55:22
1: PlainError {{ msg: "<root cause>", status_code: Unexpected }}"#,
normalize_path(file!())
)
);
let result = transparent_error();
let debug_output = format!("{:?}", result.unwrap_err());
let normalized_output = debug_output.replace('\\', "/");
assert_eq!(
normalized_output,
r#"0: <transparent>, at src/common/error/tests/ext.rs:60:5
1: PlainError { msg: "<root cause>", status_code: Unexpected }"#
normalize_path(&debug_output),
format!(
r#"0: <transparent>, at {}:60:5
1: PlainError {{ msg: "<root cause>", status_code: Unexpected }}"#,
normalize_path(file!())
)
);
}


@@ -0,0 +1,25 @@
[package]
name = "common-event-recorder"
version.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
api.workspace = true
async-trait.workspace = true
backon.workspace = true
client.workspace = true
common-error.workspace = true
common-macro.workspace = true
common-meta.workspace = true
common-telemetry.workspace = true
common-time.workspace = true
serde.workspace = true
serde_json.workspace = true
snafu.workspace = true
store-api.workspace = true
tokio.workspace = true
tokio-util.workspace = true
[lints]
workspace = true


@@ -0,0 +1,53 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::ColumnSchema;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use snafu::{Location, Snafu};
#[derive(Snafu)]
#[snafu(visibility(pub))]
#[stack_trace_debug]
pub enum Error {
#[snafu(display("No available frontend"))]
NoAvailableFrontend {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Mismatched schema, expected: {:?}, actual: {:?}", expected, actual))]
MismatchedSchema {
#[snafu(implicit)]
location: Location,
expected: Vec<ColumnSchema>,
actual: Vec<ColumnSchema>,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::MismatchedSchema { .. } => StatusCode::InvalidArguments,
Error::NoAvailableFrontend { .. } => StatusCode::Internal,
}
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}


@@ -0,0 +1,18 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod error;
pub mod recorder;
pub use recorder::*;


@@ -0,0 +1,527 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::collections::HashMap;
use std::fmt::Debug;
use std::sync::{Arc, OnceLock};
use std::time::Duration;
use api::v1::column_data_type_extension::TypeExt;
use api::v1::value::ValueData;
use api::v1::{
ColumnDataType, ColumnDataTypeExtension, ColumnSchema, JsonTypeExtension, Row,
RowInsertRequest, RowInsertRequests, Rows, SemanticType,
};
use async_trait::async_trait;
use backon::{BackoffBuilder, ExponentialBuilder};
use common_telemetry::{debug, error, info, warn};
use common_time::timestamp::{TimeUnit, Timestamp};
use serde::{Deserialize, Serialize};
use store_api::mito_engine_options::{APPEND_MODE_KEY, TTL_KEY};
use tokio::sync::mpsc::{channel, Receiver, Sender};
use tokio::task::JoinHandle;
use tokio::time::sleep;
use tokio_util::sync::CancellationToken;
use crate::error::{MismatchedSchemaSnafu, Result};
/// The default table name for storing the events.
pub const DEFAULT_EVENTS_TABLE_NAME: &str = "events";
/// The column name for the event type.
pub const EVENTS_TABLE_TYPE_COLUMN_NAME: &str = "type";
/// The column name for the event payload.
pub const EVENTS_TABLE_PAYLOAD_COLUMN_NAME: &str = "payload";
/// The column name for the event timestamp.
pub const EVENTS_TABLE_TIMESTAMP_COLUMN_NAME: &str = "timestamp";
/// EventRecorderRef is the reference to the event recorder.
pub type EventRecorderRef = Arc<dyn EventRecorder>;
static EVENTS_TABLE_TTL: OnceLock<String> = OnceLock::new();
/// The time interval for flushing batched events to the event handler.
pub const DEFAULT_FLUSH_INTERVAL_SECONDS: Duration = Duration::from_secs(5);
// The default TTL for the events table.
const DEFAULT_EVENTS_TABLE_TTL: &str = "30d";
// The capacity of the tokio channel for transmitting events to background processor.
const DEFAULT_CHANNEL_SIZE: usize = 2048;
// The size of the buffer for batching events before flushing to event handler.
const DEFAULT_BUFFER_SIZE: usize = 100;
// The maximum number of retry attempts when event handler processing fails.
const DEFAULT_MAX_RETRY_TIMES: u64 = 3;
/// Event trait defines the interface for events that can be recorded and persisted to the system table.
/// By default, the event is persisted to the system table with the following schema:
///
/// - `type`: the type of the event.
/// - `payload`: the JSON bytes of the event.
/// - `timestamp`: the timestamp of the event.
///
/// The event can also add the extra schema and row to the event by overriding the `extra_schema` and `extra_row` methods.
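///
/// A minimal illustrative implementation (the event type and payload below are made
/// up for the example, not defined by this crate):
///
/// ```ignore
/// #[derive(Debug)]
/// struct CompactionEvent {
///     region_id: u64,
/// }
///
/// impl Event for CompactionEvent {
///     fn event_type(&self) -> &str {
///         "compaction"
///     }
///
///     fn json_payload(&self) -> Result<String> {
///         Ok(format!("{{\"region_id\": {}}}", self.region_id))
///     }
///
///     fn as_any(&self) -> &dyn std::any::Any {
///         self
///     }
/// }
/// ```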
pub trait Event: Send + Sync + Debug {
/// Returns the type of the event.
fn event_type(&self) -> &str;
/// Returns the timestamp of the event. Default to the current time.
fn timestamp(&self) -> Timestamp {
Timestamp::current_time(TimeUnit::Nanosecond)
}
/// Returns the JSON bytes of the event as the payload. It will use JSON type to store the payload.
fn json_payload(&self) -> Result<String>;
/// Adds extra schema columns to the event on top of the default schema.
fn extra_schema(&self) -> Vec<ColumnSchema> {
vec![]
}
/// Adds extra row values to the event on top of the default row.
fn extra_row(&self) -> Result<Row> {
Ok(Row { values: vec![] })
}
/// Returns the event as any type.
fn as_any(&self) -> &dyn Any;
}
/// Returns the hints for the insert operation.
pub fn insert_hints() -> Vec<(&'static str, &'static str)> {
vec![
(
TTL_KEY,
EVENTS_TABLE_TTL
.get()
.map(|s| s.as_str())
.unwrap_or(DEFAULT_EVENTS_TABLE_TTL),
),
(APPEND_MODE_KEY, "true"),
]
}
/// Builds the row inserts request for the events that will be persisted to the events table.
pub fn build_row_inserts_request(events: &[Box<dyn Event>]) -> Result<RowInsertRequests> {
// Aggregate the events by the event type.
let mut event_groups: HashMap<&str, Vec<&Box<dyn Event>>> = HashMap::new();
for event in events {
event_groups
.entry(event.event_type())
.or_default()
.push(event);
}
let mut row_insert_requests = RowInsertRequests {
inserts: Vec::with_capacity(event_groups.len()),
};
for (_, events) in event_groups {
validate_events(&events)?;
// We already validated the events, so it's safe to get the first event to build the schema for the RowInsertRequest.
let event = &events[0];
let mut schema = vec![
ColumnSchema {
column_name: EVENTS_TABLE_TYPE_COLUMN_NAME.to_string(),
datatype: ColumnDataType::String.into(),
semantic_type: SemanticType::Tag.into(),
..Default::default()
},
ColumnSchema {
column_name: EVENTS_TABLE_PAYLOAD_COLUMN_NAME.to_string(),
datatype: ColumnDataType::Binary as i32,
semantic_type: SemanticType::Field as i32,
datatype_extension: Some(ColumnDataTypeExtension {
type_ext: Some(TypeExt::JsonType(JsonTypeExtension::JsonBinary.into())),
}),
..Default::default()
},
ColumnSchema {
column_name: EVENTS_TABLE_TIMESTAMP_COLUMN_NAME.to_string(),
datatype: ColumnDataType::TimestampNanosecond.into(),
semantic_type: SemanticType::Timestamp.into(),
..Default::default()
},
];
schema.extend(event.extra_schema());
let rows = events
.iter()
.map(|event| {
let mut row = Row {
values: vec![
ValueData::StringValue(event.event_type().to_string()).into(),
ValueData::BinaryValue(event.json_payload()?.as_bytes().to_vec()).into(),
ValueData::TimestampNanosecondValue(event.timestamp().value()).into(),
],
};
row.values.extend(event.extra_row()?.values);
Ok(row)
})
.collect::<Result<Vec<_>>>()?;
row_insert_requests.inserts.push(RowInsertRequest {
table_name: DEFAULT_EVENTS_TABLE_NAME.to_string(),
rows: Some(Rows { schema, rows }),
});
}
Ok(row_insert_requests)
}
// Ensure the events with the same event type have the same extra schema.
#[allow(clippy::borrowed_box)]
fn validate_events(events: &[&Box<dyn Event>]) -> Result<()> {
// It's safe to get the first event because the events are already grouped by the event type.
let extra_schema = events[0].extra_schema();
for event in events {
if event.extra_schema() != extra_schema {
MismatchedSchemaSnafu {
expected: extra_schema.clone(),
actual: event.extra_schema(),
}
.fail()?;
}
}
Ok(())
}
/// EventRecorder trait defines the interface for recording events.
pub trait EventRecorder: Send + Sync + 'static {
/// Records an event for persistence and processing by [EventHandler].
fn record(&self, event: Box<dyn Event>);
/// Cancels the event recorder.
fn close(&self);
}
/// EventHandler trait defines the interface for how to handle the event.
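/// A minimal illustrative handler (not the frontend-forwarding implementation used in
/// practice; it only builds the insert request and logs its size):
///
/// ```ignore
/// struct LoggingEventHandler;
///
/// #[async_trait]
/// impl EventHandler for LoggingEventHandler {
///     async fn handle(&self, events: &[Box<dyn Event>]) -> Result<()> {
///         // Reuse the request builder from this module instead of persisting anywhere.
///         let requests = build_row_inserts_request(events)?;
///         debug!("would persist {} insert group(s)", requests.inserts.len());
///         Ok(())
///     }
/// }
/// ```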
#[async_trait]
pub trait EventHandler: Send + Sync + 'static {
/// Processes and handles incoming events. The [DefaultEventHandlerImpl] implementation forwards events to frontend instances for persistence.
/// We use `&[Box<dyn Event>]` to avoid consuming the events, so the caller can buffer the events and retry if the handler fails.
async fn handle(&self, events: &[Box<dyn Event>]) -> Result<()>;
}
/// Configuration options for the event recorder.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct EventRecorderOptions {
/// TTL for the events table that will be used to store the events.
pub ttl: String,
}
impl Default for EventRecorderOptions {
fn default() -> Self {
Self {
ttl: DEFAULT_EVENTS_TABLE_TTL.to_string(),
}
}
}
/// Implementation of [EventRecorder] that records events and processes them in the background via the [EventHandler].
pub struct EventRecorderImpl {
// The channel to send the events to the background processor.
tx: Sender<Box<dyn Event>>,
// The cancel token to cancel the background processor.
cancel_token: CancellationToken,
// The background processor to process the events.
handle: Option<JoinHandle<()>>,
}
impl EventRecorderImpl {
pub fn new(event_handler: Box<dyn EventHandler>, opts: EventRecorderOptions) -> Self {
info!("Creating event recorder with options: {:?}", opts);
let (tx, rx) = channel(DEFAULT_CHANNEL_SIZE);
let cancel_token = CancellationToken::new();
let mut recorder = Self {
tx,
handle: None,
cancel_token: cancel_token.clone(),
};
let processor = EventProcessor::new(
rx,
event_handler,
DEFAULT_FLUSH_INTERVAL_SECONDS,
DEFAULT_MAX_RETRY_TIMES,
)
.with_cancel_token(cancel_token);
// Spawn a background task to process the events.
let handle = tokio::spawn(async move {
processor.process(DEFAULT_BUFFER_SIZE).await;
});
recorder.handle = Some(handle);
// The ttl can only be set once, so it's safe to ignore the error.
if EVENTS_TABLE_TTL.set(opts.ttl.clone()).is_err() {
info!(
"Events table ttl already set to {}, skip setting it",
opts.ttl
);
}
recorder
}
}
impl EventRecorder for EventRecorderImpl {
// Accepts an event and sends it to the background handler.
fn record(&self, event: Box<dyn Event>) {
if let Err(e) = self.tx.try_send(event) {
error!("Failed to send event to the background processor: {}", e);
}
}
// Closes the event recorder. It will stop the background processor and flush the buffer.
fn close(&self) {
self.cancel_token.cancel();
}
}
impl Drop for EventRecorderImpl {
fn drop(&mut self) {
if let Some(handle) = self.handle.take() {
handle.abort();
info!("Aborted the background processor in event recorder");
}
}
}
struct EventProcessor {
rx: Receiver<Box<dyn Event>>,
event_handler: Box<dyn EventHandler>,
max_retry_times: u64,
process_interval: Duration,
cancel_token: CancellationToken,
}
impl EventProcessor {
fn new(
rx: Receiver<Box<dyn Event>>,
event_handler: Box<dyn EventHandler>,
process_interval: Duration,
max_retry_times: u64,
) -> Self {
Self {
rx,
event_handler,
max_retry_times,
process_interval,
cancel_token: CancellationToken::new(),
}
}
fn with_cancel_token(mut self, cancel_token: CancellationToken) -> Self {
self.cancel_token = cancel_token;
self
}
async fn process(mut self, buffer_size: usize) {
info!("Start the background processor in event recorder to handle the received events.");
let mut buffer = Vec::with_capacity(buffer_size);
let mut interval = tokio::time::interval(self.process_interval);
loop {
tokio::select! {
maybe_event = self.rx.recv() => {
if let Some(maybe_event) = maybe_event {
debug!("Received event: {:?}", maybe_event);
if buffer.len() >= buffer_size {
debug!(
"Flushing events to the event handler because the buffer is full with {} events",
buffer.len()
);
self.flush_events_to_handler(&mut buffer).await;
}
// Push the event to the buffer; the buffer is flushed when the interval fires or a close signal is received.
buffer.push(maybe_event);
} else {
// When the channel is closed, flush the buffer and exit the loop.
self.flush_events_to_handler(&mut buffer).await;
break;
}
}
// Cancel the processor through the cancel token.
_ = self.cancel_token.cancelled() => {
warn!("Received a cancel signal, flushing the buffer and exiting the loop");
self.flush_events_to_handler(&mut buffer).await;
break;
}
// When the interval is triggered, flush the buffer and send the events to the event handler.
_ = interval.tick() => {
self.flush_events_to_handler(&mut buffer).await;
}
}
}
}
// NOTE: While we implement a retry mechanism for failed event handling, there is no guarantee that all events will be processed successfully.
async fn flush_events_to_handler(&self, buffer: &mut Vec<Box<dyn Event>>) {
if !buffer.is_empty() {
debug!("Flushing {} events to the event handler", buffer.len());
let mut backoff = ExponentialBuilder::default()
.with_min_delay(Duration::from_millis(
DEFAULT_FLUSH_INTERVAL_SECONDS.as_millis() as u64 / self.max_retry_times.max(1),
))
.with_max_delay(Duration::from_millis(
DEFAULT_FLUSH_INTERVAL_SECONDS.as_millis() as u64,
))
.with_max_times(self.max_retry_times as usize)
.build();
loop {
match self.event_handler.handle(buffer).await {
Ok(()) => {
debug!("Successfully handled {} events", buffer.len());
break;
}
Err(e) => {
if let Some(d) = backoff.next() {
warn!(e; "Failed to handle events, retrying...");
sleep(d).await;
continue;
} else {
warn!(
e; "Failed to handle events after {} retries",
self.max_retry_times
);
break;
}
}
}
}
}
// Clear the buffer to prevent unbounded memory growth, regardless of whether event processing succeeded or failed.
buffer.clear();
}
}
#[cfg(test)]
mod tests {
use super::*;
#[derive(Debug)]
struct TestEvent {}
impl Event for TestEvent {
fn event_type(&self) -> &str {
"test_event"
}
fn json_payload(&self) -> Result<String> {
Ok("{\"procedure_id\": \"1234567890\"}".to_string())
}
fn as_any(&self) -> &dyn Any {
self
}
}
struct TestEventHandlerImpl {}
#[async_trait]
impl EventHandler for TestEventHandlerImpl {
async fn handle(&self, events: &[Box<dyn Event>]) -> Result<()> {
let event = events
.first()
.unwrap()
.as_any()
.downcast_ref::<TestEvent>()
.unwrap();
assert_eq!(
event.json_payload().unwrap(),
"{\"procedure_id\": \"1234567890\"}"
);
assert_eq!(event.event_type(), "test_event");
Ok(())
}
}
#[tokio::test]
async fn test_event_recorder() {
let mut event_recorder = EventRecorderImpl::new(
Box::new(TestEventHandlerImpl {}),
EventRecorderOptions::default(),
);
event_recorder.record(Box::new(TestEvent {}));
// Sleep for a while to let the event be sent to the event handler.
sleep(Duration::from_millis(500)).await;
// Close the event recorder to flush the buffer.
event_recorder.close();
// Sleep for a while to let the background task process the event.
sleep(Duration::from_millis(500)).await;
if let Some(handle) = event_recorder.handle.take() {
assert!(handle.await.is_ok());
}
}
struct TestEventHandlerImplShouldPanic {}
#[async_trait]
impl EventHandler for TestEventHandlerImplShouldPanic {
async fn handle(&self, events: &[Box<dyn Event>]) -> Result<()> {
let event = events
.first()
.unwrap()
.as_any()
.downcast_ref::<TestEvent>()
.unwrap();
// Set the incorrect payload and event type to trigger the panic.
assert_eq!(
event.json_payload().unwrap(),
"{\"procedure_id\": \"should_panic\"}"
);
assert_eq!(event.event_type(), "should_panic");
Ok(())
}
}
#[tokio::test]
async fn test_event_recorder_should_panic() {
let mut event_recorder = EventRecorderImpl::new(
Box::new(TestEventHandlerImplShouldPanic {}),
EventRecorderOptions::default(),
);
event_recorder.record(Box::new(TestEvent {}));
// Sleep for a while to let the event be sent to the event handler.
sleep(Duration::from_millis(500)).await;
// Close the event recorder to flush the buffer.
event_recorder.close();
// Sleep for a while to let the background task process the event.
sleep(Duration::from_millis(500)).await;
if let Some(handle) = event_recorder.handle.take() {
assert!(handle.await.unwrap_err().is_panic());
}
}
}


@@ -16,6 +16,8 @@ geo = ["geohash", "h3o", "s2", "wkt", "geo-types", "dep:geo"]
ahash.workspace = true
api.workspace = true
arc-swap = "1.0"
arrow.workspace = true
arrow-schema.workspace = true
async-trait.workspace = true
bincode = "1.3"
catalog.workspace = true
@@ -34,6 +36,7 @@ datafusion.workspace = true
datafusion-common.workspace = true
datafusion-expr.workspace = true
datafusion-functions-aggregate-common.workspace = true
datafusion-physical-expr.workspace = true
datatypes.workspace = true
derive_more = { version = "1", default-features = false, features = ["display"] }
geo = { version = "0.29", optional = true }
@@ -62,5 +65,7 @@ wkt = { version = "0.11", optional = true }
[dev-dependencies]
approx = "0.5"
futures.workspace = true
pretty_assertions = "1.4.0"
serde = { version = "1.0", features = ["derive"] }
tokio.workspace = true


@@ -17,3 +17,5 @@ pub mod count_hash;
#[cfg(feature = "geo")]
pub mod geo;
pub mod vector;
pub mod aggr_wrapper;


@@ -0,0 +1,538 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Wrapper for making aggregate functions out of state/merge functions of original aggregate functions.
//!
//! i.e. for an aggregate function `foo`, we will have a state function `foo_state` and a merge function `foo_merge`.
//!
//! `foo_state`'s input args are the same as `foo`'s, and its output is a state object.
//! Note that `foo_state` might have multiple output columns, so its output is a struct array
//! in which each output column is a struct field.
//! `foo_merge`'s input arg is the same as `foo_state`'s output, and its output is the same as `foo`'s output.
//!
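//! A sketch of the intended use (variable names are illustrative; see the tests in
//! this module for concrete, runnable versions of this flow):
//!
//! ```ignore
//! // Split `SELECT sum(number) FROM t` into a lower plan computing
//! // `__sum_state(number)` and an upper plan that merges those states,
//! // aliased back to `sum(number)`.
//! let step = StateMergeHelper::split_aggr_node(aggr)?;
//! let partial = step.lower_state; // runs closer to the data
//! let combine = step.upper_merge; // merges the partial states
//! ```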
use std::sync::Arc;
use arrow::array::StructArray;
use arrow_schema::Fields;
use datafusion::optimizer::analyzer::type_coercion::TypeCoercion;
use datafusion::optimizer::AnalyzerRule;
use datafusion::physical_planner::create_aggregate_expr_and_maybe_filter;
use datafusion_common::{Column, ScalarValue};
use datafusion_expr::expr::AggregateFunction;
use datafusion_expr::function::StateFieldsArgs;
use datafusion_expr::{
Accumulator, Aggregate, AggregateUDF, AggregateUDFImpl, Expr, ExprSchemable, LogicalPlan,
Signature,
};
use datafusion_physical_expr::aggregate::AggregateFunctionExpr;
use datatypes::arrow::datatypes::{DataType, Field};
/// Returns the name of the state function for the given aggregate function name.
/// The state function is used to compute the state of the aggregate function.
/// The state function's name is in the format `__<aggr_name>_state`.
pub fn aggr_state_func_name(aggr_name: &str) -> String {
format!("__{}_state", aggr_name)
}
/// Returns the name of the merge function for the given aggregate function name.
/// The merge function is used to merge the states of the state functions.
/// The merge function's name is in the format `__<aggr_name>_merge`.
pub fn aggr_merge_func_name(aggr_name: &str) -> String {
format!("__{}_merge", aggr_name)
}
/// A wrapper to make an aggregate function out of the state and merge functions of the original aggregate function.
/// It contains the original aggregate function, the state functions, and the merge function.
///
/// Notice that state functions may have multiple output columns, so the state function's return type is always a struct array; the merge function is used to merge the states produced by the state functions.
#[derive(Debug, Clone)]
pub struct StateMergeHelper;
/// A struct to hold the two aggregate plans, one for the state function(lower) and one for the merge function(upper).
#[allow(unused)]
#[derive(Debug, Clone)]
pub struct StepAggrPlan {
/// Upper merge plan, which is the aggregate plan that merges the states of the state function.
pub upper_merge: Arc<LogicalPlan>,
/// Lower state plan, which is the aggregate plan that computes the state of the aggregate function.
pub lower_state: Arc<LogicalPlan>,
}
pub fn get_aggr_func(expr: &Expr) -> Option<&datafusion_expr::expr::AggregateFunction> {
let mut expr_ref = expr;
while let Expr::Alias(alias) = expr_ref {
expr_ref = &alias.expr;
}
if let Expr::AggregateFunction(aggr_func) = expr_ref {
Some(aggr_func)
} else {
None
}
}
impl StateMergeHelper {
/// Split an aggregate plan into two aggregate plans, one for the state function and one for the merge function.
pub fn split_aggr_node(aggr_plan: Aggregate) -> datafusion_common::Result<StepAggrPlan> {
let aggr = {
// Certain aggr funcs need type coercion to work correctly, so we need to analyze the plan first.
let aggr_plan = TypeCoercion::new().analyze(
LogicalPlan::Aggregate(aggr_plan).clone(),
&Default::default(),
)?;
if let LogicalPlan::Aggregate(aggr) = aggr_plan {
aggr
} else {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Failed to coerce expressions in aggregate plan, expected Aggregate, got: {:?}",
aggr_plan
)));
}
};
let mut lower_aggr_exprs = vec![];
let mut upper_aggr_exprs = vec![];
for aggr_expr in aggr.aggr_expr.iter() {
let Some(aggr_func) = get_aggr_func(aggr_expr) else {
return Err(datafusion_common::DataFusionError::NotImplemented(format!(
"Unsupported aggregate expression for step aggr optimize: {:?}",
aggr_expr
)));
};
let original_input_types = aggr_func
.args
.iter()
.map(|e| e.get_type(&aggr.input.schema()))
.collect::<Result<Vec<_>, _>>()?;
// first create the state function from the original aggregate function.
let state_func = StateWrapper::new((*aggr_func.func).clone())?;
let expr = AggregateFunction {
func: Arc::new(state_func.into()),
args: aggr_func.args.clone(),
distinct: aggr_func.distinct,
filter: aggr_func.filter.clone(),
order_by: aggr_func.order_by.clone(),
null_treatment: aggr_func.null_treatment,
};
let expr = Expr::AggregateFunction(expr);
let lower_state_output_col_name = expr.schema_name().to_string();
lower_aggr_exprs.push(expr);
let (original_phy_expr, _filter, _ordering) = create_aggregate_expr_and_maybe_filter(
aggr_expr,
aggr.input.schema(),
aggr.input.schema().as_arrow(),
&Default::default(),
)?;
let merge_func = MergeWrapper::new(
(*aggr_func.func).clone(),
original_phy_expr,
original_input_types,
)?;
let arg = Expr::Column(Column::new_unqualified(lower_state_output_col_name));
let expr = AggregateFunction {
func: Arc::new(merge_func.into()),
args: vec![arg],
distinct: aggr_func.distinct,
filter: aggr_func.filter.clone(),
order_by: aggr_func.order_by.clone(),
null_treatment: aggr_func.null_treatment,
};
// alias to the original aggregate expr's schema name, so parent plan can refer to it
// correctly.
let expr = Expr::AggregateFunction(expr).alias(aggr_expr.schema_name().to_string());
upper_aggr_exprs.push(expr);
}
let mut lower = aggr.clone();
lower.aggr_expr = lower_aggr_exprs;
let lower_plan = LogicalPlan::Aggregate(lower);
// update aggregate's output schema
let lower_plan = Arc::new(lower_plan.recompute_schema()?);
let mut upper = aggr.clone();
let aggr_plan = LogicalPlan::Aggregate(aggr);
upper.aggr_expr = upper_aggr_exprs;
upper.input = lower_plan.clone();
// The upper plan's output schema should be the same as the original aggregate plan's output schema.
let upper_check = upper.clone();
let upper_plan = Arc::new(LogicalPlan::Aggregate(upper_check).recompute_schema()?);
if *upper_plan.schema() != *aggr_plan.schema() {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Upper aggregate plan's schema is not the same as the original aggregate plan's schema: \n[transformed]:{}\n[ original]{}",
upper_plan.schema(), aggr_plan.schema()
)));
}
Ok(StepAggrPlan {
lower_state: lower_plan,
upper_merge: upper_plan,
})
}
}
/// Wrapper to make an aggregate function out of a state function.
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct StateWrapper {
inner: AggregateUDF,
name: String,
}
impl StateWrapper {
/// Wraps the given aggregate function so that it produces its state instead of its final value.
pub fn new(inner: AggregateUDF) -> datafusion_common::Result<Self> {
let name = aggr_state_func_name(inner.name());
Ok(Self { inner, name })
}
pub fn inner(&self) -> &AggregateUDF {
&self.inner
}
/// Deduce the return type of the original aggregate function
/// based on the accumulator arguments.
///
pub fn deduce_aggr_return_type(
&self,
acc_args: &datafusion_expr::function::AccumulatorArgs,
) -> datafusion_common::Result<DataType> {
let input_exprs = acc_args.exprs;
let input_schema = acc_args.schema;
let input_types = input_exprs
.iter()
.map(|e| e.data_type(input_schema))
.collect::<Result<Vec<_>, _>>()?;
let return_type = self.inner.return_type(&input_types)?;
Ok(return_type)
}
}
impl AggregateUDFImpl for StateWrapper {
fn accumulator<'a, 'b>(
&'a self,
acc_args: datafusion_expr::function::AccumulatorArgs<'b>,
) -> datafusion_common::Result<Box<dyn Accumulator>> {
// fix and recover proper acc args for the original aggregate function.
let state_type = acc_args.return_type.clone();
let inner = {
let old_return_type = self.deduce_aggr_return_type(&acc_args)?;
let acc_args = datafusion_expr::function::AccumulatorArgs {
return_type: &old_return_type,
schema: acc_args.schema,
ignore_nulls: acc_args.ignore_nulls,
ordering_req: acc_args.ordering_req,
is_reversed: acc_args.is_reversed,
name: acc_args.name,
is_distinct: acc_args.is_distinct,
exprs: acc_args.exprs,
};
self.inner.accumulator(acc_args)?
};
Ok(Box::new(StateAccum::new(inner, state_type)?))
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
fn name(&self) -> &str {
self.name.as_str()
}
fn is_nullable(&self) -> bool {
self.inner.is_nullable()
}
/// Return state_fields as the output struct type.
///
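/// For example, wrapping `avg` produces a struct with the fields
/// `avg[count]: UInt64` and `avg[sum]: Float64` (see the `test_avg_udaf` test).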
fn return_type(&self, arg_types: &[DataType]) -> datafusion_common::Result<DataType> {
let old_return_type = self.inner.return_type(arg_types)?;
let state_fields_args = StateFieldsArgs {
name: self.inner().name(),
input_types: arg_types,
return_type: &old_return_type,
// TODO(discord9): how to get this?, probably ok?
ordering_fields: &[],
is_distinct: false,
};
let state_fields = self.inner.state_fields(state_fields_args)?;
let struct_field = DataType::Struct(state_fields.into());
Ok(struct_field)
}
/// The state function's output fields are the same as the original aggregate function's state fields.
fn state_fields(
&self,
args: datafusion_expr::function::StateFieldsArgs,
) -> datafusion_common::Result<Vec<Field>> {
let old_return_type = self.inner.return_type(args.input_types)?;
let state_fields_args = StateFieldsArgs {
name: args.name,
input_types: args.input_types,
return_type: &old_return_type,
ordering_fields: args.ordering_fields,
is_distinct: args.is_distinct,
};
self.inner.state_fields(state_fields_args)
}
/// The state function's signature is the same as the original aggregate function's signature.
fn signature(&self) -> &Signature {
self.inner.signature()
}
/// Coercing types does nothing special, as the optimizer should already be able to produce struct types.
fn coerce_types(&self, arg_types: &[DataType]) -> datafusion_common::Result<Vec<DataType>> {
self.inner.coerce_types(arg_types)
}
}
/// The wrapper's input is the same as the original aggregate function's input,
/// and the output is the state function's output.
#[derive(Debug)]
pub struct StateAccum {
inner: Box<dyn Accumulator>,
state_fields: Fields,
}
impl StateAccum {
pub fn new(
inner: Box<dyn Accumulator>,
state_type: DataType,
) -> datafusion_common::Result<Self> {
let DataType::Struct(fields) = state_type else {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Expected a struct type for state, got: {:?}",
state_type
)));
};
Ok(Self {
inner,
state_fields: fields,
})
}
}
impl Accumulator for StateAccum {
fn evaluate(&mut self) -> datafusion_common::Result<ScalarValue> {
let state = self.inner.state()?;
let array = state
.iter()
.map(|s| s.to_array())
.collect::<Result<Vec<_>, _>>()?;
let struct_array = StructArray::try_new(self.state_fields.clone(), array, None)?;
Ok(ScalarValue::Struct(Arc::new(struct_array)))
}
fn merge_batch(
&mut self,
states: &[datatypes::arrow::array::ArrayRef],
) -> datafusion_common::Result<()> {
self.inner.merge_batch(states)
}
fn update_batch(
&mut self,
values: &[datatypes::arrow::array::ArrayRef],
) -> datafusion_common::Result<()> {
self.inner.update_batch(values)
}
fn size(&self) -> usize {
self.inner.size()
}
fn state(&mut self) -> datafusion_common::Result<Vec<ScalarValue>> {
self.inner.state()
}
}
/// TODO(discord9): mark this function as non-ser/de able
///
/// This wrapper shouldn't be registered as a udaf, as it contains extra data that is not serializable
/// and that changes across different logical plans.
#[derive(Debug, Clone)]
pub struct MergeWrapper {
inner: AggregateUDF,
name: String,
merge_signature: Signature,
/// The original physical expression of the aggregate function; we can't store the original aggregate function directly, as PhysicalExpr doesn't implement Any.
original_phy_expr: Arc<AggregateFunctionExpr>,
original_input_types: Vec<DataType>,
}
impl MergeWrapper {
pub fn new(
inner: AggregateUDF,
original_phy_expr: Arc<AggregateFunctionExpr>,
original_input_types: Vec<DataType>,
) -> datafusion_common::Result<Self> {
let name = aggr_merge_func_name(inner.name());
// The input type is actually a struct type whose fields are the state fields of the original aggregate function.
let merge_signature = Signature::user_defined(datafusion_expr::Volatility::Immutable);
Ok(Self {
inner,
name,
merge_signature,
original_phy_expr,
original_input_types,
})
}
pub fn inner(&self) -> &AggregateUDF {
&self.inner
}
}
impl AggregateUDFImpl for MergeWrapper {
fn accumulator<'a, 'b>(
&'a self,
acc_args: datafusion_expr::function::AccumulatorArgs<'b>,
) -> datafusion_common::Result<Box<dyn Accumulator>> {
if acc_args.schema.fields().len() != 1
|| !matches!(acc_args.schema.field(0).data_type(), DataType::Struct(_))
{
return Err(datafusion_common::DataFusionError::Internal(format!(
"Expected one struct type as input, got: {:?}",
acc_args.schema
)));
}
let input_type = acc_args.schema.field(0).data_type();
let DataType::Struct(fields) = input_type else {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Expected a struct type for input, got: {:?}",
input_type
)));
};
let inner_accum = self.original_phy_expr.create_accumulator()?;
Ok(Box::new(MergeAccum::new(inner_accum, fields)))
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
fn name(&self) -> &str {
self.name.as_str()
}
fn is_nullable(&self) -> bool {
self.inner.is_nullable()
}
/// Notice that `arg_types` here are actually the data types of the `state_fields`,
/// so we return a fixed return type instead of using `arg_types` to determine it.
fn return_type(&self, _arg_types: &[DataType]) -> datafusion_common::Result<DataType> {
// The return type is the same as the original aggregate function's return type.
let ret_type = self.inner.return_type(&self.original_input_types)?;
Ok(ret_type)
}
fn signature(&self) -> &Signature {
&self.merge_signature
}
/// Coercing types does nothing special, as the optimizer should already be able to produce struct types.
fn coerce_types(&self, arg_types: &[DataType]) -> datafusion_common::Result<Vec<DataType>> {
// Just check that arg_types contains exactly one struct type.
if arg_types.len() != 1 || !matches!(arg_types.first(), Some(DataType::Struct(_))) {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Expected one struct type as input, got: {:?}",
arg_types
)));
}
Ok(arg_types.to_vec())
}
/// Just return the original aggregate function's state fields.
fn state_fields(
&self,
_args: datafusion_expr::function::StateFieldsArgs,
) -> datafusion_common::Result<Vec<Field>> {
self.original_phy_expr.state_fields()
}
}
/// The merge accumulator, which modifies `update_batch`'s behavior to accept a single struct array that
/// includes the state fields of the original aggregate function and merges those states into the original
/// accumulator. The output is the same as the original aggregate function's.
#[derive(Debug)]
pub struct MergeAccum {
inner: Box<dyn Accumulator>,
state_fields: Fields,
}
impl MergeAccum {
pub fn new(inner: Box<dyn Accumulator>, state_fields: &Fields) -> Self {
Self {
inner,
state_fields: state_fields.clone(),
}
}
}
impl Accumulator for MergeAccum {
fn evaluate(&mut self) -> datafusion_common::Result<ScalarValue> {
self.inner.evaluate()
}
fn merge_batch(&mut self, states: &[arrow::array::ArrayRef]) -> datafusion_common::Result<()> {
self.inner.merge_batch(states)
}
fn update_batch(&mut self, values: &[arrow::array::ArrayRef]) -> datafusion_common::Result<()> {
let value = values.first().ok_or_else(|| {
datafusion_common::DataFusionError::Internal("No values provided for merge".to_string())
})?;
// The input values are states from other accumulators, so we merge them.
let struct_arr = value
.as_any()
.downcast_ref::<StructArray>()
.ok_or_else(|| {
datafusion_common::DataFusionError::Internal(format!(
"Expected StructArray, got: {:?}",
value.data_type()
))
})?;
let fields = struct_arr.fields();
if fields != &self.state_fields {
return Err(datafusion_common::DataFusionError::Internal(format!(
"Expected state fields: {:?}, got: {:?}",
self.state_fields, fields
)));
}
// Now the fields are known to match, so we can merge the batch
// by passing the columns through; the order is the same.
let state_columns = struct_arr.columns();
self.inner.merge_batch(state_columns)
}
fn size(&self) -> usize {
self.inner.size()
}
fn state(&mut self) -> datafusion_common::Result<Vec<ScalarValue>> {
self.inner.state()
}
}
#[cfg(test)]
mod tests;


@@ -0,0 +1,804 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::pin::Pin;
use std::sync::{Arc, Mutex};
use std::task::{Context, Poll};
use arrow::array::{ArrayRef, Float64Array, Int64Array, UInt64Array};
use arrow::record_batch::RecordBatch;
use arrow_schema::SchemaRef;
use datafusion::catalog::{Session, TableProvider};
use datafusion::datasource::DefaultTableSource;
use datafusion::execution::{RecordBatchStream, SendableRecordBatchStream, TaskContext};
use datafusion::functions_aggregate::average::avg_udaf;
use datafusion::functions_aggregate::sum::sum_udaf;
use datafusion::optimizer::analyzer::type_coercion::TypeCoercion;
use datafusion::optimizer::AnalyzerRule;
use datafusion::physical_plan::aggregates::AggregateExec;
use datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};
use datafusion::physical_plan::{DisplayAs, DisplayFormatType, ExecutionPlan, PlanProperties};
use datafusion::physical_planner::{DefaultPhysicalPlanner, PhysicalPlanner};
use datafusion::prelude::SessionContext;
use datafusion_common::{Column, TableReference};
use datafusion_expr::expr::AggregateFunction;
use datafusion_expr::sqlparser::ast::NullTreatment;
use datafusion_expr::{Aggregate, Expr, LogicalPlan, SortExpr, TableScan};
use datafusion_physical_expr::aggregate::AggregateExprBuilder;
use datafusion_physical_expr::{EquivalenceProperties, Partitioning};
use datatypes::arrow_array::StringArray;
use futures::{Stream, StreamExt as _};
use pretty_assertions::assert_eq;
use super::*;
use crate::aggrs::approximate::hll::HllState;
use crate::aggrs::approximate::uddsketch::UddSketchState;
use crate::aggrs::count_hash::CountHash;
use crate::function::Function as _;
use crate::scalars::hll_count::HllCalcFunction;
use crate::scalars::uddsketch_calc::UddSketchCalcFunction;
#[derive(Debug)]
pub struct MockInputExec {
input: Vec<RecordBatch>,
schema: SchemaRef,
properties: PlanProperties,
}
impl MockInputExec {
pub fn new(input: Vec<RecordBatch>, schema: SchemaRef) -> Self {
Self {
properties: PlanProperties::new(
EquivalenceProperties::new(schema.clone()),
Partitioning::UnknownPartitioning(1),
EmissionType::Incremental,
Boundedness::Bounded,
),
input,
schema,
}
}
}
impl DisplayAs for MockInputExec {
fn fmt_as(&self, _t: DisplayFormatType, _f: &mut std::fmt::Formatter) -> std::fmt::Result {
unimplemented!()
}
}
impl ExecutionPlan for MockInputExec {
fn name(&self) -> &str {
"MockInputExec"
}
fn as_any(&self) -> &dyn Any {
self
}
fn properties(&self) -> &PlanProperties {
&self.properties
}
fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {
vec![]
}
fn with_new_children(
self: Arc<Self>,
_children: Vec<Arc<dyn ExecutionPlan>>,
) -> datafusion_common::Result<Arc<dyn ExecutionPlan>> {
Ok(self)
}
fn execute(
&self,
_partition: usize,
_context: Arc<TaskContext>,
) -> datafusion_common::Result<SendableRecordBatchStream> {
let stream = MockStream {
stream: self.input.clone(),
schema: self.schema.clone(),
idx: 0,
};
Ok(Box::pin(stream))
}
}
struct MockStream {
stream: Vec<RecordBatch>,
schema: SchemaRef,
idx: usize,
}
impl Stream for MockStream {
type Item = datafusion_common::Result<RecordBatch>;
fn poll_next(
mut self: Pin<&mut Self>,
_cx: &mut Context<'_>,
) -> Poll<Option<datafusion_common::Result<RecordBatch>>> {
if self.idx < self.stream.len() {
let ret = self.stream[self.idx].clone();
self.idx += 1;
Poll::Ready(Some(Ok(ret)))
} else {
Poll::Ready(None)
}
}
}
impl RecordBatchStream for MockStream {
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
}
#[derive(Debug)]
struct DummyTableProvider {
schema: Arc<arrow_schema::Schema>,
record_batch: Mutex<Option<RecordBatch>>,
}
impl DummyTableProvider {
#[allow(unused)]
pub fn new(schema: Arc<arrow_schema::Schema>, record_batch: Option<RecordBatch>) -> Self {
Self {
schema,
record_batch: Mutex::new(record_batch),
}
}
}
impl Default for DummyTableProvider {
fn default() -> Self {
Self {
schema: Arc::new(arrow_schema::Schema::new(vec![Field::new(
"number",
DataType::Int64,
true,
)])),
record_batch: Mutex::new(None),
}
}
}
#[async_trait::async_trait]
impl TableProvider for DummyTableProvider {
fn as_any(&self) -> &dyn std::any::Any {
self
}
fn schema(&self) -> Arc<arrow_schema::Schema> {
self.schema.clone()
}
fn table_type(&self) -> datafusion_expr::TableType {
datafusion_expr::TableType::Base
}
async fn scan(
&self,
_state: &dyn Session,
_projection: Option<&Vec<usize>>,
_filters: &[Expr],
_limit: Option<usize>,
) -> datafusion::error::Result<Arc<dyn ExecutionPlan>> {
let input: Vec<RecordBatch> = self
.record_batch
.lock()
.unwrap()
.clone()
.map(|r| vec![r])
.unwrap_or_default();
Ok(Arc::new(MockInputExec::new(input, self.schema.clone())))
}
}
fn dummy_table_scan() -> LogicalPlan {
let table_provider = Arc::new(DummyTableProvider::default());
let table_source = DefaultTableSource::new(table_provider);
LogicalPlan::TableScan(
TableScan::try_new(
TableReference::bare("Number"),
Arc::new(table_source),
None,
vec![],
None,
)
.unwrap(),
)
}
#[tokio::test]
async fn test_sum_udaf() {
let ctx = SessionContext::new();
let sum = datafusion::functions_aggregate::sum::sum_udaf();
let sum = (*sum).clone();
let original_aggr = Aggregate::try_new(
Arc::new(dummy_table_scan()),
vec![],
vec![Expr::AggregateFunction(AggregateFunction::new_udf(
Arc::new(sum.clone()),
vec![Expr::Column(Column::new_unqualified("number"))],
false,
None,
None,
None,
))],
)
.unwrap();
let res = StateMergeHelper::split_aggr_node(original_aggr).unwrap();
let expected_lower_plan = LogicalPlan::Aggregate(
Aggregate::try_new(
Arc::new(dummy_table_scan()),
vec![],
vec![Expr::AggregateFunction(AggregateFunction::new_udf(
Arc::new(StateWrapper::new(sum.clone()).unwrap().into()),
vec![Expr::Column(Column::new_unqualified("number"))],
false,
None,
None,
None,
))],
)
.unwrap(),
)
.recompute_schema()
.unwrap();
assert_eq!(res.lower_state.as_ref(), &expected_lower_plan);
let expected_merge_plan = LogicalPlan::Aggregate(
Aggregate::try_new(
Arc::new(expected_lower_plan),
vec![],
vec![Expr::AggregateFunction(AggregateFunction::new_udf(
Arc::new(
MergeWrapper::new(
sum.clone(),
Arc::new(
AggregateExprBuilder::new(
Arc::new(sum.clone()),
vec![Arc::new(
datafusion::physical_expr::expressions::Column::new(
"number", 0,
),
)],
)
.schema(Arc::new(dummy_table_scan().schema().as_arrow().clone()))
.alias("sum(number)")
.build()
.unwrap(),
),
vec![DataType::Int64],
)
.unwrap()
.into(),
),
vec![Expr::Column(Column::new_unqualified("__sum_state(number)"))],
false,
None,
None,
None,
))
.alias("sum(number)")],
)
.unwrap(),
);
assert_eq!(res.upper_merge.as_ref(), &expected_merge_plan);
let phy_aggr_state_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&res.lower_state, &ctx.state())
.await
.unwrap();
let aggr_exec = phy_aggr_state_plan
.as_any()
.downcast_ref::<AggregateExec>()
.unwrap();
let aggr_func_expr = &aggr_exec.aggr_expr()[0];
let mut state_accum = aggr_func_expr.create_accumulator().unwrap();
// evaluate the state function
let input = Int64Array::from(vec![Some(1), Some(2), None, Some(3)]);
let values = vec![Arc::new(input) as arrow::array::ArrayRef];
state_accum.update_batch(&values).unwrap();
let state = state_accum.state().unwrap();
assert_eq!(state.len(), 1);
assert_eq!(state[0], ScalarValue::Int64(Some(6)));
let eval_res = state_accum.evaluate().unwrap();
assert_eq!(
eval_res,
ScalarValue::Struct(Arc::new(
StructArray::try_new(
vec![Field::new("sum[sum]", DataType::Int64, true)].into(),
vec![Arc::new(Int64Array::from(vec![Some(6)]))],
None,
)
.unwrap(),
))
);
let phy_aggr_merge_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&res.upper_merge, &ctx.state())
.await
.unwrap();
let aggr_exec = phy_aggr_merge_plan
.as_any()
.downcast_ref::<AggregateExec>()
.unwrap();
let aggr_func_expr = &aggr_exec.aggr_expr()[0];
let mut merge_accum = aggr_func_expr.create_accumulator().unwrap();
let merge_input =
vec![Arc::new(Int64Array::from(vec![Some(6), Some(42), None])) as arrow::array::ArrayRef];
let merge_input_struct_arr = StructArray::try_new(
vec![Field::new("sum[sum]", DataType::Int64, true)].into(),
merge_input,
None,
)
.unwrap();
merge_accum
.update_batch(&[Arc::new(merge_input_struct_arr)])
.unwrap();
let merge_state = merge_accum.state().unwrap();
assert_eq!(merge_state.len(), 1);
assert_eq!(merge_state[0], ScalarValue::Int64(Some(48)));
let merge_eval_res = merge_accum.evaluate().unwrap();
assert_eq!(merge_eval_res, ScalarValue::Int64(Some(48)));
}
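// A minimal standalone sketch (not part of the original test; the helper name
// is hypothetical) of the two-phase evaluation verified above: the lower
// "state" plan folds each partition's rows into a partial state, and the upper
// "merge" plan combines those partial states into the final value.
#[allow(dead_code)]
fn two_phase_sum_sketch() {
    // state phase: each partition folds its rows into a partial sum
    // (mirrors the state accumulator above turning [1, 2, NULL, 3] into 6)
    let partition_rows = [Some(1i64), Some(2), None, Some(3)];
    let partial_sum: i64 = partition_rows.iter().flatten().sum();
    assert_eq!(partial_sum, 6);
    // merge phase: partial sums from several partitions are summed again
    // (mirrors the merge accumulator above turning [6, 42, NULL] into 48)
    let partial_sums = [Some(partial_sum), Some(42i64), None];
    let merged: i64 = partial_sums.iter().flatten().sum();
    assert_eq!(merged, 48);
}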
#[tokio::test]
async fn test_avg_udaf() {
let ctx = SessionContext::new();
let avg = datafusion::functions_aggregate::average::avg_udaf();
let avg = (*avg).clone();
let original_aggr = Aggregate::try_new(
Arc::new(dummy_table_scan()),
vec![],
vec![Expr::AggregateFunction(AggregateFunction::new_udf(
Arc::new(avg.clone()),
vec![Expr::Column(Column::new_unqualified("number"))],
false,
None,
None,
None,
))],
)
.unwrap();
let res = StateMergeHelper::split_aggr_node(original_aggr).unwrap();
let state_func: Arc<AggregateUDF> = Arc::new(StateWrapper::new(avg.clone()).unwrap().into());
let expected_aggr_state_plan = LogicalPlan::Aggregate(
Aggregate::try_new(
Arc::new(dummy_table_scan()),
vec![],
vec![Expr::AggregateFunction(AggregateFunction::new_udf(
state_func,
vec![Expr::Column(Column::new_unqualified("number"))],
false,
None,
None,
None,
))],
)
.unwrap(),
);
// type coerced so the avg aggregate function can work correctly
let coerced_aggr_state_plan = TypeCoercion::new()
.analyze(expected_aggr_state_plan.clone(), &Default::default())
.unwrap();
assert_eq!(res.lower_state.as_ref(), &coerced_aggr_state_plan);
assert_eq!(
res.lower_state.schema().as_arrow(),
&arrow_schema::Schema::new(vec![Field::new(
"__avg_state(number)",
DataType::Struct(
vec![
Field::new("avg[count]", DataType::UInt64, true),
Field::new("avg[sum]", DataType::Float64, true)
]
.into()
),
true,
)])
);
let expected_merge_fn = MergeWrapper::new(
avg.clone(),
Arc::new(
AggregateExprBuilder::new(
Arc::new(avg.clone()),
vec![Arc::new(
datafusion::physical_expr::expressions::Column::new("number", 0),
)],
)
.schema(Arc::new(dummy_table_scan().schema().as_arrow().clone()))
.alias("avg(number)")
.build()
.unwrap(),
),
// coerced to float64
vec![DataType::Float64],
)
.unwrap();
let expected_merge_plan = LogicalPlan::Aggregate(
Aggregate::try_new(
Arc::new(coerced_aggr_state_plan.clone()),
vec![],
vec![Expr::AggregateFunction(AggregateFunction::new_udf(
Arc::new(expected_merge_fn.into()),
vec![Expr::Column(Column::new_unqualified("__avg_state(number)"))],
false,
None,
None,
None,
))
.alias("avg(number)")],
)
.unwrap(),
);
assert_eq!(res.upper_merge.as_ref(), &expected_merge_plan);
let phy_aggr_state_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&coerced_aggr_state_plan, &ctx.state())
.await
.unwrap();
let aggr_exec = phy_aggr_state_plan
.as_any()
.downcast_ref::<AggregateExec>()
.unwrap();
let aggr_func_expr = &aggr_exec.aggr_expr()[0];
let mut state_accum = aggr_func_expr.create_accumulator().unwrap();
// evaluate the state function
let input = Float64Array::from(vec![Some(1.), Some(2.), None, Some(3.)]);
let values = vec![Arc::new(input) as arrow::array::ArrayRef];
state_accum.update_batch(&values).unwrap();
let state = state_accum.state().unwrap();
assert_eq!(state.len(), 2);
assert_eq!(state[0], ScalarValue::UInt64(Some(3)));
assert_eq!(state[1], ScalarValue::Float64(Some(6.)));
let eval_res = state_accum.evaluate().unwrap();
let expected = Arc::new(
StructArray::try_new(
vec![
Field::new("avg[count]", DataType::UInt64, true),
Field::new("avg[sum]", DataType::Float64, true),
]
.into(),
vec![
Arc::new(UInt64Array::from(vec![Some(3)])),
Arc::new(Float64Array::from(vec![Some(6.)])),
],
None,
)
.unwrap(),
);
assert_eq!(eval_res, ScalarValue::Struct(expected));
let phy_aggr_merge_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&res.upper_merge, &ctx.state())
.await
.unwrap();
let aggr_exec = phy_aggr_merge_plan
.as_any()
.downcast_ref::<AggregateExec>()
.unwrap();
let aggr_func_expr = &aggr_exec.aggr_expr()[0];
let mut merge_accum = aggr_func_expr.create_accumulator().unwrap();
let merge_input = vec![
Arc::new(UInt64Array::from(vec![Some(3), Some(42), None])) as arrow::array::ArrayRef,
Arc::new(Float64Array::from(vec![Some(48.), Some(84.), None])),
];
let merge_input_struct_arr = StructArray::try_new(
vec![
Field::new("avg[count]", DataType::UInt64, true),
Field::new("avg[sum]", DataType::Float64, true),
]
.into(),
merge_input,
None,
)
.unwrap();
merge_accum
.update_batch(&[Arc::new(merge_input_struct_arr)])
.unwrap();
let merge_state = merge_accum.state().unwrap();
assert_eq!(merge_state.len(), 2);
assert_eq!(merge_state[0], ScalarValue::UInt64(Some(45)));
assert_eq!(merge_state[1], ScalarValue::Float64(Some(132.)));
let merge_eval_res = merge_accum.evaluate().unwrap();
// the merge function returns the average, which is 132 / 45
assert_eq!(merge_eval_res, ScalarValue::Float64(Some(132. / 45_f64)));
}
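// Worked numbers for the merge assertions above, as a standalone sketch (not
// part of the original test; the helper name is hypothetical): the avg state
// is a (count, sum) pair merged component-wise, and evaluate() then divides
// the merged sum by the merged count.
#[allow(dead_code)]
fn avg_merge_sketch() {
    // partial states fed to the merge accumulator above; the all-NULL row
    // contributes nothing
    let states = [Some((3u64, 48.0f64)), Some((42, 84.0)), None];
    let (count, sum) = states
        .iter()
        .flatten()
        .fold((0u64, 0.0f64), |(c, s), (pc, ps)| (c + pc, s + ps));
    assert_eq!((count, sum), (45, 132.0));
    // the final result is sum / count, i.e. 132 / 45
    assert!((sum / count as f64 - 132.0 / 45.0).abs() <= f64::EPSILON);
}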
/// Tests whether the UDAF state fields are correctly implemented,
/// especially for our own custom UDAFs' state fields,
/// by comparing eval results before and after the split into state/merge functions.
#[tokio::test]
async fn test_udaf_correct_eval_result() {
struct TestCase {
func: Arc<AggregateUDF>,
args: Vec<Expr>,
input_schema: SchemaRef,
input: Vec<ArrayRef>,
expected_output: Option<ScalarValue>,
expected_fn: Option<ExpectedFn>,
distinct: bool,
filter: Option<Box<Expr>>,
order_by: Option<Vec<SortExpr>>,
null_treatment: Option<NullTreatment>,
}
type ExpectedFn = fn(ArrayRef) -> bool;
let test_cases = vec![
TestCase {
func: sum_udaf(),
input_schema: Arc::new(arrow_schema::Schema::new(vec![Field::new(
"number",
DataType::Int64,
true,
)])),
args: vec![Expr::Column(Column::new_unqualified("number"))],
input: vec![Arc::new(Int64Array::from(vec![
Some(1),
Some(2),
None,
Some(3),
]))],
expected_output: Some(ScalarValue::Int64(Some(6))),
expected_fn: None,
distinct: false,
filter: None,
order_by: None,
null_treatment: None,
},
TestCase {
func: avg_udaf(),
input_schema: Arc::new(arrow_schema::Schema::new(vec![Field::new(
"number",
DataType::Int64,
true,
)])),
args: vec![Expr::Column(Column::new_unqualified("number"))],
input: vec![Arc::new(Int64Array::from(vec![
Some(1),
Some(2),
None,
Some(3),
]))],
expected_output: Some(ScalarValue::Float64(Some(2.0))),
expected_fn: None,
distinct: false,
filter: None,
order_by: None,
null_treatment: None,
},
TestCase {
func: Arc::new(CountHash::udf_impl()),
input_schema: Arc::new(arrow_schema::Schema::new(vec![Field::new(
"number",
DataType::Int64,
true,
)])),
args: vec![Expr::Column(Column::new_unqualified("number"))],
input: vec![Arc::new(Int64Array::from(vec![
Some(1),
Some(2),
None,
Some(3),
Some(3),
Some(3),
]))],
expected_output: Some(ScalarValue::Int64(Some(4))),
expected_fn: None,
distinct: false,
filter: None,
order_by: None,
null_treatment: None,
},
TestCase {
func: Arc::new(UddSketchState::state_udf_impl()),
input_schema: Arc::new(arrow_schema::Schema::new(vec![Field::new(
"number",
DataType::Float64,
true,
)])),
args: vec![
Expr::Literal(ScalarValue::Int64(Some(128))),
Expr::Literal(ScalarValue::Float64(Some(0.05))),
Expr::Column(Column::new_unqualified("number")),
],
input: vec![Arc::new(Float64Array::from(vec![
Some(1.),
Some(2.),
None,
Some(3.),
Some(3.),
Some(3.),
]))],
expected_output: None,
expected_fn: Some(|arr| {
let percent = ScalarValue::Float64(Some(0.5)).to_array().unwrap();
let percent = datatypes::vectors::Helper::try_into_vector(percent).unwrap();
let state = datatypes::vectors::Helper::try_into_vector(arr).unwrap();
let udd_calc = UddSketchCalcFunction;
let res = udd_calc
.eval(&Default::default(), &[percent, state])
.unwrap();
let binding = res.to_arrow_array();
let res_arr = binding.as_any().downcast_ref::<Float64Array>().unwrap();
assert!(res_arr.len() == 1);
assert!((res_arr.value(0) - 2.856578984907706f64).abs() <= f64::EPSILON);
true
}),
distinct: false,
filter: None,
order_by: None,
null_treatment: None,
},
TestCase {
func: Arc::new(HllState::state_udf_impl()),
input_schema: Arc::new(arrow_schema::Schema::new(vec![Field::new(
"word",
DataType::Utf8,
true,
)])),
args: vec![Expr::Column(Column::new_unqualified("word"))],
input: vec![Arc::new(StringArray::from(vec![
Some("foo"),
Some("bar"),
None,
Some("baz"),
Some("baz"),
]))],
expected_output: None,
expected_fn: Some(|arr| {
let state = datatypes::vectors::Helper::try_into_vector(arr).unwrap();
let hll_calc = HllCalcFunction;
let res = hll_calc.eval(&Default::default(), &[state]).unwrap();
let binding = res.to_arrow_array();
let res_arr = binding.as_any().downcast_ref::<UInt64Array>().unwrap();
assert!(res_arr.len() == 1);
assert_eq!(res_arr.value(0), 3);
true
}),
distinct: false,
filter: None,
order_by: None,
null_treatment: None,
},
// TODO(discord9): udd_merge/hll_merge/geo_path/quantile_aggr tests
];
let test_table_ref = TableReference::bare("TestTable");
for case in test_cases {
let ctx = SessionContext::new();
let table_provider = DummyTableProvider::new(
case.input_schema.clone(),
Some(RecordBatch::try_new(case.input_schema.clone(), case.input.clone()).unwrap()),
);
let table_source = DefaultTableSource::new(Arc::new(table_provider));
let logical_plan = LogicalPlan::TableScan(
TableScan::try_new(
test_table_ref.clone(),
Arc::new(table_source),
None,
vec![],
None,
)
.unwrap(),
);
let args = case.args;
let aggr_expr = Expr::AggregateFunction(AggregateFunction::new_udf(
case.func.clone(),
args,
case.distinct,
case.filter,
case.order_by,
case.null_treatment,
));
let aggr_plan = LogicalPlan::Aggregate(
Aggregate::try_new(Arc::new(logical_plan), vec![], vec![aggr_expr]).unwrap(),
);
// make sure the aggr_plan is type coerced
let aggr_plan = TypeCoercion::new()
.analyze(aggr_plan, &Default::default())
.unwrap();
// first eval the original aggregate function
let phy_full_aggr_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&aggr_plan, &ctx.state())
.await
.unwrap();
{
let unsplit_result = execute_phy_plan(&phy_full_aggr_plan).await.unwrap();
assert_eq!(unsplit_result.len(), 1);
let unsplit_batch = &unsplit_result[0];
assert_eq!(unsplit_batch.num_columns(), 1);
assert_eq!(unsplit_batch.num_rows(), 1);
let unsplit_col = unsplit_batch.column(0);
if let Some(expected_output) = &case.expected_output {
assert_eq!(unsplit_col.data_type(), &expected_output.data_type());
assert_eq!(unsplit_col.len(), 1);
assert_eq!(unsplit_col, &expected_output.to_array().unwrap());
}
if let Some(expected_fn) = &case.expected_fn {
assert!(expected_fn(unsplit_col.clone()));
}
}
let LogicalPlan::Aggregate(aggr_plan) = aggr_plan else {
panic!("Expected Aggregate plan");
};
let split_plan = StateMergeHelper::split_aggr_node(aggr_plan).unwrap();
let phy_upper_plan = DefaultPhysicalPlanner::default()
.create_physical_plan(&split_plan.upper_merge, &ctx.state())
.await
.unwrap();
// since the upper plan uses the lower plan as input, executing the upper plan also executes the lower plan,
// which should give the same result as the original aggregate function
{
let split_res = execute_phy_plan(&phy_upper_plan).await.unwrap();
assert_eq!(split_res.len(), 1);
let split_batch = &split_res[0];
assert_eq!(split_batch.num_columns(), 1);
assert_eq!(split_batch.num_rows(), 1);
let split_col = split_batch.column(0);
if let Some(expected_output) = &case.expected_output {
assert_eq!(split_col.data_type(), &expected_output.data_type());
assert_eq!(split_col.len(), 1);
assert_eq!(split_col, &expected_output.to_array().unwrap());
}
if let Some(expected_fn) = &case.expected_fn {
assert!(expected_fn(split_col.clone()));
}
}
}
}
async fn execute_phy_plan(
phy_plan: &Arc<dyn ExecutionPlan>,
) -> datafusion_common::Result<Vec<RecordBatch>> {
let task_ctx = Arc::new(TaskContext::default());
let mut stream = phy_plan.execute(0, task_ctx)?;
let mut batches = Vec::new();
while let Some(batch) = stream.next().await {
batches.push(batch?);
}
Ok(batches)
}

View File

@@ -12,8 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::sync::Arc;
use std::{env, fmt};
use common_query::error::Result;
use common_query::prelude::{Signature, Volatility};
@@ -47,7 +47,7 @@ impl Function for PGVersionFunction {
fn eval(&self, _func_ctx: &FunctionContext, _columns: &[VectorRef]) -> Result<VectorRef> {
let result = StringVector::from(vec![format!(
"PostgreSQL 16.3 GreptimeDB {}",
env!("CARGO_PKG_VERSION")
common_version::version()
)]);
Ok(Arc::new(result))
}

View File

@@ -12,8 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt;
use std::sync::Arc;
use std::{env, fmt};
use common_query::error::Result;
use common_query::prelude::{Signature, Volatility};
@@ -52,13 +52,13 @@ impl Function for VersionFunction {
"{}-greptimedb-{}",
std::env::var("GREPTIMEDB_MYSQL_SERVER_VERSION")
.unwrap_or_else(|_| "8.4.2".to_string()),
env!("CARGO_PKG_VERSION")
common_version::version()
)
}
Channel::Postgres => {
format!("16.3-greptimedb-{}", env!("CARGO_PKG_VERSION"))
format!("16.3-greptimedb-{}", common_version::version())
}
_ => env!("CARGO_PKG_VERSION").to_string(),
_ => common_version::version().to_string(),
};
let result = StringVector::from(vec![version]);
Ok(Arc::new(result))

View File

@@ -16,8 +16,8 @@ use std::env;
use std::io::ErrorKind;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use std::sync::{Arc, LazyLock};
use std::time::{Duration, SystemTime};
use common_runtime::error::{Error, Result};
use common_runtime::{BoxedTaskFunction, RepeatedTask, TaskFunction};
@@ -31,6 +31,9 @@ pub const TELEMETRY_URL: &str = "https://telemetry.greptimestats.com/db/otel/sta
/// The local installation uuid cache file
const UUID_FILE_NAME: &str = ".greptimedb-telemetry-uuid";
/// System start time for uptime calculation
static START_TIME: LazyLock<SystemTime> = LazyLock::new(SystemTime::now);
/// The default interval of reporting telemetry data to greptime cloud
pub static TELEMETRY_INTERVAL: Duration = Duration::from_secs(60 * 30);
/// The default connect timeout to greptime cloud.
@@ -103,6 +106,8 @@ struct StatisticData {
pub nodes: Option<i32>,
/// The local installation uuid
pub uuid: String,
/// System uptime range (e.g., "hours", "days", "weeks")
pub uptime: String,
}
#[derive(Serialize, Deserialize, Debug, Eq, PartialEq)]
@@ -171,6 +176,25 @@ fn print_anonymous_usage_data_disclaimer() {
info!("https://docs.greptime.com/reference/telemetry");
}
/// Format uptime duration into a general time range string
/// Returns privacy-friendly descriptions like "hours", "days", etc.
fn format_uptime() -> String {
let uptime_duration = START_TIME.elapsed().unwrap_or(Duration::ZERO);
let total_seconds = uptime_duration.as_secs();
if total_seconds < 86400 {
"hours".to_string()
} else if total_seconds < 604800 {
"days".to_string()
} else if total_seconds < 2629746 {
"weeks".to_string()
} else if total_seconds < 31556952 {
"months".to_string()
} else {
"years".to_string()
}
}
pub fn default_get_uuid(working_home: &Option<String>) -> Option<String> {
let temp_dir = env::temp_dir();
@@ -260,6 +284,7 @@ impl GreptimeDBTelemetry {
mode: self.statistics.get_mode(),
nodes: self.statistics.get_nodes().await,
uuid,
uptime: format_uptime(),
};
if let Some(client) = self.client.as_ref() {
@@ -294,7 +319,9 @@ mod tests {
use reqwest::{Client, Response};
use tokio::spawn;
use crate::{default_get_uuid, Collector, GreptimeDBTelemetry, Mode, StatisticData};
use crate::{
default_get_uuid, format_uptime, Collector, GreptimeDBTelemetry, Mode, StatisticData,
};
static COUNT: AtomicUsize = std::sync::atomic::AtomicUsize::new(0);
@@ -438,6 +465,7 @@ mod tests {
assert_eq!(build_info().commit, body.git_commit);
assert_eq!(Mode::Standalone, body.mode);
assert_eq!(1, body.nodes.unwrap());
assert!(!body.uptime.is_empty());
let failed_statistic = Box::new(FailedStatistic);
let failed_report = GreptimeDBTelemetry::new(
@@ -477,4 +505,18 @@ mod tests {
assert_eq!(uuid, default_get_uuid(&Some(working_home.clone())));
assert_eq!(uuid, default_get_uuid(&Some(working_home)));
}
#[test]
fn test_format_uptime() {
let uptime = format_uptime();
assert!(!uptime.is_empty());
// Should be a valid general time range (no specific numbers)
assert!(
uptime == "hours"
|| uptime == "days"
|| uptime == "weeks"
|| uptime == "months"
|| uptime == "years"
);
}
}
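For reference, the literal thresholds in format_uptime above correspond to whole calendar units: one day, one week, the average Gregorian month, and the average Gregorian year (365.2425 days), all in seconds. A small standalone sketch with hypothetical constant names (the original code uses the literals directly):

// Hypothetical named constants matching the literals in format_uptime above.
const SECS_PER_DAY: u64 = 24 * 60 * 60; // 86_400: below this, "hours"
const SECS_PER_WEEK: u64 = 7 * SECS_PER_DAY; // 604_800: below this, "days"
const SECS_PER_AVG_MONTH: u64 = 2_629_746; // below this, "weeks"
const SECS_PER_AVG_YEAR: u64 = 31_556_952; // 365.2425 days; below this, "months", otherwise "years"

fn main() {
    assert_eq!(SECS_PER_DAY, 86_400);
    assert_eq!(SECS_PER_WEEK, 604_800);
    // the average Gregorian month is exactly one twelfth of the average year
    assert_eq!(SECS_PER_AVG_YEAR / 12, SECS_PER_AVG_MONTH);
}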

View File

@@ -29,8 +29,8 @@ use snafu::{ensure, OptionExt, ResultExt};
use store_api::region_request::{SetRegionOption, UnsetRegionOption};
use table::metadata::TableId;
use table::requests::{
AddColumnRequest, AlterKind, AlterTableRequest, ModifyColumnTypeRequest, SetIndexOptions,
UnsetIndexOptions,
AddColumnRequest, AlterKind, AlterTableRequest, ModifyColumnTypeRequest, SetIndexOption,
UnsetIndexOption,
};
use crate::error::{
@@ -43,6 +43,59 @@ use crate::error::{
const LOCATION_TYPE_FIRST: i32 = LocationType::First as i32;
const LOCATION_TYPE_AFTER: i32 = LocationType::After as i32;
fn set_index_option_from_proto(set_index: api::v1::SetIndex) -> Result<SetIndexOption> {
let options = set_index.options.context(MissingAlterIndexOptionSnafu)?;
Ok(match options {
api::v1::set_index::Options::Fulltext(f) => SetIndexOption::Fulltext {
column_name: f.column_name.clone(),
options: FulltextOptions::new(
f.enable,
as_fulltext_option_analyzer(
Analyzer::try_from(f.analyzer).context(InvalidSetFulltextOptionRequestSnafu)?,
),
f.case_sensitive,
as_fulltext_option_backend(
PbFulltextBackend::try_from(f.backend)
.context(InvalidSetFulltextOptionRequestSnafu)?,
),
f.granularity as u32,
f.false_positive_rate,
)
.context(InvalidIndexOptionSnafu)?,
},
api::v1::set_index::Options::Inverted(i) => SetIndexOption::Inverted {
column_name: i.column_name,
},
api::v1::set_index::Options::Skipping(s) => SetIndexOption::Skipping {
column_name: s.column_name,
options: SkippingIndexOptions::new(
s.granularity as u32,
s.false_positive_rate,
as_skipping_index_type(
PbSkippingIndexType::try_from(s.skipping_index_type)
.context(InvalidSetSkippingIndexOptionRequestSnafu)?,
),
)
.context(InvalidIndexOptionSnafu)?,
},
})
}
fn unset_index_option_from_proto(unset_index: api::v1::UnsetIndex) -> Result<UnsetIndexOption> {
let options = unset_index.options.context(MissingAlterIndexOptionSnafu)?;
Ok(match options {
api::v1::unset_index::Options::Fulltext(f) => UnsetIndexOption::Fulltext {
column_name: f.column_name,
},
api::v1::unset_index::Options::Inverted(i) => UnsetIndexOption::Inverted {
column_name: i.column_name,
},
api::v1::unset_index::Options::Skipping(s) => UnsetIndexOption::Skipping {
column_name: s.column_name,
},
})
}
/// Convert an [`AlterTableExpr`] to an [`AlterTableRequest`]
pub fn alter_expr_to_request(table_id: TableId, expr: AlterTableExpr) -> Result<AlterTableRequest> {
let catalog_name = expr.catalog_name;
@@ -121,70 +174,34 @@ pub fn alter_expr_to_request(table_id: TableId, expr: AlterTableExpr) -> Result<
.context(InvalidUnsetTableOptionRequestSnafu)?,
}
}
Kind::SetIndex(o) => match o.options {
Some(opt) => match opt {
api::v1::set_index::Options::Fulltext(f) => AlterKind::SetIndex {
options: SetIndexOptions::Fulltext {
column_name: f.column_name.clone(),
options: FulltextOptions::new(
f.enable,
as_fulltext_option_analyzer(
Analyzer::try_from(f.analyzer)
.context(InvalidSetFulltextOptionRequestSnafu)?,
),
f.case_sensitive,
as_fulltext_option_backend(
PbFulltextBackend::try_from(f.backend)
.context(InvalidSetFulltextOptionRequestSnafu)?,
),
f.granularity as u32,
f.false_positive_rate,
)
.context(InvalidIndexOptionSnafu)?,
},
},
api::v1::set_index::Options::Inverted(i) => AlterKind::SetIndex {
options: SetIndexOptions::Inverted {
column_name: i.column_name,
},
},
api::v1::set_index::Options::Skipping(s) => AlterKind::SetIndex {
options: SetIndexOptions::Skipping {
column_name: s.column_name,
options: SkippingIndexOptions::new(
s.granularity as u32,
s.false_positive_rate,
as_skipping_index_type(
PbSkippingIndexType::try_from(s.skipping_index_type)
.context(InvalidSetSkippingIndexOptionRequestSnafu)?,
),
)
.context(InvalidIndexOptionSnafu)?,
},
},
},
None => return MissingAlterIndexOptionSnafu.fail(),
},
Kind::UnsetIndex(o) => match o.options {
Some(opt) => match opt {
api::v1::unset_index::Options::Fulltext(f) => AlterKind::UnsetIndex {
options: UnsetIndexOptions::Fulltext {
column_name: f.column_name,
},
},
api::v1::unset_index::Options::Inverted(i) => AlterKind::UnsetIndex {
options: UnsetIndexOptions::Inverted {
column_name: i.column_name,
},
},
api::v1::unset_index::Options::Skipping(s) => AlterKind::UnsetIndex {
options: UnsetIndexOptions::Skipping {
column_name: s.column_name,
},
},
},
None => return MissingAlterIndexOptionSnafu.fail(),
},
Kind::SetIndex(o) => {
let option = set_index_option_from_proto(o)?;
AlterKind::SetIndexes {
options: vec![option],
}
}
Kind::UnsetIndex(o) => {
let option = unset_index_option_from_proto(o)?;
AlterKind::UnsetIndexes {
options: vec![option],
}
}
Kind::SetIndexes(o) => {
let options = o
.set_indexes
.into_iter()
.map(set_index_option_from_proto)
.collect::<Result<Vec<_>>>()?;
AlterKind::SetIndexes { options }
}
Kind::UnsetIndexes(o) => {
let options = o
.unset_indexes
.into_iter()
.map(unset_index_option_from_proto)
.collect::<Result<Vec<_>>>()?;
AlterKind::UnsetIndexes { options }
}
Kind::DropDefaults(o) => {
let names = o
.drop_defaults

View File

@@ -31,6 +31,7 @@ tokio.workspace = true
tokio-util.workspace = true
tonic.workspace = true
tower.workspace = true
vec1 = "1.12"
[dev-dependencies]
criterion = "0.4"

View File

@@ -84,9 +84,12 @@ fn prepare_random_record_batch(
fn prepare_flight_data(num_rows: usize) -> (FlightData, FlightData) {
let schema = schema();
let mut encoder = FlightEncoder::default();
let schema_data = encoder.encode(FlightMessage::Schema(schema.clone()));
let schema_data = encoder.encode_schema(schema.as_ref());
let rb = prepare_random_record_batch(schema, num_rows);
let rb_data = encoder.encode(FlightMessage::RecordBatch(rb));
let [rb_data] = encoder
.encode(FlightMessage::RecordBatch(rb))
.try_into()
.unwrap();
(schema_data, rb_data)
}
@@ -96,7 +99,7 @@ fn decode_flight_data_from_protobuf(schema: &Bytes, payload: &Bytes) -> DfRecord
let mut decoder = FlightDecoder::default();
let _schema = decoder.try_decode(&schema).unwrap();
let message = decoder.try_decode(&payload).unwrap();
let FlightMessage::RecordBatch(batch) = message else {
let Some(FlightMessage::RecordBatch(batch)) = message else {
unreachable!("unexpected message");
};
batch

View File

@@ -23,6 +23,7 @@ use arrow_flight::{FlightData, SchemaAsIpc};
use common_base::bytes::Bytes;
use common_recordbatch::DfRecordBatch;
use datatypes::arrow;
use datatypes::arrow::array::ArrayRef;
use datatypes::arrow::buffer::Buffer;
use datatypes::arrow::datatypes::{Schema as ArrowSchema, SchemaRef};
use datatypes::arrow::error::ArrowError;
@@ -31,6 +32,7 @@ use flatbuffers::FlatBufferBuilder;
use prost::bytes::Bytes as ProstBytes;
use prost::Message;
use snafu::{OptionExt, ResultExt};
use vec1::{vec1, Vec1};
use crate::error;
use crate::error::{DecodeFlightDataSnafu, InvalidFlightDataSnafu, Result};
@@ -77,9 +79,19 @@ impl FlightEncoder {
}
}
pub fn encode(&mut self, flight_message: FlightMessage) -> FlightData {
/// Encode the Arrow schema to [FlightData].
pub fn encode_schema(&self, schema: &ArrowSchema) -> FlightData {
SchemaAsIpc::new(schema, &self.write_options).into()
}
/// Encode the [FlightMessage] to a list (at least one element) of [FlightData]s.
///
/// Normally only when the [FlightMessage] is an Arrow [RecordBatch] with dictionary arrays
/// will the encoder produce more than one [FlightData]. Other types of [FlightMessage] should
/// be encoded to exactly one [FlightData].
pub fn encode(&mut self, flight_message: FlightMessage) -> Vec1<FlightData> {
match flight_message {
FlightMessage::Schema(schema) => SchemaAsIpc::new(&schema, &self.write_options).into(),
FlightMessage::Schema(schema) => vec1![self.encode_schema(schema.as_ref())],
FlightMessage::RecordBatch(record_batch) => {
let (encoded_dictionaries, encoded_batch) = self
.data_gen
@@ -90,14 +102,10 @@ impl FlightEncoder {
)
.expect("DictionaryTracker configured above to not fail on replacement");
// TODO(LFC): Handle dictionary as FlightData here, when we supported Arrow's Dictionary DataType.
// Currently we don't have a datatype corresponding to Arrow's Dictionary DataType,
// so there won't be any "dictionaries" here. Assert to be sure about it, and
// perform a "testing guard" in case we forgot to handle the possible "dictionaries"
// here in the future.
debug_assert_eq!(encoded_dictionaries.len(), 0);
encoded_batch.into()
Vec1::from_vec_push(
encoded_dictionaries.into_iter().map(Into::into).collect(),
encoded_batch.into(),
)
}
FlightMessage::AffectedRows(rows) => {
let metadata = FlightMetadata {
@@ -105,12 +113,12 @@ impl FlightEncoder {
metrics: None,
}
.encode_to_vec();
FlightData {
vec1![FlightData {
flight_descriptor: None,
data_header: build_none_flight_msg().into(),
app_metadata: metadata.into(),
data_body: ProstBytes::default(),
}
}]
}
FlightMessage::Metrics(s) => {
let metadata = FlightMetadata {
@@ -120,12 +128,12 @@ impl FlightEncoder {
}),
}
.encode_to_vec();
FlightData {
vec1![FlightData {
flight_descriptor: None,
data_header: build_none_flight_msg().into(),
app_metadata: metadata.into(),
data_body: ProstBytes::default(),
}
}]
}
}
}
@@ -135,6 +143,7 @@ impl FlightEncoder {
pub struct FlightDecoder {
schema: Option<SchemaRef>,
schema_bytes: Option<bytes::Bytes>,
dictionaries_by_id: HashMap<i64, ArrayRef>,
}
impl FlightDecoder {
@@ -145,6 +154,7 @@ impl FlightDecoder {
Ok(Self {
schema: Some(Arc::new(arrow_schema)),
schema_bytes: Some(schema_bytes.clone()),
dictionaries_by_id: HashMap::new(),
})
}
@@ -186,7 +196,13 @@ impl FlightDecoder {
Ok(result)
}
pub fn try_decode(&mut self, flight_data: &FlightData) -> Result<FlightMessage> {
/// Try to decode the [FlightData] to a [FlightMessage].
///
/// If the [FlightData] is of type `DictionaryBatch` (produced while encoding an Arrow
/// [RecordBatch] with dictionary arrays), the decoder will not return any [FlightMessage]s.
/// Instead, it will update its internal dictionary cache. Other types of [FlightData] will
/// be decoded to exactly one [FlightMessage].
pub fn try_decode(&mut self, flight_data: &FlightData) -> Result<Option<FlightMessage>> {
let message = root_as_message(&flight_data.data_header).map_err(|e| {
InvalidFlightDataSnafu {
reason: e.to_string(),
@@ -198,12 +214,12 @@ impl FlightDecoder {
let metadata = FlightMetadata::decode(flight_data.app_metadata.clone())
.context(DecodeFlightDataSnafu)?;
if let Some(AffectedRows { value }) = metadata.affected_rows {
return Ok(FlightMessage::AffectedRows(value as _));
return Ok(Some(FlightMessage::AffectedRows(value as _)));
}
if let Some(Metrics { metrics }) = metadata.metrics {
return Ok(FlightMessage::Metrics(
return Ok(Some(FlightMessage::Metrics(
String::from_utf8_lossy(&metrics).to_string(),
));
)));
}
InvalidFlightDataSnafu {
reason: "Expecting FlightMetadata have some meaningful content.",
@@ -219,21 +235,46 @@ impl FlightDecoder {
})?);
self.schema = Some(arrow_schema.clone());
self.schema_bytes = Some(flight_data.data_header.clone());
Ok(FlightMessage::Schema(arrow_schema))
Ok(Some(FlightMessage::Schema(arrow_schema)))
}
MessageHeader::RecordBatch => {
let schema = self.schema.clone().context(InvalidFlightDataSnafu {
reason: "Should have decoded schema first!",
})?;
let arrow_batch =
flight_data_to_arrow_batch(flight_data, schema.clone(), &HashMap::new())
.map_err(|e| {
InvalidFlightDataSnafu {
reason: e.to_string(),
}
.build()
let arrow_batch = flight_data_to_arrow_batch(
flight_data,
schema.clone(),
&self.dictionaries_by_id,
)
.map_err(|e| {
InvalidFlightDataSnafu {
reason: e.to_string(),
}
.build()
})?;
Ok(Some(FlightMessage::RecordBatch(arrow_batch)))
}
MessageHeader::DictionaryBatch => {
let dictionary_batch =
message
.header_as_dictionary_batch()
.context(InvalidFlightDataSnafu {
reason: "could not get dictionary batch from DictionaryBatch message",
})?;
Ok(FlightMessage::RecordBatch(arrow_batch))
let schema = self.schema.as_ref().context(InvalidFlightDataSnafu {
reason: "schema message is not present previously",
})?;
reader::read_dictionary(
&flight_data.data_body.clone().into(),
dictionary_batch,
schema,
&mut self.dictionaries_by_id,
&message.version(),
)
.context(error::ArrowSnafu)?;
Ok(None)
}
other => {
let name = other.variant_name().unwrap_or("UNKNOWN");
@@ -305,14 +346,16 @@ fn build_none_flight_msg() -> Bytes {
#[cfg(test)]
mod test {
use arrow_flight::utils::batches_to_flight_data;
use datatypes::arrow::array::Int32Array;
use datatypes::arrow::array::{
DictionaryArray, Int32Array, StringArray, UInt32Array, UInt8Array,
};
use datatypes::arrow::datatypes::{DataType, Field, Schema};
use super::*;
use crate::Error;
#[test]
fn test_try_decode() {
fn test_try_decode() -> Result<()> {
let schema = Arc::new(ArrowSchema::new(vec![Field::new(
"n",
DataType::Int32,
@@ -347,7 +390,7 @@ mod test {
.to_string()
.contains("Should have decoded schema first!"));
let message = decoder.try_decode(d1).unwrap();
let message = decoder.try_decode(d1)?.unwrap();
assert!(matches!(message, FlightMessage::Schema(_)));
let FlightMessage::Schema(decoded_schema) = message else {
unreachable!()
@@ -356,19 +399,20 @@ mod test {
let _ = decoder.schema.as_ref().unwrap();
let message = decoder.try_decode(d2).unwrap();
let message = decoder.try_decode(d2)?.unwrap();
assert!(matches!(message, FlightMessage::RecordBatch(_)));
let FlightMessage::RecordBatch(actual_batch) = message else {
unreachable!()
};
assert_eq!(actual_batch, batch1);
let message = decoder.try_decode(d3).unwrap();
let message = decoder.try_decode(d3)?.unwrap();
assert!(matches!(message, FlightMessage::RecordBatch(_)));
let FlightMessage::RecordBatch(actual_batch) = message else {
unreachable!()
};
assert_eq!(actual_batch, batch2);
Ok(())
}
#[test]
@@ -407,4 +451,86 @@ mod test {
let actual = flight_messages_to_recordbatches(vec![m1, m2, m3]).unwrap();
assert_eq!(actual, recordbatches);
}
#[test]
fn test_flight_encode_decode_with_dictionary_array() -> Result<()> {
let schema = Arc::new(Schema::new(vec![
Field::new("i", DataType::UInt8, true),
Field::new_dictionary("s", DataType::UInt32, DataType::Utf8, true),
]));
let batch1 = DfRecordBatch::try_new(
schema.clone(),
vec![
Arc::new(UInt8Array::from_iter_values(vec![1, 2, 3])) as _,
Arc::new(DictionaryArray::new(
UInt32Array::from_value(0, 3),
Arc::new(StringArray::from_iter_values(["x"])),
)) as _,
],
)
.unwrap();
let batch2 = DfRecordBatch::try_new(
schema.clone(),
vec![
Arc::new(UInt8Array::from_iter_values(vec![4, 5, 6, 7, 8])) as _,
Arc::new(DictionaryArray::new(
UInt32Array::from_iter_values([0, 1, 2, 2, 3]),
Arc::new(StringArray::from_iter_values(["h", "e", "l", "o"])),
)) as _,
],
)
.unwrap();
let message_1 = FlightMessage::Schema(schema.clone());
let message_2 = FlightMessage::RecordBatch(batch1);
let message_3 = FlightMessage::RecordBatch(batch2);
let mut encoder = FlightEncoder::default();
let encoded_1 = encoder.encode(message_1);
let encoded_2 = encoder.encode(message_2);
let encoded_3 = encoder.encode(message_3);
// message 1 is Arrow Schema, should be encoded to one FlightData:
assert_eq!(encoded_1.len(), 1);
// messages 2 and 3 are Arrow RecordBatches with dictionary arrays, so each is encoded to
// multiple FlightData:
assert_eq!(encoded_2.len(), 2);
assert_eq!(encoded_3.len(), 2);
let mut decoder = FlightDecoder::default();
let decoded_1 = decoder.try_decode(encoded_1.first())?;
let Some(FlightMessage::Schema(actual_schema)) = decoded_1 else {
unreachable!()
};
assert_eq!(actual_schema, schema);
let decoded_2 = decoder.try_decode(&encoded_2[0])?;
// expected to be a dictionary batch message, decoder should return none:
assert!(decoded_2.is_none());
let Some(FlightMessage::RecordBatch(decoded_2)) = decoder.try_decode(&encoded_2[1])? else {
unreachable!()
};
let decoded_3 = decoder.try_decode(&encoded_3[0])?;
// expected to be a dictionary batch message, decoder should return none:
assert!(decoded_3.is_none());
let Some(FlightMessage::RecordBatch(decoded_3)) = decoder.try_decode(&encoded_3[1])? else {
unreachable!()
};
let actual = arrow::util::pretty::pretty_format_batches(&[decoded_2, decoded_3])
.unwrap()
.to_string();
let expected = r"
+---+---+
| i | s |
+---+---+
| 1 | x |
| 2 | x |
| 3 | x |
| 4 | h |
| 5 | e |
| 6 | l |
| 7 | l |
| 8 | o |
+---+---+";
assert_eq!(actual, expected.trim());
Ok(())
}
}
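Under the new contract shown above, where encode returns at least one FlightData and try_decode returns None for dictionary batches, a typical caller drains every frame of each encoded message and skips the None results. A minimal sketch follows, assuming the types from this file are in scope; the helper name is hypothetical and Result is this crate's alias:

// Hypothetical helper illustrating the calling pattern; not part of the diff.
fn roundtrip(
    encoder: &mut FlightEncoder,
    decoder: &mut FlightDecoder,
    messages: Vec<FlightMessage>,
) -> Result<Vec<FlightMessage>> {
    let mut decoded = Vec::new();
    for message in messages {
        // One message may encode to several frames: dictionary batches first,
        // then the record batch itself.
        let frames = encoder.encode(message);
        for frame in frames.iter() {
            // Dictionary frames only update the decoder's internal cache and
            // yield no message, hence the Option.
            if let Some(msg) = decoder.try_decode(frame)? {
                decoded.push(msg);
            }
        }
    }
    Ok(decoded)
}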

View File

@@ -69,6 +69,7 @@ table = { workspace = true, features = ["testing"] }
tokio.workspace = true
tokio-postgres = { workspace = true, optional = true }
tonic.workspace = true
tracing.workspace = true
typetag.workspace = true
[dev-dependencies]

View File

@@ -15,6 +15,7 @@
use std::collections::HashMap;
use std::sync::Arc;
use common_telemetry::info;
use futures::future::BoxFuture;
use moka::future::Cache;
use moka::ops::compute::Op;
@@ -89,6 +90,12 @@ fn init_factory(table_flow_manager: TableFlowManagerRef) -> Initializer<TableId,
// we have a corresponding cache invalidation mechanism to invalidate `(Key, EmptyHashSet)`.
.map(Arc::new)
.map(Some)
.inspect(|set| {
info!(
"Initialized table_flownode cache for table_id: {}, set: {:?}",
table_id, set
);
})
})
})
}
@@ -167,6 +174,13 @@ fn invalidator<'a>(
match ident {
CacheIdent::CreateFlow(create_flow) => handle_create_flow(cache, create_flow).await,
CacheIdent::DropFlow(drop_flow) => handle_drop_flow(cache, drop_flow).await,
CacheIdent::FlowNodeAddressChange(node_id) => {
info!(
"Invalidate flow node cache for node_id in table_flownode: {}",
node_id
);
cache.invalidate_all();
}
_ => {}
}
Ok(())
@@ -174,7 +188,10 @@ fn invalidator<'a>(
}
fn filter(ident: &CacheIdent) -> bool {
matches!(ident, CacheIdent::CreateFlow(_) | CacheIdent::DropFlow(_))
matches!(
ident,
CacheIdent::CreateFlow(_) | CacheIdent::DropFlow(_) | CacheIdent::FlowNodeAddressChange(_)
)
}
#[cfg(test)]

View File

@@ -22,6 +22,7 @@ use crate::key::flow::flow_name::FlowNameKey;
use crate::key::flow::flow_route::FlowRouteKey;
use crate::key::flow::flownode_flow::FlownodeFlowKey;
use crate::key::flow::table_flow::TableFlowKey;
use crate::key::node_address::NodeAddressKey;
use crate::key::schema_name::SchemaNameKey;
use crate::key::table_info::TableInfoKey;
use crate::key::table_name::TableNameKey;
@@ -53,6 +54,10 @@ pub struct Context {
#[async_trait::async_trait]
pub trait CacheInvalidator: Send + Sync {
async fn invalidate(&self, ctx: &Context, caches: &[CacheIdent]) -> Result<()>;
fn name(&self) -> &'static str {
std::any::type_name::<Self>()
}
}
pub type CacheInvalidatorRef = Arc<dyn CacheInvalidator>;
@@ -137,6 +142,13 @@ where
let key = FlowInfoKey::new(*flow_id);
self.invalidate_key(&key.to_bytes()).await;
}
CacheIdent::FlowNodeAddressChange(node_id) => {
// other caches don't need to be invalidated
// since this is only for a flownode address change, not an id change
common_telemetry::info!("Invalidate flow node cache for node_id: {}", node_id);
let key = NodeAddressKey::with_flownode(*node_id);
self.invalidate_key(&key.to_bytes()).await;
}
}
}
Ok(())

View File

@@ -93,6 +93,8 @@ pub struct RegionStat {
pub manifest_size: u64,
/// The size of the SST data files in bytes.
pub sst_size: u64,
/// The number of SST data files.
pub sst_num: u64,
/// The size of the SST index files in bytes.
pub index_size: u64,
/// The manifest info of the region.
@@ -173,8 +175,8 @@ impl RegionStat {
std::mem::size_of::<RegionId>() +
// rcus, wcus, approximate_bytes, num_rows
std::mem::size_of::<i64>() * 4 +
// memtable_size, manifest_size, sst_size, index_size
std::mem::size_of::<u64>() * 4 +
// memtable_size, manifest_size, sst_size, sst_num, index_size
std::mem::size_of::<u64>() * 5 +
// engine
std::mem::size_of::<String>() + self.engine.capacity() +
// region_manifest
@@ -275,6 +277,7 @@ impl From<&api::v1::meta::RegionStat> for RegionStat {
memtable_size: region_stat.memtable_size,
manifest_size: region_stat.manifest_size,
sst_size: region_stat.sst_size,
sst_num: region_stat.sst_num,
index_size: region_stat.index_size,
region_manifest: region_stat.manifest.into(),
data_topic_latest_entry_id: region_stat.data_topic_latest_entry_id,

View File

@@ -50,7 +50,6 @@ pub mod drop_flow;
pub mod drop_table;
pub mod drop_view;
pub mod flow_meta;
mod physical_table_metadata;
pub mod table_meta;
#[cfg(any(test, feature = "testing"))]
pub mod test_util;

View File

@@ -12,20 +12,17 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod check;
mod metadata;
mod region_request;
mod table_cache_keys;
mod executor;
mod update_metadata;
mod validator;
use api::region::RegionResponse;
use async_trait::async_trait;
use common_catalog::format_full_table_name;
use common_procedure::error::{FromJsonSnafu, Result as ProcedureResult, ToJsonSnafu};
use common_procedure::{Context, LockKey, Procedure, Status};
use common_telemetry::{error, info, warn};
use futures_util::future;
pub use region_request::make_alter_region_request;
use common_telemetry::{debug, error, info, warn};
pub use executor::make_alter_region_request;
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use store_api::metadata::ColumnMetadata;
@@ -33,10 +30,12 @@ use store_api::metric_engine_consts::ALTER_PHYSICAL_EXTENSION_KEY;
use strum::AsRefStr;
use table::metadata::TableId;
use crate::ddl::utils::{
add_peer_context_if_needed, extract_column_metadatas, map_to_procedure_error,
sync_follower_regions,
use crate::cache_invalidator::Context as CacheContext;
use crate::ddl::alter_logical_tables::executor::AlterLogicalTablesExecutor;
use crate::ddl::alter_logical_tables::validator::{
retain_unskipped, AlterLogicalTableValidator, ValidatorResult,
};
use crate::ddl::utils::{extract_column_metadatas, map_to_procedure_error, sync_follower_regions};
use crate::ddl::DdlContext;
use crate::error::Result;
use crate::instruction::CacheIdent;
@@ -46,13 +45,38 @@ use crate::key::DeserializedValueWithBytes;
use crate::lock_key::{CatalogLock, SchemaLock, TableLock};
use crate::metrics;
use crate::rpc::ddl::AlterTableTask;
use crate::rpc::router::{find_leaders, RegionRoute};
use crate::rpc::router::RegionRoute;
pub struct AlterLogicalTablesProcedure {
pub context: DdlContext,
pub data: AlterTablesData,
}
/// Builds the validator from the [`AlterTablesData`].
fn build_validator_from_alter_table_data<'a>(
data: &'a AlterTablesData,
) -> AlterLogicalTableValidator<'a> {
let physical_table_id = data.physical_table_id;
let alters = data
.tasks
.iter()
.map(|task| &task.alter_table)
.collect::<Vec<_>>();
AlterLogicalTableValidator::new(physical_table_id, alters)
}
/// Builds the executor from the [`AlterTablesData`].
fn build_executor_from_alter_expr<'a>(data: &'a AlterTablesData) -> AlterLogicalTablesExecutor<'a> {
debug_assert_eq!(data.tasks.len(), data.table_info_values.len());
let alters = data
.tasks
.iter()
.zip(data.table_info_values.iter())
.map(|(task, table_info)| (table_info.table_info.ident.table_id, &task.alter_table))
.collect::<Vec<_>>();
AlterLogicalTablesExecutor::new(alters)
}
impl AlterLogicalTablesProcedure {
pub const TYPE_NAME: &'static str = "metasrv-procedure::AlterLogicalTables";
@@ -82,35 +106,44 @@ impl AlterLogicalTablesProcedure {
}
pub(crate) async fn on_prepare(&mut self) -> Result<Status> {
// Checks all the tasks
self.check_input_tasks()?;
// Fills the table info values
self.fill_table_info_values().await?;
// Checks the physical table, must after [fill_table_info_values]
self.check_physical_table().await?;
// Fills the physical table info
self.fill_physical_table_info().await?;
// Filter the finished tasks
let finished_tasks = self.check_finished_tasks()?;
let already_finished_count = finished_tasks
.iter()
.map(|x| if *x { 1 } else { 0 })
.sum::<usize>();
let apply_tasks_count = self.data.tasks.len();
if already_finished_count == apply_tasks_count {
let validator = build_validator_from_alter_table_data(&self.data);
let ValidatorResult {
num_skipped,
skip_alter,
table_info_values,
physical_table_info,
physical_table_route,
} = validator
.validate(&self.context.table_metadata_manager)
.await?;
let num_tasks = self.data.tasks.len();
if num_skipped == num_tasks {
info!("All the alter tasks are finished, will skip the procedure.");
let cache_ident_keys = AlterLogicalTablesExecutor::build_cache_ident_keys(
&physical_table_info,
&table_info_values
.iter()
.map(|v| v.get_inner_ref())
.collect::<Vec<_>>(),
);
self.data.table_cache_keys_to_invalidate = cache_ident_keys;
// Re-invalidate the table cache
self.data.state = AlterTablesState::InvalidateTableCache;
return Ok(Status::executing(true));
} else if already_finished_count > 0 {
} else if num_skipped > 0 {
info!(
"There are {} alter tasks, {} of them were already finished.",
apply_tasks_count, already_finished_count
num_tasks, num_skipped
);
}
self.filter_task(&finished_tasks)?;
// Next state
// Updates the procedure state.
retain_unskipped(&mut self.data.tasks, &skip_alter);
self.data.physical_table_info = Some(physical_table_info);
self.data.physical_table_route = Some(physical_table_route);
self.data.table_info_values = table_info_values;
debug_assert_eq!(self.data.tasks.len(), self.data.table_info_values.len());
self.data.state = AlterTablesState::SubmitAlterRegionRequests;
Ok(Status::executing(true))
}
@@ -118,25 +151,13 @@ impl AlterLogicalTablesProcedure {
pub(crate) async fn on_submit_alter_region_requests(&mut self) -> Result<Status> {
// Safety: we have checked the state in on_prepare
let physical_table_route = &self.data.physical_table_route.as_ref().unwrap();
let leaders = find_leaders(&physical_table_route.region_routes);
let mut alter_region_tasks = Vec::with_capacity(leaders.len());
for peer in leaders {
let requester = self.context.node_manager.datanode(&peer).await;
let request = self.make_request(&peer, &physical_table_route.region_routes)?;
alter_region_tasks.push(async move {
requester
.handle(request)
.await
.map_err(add_peer_context_if_needed(peer))
});
}
let mut results = future::join_all(alter_region_tasks)
.await
.into_iter()
.collect::<Result<Vec<_>>>()?;
let executor = build_executor_from_alter_expr(&self.data);
let mut results = executor
.on_alter_regions(
&self.context.node_manager,
&physical_table_route.region_routes,
)
.await?;
if let Some(column_metadatas) =
extract_column_metadatas(&mut results, ALTER_PHYSICAL_EXTENSION_KEY)?
@@ -177,7 +198,18 @@ impl AlterLogicalTablesProcedure {
self.update_physical_table_metadata().await?;
self.update_logical_tables_metadata().await?;
self.data.build_cache_keys_to_invalidate();
let logical_table_info_values = self
.data
.table_info_values
.iter()
.map(|v| v.get_inner_ref())
.collect::<Vec<_>>();
let cache_ident_keys = AlterLogicalTablesExecutor::build_cache_ident_keys(
self.data.physical_table_info.as_ref().unwrap(),
&logical_table_info_values,
);
self.data.table_cache_keys_to_invalidate = cache_ident_keys;
self.data.clear_metadata_fields();
self.data.state = AlterTablesState::InvalidateTableCache;
@@ -187,9 +219,16 @@ impl AlterLogicalTablesProcedure {
pub(crate) async fn on_invalidate_table_cache(&mut self) -> Result<Status> {
let to_invalidate = &self.data.table_cache_keys_to_invalidate;
let ctx = CacheContext {
subject: Some(format!(
"Invalidate table cache by altering logical tables, physical_table_id: {}",
self.data.physical_table_id,
)),
};
self.context
.cache_invalidator
.invalidate(&Default::default(), to_invalidate)
.invalidate(&ctx, to_invalidate)
.await?;
Ok(Status::done())
}
@@ -209,6 +248,10 @@ impl Procedure for AlterLogicalTablesProcedure {
let _timer = metrics::METRIC_META_PROCEDURE_ALTER_TABLE
.with_label_values(&[step])
.start_timer();
debug!(
"Executing alter logical tables procedure, state: {:?}",
state
);
match state {
AlterTablesState::Prepare => self.on_prepare().await,

View File

@@ -1,136 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashSet;
use api::v1::alter_table_expr::Kind;
use snafu::{ensure, OptionExt};
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::error::{AlterLogicalTablesInvalidArgumentsSnafu, Result};
use crate::key::table_info::TableInfoValue;
use crate::key::table_route::TableRouteValue;
use crate::rpc::ddl::AlterTableTask;
impl AlterLogicalTablesProcedure {
pub(crate) fn check_input_tasks(&self) -> Result<()> {
self.check_schema()?;
self.check_alter_kind()?;
Ok(())
}
pub(crate) async fn check_physical_table(&self) -> Result<()> {
let table_route_manager = self.context.table_metadata_manager.table_route_manager();
let table_ids = self
.data
.table_info_values
.iter()
.map(|v| v.table_info.ident.table_id)
.collect::<Vec<_>>();
let table_routes = table_route_manager
.table_route_storage()
.batch_get(&table_ids)
.await?;
let physical_table_id = self.data.physical_table_id;
let is_same_physical_table = table_routes.iter().all(|r| {
if let Some(TableRouteValue::Logical(r)) = r {
r.physical_table_id() == physical_table_id
} else {
false
}
});
ensure!(
is_same_physical_table,
AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "All the tasks should have the same physical table id"
}
);
Ok(())
}
pub(crate) fn check_finished_tasks(&self) -> Result<Vec<bool>> {
let task = &self.data.tasks;
let table_info_values = &self.data.table_info_values;
Ok(task
.iter()
.zip(table_info_values.iter())
.map(|(task, table)| Self::check_finished_task(task, table))
.collect())
}
// Checks if the schemas of the tasks are the same
fn check_schema(&self) -> Result<()> {
let is_same_schema = self.data.tasks.windows(2).all(|pair| {
pair[0].alter_table.catalog_name == pair[1].alter_table.catalog_name
&& pair[0].alter_table.schema_name == pair[1].alter_table.schema_name
});
ensure!(
is_same_schema,
AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Schemas of the tasks are not the same"
}
);
Ok(())
}
fn check_alter_kind(&self) -> Result<()> {
for task in &self.data.tasks {
let kind = task.alter_table.kind.as_ref().context(
AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Alter kind is missing",
},
)?;
let Kind::AddColumns(_) = kind else {
return AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Only support add columns operation",
}
.fail();
};
}
Ok(())
}
fn check_finished_task(task: &AlterTableTask, table: &TableInfoValue) -> bool {
let columns = table
.table_info
.meta
.schema
.column_schemas
.iter()
.map(|c| &c.name)
.collect::<HashSet<_>>();
let Some(kind) = task.alter_table.kind.as_ref() else {
return true; // Never get here since we have checked it in `check_alter_kind`
};
let Kind::AddColumns(add_columns) = kind else {
return true; // Never get here since we have checked it in `check_alter_kind`
};
// We only check that all columns have been finished. That is to say,
// if one part is finished but another part is not, it will be considered
// unfinished.
add_columns
.add_columns
.iter()
.map(|add_column| add_column.column_def.as_ref().map(|c| &c.name))
.all(|column| column.map(|c| columns.contains(c)).unwrap_or(false))
}
}

View File

@@ -0,0 +1,216 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use api::region::RegionResponse;
use api::v1::alter_table_expr::Kind;
use api::v1::region::{
alter_request, region_request, AddColumn, AddColumns, AlterRequest, AlterRequests,
RegionColumnDef, RegionRequest, RegionRequestHeader,
};
use api::v1::{self, AlterTableExpr};
use common_telemetry::tracing_context::TracingContext;
use common_telemetry::{debug, warn};
use futures::future;
use store_api::metadata::ColumnMetadata;
use store_api::storage::{RegionId, RegionNumber, TableId};
use crate::ddl::utils::{add_peer_context_if_needed, raw_table_info};
use crate::error::Result;
use crate::instruction::CacheIdent;
use crate::key::table_info::TableInfoValue;
use crate::key::{DeserializedValueWithBytes, RegionDistribution, TableMetadataManagerRef};
use crate::node_manager::NodeManagerRef;
use crate::rpc::router::{find_leaders, region_distribution, RegionRoute};
/// [AlterLogicalTablesExecutor] performs:
/// - Alters logical regions on the datanodes.
/// - Updates table metadata for alter table operation.
pub struct AlterLogicalTablesExecutor<'a> {
/// The alter table expressions.
///
/// The first element is the logical table id, the second element is the alter table expression.
alters: Vec<(TableId, &'a AlterTableExpr)>,
}
impl<'a> AlterLogicalTablesExecutor<'a> {
pub fn new(alters: Vec<(TableId, &'a AlterTableExpr)>) -> Self {
Self { alters }
}
/// Alters logical regions on the datanodes.
pub(crate) async fn on_alter_regions(
&self,
node_manager: &NodeManagerRef,
region_routes: &[RegionRoute],
) -> Result<Vec<RegionResponse>> {
let region_distribution = region_distribution(region_routes);
let leaders = find_leaders(region_routes)
.into_iter()
.map(|p| (p.id, p))
.collect::<HashMap<_, _>>();
let mut alter_region_tasks = Vec::with_capacity(leaders.len());
for (datanode_id, region_role_set) in region_distribution {
if region_role_set.leader_regions.is_empty() {
continue;
}
// Safety: must exist.
let peer = leaders.get(&datanode_id).unwrap();
let requester = node_manager.datanode(peer).await;
let requests = self.make_alter_region_request(&region_role_set.leader_regions);
let requester = requester.clone();
let peer = peer.clone();
debug!("Sending alter region requests to datanode {}", peer);
alter_region_tasks.push(async move {
requester
.handle(make_request(requests))
.await
.map_err(add_peer_context_if_needed(peer))
});
}
future::join_all(alter_region_tasks)
.await
.into_iter()
.collect::<Result<Vec<_>>>()
}
fn make_alter_region_request(&self, region_numbers: &[RegionNumber]) -> AlterRequests {
let mut requests = Vec::with_capacity(region_numbers.len() * self.alters.len());
for (table_id, alter) in self.alters.iter() {
for region_number in region_numbers {
let region_id = RegionId::new(*table_id, *region_number);
let request = make_alter_region_request(region_id, alter);
requests.push(request);
}
}
AlterRequests { requests }
}
/// Updates table metadata for alter table operation.
///
/// ## Panic:
/// - If the region distribution is not set when updating table metadata.
pub(crate) async fn on_alter_metadata(
physical_table_id: TableId,
table_metadata_manager: &TableMetadataManagerRef,
current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
region_distribution: RegionDistribution,
physical_columns: &[ColumnMetadata],
) -> Result<()> {
if physical_columns.is_empty() {
warn!("No physical columns found, leaving the physical table's schema unchanged when altering logical tables");
return Ok(());
}
let table_ref = current_table_info_value.table_ref();
let table_id = physical_table_id;
// Generates new table info
let old_raw_table_info = current_table_info_value.table_info.clone();
let new_raw_table_info =
raw_table_info::build_new_physical_table_info(old_raw_table_info, physical_columns);
debug!(
"Starting update table: {} metadata, table_id: {}, new table info: {:?}",
table_ref, table_id, new_raw_table_info
);
table_metadata_manager
.update_table_info(
current_table_info_value,
Some(region_distribution),
new_raw_table_info,
)
.await?;
Ok(())
}
/// Builds the cache ident keys for the alter logical tables.
///
/// The cache ident keys are:
/// - The table id of the logical tables.
/// - The table name of the logical tables.
/// - The table id of the physical table.
/// - The table name of the physical table.
pub(crate) fn build_cache_ident_keys(
physical_table_info: &TableInfoValue,
logical_table_info_values: &[&TableInfoValue],
) -> Vec<CacheIdent> {
let mut cache_keys = Vec::with_capacity(logical_table_info_values.len() * 2 + 2);
cache_keys.extend(logical_table_info_values.iter().flat_map(|table| {
vec![
CacheIdent::TableId(table.table_info.ident.table_id),
CacheIdent::TableName(table.table_name()),
]
}));
cache_keys.push(CacheIdent::TableId(
physical_table_info.table_info.ident.table_id,
));
cache_keys.push(CacheIdent::TableName(physical_table_info.table_name()));
cache_keys
}
}
fn make_request(alter_requests: AlterRequests) -> RegionRequest {
RegionRequest {
header: Some(RegionRequestHeader {
tracing_context: TracingContext::from_current_span().to_w3c(),
..Default::default()
}),
body: Some(region_request::Body::Alters(alter_requests)),
}
}
/// Makes an alter region request.
pub fn make_alter_region_request(
region_id: RegionId,
alter_table_expr: &AlterTableExpr,
) -> AlterRequest {
let region_id = region_id.as_u64();
let kind = match &alter_table_expr.kind {
Some(Kind::AddColumns(add_columns)) => Some(alter_request::Kind::AddColumns(
to_region_add_columns(add_columns),
)),
_ => unreachable!(), // Safety: we have checked the kind in check_input_tasks
};
AlterRequest {
region_id,
schema_version: 0,
kind,
}
}
fn to_region_add_columns(add_columns: &v1::AddColumns) -> AddColumns {
let add_columns = add_columns
.add_columns
.iter()
.map(|add_column| {
let region_column_def = RegionColumnDef {
column_def: add_column.column_def.clone(),
..Default::default() // other fields are not used in alter logical table
};
AddColumn {
column_def: Some(region_column_def),
..Default::default() // other fields are not used in alter logical table
}
})
.collect();
AddColumns { add_columns }
}
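The executor above issues one region-level AlterRequest per pair of logical table and leader region number on each datanode, batched into a single AlterRequests. A standalone sketch of that fan-out using plain tuples and hypothetical names, not the actual proto types:

// Standalone sketch of the fan-out in make_alter_region_request; not part of the diff.
fn fan_out(table_ids: &[u32], region_numbers: &[u32]) -> Vec<(u32, u32)> {
    let mut requests = Vec::with_capacity(table_ids.len() * region_numbers.len());
    for table_id in table_ids {
        for region_number in region_numbers {
            requests.push((*table_id, *region_number));
        }
    }
    requests
}

fn main() {
    // 3 logical tables altered on a datanode that leads 2 regions
    // => 6 alter requests in one batched AlterRequests call.
    assert_eq!(fan_out(&[1001, 1002, 1003], &[0, 1]).len(), 6);
}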

View File

@@ -1,158 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_catalog::format_full_table_name;
use snafu::OptionExt;
use table::metadata::TableId;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::error::{
AlterLogicalTablesInvalidArgumentsSnafu, Result, TableInfoNotFoundSnafu, TableNotFoundSnafu,
TableRouteNotFoundSnafu,
};
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteValue;
use crate::key::DeserializedValueWithBytes;
use crate::rpc::ddl::AlterTableTask;
impl AlterLogicalTablesProcedure {
pub(crate) fn filter_task(&mut self, finished_tasks: &[bool]) -> Result<()> {
debug_assert_eq!(finished_tasks.len(), self.data.tasks.len());
debug_assert_eq!(finished_tasks.len(), self.data.table_info_values.len());
self.data.tasks = self
.data
.tasks
.drain(..)
.zip(finished_tasks.iter())
.filter_map(|(task, finished)| if *finished { None } else { Some(task) })
.collect();
self.data.table_info_values = self
.data
.table_info_values
.drain(..)
.zip(finished_tasks.iter())
.filter_map(|(table_info_value, finished)| {
if *finished {
None
} else {
Some(table_info_value)
}
})
.collect();
Ok(())
}
pub(crate) async fn fill_physical_table_info(&mut self) -> Result<()> {
let (physical_table_info, physical_table_route) = self
.context
.table_metadata_manager
.get_full_table_info(self.data.physical_table_id)
.await?;
let physical_table_info = physical_table_info.with_context(|| TableInfoNotFoundSnafu {
table: format!("table id - {}", self.data.physical_table_id),
})?;
let physical_table_route = physical_table_route
.context(TableRouteNotFoundSnafu {
table_id: self.data.physical_table_id,
})?
.into_inner();
self.data.physical_table_info = Some(physical_table_info);
let TableRouteValue::Physical(physical_table_route) = physical_table_route else {
return AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: format!(
"expected a physical table but got a logical table: {:?}",
self.data.physical_table_id
),
}
.fail();
};
self.data.physical_table_route = Some(physical_table_route);
Ok(())
}
pub(crate) async fn fill_table_info_values(&mut self) -> Result<()> {
let table_ids = self.get_all_table_ids().await?;
let table_info_values = self.get_all_table_info_values(&table_ids).await?;
debug_assert_eq!(table_info_values.len(), self.data.tasks.len());
self.data.table_info_values = table_info_values;
Ok(())
}
async fn get_all_table_info_values(
&self,
table_ids: &[TableId],
) -> Result<Vec<DeserializedValueWithBytes<TableInfoValue>>> {
let table_info_manager = self.context.table_metadata_manager.table_info_manager();
let mut table_info_map = table_info_manager.batch_get_raw(table_ids).await?;
let mut table_info_values = Vec::with_capacity(table_ids.len());
for (table_id, task) in table_ids.iter().zip(self.data.tasks.iter()) {
let table_info_value =
table_info_map
.remove(table_id)
.with_context(|| TableInfoNotFoundSnafu {
table: extract_table_name(task),
})?;
table_info_values.push(table_info_value);
}
Ok(table_info_values)
}
async fn get_all_table_ids(&self) -> Result<Vec<TableId>> {
let table_name_manager = self.context.table_metadata_manager.table_name_manager();
let table_name_keys = self
.data
.tasks
.iter()
.map(|task| extract_table_name_key(task))
.collect();
let table_name_values = table_name_manager.batch_get(table_name_keys).await?;
let mut table_ids = Vec::with_capacity(table_name_values.len());
for (value, task) in table_name_values.into_iter().zip(self.data.tasks.iter()) {
let table_id = value
.with_context(|| TableNotFoundSnafu {
table_name: extract_table_name(task),
})?
.table_id();
table_ids.push(table_id);
}
Ok(table_ids)
}
}
#[inline]
fn extract_table_name(task: &AlterTableTask) -> String {
format_full_table_name(
&task.alter_table.catalog_name,
&task.alter_table.schema_name,
&task.alter_table.table_name,
)
}
#[inline]
fn extract_table_name_key(task: &AlterTableTask) -> TableNameKey {
TableNameKey::new(
&task.alter_table.catalog_name,
&task.alter_table.schema_name,
&task.alter_table.table_name,
)
}

View File

@@ -1,113 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::alter_table_expr::Kind;
use api::v1::region::{
alter_request, region_request, AddColumn, AddColumns, AlterRequest, AlterRequests,
RegionColumnDef, RegionRequest, RegionRequestHeader,
};
use api::v1::{self, AlterTableExpr};
use common_telemetry::tracing_context::TracingContext;
use store_api::storage::RegionId;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::error::Result;
use crate::peer::Peer;
use crate::rpc::router::{find_leader_regions, RegionRoute};
impl AlterLogicalTablesProcedure {
pub(crate) fn make_request(
&self,
peer: &Peer,
region_routes: &[RegionRoute],
) -> Result<RegionRequest> {
let alter_requests = self.make_alter_region_requests(peer, region_routes)?;
let request = RegionRequest {
header: Some(RegionRequestHeader {
tracing_context: TracingContext::from_current_span().to_w3c(),
..Default::default()
}),
body: Some(region_request::Body::Alters(alter_requests)),
};
Ok(request)
}
fn make_alter_region_requests(
&self,
peer: &Peer,
region_routes: &[RegionRoute],
) -> Result<AlterRequests> {
let tasks = &self.data.tasks;
let regions_on_this_peer = find_leader_regions(region_routes, peer);
let mut requests = Vec::with_capacity(tasks.len() * regions_on_this_peer.len());
for (task, table) in self
.data
.tasks
.iter()
.zip(self.data.table_info_values.iter())
{
for region_number in &regions_on_this_peer {
let region_id = RegionId::new(table.table_info.ident.table_id, *region_number);
let request = make_alter_region_request(
region_id,
&task.alter_table,
table.table_info.ident.version,
);
requests.push(request);
}
}
Ok(AlterRequests { requests })
}
}
/// Makes an alter region request.
pub fn make_alter_region_request(
region_id: RegionId,
alter_table_expr: &AlterTableExpr,
schema_version: u64,
) -> AlterRequest {
let region_id = region_id.as_u64();
let kind = match &alter_table_expr.kind {
Some(Kind::AddColumns(add_columns)) => Some(alter_request::Kind::AddColumns(
to_region_add_columns(add_columns),
)),
_ => unreachable!(), // Safety: we have checked the kind in check_input_tasks
};
AlterRequest {
region_id,
schema_version,
kind,
}
}
fn to_region_add_columns(add_columns: &v1::AddColumns) -> AddColumns {
let add_columns = add_columns
.add_columns
.iter()
.map(|add_column| {
let region_column_def = RegionColumnDef {
column_def: add_column.column_def.clone(),
..Default::default() // other fields are not used in alter logical table
};
AddColumn {
column_def: Some(region_column_def),
..Default::default() // other fields are not used in alter logical table
}
})
.collect();
AddColumns { add_columns }
}

View File

@@ -1,50 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use table::metadata::RawTableInfo;
use table::table_name::TableName;
use crate::ddl::alter_logical_tables::AlterTablesData;
use crate::instruction::CacheIdent;
impl AlterTablesData {
pub(crate) fn build_cache_keys_to_invalidate(&mut self) {
let mut cache_keys = self
.table_info_values
.iter()
.flat_map(|table| {
vec![
CacheIdent::TableId(table.table_info.ident.table_id),
CacheIdent::TableName(extract_table_name(&table.table_info)),
]
})
.collect::<Vec<_>>();
cache_keys.push(CacheIdent::TableId(self.physical_table_id));
// Safety: physical_table_info already filled in previous steps
let physical_table_info = &self.physical_table_info.as_ref().unwrap().table_info;
cache_keys.push(CacheIdent::TableName(extract_table_name(
physical_table_info,
)));
self.table_cache_keys_to_invalidate = cache_keys;
}
}
fn extract_table_name(table_info: &RawTableInfo) -> TableName {
TableName::new(
&table_info.catalog_name,
&table_info.schema_name,
&table_info.name,
)
}

View File

@@ -13,41 +13,35 @@
// limitations under the License.
use common_grpc_expr::alter_expr_to_request;
use common_telemetry::warn;
use itertools::Itertools;
use snafu::ResultExt;
use table::metadata::{RawTableInfo, TableInfo};
use crate::ddl::alter_logical_tables::executor::AlterLogicalTablesExecutor;
use crate::ddl::alter_logical_tables::AlterLogicalTablesProcedure;
use crate::ddl::physical_table_metadata;
use crate::error;
use crate::error::{ConvertAlterTableRequestSnafu, Result};
use crate::key::table_info::TableInfoValue;
use crate::key::DeserializedValueWithBytes;
use crate::rpc::ddl::AlterTableTask;
use crate::rpc::router::region_distribution;
impl AlterLogicalTablesProcedure {
pub(crate) async fn update_physical_table_metadata(&mut self) -> Result<()> {
if self.data.physical_columns.is_empty() {
warn!("No physical columns found, leaving the physical table's schema unchanged when altering logical tables");
return Ok(());
}
// Safety: must exist.
let physical_table_info = self.data.physical_table_info.as_ref().unwrap();
let physical_table_route = self.data.physical_table_route.as_ref().unwrap();
let region_distribution = region_distribution(&physical_table_route.region_routes);
// Generates new table info
let old_raw_table_info = physical_table_info.table_info.clone();
let new_raw_table_info = physical_table_metadata::build_new_physical_table_info(
old_raw_table_info,
// Updates physical table's metadata.
AlterLogicalTablesExecutor::on_alter_metadata(
self.data.physical_table_id,
&self.context.table_metadata_manager,
physical_table_info,
region_distribution,
&self.data.physical_columns,
);
// Updates physical table's metadata, and we don't need to touch per-region settings.
self.context
.table_metadata_manager
.update_table_info(physical_table_info, None, new_raw_table_info)
.await?;
)
.await?;
Ok(())
}

View File

@@ -0,0 +1,284 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashSet;
use api::v1::alter_table_expr::Kind;
use api::v1::AlterTableExpr;
use snafu::{ensure, OptionExt};
use store_api::storage::TableId;
use table::table_reference::TableReference;
use crate::ddl::utils::table_id::get_all_table_ids_by_names;
use crate::ddl::utils::table_info::get_all_table_info_values_by_table_ids;
use crate::error::{
AlterLogicalTablesInvalidArgumentsSnafu, Result, TableInfoNotFoundSnafu,
TableRouteNotFoundSnafu,
};
use crate::key::table_info::TableInfoValue;
use crate::key::table_route::{PhysicalTableRouteValue, TableRouteManager, TableRouteValue};
use crate::key::{DeserializedValueWithBytes, TableMetadataManagerRef};
/// [AlterLogicalTableValidator] validates the alter logical expressions.
pub struct AlterLogicalTableValidator<'a> {
physical_table_id: TableId,
alters: Vec<&'a AlterTableExpr>,
}
impl<'a> AlterLogicalTableValidator<'a> {
pub fn new(physical_table_id: TableId, alters: Vec<&'a AlterTableExpr>) -> Self {
Self {
physical_table_id,
alters,
}
}
/// Validates all alter table expressions have the same schema and catalog.
fn validate_schema(&self) -> Result<()> {
let is_same_schema = self.alters.windows(2).all(|pair| {
pair[0].catalog_name == pair[1].catalog_name
&& pair[0].schema_name == pair[1].schema_name
});
ensure!(
is_same_schema,
AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Schemas of the alter table expressions are not the same"
}
);
Ok(())
}
/// Validates that all alter table expressions are of the supported kind.
/// Currently only supports `AddColumns` operations.
fn validate_alter_kind(&self) -> Result<()> {
for alter in &self.alters {
let kind = alter
.kind
.as_ref()
.context(AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Alter kind is missing",
})?;
let Kind::AddColumns(_) = kind else {
return AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "Only support add columns operation",
}
.fail();
};
}
Ok(())
}
fn table_names(&self) -> Vec<TableReference> {
self.alters
.iter()
.map(|alter| {
TableReference::full(&alter.catalog_name, &alter.schema_name, &alter.table_name)
})
.collect()
}
/// Validates that the physical table info and route exist.
///
/// This method performs the following validations:
/// 1. Retrieves the full table info and route for the given physical table id
/// 2. Ensures the table info and table route exist
/// 3. Verifies that the table route is actually a physical table route, not a logical one
///
/// Returns a tuple containing the validated table info and physical table route.
async fn validate_physical_table(
&self,
table_metadata_manager: &TableMetadataManagerRef,
) -> Result<(
DeserializedValueWithBytes<TableInfoValue>,
PhysicalTableRouteValue,
)> {
let (table_info, table_route) = table_metadata_manager
.get_full_table_info(self.physical_table_id)
.await?;
let table_info = table_info.with_context(|| TableInfoNotFoundSnafu {
table: format!("table id - {}", self.physical_table_id),
})?;
let physical_table_route = table_route
.context(TableRouteNotFoundSnafu {
table_id: self.physical_table_id,
})?
.into_inner();
let TableRouteValue::Physical(table_route) = physical_table_route else {
return AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: format!(
"expected a physical table but got a logical table: {:?}",
self.physical_table_id
),
}
.fail();
};
Ok((table_info, table_route))
}
/// Validates that all logical table routes have the same physical table id.
///
/// This method performs the following validations:
/// 1. Retrieves table routes for all the given table ids.
/// 2. Ensures that all retrieved routes are logical table routes (not physical)
/// 3. Verifies that all logical table routes reference the same physical table id.
/// 4. Returns an error if any route is not logical or references a different physical table.
async fn validate_logical_table_routes(
&self,
table_route_manager: &TableRouteManager,
table_ids: &[TableId],
) -> Result<()> {
let table_routes = table_route_manager
.table_route_storage()
.batch_get(table_ids)
.await?;
let physical_table_id = self.physical_table_id;
let is_same_physical_table = table_routes.iter().all(|r| {
if let Some(TableRouteValue::Logical(r)) = r {
r.physical_table_id() == physical_table_id
} else {
false
}
});
ensure!(
is_same_physical_table,
AlterLogicalTablesInvalidArgumentsSnafu {
err_msg: "All the tasks should have the same physical table id"
}
);
Ok(())
}
/// Validates the alter logical expressions.
///
/// This method performs the following validations:
/// 1. Validates that all alter table expressions have the same schema and catalog.
/// 2. Validates that all alter table expressions are of the supported kind.
/// 3. Validates that the physical table info and route exist.
/// 4. Validates that all logical table routes have the same physical table id.
///
/// Returns a [ValidatorResult] containing the validation results.
pub async fn validate(
&self,
table_metadata_manager: &TableMetadataManagerRef,
) -> Result<ValidatorResult> {
self.validate_schema()?;
self.validate_alter_kind()?;
let (physical_table_info, physical_table_route) =
self.validate_physical_table(table_metadata_manager).await?;
let table_names = self.table_names();
let table_ids =
get_all_table_ids_by_names(table_metadata_manager.table_name_manager(), &table_names)
.await?;
let mut table_info_values = get_all_table_info_values_by_table_ids(
table_metadata_manager.table_info_manager(),
&table_ids,
&table_names,
)
.await?;
self.validate_logical_table_routes(
table_metadata_manager.table_route_manager(),
&table_ids,
)
.await?;
let skip_alter = self
.alters
.iter()
.zip(table_info_values.iter())
.map(|(task, table)| skip_alter_logical_region(task, table))
.collect::<Vec<_>>();
retain_unskipped(&mut table_info_values, &skip_alter);
let num_skipped = skip_alter.iter().filter(|&&x| x).count();
Ok(ValidatorResult {
num_skipped,
skip_alter,
table_info_values,
physical_table_info,
physical_table_route,
})
}
}
/// The result of the validator.
pub(crate) struct ValidatorResult {
pub(crate) num_skipped: usize,
pub(crate) skip_alter: Vec<bool>,
pub(crate) table_info_values: Vec<DeserializedValueWithBytes<TableInfoValue>>,
pub(crate) physical_table_info: DeserializedValueWithBytes<TableInfoValue>,
pub(crate) physical_table_route: PhysicalTableRouteValue,
}
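A minimal wiring sketch for the validator, assuming `physical_table_id`, `alters`, and `table_metadata_manager` are already in scope as they are inside the procedure:
// Sketch only: error handling and procedure state transitions are omitted.
let validator = AlterLogicalTableValidator::new(physical_table_id, alters);
let result = validator.validate(&table_metadata_manager).await?;
if result.num_skipped == result.skip_alter.len() {
    // Every logical table already has the requested columns; nothing to alter.
}
// Otherwise `result.table_info_values` holds only the tables that still need
// altering, together with the validated physical table info and route.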
/// Retains the elements that are not skipped.
pub(crate) fn retain_unskipped<T>(target: &mut Vec<T>, skipped: &[bool]) {
debug_assert_eq!(target.len(), skipped.len());
let mut iter = skipped.iter();
target.retain(|_| !iter.next().unwrap());
}
/// Returns true if the logical region does not need to be altered.
fn skip_alter_logical_region(alter: &AlterTableExpr, table: &TableInfoValue) -> bool {
let existing_columns = table
.table_info
.meta
.schema
.column_schemas
.iter()
.map(|c| &c.name)
.collect::<HashSet<_>>();
let Some(kind) = alter.kind.as_ref() else {
return true; // Never get here since we have checked it in `validate_alter_kind`
};
let Kind::AddColumns(add_columns) = kind else {
return true; // Never get here since we have checked it in `validate_alter_kind`
};
// We only skip the alteration when all requested columns already exist. In other
// words, if some of the columns exist but others do not, the alteration is still
// considered unfinished.
add_columns
.add_columns
.iter()
.map(|add_column| add_column.column_def.as_ref().map(|c| &c.name))
.all(|column| {
column
.map(|c| existing_columns.contains(c))
.unwrap_or(false)
})
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_retain_unskipped() {
let mut target = vec![1, 2, 3, 4, 5];
let skipped = vec![false, true, false, true, false];
retain_unskipped(&mut target, &skipped);
assert_eq!(target, vec![1, 3, 5]);
}
}

View File

@@ -12,10 +12,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod check;
mod executor;
mod metadata;
mod region_request;
mod update_metadata;
use std::vec;
@@ -29,33 +28,29 @@ use common_procedure::{
Context as ProcedureContext, ContextProvider, Error as ProcedureError, LockKey, PoisonKey,
PoisonKeys, Procedure, ProcedureId, Status, StringKey,
};
use common_telemetry::{debug, error, info, warn};
use futures::future::{self};
use common_telemetry::{error, info, warn};
use serde::{Deserialize, Serialize};
use snafu::{ensure, ResultExt};
use store_api::metadata::ColumnMetadata;
use store_api::metric_engine_consts::TABLE_COLUMN_METADATA_EXTENSION_KEY;
use store_api::storage::RegionId;
use strum::AsRefStr;
use table::metadata::{RawTableInfo, TableId, TableInfo};
use table::table_reference::TableReference;
use crate::cache_invalidator::Context;
use crate::ddl::physical_table_metadata::update_table_info_column_ids;
use crate::ddl::alter_table::executor::AlterTableExecutor;
use crate::ddl::utils::{
add_peer_context_if_needed, extract_column_metadatas, handle_multiple_results,
map_to_procedure_error, sync_follower_regions, MultipleResults,
extract_column_metadatas, handle_multiple_results, map_to_procedure_error,
sync_follower_regions, MultipleResults,
};
use crate::ddl::DdlContext;
use crate::error::{AbortProcedureSnafu, NoLeaderSnafu, PutPoisonSnafu, Result, RetryLaterSnafu};
use crate::instruction::CacheIdent;
use crate::key::table_info::TableInfoValue;
use crate::key::{DeserializedValueWithBytes, RegionDistribution};
use crate::lock_key::{CatalogLock, SchemaLock, TableLock, TableNameLock};
use crate::metrics;
use crate::poison_key::table_poison_key;
use crate::rpc::ddl::AlterTableTask;
use crate::rpc::router::{find_leader_regions, find_leaders, region_distribution, RegionRoute};
use crate::rpc::router::{find_leaders, region_distribution, RegionRoute};
/// The alter table procedure
pub struct AlterTableProcedure {
@@ -67,6 +62,24 @@ pub struct AlterTableProcedure {
/// If we recover the procedure from json, then the table info value is not cached.
/// But we already validated it in the prepare step.
new_table_info: Option<TableInfo>,
/// The alter table executor.
executor: AlterTableExecutor,
}
/// Builds the executor from the [`AlterTableData`].
///
/// # Panics
/// - If the alter kind is not set.
fn build_executor_from_alter_expr(alter_data: &AlterTableData) -> AlterTableExecutor {
let table_name = alter_data.table_ref().into();
let table_id = alter_data.table_id;
let alter_kind = alter_data.task.alter_table.kind.as_ref().unwrap();
let new_table_name = if let Kind::RenameTable(RenameTable { new_table_name }) = alter_kind {
Some(new_table_name.to_string())
} else {
None
};
AlterTableExecutor::new(table_name, table_id, new_table_name)
}
impl AlterTableProcedure {
@@ -74,33 +87,42 @@ impl AlterTableProcedure {
pub fn new(table_id: TableId, task: AlterTableTask, context: DdlContext) -> Result<Self> {
task.validate()?;
let data = AlterTableData::new(task, table_id);
let executor = build_executor_from_alter_expr(&data);
Ok(Self {
context,
data: AlterTableData::new(task, table_id),
data,
new_table_info: None,
executor,
})
}
pub fn from_json(json: &str, context: DdlContext) -> ProcedureResult<Self> {
let data: AlterTableData = serde_json::from_str(json).context(FromJsonSnafu)?;
let executor = build_executor_from_alter_expr(&data);
Ok(AlterTableProcedure {
context,
data,
new_table_info: None,
executor,
})
}
// Checks whether the table exists.
pub(crate) async fn on_prepare(&mut self) -> Result<Status> {
self.check_alter().await?;
self.executor
.on_prepare(&self.context.table_metadata_manager)
.await?;
self.fill_table_info().await?;
// Validates the request and builds the new table info.
// We need to build the new table info here to ensure the alteration is valid
// before reaching the `UpdateMeta` state, where the regions have already been altered.
// Safety: `fill_table_info()` already set it.
// Safety: filled in `fill_table_info`.
let table_info_value = self.data.table_info_value.as_ref().unwrap();
self.new_table_info = Some(self.build_new_table_info(&table_info_value.table_info)?);
let new_table_info = AlterTableExecutor::validate_alter_table_expr(
&table_info_value.table_info,
self.data.task.alter_table.clone(),
)?;
self.new_table_info = Some(new_table_info);
// Safety: Checked in `AlterTableProcedure::new`.
let alter_kind = self.data.task.alter_table.kind.as_ref().unwrap();
@@ -143,9 +165,7 @@ impl AlterTableProcedure {
self.data.region_distribution =
Some(region_distribution(&physical_table_route.region_routes));
let leaders = find_leaders(&physical_table_route.region_routes);
let mut alter_region_tasks = Vec::with_capacity(leaders.len());
let alter_kind = self.make_region_alter_kind()?;
info!(
@@ -158,31 +178,14 @@ impl AlterTableProcedure {
ensure!(!leaders.is_empty(), NoLeaderSnafu { table_id });
// Puts the poison before submitting alter region requests to datanodes.
self.put_poison(ctx_provider, procedure_id).await?;
for datanode in leaders {
let requester = self.context.node_manager.datanode(&datanode).await;
let regions = find_leader_regions(&physical_table_route.region_routes, &datanode);
for region in regions {
let region_id = RegionId::new(table_id, region);
let request = self.make_alter_region_request(region_id, alter_kind.clone())?;
debug!("Submitting {request:?} to {datanode}");
let datanode = datanode.clone();
let requester = requester.clone();
alter_region_tasks.push(async move {
requester
.handle(request)
.await
.map_err(add_peer_context_if_needed(datanode))
});
}
}
let results = future::join_all(alter_region_tasks)
.await
.into_iter()
.collect::<Vec<_>>();
let results = self
.executor
.on_alter_regions(
&self.context.node_manager,
&physical_table_route.region_routes,
alter_kind,
)
.await;
match handle_multiple_results(results) {
MultipleResults::PartialRetryable(error) => {
@@ -224,7 +227,6 @@ impl AlterTableProcedure {
}
fn handle_alter_region_response(&mut self, mut results: Vec<RegionResponse>) -> Result<()> {
self.data.state = AlterTableState::UpdateMetadata;
if let Some(column_metadatas) =
extract_column_metadatas(&mut results, TABLE_COLUMN_METADATA_EXTENSION_KEY)?
{
@@ -232,7 +234,7 @@ impl AlterTableProcedure {
} else {
warn!("altering table result doesn't contains extension key `{TABLE_COLUMN_METADATA_EXTENSION_KEY}`,leaving the table's column metadata unchanged");
}
self.data.state = AlterTableState::UpdateMetadata;
Ok(())
}
@@ -260,43 +262,34 @@ impl AlterTableProcedure {
pub(crate) async fn on_update_metadata(&mut self) -> Result<Status> {
let table_id = self.data.table_id();
let table_ref = self.data.table_ref();
// Safety: checked before.
// Safety: filled in `fill_table_info`.
let table_info_value = self.data.table_info_value.as_ref().unwrap();
// Safety: Checked in `AlterTableProcedure::new`.
let alter_kind = self.data.task.alter_table.kind.as_ref().unwrap();
// Gets the table info from the cache or builds it.
let new_info = match &self.new_table_info {
let new_info = match &self.new_table_info {
Some(cached) => cached.clone(),
None => self.build_new_table_info(&table_info_value.table_info)
None => AlterTableExecutor::validate_alter_table_expr(
&table_info_value.table_info,
self.data.task.alter_table.clone(),
)
.inspect_err(|e| {
// We already checked the table info in the prepare step, so this should not happen.
error!(e; "Unable to build info for table {} in update metadata step, table_id: {}", table_ref, table_id);
})?,
};
debug!(
"Starting update table: {} metadata, new table info {:?}",
table_ref.to_string(),
new_info
);
// Safety: Checked in `AlterTableProcedure::new`.
let alter_kind = self.data.task.alter_table.kind.as_ref().unwrap();
if let Kind::RenameTable(RenameTable { new_table_name }) = alter_kind {
self.on_update_metadata_for_rename(new_table_name.to_string(), table_info_value)
.await?;
} else {
let mut raw_table_info = new_info.into();
if !self.data.column_metadatas.is_empty() {
update_table_info_column_ids(&mut raw_table_info, &self.data.column_metadatas);
}
// region distribution is set in submit_alter_region_requests
let region_distribution = self.data.region_distribution.as_ref().unwrap().clone();
self.on_update_metadata_for_alter(
raw_table_info,
region_distribution,
// Safety: region distribution is set in `submit_alter_region_requests`.
self.executor
.on_alter_metadata(
&self.context.table_metadata_manager,
table_info_value,
self.data.region_distribution.as_ref(),
new_info.into(),
&self.data.column_metadatas,
)
.await?;
}
info!("Updated table metadata for table {table_ref}, table_id: {table_id}, kind: {alter_kind:?}");
self.data.state = AlterTableState::InvalidateTableCache;
@@ -305,18 +298,9 @@ impl AlterTableProcedure {
/// Broadcasts the invalidating table cache instructions.
async fn on_broadcast(&mut self) -> Result<Status> {
let cache_invalidator = &self.context.cache_invalidator;
cache_invalidator
.invalidate(
&Context::default(),
&[
CacheIdent::TableId(self.data.table_id()),
CacheIdent::TableName(self.data.table_ref().into()),
],
)
self.executor
.invalidate_table_cache(&self.context.cache_invalidator)
.await?;
Ok(Status::done())
}

View File

@@ -1,62 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::alter_table_expr::Kind;
use api::v1::RenameTable;
use common_catalog::format_full_table_name;
use snafu::ensure;
use crate::ddl::alter_table::AlterTableProcedure;
use crate::error::{self, Result};
use crate::key::table_name::TableNameKey;
impl AlterTableProcedure {
/// Checks:
/// - The new table name doesn't exist (rename).
/// - Table exists.
pub(crate) async fn check_alter(&self) -> Result<()> {
let alter_expr = &self.data.task.alter_table;
let catalog = &alter_expr.catalog_name;
let schema = &alter_expr.schema_name;
let table_name = &alter_expr.table_name;
// Safety: Checked in `AlterTableProcedure::new`.
let alter_kind = self.data.task.alter_table.kind.as_ref().unwrap();
let manager = &self.context.table_metadata_manager;
if let Kind::RenameTable(RenameTable { new_table_name }) = alter_kind {
let new_table_name_key = TableNameKey::new(catalog, schema, new_table_name);
let exists = manager
.table_name_manager()
.exists(new_table_name_key)
.await?;
ensure!(
!exists,
error::TableAlreadyExistsSnafu {
table_name: format_full_table_name(catalog, schema, new_table_name),
}
)
}
let table_name_key = TableNameKey::new(catalog, schema, table_name);
let exists = manager.table_name_manager().exists(table_name_key).await?;
ensure!(
exists,
error::TableNotFoundSnafu {
table_name: format_full_table_name(catalog, schema, &alter_expr.table_name),
}
);
Ok(())
}
}

View File

@@ -0,0 +1,313 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use api::region::RegionResponse;
use api::v1::region::region_request::Body;
use api::v1::region::{alter_request, AlterRequest, RegionRequest, RegionRequestHeader};
use api::v1::AlterTableExpr;
use common_catalog::format_full_table_name;
use common_grpc_expr::alter_expr_to_request;
use common_telemetry::tracing_context::TracingContext;
use common_telemetry::{debug, info};
use futures::future;
use snafu::{ensure, ResultExt};
use store_api::metadata::ColumnMetadata;
use store_api::storage::{RegionId, TableId};
use table::metadata::{RawTableInfo, TableInfo};
use table::requests::AlterKind;
use table::table_name::TableName;
use crate::cache_invalidator::{CacheInvalidatorRef, Context};
use crate::ddl::utils::{add_peer_context_if_needed, raw_table_info};
use crate::error::{self, Result, UnexpectedSnafu};
use crate::instruction::CacheIdent;
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::{DeserializedValueWithBytes, RegionDistribution, TableMetadataManagerRef};
use crate::node_manager::NodeManagerRef;
use crate::rpc::router::{find_leaders, region_distribution, RegionRoute};
/// [AlterTableExecutor] performs:
/// - Alters the metadata of the table.
/// - Alters regions on the datanode nodes.
pub struct AlterTableExecutor {
table: TableName,
table_id: TableId,
/// The new table name if the alter kind is rename table.
new_table_name: Option<String>,
}
impl AlterTableExecutor {
/// Creates a new [`AlterTableExecutor`].
pub fn new(table: TableName, table_id: TableId, new_table_name: Option<String>) -> Self {
Self {
table,
table_id,
new_table_name,
}
}
/// Prepares to alter the table.
///
/// ## Checks:
/// - The new table name doesn't exist (rename).
/// - Table exists.
pub(crate) async fn on_prepare(
&self,
table_metadata_manager: &TableMetadataManagerRef,
) -> Result<()> {
let catalog = &self.table.catalog_name;
let schema = &self.table.schema_name;
let table_name = &self.table.table_name;
let manager = table_metadata_manager;
if let Some(new_table_name) = &self.new_table_name {
let new_table_name_key = TableNameKey::new(catalog, schema, new_table_name);
let exists = manager
.table_name_manager()
.exists(new_table_name_key)
.await?;
ensure!(
!exists,
error::TableAlreadyExistsSnafu {
table_name: format_full_table_name(catalog, schema, new_table_name),
}
)
}
let table_name_key = TableNameKey::new(catalog, schema, table_name);
let exists = manager.table_name_manager().exists(table_name_key).await?;
ensure!(
exists,
error::TableNotFoundSnafu {
table_name: format_full_table_name(catalog, schema, table_name),
}
);
Ok(())
}
/// Validates the alter table expression and builds the new table info.
///
/// This validation is performed early to ensure the alteration is valid before
/// proceeding to the `on_alter_metadata` state, where regions have already been altered.
/// Building the new table info here allows us to catch any issues with the
/// alteration before committing metadata changes.
pub(crate) fn validate_alter_table_expr(
table_info: &RawTableInfo,
alter_table_expr: AlterTableExpr,
) -> Result<TableInfo> {
build_new_table_info(table_info, alter_table_expr)
}
/// Updates table metadata for alter table operation.
pub(crate) async fn on_alter_metadata(
&self,
table_metadata_manager: &TableMetadataManagerRef,
current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
region_distribution: Option<&RegionDistribution>,
mut raw_table_info: RawTableInfo,
column_metadatas: &[ColumnMetadata],
) -> Result<()> {
let table_ref = self.table.table_ref();
let table_id = self.table_id;
if let Some(new_table_name) = &self.new_table_name {
debug!(
"Starting update table: {} metadata, table_id: {}, new table info: {:?}, new table name: {}",
table_ref, table_id, raw_table_info, new_table_name
);
table_metadata_manager
.rename_table(current_table_info_value, new_table_name.to_string())
.await?;
} else {
debug!(
"Starting update table: {} metadata, table_id: {}, new table info: {:?}",
table_ref, table_id, raw_table_info
);
ensure!(
region_distribution.is_some(),
UnexpectedSnafu {
err_msg: "region distribution is not set when updating table metadata",
}
);
if !column_metadatas.is_empty() {
raw_table_info::update_table_info_column_ids(&mut raw_table_info, column_metadatas);
}
table_metadata_manager
.update_table_info(
current_table_info_value,
region_distribution.cloned(),
raw_table_info,
)
.await?;
}
Ok(())
}
/// Alters regions on the datanode nodes.
pub(crate) async fn on_alter_regions(
&self,
node_manager: &NodeManagerRef,
region_routes: &[RegionRoute],
kind: Option<alter_request::Kind>,
) -> Vec<Result<RegionResponse>> {
let region_distribution = region_distribution(region_routes);
let leaders = find_leaders(region_routes)
.into_iter()
.map(|p| (p.id, p))
.collect::<HashMap<_, _>>();
let total_num_region = region_distribution
.values()
.map(|r| r.leader_regions.len())
.sum::<usize>();
let mut alter_region_tasks = Vec::with_capacity(total_num_region);
for (datanode_id, region_role_set) in region_distribution {
if region_role_set.leader_regions.is_empty() {
continue;
}
// Safety: must exist.
let peer = leaders.get(&datanode_id).unwrap();
let requester = node_manager.datanode(peer).await;
for region_id in region_role_set.leader_regions {
let region_id = RegionId::new(self.table_id, region_id);
let request = make_alter_region_request(region_id, kind.clone());
let requester = requester.clone();
let peer = peer.clone();
alter_region_tasks.push(async move {
requester
.handle(request)
.await
.map_err(add_peer_context_if_needed(peer))
});
}
}
future::join_all(alter_region_tasks)
.await
.into_iter()
.collect::<Vec<_>>()
}
/// Invalidates cache for the table.
pub(crate) async fn invalidate_table_cache(
&self,
cache_invalidator: &CacheInvalidatorRef,
) -> Result<()> {
let ctx = Context {
subject: Some(format!(
"Invalidate table cache by altering table {}, table_id: {}",
self.table.table_ref(),
self.table_id,
)),
};
cache_invalidator
.invalidate(
&ctx,
&[
CacheIdent::TableName(self.table.clone()),
CacheIdent::TableId(self.table_id),
],
)
.await?;
Ok(())
}
}
/// Makes alter region request.
pub(crate) fn make_alter_region_request(
region_id: RegionId,
kind: Option<alter_request::Kind>,
) -> RegionRequest {
RegionRequest {
header: Some(RegionRequestHeader {
tracing_context: TracingContext::from_current_span().to_w3c(),
..Default::default()
}),
body: Some(Body::Alter(AlterRequest {
region_id: region_id.as_u64(),
kind,
..Default::default()
})),
}
}
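A usage fragment (mirroring the updated tests later in this diff), with `table_id`, `region_number`, and a prebuilt `kind` assumed to be in scope:
// Sketch only: wraps one alter kind into a per-region request.
let region_id = RegionId::new(table_id, region_number);
let request = make_alter_region_request(region_id, kind.clone());
let Some(Body::Alter(alter)) = request.body else { unreachable!() };
assert_eq!(alter.region_id, region_id.as_u64());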
/// Builds new table info after alteration.
///
/// This function creates a new table info by applying the alter table expression
/// to the existing table info. For add column operations, it increments the
/// `next_column_id` by the number of columns being added, which may result in gaps
/// in the column id sequence.
fn build_new_table_info(
table_info: &RawTableInfo,
alter_table_expr: AlterTableExpr,
) -> Result<TableInfo> {
let table_info =
TableInfo::try_from(table_info.clone()).context(error::ConvertRawTableInfoSnafu)?;
let schema_name = &table_info.schema_name;
let catalog_name = &table_info.catalog_name;
let table_name = &table_info.name;
let table_id = table_info.ident.table_id;
let request = alter_expr_to_request(table_id, alter_table_expr)
.context(error::ConvertAlterTableRequestSnafu)?;
let new_meta = table_info
.meta
.builder_with_alter_kind(table_name, &request.alter_kind)
.context(error::TableSnafu)?
.build()
.with_context(|_| error::BuildTableMetaSnafu {
table_name: format_full_table_name(catalog_name, schema_name, table_name),
})?;
let mut new_info = table_info.clone();
new_info.meta = new_meta;
new_info.ident.version = table_info.ident.version + 1;
match request.alter_kind {
AlterKind::AddColumns { columns } => {
// Bumps the column id for the new columns.
// It may bump more than the actual number of columns added if there are
// existing columns, but it's fine.
new_info.meta.next_column_id += columns.len() as u32;
}
AlterKind::RenameTable { new_table_name } => {
new_info.name = new_table_name.to_string();
}
AlterKind::DropColumns { .. }
| AlterKind::ModifyColumnTypes { .. }
| AlterKind::SetTableOptions { .. }
| AlterKind::UnsetTableOptions { .. }
| AlterKind::SetIndexes { .. }
| AlterKind::UnsetIndexes { .. }
| AlterKind::DropDefaults { .. } => {}
}
info!(
"Built new table info: {:?} for table {}, table_id: {}",
new_info.meta, table_name, table_id
);
Ok(new_info)
}
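A small illustration of the `next_column_id` bump described above, using hypothetical numbers (not part of the real code path):
// Two columns are requested; suppose "host" already exists in the table.
let existing_next_column_id: u32 = 5;
let requested_columns = ["host", "region"];
let new_next_column_id = existing_next_column_id + requested_columns.len() as u32;
// The id is bumped by the request size, so one of the new ids may end up unused;
// gaps in the column id sequence are harmless because ids only need to be unique.
assert_eq!(new_next_column_id, 7);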

View File

@@ -15,43 +15,16 @@
use std::collections::HashSet;
use api::v1::alter_table_expr::Kind;
use api::v1::region::region_request::Body;
use api::v1::region::{
alter_request, AddColumn, AddColumns, AlterRequest, DropColumn, DropColumns, RegionColumnDef,
RegionRequest, RegionRequestHeader,
alter_request, AddColumn, AddColumns, DropColumn, DropColumns, RegionColumnDef,
};
use common_telemetry::tracing_context::TracingContext;
use snafu::OptionExt;
use store_api::storage::RegionId;
use table::metadata::RawTableInfo;
use crate::ddl::alter_table::AlterTableProcedure;
use crate::error::{InvalidProtoMsgSnafu, Result};
impl AlterTableProcedure {
/// Makes an alter region request from an existing alter kind.
/// The region alter request always adds columns if they do not exist.
pub(crate) fn make_alter_region_request(
&self,
region_id: RegionId,
kind: Option<alter_request::Kind>,
) -> Result<RegionRequest> {
// Safety: checked
let table_info = self.data.table_info().unwrap();
Ok(RegionRequest {
header: Some(RegionRequestHeader {
tracing_context: TracingContext::from_current_span().to_w3c(),
..Default::default()
}),
body: Some(Body::Alter(AlterRequest {
region_id: region_id.as_u64(),
schema_version: table_info.ident.version,
kind,
})),
})
}
/// Makes an alter kind proto that all regions can reuse.
/// The region alter request always adds columns if they do not exist.
pub(crate) fn make_region_alter_kind(&self) -> Result<Option<alter_request::Kind>> {
@@ -135,6 +108,8 @@ fn create_proto_alter_kind(
Kind::UnsetTableOptions(v) => Ok(Some(alter_request::Kind::UnsetTableOptions(v.clone()))),
Kind::SetIndex(v) => Ok(Some(alter_request::Kind::SetIndex(v.clone()))),
Kind::UnsetIndex(v) => Ok(Some(alter_request::Kind::UnsetIndex(v.clone()))),
Kind::SetIndexes(v) => Ok(Some(alter_request::Kind::SetIndexes(v.clone()))),
Kind::UnsetIndexes(v) => Ok(Some(alter_request::Kind::UnsetIndexes(v.clone()))),
Kind::DropDefaults(v) => Ok(Some(alter_request::Kind::DropDefaults(v.clone()))),
}
}
@@ -155,6 +130,7 @@ mod tests {
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use store_api::storage::{RegionId, TableId};
use crate::ddl::alter_table::executor::make_alter_region_request;
use crate::ddl::alter_table::AlterTableProcedure;
use crate::ddl::test_util::columns::TestColumnDefBuilder;
use crate::ddl::test_util::create_table::{
@@ -261,15 +237,13 @@ mod tests {
let mut procedure = AlterTableProcedure::new(table_id, task, ddl_context).unwrap();
procedure.on_prepare().await.unwrap();
let alter_kind = procedure.make_region_alter_kind().unwrap();
let Some(Body::Alter(alter_region_request)) = procedure
.make_alter_region_request(region_id, alter_kind)
.unwrap()
.body
let Some(Body::Alter(alter_region_request)) =
make_alter_region_request(region_id, alter_kind).body
else {
unreachable!()
};
assert_eq!(alter_region_request.region_id, region_id.as_u64());
assert_eq!(alter_region_request.schema_version, 1);
assert_eq!(alter_region_request.schema_version, 0);
assert_eq!(
alter_region_request.kind,
Some(region::alter_request::Kind::AddColumns(
@@ -319,15 +293,13 @@ mod tests {
let mut procedure = AlterTableProcedure::new(table_id, task, ddl_context).unwrap();
procedure.on_prepare().await.unwrap();
let alter_kind = procedure.make_region_alter_kind().unwrap();
let Some(Body::Alter(alter_region_request)) = procedure
.make_alter_region_request(region_id, alter_kind)
.unwrap()
.body
let Some(Body::Alter(alter_region_request)) =
make_alter_region_request(region_id, alter_kind).body
else {
unreachable!()
};
assert_eq!(alter_region_request.region_id, region_id.as_u64());
assert_eq!(alter_region_request.schema_version, 1);
assert_eq!(alter_region_request.schema_version, 0);
assert_eq!(
alter_region_request.kind,
Some(region::alter_request::Kind::ModifyColumnTypes(

View File

@@ -1,103 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_grpc_expr::alter_expr_to_request;
use snafu::ResultExt;
use table::metadata::{RawTableInfo, TableInfo};
use table::requests::AlterKind;
use crate::ddl::alter_table::AlterTableProcedure;
use crate::error::{self, Result};
use crate::key::table_info::TableInfoValue;
use crate::key::{DeserializedValueWithBytes, RegionDistribution};
impl AlterTableProcedure {
/// Builds new table info after alteration.
/// It bumps the column id of the table by the number of the add column requests.
/// So there may be holes in the column id sequence.
pub(crate) fn build_new_table_info(&self, table_info: &RawTableInfo) -> Result<TableInfo> {
let table_info =
TableInfo::try_from(table_info.clone()).context(error::ConvertRawTableInfoSnafu)?;
let table_ref = self.data.table_ref();
let alter_expr = self.data.task.alter_table.clone();
let request = alter_expr_to_request(self.data.table_id(), alter_expr)
.context(error::ConvertAlterTableRequestSnafu)?;
let new_meta = table_info
.meta
.builder_with_alter_kind(table_ref.table, &request.alter_kind)
.context(error::TableSnafu)?
.build()
.with_context(|_| error::BuildTableMetaSnafu {
table_name: table_ref.table,
})?;
let mut new_info = table_info.clone();
new_info.meta = new_meta;
new_info.ident.version = table_info.ident.version + 1;
match request.alter_kind {
AlterKind::AddColumns { columns } => {
// Bumps the column id for the new columns.
// It may bump more than the actual number of columns added if there are
// existing columns, but it's fine.
new_info.meta.next_column_id += columns.len() as u32;
}
AlterKind::RenameTable { new_table_name } => {
new_info.name = new_table_name.to_string();
}
AlterKind::DropColumns { .. }
| AlterKind::ModifyColumnTypes { .. }
| AlterKind::SetTableOptions { .. }
| AlterKind::UnsetTableOptions { .. }
| AlterKind::SetIndex { .. }
| AlterKind::UnsetIndex { .. }
| AlterKind::DropDefaults { .. } => {}
}
Ok(new_info)
}
/// Updates table metadata for rename table operation.
pub(crate) async fn on_update_metadata_for_rename(
&self,
new_table_name: String,
current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
) -> Result<()> {
let table_metadata_manager = &self.context.table_metadata_manager;
table_metadata_manager
.rename_table(current_table_info_value, new_table_name)
.await?;
Ok(())
}
/// Updates table metadata for alter table operation.
pub(crate) async fn on_update_metadata_for_alter(
&self,
new_table_info: RawTableInfo,
region_distribution: RegionDistribution,
current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
) -> Result<()> {
let table_metadata_manager = &self.context.table_metadata_manager;
table_metadata_manager
.update_table_info(
current_table_info_value,
Some(region_distribution),
new_table_info,
)
.await?;
Ok(())
}
}

View File

@@ -21,7 +21,7 @@ use crate::key::table_name::TableNameKey;
impl CreateFlowProcedure {
/// Allocates the [FlowId].
pub(crate) async fn allocate_flow_id(&mut self) -> Result<()> {
//TODO(weny, ruihang): We doesn't support the partitions. It's always be 1, now.
// TODO(weny, ruihang): We don't support the partitions. It's always be 1, now.
let partitions = 1;
let (flow_id, peers) = self
.context

View File

@@ -22,7 +22,7 @@ use table::table_name::TableName;
use crate::cache_invalidator::Context;
use crate::ddl::create_logical_tables::CreateLogicalTablesProcedure;
use crate::ddl::physical_table_metadata;
use crate::ddl::utils::raw_table_info;
use crate::error::{Result, TableInfoNotFoundSnafu};
use crate::instruction::CacheIdent;
@@ -47,7 +47,7 @@ impl CreateLogicalTablesProcedure {
// Generates new table info
let raw_table_info = physical_table_info.deref().table_info.clone();
let new_table_info = physical_table_metadata::build_new_physical_table_info(
let new_table_info = raw_table_info::build_new_physical_table_info(
raw_table_info,
&self.data.physical_columns,
);

View File

@@ -21,7 +21,7 @@ use common_error::ext::BoxedError;
use common_procedure::error::{
ExternalSnafu, FromJsonSnafu, Result as ProcedureResult, ToJsonSnafu,
};
use common_procedure::{Context as ProcedureContext, LockKey, Procedure, Status};
use common_procedure::{Context as ProcedureContext, LockKey, Procedure, ProcedureId, Status};
use common_telemetry::tracing_context::TracingContext;
use common_telemetry::{info, warn};
use futures::future::join_all;
@@ -35,7 +35,7 @@ use table::metadata::{RawTableInfo, TableId};
use table::table_reference::TableReference;
use crate::ddl::create_table_template::{build_template, CreateRequestBuilder};
use crate::ddl::physical_table_metadata::update_table_info_column_ids;
use crate::ddl::utils::raw_table_info::update_table_info_column_ids;
use crate::ddl::utils::{
add_peer_context_if_needed, convert_region_routes_to_detecting_regions,
extract_column_metadatas, map_to_procedure_error, region_storage_path,
@@ -246,8 +246,6 @@ impl CreateTableProcedure {
}
}
self.creator.data.state = CreateTableState::CreateMetadata;
let mut results = join_all(create_region_tasks)
.await
.into_iter()
@@ -261,6 +259,7 @@ impl CreateTableProcedure {
warn!("creating table result doesn't contains extension key `{TABLE_COLUMN_METADATA_EXTENSION_KEY}`,leaving the table's column metadata unchanged");
}
self.creator.data.state = CreateTableState::CreateMetadata;
Ok(Status::executing(true))
}
@@ -268,8 +267,9 @@ impl CreateTableProcedure {
///
/// Abort(not-retry):
/// - Failed to create table metadata.
async fn on_create_metadata(&mut self) -> Result<Status> {
async fn on_create_metadata(&mut self, pid: ProcedureId) -> Result<Status> {
let table_id = self.table_id();
let table_ref = self.creator.data.table_ref();
let manager = &self.context.table_metadata_manager;
let mut raw_table_info = self.table_info().clone();
@@ -289,7 +289,10 @@ impl CreateTableProcedure {
self.context
.register_failure_detectors(detecting_regions)
.await;
info!("Created table metadata for table {table_id}");
info!(
"Successfully created table: {}, table_id: {}, procedure_id: {}",
table_ref, table_id, pid
);
self.creator.opening_regions.clear();
Ok(Status::done_with_output(table_id))
@@ -317,7 +320,7 @@ impl Procedure for CreateTableProcedure {
Ok(())
}
async fn execute(&mut self, _ctx: &ProcedureContext) -> ProcedureResult<Status> {
async fn execute(&mut self, ctx: &ProcedureContext) -> ProcedureResult<Status> {
let state = &self.creator.data.state;
let _timer = metrics::METRIC_META_PROCEDURE_CREATE_TABLE
@@ -327,7 +330,7 @@ impl Procedure for CreateTableProcedure {
match state {
CreateTableState::Prepare => self.on_prepare().await,
CreateTableState::DatanodeCreateRegions => self.on_datanode_create_regions().await,
CreateTableState::CreateMetadata => self.on_create_metadata().await,
CreateTableState::CreateMetadata => self.on_create_metadata(ctx.procedure_id).await,
}
.map_err(map_to_procedure_error)
}

View File

@@ -185,11 +185,15 @@ impl DropTableExecutor {
.await
}
/// Invalidates frontend caches
/// Invalidates caches for the table.
pub async fn invalidate_table_cache(&self, ctx: &DdlContext) -> Result<()> {
let cache_invalidator = &ctx.cache_invalidator;
let ctx = Context {
subject: Some("Invalidate table cache by dropping table".to_string()),
subject: Some(format!(
"Invalidate table cache by dropping table {}, table_id: {}",
self.table.table_ref(),
self.table_id,
)),
};
cache_invalidator

View File

@@ -21,7 +21,7 @@ use snafu::ensure;
use store_api::storage::{RegionId, RegionNumber, TableId};
use crate::ddl::TableMetadata;
use crate::error::{self, Result, UnsupportedSnafu};
use crate::error::{Result, UnsupportedSnafu};
use crate::key::table_route::PhysicalTableRouteValue;
use crate::peer::Peer;
use crate::rpc::ddl::CreateTableTask;
@@ -113,24 +113,18 @@ impl TableMetadataAllocator {
table_id: TableId,
task: &CreateTableTask,
) -> Result<PhysicalTableRouteValue> {
let regions = task.partitions.len();
ensure!(
regions > 0,
error::UnexpectedSnafu {
err_msg: "The number of partitions must be greater than 0"
}
);
let regions = task.partitions.len().max(1);
let peers = self.peer_allocator.alloc(regions).await?;
debug!("Allocated peers {:?} for table {}", peers, table_id);
let region_routes = task
let mut region_routes = task
.partitions
.iter()
.enumerate()
.map(|(i, partition)| {
let region = Region {
id: RegionId::new(table_id, i as u32),
partition: Some(partition.clone().into()),
partition_expr: partition.expression.clone(),
..Default::default()
};
@@ -144,6 +138,18 @@ impl TableMetadataAllocator {
})
.collect::<Vec<_>>();
// If the table has no partitions, we need to create a default region.
if region_routes.is_empty() {
region_routes.push(RegionRoute {
region: Region {
id: RegionId::new(table_id, 0),
..Default::default()
},
leader_peer: Some(peers[0].clone()),
..Default::default()
});
}
Ok(PhysicalTableRouteValue::new(region_routes))
}

View File

@@ -17,6 +17,7 @@ pub mod columns;
pub mod create_table;
pub mod datanode_handler;
pub mod flownode_handler;
pub mod region_metadata;
use std::assert_matches::assert_matches;
use std::collections::HashMap;
@@ -145,10 +146,7 @@ pub fn test_create_logical_table_task(name: &str) -> CreateTableTask {
CreateTableTask {
create_table,
// Single region
partitions: vec![Partition {
column_list: vec![],
value_list: vec![],
}],
partitions: vec![Partition::default()],
table_info,
}
}
@@ -183,10 +181,7 @@ pub fn test_create_physical_table_task(name: &str) -> CreateTableTask {
CreateTableTask {
create_table,
// Single region
partitions: vec![Partition {
column_list: vec![],
value_list: vec![],
}],
partitions: vec![Partition::default()],
table_info,
}
}

View File

@@ -175,10 +175,7 @@ pub fn test_create_table_task(name: &str, table_id: TableId) -> CreateTableTask
CreateTableTask {
create_table,
// Single region
partitions: vec![Partition {
column_list: vec![],
value_list: vec![],
}],
partitions: vec![Partition::default()],
table_info,
}
}

View File

@@ -12,9 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::sync::Arc;
use api::region::RegionResponse;
use api::v1::region::region_request::Body;
use api::v1::region::RegionRequest;
use common_error::ext::{BoxedError, ErrorExt, StackError};
use common_error::status_code::StatusCode;
@@ -22,6 +24,8 @@ use common_query::request::QueryRequest;
use common_recordbatch::SendableRecordBatchStream;
use common_telemetry::debug;
use snafu::{ResultExt, Snafu};
use store_api::metadata::RegionMetadata;
use store_api::storage::RegionId;
use tokio::sync::mpsc;
use crate::error::{self, Error, Result};
@@ -278,3 +282,47 @@ impl MockDatanodeHandler for AllFailureDatanodeHandler {
unreachable!()
}
}
#[derive(Clone)]
pub struct ListMetadataDatanodeHandler {
pub region_metadatas: HashMap<RegionId, Option<RegionMetadata>>,
}
impl ListMetadataDatanodeHandler {
pub fn new(region_metadatas: HashMap<RegionId, Option<RegionMetadata>>) -> Self {
Self { region_metadatas }
}
}
#[async_trait::async_trait]
impl MockDatanodeHandler for ListMetadataDatanodeHandler {
async fn handle(&self, _peer: &Peer, request: RegionRequest) -> Result<RegionResponse> {
let Some(Body::ListMetadata(req)) = request.body else {
unreachable!()
};
let mut response = RegionResponse::new(0);
let mut output = Vec::with_capacity(req.region_ids.len());
for region_id in req.region_ids {
match self.region_metadatas.get(&RegionId::from_u64(region_id)) {
Some(metadata) => {
output.push(metadata.clone());
}
None => {
output.push(None);
}
}
}
response.metadata = serde_json::to_vec(&output).unwrap();
Ok(response)
}
async fn handle_query(
&self,
_peer: &Peer,
_request: QueryRequest,
) -> Result<SendableRecordBatchStream> {
unreachable!()
}
}
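A hedged usage fragment for tests, assuming the usual `MockDatanodeManager` wiring used elsewhere in these test utilities and a `RegionMetadata` fixture `metadata` (for example one built with the `build_region_metadata` helper added below):
// Sketch only: map each region to the metadata the mock datanode should return.
let mut region_metadatas = HashMap::new();
region_metadatas.insert(RegionId::new(table_id, 0), Some(metadata.clone()));
region_metadatas.insert(RegionId::new(table_id, 1), None); // simulates a missing region
let handler = ListMetadataDatanodeHandler::new(region_metadatas);
let node_manager = Arc::new(MockDatanodeManager::new(handler));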

View File

@@ -0,0 +1,34 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use api::v1::SemanticType;
use store_api::metadata::{ColumnMetadata, RegionMetadata, RegionMetadataBuilder};
use store_api::storage::RegionId;
/// Builds a region metadata with the given column metadatas.
pub fn build_region_metadata(
region_id: RegionId,
column_metadatas: &[ColumnMetadata],
) -> RegionMetadata {
let mut builder = RegionMetadataBuilder::new(region_id);
let mut primary_key = vec![];
for column_metadata in column_metadatas {
builder.push_column_metadata(column_metadata.clone());
if column_metadata.semantic_type == SemanticType::Tag {
primary_key.push(column_metadata.column_id);
}
}
builder.primary_key(primary_key);
builder.build().unwrap()
}
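A usage sketch for the helper above; the `ColumnSchema`/`ConcreteDataType` constructors and the field layout of `ColumnMetadata` are assumptions based on the surrounding codebase, not part of this diff:
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::ColumnSchema;

// One tag and one time index column; the tag becomes the primary key.
let column_metadatas = vec![
    ColumnMetadata {
        column_schema: ColumnSchema::new("host", ConcreteDataType::string_datatype(), true),
        semantic_type: SemanticType::Tag,
        column_id: 0,
    },
    ColumnMetadata {
        column_schema: ColumnSchema::new(
            "ts",
            ConcreteDataType::timestamp_millisecond_datatype(),
            false,
        ),
        semantic_type: SemanticType::Timestamp,
        column_id: 1,
    },
];
let metadata = build_region_metadata(RegionId::new(1024, 0), &column_metadatas);
assert_eq!(metadata.primary_key, vec![0]);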

View File

@@ -141,10 +141,7 @@ pub(crate) fn test_create_table_task(name: &str) -> CreateTableTask {
CreateTableTask {
create_table,
// Single region
partitions: vec![Partition {
column_list: vec![],
value_list: vec![],
}],
partitions: vec![Partition::default()],
table_info,
}
}
@@ -213,21 +210,6 @@ async fn test_on_prepare_without_create_if_table_exists() {
assert_eq!(procedure.table_id(), 1024);
}
#[tokio::test]
async fn test_on_prepare_with_no_partition_err() {
let node_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(node_manager);
let mut task = test_create_table_task("foo");
task.partitions = vec![];
task.create_table.create_if_not_exists = true;
let mut procedure = CreateTableProcedure::new(task, ddl_context);
let err = procedure.on_prepare().await.unwrap_err();
assert_matches!(err, Error::Unexpected { .. });
assert!(err
.to_string()
.contains("The number of partitions must be greater than 0"),);
}
#[tokio::test]
async fn test_on_datanode_create_regions_should_retry() {
common_telemetry::init_default_ut_logging();

@@ -12,6 +12,12 @@
// See the License for the specific language governing permissions and
// limitations under the License.
pub(crate) mod raw_table_info;
#[allow(dead_code)]
pub(crate) mod region_metadata_lister;
pub(crate) mod table_id;
pub(crate) mod table_info;
use std::collections::HashMap;
use std::fmt::Debug;
@@ -442,6 +448,7 @@ pub fn extract_column_metadatas(
.collect::<Vec<_>>();
if schemas.is_empty() {
warn!("extract_column_metadatas: no extension key `{key}` found in results");
return Ok(None);
}

@@ -54,7 +54,10 @@ pub(crate) fn build_new_physical_table_info(
}
}
SemanticType::Field => value_indices.push(idx),
SemanticType::Timestamp => *time_index = Some(idx),
SemanticType::Timestamp => {
value_indices.push(idx);
*time_index = Some(idx);
}
}
columns.push(col.column_schema.clone());
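
The arm change above makes the time-index column count as a value index in addition to setting `time_index`. A minimal std-only sketch of that classification (the `SemanticType` enum and column list below are simplified stand-ins, not the crate's types):

```rust
enum SemanticType {
    Tag,
    Field,
    Timestamp,
}

/// Returns (value_indices, time_index). The timestamp column is recorded as a
/// value index *and* as the time index, matching the new behavior.
fn classify(columns: &[SemanticType]) -> (Vec<usize>, Option<usize>) {
    let mut value_indices = Vec::new();
    let mut time_index = None;
    for (idx, semantic_type) in columns.iter().enumerate() {
        match semantic_type {
            // Tag handling (primary-key bookkeeping) is omitted in this sketch.
            SemanticType::Tag => {}
            SemanticType::Field => value_indices.push(idx),
            SemanticType::Timestamp => {
                value_indices.push(idx);
                time_index = Some(idx);
            }
        }
    }
    (value_indices, time_index)
}

fn main() {
    let cols = [SemanticType::Tag, SemanticType::Field, SemanticType::Timestamp];
    assert_eq!(classify(&cols), (vec![1, 2], Some(2)));
}
```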

@@ -0,0 +1,240 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use api::v1::region::region_request::Body as PbRegionRequest;
use api::v1::region::{ListMetadataRequest, RegionRequest, RegionRequestHeader};
use common_telemetry::tracing_context::TracingContext;
use futures::future::join_all;
use snafu::ResultExt;
use store_api::metadata::RegionMetadata;
use store_api::storage::{RegionId, TableId};
use crate::ddl::utils::add_peer_context_if_needed;
use crate::error::{DecodeJsonSnafu, Result};
use crate::node_manager::NodeManagerRef;
use crate::rpc::router::{find_leaders, region_distribution, RegionRoute};
/// Collects the region metadata from the datanodes.
pub struct RegionMetadataLister {
node_manager: NodeManagerRef,
}
impl RegionMetadataLister {
/// Creates a new [`RegionMetadataLister`] with the given [`NodeManagerRef`].
pub fn new(node_manager: NodeManagerRef) -> Self {
Self { node_manager }
}
/// Collects the region metadata from the datanodes.
pub async fn list(
&self,
table_id: TableId,
region_routes: &[RegionRoute],
) -> Result<Vec<Option<RegionMetadata>>> {
let region_distribution = region_distribution(region_routes);
let leaders = find_leaders(region_routes)
.into_iter()
.map(|p| (p.id, p))
.collect::<HashMap<_, _>>();
let total_num_region = region_distribution
.values()
.map(|r| r.leader_regions.len())
.sum::<usize>();
let mut list_metadata_tasks = Vec::with_capacity(leaders.len());
// Build requests.
for (datanode_id, region_role_set) in region_distribution {
if region_role_set.leader_regions.is_empty() {
continue;
}
// Safety: must exist.
let peer = leaders.get(&datanode_id).unwrap();
let requester = self.node_manager.datanode(peer).await;
let region_ids = region_role_set
.leader_regions
.iter()
.map(|r| RegionId::new(table_id, *r).as_u64())
.collect();
let request = Self::build_list_metadata_request(region_ids);
let peer = peer.clone();
list_metadata_tasks.push(async move {
requester
.handle(request)
.await
.map_err(add_peer_context_if_needed(peer))
});
}
let results = join_all(list_metadata_tasks)
.await
.into_iter()
.collect::<Result<Vec<_>>>()?
.into_iter()
.map(|r| r.metadata);
let mut output = Vec::with_capacity(total_num_region);
for result in results {
let region_metadatas: Vec<Option<RegionMetadata>> =
serde_json::from_slice(&result).context(DecodeJsonSnafu)?;
output.extend(region_metadatas);
}
Ok(output)
}
fn build_list_metadata_request(region_ids: Vec<u64>) -> RegionRequest {
RegionRequest {
header: Some(RegionRequestHeader {
tracing_context: TracingContext::from_current_span().to_w3c(),
..Default::default()
}),
body: Some(PbRegionRequest::ListMetadata(ListMetadataRequest {
region_ids,
})),
}
}
}
#[cfg(test)]
mod tests {
use std::collections::HashMap;
use std::sync::Arc;
use api::region::RegionResponse;
use api::v1::meta::Peer;
use api::v1::region::region_request::Body;
use api::v1::region::RegionRequest;
use store_api::metadata::RegionMetadata;
use store_api::storage::RegionId;
use tokio::sync::mpsc;
use crate::ddl::test_util::datanode_handler::{DatanodeWatcher, ListMetadataDatanodeHandler};
use crate::ddl::test_util::region_metadata::build_region_metadata;
use crate::ddl::test_util::test_column_metadatas;
use crate::ddl::utils::region_metadata_lister::RegionMetadataLister;
use crate::error::Result;
use crate::rpc::router::{Region, RegionRoute};
use crate::test_util::MockDatanodeManager;
fn assert_list_metadata_request(req: RegionRequest, expected_region_ids: &[RegionId]) {
let Some(Body::ListMetadata(req)) = req.body else {
unreachable!()
};
assert_eq!(req.region_ids.len(), expected_region_ids.len());
for region_id in expected_region_ids {
assert!(req.region_ids.contains(&region_id.as_u64()));
}
}
fn empty_list_metadata_handler(_peer: Peer, request: RegionRequest) -> Result<RegionResponse> {
let Some(Body::ListMetadata(req)) = request.body else {
unreachable!()
};
let mut output: Vec<Option<RegionMetadata>> = Vec::with_capacity(req.region_ids.len());
for _region_id in req.region_ids {
output.push(None);
}
Ok(RegionResponse::from_metadata(
serde_json::to_vec(&output).unwrap(),
))
}
#[tokio::test]
async fn test_list_request() {
let (tx, mut rx) = mpsc::channel(8);
let handler = DatanodeWatcher::new(tx).with_handler(empty_list_metadata_handler);
let node_manager = Arc::new(MockDatanodeManager::new(handler));
let lister = RegionMetadataLister::new(node_manager);
let region_routes = vec![
RegionRoute {
region: Region::new_test(RegionId::new(1024, 1)),
leader_peer: Some(Peer::empty(1)),
follower_peers: vec![Peer::empty(5)],
leader_state: None,
leader_down_since: None,
},
RegionRoute {
region: Region::new_test(RegionId::new(1024, 2)),
leader_peer: Some(Peer::empty(3)),
follower_peers: vec![Peer::empty(4)],
leader_state: None,
leader_down_since: None,
},
RegionRoute {
region: Region::new_test(RegionId::new(1024, 3)),
leader_peer: Some(Peer::empty(3)),
follower_peers: vec![Peer::empty(4)],
leader_state: None,
leader_down_since: None,
},
];
let region_metadatas = lister.list(1024, &region_routes).await.unwrap();
assert_eq!(region_metadatas.len(), 3);
let mut requests = vec![];
for _ in 0..2 {
let (peer, request) = rx.try_recv().unwrap();
requests.push((peer, request));
}
rx.try_recv().unwrap_err();
let (peer, request) = requests.remove(0);
assert_eq!(peer.id, 1);
assert_list_metadata_request(request, &[RegionId::new(1024, 1)]);
let (peer, request) = requests.remove(0);
assert_eq!(peer.id, 3);
assert_list_metadata_request(request, &[RegionId::new(1024, 2), RegionId::new(1024, 3)]);
}
#[tokio::test]
async fn test_list_region_metadata() {
let region_metadata =
build_region_metadata(RegionId::new(1024, 1), &test_column_metadatas(&["tag_0"]));
let region_metadatas = HashMap::from([
(RegionId::new(1024, 0), None),
(RegionId::new(1024, 1), Some(region_metadata.clone())),
]);
let handler = ListMetadataDatanodeHandler::new(region_metadatas);
let node_manager = Arc::new(MockDatanodeManager::new(handler));
let lister = RegionMetadataLister::new(node_manager);
let region_routes = vec![
RegionRoute {
region: Region::new_test(RegionId::new(1024, 0)),
leader_peer: Some(Peer::empty(1)),
follower_peers: vec![],
leader_state: None,
leader_down_since: None,
},
RegionRoute {
region: Region::new_test(RegionId::new(1024, 1)),
leader_peer: Some(Peer::empty(3)),
follower_peers: vec![],
leader_state: None,
leader_down_since: None,
},
];
let region_metadatas = lister.list(1024, &region_routes).await.unwrap();
assert_eq!(region_metadatas.len(), 2);
assert_eq!(region_metadatas[0], None);
assert_eq!(region_metadatas[1], Some(region_metadata));
}
}
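
Both the lister and the mock handlers above rely on the same payload convention: the response's `metadata` bytes hold a JSON-encoded `Vec<Option<RegionMetadata>>`, one entry per requested region id. A minimal sketch of that round trip with a stand-in payload type so it stays self-contained (assumes only `serde` with the derive feature and `serde_json`):

```rust
use serde::{Deserialize, Serialize};

/// Stand-in for `RegionMetadata`; the real type is serialized the same way.
#[derive(Debug, Clone, PartialEq, Serialize, Deserialize)]
struct FakeMetadata {
    region_id: u64,
    schema_version: u64,
}

fn main() -> serde_json::Result<()> {
    // Datanode side: one entry per requested region id, `None` for unknown regions.
    let response: Vec<Option<FakeMetadata>> = vec![
        None,
        Some(FakeMetadata { region_id: 42, schema_version: 1 }),
    ];
    let metadata_bytes = serde_json::to_vec(&response)?;

    // Meta-server side (as in `RegionMetadataLister::list`): decode and extend the
    // output, preserving the per-region `Option`.
    let decoded: Vec<Option<FakeMetadata>> = serde_json::from_slice(&metadata_bytes)?;
    assert_eq!(decoded, response);
    Ok(())
}
```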

@@ -0,0 +1,46 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use snafu::OptionExt;
use store_api::storage::TableId;
use table::table_reference::TableReference;
use crate::error::{Result, TableNotFoundSnafu};
use crate::key::table_name::{TableNameKey, TableNameManager};
/// Get all the table ids from the table names.
///
/// Returns an error if any table does not exist.
pub(crate) async fn get_all_table_ids_by_names<'a>(
table_name_manager: &TableNameManager,
table_names: &[TableReference<'a>],
) -> Result<Vec<TableId>> {
let table_name_keys = table_names
.iter()
.map(TableNameKey::from)
.collect::<Vec<_>>();
let table_name_values = table_name_manager.batch_get(table_name_keys).await?;
let mut table_ids = Vec::with_capacity(table_name_values.len());
for (value, table_name) in table_name_values.into_iter().zip(table_names) {
let value = value
.with_context(|| TableNotFoundSnafu {
table_name: table_name.to_string(),
})?
.table_id();
table_ids.push(value);
}
Ok(table_ids)
}
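
The helper batch-resolves the names, then zips each result with its table name and fails on the first missing entry. A standalone sketch of the same pattern with plain std types standing in for `TableNameManager` (names below are illustrative):

```rust
use std::collections::HashMap;

type TableId = u32;

/// Resolve every name or report the first one that is missing,
/// mirroring the zip-and-context pattern used by the helper above.
fn get_all_table_ids_by_names(
    name_to_id: &HashMap<String, TableId>,
    table_names: &[&str],
) -> Result<Vec<TableId>, String> {
    // Stand-in for `TableNameManager::batch_get`.
    let values: Vec<Option<TableId>> = table_names
        .iter()
        .map(|name| name_to_id.get(*name).copied())
        .collect();

    let mut table_ids = Vec::with_capacity(values.len());
    for (value, table_name) in values.into_iter().zip(table_names) {
        let id = value.ok_or_else(|| format!("Table not found: {table_name}"))?;
        table_ids.push(id);
    }
    Ok(table_ids)
}

fn main() {
    let catalog = HashMap::from([("t1".to_string(), 1), ("t2".to_string(), 2)]);
    assert_eq!(get_all_table_ids_by_names(&catalog, &["t1", "t2"]), Ok(vec![1, 2]));
    assert!(get_all_table_ids_by_names(&catalog, &["t1", "missing"]).is_err());
}
```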

@@ -0,0 +1,44 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use snafu::OptionExt;
use store_api::storage::TableId;
use table::table_reference::TableReference;
use crate::error::{Result, TableInfoNotFoundSnafu};
use crate::key::table_info::{TableInfoManager, TableInfoValue};
use crate::key::DeserializedValueWithBytes;
/// Get all table info values by table ids.
///
/// Returns an error if any table does not exist.
pub(crate) async fn get_all_table_info_values_by_table_ids<'a>(
table_info_manager: &TableInfoManager,
table_ids: &[TableId],
table_names: &[TableReference<'a>],
) -> Result<Vec<DeserializedValueWithBytes<TableInfoValue>>> {
let mut table_info_map = table_info_manager.batch_get_raw(table_ids).await?;
let mut table_info_values = Vec::with_capacity(table_ids.len());
for (table_id, table_name) in table_ids.iter().zip(table_names) {
let table_info_value =
table_info_map
.remove(table_id)
.with_context(|| TableInfoNotFoundSnafu {
table: table_name.to_string(),
})?;
table_info_values.push(table_info_value);
}
Ok(table_info_values)
}

@@ -877,6 +877,36 @@ pub enum Error {
#[snafu(source)]
error: object_store::Error,
},
#[snafu(display(
"Missing column in column metadata: {}, table: {}, table_id: {}",
column_name,
table_name,
table_id,
))]
MissingColumnInColumnMetadata {
column_name: String,
#[snafu(implicit)]
location: Location,
table_name: String,
table_id: TableId,
},
#[snafu(display(
"Mismatch column id: column_name: {}, column_id: {}, table: {}, table_id: {}",
column_name,
column_id,
table_name,
table_id,
))]
MismatchColumnId {
column_name: String,
column_id: u32,
#[snafu(implicit)]
location: Location,
table_name: String,
table_id: TableId,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -896,7 +926,10 @@ impl ErrorExt for Error {
| DeserializeFromJson { .. } => StatusCode::Internal,
NoLeader { .. } => StatusCode::TableUnavailable,
ValueNotExist { .. } | ProcedurePoisonConflict { .. } => StatusCode::Unexpected,
ValueNotExist { .. }
| ProcedurePoisonConflict { .. }
| MissingColumnInColumnMetadata { .. }
| MismatchColumnId { .. } => StatusCode::Unexpected,
Unsupported { .. } => StatusCode::Unsupported,
WriteObject { .. } | ReadObject { .. } => StatusCode::StorageUnavailable,
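
Both new variants carry an implicit `Location`, map to `StatusCode::Unexpected`, and get a generated `...Snafu` context selector like the rest of the enum. A standalone sketch of how such a variant is typically raised, using a hypothetical `DemoError` instead of the real `Error` enum (assumes `snafu` 0.7+ with the same derive conventions):

```rust
use snafu::{ensure, Location, Snafu};

#[derive(Debug, Snafu)]
enum DemoError {
    #[snafu(display(
        "Mismatch column id: column_name: {column_name}, column_id: {column_id}, table: {table_name}"
    ))]
    MismatchColumnId {
        column_name: String,
        column_id: u32,
        table_name: String,
        // Captured automatically by the generated context selector.
        #[snafu(implicit)]
        location: Location,
    },
}

fn check_column_id(expected: u32, actual: u32) -> Result<(), DemoError> {
    // The selector omits `location`; string fields accept any `Into<String>`.
    ensure!(
        expected == actual,
        MismatchColumnIdSnafu {
            column_name: "host",
            column_id: actual,
            table_name: "monitor",
        }
    );
    Ok(())
}

fn main() {
    let err = check_column_id(1, 2).unwrap_err();
    println!("{err}");
}
```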

@@ -174,6 +174,8 @@ pub struct UpgradeRegion {
/// The identifier of cache.
pub enum CacheIdent {
FlowId(FlowId),
/// Indicates a change of a flownode's address.
FlowNodeAddressChange(u64),
FlowName(FlowName),
TableId(TableId),
TableName(TableName),

@@ -1509,6 +1509,7 @@ mod tests {
name: "r1".to_string(),
partition: None,
attrs: BTreeMap::new(),
partition_expr: Default::default(),
},
leader_peer: Some(Peer::new(datanode, "a2")),
follower_peers: vec![],
@@ -2001,6 +2002,7 @@ mod tests {
name: "r1".to_string(),
partition: None,
attrs: BTreeMap::new(),
partition_expr: Default::default(),
},
leader_peer: Some(Peer::new(datanode, "a2")),
leader_state: Some(LeaderState::Downgrading),
@@ -2013,6 +2015,7 @@ mod tests {
name: "r2".to_string(),
partition: None,
attrs: BTreeMap::new(),
partition_expr: Default::default(),
},
leader_peer: Some(Peer::new(datanode, "a1")),
leader_state: None,

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::collections::{BTreeMap, HashMap};
use std::fmt::Display;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
@@ -58,12 +58,17 @@ impl Default for SchemaNameKey<'_> {
pub struct SchemaNameValue {
#[serde(default)]
pub ttl: Option<DatabaseTimeToLive>,
#[serde(default)]
pub extra_options: BTreeMap<String, String>,
}
impl Display for SchemaNameValue {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
if let Some(ttl) = self.ttl.map(|i| i.to_string()) {
write!(f, "ttl='{}'", ttl)?;
writeln!(f, "'ttl'='{}'", ttl)?;
}
for (k, v) in self.extra_options.iter() {
writeln!(f, "'{k}'='{v}'")?;
}
Ok(())
@@ -87,7 +92,18 @@ impl TryFrom<&HashMap<String, String>> for SchemaNameValue {
})
.transpose()?
.map(|ttl| ttl.into());
Ok(Self { ttl })
let extra_options = value
.iter()
.filter_map(|(k, v)| {
if k == OPT_KEY_TTL {
None
} else {
Some((k.clone(), v.clone()))
}
})
.collect();
Ok(Self { ttl, extra_options })
}
}
@@ -97,6 +113,12 @@ impl From<SchemaNameValue> for HashMap<String, String> {
if let Some(ttl) = value.ttl.map(|ttl| ttl.to_string()) {
opts.insert(OPT_KEY_TTL.to_string(), ttl);
}
opts.extend(
value
.extra_options
.iter()
.map(|(k, v)| (k.clone(), v.clone())),
);
opts
}
}
@@ -316,18 +338,23 @@ mod tests {
#[test]
fn test_display_schema_value() {
let schema_value = SchemaNameValue { ttl: None };
let schema_value = SchemaNameValue {
ttl: None,
..Default::default()
};
assert_eq!("", schema_value.to_string());
let schema_value = SchemaNameValue {
ttl: Some(Duration::from_secs(9).into()),
..Default::default()
};
assert_eq!("ttl='9s'", schema_value.to_string());
assert_eq!("'ttl'='9s'\n", schema_value.to_string());
let schema_value = SchemaNameValue {
ttl: Some(Duration::from_secs(0).into()),
..Default::default()
};
assert_eq!("ttl='forever'", schema_value.to_string());
assert_eq!("'ttl'='forever'\n", schema_value.to_string());
}
#[test]
@@ -341,6 +368,7 @@ mod tests {
let value = SchemaNameValue {
ttl: Some(Duration::from_secs(10).into()),
..Default::default()
};
let mut opts: HashMap<String, String> = HashMap::new();
opts.insert("ttl".to_string(), "10s".to_string());
@@ -355,6 +383,7 @@ mod tests {
let forever = SchemaNameValue {
ttl: Some(Default::default()),
..Default::default()
};
let parsed = SchemaNameValue::try_from_raw_value(
serde_json::json!({"ttl": "forever"}).to_string().as_bytes(),
@@ -374,6 +403,81 @@ mod tests {
assert!(err_empty.is_err());
}
#[test]
fn test_extra_options_compatibility() {
// Test with extra_options only
let mut opts: HashMap<String, String> = HashMap::new();
opts.insert("foo".to_string(), "bar".to_string());
opts.insert("baz".to_string(), "qux".to_string());
let value = SchemaNameValue::try_from(&opts).unwrap();
assert_eq!(value.ttl, None);
assert_eq!(value.extra_options.get("foo"), Some(&"bar".to_string()));
assert_eq!(value.extra_options.get("baz"), Some(&"qux".to_string()));
// Test round-trip conversion
let opts_back: HashMap<String, String> = value.clone().into();
assert_eq!(opts_back.get("foo"), Some(&"bar".to_string()));
assert_eq!(opts_back.get("baz"), Some(&"qux".to_string()));
assert!(!opts_back.contains_key("ttl"));
// Test with both ttl and extra_options
let mut opts: HashMap<String, String> = HashMap::new();
opts.insert("ttl".to_string(), "5m".to_string());
opts.insert("opt1".to_string(), "val1".to_string());
let value = SchemaNameValue::try_from(&opts).unwrap();
assert_eq!(value.ttl, Some(Duration::from_secs(300).into()));
assert_eq!(value.extra_options.get("opt1"), Some(&"val1".to_string()));
// Test serialization/deserialization compatibility
let json = serde_json::to_string(&value).unwrap();
let deserialized: SchemaNameValue = serde_json::from_str(&json).unwrap();
assert_eq!(value, deserialized);
// Test display includes extra_options
let mut value = SchemaNameValue::default();
value
.extra_options
.insert("foo".to_string(), "bar".to_string());
let display = value.to_string();
assert!(display.contains("'foo'='bar'"));
}
#[test]
fn test_backward_compatibility_with_old_format() {
// Simulate old format: only ttl, no extra_options
let json = r#"{"ttl":"10s"}"#;
let parsed = SchemaNameValue::try_from_raw_value(json.as_bytes()).unwrap();
assert_eq!(
parsed,
Some(SchemaNameValue {
ttl: Some(Duration::from_secs(10).into()),
extra_options: BTreeMap::new(),
})
);
// Simulate old format: null value
let json = r#"null"#;
let parsed = SchemaNameValue::try_from_raw_value(json.as_bytes()).unwrap();
assert!(parsed.is_none());
}
#[test]
fn test_forward_compatibility_with_new_options() {
// Simulate new format: ttl + extra_options
let json = r#"{"ttl":"15s","extra_options":{"foo":"bar","baz":"qux"}}"#;
let parsed = SchemaNameValue::try_from_raw_value(json.as_bytes()).unwrap();
let mut expected_options = BTreeMap::new();
expected_options.insert("foo".to_string(), "bar".to_string());
expected_options.insert("baz".to_string(), "qux".to_string());
assert_eq!(
parsed,
Some(SchemaNameValue {
ttl: Some(Duration::from_secs(15).into()),
extra_options: expected_options,
})
);
}
#[tokio::test]
async fn test_key_exist() {
let manager = SchemaManager::new(Arc::new(MemoryKvBackend::default()));
@@ -396,6 +500,7 @@ mod tests {
let current_schema_value = manager.get(schema_key).await.unwrap().unwrap();
let new_schema_value = SchemaNameValue {
ttl: Some(Duration::from_secs(10).into()),
..Default::default()
};
manager
.update(schema_key, &current_schema_value, &new_schema_value)
@@ -410,9 +515,11 @@ mod tests {
let new_schema_value = SchemaNameValue {
ttl: Some(Duration::from_secs(40).into()),
..Default::default()
};
let incorrect_schema_value = SchemaNameValue {
ttl: Some(Duration::from_secs(20).into()),
..Default::default()
}
.try_as_raw_value()
.unwrap();
@@ -425,7 +532,10 @@ mod tests {
.unwrap_err();
let current_schema_value = manager.get(schema_key).await.unwrap().unwrap();
let new_schema_value = SchemaNameValue { ttl: None };
let new_schema_value = SchemaNameValue {
ttl: None,
..Default::default()
};
manager
.update(schema_key, &current_schema_value, &new_schema_value)
.await
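
The schema-options change keeps `ttl` as the typed field and routes every other database option into `extra_options`, which `Display` now renders as one `'key'='value'` line each. A std-only sketch of that split and rendering (option names are illustrative, and `ttl` stays a raw string here instead of `DatabaseTimeToLive`):

```rust
use std::collections::{BTreeMap, HashMap};
use std::fmt::Write as _;

const OPT_KEY_TTL: &str = "ttl";

/// Split raw db options into the typed ttl plus everything else,
/// mirroring `SchemaNameValue::try_from(&HashMap<_, _>)`.
fn split_options(options: &HashMap<String, String>) -> (Option<String>, BTreeMap<String, String>) {
    let ttl = options.get(OPT_KEY_TTL).cloned();
    let extra_options = options
        .iter()
        .filter(|(k, _)| k.as_str() != OPT_KEY_TTL)
        .map(|(k, v)| (k.clone(), v.clone()))
        .collect();
    (ttl, extra_options)
}

/// Render options the way the new `Display` impl does: one `'key'='value'` per line.
fn render(ttl: &Option<String>, extra_options: &BTreeMap<String, String>) -> String {
    let mut out = String::new();
    if let Some(ttl) = ttl {
        let _ = writeln!(out, "'ttl'='{ttl}'");
    }
    for (k, v) in extra_options {
        let _ = writeln!(out, "'{k}'='{v}'");
    }
    out
}

fn main() {
    let raw = HashMap::from([
        ("ttl".to_string(), "7d".to_string()),
        ("compaction.type".to_string(), "twcs".to_string()),
    ]);
    let (ttl, extras) = split_options(&raw);
    assert_eq!(render(&ttl, &extras), "'ttl'='7d'\n'compaction.type'='twcs'\n");
}
```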

@@ -103,6 +103,26 @@ pub fn table_decoder(kv: KeyValue) -> Result<(String, TableNameValue)> {
Ok((table_name_key.table.to_string(), table_name_value))
}
impl<'a> From<&TableReference<'a>> for TableNameKey<'a> {
fn from(value: &TableReference<'a>) -> Self {
Self {
catalog: value.catalog,
schema: value.schema,
table: value.table,
}
}
}
impl<'a> From<TableReference<'a>> for TableNameKey<'a> {
fn from(value: TableReference<'a>) -> Self {
Self {
catalog: value.catalog,
schema: value.schema,
table: value.table,
}
}
}
impl<'a> From<&'a TableName> for TableNameKey<'a> {
fn from(value: &'a TableName) -> Self {
Self {

@@ -711,6 +711,7 @@ mod tests {
name: "r1".to_string(),
partition: None,
attrs: Default::default(),
partition_expr: Default::default(),
},
leader_peer: Some(Peer {
id: 2,
@@ -726,6 +727,7 @@ mod tests {
name: "r1".to_string(),
partition: None,
attrs: Default::default(),
partition_expr: Default::default(),
},
leader_peer: Some(Peer {
id: 2,

@@ -32,6 +32,7 @@ pub mod key;
pub mod kv_backend;
pub mod leadership_notifier;
pub mod lock_key;
pub mod maintenance;
pub mod metrics;
pub mod node_expiry_listener;
pub mod node_manager;

@@ -0,0 +1,15 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub(crate) mod reconcile_table;

@@ -0,0 +1,17 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
// TODO(weny): Remove it
#[allow(dead_code)]
pub(crate) mod utils;

Some files were not shown because too many files have changed in this diff.