Compare commits

...

805 Commits

Author SHA1 Message Date
discord9
1d3cfdc0e5 clippy
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 21:07:24 +08:00
discord9
088401c3e9 c
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 21:05:23 +08:00
discord9
4419e0254f refactor: add list test
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 21:05:13 +08:00
discord9
709ccd3e31 c
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 18:39:14 +08:00
discord9
5b50b4824d wt
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 18:35:19 +08:00
discord9
1ef5c2e024 chore
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 18:22:03 +08:00
discord9
d20727f335 test: better fuzz
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 18:15:02 +08:00
discord9
2391ab1941 even more test
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 16:21:27 +08:00
discord9
ec77a5d53a sanity check
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 13:54:19 +08:00
discord9
dbad96eb80 more test
Signed-off-by: discord9 <discord9@163.com>
2025-12-22 13:44:00 +08:00
discord9
c0652f6dd5 chore: release push check against Cargo.toml (#7426)
Signed-off-by: discord9 <discord9@163.com>
2025-12-19 13:16:15 +00:00
Yingwen
fed6cb0806 fix: flat format use correct encoding in indexer for tags (#7440)
* test: add inverted and skipping test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: Add tests for fulltext index

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: index dictionary type in correct encoding in flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use encode_data_type() in SortField

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: refine imports

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove logs

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update list test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: simplify tests

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-19 07:36:44 +00:00
discord9
69659211f6 chore: fix bincode version (#7445)
Signed-off-by: discord9 <discord9@163.com>
2025-12-19 07:36:28 +00:00
LFC
6332d91884 test: reduce execution time of test test_suspend_frontend (#7444)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-12-19 07:25:36 +00:00
Weny Xu
4d66bd96b8 feat: make distributed time constants and client timeouts configurable (#7433)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-19 02:23:20 +00:00
Ning Sun
2f4a15ec40 ci: ensure commits from main branch for whitelisted git dependencies (#7434)
* chore: update proto to include native histogram

* ci: add a CI check to ensure whitelisted dependencies are using their main branch

* chore: add changes to Cargo.toml to trigger CI

* chore: update proto

* test: update test to include histogram
2025-12-18 14:10:33 +00:00
Lanqing Yang
658332fe68 chore(mito): nit remove extra hashset in gc workers (#7399)
chore(mito): remove extra hashset in gc workers

Signed-off-by: lyang24 <lanqingy93@gmail.com>
2025-12-18 13:09:32 +00:00
shuiyisong
c088d361a4 chore: expose disable_ec2_metadata option (#7439)
chore: add option to disable ec2 metadata

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-12-18 11:55:08 +00:00
shuiyisong
a85864067e chore: remove canonicalize (#7430)
* chore: remove canonicalize

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add match file name option

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update field name

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: modify tls option

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update config file

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update config md

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update option to `enable_filename_match`

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: address CR issues

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove option

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove unused test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-12-18 09:39:10 +00:00
LFC
0df69c95aa chore: use official etcd-client (#7432)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-12-18 06:25:48 +00:00
McKnight22
72eede8b38 refactor(cli): unify storage configuration for export command (#7280)
* refactor(cli): unify storage configuration for export command

- Utilize ObjectStoreConfig to unify storage configuration for export command
- Support export command for Fs, S3, OSS, GCS and Azblob
- Fix the Display implementation for SecretString always returned the string
  "SecretString([REDACTED])" even when the internal secret was empty.

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* refactor(cli): unify storage configuration for export command

- Change the encapsulation permissions of each configuration
  options for every storage backend to public access.

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>
Co-authored-by: WenyXu <wenymedia@gmail.com>

* refactor(cli): unify storage configuration for export command

- Update the implementation of ObjectStoreConfig::build_xxx() using macro solutions

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>
Co-authored-by: WenyXu <wenymedia@gmail.com>

* refactor(cli): unify storage configuration for export command

- Introduce config validation for each storage type

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* refactor(cli): unify storage configuration for export command

- Enable trait-based polymorphism for storage type handling
  (from inherent impl to trait impl)
- Extract helper functions to reduce code duplication

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* refactor(cli): unify storage configuration for export command

- Improve SecretString handling and validation
  (Distinguishing between "not provided" and "empty string")
- Add validation when using filesystem storage

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* refactor(cli): unify storage configuration for export command

- Refactor storage field validation with macro

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* refactor(cli): unify storage configuration for export command

- Support GCS Application Default Credentials (like GKE, Cloud Run, or local development) in export
  (Enabling ADC without requiring static credentials to be present)
  (Making the endpoint optional in GCS validation (defaults to https://storage.googleapis.com))

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* refactor(cli): unify storage configuration for export command

This commit refactors the validation logic for object store configurations in the CLI to leverage clap features and reduce boilerplate.

Key changes:
- Update wrap_with_clap_prefix macro to use clap's requires attribute.
  This ensures that storage-specific options (e.g., --s3-bucket) are only accepted when the corresponding backend is enabled (e.g., --s3).
- Simplify FieldValidator trait by removing the is_provided method, as dependency checks are now handled by clap.
- Introduce validate_backend! macro to standardize the validation of required fields for enabled backends.
- Refactor ExportCommand to remove explicit validation calls (validate_s3, etc.) and rely on the validation within backend constructors.
- Add integration tests for ExportCommand to verify build success with S3, OSS, GCS, and Azblob configurations.

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* refactor(cli): unify storage configuration for export command

- Use macros to simplify storage export implementation

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>
Co-authored-by: WenyXu <wenymedia@gmail.com>

* refactor(cli): unify storage configuration for export command

- Rollback StorageExport trait implementation to not using macro for better code clarity and maintainability
- Introduce format_uri helper function to unify URI formatting logic
- Fix OSS URI path bug inherited from legacy code

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>
Co-authored-by: WenyXu <wenymedia@gmail.com>

* refactor(cli): unify storage configuration for export command

- Remove unnecessary async_trait

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>
Co-authored-by: jeremyhi <jiachun_feng@proton.me>

---------

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>
Co-authored-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: jeremyhi <jiachun_feng@proton.me>
2025-12-18 03:16:53 +00:00
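Aside: the `requires`-based gating described in this PR can be sketched with clap's derive API. The flag names below follow the commit message's examples (`--s3`, `--s3-bucket`) but are illustrative, not the actual CLI surface:

```rust
use clap::Parser;

#[derive(Parser, Debug)]
struct ExportArgs {
    /// Enable the S3 backend.
    #[arg(long)]
    s3: bool,

    /// Accepted only together with `--s3`.
    #[arg(long, requires = "s3")]
    s3_bucket: Option<String>,

    /// Accepted only together with `--s3`.
    #[arg(long, requires = "s3")]
    s3_root: Option<String>,
}

fn main() {
    // `--s3-bucket foo` without `--s3` is rejected by clap itself, before any
    // backend constructor runs, which is what lets the explicit
    // validate_s3-style calls be removed.
    let args = ExportArgs::parse();
    println!("{args:?}");
}
```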
jeremyhi
95eccd6cde feat: introduce granularity for memory manager (#7416)
* feat: introduce granularity for memory manager

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: add unit test

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: remove granularity getter for manager

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* Update src/common/memory-manager/src/manager.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* feat: acquire_with_policy for manager

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

---------

Signed-off-by: jeremyhi <fengjiachun@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2025-12-17 11:08:51 +00:00
fys
0bc5a305be chore: add wait_initialized method for frontend client (#7414)
* chore: add wait_initialized method for frontend client

* fix: some

* fix: cargo fmt

* add comment

* add unit test

* rename

* fix: cargo check

* fix: cr by copilot
2025-12-17 08:13:36 +00:00
discord9
1afcddd5a9 chore: feature gate vector_index (#7428)
Signed-off-by: discord9 <discord9@163.com>
2025-12-17 07:14:25 +00:00
shuiyisong
62808b887b fix: use anonymous s3 access when ak and sk are not provided (#7425)
* chore: allow s3 anon

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: disable ec2 metadata

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-12-17 06:34:29 +00:00
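A minimal sketch of the fallback described above, assuming a recent opendal S3 builder (which exposes `allow_anonymous` and `disable_ec2_metadata`); bucket and region values are placeholders:

```rust
use opendal::{services::S3, Operator};

fn build_s3_operator(ak: Option<&str>, sk: Option<&str>) -> opendal::Result<Operator> {
    let mut builder = S3::default().bucket("my-bucket").region("us-east-1");
    match (ak, sk) {
        (Some(ak), Some(sk)) => {
            builder = builder.access_key_id(ak).secret_access_key(sk);
        }
        _ => {
            // Without static credentials, fall back to anonymous access and
            // skip the EC2 metadata endpoint so requests fail fast off-cloud.
            builder = builder.allow_anonymous().disable_ec2_metadata();
        }
    }
    Ok(Operator::new(builder)?.finish())
}
```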
discord9
04ddd40e00 chore: bump version to beta.3 (#7423)
chore: bump to beta.3

Signed-off-by: discord9 <discord9@163.com>
2025-12-17 04:18:23 +00:00
liyang
b4f028be5f chore: change etcd endpoints to array in the test scripts (#7419)
chore: change etcd endpoint

Signed-off-by: liyang <daviderli614@gmail.com>
2025-12-17 03:14:35 +00:00
Lei, HUANG
da964880f5 chore: expose symbols (#7417)
* refactor/expose-symbols:
 ## Refactor `bulk/part.rs` to Simplify Mutation Handling

 - Removed the `mutations_to_record_batch` function and its associated helper functions, including `ArraysSorter`, `timestamp_array_to_iter`, and `binary_array_to_dictionary`, to simplify the mutation handling logic in `bulk/part.rs`.
 - Deleted related test functions `check_binary_array_to_dictionary` and `check_mutations_to_record_batches` from the test module, along with their associated test cases.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-symbols:
 ### Commit Message

 **Refactor and Enhance Deduplication Logic**

 - **`flush.rs`**: Refactored `maybe_dedup_one` function to accept `append_mode` and `merge_mode` as parameters instead of `RegionOptions`. This change enhances flexibility in deduplication logic.
 - **`memtable/bulk.rs`**: Made `BulkRangeIterBuilder` struct and its fields public to allow external access and modification, improving extensibility.
 - **`sst.rs`**: Corrected a typo in the schema documentation, changing `__prmary_key` to `__primary_key` for clarity and accuracy.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-12-17 01:29:36 +00:00
dennis zhuang
a35a39f726 feat(vector_index): adds the foundational types and SQL parsing support for vector index (#7366)
* feat: adds the foundational types and SQL parsing support for vector index

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: by suggestions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: ensure index option values must be greater than zero

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: validate connectivity strictly

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: compile error

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: disable SIMD for ci

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-12-16 22:45:36 +00:00
Lei, HUANG
e0c1566e92 fix(servers): flight stuck on waiting for first message (#7413)
* fix/flight-stuck-on-first-message:
 **Refactor GRPC Stream Handling and Table Resolution**

 - **`grpc.rs`**: Refactored the `GrpcQueryHandler` to resolve table references and check permissions only once per stream, improving efficiency. Introduced a mechanism to handle table resolution and permission checks after receiving the first `RecordBatch`.
 - **`flight.rs`**: Enhanced `PutRecordBatchRequestStream` to manage stream states (`Init` and `Ready`) for better handling of schema and table name extraction. Improved error handling and logging for unexpected flight messages.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore: add some doc

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-12-16 08:54:13 +00:00
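A hypothetical sketch of the `Init`/`Ready` handshake described above; the real `PutRecordBatchRequestStream` tracks more state than this:

```rust
use bytes::Bytes;

enum StreamState {
    /// Nothing received yet: the first flight message must carry the schema
    /// and table name, and triggers the one-time table resolution and
    /// permission check after the first RecordBatch.
    Init,
    /// Schema and table resolved; later messages must be record batches.
    Ready { table_name: String, schema: Bytes },
}

impl StreamState {
    fn on_first_message(self, table_name: String, schema: Bytes) -> StreamState {
        match self {
            // Table resolution and permission checks run once, here,
            // instead of per record batch.
            StreamState::Init => StreamState::Ready { table_name, schema },
            ready @ StreamState::Ready { .. } => ready,
        }
    }
}
```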
Yingwen
f6afb10e33 feat!: download file to fill the cache on write cache miss (#7294)
* feat: download inverted index file

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: download for bloom and fulltext

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement maybe_download_background for FileCache

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: load file for parquet

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: reduce channel size

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use ManifestCache

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: pass cache to ManifestObjectStore::new

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix fmt and clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove manifest cache ttl

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove read cache

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: clean old read cache path

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update config

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update config examples

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update test

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix CI

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: also clean the root directory

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update manifest test

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: skip file if it exists

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: remove warn in replace

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add a flag to enable/disable background download

set the concurrency to 1 for background download

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename write_cache_enable_background_download to enable_refill_cache_on_read

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update config test

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comments

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update config.md

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-16 08:31:26 +00:00
dennis zhuang
2dfcf35fee feat: support function aliases and add MySQL-compatible aliases (#7410)
* feat: support function aliases and add MySQL-compatible aliases

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: get_table_function_source

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: add function_alias mod

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: license

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-12-16 06:56:23 +00:00
Weny Xu
f7d5c87ac0 feat: introduce copy_region_from for mito engine (#7389)
* feat: introduce `copy_region_from`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-16 06:12:06 +00:00
Weny Xu
9cd57e9342 fix: use verified recycling method for PostgreSQL connection pool (#7407)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-16 02:49:01 +00:00
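A minimal sketch of the verified recycling setting, assuming the deadpool-postgres pool that the fix title points at; connection parameters are placeholders:

```rust
use deadpool_postgres::{Config, ManagerConfig, Pool, RecyclingMethod, Runtime};
use tokio_postgres::NoTls;

fn build_pool() -> Pool {
    let mut cfg = Config::new();
    cfg.host = Some("127.0.0.1".to_string());
    cfg.dbname = Some("greptime_metadata".to_string());
    // `Verified` runs a cheap liveness check when a connection is recycled,
    // so broken connections are discarded instead of handed back to callers.
    cfg.manager = Some(ManagerConfig {
        recycling_method: RecyclingMethod::Verified,
    });
    cfg.create_pool(Some(Runtime::Tokio1), NoTls)
        .expect("valid pool config")
}
```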
jeremyhi
32f9cc5286 feat: move memory_manager to common crate (#7408)
* feat: move memory_manager to common crate

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: add license header

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: by AI comment

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

---------

Signed-off-by: jeremyhi <fengjiachun@gmail.com>
2025-12-15 13:15:33 +00:00
Yingwen
5232a12a8c feat: per file scan metrics (#7396)
* feat: collect per file metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: divide build_cost to build_part_cost and build_reader_cost

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: limit the file metrics num to display

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: use sorted iter to get sorted files

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: output metrics in desc order

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-15 12:52:03 +00:00
fys
913ac325e5 chore: add is_initialized method for frontend client (#7409)
chore: add `is_initialized` for frontend client
2025-12-15 12:51:09 +00:00
LFC
0c52d5bb34 fix: cpu cores got wrongly calculated to 0 (#7405)
* fix: cpu cores got wrongly calculated to 0

Signed-off-by: luofucong <luofc@foxmail.com>

* Update src/common/stat/src/resource.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-15 09:40:49 +00:00
Ruihang Xia
e0697790e6 chore: sort histogram sqlness result (#7406)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-15 08:12:12 +00:00
shuiyisong
64e74916b9 fix: TLS option validate and merge (#7401)
* chore: unify gRPC server tls behaviour

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add validate and merge tls

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove mut in func sig and add back test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-12-15 02:53:21 +00:00
Ruihang Xia
b601781604 feat: optimize and fix part sort on overlapping time windows (#7387)
* enforce two ends sort

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* primary end scope drain

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* correct fuzzy generator, no zero limit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* early stop check

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* correct test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* simplify implementation by removing some old logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* what

Signed-off-by: discord9 <discord9@163.com>

* maybe

Signed-off-by: discord9 <discord9@163.com>

* fix: reread topk

Signed-off-by: discord9 <discord9@163.com>

* remove: unused topk_buffer_fulfilled method

Fixes clippy dead code warning by removing the unused method.

Signed-off-by: discord9 <discord9@163.com>

* fix: correct test expectations for windowed sort with limit

Updated test expectations in windowed sort tests to match actual algorithm behavior:
- Fixed descending sort test to expect global top 4 values [95, 94, 90, 85] instead of group-local selection
- Fixed ascending sort test to expect global smallest 4 values [5, 6, 7, 8] and adjusted read count accordingly
- Updated comments to reflect correct algorithm behavior for threshold-based boundary detection

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude Sonnet 4.5 <noreply@anthropic.com>
Signed-off-by: discord9 <discord9@163.com>

* skip fuzzy test for now

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: discord9 <discord9@163.com>
Co-authored-by: Claude Sonnet 4.5 <noreply@anthropic.com>
2025-12-12 14:04:32 +00:00
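A worked toy example of the corrected expectation: with a limit of 4, the sort over overlapping windows must select globally, not per window. The input values are hypothetical, chosen to reproduce the expected outputs quoted in the commit message:

```rust
fn main() {
    let windows = [vec![85, 95, 5], vec![90, 94, 6], vec![7, 8]];
    let mut all: Vec<i32> = windows.iter().flatten().copied().collect();

    // Ascending with LIMIT 4: the global smallest 4, not one window's head.
    all.sort_unstable();
    assert_eq!(&all[..4], &[5, 6, 7, 8]);

    // Descending with LIMIT 4: the global top 4 across all windows.
    all.sort_unstable_by(|a, b| b.cmp(a));
    assert_eq!(&all[..4], &[95, 94, 90, 85]);
}
```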
Ruihang Xia
bd3ad60910 fix: promql offset direction (#7392)
* fix: promql offset direction

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sort sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* commit forgotten file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-12 07:51:35 +00:00
Ruihang Xia
cbfdeca64c fix: promql histogram with aggregation (#7393)
* fix: promql histogram with aggregation

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test constructors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* redact partition number

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-12 07:32:04 +00:00
jeremyhi
baffed8c6a feat: mem manager on compaction (#7305)
* feat: mem manager on compaction

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: by copilot review comment

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: experimental_

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: refine estimate_compaction_bytes

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: make them into config example

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: by copilot comment

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* Update src/mito2/src/compaction.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* fix: dedup the regions waiting

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: by comment

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: minor change

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: add AdditionalMemoryGuard for the running compaction task

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* refactor: do OnExhaustedPolicy before running task

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* refactor: use OwnedSemaphorePermit to impl guard

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: add early_release_partial method to release a portion of memory

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: 0 bytes make request_additional unlimited

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: fail-fast on acquire

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

---------

Signed-off-by: jeremyhi <fengjiachun@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2025-12-12 06:49:58 +00:00
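A hedged sketch of the `OwnedSemaphorePermit`-based guard mentioned above, using tokio's semaphore; the 1-permit-per-MiB unit and names are illustrative:

```rust
use std::sync::Arc;
use tokio::sync::{OwnedSemaphorePermit, Semaphore, TryAcquireError};

/// Holds permits (here 1 permit == 1 MiB); dropping the guard returns the
/// budget automatically.
pub struct MemoryGuard {
    _permit: OwnedSemaphorePermit,
}

pub struct MemoryManager {
    semaphore: Arc<Semaphore>,
}

impl MemoryManager {
    pub fn new(limit_mib: usize) -> Self {
        Self {
            semaphore: Arc::new(Semaphore::new(limit_mib)),
        }
    }

    /// Fail fast when the budget is exhausted rather than queueing the
    /// compaction task (one possible OnExhaustedPolicy).
    pub fn try_acquire(&self, mib: u32) -> Result<MemoryGuard, TryAcquireError> {
        let permit = self.semaphore.clone().try_acquire_many_owned(mib)?;
        Ok(MemoryGuard { _permit: permit })
    }
}
```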
discord9
11a5e1618d test: skip test_tracker_cleanup on non-Linux (#7398)
test: skip on non-Linux

Signed-off-by: discord9 <discord9@163.com>
2025-12-12 06:27:57 +00:00
Lanqing Yang
f5e0e94e3a chore(mito): nit avoid cloning the batch object on inverted index building (#7388)
fix: avoid cloning the batch object on inverted index building

Signed-off-by: lyang24 <lanqingy93@gmail.com>
2025-12-12 04:58:37 +00:00
Weny Xu
ba4eda40e5 refactor: optimize heartbeat channel and etcd client keepalive settings (#7390)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-11 13:32:11 +00:00
discord9
f06a64ff90 feat: mark index outdated (#7383)
* feat: mark index outdated

Signed-off-by: discord9 <discord9@163.com>

* refactor: move IndexVersion to store-api

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

* fix: condition for add files

Signed-off-by: discord9 <discord9@163.com>

* cleanup

Signed-off-by: discord9 <discord9@163.com>

* refactor(sst): extract index version check into method

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-12-11 12:08:45 +00:00
fys
84b4777925 fix: parse "KEEP FIRING FOR" (#7386)
* fix: parse "KEEP FIRING FOR"

* fix: cargo fmt
2025-12-11 03:54:47 +00:00
discord9
a26dee0ca1 fix: gc listing op first (#7385)
Signed-off-by: discord9 <discord9@163.com>
2025-12-11 03:25:05 +00:00
Ning Sun
276f6bf026 feat: grafana postgresql data source query builder support (#7379)
* feat: grafana postgresql data source query builder support

* test: add sqlness test cases
2025-12-11 03:18:35 +00:00
Weny Xu
1d5291b06d fix(procedure): update procedure state correctly during execution and on failure (#7376)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-11 02:30:32 +00:00
Ruihang Xia
564cc0c750 feat: table/column/flow COMMENT (#7060)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* avoid unimplemented panic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* validate flow

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix table column comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* table level comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify table info serde

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* don't txn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove empty trait

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* wip: procedure

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update proto

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* grpc support

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* try from pb struct

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* doc comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* check unchanged fast case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tune errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix merge error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use try_as_raw_value

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-12-10 15:08:47 +00:00
LFC
f1abe5d215 feat: suspend frontend and datanode (#7370)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-12-10 12:18:24 +00:00
Ruihang Xia
ab426cbf89 refactor: remove duplication coverage and code from window sort tests (#7384)
* refactor: remove duplication coverage and code from window sort tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* allow clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-10 10:11:19 +00:00
Weny Xu
cb0f1afb01 fix: improve network failure detection (#7382)
* fix(meta): add default etcd client options with keep-alive settings (#7363)

* fix: improve network failure detection (#7367)

* Update src/meta-srv/src/handler.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2025-12-10 09:48:36 +00:00
Yingwen
a22d08f1b1 feat: collect merge and dedup metrics (#7375)
* feat: collect FlatMergeReader metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add MergeMetricsReporter, rename Metrics to MergeMetrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: remove num_input_rows from MergeMetrics

The merge reader won't dedup so there is no need to collect input rows

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: report merge metrics to PartitionMetrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add dedup cost to DedupMetrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect dedup metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove metrics from FlatMergeIterator

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: remove num_output_rows from MergeMetrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement merge() for merge and dedup metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: report metrics after observe metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-10 09:16:20 +00:00
Ruihang Xia
6817a376b5 fix: part sort behavior (#7374)
* fix: part sort behavior

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tune tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* debug assertion and remove produced count

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-10 07:44:44 +00:00
discord9
4d1a587079 chore: saturating duration since (#7380)
chore: saturating duration since

Signed-off-by: discord9 <discord9@163.com>
2025-12-10 07:10:46 +00:00
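A tiny illustration of why `saturating_duration_since` (std's `Instant` API) is preferred here: if the start instant was observed after the end instant, the subtraction clamps to zero instead of underflowing:

```rust
use std::time::{Duration, Instant};

fn elapsed(start: Instant, now: Instant) -> Duration {
    now.saturating_duration_since(start)
}

fn main() {
    let earlier = Instant::now();
    let later = Instant::now();
    // `start` is later than `now`: clamped to zero, no panic.
    assert_eq!(elapsed(later, earlier), Duration::ZERO);
}
```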
Lei, HUANG
9f1aefe98f feat: allow one to many VRL pipeline (#7342)
* feat/allow-one-to-many-pipeline:
 ### Enhance Pipeline Processing for One-to-Many Transformations

 - **Support One-to-Many Transformations**:
   - Updated `processor.rs`, `etl.rs`, `vrl_processor.rs`, and `greptime.rs` to handle one-to-many transformations by allowing VRL processors to return arrays, expanding each element into separate rows.
   - Introduced `transform_array_elements` and `values_to_rows` functions to facilitate this transformation.

 - **Error Handling Enhancements**:
   - Added new error types in `error.rs` to handle cases where array elements are not objects and for transformation failures.

 - **Testing Enhancements**:
   - Added tests in `pipeline.rs` to verify one-to-many transformations, single object processing, and error handling for non-object array elements.

 - **Context Management**:
   - Modified `ctx_req.rs` to clone `ContextOpt` when adding rows, ensuring correct context management during transformations.

 - **Server Pipeline Adjustments**:
   - Updated `pipeline.rs` in `servers` to handle transformed outputs with one-to-many row expansions, ensuring correct row padding and request formation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 Add one-to-many VRL pipeline test in `http.rs`

 - Introduced `test_pipeline_one_to_many_vrl` to verify VRL processor's ability to expand a single input row into multiple output rows.
 - Updated `http_tests!` macro to include the new test.
 - Implemented test scenarios for single and multiple input rows, ensuring correct data transformation and row count validation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 ### Add Tests for VRL Pipeline Transformations

 - **File:** `src/pipeline/src/etl.rs`
   - Added tests for one-to-many VRL pipeline expansion to ensure multiple output rows from a single input.
   - Introduced tests to verify backward compatibility for single object output.
   - Implemented tests to confirm zero rows are produced from empty arrays.
   - Added validation tests to ensure array elements must be objects.
   - Developed tests for one-to-many transformations with table suffix hints from VRL.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 ### Enhance Pipeline Transformation with Per-Row Table Suffixes

 - **`src/pipeline/src/etl.rs`**: Updated `TransformedOutput` to include per-row table suffixes, allowing for more flexible routing of transformed data. Modified `PipelineExecOutput` and related methods to
 handle the new structure.
 - **`src/pipeline/src/etl/transform/transformer/greptime.rs`**: Enhanced `values_to_rows` to support per-row table suffix extraction and application.
 - **`src/pipeline/tests/common.rs`** and **`src/pipeline/tests/pipeline.rs`**: Adjusted tests to validate the new per-row table suffix functionality, ensuring backward compatibility and correct behavior in
 one-to-many transformations.
 - **`src/servers/src/pipeline.rs`**: Modified `run_custom_pipeline` to process transformed outputs with per-row table suffixes, grouping rows by `(opt, table_name)` for insertion.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 ### Update VRL Processor Type Checks

 - **File:** `vrl_processor.rs`
 - **Changes:** Updated type checking logic to use `contains_object()` and `contains_array()` methods instead of `is_object()` and `is_array()`. This change ensures
 compatibility with VRL type inference that may return multiple possible types.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 - **Enhance Error Handling**: Added new error types `ArrayElementMustBeObjectSnafu` and `TransformArrayElementSnafu` to improve error handling in `etl.rs` and `greptime.rs`.
 - **Refactor Error Usage**: Moved error usage declarations in `transform_array_elements` and `values_to_rows` functions to the top of the file for better organization in `etl.rs` and `greptime.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 ### Update `greptime.rs` to Enhance Error Handling

 - **Error Handling**: Modified the `values_to_rows` function to handle invalid array elements based on the `skip_error` parameter. If `skip_error` is true, invalid elements are skipped; otherwise, an error is returned.
 - **Testing**: Added unit tests in `greptime.rs` to verify the behavior of `values_to_rows` with different `skip_error` settings, ensuring correct processing of valid objects and appropriate error handling for invalid elements.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 ### Commit Summary

 - **Enhance `TransformedOutput` Structure**: Refactored `TransformedOutput` to use a `HashMap` for grouping rows by `ContextOpt`, allowing for per-row configuration options. Updated methods in `PipelineExecOutput` to support the new structure (`src/pipeline/src/etl.rs`).

 - **Add New Transformation Method**: Introduced `transform_array_elements_to_hashmap` to handle array inputs with per-row `ContextOpt` in `HashMap` format (`src/pipeline/src/etl.rs`).

 - **Update Pipeline Execution**: Modified `run_custom_pipeline` to process `TransformedOutput` using the new `HashMap` structure, ensuring rows are grouped by `ContextOpt` and table name (`src/servers/src/pipeline.rs`).

 - **Add Tests for New Structure**: Implemented tests to verify the functionality of the new `HashMap` structure in `TransformedOutput`, including scenarios for one-to-many mapping, single object input, and empty arrays (`src/pipeline/src/etl.rs`).

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 ### Refactor `values_to_rows` to Return `HashMap` Grouped by `ContextOpt`

 - **`etl.rs`**:
   - Updated `values_to_rows` to return a `HashMap` grouped by `ContextOpt` instead of a vector.
   - Adjusted logic to handle single object and array inputs, ensuring rows are grouped by their `ContextOpt`.
   - Modified functions to extract rows from default `ContextOpt` and apply table suffixes accordingly.

 - **`greptime.rs`**:
   - Enhanced `values_to_rows` to handle errors gracefully with `skip_error` logic.
   - Added logic to group rows by `ContextOpt` for array inputs.

 - **Tests**:
   - Updated existing tests to validate the new `HashMap` return structure.
   - Added a new test to verify correct grouping of rows by per-element `ContextOpt`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 ### Refactor and Enhance Error Handling in ETL Pipeline

 - **Refactored Functionality**:
   - Replaced `transform_array_elements` with `transform_array_elements_by_ctx` in `etl.rs` to streamline transformation logic and improve error handling.
   - Updated `values_to_rows` in `greptime.rs` to use `or_default` for cleaner code.

 - **Enhanced Error Handling**:
   - Introduced `unwrap_or_continue_if_err` macro in `etl.rs` to allow skipping errors based on pipeline context, improving robustness in data processing.

 These changes enhance the maintainability and error resilience of the ETL pipeline.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-one-to-many-pipeline:
 ### Update `Row` Handling in ETL Pipeline

 - **Refactor `Row` Type**: Introduced `RowWithTableSuffix` type alias to simplify handling of rows with optional table suffixes across the ETL pipeline.
 - **Modify Function Signatures**: Updated function signatures in `etl.rs` and `greptime.rs` to use `RowWithTableSuffix` for better clarity and consistency.
 - **Enhance Test Coverage**: Adjusted test logic in `greptime.rs` to align with the new `RowWithTableSuffix` type, ensuring correct grouping and processing of rows by TTL.

 Files affected: `etl.rs`, `greptime.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-12-10 06:38:44 +00:00
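A minimal sketch of the final shape this PR converges on: a VRL processor may emit one object or an array of objects, each element becomes a row, and rows are grouped by their per-element `ContextOpt`. The types here are hypothetical stand-ins for the PR's `ContextOpt`, `Row`, and `RowWithTableSuffix`:

```rust
use std::collections::HashMap;

type ContextOpt = String;
type Row = Vec<i64>;
type RowWithTableSuffix = (Row, Option<String>);

/// Group expanded rows by their per-element ContextOpt.
fn values_to_rows(
    elements: Vec<(ContextOpt, Row, Option<String>)>,
) -> HashMap<ContextOpt, Vec<RowWithTableSuffix>> {
    let mut grouped: HashMap<ContextOpt, Vec<RowWithTableSuffix>> = HashMap::new();
    for (opt, row, suffix) in elements {
        // Rows sharing a ContextOpt (e.g. the same TTL) land in one group,
        // to be inserted together keyed by (opt, table_name).
        grouped.entry(opt).or_default().push((row, suffix));
    }
    grouped
}

fn main() {
    let grouped = values_to_rows(vec![
        ("ttl=1d".into(), vec![1], Some("_suffix_a".into())),
        ("ttl=1d".into(), vec![2], None),
        ("ttl=7d".into(), vec![3], None),
    ]);
    assert_eq!(grouped["ttl=1d"].len(), 2);
}
```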
Lei, HUANG
2f9130a2de chore(mito): expose some symbols (#7373)
chore/expose-symbols:
 ### Commit Summary

 - **Visibility Changes**: Updated visibility of functions in `bulk/part.rs`:
   - Made `record_batch_estimated_size` and `sort_primary_key_record_batch` functions public.
 - **Enhancements**: Enhanced functionality in `memtable.rs` by exposing additional components from `bulk::part`:
   - `BulkPartEncoder`, `BulkPartMeta`, `UnorderedPart`, `record_batch_estimated_size`, and `sort_primary_key_record_batch`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-12-09 14:33:14 +00:00
shuiyisong
fa2b4e5e63 refactor: extract file watcher to common-config (#7357)
* refactor: extract file watcher to common-config

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: add file check

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: watch dir instead of file

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: address CR issues

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-12-09 11:23:26 +00:00
discord9
9197e818ec refactor: use versioned index for index file (#7309)
* refactor: use versioned index for index file

Signed-off-by: discord9 <discord9@163.com>

* fix: sst entry table

Signed-off-by: discord9 <discord9@163.com>

* update sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: unit type

Signed-off-by: discord9 <discord9@163.com>

* fix: missing version

Signed-off-by: discord9 <discord9@163.com>

* more fix build index

Signed-off-by: discord9 <discord9@163.com>

* fix: use proper index id

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

* test: update

Signed-off-by: discord9 <discord9@163.com>

* clippy

Signed-off-by: discord9 <discord9@163.com>

* test: test_list_ssts fixed

Signed-off-by: discord9 <discord9@163.com>

* test: fix test

Signed-off-by: discord9 <discord9@163.com>

* feat: stuff

Signed-off-by: discord9 <discord9@163.com>

* fix: clean temp index file on abort & delete all index versions when deleting a file

Signed-off-by: discord9 <discord9@163.com>

* docs: explain

Signed-off-by: discord9 <discord9@163.com>

* fix: actually clean up tmp dir

Signed-off-by: discord9 <discord9@163.com>

* clippy

Signed-off-by: discord9 <discord9@163.com>

* clean tmp dir only when write cache enabled

Signed-off-by: discord9 <discord9@163.com>

* refactor: add version to index cache

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

* test: update size

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-12-09 07:31:12 +00:00
discord9
36d89c3baf fix: use saturating in gc tracker (#7369)
chore: use saturating

Signed-off-by: discord9 <discord9@163.com>
2025-12-09 06:38:59 +00:00
Ruihang Xia
0ebfd161d8 feat: allow publishing new nightly release when some platforms are absent (#7354)
* feat: allow publishing new nightly release when some platforms are absent

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* unify linux platforms

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* always evaluate conditions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-09 04:59:50 +00:00
ZonaHe
8b26a98c3b feat: update dashboard to v0.11.9 (#7364)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-12-09 02:37:44 +00:00
discord9
7199823be9 chore: rename to avoid git reserved name (#7359)
rename to avoid reserved name

Signed-off-by: discord9 <discord9@163.com>
2025-12-08 04:01:25 +00:00
Ruihang Xia
60f752d306 feat: run histogram quantile in safe mode for incomplete data (#7297)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness test and fix

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplification

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refine code and comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-05 09:19:21 +00:00
Ruihang Xia
edb1f6086f feat: decode pk eagerly (#7350)
* feat: decode pk eagerly

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* merge primary_key_codec and decode_primary_key_values

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-05 09:11:51 +00:00
discord9
1ebcef4794 chore: rm unnecessary warning (#7352)
Signed-off-by: discord9 <discord9@163.com>
2025-12-05 03:44:45 +00:00
Ning Sun
2147545c90 fix: regression with shortcutted statement on postgres extended query (#7340)
* fix: regression with shortcutted statement on postgres extended query

* chore: typo fix

* feat: also add more type support for parameters

* chore: remove dbg print
2025-12-05 02:08:23 +00:00
Yingwen
84e4e42ee7 feat: add more verbose metrics to scanners (#7336)
* feat: add inverted applier metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add metrics to bloom applier

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add metrics to fulltext index applier

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement BloomFilterReadMetrics for BloomFilterReader

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect read metrics for inverted index

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add metrics for range_read and metadata

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename elapsed to fetch_elapsed

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect metadata fetch metrics for inverted index

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect cache metrics for inverted and bloom index

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect read metrics in appliers

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect fulltext dir metrics for applier

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect parquet row group metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add parquet metadata metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add apply metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect more metrics for memory row group

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add fetch metrics to ReaderMetrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: init verbose metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: debug print metrics in ScanMetricsSet

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement debug for new metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: update parquet fetch metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect the whole fetch time

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add file_scan_cost

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: parquet fetch add cache_miss counter

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: print index read metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: use actual bytes to increase counter

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove provided implementations for index reader traits

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: change get_parquet_meta_data() method to receive metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename file_scan_cost to sst_scan_cost

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: refine ParquetFetchMetrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove useless inner method

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: collect page size actual needed

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: simplify InvertedIndexReadMetrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: simplify InvertedIndexApplyMetrics Debug

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: simplify BloomFilterReadMetrics Debug

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: simplify BloomFilterIndexApplyMetrics Debug

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: simplify FulltextIndexApplyMetrics implementation

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: simplify ParquetFetchMetrics Debug

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: simplify MetadataCacheMetrics Debug

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: only print verbose metrics when they are not empty.

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use mutex to protect ParquetFetchMetrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use duration for elapsed in ParquetFetchMetricsData

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-04 13:40:18 +00:00
Yingwen
d5c616a9ff feat: implement a cache for manifest files (#7326)
* feat: use cache in manifest store

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use ManifestCache

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: clean empty manifest dir

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: get last checkpoint from cache

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add hit/miss counter for manifest cache

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add logs

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: pass cache to ManifestObjectStore::new

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: cache checkpoint

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: cache checkpoint in write

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler warnings

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update config comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: manifest store cache for staging

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: move recover_inner to FileCacheInner

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: remove manifest cache config from MitoConfig

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: reduce clone when cache is enabled

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: do not cache staging manifests

We clean staging manifests with remove_all, which makes it hard to clean
the cache in the same way

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix paths in manifest cache

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: don't clean dir if it is too new

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: reuse write cache ttl as manifest cache ttl

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: clean all empty subdirectories

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-12-04 12:51:09 +00:00
discord9
f02bdf5428 test: gc worker scheduler mock test (#7292)
* feat: gc worker only on local region

Signed-off-by: discord9 <discord9@163.com>

feat: gc scheduler

wip: gc trigger

Signed-off-by: discord9 <discord9@163.com>

feat: dn file removal rate

Signed-off-by: discord9 <discord9@163.com>

feat: trigger gc with stats(WIP)

Signed-off-by: discord9 <discord9@163.com>

chore

Signed-off-by: discord9 <discord9@163.com>

also move files ref manifest to store-api

Signed-off-by: discord9 <discord9@163.com>

feat: basic gc trigger impl

Signed-off-by: discord9 <discord9@163.com>

wip: handle file ref change

Signed-off-by: discord9 <discord9@163.com>

refactor: use region ids

Signed-off-by: discord9 <discord9@163.com>

fix: retry using related regions

Signed-off-by: discord9 <discord9@163.com>

chore: rm unused

Signed-off-by: discord9 <discord9@163.com>

fix: update file reference type in GC worker

Signed-off-by: discord9 <discord9@163.com>

feat: dn gc limiter

Signed-off-by: discord9 <discord9@163.com>

rename

Signed-off-by: discord9 <discord9@163.com>

feat: gc scheduler retry with outdated regions

Signed-off-by: discord9 <discord9@163.com>

feat: use real object store purger

Signed-off-by: discord9 <discord9@163.com>

wip: add to metasrv

Signed-off-by: discord9 <discord9@163.com>

feat: add to metasrv

Signed-off-by: discord9 <discord9@163.com>

feat: datanode gc worker handler

Signed-off-by: discord9 <discord9@163.com>

fix: no partition col fix

Signed-off-by: discord9 <discord9@163.com>

fix: RegionId json deser workaround

Signed-off-by: discord9 <discord9@163.com>

fix: find access layer

Signed-off-by: discord9 <discord9@163.com>

fix: on host dn

Signed-off-by: discord9 <discord9@163.com>

fix: stat dedup

Signed-off-by: discord9 <discord9@163.com>

refactor: rm load-based

Signed-off-by: discord9 <discord9@163.com>

chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

feat: not full scan

Signed-off-by: discord9 <discord9@163.com>

chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

feat: clean tracker

Signed-off-by: discord9 <discord9@163.com>

after rebase fix

Signed-off-by: discord9 <discord9@163.com>

clippy

Signed-off-by: discord9 <discord9@163.com>

refactor: split gc scheduler

Signed-off-by: discord9 <discord9@163.com>

feat: smaller linger time

Signed-off-by: discord9 <discord9@163.com>

feat: parallel region gc instr

Signed-off-by: discord9 <discord9@163.com>

chore: rename

Signed-off-by: discord9 <discord9@163.com>

chore: rename

Signed-off-by: discord9 <discord9@163.com>

enable is false

Signed-off-by: discord9 <discord9@163.com>

feat: update removed files precisely

Signed-off-by: discord9 <discord9@163.com>

all default to false&use local file purger

Signed-off-by: discord9 <discord9@163.com>

feat: not evict if gc enabled

Signed-off-by: discord9 <discord9@163.com>

per review

Signed-off-by: discord9 <discord9@163.com>

fix: pass gc config in mito & test: gc after truncate

Signed-off-by: discord9 <discord9@163.com>

WIP: one more test

Signed-off-by: discord9 <discord9@163.com>

test: basic compact

Signed-off-by: discord9 <discord9@163.com>

test: compact with ref

Signed-off-by: discord9 <discord9@163.com>

refactor: for easier mock

Signed-off-by: discord9 <discord9@163.com>

docs: explain race condition

Signed-off-by: discord9 <discord9@163.com>

feat: gc region procedure

Signed-off-by: discord9 <discord9@163.com>

refactor: ctx send gc/ref instr with procedure

Signed-off-by: discord9 <discord9@163.com>

fix: config deser to default

Signed-off-by: discord9 <discord9@163.com>

refactor: gc report

Signed-off-by: discord9 <discord9@163.com>

wip: async index file rm

Signed-off-by: discord9 <discord9@163.com>

fixme?

Signed-off-by: discord9 <discord9@163.com>

typo

Signed-off-by: discord9 <discord9@163.com>

more ut

Signed-off-by: discord9 <discord9@163.com>

test: more mock test

Signed-off-by: discord9 <discord9@163.com>

more

Signed-off-by: discord9 <discord9@163.com>

refactor: split mock test

Signed-off-by: discord9 <discord9@163.com>

clippy

Signed-off-by: discord9 <discord9@163.com>

refactor: rm stuff

Signed-off-by: discord9 <discord9@163.com>

test: mock add gc report per region

Signed-off-by: discord9 <discord9@163.com>

fix: stricter table failure condition

Signed-off-by: discord9 <discord9@163.com>

stuff

Signed-off-by: discord9 <discord9@163.com>

feat: allow gc on different tables at the same time & more todos

Signed-off-by: discord9 <discord9@163.com>

after rebase check

Signed-off-by: discord9 <discord9@163.com>

* chore

Signed-off-by: discord9 <discord9@163.com>

* chore

Signed-off-by: discord9 <discord9@163.com>

* wip: refactoring test

Signed-off-by: discord9 <discord9@163.com>

* fix: also get from follower peer

Signed-off-by: discord9 <discord9@163.com>

* test: update mock test

Signed-off-by: discord9 <discord9@163.com>

* revert some change&clean up

Signed-off-by: discord9 <discord9@163.com>

* typo

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* chore: more fixes

Signed-off-by: discord9 <discord9@163.com>

* revert

Signed-off-by: discord9 <discord9@163.com>

* revert change to handler.rs

Signed-off-by: discord9 <discord9@163.com>

* test: fix mock test

Signed-off-by: discord9 <discord9@163.com>

* chore: rm retry

Signed-off-by: discord9 <discord9@163.com>

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: discord9 <discord9@163.com>

* after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-04 12:00:25 +00:00
Ruihang Xia
f2288a86b0 perf: treat DISTINCT as comm/part-comm (#7348)
* perf: treat DISTINCT as comm/part-comm

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-04 11:14:00 +00:00
Ruihang Xia
9d35b8cad4 refactor: remove datafusion data frame wrapper (#7347)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-12-04 09:38:41 +00:00
Weny Xu
cc99f9d65b fix: configure HTTP/2 keep-alive for heartbeat client to detect network failures faster (#7344)
* fix: configure HTTP/2 keep-alive for heartbeat client to detect network failures faster

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-04 08:07:45 +00:00
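A hedged sketch of client-side HTTP/2 keep-alive as described above, assuming a tonic channel; the address and durations are placeholders:

```rust
use std::time::Duration;
use tonic::transport::Endpoint;

fn heartbeat_endpoint() -> Endpoint {
    Endpoint::from_static("http://127.0.0.1:3002")
        // Periodic HTTP/2 PING frames surface dead peers quickly instead of
        // waiting for TCP-level timeouts.
        .http2_keep_alive_interval(Duration::from_secs(10))
        // Tear the connection down if a PING goes unacknowledged in time.
        .keep_alive_timeout(Duration::from_secs(3))
        // Heartbeat channels are mostly idle between ticks; ping anyway.
        .keep_alive_while_idle(true)
}
```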
Lei, HUANG
11ecb7a28a refactor(servers): bulk insert service (#7329)
* refactor/bulk-insert-service:
 refactor: decode FlightData early in put_record_batch pipeline

 - Move FlightDecoder usage from Inserter up to PutRecordBatchRequestStream,
   passing decoded RecordBatch and schema bytes instead of raw FlightData.
 - Eliminate redundant per-request decoding/encoding in Inserter; encode
   once and reuse for all region requests.
 - Streamline GrpcQueryHandler trait and implementations to accept
   PutRecordBatchRequest containing pre-decoded data.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/bulk-insert-service:
 feat: stream-based bulk insert with per-batch responses

 - Introduce handle_put_record_batch_stream() to process Flight DoPut streams
 - Resolve table & permissions once, yield (request_id, AffectedRows) per batch
 - Replace loop-over-request with async-stream in frontend & server
 - Make PutRecordBatchRequestStream public for cross-crate usage

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/bulk-insert-service:
 fix: propagate request_id with errors in bulk insert stream

 Changes the bulk-insert stream item type from
 Result<(i64, AffectedRows), E> to (i64, Result<AffectedRows, E>)
 so every emitted tuple carries the request_id even on failure,
 letting callers correlate errors with the originating request.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
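
A minimal sketch of the item-type change (handler names are hypothetical; `AffectedRows` simplified to `usize`):

```rust
type AffectedRows = usize;

// Before: Result<(i64, AffectedRows), E> — an Err item drops the request_id,
// so the caller cannot tell which request failed.
fn handle_before(item: Result<(i64, AffectedRows), String>) {
    match item {
        Ok((request_id, rows)) => println!("request {request_id}: {rows} rows"),
        Err(e) => println!("some request failed: {e}"), // which one?
    }
}

// After: (i64, Result<AffectedRows, E>) — the request_id travels alongside
// the per-request result, even on failure.
fn handle_after((request_id, result): (i64, Result<AffectedRows, String>)) {
    match result {
        Ok(rows) => println!("request {request_id}: {rows} rows"),
        Err(e) => println!("request {request_id} failed: {e}"),
    }
}
```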

* refactor/bulk-insert-service:
 refactor: unify DoPut response stream to return DoPutResponse

 Replace the tuple (i64, Result<AffectedRows>) with Result<DoPutResponse>
 throughout the gRPC bulk-insert path so the handler, adapter and server
 all speak the same type.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/bulk-insert-service:
 feat: add elapsed_secs to DoPutResponse for bulk-insert timing

 - DoPutResponse now carries elapsed_secs field
 - Frontend measures and attaches insert duration
 - Server observes GRPC_BULK_INSERT_ELAPSED metric from response

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
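
A sketch of the timing flow, with an assumed `DoPutResponse` shape rather than the actual struct:

```rust
use std::time::Instant;

// Assumed shape for illustration; the real response carries more fields.
struct DoPutResponse {
    request_id: i64,
    affected_rows: usize,
    elapsed_secs: f64,
}

fn handle_put(request_id: i64) -> DoPutResponse {
    let start = Instant::now();
    let affected_rows = 0; // ... perform the actual insert here ...
    DoPutResponse {
        request_id,
        affected_rows,
        // the frontend measures the duration; the server later observes a
        // metric (e.g. GRPC_BULK_INSERT_ELAPSED) from this field
        elapsed_secs: start.elapsed().as_secs_f64(),
    }
}
```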

* refactor/bulk-insert-service:
 refactor: unify Bytes import in flight module

 - Replace `bytes::Bytes` with `Bytes` alias for consistency
 - Remove redundant `ProstBytes` alias

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/bulk-insert-service:
 fix: terminate gRPC stream on error and optimize FlightData handling

 - Stop retrying on stream errors in gRPC handler
 - Replace Vec1 indexing with into_iter().next() for FlightData
 - Remove redundant clones in bulk_insert and flight modules

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/bulk-insert-service:
 Improve permission check placement in `grpc.rs`

 - Moved the permission check for `BulkInsert` to occur before resolving the table reference in `GrpcQueryHandler` implementation.
 - Ensures permission validation is performed earlier in the process, potentially avoiding unnecessary operations if permission is denied.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/bulk-insert-service:
 **Refactor Bulk Insert Handling in gRPC**

 - **`grpc.rs`**:
   - Switched from `async_stream::stream` to `async_stream::try_stream` for error handling.
   - Removed `body_size` parameter and added `flight_data` to `handle_bulk_insert`.
   - Simplified error handling and permission checks in `GrpcQueryHandler`.

 - **`bulk_insert.rs`**:
   - Added `raw_flight_data` parameter to `handle_bulk_insert`.
   - Calculated `body_size` from `raw_flight_data` and removed redundant encoding logic.

 - **`flight.rs`**:
   - Replaced `body_size` with `flight_data` in `PutRecordBatchRequest`.
   - Updated memory usage calculation to include `flight_data` components.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
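
A sketch of what the `try_stream!` switch buys (insert function hypothetical; error type simplified to `String`):

```rust
use async_stream::try_stream;
use futures::Stream;

fn do_insert(id: i64) -> Result<usize, String> {
    if id >= 0 { Ok(1) } else { Err("negative id".to_string()) }
}

// With try_stream!, `?` inside the block emits a final Err item and ends the
// stream, replacing the hand-written match arms needed with plain stream!.
fn responses(ids: Vec<i64>) -> impl Stream<Item = Result<usize, String>> {
    try_stream! {
        for id in ids {
            let rows = do_insert(id)?; // propagated as an Err item
            yield rows;
        }
    }
}
```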

* refactor/bulk-insert-service:
 perf(bulk_insert): encode record batch once per datanode

 Move FlightData encoding outside the per-region loop so the same
 encoded bytes are reused when mask.select_all(), eliminating redundant
 serialisation work.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
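
An illustrative sketch of the hoist; names are assumed, not the actual greptimedb API. The point is moving the expensive encode out of the per-region loop:

```rust
fn encode_per_datanode(
    regions: &[(u32, bool)], // (region_id, mask.select_all())
    encode_full: impl Fn() -> Vec<u8>,
    encode_masked: impl Fn(u32) -> Vec<u8>,
) -> Vec<(u32, Vec<u8>)> {
    let full = encode_full(); // encode once, before the loop
    regions
        .iter()
        .map(|&(region_id, select_all)| {
            let bytes = if select_all {
                full.clone() // reuse bytes (Bytes would make this zero-copy)
            } else {
                encode_masked(region_id) // only re-encode when rows differ
            };
            (region_id, bytes)
        })
        .collect()
}
```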

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-12-04 07:08:02 +00:00
jeremyhi
2a760f010f chore: members and committers update (#7341)
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
2025-12-04 04:08:43 +00:00
jeremyhi
63dd37dca3 fix: reset cached channel on errors with VIP (#7335)
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
2025-12-03 08:56:15 +00:00
Lei, HUANG
68fff3b1aa refactor(servers): allow custom flight service (#7333)
* refactor/allow-custom-flight-service:
 ### Add Custom Flight Handler Support

 - **`server.rs`**:
   - Introduced a new field `flight_handler` in the `Services` struct to allow optional custom flight handler configuration.
   - Added a method `with_flight_handler` to set the custom flight handler.
   - Modified `build_grpc_server` to use the custom flight handler if provided, defaulting to `GreptimeRequestHandler` otherwise.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/allow-custom-flight-service:
 ### Make structs and enums public in `flight.rs`

 - Changed visibility of `PutRecordBatchRequest` and `PutRecordBatchRequestStream` structs to public.
 - Made `PutRecordBatchRequestStreamState` enum public.
 - Updated fields within `PutRecordBatchRequest` and `PutRecordBatchRequestStream` to be public.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-12-03 03:01:59 +00:00
Weny Xu
0177f244e9 fix: fix write stall that never recovers due to flush logic issues (#7322)
* fix: fix write stall that never recovers due to flush logic issues

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit test

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: flush multiple regions when engine is full

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: refine fn name

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: simplify flush scheduler by removing flushing state

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-02 12:48:41 +00:00
Lei, HUANG
931556dbd3 perf(metric-engine)!: Replace mur3 with fxhash for faster TSID generation (#7316)
* feat/change-tsid-gen:
 perf(metric-engine): replace mur3 with fxhash for faster TSID generation

 - Switches from mur3::Hasher128 to fxhash::FxHasher for TSID hashing
 - Pre-computes label-name hash when no nulls are present, avoiding redundant work
 - Adds fast-path for rows without nulls; falls back to slow path otherwise
 - Updates Cargo.toml and lockfile to reflect dependency change

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
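
A hedged sketch of the fast path: hash the label names once per schema, then combine the cached hash with per-row values. Function names are illustrative; only `fxhash::FxHasher` comes from the commit:

```rust
use std::hash::Hasher;
use fxhash::FxHasher;

// Computed once per schema when no nulls are present.
fn label_names_hash(names: &[&str]) -> u64 {
    let mut h = FxHasher::default();
    for name in names {
        h.write(name.as_bytes());
    }
    h.finish()
}

// Fast path per row: fold the cached name hash with the label values.
fn tsid_fast_path(cached_names: u64, values: &[&str]) -> u64 {
    let mut h = FxHasher::default();
    h.write_u64(cached_names); // reuse instead of re-hashing names
    for v in values {
        h.write(v.as_bytes());
    }
    h.finish()
}
```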

* feat/change-tsid-gen:
 fix: only check primary-key labels for null when re-using cached hash

 - Rename has_null() → has_null_labels() and restrict the check to the
   primary-key columns so that non-label NULLs do not force a full
   TSID re-computation.
 - Update expected hashes in tests to match the new logic.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
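
The restriction in sketch form (row layout illustrative): only a NULL in a primary-key column should invalidate the cached hash:

```rust
// NULLs in non-label columns must not force the slow path.
fn has_null_labels(row: &[Option<&str>], pk_indices: &[usize]) -> bool {
    pk_indices.iter().any(|&i| row[i].is_none())
}
```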

* feat/change-tsid-gen:
 test: add comprehensive TSID generation tests for label ordering and null handling

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/change-tsid-gen:
 bench: add criterion benchmark for TSID generator

 - Compare original mur3 vs current fxhash fast/slow paths
 - Test 2, 5, 10 label sets plus null-value slow path
 - Add mur3 & criterion dev-deps; register bench target

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/change-tsid-gen:
 test: stabilize metric-engine tests by fixing non-deterministic row order

 - Add ORDER BY to SELECTs in TTL tests to ensure consistent output
 - Update expected __tsid values after hash function change
 - Swap expected OTLP metric rows to match new ordering

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/change-tsid-gen:
 refactor: simplify Default impls and remove redundant code

 - Replace manual Default for TsidGenerator with derive
 - Remove unnecessary into_iter() call
 - Simplify Option::unwrap_or_else to unwrap_or

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-12-02 08:38:29 +00:00
Ning Sun
69f0249039 feat: update pg-catalog for describe table (#7321) 2025-12-02 01:38:36 +00:00
dennis zhuang
1f91422bae feat!: improve mysql/pg compatibility (#7315)
* feat(mysql): add SHOW WARNINGS support and return warnings for unsupported SET variables

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat(function): add MySQL IF() function and PostgreSQL description functions for connector compatibility

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: show tables for mysql

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: partitions table in information_schema and add starrocks external catalog compatibility

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: async udf

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: set warnings

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: impl pg_my_temp_schema and make description functions simple

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add test for issue 7313

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: apply suggestions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: partition_expression and partition_description

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: unit tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: search_path only works for pg

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: improve warnings processing

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: warnings while writing affected rows and refactor

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: improve ShobjDescriptionFunction signature

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: array_to_boolean

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-12-01 20:41:14 +00:00
jeremyhi
377373b8fd fix: request limiter test case fix (#7323)
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
2025-12-01 20:12:32 +00:00
fys
e107030d85 chore: add more fields to DdlManagerConfigureContext (#7310)
* feat: add more context for configurator

* move the flow grpc configure context to plugins crate

* move context to plugins crate

* add more fields

* fix: cargo check

* refactor: some

* refactor some

* adjust context

* fix: cargo check

* fix: ut
2025-12-01 08:03:12 +00:00
Weny Xu
18875eed4d feat: implement Display trait for FlushRegions (#7320)
feat: implement Display trait for FlushRegions

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-01 06:33:23 +00:00
discord9
ee76d50569 test: gc integration test (#7306)
* test: basic infra for set gc

Signed-off-by: discord9 <discord9@163.com>

* more stuff

Signed-off-by: discord9 <discord9@163.com>

* test: basic gc integration test

Signed-off-by: discord9 <discord9@163.com>

* rm unused

Signed-off-by: discord9 <discord9@163.com>

* clippy

Signed-off-by: discord9 <discord9@163.com>

* refactor: remove loader

Signed-off-by: discord9 <discord9@163.com>

* clippy

Signed-off-by: discord9 <discord9@163.com>

* fix: allow default endpoint

Signed-off-by: discord9 <discord9@163.com>

* filter out files

Signed-off-by: discord9 <discord9@163.com>

* chore: rm minio support

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-12-01 04:10:42 +00:00
Weny Xu
5d634aeba0 feat: implement metadata update for repartition group procedure (#7311)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-12-01 03:15:12 +00:00
Weny Xu
8346acb900 feat: introduce EnterStagingRequest for RegionEngine (#7261)
* feat: introduce `EnterStagingRequest` for region engine

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: improve error handling in staging mode entry

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-28 09:02:32 +00:00
LFC
fdab75ce27 feat: simple read write new json type values (#7175)
feat: basic json read and write

Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-27 12:40:35 +00:00
Ruihang Xia
4c07d2d5de fix: metric engine deadlock when altering a group of tables (#7308)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-27 09:45:06 +00:00
fys
020477994b feat: add some configurable points (#7227)
* feat: enhance extension

* fix: cr

* move information schema table factories trait to standalone

* fix: self cr

* remove extension factory

* refactor

* remove extension field from greptime options struct

* refactor

* minor refactor

* fix: cargo check

* fix: clippy

* fix: license check

* feat: enhance grpc and http configurator in servers crate

* grpc builder configurator

* remove unused file

* complete the remaining expansion points.

* fix: self-cr

* rename

* fix: typo
2025-11-27 09:21:46 +00:00
Yingwen
afefc0c604 fix: implement bulk write for time partitions and bulk memtable (#7293)
* feat: implement convert_bulk_part

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: convert bulk part in TimePartitions

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: fill missing columns for bulk parts

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comments

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: cast to dictionary type

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add unit tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: do not convert part if bulk is written by write()

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-27 08:01:45 +00:00
Weny Xu
e44323c433 feat: add region repartition group procedure infrastructure (#7299)
* feat: add region repartition group procedure infrastructure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-27 04:57:45 +00:00
discord9
0aeaf405c7 feat: add batch gc procedure (#7296)
* feat: add batch gc procedure

Signed-off-by: discord9 <discord9@163.com>

* chore

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

* per even review

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-11-27 03:58:15 +00:00
Yingwen
b5cbc35a0d fix: partition tree metric should be the delta (#7307)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-27 03:49:02 +00:00
shuiyisong
5472bdfc0f chore: return 404 if trace not found (#7304)
* chore: return 404 if trace not found

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add test and fix

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-26 09:39:28 +00:00
discord9
6485a26fa3 refactor: load metadata using official impl (#7302)
* refactor: load metadata using official impl

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-11-26 08:52:04 +00:00
Weny Xu
69865c831d feat: batch region migration for failover (#7245)
* refactor: support multiple rows per event in event recorder

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: batch region migration for failover

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-26 08:31:56 +00:00
shuiyisong
713525797a chore: optimize search traces from Grafana (#7298)
* chore: minor update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update ua setup

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-26 08:06:15 +00:00
WaterWhisperer
09d1074e23 feat: add building option to build images base on distroless image (#7240)
Signed-off-by: WaterWhisperer <waterwhisperer24@qq.com>
2025-11-26 05:13:05 +00:00
dennis zhuang
1ebd25adbb ci: add multi lang tests workflow into release and nightly workflows (#7300)
* ci: add multi lang tests workflow into release and nightly workflows

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: emoji

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: apply suggestions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* ci: add notification when multi lang tests fail

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: revert ci and add nodejs

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-26 04:35:04 +00:00
dennis zhuang
c66f661494 chore: return meaningful message when content type mismatch in otel (#7301)
* chore: return meaningful message when content type mismatch in otel

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: extract duplicated code

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: use a new error for failing to decode loki request

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-11-26 03:20:52 +00:00
Sicong Hu
2783a5218e feat: implement manual type for async index build (#7104)
* feat: prepare for index_build command

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: impl manual index build

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy and fmt

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add idempotency check for manual build

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: apply suggestions

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: update proto

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: apply suggestions

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: fmt

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: update proto source to greptimedb

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: cargo.lock

Signed-off-by: SNC123 <sinhco@outlook.com>

---------

Signed-off-by: SNC123 <sinhco@outlook.com>
2025-11-25 15:21:30 +00:00
Weny Xu
6b6d1ce7c4 feat: introduce remap_manifests for RegionEngine (#7265)
* refactor: consolidate RegionManifestOptions creation logic

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce`remap_manifests` for `RegionEngine`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-25 12:09:20 +00:00
dennis zhuang
7e4f0af065 fix: mysql binary date type and multi-lang ci tests (#7291)
* fix: mysql binary date type

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add unit test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: typo

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* ci: add multi lang integration tests ci

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: path and branch

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* ci: prevent runner reuse

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: ci

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* ci: Multi-language Integration Tests triggered only when pushing to main

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-11-25 08:26:50 +00:00
yihong
d811c4f060 fix: pre-commit all files failed (#7290)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-11-25 07:27:46 +00:00
dennis zhuang
be3c26f2b8 fix: postgres timezone setting by default (#7289)
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-11-25 03:00:43 +00:00
Ning Sun
9eb44071b1 fix: postgres show statement describe and timestamp text parsing (#7286) 2025-11-24 19:01:50 +00:00
ZonaHe
77e507cbe8 feat: update dashboard to v0.11.8 (#7281)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-11-24 14:02:33 +00:00
Ruihang Xia
5bf72ab327 feat: decode_primary_key method for debugging (#7284)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* third param

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: support convert Dictionary type to ConcreteDataType

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change to list array

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify file_stream::create_stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify FileRegion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* type alias

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove stale test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-24 12:41:54 +00:00
shuiyisong
9f4902b10a feat: reloadable tls client config (#7230)
* feat: add ReloadableClientTlsConfig

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: merge tls option with the reloadable

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: rename function

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update comment

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: extract tls loader

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor comment update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add serde default to watch field

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add log

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: add error log

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-24 11:52:11 +00:00
Ruihang Xia
b32ca3ad86 perf: parallelize file source region (#7285)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-24 11:37:48 +00:00
fys
d180cc8f4b chore: add INFORMATION_SCHEMA_ALERTS_TABLE_ID const value (#7288) 2025-11-24 11:32:14 +00:00
LFC
b099abc3a3 refactor: pub HttpOutputWriter for external use (#7287)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-24 11:29:08 +00:00
discord9
52a576cf6d feat: basic gc scheduler (#7263)
* feat: basic gc scheduler

Signed-off-by: discord9 <discord9@163.com>

* refactor: rm dup code

Signed-off-by: discord9 <discord9@163.com>

* docs: todo for cleaner code

Signed-off-by: discord9 <discord9@163.com>

* chore

Signed-off-by: discord9 <discord9@163.com>

* feat: rm retry path

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

* feat: skip first full listing after metasrv start

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-11-24 07:57:18 +00:00
jeremyhi
c0d0b99a32 feat: track query memory pool (#7219)
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
2025-11-24 06:18:23 +00:00
shuiyisong
7d575d18ee fix: unlimit trace_id query in jaeger API (#7283)
fix: unlimit trace_id query in jaeger API

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-24 03:41:48 +00:00
LFC
ff99bce37c refactor: make json value use json type (#7248)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-24 02:40:48 +00:00
Ning Sun
2f447e6f91 fix: postgres extended query parameter parsing and type check (#7276)
* fix: postgres extended query parameter parsing and type check

* test: update sqlness output

* feat: implement FromSqlText for pg_interval

* chore: toml format
2025-11-24 02:40:35 +00:00
fys
c9a7b1fd68 docs(config): clarify store_addrs format (#7279) 2025-11-21 22:26:52 +00:00
Ruihang Xia
8c3da5e81f feat: simplify file_stream::create_stream (#7275)
* simplify file_stream::create_stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify FileRegion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix merge error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unused errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-21 12:17:59 +00:00
Ruihang Xia
c152a45d44 feat: support Dictionary type (#7277)
* feat: support Dictionary type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update proto

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-21 11:21:32 +00:00
Ruihang Xia
c054c13e48 perf: avoid unnecessary merge sort (#7274)
* perf: avoid unnecessary merge sort

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fantastic if chain

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-21 09:02:25 +00:00
LFC
4a7c16586b refactor: remove Vectors from RecordBatch completely (#7184)
* refactor: remove `Vector`s from `RecordBatch` completely

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-21 08:53:35 +00:00
fys
c5173fccfc chore: add default value to sparse_primary_key_encoding config item (#7273) 2025-11-21 08:22:55 +00:00
LFC
c02754b44c feat: udf json_get_object (#7241)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-21 04:50:38 +00:00
dennis zhuang
0b4f00feef fix!: align numeric type aliases with PostgreSQL and MySQL (#7270)
* fix: align numeric type aliases with those used in PostgreSQL and MySQL

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: update create_type_alias test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: fix colon

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: clone

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: style

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-21 04:49:17 +00:00
Ruihang Xia
c13febe35d feat: simplify merge scan stream (#7269)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-21 03:50:21 +00:00
Ning Sun
29d23e0ba1 fix: return sqlalchemy compatible version string in version() (#7271) 2025-11-21 03:30:11 +00:00
Ruihang Xia
25fab2ba7d feat: don't validate external table's region schema (#7268)
* feat: don't validate external table's region schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-21 03:28:14 +00:00
dennis zhuang
ec8263b464 fix: log not print (#7272)
fix: log missing

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-11-21 03:14:45 +00:00
Weny Xu
01ea7e1468 chore: add tests for election reset and region lease failure handling (#7266)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-20 12:06:51 +00:00
WaterWhisperer
7f1da17150 feat: support alter database compaction options (#7251)
Signed-off-by: WaterWhisperer <waterwhisperer24@qq.com>
2025-11-20 01:39:35 +00:00
discord9
0cee4fa115 feat: gc get ref from manifest (#7260)
feat: get file ref from other manifest

Signed-off-by: discord9 <discord9@163.com>
2025-11-19 12:13:28 +00:00
discord9
e59612043d feat: gc scheduler ctx&procedure (#7252)
* feat: gc ctx&procedure

Signed-off-by: discord9 <discord9@163.com>

* fix: handle region not found case

Signed-off-by: discord9 <discord9@163.com>

* docs: more explanation & todos

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

* chore: add time for region gc

Signed-off-by: discord9 <discord9@163.com>

* fix: explain why loader for gc region should fail

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-11-19 08:35:17 +00:00
Ruihang Xia
5d8819e7af fix: dynamic reload tracing layer loses trace id (#7257)
* not working

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* works

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-11-19 06:16:56 +00:00
Yingwen
8b7b5c17c7 ci: update review code owners (#7250)
* ci: update review code owners

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: at least two owners

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: Update index owners

Co-authored-by: jeremyhi <jiachun_feng@proton.me>
Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: jeremyhi <jiachun_feng@proton.me>
2025-11-18 11:50:14 +00:00
Yingwen
ee35ec0a39 feat: split batches before merge (#7225)
* feat: split batches by rule in build_flat_sources()

It checks the num_series and splits batches when the series cardinality
is low

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: panic when no num_series available

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: don't subtract file index if checking mem range

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comments and control flow

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-18 08:19:39 +00:00
McKnight22
605f3270e5 feat: implement compressed CSV/JSON export functionality (#7162)
* feat: implement compressed CSV/JSON export functionality

- Add CompressedWriter for real-time compression during CSV/JSON export
- Support GZIP, BZIP2, XZ, ZSTD compression formats
- Remove LazyBufferedWriter dependency for simplified architecture
- Implement Encoder -> Compressor -> FileWriter data flow
- Add tests for compressed CSV/JSON export

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>
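
A sketch of the Encoder -> Compressor -> FileWriter flow for the GZIP case, using the flate2 crate; the real CompressedWriter abstracts over GZIP, BZIP2, XZ and ZSTD:

```rust
use std::fs::File;
use std::io::{self, Write};
use flate2::{write::GzEncoder, Compression};

// Each CSV-encoded row is pushed through the compressor into the file,
// so compression happens in real time rather than as a post-processing step.
fn write_csv_gz(path: &str, rows: &[String]) -> io::Result<()> {
    let file = File::create(path)?;
    let mut gz = GzEncoder::new(file, Compression::default());
    for row in rows {
        gz.write_all(row.as_bytes())?;
        gz.write_all(b"\n")?;
    }
    gz.finish()?.sync_all() // flush the compressor, then the file
}
```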

* feat: implement compressed CSV/JSON export functionality

- refactor and extend compressed_writer tests
- add coverage for Bzip2 and Xz compression

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* feat: implement compressed CSV/JSON export functionality

- Switch to threshold-based chunked flushing
- Avoid unnecessary writes on empty buffers
- Replace direct write_all() calls with the new helper for consistency

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* feat: implement compressed CSV/JSON import (COPY FROM) functionality

- Add support for reading compressed CSV and JSON in COPY FROM
- Support GZIP, BZIP2, XZ, ZSTD compression formats
- Add tests for compressed CSV/JSON import

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* feat: implement compressed CSV/JSON export/import functionality

- Fix review comments

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* feat: implement compressed CSV/JSON export/import functionality

- Move temp_dir out of the loop

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

* feat: implement compressed CSV/JSON export/import functionality

- Fix unreasonable locking logic

Co-authored-by: jeremyhi <jiachun_feng@proton.me>
Signed-off-by: McKnight22 <tao.wang.22@outlook.com>

---------

Signed-off-by: McKnight22 <tao.wang.22@outlook.com>
Co-authored-by: jeremyhi <jiachun_feng@proton.me>
2025-11-18 02:55:58 +00:00
LFC
4e9f419de7 refactor: make show tables fast under large tables (#7231)
fix: `show tables` is too slow when there are many tables

Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-18 02:51:59 +00:00
discord9
29bbff3c90 feat: gc worker only local regions&test (#7203)
* feat: gc worker only on local region

Signed-off-by: discord9 <discord9@163.com>

* more check

Signed-off-by: discord9 <discord9@163.com>

* chore: stuff

Signed-off-by: discord9 <discord9@163.com>

* fix: ignore async index file for now

Signed-off-by: discord9 <discord9@163.com>

* fix: file removal rate calc

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* clippy

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-11-18 02:45:09 +00:00
dennis zhuang
ff2a12a49d build: update opensrv-mysql to 0.10 (#7246)
* build: update opensrv-mysql to 0.10

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: format toml

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: format toml

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-11-18 02:38:16 +00:00
Yingwen
77483ad7d4 fix: allow compacting L1 files under append mode (#7239)
* fix: allow compacting L1 files under append mode

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: limit the number of compaction input files

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-17 12:46:30 +00:00
Weny Xu
6adc348fcd feat: support parallel table operations in COPY DATABASE (#7213)
* feat: support parallel table operations in COPY DATABASE

Signed-off-by: WenyXu <wenymedia@gmail.com>
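
A sketch of bounded per-table concurrency (names assumed; the real code wires this to the `parallelism` parameter added below):

```rust
use futures::{stream, StreamExt};

async fn copy_table(_table: String) { /* COPY one table */ }

// At most `parallelism` tables are copied concurrently.
async fn copy_database(tables: Vec<String>, parallelism: usize) {
    stream::iter(tables)
        .map(copy_table)
        .buffer_unordered(parallelism)
        .collect::<Vec<_>>()
        .await;
}
```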

* feat(cli): add a new `parallelism` parameter to control the parallelism during export

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add sqlness tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor(cli): improve parallelism configuration for data export and import

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-17 12:22:51 +00:00
Ruihang Xia
cc61af7c65 feat: dynamic enable or disable trace (#6609)
* wip

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* set `TRACE_RELOAD_HANDLE`

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* wrap http api

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update dependencies

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* type alias and unwrap_or_else

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* better error handling

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* lazy initialize tracer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2025-11-17 12:16:46 +00:00
Ruihang Xia
1eb8d6b76b feat: build partition sources in parallel (#7243)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-17 11:44:48 +00:00
discord9
6c93c7d299 chore: bump version to beta.2 (#7238)
* chore: bump version to beta.2

Signed-off-by: discord9 <discord9@163.com>

* test: fix sqlness

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-11-17 08:57:59 +00:00
LFC
cdf9d18c36 refactor: create JsonValue for json value (#7214)
* refactor: create `JsonValue` for json value

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* update proto

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-17 08:21:17 +00:00
LFC
32168e8ca8 ci: dev-build with large page size (#7228)
* ci: enable building greptimedb with large page size in dev-build

Signed-off-by: luofucong <luofc@foxmail.com>

* Apply suggestions from code review

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-11-17 02:38:16 +00:00
WaterWhisperer
de9ae6066f refactor: remove export_metrics and related configuration (#7236)
Signed-off-by: WaterWhisperer <waterwhisperer24@qq.com>
2025-11-17 02:32:22 +00:00
Ning Sun
2bbc4bc4bc fix: correct signature of current_schemas function (#7233) 2025-11-17 01:42:09 +00:00
Alan Tang
b1525e566b chore: fix SQLness test for COPY command from CSV file (#7235)
chore: fix SQLness test for COPY command from CSV file

Signed-off-by: StandingMan <jmtangcs@gmail.com>
2025-11-16 07:08:13 +00:00
Yingwen
df954b47d5 fix: clone the page before putting into the index cache (#7229)
* fix: clone the page before putting into the index cache

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix warnings

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-15 17:52:32 +00:00
liyang
acfd674332 ci: update helm-charts and homebrew-greptime pull request reviewer (#7232)
* ci: update helm-charts and homebrew-greptime pull request reviewer

Signed-off-by: liyang <daviderli614@gmail.com>

* add reviewer

Signed-off-by: liyang <daviderli614@gmail.com>

---------

Signed-off-by: liyang <daviderli614@gmail.com>
2025-11-15 17:51:28 +00:00
shuiyisong
e7928aaeee chore: add tls-watch option in cmd (#7226)
* chore: add tls-watch cmd option

* chore: add watch tls option to standalone and fe cmd

* chore: fix clippy

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: address CR comment

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: address CR issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-11-14 09:58:52 +00:00
Weny Xu
d5f52013ec feat: introduce batch region migration (#7176)
* feat: introduce batch region migration

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: try fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix get table route

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: avoid cloning vec

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-14 08:15:18 +00:00
Weny Xu
c1e762960a fix: obtain system time after fetching lease values (#7223)
* fix: acquire system time inside closure

Signed-off-by: WenyXu <wenymedia@gmail.com>
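
The race in sketch form (illustrative, millisecond-based, not the metasrv code): if `now` is captured before a slow fetch, leases can look fresher than they are; capturing it after the fetch, inside the closure, fixes that:

```rust
use std::time::{SystemTime, UNIX_EPOCH};

fn now_ms() -> u64 {
    SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .expect("clock before epoch")
        .as_millis() as u64
}

// Judge lease liveness against the time *after* the values were fetched.
fn lease_alive(fetch_last_heartbeat_ms: impl Fn() -> u64, ttl_ms: u64) -> bool {
    let last = fetch_last_heartbeat_ms(); // possibly slow
    now_ms().saturating_sub(last) <= ttl_ms
}
```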

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-14 06:53:15 +00:00
Yingwen
7cc0439cc9 feat: load latest index file first (#7221)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-13 08:56:44 +00:00
shuiyisong
6eb7efcb76 chore: add debug log on receiving logs (#7211)
* chore: add debug log on receiving logs

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add debug log on receiving logs

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-13 07:15:26 +00:00
dennis zhuang
5d0e94bfa8 docs: update project status and tweak readme (#7216)
* docs: update project status and tweak readme

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: minor change

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: add grafana datasource plugin project link

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: add scenarios

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: apply suggestions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-11-12 15:06:56 +00:00
shuiyisong
e842d401fb chore: allow unlimited return if timerange is applied (#7222)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-12 10:00:11 +00:00
discord9
8153068b89 chore: bump main branch version to 1.0.0-beta.1 (#7191)
* chore: bump main branch version to 1.0.0-beta.1

Signed-off-by: discord9 <discord9@163.com>

* rename beta.1 to beta1

Signed-off-by: discord9 <discord9@163.com>

* again

Signed-off-by: discord9 <discord9@163.com>

* test: correct redact version

Signed-off-by: discord9 <discord9@163.com>

* chore

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-11-11 14:52:03 +00:00
Yingwen
bb6a3a2ff3 feat: support altering sst format for a table (#7206)
* refactor: remove memtable_builder from MitoRegion

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add alter format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support changing the format and memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support changing sst format via table options

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: set scanner and memtable builder with correct format

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix incorrect metadata in version after alter

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add sqlness test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: replace region_id in sqlness result

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: create correct memtable when setting sst_format explicitly

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: sqlness alter_format test set sst_format to primary_key

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove verbose log

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-11 13:19:00 +00:00
Weny Xu
49c6812e98 fix: deregister failure detectors on rollback and improve timeout handling (#7212)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-11 09:44:27 +00:00
Yingwen
24671b60b4 feat: tracks index files in another cache and preloads them (#7181)
* feat: divide parquet and puffin index

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: download index files when we open the region

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use different label for parquet/puffin

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: control parallelism and cache size by env

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: change gauge to counter

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: correct file type labels in file cache

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: move env to config and change cache ratio to percent

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: checks capacity before download and refine metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: change open to return MitoRegionRef

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract download to FileCache

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: run load cache task in write cache

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: check region state before downloading files

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update config docs and test

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: use file id from index_file_id to compute puffin key

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: skip loading cache in some states

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-11 08:37:32 +00:00
jeremyhi
c7fded29ee feat: query mem limiter (#7078)
* feat: query mem limiter

* feat: config docs

* feat: frontend query limit config

* fix: unused imports

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: add metrics for query memory tracker

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: right position for tracker

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: avoid race condition

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: soft and hard limit

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: docs

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: when soft_limit == 0

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: upgrade limit algorithm

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: remove batch window

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: batch mem size

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: refine limit algorithm

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: get sys mem

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: minor change

* feat: up tracker to the top stream

* feat: estimated_size for batch

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: minor refactor

* feat: scan_memory_limit connect to max_concurrent_queries

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: make callback clearly

* feat: add unlimited enum

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: by review comment

* chore: comment on recursion_limit

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: refactor and put permit into RegionScanExec

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: multiple lazy static blocks

* chore: minor change

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

---------

Signed-off-by: jeremyhi <fengjiachun@gmail.com>
2025-11-11 07:47:55 +00:00
Ruihang Xia
afa8684ebd feat: report scanner metrics (#7200)
* feat: report scanner metrics

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/mito2/src/read/scan_util.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-11-11 07:40:08 +00:00
Weny Xu
47937961f6 feat(metric)!: enable sparse primary key encoding by default (#7195)
* feat(metric): enable sparse primary key encoding by default

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update config.md

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix sqlness

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/mito-codec/src/key_values.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* feat: only allow setting primary key encoding for metric engine

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support deleting rows from logical region instead of physical region

This keeps the behavior the same as put. It's easier to support sparse
encoding when deleting from logical regions. Now the metric engine doesn't
support deleting rows from the physical region directly.

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update sqlness

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove unused error

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-11-11 06:33:51 +00:00
Lei, HUANG
182cce4cc2 fix(mito): allow region edit in writable state (#7201)
* fix/region-expire-state:
 Refactor region state handling in compaction task and manifest updates

 - Introduce a variable to hold the current region state for clarity in compaction task updates.
 - Add an expected_region_state field to RegionEditResult to manage region state expectations during manifest handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/region-expire-state:
 Refactor region state handling in compaction task

 - Replace direct assignment of `RegionLeaderState::Writable` with dynamic state retrieval and conditional check for leader state.
 - Modify `RegionEditResult` to include a flag `update_region_state` instead of `expected_region_state` to indicate if the region state should be updated to writable.
 - Adjust handling of `RegionEditResult` in `handle_manifest` to conditionally update region state based on the new flag.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-11-11 06:16:23 +00:00
Weny Xu
ac0e95c193 fix: correct leader state reset and region migration locking consistency (#7199)
* fix(meta): remove table route cache in region migration ctx

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fix clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix campaign reset not clearing leader states

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: gracefully handle region lease renewal errors

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-11 01:19:26 +00:00
Lei, HUANG
f567dcef86 feat: allow fuzz input override through env var (#7208)
* feat/allow-fuzz-input-override:
 Add environment override for fuzzing parameters and seed values

 - Implement `get_fuzz_override` function to read override values from environment variables for fuzzing parameters.
 - Allow overriding `SEED`, `ACTIONS`, `ROWS`, `TABLES`, `COLUMNS`, `INSERTS`, and `PARTITIONS` in various fuzzing targets.
 - Introduce new constants `GT_FUZZ_INPUT_MAX_PARTITIONS` and `FUZZ_OVERRIDE_PREFIX`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
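
A sketch of the override helper; the `GT_FUZZ_INPUT_` prefix follows the constants named above, but the exact signature is assumed:

```rust
use std::{env, str::FromStr};

// e.g. GT_FUZZ_INPUT_SEED=42 GT_FUZZ_INPUT_ROWS=1000 cargo fuzz run ...
fn get_fuzz_override<T: FromStr>(name: &str) -> Option<T> {
    env::var(format!("GT_FUZZ_INPUT_{name}"))
        .ok()
        .and_then(|v| v.parse().ok())
}
```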

* feat/allow-fuzz-input-override: Remove GT_FUZZ_INPUT_MAX_PARTITIONS constant and usage from fuzzing utils and tests

 • Deleted the GT_FUZZ_INPUT_MAX_PARTITIONS constant from fuzzing utility functions.
 • Updated FuzzInput struct in fuzz_migrate_mito_regions.rs to use a hardcoded range instead of get_gt_fuzz_input_max_partitions for determining the number of partitions.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/allow-fuzz-input-override:
 Improve fuzzing documentation with environment variable overrides

 Enhanced the fuzzing instructions in the README to include guidance on how to override fuzz input using environment variables, providing an example for better clarity.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-11-10 14:02:23 +00:00
Ruihang Xia
30192d9802 feat: disable default compression for __op_type column (#7196)
* feat: disable default compression for `__op_type` column

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert unrelated code

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-10 07:59:25 +00:00
Ning Sun
62d109c1f4 fix: allow case-insensitive timezone settings (#7207) 2025-11-08 15:56:27 +00:00
Alan Tang
910a383420 feat(expr): support avg functions on vector (#7146)
* feat(expr): support vec_elem_avg function

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: support vec_avg function

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* test: add more query test for avg aggregator

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* fix: fix the merge batch mode

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* refactor: use sum and count as state for avg function

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* refactor: refactor merge batch mode for avg function

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: add additional vector restrictions for validation

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

---------

Signed-off-by: Alan Tang <jmtangcs@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-11-07 13:42:14 +00:00
Weny Xu
af6bbacc8c fix: add serde defaults for MetasrvNodeInfo (#7204)
* fix: add serde defaults for `MetasrvNodeInfo`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fmt

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-07 09:50:09 +00:00
Yingwen
7616ffcb35 test: only set ttl to forever in fuzz alter test (#7202)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-07 07:32:53 +00:00
shuiyisong
a3dbd029c5 chore: remove ttl option if present in trace meta table (#7197)
* chore: remove ttl option if present in trace meta table

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-06 11:51:45 +00:00
Yingwen
9caeae391e chore: print root cause in opendal logging interceptor (#7183)
* chore: print root cause in opendal

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract a function root_source() to get the cause

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-06 08:48:59 +00:00
fys
35951afff9 chore: remove unnecessary code related to triggers (#7192)
* chore: remove unused triggers memory tables

* fix: cargo clippy

* fix: sqlness
2025-11-06 08:09:14 +00:00
Ruihang Xia
a049b68c26 feat: import backup data from local files (#7180)
* feat: import backup data from local files

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add unit tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-06 07:33:33 +00:00
Lei, HUANG
c2ff563ac6 fix(mito): avoid shortcut in picking multi window files (#7174)
* fix/pick-continue:
 ### Add Tests for TWCS Compaction Logic

 - **`twcs.rs`**:
   - Modified the logic in `TwcsPicker` to handle cases with zero runs by using `continue` instead of `return`.
   - Added two new test cases: `test_build_output_multiple_windows_with_zero_runs` and `test_build_output_single_window_zero_runs` to verify the behavior of the compaction logic when there are zero runs in
 the windows.

 - **`memtable_util.rs`**:
   - Removed unused import `PredicateGroup`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/pick-continue:
 ### Commit Message

 Enhance Compaction Process with Expired SST Handling and Testing

 - **`compactor.rs`**:
   - Introduced handling for expired SSTs by updating the manifest immediately upon task completion.
   - Added new test cases to verify the handling of expired SSTs and manifest updates.

 - **`task.rs`**:
   - Implemented `remove_expired` function to handle expired SSTs by updating the manifest and notifying the region worker loop.
   - Refactored `handle_compaction` to `handle_expiration_and_compaction` to integrate expired SST removal before merging inputs.
   - Added logging and error handling for expired SST removal process.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/progressive-compaction:
 **Enhance Compaction Task Error Handling**

 - Updated `task.rs` to conditionally execute the removal of expired SST files only when they exist, improving error handling and performance.
 - Added a check for non-empty `expired_ssts` before initiating the removal process, ensuring unnecessary operations are avoided.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/progressive-compaction:
 ### Refactor `DefaultCompactor` to Extract `merge_single_output` Method

 - **File**: `src/mito2/src/compaction/compactor.rs`
   - Extracted the logic for merging a single compaction output into SST files into a new method `merge_single_output` within the `DefaultCompactor` struct.
   - Simplified the `merge_ssts` method by utilizing the new `merge_single_output` method, reducing code duplication and improving maintainability.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/progressive-compaction:
 ### Add Max Background Compaction Tasks Configuration

 - **`compaction.rs`**: Added `max_background_compactions` to the compaction scheduler to limit background tasks.
 - **`compaction/compactor.rs`**: Removed immediate manifest update logic after task completion.
 - **`compaction/picker.rs`**: Introduced `max_background_tasks` parameter in `new_picker` to control task limits.
 - **`compaction/twcs.rs`**: Updated `TwcsPicker` to include `max_background_tasks` and truncate inputs exceeding this limit. Added related test cases to ensure functionality.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/pick-continue:
 ### Improve Error Handling and Task Management in Compaction

 - **`task.rs`**: Enhanced error handling in `remove_expired` function by logging errors without halting the compaction process. Removed the return of `Result` type and added detailed logging for various failure scenarios.
 - **`twcs.rs`**: Adjusted task management logic by removing input truncation based on `max_background_tasks` and instead discarding remaining tasks if the output size exceeds the limit. This ensures better control over task execution and resource management.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/pick-continue:
 ### Add Unit Tests for Compaction Task and TWCS Picker

 - **`task.rs`**: Added unit tests to verify the behavior of `PickerOutput` with and without expired SSTs.
 - **`twcs.rs`**: Introduced tests for `TwcsPicker` to ensure correct handling of `max_background_tasks` during compaction, including scenarios with and without task truncation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/pick-continue:
 **Improve Error Handling and Notification in Compaction Task**

 - **File:** `task.rs`
   - Changed log level from `warn` to `error` for manifest update failures to enhance error visibility.
   - Refactored the notification mechanism for expired file removal by using `BackgroundNotify::RegionEdit` with `RegionEditResult` to streamline the process.
   - Simplified error handling by consolidating match cases into a single `if let Err` block for better readability and maintainability.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-11-06 06:27:17 +00:00
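A minimal sketch of the zero-runs fix described above: a time window with no sorted runs must be skipped with `continue` rather than ending the whole picking loop, otherwise later windows never get compacted. Types are simplified stand-ins for mito2's real picker structures.

```rust
/// A window with no runs is skipped; the old early `return` silently
/// dropped every later window from the compaction plan.
struct Window {
    runs: usize,
}

fn build_outputs(windows: &[Window]) -> Vec<usize> {
    let mut outputs = Vec::new();
    for (i, window) in windows.iter().enumerate() {
        if window.runs == 0 {
            // Before the fix: `return outputs;` — truncated the plan here.
            continue;
        }
        outputs.push(i);
    }
    outputs
}

fn main() {
    let windows = [Window { runs: 2 }, Window { runs: 0 }, Window { runs: 1 }];
    // The middle window is skipped, but the last one is still compacted.
    assert_eq!(build_outputs(&windows), vec![0, 2]);
}
```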
Yingwen
82812ff19e test: add a unit test to scan data from memtable in append mode (#7193)
* test: add tests for scanning append mode before flush

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract a function maybe_dedup_one

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: add flat format to docs.yml so we can make it required later

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-06 06:11:58 +00:00
Ning Sun
4a77167138 chore: update readme (#7187) 2025-11-06 03:21:01 +00:00
Lei, HUANG
934df46f53 fix(mito): append mode in flat format not working (#7186)
* mito2: add unit test for flat single-range append_mode dedup behavior

Verify memtable_flat_sources skips dedup when append_mode is true and
performs dedup otherwise for single-range flat memtables, preventing
regressions in the new append_mode path.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/flat-source-merge:
 ### Improve Column Metadata Extraction Logic

 - **File**: `src/common/meta/src/ddl/utils.rs`
   - Modified the `extract_column_metadatas` function to use `swap_remove` for extracting the first schema and decode column metadata for comparison instead of raw bytes. This ensures that the extension map is considered during verification, enhancing the robustness of metadata consistency checks across datanodes.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-11-06 03:19:39 +00:00
Ning Sun
fb92e4d0b2 feat: add greptime's arrow json extension type (#7168)
* feat: add arrow json extension type

* feat: add json structure settings to extension type

* refactor: store json structure settings as extension metadata

* chore: make binary an acceptable type for extension
2025-11-05 18:34:57 +00:00
Yingwen
0939dc1d32 test: run sqlness for flat format (#7178)
* test: support flat format in sqlness

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: replace region stats test result with NUM

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: add flat format to sqlness ci

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-05 11:23:12 +00:00
shuiyisong
50c9600ef8 fix: stabilize test results (#7182)
* fix: stabilize test results

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-05 09:19:23 +00:00
Lei, HUANG
abcfbd7f41 chore(metrics): add region server requests failures count metrics (#7173)
* chore/add-region-insert-failure-metric: Add metric for failed insert requests to region server in datanode module

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/add-region-insert-failure-metric:
 Add metric for tracking failed region server requests

 - Introduce a new metric `REGION_SERVER_REQUEST_FAILURE_COUNT` to count failed region server requests.
 - Update `REGION_SERVER_INSERT_FAIL_COUNT` metric description for consistency.
 - Implement error handling in `RegionServerHandler` to increment the new failure metric on request errors.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-11-05 07:23:40 +00:00
Ruihang Xia
aac3ede261 feat: allow creating logical table with the same partition rule as the physical table's (#7177)
* feat: allow creating logical table with the same partition rule as the physical table's

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-11-05 06:37:17 +00:00
Yingwen
3001c2d719 feat: BulkMemtable stores small fragments in another buffer (#7164)
* feat: buffer small parts in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use assert_eq instead of assert

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: collect bulk memtable scan metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: report metrics early

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-05 06:35:32 +00:00
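A sketch of the small-fragment buffering idea from the commit above, assuming a size threshold below which encoded parts are accumulated in a side buffer instead of becoming standalone parts; the threshold value and types are illustrative.

```rust
/// Encoded parts below a threshold go to a shared side buffer, keeping
/// scan-time merging cheap. Threshold and types are assumptions.
const SMALL_PART_THRESHOLD: usize = 4096;

#[derive(Default)]
struct BulkParts {
    /// Full-size encoded parts, stored individually.
    parts: Vec<Vec<u8>>,
    /// Accumulated small fragments, merged into one buffer.
    small_buffer: Vec<u8>,
}

impl BulkParts {
    fn push_part(&mut self, encoded: Vec<u8>) {
        if encoded.len() < SMALL_PART_THRESHOLD {
            self.small_buffer.extend_from_slice(&encoded);
        } else {
            self.parts.push(encoded);
        }
    }
}

fn main() {
    let mut bulk = BulkParts::default();
    bulk.push_part(vec![0u8; 16]); // small fragment: buffered
    bulk.push_part(vec![0u8; 8192]); // large part: kept as-is
    assert_eq!((bulk.parts.len(), bulk.small_buffer.len()), (1, 16));
}
```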
shuiyisong
6caff50d01 chore: improve search traces and jaeger resp (#7166)
* chore: add jaeger field in trace query

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update search v1 with tags

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update col matching using col names

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minify code with macro

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change macro to inline function

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix filter with tags & add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-11-04 05:49:08 +00:00
ZonaHe
421f4eec05 feat: update dashboard to v0.11.7 (#7170)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
Co-authored-by: Ning Sun <sunng@protonmail.com>
2025-11-04 02:52:26 +00:00
Yingwen
d944e5c6b8 test: add sqlness for delete and filter (#7171)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-11-04 02:13:47 +00:00
fys
013d61acbb chore(deps): remove sqlx pg feature in greptimedb build (#7172)
* chore(deps): remove sqlx pg feature in greptimedb build

* fix: ci
2025-11-03 18:49:00 +00:00
LFC
b7e834ab92 refactor: convert to influxdb values directly from arrow (#7163)
* refactor: convert to influxdb values directly from arrow

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-03 07:52:37 +00:00
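The "directly from arrow" refactors (this commit and its mysql/postgres/prometheus siblings) share one idea: downcast the Arrow column once and stream native values into the wire encoder, instead of materializing an intermediate `Value` per cell. A hedged sketch using the `arrow` crate, with a `Vec<String>` standing in for the encoder:

```rust
use std::sync::Arc;

use arrow::array::{Array, ArrayRef, Float64Array};

/// Downcast once, then read native f64s straight into the output with no
/// per-cell Value allocation.
fn write_f64_column(col: &ArrayRef, out: &mut Vec<String>) {
    let col = col
        .as_any()
        .downcast_ref::<Float64Array>()
        .expect("float64 column");
    for i in 0..col.len() {
        if col.is_null(i) {
            out.push("NULL".to_string());
        } else {
            out.push(col.value(i).to_string());
        }
    }
}

fn main() {
    let col: ArrayRef = Arc::new(Float64Array::from(vec![Some(1.5), None]));
    let mut out = Vec::new();
    write_f64_column(&col, &mut out);
    assert_eq!(out, ["1.5", "NULL"]);
}
```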
LFC
5eab9a1be3 feat: json vector builder (#7151)
* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

Update src/datatypes/src/vectors/json/builder.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

feat: json vector builder

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-11-03 06:06:54 +00:00
Weny Xu
9de680f456 refactor: add support for batch region upgrade operations part2 (#7160)
* add tests for metric engines

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: catchup in background

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: replace sequential catchup with batch processing

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* remove single catchup

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remove unused error

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: refine catchup tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-11-03 06:01:38 +00:00
Ning Sun
5deaaa59ec chore: fix typo (#7169) 2025-11-03 02:22:34 +00:00
dennis zhuang
61724386ef fix: potential failure in tests (#7167)
* fix: potential failure in the test_index_build_type_compact test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: relax timestamp checking in test_timestamp_default_now

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-10-31 22:08:59 +00:00
Weny Xu
6960a0183a refactor: add support for batch region upgrade operations part1 (#7155)
* refactor: convert UpgradeRegion instruction to batch operation

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce `handle_batch_catchup_requests` fn for mito engine

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce `handle_batch_catchup_requests` fn for metric engine

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: suggestion and add ser/de tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-31 03:08:38 +00:00
Sicong Hu
30894d7599 feat(mito): Optimize async index building with priority-based batching (#7034)
* feat: add priority-based batching to IndexBuildScheduler

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: clean old puffin-related cache

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add test for IndexBuildScheduler

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: different index file id for read and async write

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: different index file id for delete

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: apply suggestions

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: apply comments

Signed-off-by: SNC123 <sinhco@outlook.com>

* combine files and index files

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: add index_file_id into ManifestSstEntry

Signed-off-by: SNC123 <sinhco@outlook.com>

* Update src/mito2/src/gc.rs

Signed-off-by: SNC123 <sinhco@outlook.com>

* resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: sqlness

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: fmt

Signed-off-by: SNC123 <sinhco@outlook.com>

---------

Signed-off-by: SNC123 <sinhco@outlook.com>
2025-10-31 02:13:17 +00:00
Yingwen
acf38a7091 fix: avoid filtering rows with delete op by fields under merge mode (#7154)
* chore: clear allow dead_code for flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: pass exprs to build appliers

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: split field filters and index appliers

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support skip filtering fields in RowGroupPruningStats

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add PreFilterMode to config whether to skip filtering fields

Adds the PreFilterMode to the RangeBase and sets it in
ParquetReaderBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support skipping fields in prune reader

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support pre filter mode in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: pass PreFilterMode to memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test mito filter delete

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove commented code

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: move predicate and sequence to RangesOptions

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: skip cargo gc

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix cargo build warning

Signed-off-by: evenyag <realevenyag@gmail.com>

* Revert "ci: skip cargo gc"

This reverts commit 1ec9594a6d.

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-30 12:14:45 +00:00
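A sketch of the `PreFilterMode` idea from the commit above: under merge mode a delete row may carry empty field values, so applying field predicates before merging could drop the delete marker and resurrect an older row. Names are illustrative, not the actual mito2 API.

```rust
/// Whether a predicate may run before merging rows of the same key.
#[derive(Clone, Copy, PartialEq)]
enum PreFilterMode {
    /// Safe to pre-filter on every column (e.g. append mode: no deletes).
    All,
    /// Skip field predicates before merge; keys and time are still filtered.
    SkipFields,
}

fn apply_before_merge(mode: PreFilterMode, is_field_column: bool) -> bool {
    match mode {
        PreFilterMode::All => true,
        PreFilterMode::SkipFields => !is_field_column,
    }
}

fn main() {
    assert!(apply_before_merge(PreFilterMode::All, true));
    assert!(!apply_before_merge(PreFilterMode::SkipFields, true));
    assert!(apply_before_merge(PreFilterMode::SkipFields, false));
}
```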
LFC
109b70750a refactor: convert to prometheus values directly from arrow (#7153)
* refactor: convert to prometheus values directly from arrow

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-30 10:24:12 +00:00
shuiyisong
ee5b7ff3c8 chore: unify initialization of channel manager (#7159)
* chore: unify initialization of channel manager and extract loading tls

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-30 04:26:02 +00:00
liyang
5d0ef376de fix: initializer container does not work (#7152)
* fix: initializer does not work

Signed-off-by: liyang <daviderli614@gmail.com>

* use one version of operator

Signed-off-by: liyang <daviderli614@gmail.com>

---------

Signed-off-by: liyang <daviderli614@gmail.com>
2025-10-29 18:11:55 +00:00
shuiyisong
11c0381fc1 chore: set default catalog using build env (#7156)
* chore: update reference to const

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: use option_env to set default catalog

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: use const_format

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update reference in cli

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: introduce a build.rs to set default catalog

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove unused feature gate

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-29 18:10:58 +00:00
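A minimal sketch of setting the default catalog from the build environment via `option_env!`, which reads a compile-time variable (set e.g. by `build.rs`) and falls back to the stock name when it is absent. The variable name below is hypothetical, not necessarily the one the PR uses.

```rust
// `option_env!` resolves at compile time; the fallback keeps the stock
// catalog name. "GREPTIME_DEFAULT_CATALOG" is a hypothetical variable name.
const DEFAULT_CATALOG_NAME: &str = match option_env!("GREPTIME_DEFAULT_CATALOG") {
    Some(name) => name,
    None => "greptime",
};

fn main() {
    println!("default catalog: {DEFAULT_CATALOG_NAME}");
}
```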
LFC
e8b7b0ad16 fix: memtable value push result was ignored (#7136)
* fix: memtable value push result was ignored

Signed-off-by: luofucong <luofc@foxmail.com>

* chore: apply suggestion

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-29 13:44:36 +00:00
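A sketch of the bug class fixed above: a fallible `push` whose `Result` was silently dropped, so a failed write looked like success. The builder type is a stand-in for the real memtable value builder.

```rust
/// Propagating the push error restores correctness; dropping it hid
/// capacity failures. Stand-in type, not the real builder.
struct ValueBuilder {
    values: Vec<i64>,
    capacity: usize,
}

impl ValueBuilder {
    fn push(&mut self, v: i64) -> Result<(), String> {
        if self.values.len() >= self.capacity {
            return Err("builder capacity exceeded".to_string());
        }
        self.values.push(v);
        Ok(())
    }
}

fn fill(builder: &mut ValueBuilder, values: &[i64]) -> Result<(), String> {
    for v in values {
        // Before the fix: `let _ = builder.push(*v);` — the error vanished.
        builder.push(*v)?;
    }
    Ok(())
}

fn main() {
    let mut builder = ValueBuilder { values: Vec::new(), capacity: 1 };
    assert!(fill(&mut builder, &[1, 2]).is_err());
}
```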
Weny Xu
6efffa427d fix: missing flamegraph feature in pprof dependency (#7158)
fix: fix pprof deps

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-29 11:41:21 +00:00
Ruihang Xia
6576e3555d fix: cache estimate methods (#7157)
* fix: cache estimate methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert page value change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestion from @evenyag

Co-authored-by: Yingwen <realevenyag@gmail.com>

* update test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-29 09:57:28 +00:00
Lei, HUANG
f0afd675e3 feat: objbench sub command for datanode (#7114)
* feat/objbench-subcmd:
 ### Add Object Storage Benchmark Tool and Update Dependencies

 - **`Cargo.lock` & `Cargo.toml`**: Added dependencies for `colored`, `parquet`, and `pprof` to support new features.
 - **`datanode.rs`**: Introduced `ObjbenchCommand` for benchmarking object storage, including command-line options for configuration and execution. Added `StorageConfig` and `StorageConfigWrapper` for storage engine configuration.
 - **`datanode.rs`**: Implemented a stub for `build_object_store` function to initialize object storage.

 These changes introduce a new subcommand for object storage benchmarking and update dependencies to support additional functionality.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* init

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: code style and clippy

* feat/objbench-subcmd:
 Improve error handling in `objbench.rs`

 - Enhanced error handling in `parse_config` and `parse_file_dir_components` functions by replacing `unwrap` with `OptionExt` and `context` for better error messages.
 - Updated `build_access_layer_simple` and `build_cache_manager` functions to use `map_err` for more descriptive error handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore: rebase main

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-10-29 05:26:29 +00:00
discord9
37bc2e6b07 feat: gc worker heartbeat instruction (#7118)
* again
* false by default
* test: config api
* refactor: per code review
* less info!
* even less info!!
* docs: gc regions instr
* refactor: group by region id
* per code review
* per review
* error handling
* test: fix
* todos
* fix after rebase
* after refactor

Signed-off-by: discord9 <discord9@163.com>
2025-10-29 02:59:36 +00:00
Ning Sun
a9d1d33138 feat: update datafusion-pg-catalog for better dbeaver support (#7143)
* chore: update datafusion-pg-catalog to 0.12.1

* feat: import more udfs
2025-10-28 18:42:03 +00:00
discord9
22d9eb6930 feat: part sort provide dyn filter (#7140)
* feat: part sort provide dyn filter

Signed-off-by: discord9 <discord9@163.com>

* fix: reset_state reset dynamic filter

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-28 02:44:29 +00:00
shuiyisong
da976e534d refactor: add test feature gate to numbers table (#7148)
* refactor: add test feature gate to numbers table

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add debug_assertions

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: extract numbers table provider

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: address CR issues

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-27 10:16:00 +00:00
discord9
f2bc92b9e6 refactor: use generic for heartbeat instruction handler (#7149)
* refactor: use generic

Signed-off-by: discord9 <discord9@163.com>

* w

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-27 09:09:48 +00:00
Weny Xu
785f9d7fd7 fix: add delays in reconcile tests for async cache invalidation (#7147)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-27 08:07:51 +00:00
shuiyisong
a20ac4f9e5 feat: prefix option for timestamp index and value column (#7125)
* refactor: use GREPTIME_TIMESTAMP const

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* feat: add config for default ts col name

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: replace GREPTIME_TIMESTAMP with function get

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update config doc

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove opts on flownode and metasrv

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add validation for ts column name

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: use get_or_init to avoid test error

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fmt

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update docs

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: using empty string to disable prefix

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update comment

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: address CR issues

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-27 08:00:03 +00:00
zyy17
0a3961927d refactor!: add a opentelemetry_traces_operations table to aggregate (service_name, span_name, span_kind) to improve query performance (#7144)
refactor: add a `*_operations` table to aggregate `(service_name, span_name, span_kind)` to improve query performance

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-10-27 03:36:22 +00:00
LFC
d7ed6a69ab feat: merge json datatype (#7142)
* feat: merge json datatype

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-27 03:30:52 +00:00
discord9
68247fc9b1 fix: count_state use stat to eval&predicate w/out region (#7116)
* fix: count_state use stat to eval

Signed-off-by: discord9 <discord9@163.com>

* cleanup

Signed-off-by: discord9 <discord9@163.com>

* fix: use predicate without region

Signed-off-by: discord9 <discord9@163.com>

* test: diverge standalone/dist impl

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-27 02:14:45 +00:00
Lei, HUANG
e386a366d0 feat: add HTTP endpoint to control prof.gdump feature (#6999)
* feat/gdump:
 ### Add Support for Jemalloc Gdump Flag

 - **`jemalloc.rs`**: Introduced `PROF_GDUMP` constant and added functions `set_gdump_active` and `is_gdump_active` to manage the gdump flag.
 - **`error.rs`**: Added error handling for reading and updating the jemalloc gdump flag with `ReadGdump` and `UpdateGdump` errors.
 - **`lib.rs`**: Exposed `is_gdump_active` and `set_gdump_active` functions for non-Windows platforms.
 - **`http.rs`**: Added HTTP routes for checking and toggling the jemalloc gdump flag status.
 - **`mem_prof.rs`**: Implemented handlers `gdump_toggle_handler` and `gdump_status_handler` for managing gdump flag via HTTP requests.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* Update docs/how-to/how-to-profile-memory.md

Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>

* fix: typo in docs

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>
2025-10-27 01:41:19 +00:00
dennis zhuang
d8563ba56d feat: adds regex_extract function and more type tests (#7107)
* feat: adds format, regex_extract function and more type tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: forgot functions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: forgot null type

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: forgot date type

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: remove format function

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: update results after upgrading datafusion

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-10-25 08:41:49 +00:00
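A hedged sketch of the per-row semantics a `regex_extract(text, pattern, group)` function evaluates: the requested capture group of the first match, or NULL when nothing matches. This mirrors the behavior using the `regex` crate, not GreptimeDB's actual UDF code.

```rust
use regex::Regex;

/// First match's capture group, or None (SQL NULL) when there is no match.
/// A real UDF would compile the pattern once per query, not per row.
fn regex_extract(text: &str, pattern: &str, group: usize) -> Option<String> {
    let re = Regex::new(pattern).ok()?;
    re.captures(text)?.get(group).map(|m| m.as_str().to_string())
}

fn main() {
    assert_eq!(
        regex_extract("status=200 ms=15", r"status=(\d+)", 1),
        Some("200".to_string())
    );
    assert_eq!(regex_extract("no match here", r"status=(\d+)", 1), None);
}
```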
Weny Xu
7da2f5ed12 refactor: refactor instruction handler and adds support for batch region downgrade operations (#7130)
* refactor: refactor instruction handler

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: support batch downgrade region instructions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix compat

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

* add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-24 09:11:42 +00:00
Yingwen
4c70b4c31d feat: store estimated series num in file meta (#7126)
* feat: add num_series to FileMeta

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add SeriesEstimator to collect num_series

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: set num_series in compactor

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: print num_series in Debug for FileMeta

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: increase series count when next ts <= last

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for SeriesEstimator

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add num_series to ssts_manifest table

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update sqlness tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix metric engine list entry test

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-24 05:53:48 +00:00
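A sketch of the series-counting heuristic behind `SeriesEstimator`: rows in an SST are sorted by primary key and then by timestamp, so a timestamp that fails to increase marks the start of a new series. Per the fix above, `next_ts <= last_ts` (not just `<`) must bump the count. Simplified to scalar input; the real estimator works on batches.

```rust
/// Counts series boundaries by watching for non-increasing timestamps in a
/// stream sorted by (primary key, timestamp). Simplified stand-in.
#[derive(Default)]
struct SeriesEstimator {
    num_series: u64,
    last_ts: Option<i64>,
}

impl SeriesEstimator {
    fn observe(&mut self, ts: i64) {
        match self.last_ts {
            // The first row always opens a series.
            None => self.num_series += 1,
            // A non-increasing timestamp means the key changed: new series.
            Some(last) if ts <= last => self.num_series += 1,
            _ => {}
        }
        self.last_ts = Some(ts);
    }
}

fn main() {
    let mut est = SeriesEstimator::default();
    for ts in [1, 2, 3, 1, 2, 2] {
        est.observe(ts);
    }
    // Boundaries: the first row, the reset to 1, and the repeated 2.
    assert_eq!(est.num_series, 3);
}
```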
Ning Sun
b78ee1743c feat: add a missing pg_catalog function current_database (#7138)
feat: add a missing function current_database
2025-10-24 03:36:07 +00:00
LFC
6ad23bc9b4 refactor: convert to postgres values directly from arrow (#7131)
* refactor: convert to pg values directly from arrow

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-24 03:28:04 +00:00
Sicong Hu
03a29c6591 fix: correct test_index_build_type_compact (#7137)
Signed-off-by: SNC123 <sinhco@outlook.com>
2025-10-24 03:24:13 +00:00
zyy17
a0e6bcbeb3 feat: add cpu_usage_millicores and memory_usage_bytes in information_schema.cluster_info table. (#7051)
* refactor: add `hostname` in cluster_info table

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: update information schema result

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* feat: enable zstd for bulk memtable encoded parts (#7045)

feat: enable zstd in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: add `get_total_cpu_millicores()` / `get_total_cpu_cores()` / `get_total_memory_bytes()` / `get_total_memory_readable()` in common-stat

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* feat: add `cpu_usage_millicores` and `memory_usage_bytes` in `information_schema.cluster_info` table

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: compile warning and integration test failed

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: integration test failed

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `ResourceStat`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: apply code review comments

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: update greptime-proto

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-24 03:12:45 +00:00
LFC
b53a0b86fb feat: create table with new json datatype (#7128)
* feat: create table with new json datatype

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-24 02:16:49 +00:00
LFC
2f637a262e chore: update datafusion to 50 (#7076)
* chore: update datafusion to 50

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

* fix: update datafusion_pg_catalog import

* chore: fix toml format

* chore: fix toml format again

* fix nextest

Signed-off-by: luofucong <luofc@foxmail.com>

* fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

* chore: switch datafusion-orc to upstream tag

* fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-10-23 07:18:36 +00:00
Yingwen
f388dbdbb8 fix: fix index and tag filtering for flat format (#7121)
* perf: only decode primary keys in the batch

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: don't push none to creator

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: implement method to filter __table_id for sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: filter table id for sparse encoding separately

The __table_id isn't present in the projection, so we have to filter it manually

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: decode tags for sparse encoding when building bloom filter

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support inverted index for tags under sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: skip tag columns in fulltext index

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix warnings

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix list index metadata test

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: decode primary key columns to filter

When primary key columns are not in projection but in filters, we need to decode them in compute_filter_mask_flat

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: reuse filter method

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: only use dictionary for string type in compat

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: safe to get column by creator's column id

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-23 06:43:46 +00:00
jeremyhi
136b9eef7a feat: pr review reminder frequency (#7129)
* feat: run at 9:00 am on monday, wednesday, friday

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: remove unused method

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

---------

Signed-off-by: jeremyhi <fengjiachun@gmail.com>
2025-10-23 06:22:02 +00:00
fys
e8f39cbc4f fix: unit test about trigger parser (#7132)
* fix: unit test about trigger parser

* fix: cargo clippy
2025-10-23 03:47:25 +00:00
jeremyhi
62b51c6736 feat: writer mem limiter for http and grpc service (#7092)
* feat: writer mem limiter for http and grpc service

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* fix: docs

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* feat: add metrics for limiter

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* chore: refactor try_acquire

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

* chore: make size human readable

Signed-off-by: jeremyhi <fengjiachun@gmail.com>

---------

Signed-off-by: jeremyhi <fengjiachun@gmail.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-10-22 09:30:36 +00:00
discord9
a9a3e0b121 fix: prom ql logical plan use column index not name (#7109)
* feat: use index not col name

Signed-off-by: discord9 <discord9@163.com>

* fix: use name without qualifier & output schema fix

Signed-off-by: discord9 <discord9@163.com>

* proto

Signed-off-by: discord9 <discord9@163.com>

* refactor: resolve column name/index

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

* chore: update proto

Signed-off-by: discord9 <discord9@163.com>

* chore: update proto

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-22 09:04:09 +00:00
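A sketch of why the fix above resolves columns by index instead of name: a serialized plan may qualify the same column differently on different nodes ("t.val" vs "val"), while an ordinal into the output schema is unambiguous. Simplified schema type, not the actual plan encoding.

```rust
/// Resolve a column reference that may carry a name, an index, or both,
/// preferring the unambiguous index when present.
struct Schema {
    columns: Vec<String>,
}

impl Schema {
    fn resolve(&self, name: Option<&str>, index: Option<usize>) -> Option<usize> {
        if let Some(i) = index {
            return (i < self.columns.len()).then_some(i);
        }
        // Name fallback: strip any table qualifier before matching.
        let name = name?;
        let bare = name.rsplit('.').next().unwrap_or(name);
        self.columns.iter().position(|c| c == bare)
    }
}

fn main() {
    let schema = Schema { columns: vec!["ts".into(), "val".into()] };
    assert_eq!(schema.resolve(None, Some(1)), Some(1));
    assert_eq!(schema.resolve(Some("t.val"), None), Some(1));
}
```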
Ning Sun
41ce100624 feat: update pgwire to 0.34 for a critical issue on accepting connection (#7127)
feat: update pgwire to 0.34
2025-10-22 07:25:04 +00:00
Weny Xu
328ec56b63 feat: introduce OpenRegions and CloseRegions instructions to support batch region operations (#7122)
* feat: introduce `OpenRegions` and `CloseRegions` instructions to support batch region operations

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: merge instructions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-22 03:43:47 +00:00
Ning Sun
bfa00df9f2 fix: list inner type for json and valueref, refactor type to ref for struct/list (#7113)
* refactor: use arc for struct type

* fix: inner type of list value and ref
2025-10-21 12:46:18 +00:00
jeremyhi
2e7b3951fb feat: 14 days PRs review reminder (#7123) 2025-10-21 08:53:38 +00:00
Yingwen
1054c63503 test: run engine unit tests for flat format (#7119)
* test: support flat in basic_test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: support flat in alter_test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: support flat for append_mode_test

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update bump_committed_sequence_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update close_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update compaction_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update create_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update edit_region_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update merge_mode_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update parallel_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update projection_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update prune_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update row_selector_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update scan_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update drop_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update filter_deleted_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update sync_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update set_role_state_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update staging_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update truncate_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update catchup_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update flush_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update open_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: update batch_open_test to test both formats

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix all flat format tests

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-21 08:24:12 +00:00
Sicong Hu
a1af4dce0c feat: implement three build types for async index build (#7029)
* feat: impl four types of index build

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add tests for four types of index build

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add sqlness test for manual index build

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: add region request support and correct sqlness

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: update cargo.toml for proto and resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: rebase

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: toml fmt and correct sqlness

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: correct sqlness result

Signed-off-by: SNC123 <sinhco@outlook.com>

* refactor: extract manual build logic

Signed-off-by: SNC123 <sinhco@outlook.com>

* apply suggestions

Signed-off-by: SNC123 <sinhco@outlook.com>

* feat: abort index build process

Signed-off-by: SNC123 <sinhco@outlook.com>

* clippy

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: wrap `should_abort_index`

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy

Signed-off-by: SNC123 <sinhco@outlook.com>

---------

Signed-off-by: SNC123 <sinhco@outlook.com>
2025-10-21 02:48:28 +00:00
jeremyhi
27268cf424 chore: pr review reminder (#7120)
* chore: pr review reminder

* chore: for test

* chore: vars

* fix: gracefully handle missing webhook URL

* test: allow workflow to run in fork for testing

* test: add environment variable logging

* chore: minor change

* feat: filter draft pr
2025-10-21 02:44:04 +00:00
Zhenchi
938d757523 feat: expose SST index metadata via information schema (#7044)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-10-20 11:59:16 +00:00
LFC
855eb54ded refactor: convert to mysql values directly from arrow (#7096)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-20 11:09:24 +00:00
Weny Xu
3119464ff9 feat: introduce the Noop WAL provider for datanode (#7105)
* feat: introduce noop log store

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update config example

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add noop wal tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-20 06:13:27 +00:00
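A hedged sketch of what a no-op WAL provider looks like: appends "succeed" without persisting anything and replay yields nothing, trading durability for write throughput when another system holds the source of truth. The trait is a simplified stand-in for the real LogStore trait.

```rust
/// Simplified stand-in for the real LogStore trait.
trait LogStore {
    fn append(&self, entry: &[u8]) -> Result<u64, String>;
    fn replay(&self) -> Vec<Vec<u8>>;
}

struct NoopLogStore;

impl LogStore for NoopLogStore {
    fn append(&self, _entry: &[u8]) -> Result<u64, String> {
        // Report success at a fixed offset; nothing touches storage.
        Ok(0)
    }

    fn replay(&self) -> Vec<Vec<u8>> {
        // Nothing was persisted, so recovery has nothing to replay.
        Vec::new()
    }
}

fn main() {
    let store = NoopLogStore;
    assert_eq!(store.append(b"region edit").unwrap(), 0);
    assert!(store.replay().is_empty());
}
```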
fys
20b5b9bee4 chore: remove unused deps (#7108) 2025-10-17 11:53:19 +00:00
Zhenchi
7b396bb290 feat(mito2): expose puffin index metadata (#7042)
* Add encode/decode helpers for IndexTarget

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Use IndexTarget encode for puffin index blob keys

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Normalize puffin index blobs to use IndexTarget keys

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat(mito2): expose puffin index metadata

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* target json polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix header

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add index path

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address copilot comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* reuse cached index metadata

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* parallelism for reading index meta

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-10-17 06:22:07 +00:00
LFC
21532abf94 feat: new create table syntax for new json datatype (#7103)
* feat: new create table syntax for new json datatype

Signed-off-by: luofucong <luofc@foxmail.com>

* refactor: extract consts

* refactor: remove unused error variant

* fix tests

Signed-off-by: luofucong <luofc@foxmail.com>

* fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-10-17 05:22:29 +00:00
fys
331c64c6fd feat(trigger): support "for" and "keep_firing_for" (#7087)
* feat: support "for" and "keep_firing_for" options in create trigger

* upgrade greptime-proto
2025-10-17 04:31:56 +00:00
Zhenchi
82e4600d1b feat: add index cache eviction support (#7064)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-10-17 03:30:02 +00:00
dennis zhuang
8a2371a05c feat: supports large string (#7097)
* feat: supports large string

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: add doc for extract_string_vector_values

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: refactor by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: changes by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: extract_string_vector_values

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: remove large string type and refactor string vector

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: revert some changes

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: adds large string type

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: impl default for StringSizeType

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: tests and test compatibility

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: update sqlness tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove panic

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-10-17 01:46:11 +00:00
zyy17
cf1b8392af refactor!: unify the API of getting total cpu and memory (#7049)
* refactor: add `get_total_cpu_millicores()` / `get_total_cpu_cores()` / `get_total_memory_bytes()` / `get_total_memory_readable()` in common-stat

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* tests: update sqlness test cases

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-10-16 12:41:34 +00:00
Ning Sun
2e6ea1167f refactor: update valueref coerce function name based on its semantics (#7098) 2025-10-16 09:11:40 +00:00
Lei, HUANG
50386fda97 chore: pub route_prometheus function (#7101)
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-10-16 08:11:06 +00:00
Weny Xu
873555feb2 fix: fix build warnings (#7099)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-16 07:57:16 +00:00
zyy17
6ab4672866 refactor: add peer_hostname field in information_schema.cluster_info table (#7050)
* refactor: add `hostname` in cluster_info table

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: update information schema result

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: apply code review comments

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: update greptime-proto

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add the compatibility for old proto

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-10-16 06:02:47 +00:00
discord9
ac65ede033 feat: memtable seq range read (#6950)
* feat: seq range memtable read

Signed-off-by: discord9 <discord9@163.com>

* test: from & range

Signed-off-by: discord9 <discord9@163.com>

* wt

Signed-off-by: discord9 <discord9@163.com>

* after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* docs: better naming & emphasis

Signed-off-by: discord9 <discord9@163.com>

* refactor: use filter method

Signed-off-by: discord9 <discord9@163.com>

* tests: unwrap

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-10-16 05:30:56 +00:00
Lei, HUANG
552c502620 feat: manual compaction parallelism (#7086)
* feat/manual-compaction-parallelism:
 ### Add Parallelism Support to Compaction Requests

 - **`Cargo.lock` & `Cargo.toml`**: Updated `greptime-proto` dependency to a new revision.
 - **`flush_compact_table.rs`**: Enhanced `parse_compact_params` to support a new `parallelism` parameter, allowing users to specify the level of parallelism for table compaction.
 - **`handle_compaction.rs`**: Integrated `parallelism` into the compaction scheduling process, defaulting to 1 if not specified.
 - **`request.rs` & `region_request.rs`**: Modified `CompactRequest` to include `parallelism`, with logic to handle unspecified values.
 - **`requests.rs`**: Updated `CompactTableRequest` structure to include an optional `parallelism` field.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/manual-compaction-parallelism:
 ### Commit Message

 Enhance Compaction Request Handling

 - **`flush_compact_table.rs`**:
   - Renamed `parse_compact_params` to `parse_compact_request`.
   - Introduced `DEFAULT_COMPACTION_PARALLELISM` constant.
   - Updated parsing logic to handle keyword arguments for `strict_window` and `regular` compaction types, including `parallelism` and `window`.
   - Modified tests to reflect changes in parsing logic and default parallelism handling.

 - **`request.rs`**:
   - Updated `parallelism` handling in `RegionRequestBody::Compact` to use the new default value.

 - **`requests.rs`**:
   - Changed `CompactTableRequest` to use a non-optional `parallelism` field with a default value of 1.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/manual-compaction-parallelism:
 ### Update `flush_compact_table.rs` Parameter Validation

 - Modified parameter validation in `flush_compact_table.rs` to restrict the maximum number of parameters from 4 to 3 in the `parse_compact_request` function.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/manual-compaction-parallelism:
 Update `greptime-proto` dependency

 - Updated the `greptime-proto` dependency to a new revision in both `Cargo.lock` and `Cargo.toml`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-10-16 03:47:01 +00:00
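A sketch of the keyword handling described above: the `compact` call may carry a `parallelism` keyword argument that defaults to 1 when unspecified and must be positive. Simplified parsing, not the real `parse_compact_request`.

```rust
/// Optional keyword argument with a default of 1; zero is rejected.
const DEFAULT_COMPACTION_PARALLELISM: u32 = 1;

fn parse_parallelism(kwargs: &[(&str, &str)]) -> Result<u32, String> {
    let Some((_, raw)) = kwargs.iter().find(|(k, _)| *k == "parallelism") else {
        return Ok(DEFAULT_COMPACTION_PARALLELISM);
    };
    let parallelism: u32 = raw
        .parse()
        .map_err(|e| format!("invalid parallelism {raw:?}: {e}"))?;
    if parallelism == 0 {
        return Err("parallelism must be positive".to_string());
    }
    Ok(parallelism)
}

fn main() {
    assert_eq!(parse_parallelism(&[]).unwrap(), 1);
    assert_eq!(parse_parallelism(&[("parallelism", "4")]).unwrap(), 4);
    assert!(parse_parallelism(&[("parallelism", "0")]).is_err());
}
```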
discord9
9aca7c97d7 fix: part cols not in projection (#7090)
* fix: part cols not in projection

Signed-off-by: discord9 <discord9@163.com>

* test: table scan with projection

Signed-off-by: discord9 <discord9@163.com>

* Update src/query/src/dist_plan/analyzer.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-16 03:44:30 +00:00
Ning Sun
145c1024d1 feat: add Value::Json value type (#7083)
* feat: struct value

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: update for proto module

* feat: wip struct type

* feat: implement more vector operations

* feat: make datatype and api

* feat: resolve some compilation issues

* feat: resolve all compilation issues

* chore: format update

* test: resolve tests

* test: test and refactor value-to-pb

* feat: add more tests and fix for value types

* chore: remove dbg

* feat: test and fix iterator

* fix: resolve struct_type issue

* feat: pgwire 0.33 update

* refactor: use vec for struct items

* feat: conversion from json to value

* feat: add decode function

* fix: lint issue

* feat: update how we encode raw data

* feat: add conversion to fully structured StructValue

* refactor: take owned value in all encode/decode functions

* feat: add pg serialization of structvalue

* chore: toml format

* refactor: adopt new and try_new from struct value

* chore: cleanup residual issues

* docs: docs up

* fix lint issue

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* chore: address review comment especially collection capacity

* refactor: remove unneeded processed keys collection

* feat: Value::Json type

* chore: add some work in progress changes

* feat: adopt new json type

* refactor: limit scope json conversion functions

* fix: self review update

* test: provide tests for value::json

* test: add tests for api/helper

* switch proto to main branch

* fix: implement is_null for ValueRef::Json

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-10-15 20:13:12 +00:00
Alan Tang
8073e552df feat: add updated_on to tablemeta with a default of created_on (#7072)
* feat: add updated_on to tablemeta with a default of created_on

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: support the update_on on alter procedure

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: add updated_on into information_schema.tables

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* fix: make sqlness happy

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* test: add test case for tablemeta update

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* fix: fix failing test for ALTER TABLE

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

* feat: use created_on as default for updated_on when missing

Signed-off-by: Alan Tang <jmtangcs@gmail.com>

---------

Signed-off-by: Alan Tang <jmtangcs@gmail.com>
2025-10-15 11:12:27 +00:00
Ruihang Xia
aa98033e85 feat(parser): ALTER TABLE ... REPARTITION ... (#7082)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tidy up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-10-15 03:54:36 +00:00
Ning Sun
9606a6fda8 feat: conversion between struct, value and json (#7052)
* feat: struct value

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: update for proto module

* feat: wip struct type

* feat: implement more vector operations

* feat: make datatype and api

* feat: resolve some compilation issues

* feat: resolve all compilation issues

* chore: format update

* test: resolve tests

* test: test and refactor value-to-pb

* feat: add more tests and fix for value types

* chore: remove dbg

* feat: test and fix iterator

* fix: resolve struct_type issue

* feat: pgwire 0.33 update

* refactor: use vec for struct items

* feat: conversion from json to value

* feat: add decode function

* fix: lint issue

* feat: update how we encode raw data

* feat: add conversion to fully structured StructValue

* refactor: take owned value in all encode/decode functions

* feat: add pg serialization of structvalue

* chore: toml format

* refactor: adopt new and try_new from struct value

* chore: cleanup residual issues

* docs: docs up

* fix lint issue

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Apply suggestion from @MichaelScofield

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* chore: address review comment especially collection capacity

* refactor: remove unneeded processed keys collection

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-10-14 07:22:37 +00:00
github-actions[bot]
9cc0bcb449 ci: update dev-builder image tag (#7073)
Signed-off-by: greptimedb-ci <greptimedb-ci@greptime.com>
Co-authored-by: greptimedb-ci <greptimedb-ci@greptime.com>
2025-10-14 06:53:02 +00:00
Ning Sun
5ad1eac924 refactor: remove unused grpc-expr module and pb conversions (#7085)
* refactor: remove unused grpc-expr module and pb conversions

* chore: remove unused snafu
2025-10-14 04:11:48 +00:00
shuiyisong
a027b824a2 chore: add information extension to the plugins in standalone (#7079)
chore: add information extension to the plugins

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-14 02:30:42 +00:00
Lei, HUANG
44d46a6702 fix: correct impl Clear for &[u8] (#7081)
* fix: correct impl Clear for &[u8]

* fix: clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-10-13 11:59:26 +00:00
Yingwen
a9c342b0f7 feat: support setting sst_format in table options (#7068)
* feat: add FormatType to support multi format in the future

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add sst_format to RegionOptions

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: sets the sst_format based on RegionOptions

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add sst_format to mito table options

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix RegionManifest deserialization without sst_format

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove Parquet suffix from FormatType

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: prefer RegionOptions::sst_format in compactor/memtable builder

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename enable_experimental_flat_format to default_experimental_flat_format

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update config.md

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update manifest test

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors, handle sst_format in remap_manifest

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-13 08:38:37 +00:00
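A sketch of the manifest-compatibility fix noted above ("fix RegionManifest deserialization without sst_format"): older manifests predate the field, so it must default to the existing format instead of failing to parse. Variant and field names are illustrative.

```rust
use serde::{Deserialize, Serialize};

/// Older manifests omit `sst_format`; defaulting keeps them readable.
#[derive(Debug, Default, Clone, Copy, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
enum FormatType {
    /// The existing format remains the default.
    #[default]
    PrimaryKey,
    /// The experimental flat format, opt-in via table options.
    Flat,
}

#[derive(Debug, Serialize, Deserialize)]
struct RegionManifest {
    region_id: u64,
    #[serde(default)]
    sst_format: FormatType,
}

fn main() -> Result<(), serde_json::Error> {
    // An old manifest without the field still deserializes cleanly.
    let manifest: RegionManifest = serde_json::from_str(r#"{"region_id":1}"#)?;
    assert_eq!(manifest.sst_format, FormatType::PrimaryKey);
    Ok(())
}
```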
Ruihang Xia
1a73b485fe feat: apply region partition expr to region scan (#7067)
* handle null in partition expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply region partition expr on scanning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tidy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix gt/gteq

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-10-13 07:38:19 +00:00
Ruihang Xia
ab46127414 feat: remap SST files for partition change (#7071)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* immutable file meta

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tidy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reduce state

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* inherit manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix rebase error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* log new exprs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-10-13 03:13:14 +00:00
LFC
8fe17d43d5 chore: update rust to nightly 2025-10-01 (#7069)
* chore: update rust to nightly 2025-10-01

Signed-off-by: luofucong <luofc@foxmail.com>

* chore: nix update

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-10-11 07:30:52 +00:00
Weny Xu
40e9ce90a7 refactor: restructure sqlness to support multiple envs and extract common utils (#7066)
* refactor: restructure sqlness to support multiple envs and extract common utils

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore(ci): update sqlness cmd

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: error fmt

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: only reconnect mysql and pg client

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-10-11 06:34:17 +00:00
discord9
ba034c5a9e feat: explain custom statement (#7058)
* feat: explain tql cte

Signed-off-by: discord9 <discord9@163.com>

* chore: unused

Signed-off-by: discord9 <discord9@163.com>

* fix: analyze format

Signed-off-by: discord9 <discord9@163.com>

* Update src/sql/src/statements/statement.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* test: sqlness

Signed-off-by: discord9 <discord9@163.com>

* pcr

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-10-11 06:27:51 +00:00
Ruihang Xia
e46ce7c6da feat: divide subtasks from old/new partition rules (#7003)
* feat: divide subtasks from old/new partition rules

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change copyright year

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify filter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/partition/src/subtask.rs

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2025-10-11 06:17:25 +00:00
dennis zhuang
57d84b9de5 feat: supports value aliasing in TQL (#7041)
* feat: supports value aliasing in TQL

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: invalid checking

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove invalid checking

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add explain test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: improve parser

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add explain TQL-CTE

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-10-11 02:49:09 +00:00
Ning Sun
749a5ab165 feat: struct value and vector (#7033)
* feat: struct value

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: update for proto module

* feat: wip struct type

* feat: implement more vector operations

* feat: make datatype and api

* feat: resolve some compilation issues

* feat: resolve all compilation issues

* chore: format update

* test: resolve tests

* test: test and refactor value-to-pb

* feat: add more tests and fix for value types

* chore: remove dbg

* feat: test and fix iterator

* fix: resolve struct_type issue

* refactor: use vec for struct items

* chore: update proto to main branch

* refactor: address some of review issues

* refactor: update for further review

* Add validation on new methods

* feat: update struct/list json serialization

* refactor: reimplement get in struct_vector

* refactor: struct vector functions

* refactor: fix lint issue

* refactor: address review comments

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-10-10 21:49:51 +00:00
LFC
3738440753 feat: align influxdb line timestamp with table time index (#7057)
* feat: align influxdb line timestamp with table time index

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-10 07:37:52 +00:00
Ning Sun
aa84642afc refactor!: remove pb_value to json conversion, keep json output consistent (#7063)
* refactor: remove pb_value to json

* chore: remove unused module
2025-10-10 07:09:20 +00:00
Ning Sun
af213be403 refactor: remove duplicated valueref to json (#7062) 2025-10-10 07:08:26 +00:00
Sicong Hu
779865d389 feat: introduce IndexBuildTask for async index build (#6927)
* feat: add framework for asynchronous index building

Signed-off-by: SNC123 <sinhco@outlook.com>

* test: add unit tests for IndexBuildTask

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy,format,fix-udeps

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: correct write cache logic in IndexBuildTask

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: clippy, resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: apply review suggestions

Signed-off-by: SNC123 <sinhco@outlook.com>

* chore: resolve conflicts

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: clean up index files in aborted case

Signed-off-by: SNC123 <sinhco@outlook.com>

* refactor: move manifest update logic into IndexBuildTask

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix: enhance check file logic and error handling

Signed-off-by: SNC123 <sinhco@outlook.com>

---------

Signed-off-by: SNC123 <sinhco@outlook.com>
2025-10-10 03:29:32 +00:00
Yingwen
47c1ef672a fix: support dictionary in regex match (#7055)
* fix: support dictionary in regex match

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: get key from keys buffer directly

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-10 03:03:34 +00:00
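A regex filter over a dictionary-encoded string column only needs to evaluate the pattern once per distinct value and can then fan the results out through the keys buffer. A minimal sketch of that idea, assuming arrow-rs `DictionaryArray` and the `regex` crate; illustrative, not the actual mito2 code:

```rust
use arrow::array::{Array, BooleanArray, DictionaryArray, StringArray};
use arrow::datatypes::Int32Type;
use regex::Regex;

/// Matches `re` against each distinct dictionary value once, then maps the
/// per-value results through the keys buffer.
fn regex_match_dict(arr: &DictionaryArray<Int32Type>, re: &Regex) -> BooleanArray {
    let values = arr
        .values()
        .as_any()
        .downcast_ref::<StringArray>()
        .expect("string dictionary values");
    let matched: Vec<bool> = (0..values.len())
        .map(|i| re.is_match(values.value(i)))
        .collect();
    // Null keys stay null in the output.
    arr.keys().iter().map(|k| k.map(|k| matched[k as usize])).collect()
}
```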
shyam
591b9f3e81 fix: show proper error msg, when executing non-admin functions as admin functions (#7061)
Signed-off-by: Shyamnatesan <shyamnatesan21@gmail.com>
2025-10-10 01:25:49 +00:00
LFC
979c8be51b feat: able to pass external service for sqlness test (#7032)
feat: able to pass an external service instead of creating one inside for sqlness test

Signed-off-by: luofucong <luofc@foxmail.com>
2025-10-09 07:02:19 +00:00
Yingwen
45b1458254 fix: only skips auto convert when encoding is sparse (#7056)
* fix: only skips auto convert when encoding is sparse

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment and add tests

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-09 06:50:48 +00:00
fys
4cdcf2ef39 chore: add trigger querier factory trait (#7053)
feat: add trigger-querier-factory-ent
2025-10-09 02:16:50 +00:00
shuiyisong
b24a55cea4 chore: rename the default ts column name to greptime_timestamp for influxdb line protocol (#7046)
* chore: rename influxdb ts column name to greptime_timestamp

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-10-09 02:14:11 +00:00
Ning Sun
1aa4f346a0 fix: build_grpc_server visibility (#7054) 2025-10-06 03:16:48 +00:00
Ning Sun
f7202bc176 feat: pgwire 0.33 update (#7048) 2025-10-03 08:06:05 +00:00
Yingwen
b7045e57a5 feat: enable zstd for bulk memtable encoded parts (#7045)
feat: enable zstd in bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-10-02 16:05:33 +00:00
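Enabling ZSTD for encoded parts typically goes through the parquet writer properties; a minimal sketch assuming the arrow-rs `parquet` crate (names are illustrative, not the actual mito2 writer setup):

```rust
use parquet::basic::{Compression, ZstdLevel};
use parquet::file::properties::WriterProperties;

fn bulk_part_writer_props() -> WriterProperties {
    WriterProperties::builder()
        // ZstdLevel::default() picks zstd's default compression level.
        .set_compression(Compression::ZSTD(ZstdLevel::default()))
        .build()
}
```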
Ning Sun
660790148d fix: various typos reported by CI (#7047)
* fix: various typos reported by CI

* fix: additional typo
2025-10-02 15:11:09 +00:00
zyy17
d777e8c52f refactor: add cgroup metrics collector (#7038)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-30 02:26:02 +00:00
zyy17
efa616ce44 fix: use instance labels to fetch greptime_memory_limit_in_bytes and greptime_cpu_limit_in_millicores metrics (#7043)
fix: remove unnecessary labels of standalone dashboard.json

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-29 11:43:35 +00:00
LFC
5b13fba65b refactor: make Function trait a simple shim of DataFusion UDF (#7036)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-29 09:07:39 +00:00
LFC
aa05b3b993 feat: add max_connection_age config to grpc server (#7031)
* feat: add `max_connection_age` config to grpc server

Signed-off-by: luofucong <luofc@foxmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-09-29 07:32:43 +00:00
fys
c4a7cc0adb chore: improve create trigger display (#7027)
* chore: improve create_trigger_statement display

* improve display of create trigger

* add components for frontend

* Revert "add components for frontend"

This reverts commit 8d71540a72.
2025-09-29 02:22:43 +00:00
Yingwen
90d37cb10e fix: fix panic and limit concurrency in flat format (#7035)
* feat: add a semaphore to control flush concurrency

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: build FlatSchemaOptions from encoding in FlatWriteFormat

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove allow dead_code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle sparse encoding in FlatCompatBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: add time index column in try_new_compact_sparse

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test for compaction and sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-29 02:20:06 +00:00
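The first commit above bounds flush concurrency with a semaphore; a minimal sketch of that pattern under tokio (the permit count and task body are illustrative, not mito2's actual flush scheduler):

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

async fn flush_all(parts: Vec<Vec<u8>>) {
    // At most 4 flush jobs hold a permit at once; the rest wait here.
    let semaphore = Arc::new(Semaphore::new(4));
    let mut handles = Vec::new();
    for part in parts {
        let permit = semaphore.clone().acquire_owned().await.unwrap();
        handles.push(tokio::spawn(async move {
            flush_one(part).await;
            drop(permit); // returns the permit when this flush finishes
        }));
    }
    for handle in handles {
        let _ = handle.await;
    }
}

async fn flush_one(_part: Vec<u8>) {
    // write the encoded part to storage (elided)
}
```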
localhost
4a3c5f85e5 fix: fix test_resolve_relative_path_relative on windows (#7039) 2025-09-28 13:03:57 +00:00
discord9
3ca5c77d91 chore: not warning (#7037)
Signed-off-by: discord9 <discord9@163.com>
2025-09-28 08:11:27 +00:00
discord9
8bcf4a8ab5 test: update unit test by passing extra sort columns (#7030)
* tests: fix unit test by passing one sort columns

Signed-off-by: discord9 <discord9@163.com>

* chore: per copilot

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-28 03:22:43 +00:00
zyy17
0717773f62 refactor!: add enable_read_cache config to support disabling read cache explicitly (#6834)
* refactor: add `enable_read_cache` config to support disabling read cache explicitly

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: if `cache_path` is empty and `enable_read_cache` is true, set the default cache dir

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: remove the unnecessary Option type for `ObjectStorageCacheConfig`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: sanitize cache config in `DatanodeOptions` and `StandaloneOptions`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: code review comment

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: apply code review comments

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-26 09:44:12 +00:00
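A minimal sketch of the sanitizing rule described above (an empty `cache_path` plus an enabled read cache falls back to a default dir), with hypothetical type and constant names rather than the actual GreptimeDB config structs:

```rust
struct ReadCacheConfig {
    enable_read_cache: bool,
    cache_path: String,
}

const DEFAULT_CACHE_DIR: &str = "/tmp/greptimedb/cache"; // hypothetical default

fn sanitize(cfg: &mut ReadCacheConfig) {
    // An empty path with the cache enabled falls back to the default dir;
    // a disabled cache ignores cache_path entirely.
    if cfg.enable_read_cache && cfg.cache_path.is_empty() {
        cfg.cache_path = DEFAULT_CACHE_DIR.to_string();
    }
}
```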
shuiyisong
195ed73448 chore: disable file not exist on watch_file_user_provider (#7028)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-26 09:29:59 +00:00
LFC
243dbde3d5 refactor: rewrite some UDFs to DataFusion style (final part) (#7023)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-26 09:24:29 +00:00
discord9
aca8b690d1 fix: step aggr merge phase does not order nor filter (#6998)
* fix: not order

Signed-off-by: discord9 <discord9@163.com>

* test: redacted

Signed-off-by: discord9 <discord9@163.com>

* feat: fix up state wrapper

Signed-off-by: discord9 <discord9@163.com>

* df last_value state not as promised!

Signed-off-by: discord9 <discord9@163.com>

* fix?: could fix better

Signed-off-by: discord9 <discord9@163.com>

* test: unstable result

Signed-off-by: discord9 <discord9@163.com>

* fix: work around by fixing state

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* chore: finish some todo

Signed-off-by: discord9 <discord9@163.com>

* chore: per copilot

Signed-off-by: discord9 <discord9@163.com>

* refactor: not fix but just notify mismatch

Signed-off-by: discord9 <discord9@163.com>

* chore: warn -> debug state mismatch

Signed-off-by: discord9 <discord9@163.com>

* chore: refine error msg

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness add last_value date_bin test

Signed-off-by: discord9 <discord9@163.com>

* ?: substrait order by decode failure

Signed-off-by: discord9 <discord9@163.com>

* unit test reproduce that

Signed-off-by: discord9 <discord9@163.com>

* feat: support state wrapper's order serde in substrait

Signed-off-by: discord9 <discord9@163.com>

* refactor: stuff

Signed-off-by: discord9 <discord9@163.com>

* test: standalone/distributed different exec

Signed-off-by: discord9 <discord9@163.com>

* fmt

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: closure

Signed-off-by: discord9 <discord9@163.com>

* test: first value order by

Signed-off-by: discord9 <discord9@163.com>

* refactor: per cr

Signed-off-by: discord9 <discord9@163.com>

* feat: ScanHint last_value last row selector

Signed-off-by: discord9 <discord9@163.com>

* docs: per cr

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-26 09:12:45 +00:00
ZonaHe
9564180a6a feat: update dashboard to v0.11.6 (#7026)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-09-26 06:35:37 +00:00
dennis zhuang
17d16da483 feat: supports expression in TQL params (#7014)
* feat: supports expression in TQL params

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: comment

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-26 03:43:50 +00:00
Ning Sun
c1acce9943 refactor: cleanup datafusion-pg-catalog dependencies (#7025)
* refactor: cleanup datafusion-pg-catalog dependencies

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: toml format

* feat: update upstream

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-09-26 03:07:00 +00:00
Ruihang Xia
0790835c77 feat!: improve greptime_identity pipeline behavior (#6932)
* flat by default, store array in string

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* expose max_nested_levels param, store string instead of error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove flatten option

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unused errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-25 15:28:28 +00:00
zyy17
280df064c7 chore: add some trace logs when fetching data from cache and object store (#6877)
* chore: add some important debug logs

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add trace logs in `fetch_byte_ranges()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-25 11:54:22 +00:00
discord9
11a08d1381 fix: do not step when aggr has order by/filter (#7015)
* fix: not applied

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* test: confirm order by not push down

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-25 08:43:18 +00:00
shuiyisong
06a4f0abea chore: add function for getting started on metasrv (#7022)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-25 08:24:23 +00:00
dennis zhuang
c6e5552f05 test: migrate aggregation tests from duckdb, part4 (#6965)
* test: migrate aggregation tests from duckdb, part4

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: rename tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: ignore zero weights test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove duplicated sql

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-25 08:00:17 +00:00
discord9
9c8ff1d8a0 fix: skip placeholder when partition columns (#7020)
Signed-off-by: discord9 <discord9@163.com>
2025-09-25 07:01:49 +00:00
Yingwen
cff9cb6327 feat: converts batches in old format to the flat format in query time (#6987)
* feat: use correct projection index for old format

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove allow dead_code from format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: check and convert old format to flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: sub primary key num from projection

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: always convert the batch in FlatRowGroupReader

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: Change &Option<&[]> to Option<&[]>

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: only build arrow schema once

adds a method flat_sst_arrow_schema_column_num() to get the field num

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Handle flat format and old format separately

Adds two structs ParquetFlat and ParquetPrimaryKeyToFlat.
ParquetPrimaryKeyToFlat delegates stats and projection to the
PrimaryKeyReadFormat.

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle non string tag correctly

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: do not register file cache twice

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: clean temp files

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add rows and bytes to flush success log

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: convert format in memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: add compaction flag to ScanInput

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: compaction should use old format for sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: merge schema use old format in sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: reads legacy format but does not convert if skip_auto_convert

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: support sparse encoding in bulk parts

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-25 06:42:22 +00:00
Ning Sun
964dc254aa feat: upgraded pg_catalog support (#6918)
* refactor: add datafusion-postgres dependency

* refactor: move and include pg_catalog udfs

* chore: update upstream

* feat: register table function pg_get_keywords

* feat: bridge CatalogInfo for our CatalogManager

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: convert pg_catalog table to our system table

* feat: bridge system catalog with datafusion-postgres

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: add more udfs

* feat: add compatibility rewriter to postgres handler

* fix: various fixes

* fmt: fix

* fix: use functions from pg_catalog library

* fmt

* fix: sqlness runner

Signed-off-by: Ning Sun <sunning@greptime.com>

* test: adopt arrow 56.0 to 56.1 memory size change

* fix: add additional udfs

* chore: format

* refactor: return None when creating system table failed

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: provide safety comments about expect usage

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-09-25 04:05:34 +00:00
dennis zhuang
91a727790d feat: supports permission mode for static user provider (#7017)
* feat: supports permission mode for static user provider

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: comment

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-09-25 03:45:31 +00:00
Weny Xu
07b9de620e fix(cli): fix FS object store handling of absolute paths (#7018)
* fix(cli): fix FS object store handling of absolute paths

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/cli/src/utils.rs

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
2025-09-25 03:38:33 +00:00
LFC
6d0dd2540e refactor: rewrite some UDFs to DataFusion style (part 4) (#7011)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-24 19:50:58 +00:00
fys
a14c01a807 feat: sql parse about show create trigger (#7016)
* feat: sql parse for show create trigger

* fix: build

* remove unused comment

* chore: add tests for parsing complete SQL
2025-09-24 15:52:15 +00:00
discord9
238ed003df fix: group by expr not as column in step aggr (#7008)
* fix: group by expr not as column

Signed-off-by: discord9 <discord9@163.com>

* test: dist analyzer date_bin

Signed-off-by: discord9 <discord9@163.com>

* ???fix wip

Signed-off-by: discord9 <discord9@163.com>

* fix: deduce using correct input fields

Signed-off-by: discord9 <discord9@163.com>

* refactor: clearer wrapper

Signed-off-by: discord9 <discord9@163.com>

* chore: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: rm todo

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-24 06:57:01 +00:00
Weny Xu
0c038f755f refactor(cli): refactor object storage config (#7009)
* refactor: refactor object storage config

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: public common config

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-24 06:50:47 +00:00
Lin Yihai
b5a8725582 feat(copy_to_csv): add date_format/timestamp_format/time_format. (#6995)
feat(copy_to_csv): add `date_format` and so on to `Copy ... to with` syntax

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>
2025-09-24 06:22:53 +00:00
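These options take strftime-style patterns; a chrono-based illustration of what each one controls (the format strings here are examples only, not the defaults):

```rust
use chrono::{NaiveDate, NaiveTime};

fn main() {
    let date = NaiveDate::from_ymd_opt(2025, 9, 24).unwrap();
    let time = NaiveTime::from_hms_opt(6, 22, 53).unwrap();
    println!("{}", date.format("%Y/%m/%d")); // date_format -> 2025/09/24
    println!("{}", time.format("%H:%M:%S")); // time_format -> 06:22:53
}
```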
Ruihang Xia
c7050831db fix: match promql column reference in case sensitive way (#7013)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-24 03:28:09 +00:00
Ruihang Xia
f65dcd12cc feat: refine failure detector (#7005)
* feat: refine failure detector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert back default value

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert change of test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-24 01:43:22 +00:00
Zhenchi
80c8ab42b0 feat: add ssts related system table (#6924)
* feat: add InformationExtension.inspect_datanode for datanode inspection

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* aggregate results from all datanodes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add ssts related system table

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update sst entry

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix sqlness

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix sqlness

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-23 11:06:00 +00:00
discord9
4507736528 docs: laminar flow rfc (#6928)
* docs: laminar flow

Signed-off-by: discord9 <discord9@163.com>

* chore: wording details

Signed-off-by: discord9 <discord9@163.com>

* more details

Signed-off-by: discord9 <discord9@163.com>

* rearrange phases

Signed-off-by: discord9 <discord9@163.com>

* refactor: use embed frontend per review

Signed-off-by: discord9 <discord9@163.com>

* todo per review

Signed-off-by: discord9 <discord9@163.com>

* docs: seq read impl for laminar flow

Signed-off-by: discord9 <discord9@163.com>

* rename

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-23 08:16:06 +00:00
LFC
2712c5cd7a refactor: rewrite some UDFs to DataFusion style (part 3) (#6990)
* refactor: rewrite some UDFs to DataFusion style (part 3)

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-23 07:53:51 +00:00
Ruihang Xia
078379816c fix: incorrect timestamp resolution in information_schema.partitions table (#7004)
* fix: incorrect timestamp resolution in information_schema.partitions table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use second for all fields in partitions table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-23 06:13:10 +00:00
Ruihang Xia
6dc5fbe9a1 fix: promql range function has incorrect timestamps (#7006)
* fix: promql range function has incorrect timestamps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-23 05:16:54 +00:00
ZonaHe
cd3fb5fd3e feat: update dashboard to v0.11.5 (#7001)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-09-23 05:05:17 +00:00
shyam
5fcca4eeab fix: make EXPIRE (keyword) parsing case-insensitive, when creating flow (#6997)
fix: make EXPIRE keyword case-insensitive in CREATE FLOW parser

Signed-off-by: Shyamnatesan <shyamnatesan21@gmail.com>
2025-09-22 12:07:04 +00:00
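Case-insensitive keyword handling usually reduces to an ASCII-case-insensitive comparison; a minimal sketch, with the token type simplified to a string (not the actual CREATE FLOW parser):

```rust
/// Accepts EXPIRE, expire, Expire, ... but rejects longer identifiers.
fn is_expire_keyword(token: &str) -> bool {
    token.eq_ignore_ascii_case("EXPIRE")
}

fn main() {
    assert!(is_expire_keyword("expire"));
    assert!(is_expire_keyword("EXPIRE"));
    assert!(!is_expire_keyword("EXPIRED"));
}
```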
Weny Xu
b3d413258d feat: extract standalone functionality and introduce plugin-based router configuration (#7002)
* feat: extract standalone functionality and introduce plugin-based router configuration

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: ensure dump file does not exist

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: introduce `External` error

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-22 11:21:04 +00:00
discord9
03954e8b3b chore: update proto (#6992)
* chore: update proto

Signed-off-by: discord9 <discord9@163.com>

* update lockfile

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-19 09:01:46 +00:00
Yingwen
bd8f5d2b71 fix: print the output message of the error in admin fn macro (#6994)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-19 08:11:19 +00:00
Weny Xu
74721a06ba chore: improve error logging in WAL prune manager (#6993)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-19 07:08:28 +00:00
discord9
18e4839a17 feat: datanode side local gc worker (#6940)
* wip

Signed-off-by: discord9 <discord9@163.com>

* docs for behavior

Signed-off-by: discord9 <discord9@163.com>

* wip: handle outdated version

Signed-off-by: discord9 <discord9@163.com>

* feat: just retry

Signed-off-by: discord9 <discord9@163.com>

* feat: smaller lingering time

Signed-off-by: discord9 <discord9@163.com>

* refactor: partial per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: rm tmp file cnt

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: opt partial

Signed-off-by: discord9 <discord9@163.com>

* chore: rebase fix

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-19 03:20:13 +00:00
LFC
cbe0cf4a74 refactor: rewrite some UDFs to DataFusion style (part 2) (#6967)
* refactor: rewrite some UDFs to DataFusion style (part 2)

Signed-off-by: luofucong <luofc@foxmail.com>

* deal with vector UDFs `(scalar, scalar)` situation, and try getting the scalar value reference every time

Signed-off-by: luofucong <luofc@foxmail.com>

* reduce some vector literal parsing

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-18 06:37:27 +00:00
discord9
e26b98f452 refactor: put FileId to store-api (#6988)
* refactor: put FileId to store-api

Signed-off-by: discord9 <discord9@163.com>

* per review

Signed-off-by: discord9 <discord9@163.com>

* chore: lock file

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-18 03:20:42 +00:00
localhost
d8b967408e chore: modify LogExpr AggrFunc (#6948)
* chore: modify LogExpr AggrFunc

* chore: change AggrFunc range field

* chore: remove range from aggrfunc
2025-09-17 12:19:48 +00:00
Weny Xu
c35407fdce refactor: region follower management with unified interface (#6986)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-17 10:01:03 +00:00
Lei, HUANG
edf4b3f7f8 chore: unset tz env in test (#6984)
Add environment variable cleanup in timezone tests

- Updated `timezone.rs` to remove the `TZ` environment variable in the `test_from_tz_string` function, ensuring a clean test environment.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-17 08:48:38 +00:00
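A minimal sketch of that cleanup; note that mutating the environment is an `unsafe` operation in edition 2024 because it is not thread-safe (the test body is elided):

```rust
#[test]
fn test_from_tz_string() {
    // Ensure the ambient TZ variable cannot leak into the test.
    // SAFETY: acceptable here only because the test controls its environment.
    unsafe { std::env::remove_var("TZ") };
    // ... exercise timezone parsing against a known-clean environment ...
}
```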
Yingwen
14550429e9 chore: reduce SeriesScan sender timeout (#6983)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-17 07:02:47 +00:00
shuiyisong
ff2da4903e fix: OTel metrics naming with Prometheus style (#6982)
* fix: otel metrics naming

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: otel metrics naming & add some tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-17 06:11:38 +00:00
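Prometheus-style naming means replacing characters outside `[a-zA-Z0-9_:]` and avoiding a leading digit; a minimal sketch of that convention (not the exact mapping this commit implements):

```rust
fn normalize_metric_name(name: &str) -> String {
    // Replace any character Prometheus disallows with '_'.
    let mut out: String = name
        .chars()
        .map(|c| if c.is_ascii_alphanumeric() || c == ':' { c } else { '_' })
        .collect();
    // A metric name must not start with a digit.
    if out.chars().next().is_some_and(|c| c.is_ascii_digit()) {
        out.insert(0, '_');
    }
    out
}

// e.g. "http.server.request.duration" -> "http_server_request_duration"
```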
Lei, HUANG
c92ab4217f fix: avoid truncating SST statistics during flush (#6977)
 - **Update `memcomparable` Dependency**: Switched from crates.io to a Git repository for `memcomparable` in `Cargo.lock`, `mito-codec/Cargo.toml`, and removed it from `mito2/Cargo.toml`.
 - **Enhance Parquet Writer Properties**: Added `set_statistics_truncate_length` and `set_column_index_truncate_length` to `WriterProperties` in `parquet.rs`, `bulk/part.rs`, `partition_tree/data.rs`, and `writer.rs`.
 - **Add Test for Corrupt Scan**: Introduced a new test module `scan_corrupt.rs` in `mito2/src/engine` to verify handling of corrupt data.
 - **Update Test Data**: Modified test data in `flush.rs` to reflect changes in file sizes and sequences.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-17 03:02:52 +00:00
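Passing `None` to the two truncation knobs named above keeps full min/max statistics instead of truncated prefixes; a minimal sketch against the arrow-rs `parquet` crate (illustrative, not the exact mito2 change):

```rust
use parquet::file::properties::WriterProperties;

fn writer_props_without_stat_truncation() -> WriterProperties {
    WriterProperties::builder()
        // Keep full column statistics rather than truncated prefixes.
        .set_statistics_truncate_length(None)
        // Likewise for min/max values stored in the column index.
        .set_column_index_truncate_length(None)
        .build()
}
```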
Zhenchi
77981a7de5 fix: clean intm ignore notfound (#6971)
* fix: clean intm ignore notfound

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-17 02:58:03 +00:00
Lei, HUANG
9096c5ebbf chore: bump sequence on region edit (#6947)
* Refactor `get_last_seq_num` Method Across Engines

 - **Change Return Type**: Updated the `get_last_seq_num` method to return `Result<SequenceNumber, BoxedError>` instead of `Result<Option<SequenceNumber>, BoxedError>` in the following files:
   - `src/datanode/src/tests.rs`
   - `src/file-engine/src/engine.rs`
   - `src/metric-engine/src/engine.rs`
   - `src/metric-engine/src/engine/read.rs`
   - `src/mito2/src/engine.rs`
   - `src/query/src/optimizer/test_util.rs`
   - `src/store-api/src/region_engine.rs`

 - **Enhance Region Edit Handling**: Modified `RegionWorkerLoop` in `src/mito2/src/worker/handle_manifest.rs` to update file sequences during region edits.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* add committed_sequence to RegionEdit

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* Refactor sequence retrieval method

 - **Renamed Method**: Changed `get_last_seq_num` to `get_committed_sequence` across multiple files to better reflect its purpose of retrieving the latest committed sequence.
   - Affected files: `tests.rs`, `engine.rs` in `file-engine`, `metric-engine`, `mito2`, `test_util.rs`, and `region_engine.rs`.
 - **Removed Unused Struct**: Deleted `RegionSequencesRequest` struct from `region_request.rs` as it is no longer needed.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* **Add Committed Sequence Handling in Region Engine**

 - **`engine.rs`**: Introduced a new test module `bump_committed_sequence_test` to verify committed sequence handling.
 - **`bump_committed_sequence_test.rs`**: Added a test to ensure the committed sequence is correctly updated and persisted across region reopenings.
 - **`action.rs`**: Updated `RegionManifest` and `RegionManifestBuilder` to include `committed_sequence` for tracking.
 - **`manager.rs`**: Adjusted manifest size assertion to accommodate new committed sequence data.
 - **`opener.rs`**: Implemented logic to override committed sequence during region opening.
 - **`version.rs`**: Added `set_committed_sequence` method to update the committed sequence in `VersionControl`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* **Enhance `test_bump_committed_sequence` in `bump_committed_sequence_test.rs`**

 - Updated the test to include row operations using `build_rows`, `put_rows`, and `rows_schema` to verify the committed sequence behavior.
 - Adjusted assertions to reflect changes in committed sequence after row operations and region edits.
 - Added comments to clarify the expected behavior of committed sequence after reopening the region and replaying the WAL.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* **Enhance Region Sequence Management**

 - **`bump_committed_sequence_test.rs`**: Updated test to handle region reopening and sequence management, ensuring committed sequences are correctly set and verified after edits.
 - **`opener.rs`**: Improved committed sequence handling by overriding it only if the manifest's sequence is greater than the replayed sequence. Added logging for mutation sequence replay.
 - **`region_write_ctx.rs`**: Modified `push_mutation` and `push_bulk` methods to adopt sequence numbers from parameters, enhancing sequence management during write operations.
 - **`handle_write.rs`**: Updated `RegionWorkerLoop` to pass sequence numbers in `push_bulk` and `push_mutation` methods, ensuring consistent sequence handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* Remove Debug Logging from `opener.rs`

 - Removed debug logging for mutation sequences in `opener.rs` to clean up the output and improve performance.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-16 16:22:25 +00:00
Weny Xu
0a959f9920 feat: add TLS support for mysql backend (#6979)
* refactor: move etcd tls code to `common-meta`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: move postgre pool logic to `utils::postgre`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: setup mysql ssl options

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add test for mysql backend with tls

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: simplify certs generation

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-16 13:46:37 +00:00
discord9
85c1a91bae feat: support SubqueryAlias pushdown (#6963)
* wip enforce dist requirement rewriter

Signed-off-by: discord9 <discord9@163.com>

* feat: enforce dist req

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness result

Signed-off-by: discord9 <discord9@163.com>

* fix: double projection

Signed-off-by: discord9 <discord9@163.com>

* test: fix sqlness

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* docs: use btree map

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness explain&comment

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-16 13:27:35 +00:00
Weny Xu
7aba9a18fd chore: add tests for postgre backend with tls (#6973)
* chore: add tests for postgre backend with tls

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: minor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-16 11:03:11 +00:00
shuiyisong
4c18d140b4 fix: deadlock in dashmap (#6978)
* fix: deadlock in dashmap

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* Update src/frontend/src/instance.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: extract fast cache check and add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-09-16 10:49:28 +00:00
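The usual DashMap deadlock is holding a read guard on a shard while taking a write path into the same shard; a minimal sketch of the pattern and the fix (map contents are illustrative):

```rust
use dashmap::DashMap;

fn get_or_insert(map: &DashMap<String, u64>, key: &str) -> u64 {
    // Scope the read guard so it is dropped before any write access;
    // calling `entry` while the `Ref` from `get` is alive can deadlock
    // when both touch the same shard.
    if let Some(v) = map.get(key) {
        return *v;
    } // read guard dropped here
    *map.entry(key.to_string()).or_insert(0)
}
```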
Yingwen
b8e0c49cb4 feat: add a flag to enable the experimental flat format (#6976)
* feat: add enable_experimental_flat_format flag to enable flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract build_scan_input for CompactionSstReaderBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add compact memtable cost to flush metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Sets compact dispatcher for bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Cast dictionary to target type in FlatProjectionMapper

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: add time index to FlatProjectionMapper::batch_schema

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update config toml

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: pass flat_format to ProjectionMapper in CompactionSstReaderBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-16 09:33:12 +00:00
Zhenchi
db42ad42dc feat: add visible to sst entry for staging mode (#6964)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-15 09:05:54 +00:00
shuiyisong
8ce963f63e fix: shorten lock time (#6968) 2025-09-15 03:37:36 +00:00
Yingwen
b3aabb6706 feat: support flush and compact flat format files (#6949)
* feat: basic functions for flush/compact flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: bridge flush and compaction for flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add write cache support

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: change log level to debug

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: wrap duplicated code to merge and dedup iter

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: wrap some code into flush_flat_mem_ranges

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: extract logic into do_flush_memtables

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-14 13:36:24 +00:00
Ning Sun
028effe952 docs: update memory profiling description doc (#6960)
doc: update memory profiling description doc
2025-09-12 08:30:22 +00:00
Ruihang Xia
d86f489a74 fix: staging mode with proper region edit operations (#6962)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-12 04:39:42 +00:00
dennis zhuang
6c066c1a4a test: migrate join tests from duckdb, part3 (#6881)
* test: migrate join tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: update test results after rebasing main branch

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: unstable query sort results and natural_join test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: count(*) with joining

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: unstable query sort results and style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-12 04:20:00 +00:00
LFC
9ab87e11a4 refactor: rewrite h3 functions to DataFusion style (#6942)
* refactor: rewrite h3 functions to DataFusion style

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-12 02:27:24 +00:00
Weny Xu
9fe7069146 feat: add postgres tls support for CLI (#6941)
* feat: add postgres tls support for cli

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-11 12:18:13 +00:00
fys
733a1afcd1 fix: correct jemalloc metrics (#6959)
The allocated and resident metrics were swapped in the set calls. This commit
fixes the issue by ensuring each metric receives its corresponding value.
2025-09-11 06:37:19 +00:00
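The bug class is simple gauge mis-wiring; a hedged sketch with hypothetical gauge names using the prometheus crate's `IntGauge`:

```rust
use prometheus::IntGauge;

fn report_jemalloc(
    allocated: u64,
    resident: u64,
    allocated_gauge: &IntGauge,
    resident_gauge: &IntGauge,
) {
    // Before the fix, the arguments to these two `set` calls were swapped.
    allocated_gauge.set(allocated as i64);
    resident_gauge.set(resident as i64);
}
```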
Yingwen
5e65581f94 feat: support flat format for SeriesScan (#6938)
* feat: Support flat format for SeriesScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: simplify tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: only accumulate fetch time to scan_cost in SeriesDistributor of the SeriesScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-11 06:12:53 +00:00
ZonaHe
e75e5baa63 feat: update dashboard to v0.11.4 (#6956)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-09-11 04:34:25 +00:00
zyy17
c4b89df523 fix: use pull_request_target to fix add labels 403 error (#6958)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-09-11 03:53:14 +00:00
Weny Xu
6a15e62719 feat: expose workload filter to selector options (#6951)
* feat: add workload filtering support to selector options

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-11 03:11:13 +00:00
discord9
2bddbe8c47 feat(query): better alias tracker (#6909)
* better resolve

Signed-off-by: discord9 <discord9@163.com>

feat: layered alias tracker

Signed-off-by: discord9 <discord9@163.com>

refactor

Signed-off-by: discord9 <discord9@163.com>

docs: explain why there is no offset by one

Signed-off-by: discord9 <discord9@163.com>

test: more

Signed-off-by: discord9 <discord9@163.com>

simplify api

Signed-off-by: discord9 <discord9@163.com>

wip

Signed-off-by: discord9 <discord9@163.com>

fix: filter non-existent columns

Signed-off-by: discord9 <discord9@163.com>

feat: stuff

Signed-off-by: discord9 <discord9@163.com>

feat: cache partition columns

Signed-off-by: discord9 <discord9@163.com>

refactor: rm unused fn

Signed-off-by: discord9 <discord9@163.com>

no need res

Signed-off-by: discord9 <discord9@163.com>

chore: rm unwrap&docs update

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* fix: unsupported part

Signed-off-by: discord9 <discord9@163.com>

* err msg

Signed-off-by: discord9 <discord9@163.com>

* fix: pass correct partition cols

Signed-off-by: discord9 <discord9@163.com>

* fix? use column name only

Signed-off-by: discord9 <discord9@163.com>

* fix: merge scan has partition columns no alias/no partition diff

Signed-off-by: discord9 <discord9@163.com>

* refactor: loop instead of recursive

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* feat: overlaps

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-11 02:30:51 +00:00
discord9
ea8125aafb fix: count(1) instead of count(ts) when >1 inputs (#6952)
Signed-off-by: discord9 <discord9@163.com>
2025-09-10 21:30:43 +00:00
dennis zhuang
49722951c6 fix: unstable query sort results (#6944)
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-10 20:41:10 +00:00
Cason Kervis
c431b036ec fix(path): fix program lookup failure on Windows CI (#6946)
* fix(path): fix program lookup failure on Windows CI

Signed-off-by: Cason Kervis <cscnk52@outlook.com>

* fix(path): fix program exec name

Signed-off-by: Cason Kervis <cscnk52@outlook.com>

* fix(path): using absolute path

Signed-off-by: Cason Kervis <cscnk52@outlook.com>

* style: using fmt

Signed-off-by: Cason Kervis <cscnk52@outlook.com>

---------

Signed-off-by: Cason Kervis <cscnk52@outlook.com>
2025-09-10 13:38:54 +00:00
Ruihang Xia
c797d87210 fix: handle hash distribution properly (#6943)
* fix: handle hash distribution properly

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/optimizer/pass_distribution.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2025-09-10 06:35:10 +00:00
Weny Xu
e0ce0a6446 refactor: refactor PeerLookupService and simplify Selector implementations (#6939)
* refactor: move `lease` into `discovery` dir

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce discovery components

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: simplify selector

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: remove duplicate peer allocator trait

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: minor refactor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: refine comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-10 03:43:46 +00:00
Ruihang Xia
2f6da3b718 feat: exiting staging mode on success case (#6913)
* clear staging directories on entering staging mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test and merger

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* storage accessor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* exit on success

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use remove_all

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-10 01:52:55 +00:00
Ruihang Xia
83290be3ba feat: store partition expr per file in region manifest (#6849)
* feat: store partition expr per file in region manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use deserialized form in memory

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test non-existing partition expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* try remove partition expr dup

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Revert "try remove partition expr dup"

This reverts commit a0f2c8ca6a127ef356a5510466e84513cf4b13d8.

* fix merge test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format doc

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-09 23:37:29 +00:00
Zhenchi
21ee981b49 feat: add origin_region_id and node_id to sst entry (#6937)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-09 09:45:15 +00:00
LFC
44c2aa4c23 refactor: use DataFusion's return_type in our function trait directly (#6935)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-09 09:30:31 +00:00
Yingwen
9c22092189 feat: Implements compaction for bulk memtable (#6923)
* feat: initial bulk memtable compaction implementation

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement compact for memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for bulk memtable compaction

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address review comments

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-09 09:15:25 +00:00
discord9
ad690e14d0 chore: clean up FlowEngine trait (#6934)
Signed-off-by: discord9 <discord9@163.com>
2025-09-09 07:14:52 +00:00
Zhenchi
264d05d20e feat: add InformationExtension.inspect_datanode for datanode inspection (#6921)
* feat: add InformationExtension.inspect_datanode for datanode inspection

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* aggregate results from all datanodes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix unreleased mito engine

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-09 03:29:04 +00:00
Zhenchi
9fe84f6fbd feat(mito): backfill partition expr on region open (#6862)
* feat(mito): backfill partition expr on region open

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* tiny polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* only writable leader persists

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* catchup needs backfill

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add log

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* handle distribute mode

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* only when transfer to set gracefully

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-09 02:50:01 +00:00
shuiyisong
ac051af201 feat(pipeline): generate create table sql from pipeline config (#6930)
* chore: add expr_to_create function using cc and polish

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* feat: add get pipeline create table

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update test_expr_to_create

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: column extension

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add message hint for variant table name

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add message hint for existing table

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* test: create table for pipeline

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update comment

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: rename to query pipeline ddl

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-09-09 02:47:51 +00:00
Ning Sun
948a6578fa feat: add udtf (table function) registration (#6922)
* feat: work-in-progress udtf support

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: add table function support

Signed-off-by: Ning Sun <sunning@greptime.com>

* test: resolve test error

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-09-09 01:26:38 +00:00
Weny Xu
16febbd4c2 feat: add CPU, memory and node status info to cluster_info (#6897)
* feat: add CPU and memory info to `cluster_info`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add `node_status` to `cluster_info` table

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: update sqlness

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-08 08:59:34 +00:00
LFC
47384c7701 feat: support function alias (#6917)
* feat: udf alias

Signed-off-by: luofucong <luofc@foxmail.com>

* trying to fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

* x

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-08 08:57:24 +00:00
Ruihang Xia
c9377e7c5a build: bump rust edition to 2024 (#6920)
* bump edition

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* gen keyword

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* lifetime and env var

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* one more gen fix

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* lifetime of temporaries in tail expressions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clippy nested if

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clippy let and return

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-08 02:37:18 +00:00
Weny Xu
658d07bfc8 feat: add written_bytes_since_open column to region_statistics table (#6904)
* feat: add `write_bytes` column to `region_statistics` table

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename `write_bytes` to `written_bytes`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename `written_bytes` to `written_bytes_since_open`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-05 07:27:30 +00:00
dennis zhuang
24e5c9f6da test: migrate duckdb tests, part 1 (#6870)
* test: migrate duckdb tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add more duckdb tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: stable order

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: simplify comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove tests/cases/standalone/common/DUCKDB_MIGRATION_GUIDE.md

* fix: incorrect_sql.sql

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: integer flow test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: integer flow test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* docs: add todo

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-05 06:10:14 +00:00
Ruihang Xia
4158afa618 fix: wrap tql cte in a subquery alias (#6910)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-05 06:07:20 +00:00
LFC
48f5db3f5f refactor: use DataFusion's Signature directly in UDF (#6908)
* refactor: use DataFusion's Signature directly in UDF

Signed-off-by: luofucong <luofc@foxmail.com>

* fix sqlness

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-05 05:04:56 +00:00
discord9
f7c8d86ebb feat: file ref mgr (#6844)
* feat: file ref manager

Signed-off-by: discord9 <discord9@163.com>

* refactor: put file ref mgr to sep file

Signed-off-by: discord9 <discord9@163.com>

* chore: rename path

Signed-off-by: discord9 <discord9@163.com>

* test: ref path

Signed-off-by: discord9 <discord9@163.com>

* fix: pass node id

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* feat: exclude files already in the manifest

Signed-off-by: discord9 <discord9@163.com>

* docs: explain why it can work

Signed-off-by: discord9 <discord9@163.com>

* feat: also include manifest versions

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* rename func to imply what it's actually doing

Signed-off-by: discord9 <discord9@163.com>

* more docs

Signed-off-by: discord9 <discord9@163.com>

* docs: expect gc worker to do the job right

Signed-off-by: discord9 <discord9@163.com>

* refactor: partially per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: unused

Signed-off-by: discord9 <discord9@163.com>

* metrics: change to per datanode instead

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-05 03:45:01 +00:00
Ruihang Xia
faae1db066 feat: humanize analyze numbers (#6889)
* feat: humanize analyze numbers

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update cargo lock

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* trailing space

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use readable size

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-05 03:42:38 +00:00
Ruihang Xia
ba105e18b0 feat: put sqlness into a separate dir (#6911)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-05 01:39:29 +00:00
Yingwen
8bbb396506 feat: Supports flat format in SeqScan and UnorderedScan (#6905)
* feat: support flat format in SeqScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support flat format in unordered scan

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support parallel read for flat format in SeqScan

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename flat DedupReader to FlatDedupReader

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address review comments

It also precomputes the input arrow schema

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-04 13:12:24 +00:00
dennis zhuang
8e7f2e92cc test: adds approx_percentile_cont to range query test (#6903)
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-04 12:18:27 +00:00
Ruihang Xia
493a1efe65 feat: skip compaction on large file on append only mode (#6838)
* feat: skip compaction on large file on append only mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* log ignored files

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* only ignore level 1 files

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* early exit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-04 12:01:09 +00:00
Yingwen
40a49ddc82 feat: implement basic write/read methods for bulk memtable (#6888)
* feat: implement basic write/read methods for bulk memtable

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: change update stats atomic ordering

We have already acquired the write lock so Relaxed ordering is fine

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-04 08:42:05 +00:00
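The ordering note in the second commit above is worth a concrete illustration: once a write lock serializes all writers, a stats counter that readers may load lock-free only needs a `Relaxed` atomic bump. A minimal sketch of that pattern, with hypothetical names rather than the actual mito2 types:

```rust
use std::sync::RwLock;
use std::sync::atomic::{AtomicUsize, Ordering};

// Hypothetical stand-in for a bulk memtable with a lock-free stats counter.
struct Memtable {
    rows: RwLock<Vec<u64>>,
    estimated_bytes: AtomicUsize, // readable without taking the lock
}

impl Memtable {
    fn write(&self, value: u64, size: usize) {
        // The write lock already serializes writers against each other, so
        // the stats update needs no extra ordering of its own: Relaxed is fine.
        let mut rows = self.rows.write().unwrap();
        rows.push(value);
        self.estimated_bytes.fetch_add(size, Ordering::Relaxed);
    }
}

fn main() {
    let m = Memtable {
        rows: RwLock::new(Vec::new()),
        estimated_bytes: AtomicUsize::new(0),
    };
    m.write(42, 8);
    assert_eq!(m.estimated_bytes.load(Ordering::Relaxed), 8);
}
```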
Alex Araujo
2c019965be feat: unify FlushRegions instructions (#6819)
* feat: add FlushRegionsV2 instruction with unified semantics

- Add FlushRegionsV2 struct supporting both single and batch operations
- Preserve original FlushRegion(RegionId) API for backward compatibility
- Support configurable FlushStrategy (Sync/Async) and FlushErrorStrategy (FailFast/TryAll)
- Add detailed per-region error reporting in FlushRegionReply
- Update datanode handlers to support both legacy and enhanced flush instructions
- Maintain zero breaking changes through automatic conversion of legacy formats

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* chore: run make fmt

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* Apply suggestions from code review

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* refactor: extract shared perform_region_flush fn

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* refactor: use consistent error type across similar methods

see gh copilot suggestion: https://github.com/GreptimeTeam/greptimedb/pull/6819#discussion_r2299603698

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* chore: make fmt

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* refactor: consolidate FlushRegion instructions

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

---------

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>
2025-09-04 01:18:54 +00:00
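The strategy split described in the commit above maps naturally onto a pair of enums plus a conversion from the legacy single-region form. A hedged sketch of what such a unified instruction could look like; the field and type names are illustrative, not the actual greptimedb definitions:

```rust
#[derive(Debug, Clone, Copy)]
enum FlushStrategy {
    Sync,  // wait for the flush to complete before replying
    Async, // schedule the flush and reply immediately
}

#[derive(Debug, Clone, Copy)]
enum FlushErrorStrategy {
    FailFast, // stop at the first region that fails
    TryAll,   // flush every region, collecting per-region errors
}

struct FlushRegions {
    region_ids: Vec<u64>,
    strategy: FlushStrategy,
    error_strategy: FlushErrorStrategy,
}

impl FlushRegions {
    /// The legacy single-region instruction converts into the batch form,
    /// which is how backward compatibility is preserved.
    fn from_legacy(region_id: u64) -> Self {
        Self {
            region_ids: vec![region_id],
            strategy: FlushStrategy::Sync,
            error_strategy: FlushErrorStrategy::FailFast,
        }
    }
}

fn main() {
    let instr = FlushRegions::from_legacy(1);
    assert_eq!(instr.region_ids, vec![1]);
    println!("{:?} / {:?}", instr.strategy, instr.error_strategy);
}
```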
localhost
756856799f refactor(pipeline)!: change dispatch table name format (#6901)
* chore(breaking): change dispatch table name format

* chore: fix test
2025-09-03 13:47:40 +00:00
Weny Xu
dbb76483e8 chore: disable stats persistence by default (#6900)
* chore: disable stats persistence by default

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-03 12:39:43 +00:00
discord9
a65db7121e feat: flow full aggr only trigger on new data (#6880)
* fix: flow full aggr only trigger on new data

Signed-off-by: discord9 <discord9@163.com>

* chore: better debug msg

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-03 08:45:24 +00:00
liyang
ee4b8708d7 fix: fix deploy greptimedb in sqlness-test (#6894)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-09-03 08:11:18 +00:00
Weny Xu
2512f9456d chore: bump main branch version to 0.18 (#6890)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-03 07:50:24 +00:00
Zhenchi
deb728c38a fix: move prune_region_dir to region drop (#6891)
* fix: move prune_region_dir to region drop

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-03 07:11:05 +00:00
dennis zhuang
9dbf6dd8d0 test: migrate duckdb tests part2, window functions (#6875)
* test: migrate window tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: blank line at the end

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-03 06:55:47 +00:00
Weny Xu
7cf47ccf54 fix: initialize remote WAL regions with correct flushed entry IDs (#6856)
* fix: initialize remote WAL regions with correct flushed entry IDs

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add logs

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: correct latest offset

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: update sqlness

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add replay checkpoint to catchup request

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: logs

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-03 04:00:02 +00:00
Weny Xu
c3c79e4c79 chore: fix typo (#6887)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-03 03:45:55 +00:00
Ruihang Xia
56db1d468a chore: fix typo (#6885)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-09-03 02:43:25 +00:00
WaterWhisperer
7fbcae1ef8 test: add upgrade compatibility tests (#6863)
test: add upgrade compatibility tests (#5188)

Signed-off-by: WaterWhisperer <waterwhisperer24@qq.com>
2025-09-02 21:16:00 +00:00
Weny Xu
b7f507cb37 chore: update dashboard (#6883)
* chore: update dashboard

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update json

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update json

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add desc

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-02 14:54:22 +00:00
Zhenchi
9b8f04b065 fix: prune intermediate dirs on index finish and region purge (#6878)
* fix: prune intermediate dirs on index finish and region purge

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-02 13:57:16 +00:00
Yingwen
8fc42aeb27 feat: Update parquet writer and indexer to support the flat format (#6866)
* feat: implements method to write flat batch for ParquetWriter

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add update method for flat RecordBatch in Indexer

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: calls indexer to write flat batch in ParquetWriter

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle empty projection for flat format

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: eval array in precise_filter_flat

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: cache column lookup result in inverted indexer

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support dict type in dense codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: remove read part in test as it needs modifying the reader

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support dictionary type in other methods for dense codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: fulltext use string array directly

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-02 12:48:34 +00:00
Weny Xu
d394f38d18 fix: ignore reserved column IDs and prevent panic on chunk_size is zero (#6882)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-02 12:37:16 +00:00
discord9
6a3791ab31 fix(flow): promql auto create table (#6867)
* fix: non-aggr promql auto create table

Signed-off-by: discord9 <discord9@163.com>

* feat: val column use any name

Signed-off-by: discord9 <discord9@163.com>

* feat: check if it's tql src table

Signed-off-by: discord9 <discord9@163.com>

* test: check sink table is tql-able

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness redacted

Signed-off-by: discord9 <discord9@163.com>

* fix: sql also handle no aggr case

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-09-02 08:54:16 +00:00
Zhenchi
9175fa643d feat: add schema and recordbatch builder for sst entry (#6841)
* feat: add schema and recordbatch builder for sst entry

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add build plan

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-09-01 12:46:49 +00:00
Weny Xu
0e962844ac fix: fix incorrect timestamp precision in information_schema.tables (#6872)
* fix: fix incorrect timestamp precision in information_schema.tables

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit test

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-01 12:20:23 +00:00
Weny Xu
246b832d79 fix: use configured kv_client in etcd multi-transaction operations (#6871)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-09-01 11:13:48 +00:00
ZonaHe
e62a022d76 feat: update dashboard to v0.11.3 (#6864)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-09-01 11:08:35 +00:00
liyang
e595885dc6 chore: use greptime dockerhub image (#6865)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-09-01 11:07:26 +00:00
Lei, HUANG
dd3432e6ca chore: change encode raw values signature (#6869)
* chore/change-encode-raw-values-sig:
 ### Update Sparse Encoding to Use Byte Slices

 - **`bench_sparse_encoding.rs`**: Modified the `encode_raw_tag_value` function to use byte slices instead of `Bytes` for tag values.
 - **`sparse.rs`**: Updated the `encode_raw_tag_value` method in `SparsePrimaryKeyCodec` to accept byte slices (`&[u8]`) instead of `Bytes`. Adjusted related test cases to reflect this change.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/change-encode-raw-values-sig:
 ### Add `Clear` Trait Implementation for Byte Slices

 - Implemented the `Clear` trait for byte slices (`&[u8]`) in `repeated_field.rs` to enhance trait coverage and provide a default clear operation for byte slice types.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-09-01 09:53:08 +00:00
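The signature change above is the classic "borrow at the API boundary" move: accepting `&[u8]` lets callers pass any contiguous byte source without first wrapping it in a `Bytes`. A small sketch under that assumption (the function names here are hypothetical, not the real codec methods):

```rust
use bytes::Bytes;

// Old style: forces every caller to own (or clone into) a Bytes.
fn encode_owned(buf: &mut Vec<u8>, value: Bytes) {
    buf.extend_from_slice(&value);
}

// New style: any &[u8] works -- byte strings, Vec<u8>, and Bytes (via Deref).
fn encode_borrowed(buf: &mut Vec<u8>, value: &[u8]) {
    buf.extend_from_slice(value);
}

fn main() {
    let mut buf = Vec::new();
    encode_borrowed(&mut buf, b"tag-value");        // plain byte string
    encode_borrowed(&mut buf, &Bytes::from("abc")); // Bytes derefs to &[u8]
    encode_owned(&mut buf, Bytes::from("abc"));     // old path: extra Bytes
    assert_eq!(buf.len(), 9 + 3 + 3);
}
```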
Yingwen
ab96703d8f chore: enlarge max file limit to 384 (#6868)
chore: enlarge max concurrent files to 384

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-09-01 09:26:20 +00:00
github-actions[bot]
73add808a6 ci: update dev-builder image tag (#6858)
Signed-off-by: greptimedb-ci <greptimedb-ci@greptime.com>
Co-authored-by: greptimedb-ci <greptimedb-ci@greptime.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-09-01 08:44:43 +00:00
dennis zhuang
1234911ed3 refactor: query config options (#6781)
* feat: refactor columnar and vector conversion

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: initialize config options from query context

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: failure tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: revert ColumnarValue::try_from_vector

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-09-01 07:00:26 +00:00
LFC
d57c0db9e6 fix: no need to early lookup DNS for kafka broker (#6845)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-09-01 03:03:33 +00:00
Ning Sun
4daf5adce5 feat: update rate limiter to use semaphore that will block without re… (#6853)
* feat: update rate limiter to use semaphore that will block without return error

Signed-off-by: Ning Sun <sunning@greptime.com>

* fix: remove unused error

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-30 18:17:08 +00:00
Yingwen
575093f85f feat: Support more key types for the DictionaryVector (#6855)
* feat: support different key type for the dictionary vector

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support more dictionary type in try_into_vector

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: use key array's type as key type

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-29 13:23:25 +00:00
jeremyhi
ac82ad4549 feat: make etcd store max codec size configurable (#6859)
* feat: make etcd store max codec size configurable

* feat: only decoding limit
2025-08-29 12:21:59 +00:00
discord9
367a25af06 feat: flow promql auto sink table is also promql-able (#6852)
* feat: flow promql auto sink table is also promql-able

Signed-off-by: discord9 <discord9@163.com>

* fix: gen create table expr without aggr/projection outermost

Signed-off-by: discord9 <discord9@163.com>

* test: update non-aggr testcase

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-29 12:16:03 +00:00
zyy17
d585c23ba5 refactor: add stop methods for LocalFilePurger and CompactionRegion (#6848)
* refactor: add `LocalFilePurger::stop(&self)` and `stop_file_purger()` of `CompactionRegion`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: rename methods

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-29 09:23:59 +00:00
LFC
f55023f300 ci: install ssh for Android dev-builder (#6854)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-08-29 08:42:35 +00:00
Weny Xu
9213315613 chore: add server-side error logging to improve observability in gRPC (#6846)
chore: print tonic code

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-29 07:47:24 +00:00
zyy17
e77ad8e9dc chore: run pull-test-deps-images.sh before docker compose to avoid rate limit (#6851)
chore: run `pull-test-deps-images.sh` before docker compose

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-29 05:03:45 +00:00
liyang
7bc669e991 chore: update bitnami config (#6847)
* chore: update bitnami config

Signed-off-by: liyang <daviderli614@gmail.com>

* update postgresql chart version

Signed-off-by: liyang <daviderli614@gmail.com>

* fix ci

Signed-off-by: liyang <daviderli614@gmail.com>

* refactor: add pull-test-deps-images.sh to pull images one by one to avoid rate limit

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: liyang <daviderli614@gmail.com>
Signed-off-by: zyy17 <zyylsxm@gmail.com>
Co-authored-by: zyy17 <zyylsxm@gmail.com>
2025-08-29 02:45:14 +00:00
ZonaHe
b84cd19145 feat: update dashboard to v0.11.2 (#6843)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-08-28 10:59:41 +00:00
Logic
cbcfdf9d65 feat: add optional schema for Postgres metadata tables (#6764)
* feat(meta): add optional schema for Postgres metadata tables

- Add `schema` option to specify a custom schema for metadata tables
- Update `PgStore` and `PgElection` to support optional schema
- Modify SQL templates to use schema when provided
- Add tests for schema support in Postgres backend

Signed-off-by: Logic <zqr10159@dromara.org>

* refactor(meta): remove unused `create_schema_statement` and simplify `PgSqlTemplateFactory`

- Remove `create_schema_statement` from `PgSqlTemplateSet` struct
- Simplify `PgSqlTemplateFactory` by removing `new` method and merging it with `with_schema`
- Update related tests to reflect these changes

Signed-off-by: Logic <zqr10159@dromara.org>

* refactor(meta-srv): remove unused imports

- Remove unused import of BoxedError from common_error::ext
- Remove unused import of TlsOption from servers::tls

Signed-off-by: Logic <zqr10159@dromara.org>

* build(meta): update Postgres version and add error handling imports

- Update Postgres version to 17 in docker-compose.yml
- Add BoxedError import for error handling in meta-srv

Signed-off-by: Logic <zqr10159@dromara.org>

* feat(postgres): add support for optional schema in PgElection and related components

Signed-off-by: Logic <zqr10159@dromara.org>

* feat(postgres): add support for optional schema in PgElection and related components

Signed-off-by: Logic <zqr10159@dromara.org>

* fix(develop): update Postgres schema commands to specify host

Signed-off-by: Logic <zqr10159@dromara.org>

* refactor(postgres): simplify plugin options handling and update SQL examples

Signed-off-by: Logic <zqr10159@dromara.org>

* refactor(postgres): simplify plugin options handling and update SQL examples

Signed-off-by: Logic <zqr10159@dromara.org>

* fix(postgres): update meta_election_lock_id description for optional schema support

Signed-off-by: Logic <zqr10159@dromara.org>

* fix(postgres): add health check and fallback wait for Postgres in CI setup

* fix(postgres): update Docker setup for Postgres and add support for Postgres 15

* fix(postgres): remove redundant Postgres setup step in CI configuration

* Update tests-integration/fixtures/postgres/init.sql

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Update .github/workflows/develop.yml

* Update tests-integration/fixtures/docker-compose.yml

* Update src/common/meta/src/kv_backend/rds/postgres.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Update src/common/meta/src/kv_backend/rds/postgres.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Update src/common/meta/src/kv_backend/rds/postgres.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* Update src/common/meta/src/kv_backend/rds/postgres.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* fix: Refactor PostgreSQL backend to support optional schema in PgStore and related SQL templates

* feat: Update PostgreSQL configuration and add PG15 specific integration tests

* feat: Update PostgreSQL configuration and add PG15 specific integration tests

* refactor(postgres): update test schemas from 'greptime_schema' to 'test_schema'

* Update .github/workflows/develop.yml

* refactor: minor refactor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit test

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: Logic <zqr10159@dromara.org>
Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: Weny Xu <wenymedia@gmail.com>
2025-08-28 09:24:14 +00:00
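The essential idea behind the schema option above is that every SQL template qualifies its table name when a schema is configured, and falls back to the bare name otherwise. A minimal sketch of that template generation, assuming a metadata table called `greptime_metakv` (the table name and function are illustrative, not the actual PgSqlTemplateFactory API):

```rust
struct PgSqlTemplates {
    create_table: String,
    select_kv: String,
}

fn templates_with_schema(schema: Option<&str>) -> PgSqlTemplates {
    // Qualify the table name only when a custom schema is provided.
    let table = match schema {
        Some(s) => format!("{s}.greptime_metakv"),
        None => "greptime_metakv".to_string(),
    };
    PgSqlTemplates {
        create_table: format!(
            "CREATE TABLE IF NOT EXISTS {table} (k bytea PRIMARY KEY, v bytea)"
        ),
        select_kv: format!("SELECT k, v FROM {table} WHERE k = $1"),
    }
}

fn main() {
    let qualified = templates_with_schema(Some("meta"));
    assert!(qualified.select_kv.contains("meta.greptime_metakv"));
    let default = templates_with_schema(None);
    assert!(default.create_table.contains(" greptime_metakv "));
}
```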
discord9
bacd9c7d15 feat: add event ts to region manifest (#6751)
* feat: add event ts to region manifest

Signed-off-by: discord9 <discord9@163.com>

delete files

Signed-off-by: discord9 <discord9@163.com>

chore: clippy

Signed-off-by: discord9 <discord9@163.com>

lower concurrency

Signed-off-by: discord9 <discord9@163.com>

feat: gc use delta manifest to get expel time

Signed-off-by: discord9 <discord9@163.com>

docs: terminology

Signed-off-by: discord9 <discord9@163.com>

refactor: some advices from review

Signed-off-by: discord9 <discord9@163.com>

feat: manifest add removed files field

Signed-off-by: discord9 <discord9@163.com>

feat(WIP): add remove time in manifest

Signed-off-by: discord9 <discord9@163.com>

wip: more config

Signed-off-by: discord9 <discord9@163.com>

feat: manifest update removed files

Signed-off-by: discord9 <discord9@163.com>

test: add remove file opts field

Signed-off-by: discord9 <discord9@163.com>

test: fix test

Signed-off-by: discord9 <discord9@163.com>

chore: delete gc.rs

Signed-off-by: discord9 <discord9@163.com>

* feat: proper option name

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* test: update manifest size

Signed-off-by: discord9 <discord9@163.com>

* test: fix eq

Signed-off-by: discord9 <discord9@163.com>

* refactor: some per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: more per review

Signed-off-by: discord9 <discord9@163.com>

* test: update manifest size

Signed-off-by: discord9 <discord9@163.com>

* test: update config

Signed-off-by: discord9 <discord9@163.com>

* feat: keep count 0 means only use ttl

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-28 09:08:18 +00:00
Weny Xu
f441598247 fix: correct config doc (#6836)
fix: fix typo

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-28 06:26:25 +00:00
Weny Xu
556c408e7b feat: rename region_statistics to region_statistics_history (#6837)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-28 06:14:20 +00:00
shuiyisong
ec817f6877 fix: gRPC auth (#6827)
* fix: internal service

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: gRPC auth

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add permission check for bulk ingest

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove unused grpc auth middleware

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: extract header function

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: extract common code and add auth to otel arrow api

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: rename utils to context_auth

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* test: otel arrow auth

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add support for old auth value

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-08-28 04:00:45 +00:00
Yingwen
32e73dad12 fix: use actual buf size as cache page value size (#6829)
* feat: cache the cloned page bytes

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: cache the whole row group pages

The opendal reader may merge IO requests so the pages of different
columns can share the same Bytes.
When we use a per-column page cache, the page cache may still reference
the whole Bytes after eviction if there are other columns in the cache that
share the same Bytes.

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: check possible max byte range and copy pages if needed

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: always copy pages

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: returns the copied pages

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: compute cache size by MERGE_GAP

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: align to buf size

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: align to 2MB

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove unused code

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix typo

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix parquet read with cache test

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-28 03:37:11 +00:00
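The retention problem described in the commit body above comes from `bytes::Bytes` being a reference-counted view: slicing is zero-copy, so one small page can keep an entire merged read buffer alive. A minimal sketch of the effect and of the copy that detaches a page from the shared buffer:

```rust
use bytes::Bytes;

fn main() {
    // One merged IO request: a single 8 MiB buffer backing many column pages.
    let merged = Bytes::from(vec![0u8; 8 * 1024 * 1024]);

    // Slicing is zero-copy: the page is just a view into the same allocation.
    let page = merged.slice(0..4096);
    drop(merged);

    // Even now, `page` alone pins the whole 8 MiB allocation -- exactly the
    // eviction problem the fix avoids. Copying detaches the page:
    let owned = Bytes::copy_from_slice(&page);
    assert_eq!(owned.len(), 4096);
}
```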
Sicong Hu
bc20b17bc5 docs(rfc): async index build (#6757)
* docs(rfc): async index build

Signed-off-by: SNC123 <sinhco@outlook.com>

* update rfc for retaining sync build

Signed-off-by: SNC123 <sinhco@outlook.com>

* fix bug and update rfc for index resource management

Signed-off-by: SNC123 <sinhco@outlook.com>

* update rfc for manual rebuild

Signed-off-by: SNC123 <sinhco@outlook.com>

---------

Signed-off-by: SNC123 <sinhco@outlook.com>
2025-08-28 02:32:44 +00:00
Weny Xu
200422313f refactor(meta): refactor admin service to use modern axum handlers (#6833)
* refactor(meta): refactor admin service to use modern axum handlers

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/meta-srv/src/service/admin/health.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-27 10:47:11 +00:00
discord9
8452a9d579 feat(flow): add eval interval option (#6623)
* feat: add flow eval interval

Signed-off-by: discord9 <discord9@163.com>

* feat: tql flow must have eval interval

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* test: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* wip

Signed-off-by: discord9 <discord9@163.com>

* wip

Signed-off-by: discord9 <discord9@163.com>

* feat: check for now func

Signed-off-by: discord9 <discord9@163.com>

* refactor: use ms instead

Signed-off-by: discord9 <discord9@163.com>

* fix: don't panic & use a proper simplifier

Signed-off-by: discord9 <discord9@163.com>

* test: update to fix

Signed-off-by: discord9 <discord9@163.com>

* feat: disallow month in interval

Signed-off-by: discord9 <discord9@163.com>

* test: update, remove months

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* feat: use seconds and add to field instead

Signed-off-by: discord9 <discord9@163.com>

* chore: after rebase fix

Signed-off-by: discord9 <discord9@163.com>

* fix: add check for month

Signed-off-by: discord9 <discord9@163.com>

* chore: fmt

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: rm clone per review

Signed-off-by: discord9 <discord9@163.com>

* chore: update proto

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-27 09:44:32 +00:00
discord9
5ef4dd1743 docs: add internal grpc ports (#6815)
* docs: add internal grpc ports

Signed-off-by: discord9 <discord9@163.com>

* fix: update example toml

Signed-off-by: discord9 <discord9@163.com>

* fix: grpc option use default for missing field

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-27 08:20:27 +00:00
Yan Tingwang
32a3ef36f9 feat(metasrv): support tls for etcd client (#6818)
* add TLS support for etcd client connections~

Signed-off-by: codephage2020 <tingwangyan2020@163.com>

* locate correct certs

Signed-off-by: codephage2020 <tingwangyan2020@163.com>

* Updated certs

Signed-off-by: codephage2020 <tingwangyan2020@163.com>

* Updated CI

Signed-off-by: codephage2020 <tingwangyan2020@163.com>

* Updated CI

Signed-off-by: codephage2020 <tingwangyan2020@163.com>

* Update docker-compose.yml

* tests for TLS client creation

Signed-off-by: codephage2020 <tingwangyan2020@163.com>

* modify tests

Signed-off-by: codephage2020 <tingwangyan2020@163.com>

---------

Signed-off-by: codephage2020 <tingwangyan2020@163.com>
2025-08-27 07:41:05 +00:00
Weny Xu
566a647ec7 feat: add replay checkpoint to reduce overhead for remote WAL (#6816)
* feat: introduce `TopicRegionValue`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: persist region replay checkpoint

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce checkpoint

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update config.md

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: minor refactor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: send open region instructions with replay checkpoint

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: use usize

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add topic name pattern

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: enable wal prune by default

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-27 07:24:33 +00:00
Yingwen
906e1ca0bf feat: functions and structs to scan flat format file and mem ranges (#6817)
* feat: implement function to scan flat memtable ranges

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement function to scan flat file ranges

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: compat batch in scan file range

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: scan other ranges

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-27 06:31:30 +00:00
Weny Xu
b921e41abf chore: remove unused deps (#6828)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-26 12:29:17 +00:00
Weny Xu
6782bcddfa fix: prevent stale physical table route during procedure retries (#6825)
fix(meta): prevent stale physical table route during procedure retries

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-26 12:14:06 +00:00
Weny Xu
3d1a4b56a4 feat: add support for TWCS time window hints in insert operations (#6823)
* feat: Add support for TWCS time window hints in insert operations

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: set system events table time window to 1d

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-26 10:52:00 +00:00
Arshdeep
8894cb5406 feat: resolve unused dependencies with cargo-udeps (#6578) (#6619)
* feat: resolve unused dependencies with cargo-udeps (#6578)

Signed-off-by: Arshdeep54 <balarsh535@gmail.com>

* Apply suggestion from @zyy17

Co-authored-by: zyy17 <zyylsxm@gmail.com>

* Apply suggestion from @zyy17

Co-authored-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: Arshdeep54 <balarsh535@gmail.com>
Co-authored-by: Ning Sun <classicning@gmail.com>
Co-authored-by: zyy17 <zyylsxm@gmail.com>
2025-08-26 10:22:53 +00:00
ZonaHe
bb334e1594 feat: update dashboard to v0.11.1 (#6824)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-08-26 08:24:05 +00:00
Weny Xu
ec8ff48473 fix: correct heartbeat stream handling logic (#6821)
* fix: correct heartbeat stream handling logic

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/meta-srv/src/service/heartbeat.rs

Co-authored-by: jeremyhi <jiachun_feng@proton.me>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: jeremyhi <jiachun_feng@proton.me>
2025-08-26 07:39:59 +00:00
Lei, HUANG
d99734b97b perf: sparse encoder (#6809)
* perf/sparse-encoder:
 - **Update Dependencies**: Updated `criterion-plot` to version `0.5.0` and added `criterion` version `0.7.0` in `Cargo.lock`. Added `bytes` to `Cargo.toml` in `src/metric-engine`.
 - **Benchmarking**: Added a new benchmark for sparse encoding in `bench_sparse_encoding.rs` and updated `Cargo.toml` in `src/mito-codec` to include `criterion` as a dev-dependency.
 - **Sparse Encoding Enhancements**: Modified `SparsePrimaryKeyCodec` in `sparse.rs` to include new methods `encode_raw_tag_value` and `encode_internal`. Added public constants `RESERVED_COLUMN_ID_TSID` and `RESERVED_COLUMN_ID_TABLE_ID`.
 - **HTTP Server**: Made `try_decompress` function public in `prom_store.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* perf/sparse-encoder:
 Improve buffer handling in `sparse.rs`

 - Refactored buffer reservation logic to use `value_len` for clarity.
 - Optimized chunk processing by calculating `num_chunks` and `remainder` for efficient data handling.
 - Enhanced manual serialization of bytes to avoid byte-by-byte operations, improving performance.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* Update src/mito-codec/src/row_converter/sparse.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-08-26 04:10:11 +00:00
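The buffer-handling notes above boil down to two moves: reserve the output buffer once using the known value length, and copy in chunk-sized slices instead of byte by byte. A hedged sketch of that idea; the 8-byte-group marker format here is only illustrative, not the real sparse codec's encoding:

```rust
const CHUNK: usize = 8;

// Encode `value` in 8-byte groups, each followed by a 1-byte marker.
fn encode_chunked(out: &mut Vec<u8>, value: &[u8]) {
    let num_chunks = value.len() / CHUNK;
    let remainder = value.len() % CHUNK;
    // One up-front reservation covers every group plus its marker byte.
    out.reserve((num_chunks + 1) * (CHUNK + 1));
    for chunk in value.chunks_exact(CHUNK) {
        out.extend_from_slice(chunk); // bulk copy, not a per-byte loop
        out.push(CHUNK as u8);        // "full group" marker
    }
    // Pad the final partial group; its marker records the real tail length.
    out.extend_from_slice(&value[num_chunks * CHUNK..]);
    out.extend(std::iter::repeat(0u8).take(CHUNK - remainder));
    out.push(remainder as u8);
}

fn main() {
    let mut out = Vec::new();
    encode_chunked(&mut out, b"hello, sparse encoder"); // 21 bytes
    assert_eq!(out.len(), 3 * (CHUNK + 1)); // 2 full groups + 1 padded group
}
```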
Ruihang Xia
eb5e627ddd fix: follow promql rule for handling label of aggr (#6788)
* fix: follow promql rule for handling label of aggr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adopt more rules

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-26 03:37:08 +00:00
Ning Sun
69eed2c3fa feat: only show prometheus logical tables for __name__ values query (#6814)
feat: only show prometheus logical tables for __name__ query

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-25 15:04:42 +00:00
Ning Sun
48572d18a8 feat: name label regex matcher in label values api (#6799)
* test: add failing test for #6791

* test: add support for = and =~

* fix: lint

* fix: code merge issue

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-25 08:48:53 +00:00
Yingwen
d5575d3fa4 feat: add FlatConvertFormat to convert record batches in old format to the flat format (#6786)
* feat: add convert format to FlatReadFormat

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test convert format

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: only convert string pks to dictionary

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-25 06:47:06 +00:00
Ning Sun
83a65a81c0 feat: add cli option for internal grpc (#6806) 2025-08-25 02:12:53 +00:00
Ruihang Xia
288f69a30f fix: plan disorder from upgrading datafusion (#6787)
* fix: plan disorder from upgrading datafusion

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-23 12:29:47 +00:00
zyy17
d6d5dad758 chore: revert #6763 (#6800)
Revert "refactor: change plugin option type from `&[PluginOptions]` to `Optio…"

This reverts commit 5420d6f7fb.
2025-08-23 06:57:25 +00:00
Weny Xu
d82f36db6a chore: add peer address context to client error logging (#6793)
chore: add context to client error logging

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-23 06:56:24 +00:00
Ning Sun
68ac37461b feat: add limit to label value api (#6795)
* feat: add limit to label value api

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: limit for special labels

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-23 06:23:51 +00:00
Weny Xu
4c2955b86b fix: time unit mismatch in lookup_frontends function (#6798)
* fix: lookup frontend

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-22 12:34:26 +00:00
localhost
390aef7563 chore: modifying the visibility of the ScalarFunctionFactory field (#6797) 2025-08-22 10:55:01 +00:00
dennis zhuang
c6c33d14aa feat: bump opendal to v0.54 (#6792)
* feat: bump opendal to v0.54.0

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: update cargo

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-08-22 10:51:06 +00:00
dennis zhuang
3014972202 chore: improve error message when there are more than one time index (#6796)
* chore: improve error message when there are more than one time index

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-08-22 09:22:55 +00:00
Zhenchi
6839b5aef4 feat(mito): list SSTs from manifest and storage (#6766)
* feat(engine): list SSTs from manifest and storage

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* mito engine only

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* try best to remove allocation

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-08-22 09:22:42 +00:00
ZonaHe
a04ec07b61 feat: update dashboard to v0.11.0 (#6794)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-08-22 09:08:48 +00:00
discord9
d774996e89 chore: make internal grpc optional (#6789)
* chore: make internal grpc optional

Signed-off-by: discord9 <discord9@163.com>

* revert sqlness runner too

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-22 04:22:49 +00:00
discord9
2b43ff30b6 feat: provide plan info when flow exec (#6783)
* feat: provide plan info when flow exec

Signed-off-by: discord9 <discord9@163.com>

* backoff?

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-22 03:47:26 +00:00
discord9
eaceae4c91 feat: frontend internal grpc port (#6784)
* feat: frontend internal grpc port

Signed-off-by: discord9 <discord9@163.com>

* fix: grpc server naming

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness test fix

Signed-off-by: discord9 <discord9@163.com>

* fix: internal not use process manager

Signed-off-by: discord9 <discord9@163.com>

* test: test integration port alloc

Signed-off-by: discord9 <discord9@163.com>

* feat: skip auth for internal grpc

Signed-off-by: discord9 <discord9@163.com>

* test: is distributed

Signed-off-by: discord9 <discord9@163.com>

* what:

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-08-22 02:46:35 +00:00
Ning Sun
cdc168e753 feat: support for custom headers in otel exporter (#6773)
* feat: support for custom headers in otel exporter

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: remove wrapping option

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-21 13:10:13 +00:00
discord9
5d5817b851 chore: no logging when init table_flow cache if empty (#6785)
chore: no logging

Signed-off-by: discord9 <discord9@163.com>
2025-08-21 12:38:11 +00:00
Alex Araujo
a98c48a9b2 feat: optimize CreateFlowData with lightweight FlowQueryContext (#6780)
* feat: optimize CreateFlowData with lightweight FlowQueryContext

Replace the full QueryContext with a FlowQueryContext containing only the essential fields (catalog, schema, timezone) in the CreateFlowData struct. This improves serialization performance by eliminating the unused extensions HashMap and channel fields.

Key changes:
- Add FlowQueryContext struct with conversion implementations
- Update CreateFlowData to use FlowQueryContext with backward compatibility
- Add tests for serialization and conversions

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

* chore: run make fmt

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>

---------

Signed-off-by: Alex Araujo <alexaraujo@gmail.com>
2025-08-21 12:02:57 +00:00
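The optimization above is a standard "serialize only what you use" trim. A sketch of the shape under the stated fields -- catalog, schema, timezone -- with serde assumed for the serialization path; the definitions are illustrative, and the real structs carry more machinery:

```rust
use serde::{Deserialize, Serialize};

#[derive(Debug, Clone, Serialize, Deserialize)]
struct FlowQueryContext {
    catalog: String,
    schema: String,
    timezone: String,
}

// Stand-in for the full context, whose extensions map (and channel field)
// never needs to cross the wire for flow creation.
struct QueryContext {
    catalog: String,
    schema: String,
    timezone: String,
    extensions: std::collections::HashMap<String, String>,
}

impl From<&QueryContext> for FlowQueryContext {
    fn from(ctx: &QueryContext) -> Self {
        Self {
            catalog: ctx.catalog.clone(),
            schema: ctx.schema.clone(),
            timezone: ctx.timezone.clone(),
        }
    }
}

fn main() {
    let full = QueryContext {
        catalog: "greptime".into(),
        schema: "public".into(),
        timezone: "UTC".into(),
        extensions: Default::default(),
    };
    let light = FlowQueryContext::from(&full);
    // Only the three essential fields end up serialized.
    let json = serde_json::to_string(&light).unwrap();
    assert!(json.contains("\"timezone\":\"UTC\""));
    let _ = full.extensions; // the trimmed struct never touches this field
}
```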
Weny Xu
6692957e08 refactor: simplify WAL Pruning procedure part2 (#6782)
refactor: simplify prune wal procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-21 11:09:31 +00:00
Weny Xu
896d72191e feat: introduce PersistStatsHandler (#6777)
* feat: add `Inserter` trait and impl

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: import items

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce `PersistStatsHandler`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: disable persisting stats in sqlness

Signed-off-by: WenyXu <wenymedia@gmail.com>

* reset channel manager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: avoid to collect

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: remove insert options

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: use `write_bytes` instead of `write_bytes_per_sec`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: compute write bytes delta

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* Update src/meta-srv/src/handler/persist_stats_handler.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-08-21 09:34:58 +00:00
Weny Xu
5eec3485fe feat: add IntoRow and Schema derive macros (#6778)
* chore: import items

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add `IntoRow` and `Schema` derive macros

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: styling

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-21 06:29:59 +00:00
Ruihang Xia
7e573e497c feat: simplify more regex patterns in promql (#6747)
* feat: simplify more regex patterns in promql

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-20 18:51:10 +00:00
Ruihang Xia
474a689309 feat: region prune part 2 (#6752)
* skeleton

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* get rule set

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust style

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust params

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reuse collider

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* canonize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more robust predicate extractor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify predicate extractor's test and impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* unify import

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplification, remove unnecessary interfaces

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle partial referenced exprs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* finalize predicate extractor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* document region pruner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: reduce diff

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify checker

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refine overlapping check method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reduce diff

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* coerce types

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unused errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply review comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor use Bound

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify hashmap

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* sqlness tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* redact region id

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test: update sqlness result after updating datafusion

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: discord9 <55937128+discord9@users.noreply.github.com>
Co-authored-by: discord9 <discord9@163.com>
2025-08-20 18:47:38 +00:00
Yingwen
2995eddca5 feat: Implements FlatCompatBatch to adapt schema in flat format (#6771)
* feat: compat flat wip

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: rewrite key

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support append key

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: Split rewrite logic

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename CompatFlatBatch to FlatCompatBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: compat primary key

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address fixme

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: new CompatBatch if need convert

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use different compat

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: plain_projection -> flat_projection

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: test compat

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test FlatCompatBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: fix warnings and remove unused code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: support compat tags with dictionary types

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: reuse column_id_values

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: avoid zero pk values len

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-20 12:36:14 +00:00
LFC
05529387d9 refactor: use DataFusion's UDAF implementation directly (#6776)
* refactor: use DataFusion's UDAF implementation directly

Signed-off-by: luofucong <luofc@foxmail.com>

* remove: delete how-to guide for writing aggregate functions

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

* refactor: port json_encode_path to datafusion udaf

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-08-20 08:25:00 +00:00
fys
819531393f feat: disable month in trigger interval expr (#6774)
* feat: disable month in trigger interval expr

* fix: cargo clippy

* fix: cargo clippy

* add unit test

* remove unused comment
2025-08-20 07:21:39 +00:00
dennis zhuang
d6bc117408 refactor: refactor admin functions with async udf (#6770)
* refactor: use async udf for admin functions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: sqlness test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: code style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: clippy

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove unused error

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: code style

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: apply suggestions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: logical_metric_table test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-08-20 03:35:38 +00:00
yihong
7402320abc fix: time() function should behave the same as Prometheus (#6704)
* fix: close issue_6701 phase 1 make it return now

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make tests stable

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: drop useless tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: close issue_6701 phase 1 make it return now

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make tests stable

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: drop useless tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make time() really right

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: fix tests

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* add two sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-20 03:05:21 +00:00
zyy17
5420d6f7fb refactor: change plugin option type from &[PluginOptions] to Option<&PluginOptions> for understandability (#6763)
refactor: change plugin option type from `&[PluginOptions]` to `Option<&PluginOptions>` for understandability

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-20 02:11:00 +00:00
liyang
7af471c5aa ci: add is-current-version-latest check to helm-charts/homebrew-greptime bump jobs (#6772)
ci: add `is-current-version-latest` check to helm/homebrew bump jobs

Signed-off-by: liyang <daviderli614@gmail.com>
2025-08-19 17:49:07 +00:00
Weny Xu
9cdd0d8251 feat: derive macro ToRow (#6768)
* feat: introduce `ToRow` derive marco

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: use map

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: simplify

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-19 14:06:03 +00:00
Ning Sun
5a4036cc66 feat: update opentelemetry family (#6762)
* feat: update opentelemetry family

Signed-off-by: Ning Sun <sunning@greptime.com>

* doc: update doc samples

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: toml format

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: update default otel endpoint

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-19 09:23:50 +00:00
Yingwen
8fc3a9a9d7 feat: Implements async FlatMergeReader and FlatDedupReader (#6761)
* refactor: Add Flat prefix to MergeIterator and DedupIterator

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement MergeReader for RecordBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use GenericNode for IterNode

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: flat merge reader to stream

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: implement FlatDedupReader

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add a benchmark for FlatMergeIterator

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename plain_projection to flat_projection

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-19 06:30:59 +00:00
liyang
0b29b41c17 ci: add Signed-off-by in update-dev-builder-version script (#6765)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-08-19 04:15:13 +00:00
github-actions[bot]
78dca8a0d7 ci: update dev-builder image tag (#6759)
* ci: update dev-builder image tag

Signed-off-by: evenyag <realevenyag@gmail.com>

* ci: ignore makefile

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: greptimedb-ci <greptimedb-ci@greptime.com>
Co-authored-by: evenyag <realevenyag@gmail.com>
2025-08-19 02:51:34 +00:00
Ning Sun
bf191c5763 test: fix sqlness hash character count (#6758) 2025-08-18 11:06:05 +00:00
Ruihang Xia
b652ea52ee fix: partition tree's dict size metrics mismatch (#6746)
fix: partition tree metrics mismatch

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-18 08:36:35 +00:00
Weny Xu
f64fc3a57a feat: add RateMeter for tracking memtable write throughput (#6744)
* feat: introduce `RateMeter`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-18 06:45:31 +00:00
fys
326198162e refactor: enhanced trigger interval (#6740)
* refactor: enhance trigger interval

* update greptime-proto

* fix: build
2025-08-18 04:03:26 +00:00
LFC
f9d2a89a0c chore: update datafusion family (#6675)
* chore: update datafusion family

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

* use official otel-arrow-rust

Signed-off-by: luofucong <luofc@foxmail.com>

* rebase

Signed-off-by: luofucong <luofc@foxmail.com>

* use the official orc-rust

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* remove the empty lines

Signed-off-by: luofucong <luofc@foxmail.com>

* try following PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-08-15 12:41:49 +00:00
discord9
dfc29eb3b3 feat: flownode grpc client to frontend tls option (#6750)
* feat: flownode grpc client to frontend tls option

Signed-off-by: discord9 <discord9@163.com>

* refactor: client tls option

Signed-off-by: discord9 <discord9@163.com>

* refactor: client_tls to frontend_tls

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-15 10:44:27 +00:00
Ning Sun
351826cd32 fix: refine shadow-rs dependency, remove libgit2, libz and potentially libssl (#6748)
* feat: update shadow-rs config and remove libgit/libz/libssl deps

* chore: remove native inputs from flake
2025-08-15 03:01:14 +00:00
Yingwen
60e01c7c3d feat: Implements last-non-null dedup strategy for flat format (#6709)
* feat: Implements FlatLastNonNull strategy

Dedup rows and keep last non null fields

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add basic test for FlatLastNonNull

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: port more last non null test

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: do merge rows after delete op

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename num_pk_columns to field_column_start

So we can support different format later

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-14 14:38:47 +00:00
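The heart of a last-non-null strategy is a column-wise merge of duplicate rows: keep the newer row's value and fall back to the older row only where the newer value is null. A minimal sketch of that rule with `Option` standing in for nullable fields; this is illustrative, not the actual `FlatLastNonNull` code:

```rust
/// Merge two versions of the same primary key: prefer the newer row's
/// fields, falling back to the older row's value where the newer is null.
fn merge_last_non_null(newer: &[Option<i64>], older: &[Option<i64>]) -> Vec<Option<i64>> {
    newer.iter().zip(older).map(|(n, o)| n.or(*o)).collect()
}

fn main() {
    let newer = [Some(1), None, Some(3)];
    let older = [Some(9), Some(2), None];
    assert_eq!(merge_last_non_null(&newer, &older), vec![Some(1), Some(2), Some(3)]);
}
```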
Weny Xu
021ad09c21 refactor: simplify WAL pruning procedure and introduce region flush trigger (#6741)
* chore: add logs

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: update wal config for metasrv

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce region flush trigger

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: debug assert

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: log level

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: simplify wal prune procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: upgrade rskafka

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: always flush inactive regions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: refactor flush trigger

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remove unused code

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: typo

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: update unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add metrics

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-14 14:15:30 +00:00
discord9
2a3e4c7a82 fix: truncate manifest action compat (#6742)
* fix: truncate manifest action compat

Signed-off-by: discord9 <discord9@163.com>

* refactor: use simpler compat

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-14 14:05:17 +00:00
Zhenchi
92fd34ba22 refactor: split node manager trait (#6743)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-08-14 08:33:57 +00:00
sunheyi
d03f85287e feat: mysql add prepared_stmt_cache_capacity (#6639)
* feat: your clear and concise commit message

Signed-off-by: sunheyi <1061867552@qq.com>

* fix error

Signed-off-by: sunheyi <1061867552@qq.com>

* add param

Signed-off-by: sunheyi <1061867552@qq.com>

* fix

Signed-off-by: sunheyi <1061867552@qq.com>

* fix doc error

Signed-off-by: sunheyi <1061867552@qq.com>

---------

Signed-off-by: sunheyi <1061867552@qq.com>
2025-08-14 08:19:10 +00:00
Ruihang Xia
4d97754cb4 feat: persist manifest, SST and index files to staging dir (#6726)
* make flush and access layer aware of staging mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* staging flag in flush notify

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* staging manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* index methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* only stage manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* improve comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-14 06:02:31 +00:00
Ruihang Xia
1b6d924169 feat: predicate extractor (region prune part 1) (#6729)
* feat: predicate extractor (region prune part 1)

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* stricter check

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-14 04:42:50 +00:00
dennis zhuang
80f3ae650c docs: improve CONTRIBUTING.md (#6698)
* docs: improve CONTRIBUTING.md

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: features

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

* chore: adds features to Makefile

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: adds make commands

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-08-14 03:48:05 +00:00
dennis zhuang
c0fe800e79 feat: improve slow queries options deserialization (#6734)
* feat: improve slow queries options deserialization

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* refactor: use serde default for struct

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-08-14 03:43:13 +00:00
yihong
56f5ccf823 fix: support unknown for timestamp function (#6708)
* fix: support unknown for timestamp function

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: some sqlness tests no longer error

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make clippy happy

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-08-14 02:56:33 +00:00
Zhenchi
fb3b1d4866 feat: Store partition expr in RegionMetadata (#6699)
* wire partition.expr_json option constant and parsing

test(mito2): manifest roundtrip persists partition_expr JSON

test(mito2): create/open with partition.expr_json persists in manifest

docs: add comments for partition.expr_json option and RegionOptions.partition_expr

serde: include RegionOptions.partition_expr (skip if None)

test(mito2): doc intent and verify runtime backfill + persistence-after-alter for partition expr

add partition expr to create request

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add create_with_partition_expr_persists_manifest

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* pass partition expr to create request

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix sqlness

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* remove unused dep

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-13 22:05:37 +00:00
Weny Xu
4fb7d92f7c feat(metasrv): implement topic statistics collection (#6732)
* feat(metasrv): collect topic stats

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fix tests and apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-13 12:52:28 +00:00
Weny Xu
8659412cac feat: introduce PeriodicTopicStatsReporter (#6730)
* refactor: introduce `PeriodicTopicStatsReporter`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix typo

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remote wal tests styling

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit test

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: handling region wal options not found

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: minor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: upgrade greptime-proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-13 11:46:50 +00:00
Zhenchi
dea87b7e57 refactor: use DummyCatalog to construct query engine for datanode (#6723)
* refactor: use DummyCatalog to construct query engine for datanode

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix clippy

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* move to query/dummy_catalog

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-08-13 09:49:51 +00:00
localhost
a678b4dfd6 chore: add u64 for EqualValue and set expr to true when filter is empty (#6731)
* chore: add u64 for EqualValue and set expr to true when filter is empty

* Update src/log-query/src/log_query.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: update EqualValue Uinit to UInt

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-08-13 08:25:33 +00:00
Ruihang Xia
ccccaf7734 feat(log-query): try infer and cast type for literal value (#6712)
* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* one more test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* initial impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* one more test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: add eq for log query

* skip for both literals

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: paomian <xpaomian@gmail.com>
2025-08-13 06:28:37 +00:00
yihong
f0bec4940f fix: two label_replace different from promql (#6720)
* fix: two label_replace different from promql

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>

* fix: another address

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>
2025-08-13 06:27:49 +00:00
yihong
5eb491df12 fix: label_join should work with unknown (#6714)
* fix: label_join should work with unknown

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>

* fix: address forgotten comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Co-authored-by: Jiachun Feng <jiachun_feng@proton.me>
2025-08-13 03:45:40 +00:00
Weny Xu
1d84e802d8 feat: add integration tests for table reconciliation procedures part1 (#6705)
* feat: add integration tests for table reconciliation procedures

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-13 03:29:39 +00:00
Ruihang Xia
2992e70393 fix: correct offset's symbol (#6728)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-13 03:20:47 +00:00
Ruihang Xia
8a44137f37 chore: prefix debug_assertion only variables with underscore (#6727)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-13 02:50:16 +00:00
zyy17
777da35b0d refactor: unify the event recorder (#6689)
* refactor: unify the event recorder

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `table_name()` in `Event` trait

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add `slow_query_options` in `Instance`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `EventHandlerOptions` and `options()` in `EventHandler` trait

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: add `aggregate_events_by_type()` and support log mode of slow query

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: polish the code

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: clippy errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: support to set ttl by using extension of query context

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: refine the configs fields

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: sqlness test errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: use `Duration` type instead of `String` for ttl fields

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: use pre-allocation for building RowInsertRequests

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: fix clippy errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: code review

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: fix integration errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: polish code for `group_events_by_type()` and `build_row_inserts_request()`, also add the unit tests

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: refine comments

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-12 18:26:12 +00:00
Ruihang Xia
9ad9a7d2bc feat: add all partition column to logical table automatically (#6711)
* feat: add all partition column to logical table automatically

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify builder

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix request builder

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test: update sqlness

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: discord9 <55937128+discord9@users.noreply.github.com>
Co-authored-by: discord9 <discord9@163.com>
2025-08-12 17:25:24 +00:00
Lei, HUANG
ff5d672583 chore: impl cast from primitives to PathType (#6724)
chore/impl-cast-to-primitives-for-path-type:
 ### Add `num_enum` for Enum Conversion and Update `PathType`

 - **Added `num_enum` Dependency**: Updated `Cargo.lock` and `Cargo.toml` to include `num_enum` for enum conversion functionality.
   - Files: `Cargo.lock`, `src/store-api/Cargo.toml`

 - **Enhanced `PathType` Enum**: Implemented `TryFromPrimitive` for `PathType` to enable conversion from primitive types.
   - Files: `src/store-api/src/region_request.rs`

 - **Added Unit Tests**: Introduced tests to verify the conversion of `PathType` enum to and from primitive types.
   - Files: `src/store-api/src/region_request.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-08-12 13:07:01 +00:00
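`num_enum`'s `TryFromPrimitive` derive generates a checked `TryFrom` impl for a `#[repr]` enum, which is what makes the primitive-to-`PathType` cast safe. A minimal sketch with placeholder variant names (the real `PathType` lives in `src/store-api/src/region_request.rs`):

```rust
use num_enum::TryFromPrimitive;

/// The derive generates `TryFrom<u8> for PathType` that rejects
/// out-of-range values; variant names here are placeholders.
#[derive(Debug, PartialEq, TryFromPrimitive)]
#[repr(u8)]
enum PathType {
    Bare = 0,
    Data = 1,
}

fn main() {
    assert_eq!(PathType::try_from(1u8).unwrap(), PathType::Data);
    assert!(PathType::try_from(42u8).is_err());
}
```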
Ruihang Xia
e495c614f7 perf: improve bloom filter reader's byte reading logic (#6658)
* perf: improve bloom filter reader's byte reading logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert toml change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clarify comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* benchmark

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update lock file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* pub util fn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* note endian

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-12 11:37:25 +00:00
Ning Sun
e80e4a9ed7 fix: update pgwire to fix windows timeout issue (#6710)
* test: reproduce windows ci issue

* chore: update sqlx

* chore: update pgwire

* chore: update to a debug version of pgwire

* fix: update pgwire to resolve peek after read on windows

* ci: remove windows task from regular ci
2025-08-12 08:24:58 +00:00
Yingwen
1977ae50ee feat: Projection mapper for flat schema (#6679)
* feat: plain projection mapper wip

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: change ProjectionMapper to enum

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: convert plain batch

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for the mapper

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: allow dead code

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: format code

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: change PlainProjectionMapper to FlatProjectionMapper

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix projection tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: Address  comments

Removes some unwrap()

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment

as_plain -> as_flat

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-12 07:36:23 +00:00
yihong
5cec0d4e3a fix: http and tql should return the same value for unknown (#6718)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-08-12 06:38:01 +00:00
fys
d2d6489b2f fix: unit test about trigger labels parse (#6716) 2025-08-12 04:43:42 +00:00
Ruihang Xia
25f926ea7d feat: mito region staging state (#6664)
* fix: not mark all deleted when partial trunc (#6654)

* fix: not mark all deleted when partial trunc&not update manifest when partial file range is empty

Signed-off-by: discord9 <discord9@163.com>

* docs: note

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* some tests and DdlRequest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* stage transit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* address CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct error type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: discord9 <55937128+discord9@users.noreply.github.com>
2025-08-12 03:17:47 +00:00
discord9
f159fcf599 fix: metrics without physical partition columns query push down (#6694)
* fix: metrics no part cols

Signed-off-by: discord9 <discord9@163.com>

* chore: typos

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* chore: rename stuff

Signed-off-by: discord9 <discord9@163.com>

* refactor: put partition rules in table

Signed-off-by: discord9 <discord9@163.com>

* more tests

Signed-off-by: discord9 <discord9@163.com>

* test: redact more

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-12 02:51:38 +00:00
Yingwen
e4454e0c7d feat: Implements last row dedup strategy for flat format (#6695)
* feat: implement FlatLastRow dedup strategy

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix dedup metrics

Add tests for iter with FlatLastRow strategy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix delete metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address review comments

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: move BatchLastRow position

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix typos

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-11 07:44:45 +00:00
Ruihang Xia
0781adaa3d feat: new HTTP API for formatting SQL (#6691)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-09 07:39:41 +00:00
LFC
253d89b5cc feat: able to set read preference to flownode (#6696)
fix: correctly compare the opened follower regions in startup

Signed-off-by: luofucong <luofc@foxmail.com>
2025-08-08 09:08:09 +00:00
localhost
3a2f5413e0 chore: add and/or for log query (#6681)
* chore: add and/or for log query

* chore: remove impl From<Vec<ColumnFilters>> for Filters
2025-08-08 08:48:03 +00:00
Yingwen
214ffe7109 feat: Implements an iterator to merge RecordBatches (#6666)
* feat: Implements the sync merge iterator for RecordBatch

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test merge iterator

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix incorrect column index

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix typos

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-08 07:03:22 +00:00
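A sync merge iterator of this kind follows the classic k-way merge shape: keep the head of every sorted input in a min-heap and repeatedly pop the smallest. A generic sketch of that idea over plain `i64` streams; the real iterator merges `RecordBatch`es and compares encoded keys, so this is illustrative only:

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

/// k-way merge of sorted inputs via a min-heap of (head value, input index).
fn merge_sorted(mut inputs: Vec<std::vec::IntoIter<i64>>) -> Vec<i64> {
    let mut heap = BinaryHeap::new();
    for (idx, it) in inputs.iter_mut().enumerate() {
        if let Some(v) = it.next() {
            heap.push(Reverse((v, idx)));
        }
    }
    let mut out = Vec::new();
    while let Some(Reverse((v, idx))) = heap.pop() {
        out.push(v);
        // Refill from the input we just consumed so each input keeps one head.
        if let Some(next) = inputs[idx].next() {
            heap.push(Reverse((next, idx)));
        }
    }
    out
}

fn main() {
    let merged = merge_sorted(vec![
        vec![1, 4, 7].into_iter(),
        vec![2, 5].into_iter(),
        vec![3, 6, 8].into_iter(),
    ]);
    assert_eq!(merged, vec![1, 2, 3, 4, 5, 6, 7, 8]);
}
```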
Ruihang Xia
3b1f172ab8 fix: TQL CTE parser take raw query string (#6671)
* take raw TQL part

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sort sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add order by

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more order by

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comment back

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-08 06:22:56 +00:00
LFC
0215b39f61 fix: correctly set extension range source index (#6692)
refactor: extract the common code for creating proto ColumnSchema and Row into helper functions

fix: explicitly set the follower max sequence when finding extension ranges to avoid potential concurrency hazard

Signed-off-by: luofucong <luofc@foxmail.com>
2025-08-08 06:17:25 +00:00
Weny Xu
01dc789816 refactor: refine error status code mappings (#6678)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-07 09:29:32 +00:00
Ning Sun
bbe48e9e8b feat: update pgwire to 0.32 (#6674)
* feat: update pgwire api

* feat: update pgwire and override on_query/on_execute

* feat: update pgwire to 0.32

* chore: remove code example

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-08-07 06:17:52 +00:00
Weny Xu
e2015ce1af feat(metric-engine): add metadata region cache (#6657)
* feat(metric-engine): add metadata region cache

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: use lru

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: default ttl

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: longer ttl

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-07 06:16:23 +00:00
Yingwen
7bb765af1d chore: pub access layer (#6670)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-06 13:34:38 +00:00
discord9
080b4b5d53 docs(rfc): rfc for gc worker (#6572)
* docs: rfc for gc worker

Signed-off-by: discord9 <discord9@163.com>

* revise: gc worker now runs after compaction & long-running queries check

Signed-off-by: discord9 <discord9@163.com>

* more drawback

Signed-off-by: discord9 <discord9@163.com>

* flowchart

Signed-off-by: discord9 <discord9@163.com>

* read consist optional&more detail

Signed-off-by: discord9 <discord9@163.com>

* chore: rephrase

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-06 12:03:28 +00:00
Weny Xu
c7c8495a6b feat: add metrics for reconciliation procedures (#6652)
* feat: add metrics for reconciliation procedures

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: improve error handling

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix(datanode): handle ignore_nonexistent_region flag in open_all_regions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: merge metrics

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: minor refactor

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-06 11:24:03 +00:00
Yingwen
bbab35f285 perf: Reduce fulltext bloom load time (#6651)
* perf: cached reader do not get page concurrently

Otherwise they will all fetch the same pages in parallel

Signed-off-by: evenyag <realevenyag@gmail.com>

* perf: always disable zstd for bloom

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-06 08:25:31 +00:00
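The first change is essentially fetch coalescing: when several readers miss on the same page, only one should hit storage while the rest await that load. A generic sketch of the pattern using `tokio::sync::OnceCell`; the actual cached reader differs, so treat the names here as assumptions:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use tokio::sync::OnceCell;

/// Coalesce concurrent loads: readers of the same page share one fetch.
#[derive(Default)]
struct PageCache {
    pages: Mutex<HashMap<u64, Arc<OnceCell<Vec<u8>>>>>,
}

impl PageCache {
    async fn get_page(&self, page_id: u64) -> Vec<u8> {
        // Grab (or create) the per-page cell, then release the map lock.
        let cell = {
            let mut pages = self.pages.lock().unwrap();
            Arc::clone(pages.entry(page_id).or_default())
        };
        // Only the first caller runs the fetch; later callers await the cell.
        cell.get_or_init(|| async move { fetch_from_storage(page_id).await })
            .await
            .clone()
    }
}

/// Stand-in for the real object-store read.
async fn fetch_from_storage(_page_id: u64) -> Vec<u8> {
    Vec::new()
}
```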
Zhenchi
6c6487ab30 chore: bump version to 0.17.0 (#6663)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-08-06 07:45:39 +00:00
Ruihang Xia
757694ae38 feat: count underscore in English tokenizer and improve performance (#6660)
* feat: count underscore in English tokenizer and improve performance

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update lock file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* assert lookup table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle utf8 alphanumeric

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* finalize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-06 07:23:18 +00:00
Yingwen
39e2f122eb feat: EncodedBulkPartIter iters flat format and returns RecordBatch (#6655)
* feat: implements iter to read bulk part

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: BulkPartEncoder encodes BulkPart instead of mutation

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-06 06:50:01 +00:00
Lei, HUANG
877ce6e893 chore: add methods to catalog manager (#6656)
* chore/optimize-catalog:
 ### Add `table_id` Method to `CatalogManager`

 - **Files Modified**:
   - `src/catalog/src/kvbackend/manager.rs`
   - `src/catalog/src/lib.rs`

 - **Key Changes**:
   - Introduced a new asynchronous method `table_id` in the `CatalogManager` trait to retrieve the table ID based on catalog, schema, and table name.
   - Implemented the `table_id` method in `KvBackendCatalogManager` to fetch the table ID from the system catalog or cache, with a fallback to `pg_catalog` for Postgres channels.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/optimize-catalog:
 ### Add `table_info_by_id` Method to Catalog Managers

 - **`manager.rs`**: Introduced the `table_info_by_id` method in `KvBackendCatalogManager` to retrieve table information by table ID using the `TableInfoCacheRef`.
 - **`lib.rs`**: Updated the `CatalogManager` trait to include the new `table_info_by_id` method.
 - **`memory/manager.rs`**: Implemented the `table_info_by_id` method in `MemoryCatalogManager` to fetch table information by table ID from in-memory catalogs.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-08-06 06:25:32 +00:00
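Shape-wise, the new lookups are plain trait methods keyed by name or by table ID. A rough sketch of those signatures with simplified types; the actual trait methods are async and return `Result`s, so this only shows the shape:

```rust
// Simplified stand-ins for the real catalog types.
type TableId = u32;

#[derive(Debug, Clone)]
struct TableInfo {
    id: TableId,
    name: String,
}

/// Illustrative subset of the lookups described above; signatures are
/// assumptions, not the actual `CatalogManager` trait.
trait CatalogManager {
    /// Resolve a table ID from catalog, schema, and table names.
    fn table_id(&self, catalog: &str, schema: &str, table: &str) -> Option<TableId>;

    /// Fetch table information directly by table ID.
    fn table_info_by_id(&self, table_id: TableId) -> Option<TableInfo>;
}
```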
Ruihang Xia
c8da35c7e5 feat(log-query): support binary op, scalar fn & is_true/is_false (#6659)
* rename symbol

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle binary op

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/log_query/planner.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reduce duplication

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-08-06 04:38:25 +00:00
Ruihang Xia
309e9d978c feat: support TQL CTE in planner (#6645)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-06 04:07:38 +00:00
zyy17
3a9f0220b5 fix: unable to record slow query (#6590)
* refactor: add process manager for prometheus query

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: modify `register_query()` API to accept parsed statement (`catalog::process_manager::QueryStatement`)

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add the slow query timer in the `Ticket` of ProcessManager

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* test: add integration tests

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add process manager in `do_exec_plan()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* tests: add `test_postgres_slow_query` integration test

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: polish the code

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: create a query ticket and slow query timer if the statement is a query in `query_statement()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: sqlness errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-06 03:35:12 +00:00
zyy17
cc35bab5e4 feat: record the migration events in metasrv (#6579)
* feat: collect procedure event

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* feat: collect region migration events

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* test: add integration test

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: fix docs error

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: fix integration test error

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: change status code for errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `event()` in Procedure

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: improve trait design

1. Add `user_metadata()` in `Procedure` trait;

2. Add `Eventable` trait;

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: polish the code

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-06 03:30:33 +00:00
Ruihang Xia
414db41219 fix: box Explain node in Statement to reduce stack size (#6661)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-06 02:24:45 +00:00
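Boxing helps here because a Rust enum is as large as its largest variant; moving the big `Explain` payload behind a pointer shrinks every `Statement` value. A toy illustration with made-up types, not the actual `Statement` definition:

```rust
/// Stand-in for a large parsed EXPLAIN payload.
struct Explain {
    payload: [u64; 32], // 256 bytes inline
}

enum Statement {
    Query(String),
    Explain(Box<Explain>), // boxed: this variant is now pointer-sized
}

fn main() {
    // Without the Box the enum would be at least 256 bytes; with it,
    // the enum stays around the size of its String variant.
    println!("Statement is {} bytes", std::mem::size_of::<Statement>());
}
```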
Ruihang Xia
ea024874e7 feat: use column expr with filters in LogQuery (#6646)
* feat: use column expr with filters in LogQuery

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove some clone

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-05 18:35:09 +00:00
discord9
e64469bbc4 fix: not mark all deleted when partial trunc (#6654)
* fix: not mark all deleted when partial trunc&not update manifest when partial file range is empty

Signed-off-by: discord9 <discord9@163.com>

* docs: note

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-05 11:45:47 +00:00
discord9
875207d26c feat: register all aggregate function to auto step aggr fn (#6596)
* feat: support generic aggr push down

Signed-off-by: discord9 <discord9@163.com>

* typo

Signed-off-by: discord9 <discord9@163.com>

* fix: type ck in merge wrapper

Signed-off-by: discord9 <discord9@163.com>

* test: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* feat: support all registered aggr func

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-05 11:37:45 +00:00
jeremyhi
9871c22740 fix: sequence peek with remote value (#6648)
* fix: sequence peek with remote value

* chore: more ut

* chore: add more ut
2025-08-05 08:28:09 +00:00
Yingwen
50f7f61fdc feat: Implements an iterator to read the RecordBatch in BulkPart (#6647)
* feat: impl RecordBatchIter for BulkPart

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename BulkPartIter to EncodedBulkPartIter

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add iter benchmark

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: filter by primary key columns

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: move struct definitions

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: bulk iter for flat schema

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: iter filter benchmark

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: use correct sequence array to compare

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove RecordBatchIter

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comments

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: apply projection first

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment

No need to check number of rows after filter

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-05 08:11:28 +00:00
Ruihang Xia
9c3b83e84d feat: use real data to truncate manipulate range (#6649)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-05 04:55:24 +00:00
Yingwen
e81d0f5861 feat: implements FlatReadFormat to project parquets with flat schema (#6638)
* feat: add plain read format

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: reduce unused code

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: reuse code

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: allow dead code

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: change ReadFormat to enum

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: as_primary_key() returns option

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove some allow dead_code

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename WriteFormat to PrimaryKeyWriteFormat

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add tests for read/write format

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: format code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: dedup column ids in format

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename plain to flat

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: implements FlatReadFormat based on the new format

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: support override sequence

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: new_override_sequence_array for ReadFormat

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comments

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address comment

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-04 12:43:50 +00:00
Ning Sun
29e0092468 feat: schema/database support for label_values (#6631)
* feat: initial support for __schema__ in label values

* feat: filter database with matches

* refactor: skip unnecessary check

* fix: resolve schema matcher in label values

* test: add a test case for table not exists

* refactor: add matchop check on db label

* chore: merge main
2025-08-04 11:56:10 +00:00
Weny Xu
67a93a07a2 fix: fix sequence peek method to return correct values when sequence is not initialized (#6643)
fix: improve sequence peek method to handle uninitialized sequences

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-04 11:31:06 +00:00
discord9
1afa0afc67 feat: add partial truncate (#6602)
* feat: add partial truncate

Signed-off-by: discord9 <discord9@163.com>

* fix: per review

Signed-off-by: discord9 <discord9@163.com>

* feat: add proto partial truncate kind

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* chore: update branched proto

Signed-off-by: discord9 <discord9@163.com>

* feat: grpc support truncate WIP sql support

Signed-off-by: discord9 <discord9@163.com>

* wip: parse truncate range

Signed-off-by: discord9 <discord9@163.com>

* feat: truncate by range

Signed-off-by: discord9 <discord9@163.com>

* fix: truncate range display

Signed-off-by: discord9 <discord9@163.com>

* chore: resolve todo

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* test: more invalid parse

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: unused

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: update branch

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-04 10:50:27 +00:00
Weny Xu
414101fafa feat: introduce reconciliation interface (#6614)
* feat: introduce reconcile interface

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: upgrade proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-04 09:12:48 +00:00
Yingwen
280024d7f8 feat: Add option to limit the files reading simultaneously (#6635)
* feat: limits the max number of files to scan at the same time

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: make max_concurrent_scan_files configurable

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: reduce concurrent scan files to 128

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update config example

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test for max_concurrent_scan_files

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update config test

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-04 07:18:58 +00:00
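A common way to cap how many files are read simultaneously is a semaphore: each scan task must hold a permit while it reads. A minimal tokio sketch under that assumption; `max_concurrent_scan_files` is the option named above, but the reader function here is a stand-in:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Scan many files while keeping at most `max_concurrent` reads in flight.
async fn scan_files(paths: Vec<String>, max_concurrent: usize) {
    let permits = Arc::new(Semaphore::new(max_concurrent));
    let mut tasks = Vec::new();
    for path in paths {
        let permits = Arc::clone(&permits);
        tasks.push(tokio::spawn(async move {
            // Holding the permit bounds the number of concurrently open files.
            let _permit = permits.acquire().await.expect("semaphore closed");
            read_one_file(&path).await;
        }));
    }
    for task in tasks {
        let _ = task.await;
    }
}

/// Stand-in for the real SST/parquet reader.
async fn read_one_file(_path: &str) {}
```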
Ruihang Xia
865ca44dbd feat: absent function in PromQL (#6618)
* feat: absent function in PromQL

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl serde

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ai suggests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve PR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* comment out some tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-04 06:59:58 +00:00
discord9
a3e55565dc fix: show create flow's expire after (#6641)
* fix: show create flow's expire after

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-08-04 05:03:14 +00:00
Keming
bed0c1e55f fix: bump greptime-sqlparser to avoid convert statement overflow (#6634)
bump the greptime-sqlparser

Co-authored-by: Yihong <zouzou0208@gmail.com>
2025-08-04 02:15:34 +00:00
Ruihang Xia
572e29b158 feat: support tls for pg backend (#6611)
* load tls

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl tls

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* pass options

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement require mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* default to prefer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update example config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust example config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle client cert and key properly

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement verify_ca and verify_full

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update integration test for config api

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change config name and default mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-04 00:41:08 +00:00
zyy17
31cb769507 chore: add limit in resources panel and Cache Miss panel (#6636)
chore: add `limit` in resources panel and 'Cache Miss' panel

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-08-03 19:09:32 +00:00
yihong
e19493db4a chore: update jieba tantivy-jieba and tantivy version (#6637)
* chore: update jieba tantivy-jieba and tantivy version

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-08-03 19:08:36 +00:00
Ruihang Xia
9817eb934d feat: support __schema__ and __database__ in Prom Remote Read (#6610)
* feat: support __schema__ and __database__ in Prom remote R/W

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert remote write changes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* check matcher type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-03 07:09:44 +00:00
Lei, HUANG
8639961cc9 chore: refine metrics tracking the flush/compaction cost time (#6630)
chore: refine metrics tracking the per-stage cost time during flush and compaction

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-08-02 12:13:42 +00:00
Ruihang Xia
a9cd117706 fix: only return the __name__ label when there is one (#6629)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-02 08:42:28 +00:00
ZonaHe
9485dbed64 feat: update dashboard to v0.10.6 (#6632)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-08-01 17:55:58 +00:00
discord9
21b71d1e10 feat: panic logger (#6633)
Signed-off-by: discord9 <discord9@163.com>
2025-08-01 11:31:15 +00:00
Weny Xu
cfaa9b4dda feat: introduce reconcile catalog procedure (#6613)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-01 11:03:00 +00:00
Weny Xu
19ad9a7f85 refactor: remove procedure executor from DDL manager (#6625)
* refactor: remove procedure executor from DDL manager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: clippy

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-08-01 09:33:47 +00:00
shuiyisong
9e2f793b04 chore(otlp_metric): update metric and label naming (#6624)
* chore: update otlp metrics & labels naming

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: typo and test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* Update src/session/src/protocol_ctx.rs

* chore: add test cases for normalizing functions

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: Ning Sun <classicning@gmail.com>
2025-08-01 08:17:12 +00:00
Yingwen
52466fdd92 feat: Implement a converter to converts KeyValues into BulkPart (#6620)
* chore: add api to memtable to check bulk capability

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: Add a converter to convert KeyValues into BulkPart

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: move supports_bulk_insert to MemtableBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: benchmark

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use write_bulk if the memtable benefits from it

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: test BulkPartConverter

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add a flag to store unencoded primary keys

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: cache schema for converter

Implements to_flat_sst_arrow_schema

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: simplify tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: don't use bulk convert branch now

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: address review comments

* simplify primary_key_column_builders check
* return error if value is not string

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add FlatSchemaOptions::from_encoding and test sparse encoding

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-01 07:59:11 +00:00
Ruihang Xia
869f8bf68a docs(rfc): compatibility test framework (#6460)
* docs(rfc): compatibility test framework

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-08-01 04:32:53 +00:00
Yingwen
9527e0df2f feat: HTTP API to activate/deactivate heap prof (activate by default) (#6593)
* feat: add HTTP API to activate/deactivate heap profiling

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add HTTP API to get profiling status

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: enable heap prof by default

Signed-off-by: evenyag <realevenyag@gmail.com>

* build: add "prof:true,prof_active:false" as default env to dockerfiles

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: activate heap profiling after log initialization

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add memory options to control whether to activate profiling

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update docs

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fmt toml

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix config test

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: usage of new api

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: log profile after version

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: update how to docs

Signed-off-by: evenyag <realevenyag@gmail.com>

* docs: fix how to docs

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-08-01 03:24:56 +00:00
Weny Xu
164afb26da feat: introduce reconcile logical tables procedure (#6588)
* feat: introduce reconcile logical tables procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: lock logical tables

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-31 11:48:06 +00:00
Weny Xu
7d8473e9bc feat: introduce reconcile database procedure (#6612)
* feat: introduce reconcile database procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: hold the schema lock

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add todo

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: rename to `fast_fail`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add logs

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-31 11:01:56 +00:00
shuiyisong
51dc393371 chore(otlp_metric): support attr list in header opts (#6617)
* chore: support attr list in OTLP metrics

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-31 08:54:45 +00:00
Lei, HUANG
a265499325 fix(test): concurrency issue in compaction tests (#6615)
fix/compaction-concurrency:
 Add delay before compaction in `compaction_test.rs`

 - Introduced a 2-millisecond delay using `tokio::time::sleep` before the `compact` function call in `test_compaction_region_with_overlapping_delete_all` to ensure proper timing and synchronization during the test execution.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-30 11:24:56 +00:00
shuiyisong
2b4fb2f32a refactor(otlp_metric): make otlp metric compatible with promql (#6543)
* chore: tmp save

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update

* chore: remove metric metadata and introduce shared attrs

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: replace . with _ in metric name

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update & fix tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add legacy mode param to otlp metrics

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update test & fix

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add automatic legacy check for otlp metrics

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix clippy

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: typos

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: insert table options in compat mode & add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: check table options consistency

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update test and add comments

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor tags update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update about scope labeling

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update opts using header & update test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor code refactor

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-30 11:20:03 +00:00
Yingwen
1df605ec4b feat: more logs and metrics under explain verbose mode (#6575)
* feat: collect region metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: log in info level

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add CoalescePartitionsExec to explain

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: finish metrics in partition and add sender full to metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add eof flag on finish

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: output cost as string

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: log on stream done

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: region id as string

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: enlarge send channel size

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: more log in flight and scan

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: logs about rows/batches/bytes

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: enlarge channel size

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remote read only log in verbose

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: revert channel change

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: get explain verbose in RegionScanExec

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: print scan log in verbose mode

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: collect region metrics after finishing one region

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: define StreamMetrics and log in verbose mode

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: only log non zero filter and distributor metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: revert displaying CoalescePartitions in explain

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: collect memtable metrics in partition metrics

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-30 09:23:32 +00:00
Weny Xu
ac8493ab4a feat: introduce reconcile table procedure (#6584)
* feat: introduce `SyncColumns`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: introduce reconcile table procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-30 04:42:38 +00:00
Weny Xu
d9d1773913 feat: ignore internal keys in metadata snapshots (#6606)
feat: ignore dumping internal keys

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-30 03:50:11 +00:00
Weny Xu
6afdf672b3 feat: allow setting next table id via http api (#6597)
* feat: allow resetting next table id via http api

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-30 03:46:39 +00:00
ZonaHe
12c43ee27b feat: update dashboard to v0.10.5 (#6604)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-07-30 02:18:48 +00:00
fys
a10b1d9885 feat: trigger alter parse (#6553)
* feat: support trigger alter

* fix: cargo fmt

* fix: clippy

* fix: some docs

* fix: cr

* fix: ON -> RENAME
2025-07-29 11:07:31 +00:00
discord9
f07b1daed4 feat: struct vector (#6595)
* feat: struct vector

Signed-off-by: discord9 <discord9@163.com>

* fix: array2vector&arrow type2concrete type

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* chore: resolve some todos

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-29 08:22:27 +00:00
Ruihang Xia
5377db5392 docs(rfc): repartition (#6557)
* docs(rfc): repartition

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix comment and update link

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-29 07:58:03 +00:00
Lin Yihai
b6cef77a5c feat: add SET DEFAULT syntax (#6421)
* feat: add `SET DEFAULT` syntax

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

* test: add `CURRENT_TIMESTAMP()` as default value for `SET DEFAULT` syntax

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

* refactor: Make the error types more precise.

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

* chore: a minor error display enhancement for `SET DEFAULT`

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

* refactor: Using `MODIFY COLUMN` for `DROP/SET DEFAULT`

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

* chore: update `greptime-proto`

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

---------

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>
2025-07-29 06:41:02 +00:00
discord9
8fef177575 feat: fallback when failed to push down using DistPlanner (#6574)
* test: fix fallback testcase

Signed-off-by: discord9 <discord9@163.com>

* add metric

Signed-off-by: discord9 <discord9@163.com>

* feat: fallback add to config variable

Signed-off-by: discord9 <discord9@163.com>

* feat: set in var&set in hint

Signed-off-by: discord9 <discord9@163.com>

* chore: update test

Signed-off-by: discord9 <discord9@163.com>

* feat: also in toml config

Signed-off-by: discord9 <discord9@163.com>

* fix test

Signed-off-by: discord9 <discord9@163.com>

* docs: comment about setting from different source

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-29 04:37:00 +00:00
discord9
4cfc878067 feat: poll result stream more often (#6599)
* feat: poll result stream more often

Signed-off-by: discord9 <discord9@163.com>

* refactor: cleanup match

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-29 03:59:59 +00:00
dennis zhuang
086777d938 feat: impl some promql scalar functions (#6567)
* feat: impl some promql scalar functions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: supports pi function

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: by cr comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: compile

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-29 03:29:00 +00:00
Weny Xu
b7fd4ca65d feat: allow ignoring nonexistent regions in recovery mode (#6592)
* feat: allow ignoring nonexistent regions

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: ignore nonexistent regions during startup in recovery mode

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: allow enabling recovery mode via http api

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-28 11:02:13 +00:00
Arshdeep
2e571e351f fix: add map datatype conversion in copy_table_from (#6185) (#6422)
Signed-off-by: Arshdeep54 <balarsh535@gmail.com>
2025-07-28 03:53:10 +00:00
yihong
6ded2d267a fix: close issue #6586 make pg also show error as mysql (#6587)
* fix: close issue #6586 make pg also show error as mysql

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: drop useless debug print

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: revert wrong change

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* refactor: convert error types

* refactor: inline

* chore: minimize changes

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: make clippy happy

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* refactor: convert datafusion error to ErrorExt

Signed-off-by: Ning Sun <sunning@greptime.com>

* fix: headers ?

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: Ning Sun <sunning@greptime.com>
2025-07-25 09:59:26 +00:00
yihong
3465bedc7f fix: ignore target files in make fmt-check (#6560)
* fix: ignore target files in make fmt-check

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-25 09:54:38 +00:00
dennis zhuang
052033d436 feat: supports more db options (#6529)
* feat: supports more db options

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: use btree map for consistent results

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* feat: adds compaction keys into valid db options

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-25 08:31:11 +00:00
Lei, HUANG
c3201c32c3 feat(mito): replace Memtable::iter with Memtable::ranges (#6549)
* bulk-multiparts-merge-reader:
 **Enhance Memtable Iteration and Flushing Logic**

 - **`flush.rs`**: Updated `RegionFlushTask` to handle multiple ranges using `MergeReaderBuilder` for improved source management during flush operations.
 - **`memtable.rs`**: Introduced `build_prune_iter` and `build_iter` methods in `MemtableRange` for flexible iteration. Added `MemtableRanges` struct to manage multiple contexts.
 - **`simple_bulk_memtable.rs`**: Refactored to use `BatchIterBuilder` and `BatchIterBuilderDeprecated` for iteration, supporting new `read_to_values` method in `Series`.
 - **`time_series.rs`**: Added `read_to_values` and `finish_cloned` methods in `Series` and `ValueBuilder` for efficient data handling.
 - **`scan_util.rs`**: Replaced `build_iter` with `build_prune_iter` for range iteration, enhancing scan utility.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 - **Add Rayon for Parallel Processing**: Introduced `rayon` for parallel processing in `simple_bulk_memtable.rs` and updated `Cargo.toml` and `Cargo.lock` to include `rayon` dependency.
 - **Enhance Benchmarking**: Added new benchmarks in `simple_bulk_memtable.rs` to compare parallel vs sequential processing, projection, sequence filtering, and write performance.
 - **Make Structs and Methods Public**: Changed visibility of several structs and methods to `pub` in `simple_bulk_memtable.rs`, `memtable.rs`, `time_series.rs`, and `test_util.rs` to facilitate testing and benchmarking.
 - **Update Criterion Features**: Modified `Cargo.toml` to include `html_reports` feature for `criterion`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 ### Commit Summary

 - **Refactor `SimpleBulkMemtable`**:
   - Moved `ranges_sequential` function to a new `test_only` module and made it a method of `SimpleBulkMemtable`.
   - Made several fields in `SimpleBulkMemtable` private and added a `region_metadata` getter.
   - Affected files: `simple_bulk_memtable.rs`, `test_only.rs`.

 - **Benchmark Adjustments**:
   - Updated benchmark functions to use the new `ranges_sequential` method.
   - Affected file: `simple_bulk_memtable.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 ### Add Test Configuration for `iter` Method in Memtable Implementations

 - **Enhancements**:
   - Added `#[cfg(any(test, feature = "test"))]` attribute to the `iter` method in various `Memtable` implementations to enable conditional compilation for testing purposes.
   - Affected files:
     - `src/mito2/src/memtable.rs`
     - `src/mito2/src/memtable/bulk.rs`
     - `src/mito2/src/memtable/partition_tree.rs`
     - `src/mito2/src/memtable/simple_bulk_memtable.rs`
     - `src/mito2/src/memtable/time_series.rs`
     - `src/mito2/src/test_util/memtable_util.rs`

 - **Benchmark Adjustments**:
   - Removed `black_box` usage in `bench_memtable_write_performance` function to streamline benchmarking.
   - Affected file: `src/mito2/benches/simple_bulk_memtable.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 **Enhance Async Support and Refactor Iteration in `mito2`**

 - **Add Async Features**: Updated `Cargo.toml` to include `async` and `async_tokio` features for `criterion`.
 - **Async Iteration**: Introduced async functions `flush` and `flush_original` in `simple_bulk_memtable.rs` to handle memtable flushing using async iterators.
 - **Refactor Iteration Logic**: Moved `create_iter` and `BatchIterBuilderDeprecated` to `test_only.rs` for better separation of concerns.
 - **Public API Change**: Made `next_batch` in `read.rs` public to support async batch processing.
 - **Benchmark Updates**: Modified benchmarks in `simple_bulk_memtable.rs` to use async runtime for performance testing.

 Files affected: `Cargo.toml`, `simple_bulk_memtable.rs`, `test_only.rs`, `read.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 **Enhance Benchmarking for Memtable**

 - Refactored `create_large_memtable` to `create_memtable_with_rows` in `simple_bulk_memtable.rs` to allow dynamic row count configuration.
 - Introduced parameterized benchmarking in `bench_ranges_parallel_vs_sequential` to test various row counts, improving the flexibility and coverage of performance tests.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 ### Enhance Memory Management and Public API

 - **`builder.rs`**: Made `next_offset` method public to allow external access to offset calculations.
 - **`simple_bulk_memtable.rs`**: Simplified the `series.extend` method by removing the iterator conversion for `fields`.
 - **`time_series.rs`**:
   - Added `can_accommodate` method to `ValueBuilder` to check if fields can be accommodated without offset overflow.
   - Modified `extend` method to use a `Vec` for `fields` instead of an iterator, improving memory management and error handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 Add License and Enhance Testing in `simple_bulk_memtable.rs`

 - Added Apache License header to `simple_bulk_memtable.rs`.
 - Modified test configuration in `simple_bulk_memtable.rs` to include `any(test, feature = "test")`.
 - Introduced a new test `test_write_read_large_string` in `simple_bulk_memtable.rs` to verify handling of large strings.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 Update `Cargo.toml` dependencies

 - Adjust features for `common-meta` and `mito-codec` to include "testing".
 - Maintain `criterion` version and features for async support.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 ### Update Predicate Type in Memtable Iterators

 - **Files Modified**:
   - `src/mito2/src/memtable.rs`
   - `src/mito2/src/memtable/bulk.rs`
   - `src/mito2/src/memtable/simple_bulk_memtable.rs`

 - **Key Changes**:
   - Updated the `iter` method in `Memtable` trait and its implementations to use `Option<table::predicate::Predicate>` instead of `Option<Predicate>`.
   - Adjusted return type in `BulkMemtable`'s `iter` method to `Result<crate::memtable::BoxedBatchIterator>`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 **Enhance Memtable Functionality**

 - **`memtable.rs`**:
   - Added `Clone` trait to `MemtableStats` and made `num_ranges` public.
   - Introduced `num_rows` field in `MemtableRange` and updated its constructor.
   - Added `num_rows` method to `MemtableRange`.

 - **`partition_tree.rs`, `simple_bulk_memtable.rs`, `time_series.rs`**:
   - Updated `MemtableRange` instantiation to include `num_rows`.

 - **`range.rs`**:
   - Refactored `MemRangeBuilder` to handle a single `MemtableRange` and `MemtableStats`.

 - **`scan_region.rs`**:
   - Enhanced memtable filtering based on time range and updated `MemRangeBuilder` usage.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 **Enhancements and Bug Fixes**

 - **Deduplication Enhancements**:
   - Introduced `DedupReader` and `LastRow` as public structs in `dedup.rs` to enhance deduplication capabilities.
   - Added `LastNonNull` deduplication strategy in `flush.rs` and `simple_bulk_memtable.rs`.

 - **Memtable Improvements**:
   - Updated `SimpleBulkMemtable` to support batch size configuration and deduplication strategies.
   - Modified `Series` struct in `time_series.rs` to include a configurable capacity.

 - **Testing Enhancements**:
   - Added new test `test_write_dedup` in `simple_bulk_memtable.rs` to verify deduplication functionality.
   - Updated existing tests to include `OpType` parameter for better operation type handling.

 - **Refactoring**:
   - Renamed `BatchIterBuilder` to `BatchRangeBuilder` in `simple_bulk_memtable.rs` for clarity.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* bulk-multiparts-merge-reader:
 - **Refactor `flush.rs`:** Removed `LastNonNullIter` usage and adjusted `DedupReader` instantiation to use `LastRow::new(false)` and `LastNonNull::new(false)`.
 - **Enhance `simple_bulk_memtable.rs`:** Added logic to handle `LastNonNull` merge mode in `IterBuilder`. Introduced new tests: `test_delete_only` and `test_single_range` to verify delete operations and single range handling.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: tests

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-25 03:05:54 +00:00
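The shape of the `Memtable::iter` to `Memtable::ranges` change can be sketched with toy types: instead of one global iterator, the memtable hands out independently sorted ranges, and the caller (e.g. the flush path) merges them. All names below are illustrative stand-ins, not the real mito2 types:

```rust
use std::collections::BTreeMap;

struct MemtableRange {
    rows: Vec<(i64, String)>, // (timestamp, value), pre-sorted within the range
}

impl MemtableRange {
    fn iter(&self) -> impl Iterator<Item = &(i64, String)> {
        self.rows.iter()
    }
}

struct Memtable {
    ranges: BTreeMap<u32, MemtableRange>, // range id -> range
}

impl Memtable {
    /// Replacement for a single `iter()`: hand back all ranges and let the
    /// caller merge them, e.g. with a k-way merge reader during flush.
    fn ranges(&self) -> Vec<&MemtableRange> {
        self.ranges.values().collect()
    }
}

fn main() {
    let memtable = Memtable {
        ranges: BTreeMap::from([
            (0, MemtableRange { rows: vec![(1, "a".into()), (3, "c".into())] }),
            (1, MemtableRange { rows: vec![(2, "b".into())] }),
        ]),
    };
    // Naive merge for the sketch: collect everything and sort by timestamp.
    let mut merged: Vec<_> = memtable
        .ranges()
        .iter()
        .flat_map(|r| r.iter().cloned())
        .collect();
    merged.sort_by_key(|(ts, _)| *ts);
    println!("{merged:?}");
}
```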
Zhenchi
dc01511adc refactor: explicitly accept path type as param (#6583)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-24 09:56:41 +00:00
Lanqing Yang
b01be5efc5 feat: move metasrv admin to http server while keeping tonic for backward compatibility (#6466)
* feat: move metasrv admin to http server while keeping tonic for backward compatibility

Signed-off-by: lyang24 <lanqingy93@gmail.com>

* refactor with nest method

Signed-off-by: lyang24 <lanqingy93@gmail.com>

---------

Signed-off-by: lyang24 <lanqingy93@gmail.com>
Co-authored-by: lyang24 <lanqingy@usc.edu>
2025-07-24 09:11:04 +00:00
Zhenchi
5908febd6c refactor: remove unused PartitionDef (#6573)
* refactor: remove unused PartitionDef

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix snafu

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-24 07:55:51 +00:00
Ruihang Xia
076e20081b feat: add RegionId to FileId (#6410)
* refactor: remove staled manifest structures

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add RegionId to FileId

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor: introduce RegionFileId

- FileId still only consist of an uuid
- PathProvider accepts RegionFileId and doesn't need to keep a region id
  in it
- All Index applier takes RegionFileId and respects the region id in the RegionFileId
- FileMeta can still derive Serialize/Deserialize
- Refactor the CacheManager to accept RegionFileId

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: define PathType

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: adding PathType WIP

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: add path_type to region_dir_from_table_dir

Move region_dir_from_table_dir to mito and use join_dir internally

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: set path type to ApplierBuilder

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fmt code

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix passing incorrect dir to access layer

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove region_dir from CompactionRegion

We can get table_dir and path_type from the access layer

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix unit tests

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix typo

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: correct marker path

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: use AccessLayer::build_region_dir to get region dir

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: log entries in test

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: set path type in catchup

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix test_open_region_failure test

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: fix compiler errors

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-07-24 06:29:38 +00:00
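The refactor above keeps `FileId` as a bare UUID and pairs it with the owning region in a separate `RegionFileId`, so path providers no longer need to carry a region id themselves. A hedged sketch of that idea; types and path layout here are illustrative, not GreptimeDB's actual ones:

```rust
#[derive(Clone, Copy, Debug)]
struct RegionId(u64);

#[derive(Clone, Copy, Debug)]
struct FileId(u128); // stands in for a UUID

#[derive(Clone, Copy, Debug)]
struct RegionFileId {
    region_id: RegionId,
    file_id: FileId,
}

impl RegionFileId {
    fn file_path(&self, table_dir: &str) -> String {
        // The region id comes from the id itself, not from the path provider.
        format!("{table_dir}/{}/{:032x}.parquet", self.region_id.0, self.file_id.0)
    }
}

fn main() {
    let id = RegionFileId { region_id: RegionId(42), file_id: FileId(0xabc) };
    println!("{}", id.file_path("data/my_table"));
}
```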
yihong
61d83bc36b fix: close issue #6555 return empty result (#6569)
* fix: close issue #6555 return empty result

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* fix: only start one instance for one regex sqlness test (#6570)

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

* refactor: refactor partition mod to use PartitionExpr instead of PartitionDef (#6554)

* refactor: refactor partition mod to use PartitionExpr instead of PartitionDef

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix snafu

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Puts expression into PbPartition

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix compile

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update proto

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add serde test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add serde test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: address comments

Signed-off-by: yihong0618 <zouzou0208@gmail.com>

---------

Signed-off-by: yihong0618 <zouzou0208@gmail.com>
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-24 03:30:59 +00:00
discord9
8d71213ea6 fix: aggr group by all partition cols use partial commutative (#6534)
* fix: aggr group by all partition cols use partial commutative

Signed-off-by: discord9 <discord9@163.com>

* test: bugged case

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness fix

Signed-off-by: discord9 <discord9@163.com>

* test: more redacted

Signed-off-by: discord9 <discord9@163.com>

* more cases

Signed-off-by: discord9 <discord9@163.com>

* even more test cases

Signed-off-by: discord9 <discord9@163.com>

* join testcase

Signed-off-by: discord9 <discord9@163.com>

* fix: column requirement added in correct location

Signed-off-by: discord9 <discord9@163.com>

* fix test

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* track col reqs per stack

Signed-off-by: discord9 <discord9@163.com>

* fix: continue

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* refactor: test mod

Signed-off-by: discord9 <discord9@163.com>

* test utils

Signed-off-by: discord9 <discord9@163.com>

* test: better test

Signed-off-by: discord9 <discord9@163.com>

* more testcases

Signed-off-by: discord9 <discord9@163.com>

* test limit push down

Signed-off-by: discord9 <discord9@163.com>

* more testcases

Signed-off-by: discord9 <discord9@163.com>

* more testcase

Signed-off-by: discord9 <discord9@163.com>

* more test

Signed-off-by: discord9 <discord9@163.com>

* chore: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: update comments

Signed-off-by: discord9 <discord9@163.com>

* fix: check col reqs from bottom to upper

Signed-off-by: discord9 <discord9@163.com>

* chore: more comment

Signed-off-by: discord9 <discord9@163.com>

* docs: more todo

Signed-off-by: discord9 <discord9@163.com>

* chore: comments

Signed-off-by: discord9 <discord9@163.com>

* test: a new failing test that should be fixed

Signed-off-by: discord9 <discord9@163.com>

* fix: part col alias tracking

Signed-off-by: discord9 <discord9@163.com>

* chore: unused

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* docs: comment

Signed-off-by: discord9 <discord9@163.com>

* more testcase

Signed-off-by: discord9 <discord9@163.com>

* more testcase for step/part aggr combine

Signed-off-by: discord9 <discord9@163.com>

* FIXME: a new bug

Signed-off-by: discord9 <discord9@163.com>

* literally unfixable

Signed-off-by: discord9 <discord9@163.com>

* chore: remove some debug print

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-23 09:05:34 +00:00
Zhenchi
2298227e0c refactor: refactor partition mod to use PartitionExpr instead of PartitionDef (#6554)
* refactor: refactor partition mod to use PartitionExpr instead of PartitionDef

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix snafu

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Puts expression into PbPartition

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix compile

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* update proto

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add serde test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add serde test

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-23 03:51:28 +00:00
yihong
b51bafe9d5 fix: only start one instance for one regex sqlness test (#6570)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-23 03:14:06 +00:00
ZonaHe
6e324adf02 feat: update dashboard to v0.10.4 (#6568)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-07-22 15:38:47 +00:00
ZonaHe
02a9edef8b feat: update dashboard to v0.10.3 (#6566)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-07-22 06:05:03 +00:00
Weny Xu
71b564b4aa refactor: support multiple index operations in single alter region request (#6487)
* refactor: support multiple index operations in single alter region request

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update greptime-proto

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-22 03:10:42 +00:00
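Batching several index changes into one alter region request, as described above, can be sketched with an enum of operations carried in a single request. Names here are hypothetical, not the real proto types:

```rust
enum IndexOp {
    CreateInverted { column: String },
    CreateFulltext { column: String },
    Drop { column: String },
}

struct AlterRegionRequest {
    region_id: u64,
    // Several index operations applied by one request, instead of one request per op.
    index_ops: Vec<IndexOp>,
}

fn apply(req: &AlterRegionRequest) {
    for op in &req.index_ops {
        match op {
            IndexOp::CreateInverted { column } => println!("create inverted index on {column}"),
            IndexOp::CreateFulltext { column } => println!("create fulltext index on {column}"),
            IndexOp::Drop { column } => println!("drop index on {column}"),
        }
    }
}

fn main() {
    apply(&AlterRegionRequest {
        region_id: 1,
        index_ops: vec![
            IndexOp::CreateInverted { column: "host".into() },
            IndexOp::Drop { column: "path".into() },
        ],
    });
}
```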
Ning Sun
2d67f9ae8b fix: windows path in error tests (#6564) 2025-07-21 09:07:18 +00:00
discord9
6501b0b13c feat: MergeScan print input (#6563)
* feat: MergeScan print input

Signed-off-by: discord9 <discord9@163.com>

* test: fix ut

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-21 06:40:15 +00:00
zyy17
dfe9eeb15c fix: failing unit test by adding necessary sleep function to ensure the time sequence (#6548)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-21 03:47:43 +00:00
dennis zhuang
78b1c6c554 feat: impl timestamp function for promql (#6556)
* feat: impl timestamp function for promql

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: style and typo

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* docs: update comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: comment

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-21 03:46:41 +00:00
yihong
1c8e8b96c1 docs: fix all dead links using lychee (#6559)
Signed-off-by: yihong0618 <zouzou0208@gmail.com>
2025-07-19 14:04:06 +00:00
discord9
73a3ac1320 fix: flow mirror cache (#6551)
* fix: invalid cache when flownode change address

Signed-off-by: discord9 <discord9@163.com>

* update comments

Signed-off-by: discord9 <discord9@163.com>

* fix

Signed-off-by: discord9 <discord9@163.com>

* refactor: add log&rename

Signed-off-by: discord9 <discord9@163.com>

* stuff

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-18 12:21:01 +00:00
Yan Tingwang
efefddbc85 test: add sqlness test for max execution time (#6517)
* add sqlness test for max_execution_time

Signed-off-by: codephage. <tingwangyan2020@163.com>

* add Pre-line comments SQLNESS PROTOCOL MYSQL

Signed-off-by: codephage. <tingwangyan2020@163.com>

* fix(mysql): support max_execution_time variable

Co-authored-by: evenyag <realevenyag@gmail.com>
Signed-off-by: codephage. <tingwangyan2020@163.com>

* fix: test::test_check & sqlness test mysql

Signed-off-by: codephage. <tingwangyan2020@163.com>

* add sqlness test for max_execution_time

Signed-off-by: codephage. <tingwangyan2020@163.com>

* add Pre-line comments SQLNESS PROTOCOL MYSQL

Signed-off-by: codephage. <tingwangyan2020@163.com>

* fix(mysql): support max_execution_time variable

Co-authored-by: evenyag <realevenyag@gmail.com>
Signed-off-by: codephage. <tingwangyan2020@163.com>

* fix: test::test_check & sqlness test mysql

Signed-off-by: codephage. <tingwangyan2020@163.com>

* chore: Unify the sql style

Signed-off-by: codephage. <tingwangyan2020@163.com>

---------

Signed-off-by: codephage. <tingwangyan2020@163.com>
Co-authored-by: evenyag <realevenyag@gmail.com>
2025-07-18 12:18:14 +00:00
localhost
58d2b7764f chore: Add explicit channels to Grpc and Prometheus query contexts (#6552) 2025-07-18 08:37:48 +00:00
fys
0dad8fc4a8 fix: estimate mem size for bulk ingester (#6550) 2025-07-18 08:03:07 +00:00
LFC
370d27587a refactor: make greptimedb's tests run as a submodule (#6544)
fix: failed to run a test when run as a submodule

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-18 08:01:30 +00:00
discord9
77b540ff68 feat: state/merge wrapper for aggr func (#6377)
* refactor: move to query crate

Signed-off-by: discord9 <discord9@163.com>

* refactor: split to multiple columns

Signed-off-by: discord9 <discord9@163.com>

* feat: aggr merge accum wrapper

Signed-off-by: discord9 <discord9@163.com>

* rename shorter

Signed-off-by: discord9 <discord9@163.com>

* feat: add all in one helper

Signed-off-by: discord9 <discord9@163.com>

* tests: sum&avg

Signed-off-by: discord9 <discord9@163.com>

* chore: allow unused

Signed-off-by: discord9 <discord9@163.com>

* chore: typos

Signed-off-by: discord9 <discord9@163.com>

* refactor: per ds

Signed-off-by: discord9 <discord9@163.com>

* chore: fix tests

Signed-off-by: discord9 <discord9@163.com>

* refactor: move to common-function

Signed-off-by: discord9 <discord9@163.com>

* WIP massive refactor

Signed-off-by: discord9 <discord9@163.com>

* typo

Signed-off-by: discord9 <discord9@163.com>

* todo: stuff

Signed-off-by: discord9 <discord9@163.com>

* refactor: state2input type

Signed-off-by: discord9 <discord9@163.com>

* chore: rm unused

Signed-off-by: discord9 <discord9@163.com>

* refactor: per bot review

Signed-off-by: discord9 <discord9@163.com>

* chore: per bot

Signed-off-by: discord9 <discord9@163.com>

* refactor: rm duplicate infer type

Signed-off-by: discord9 <discord9@163.com>

* chore: better test

Signed-off-by: discord9 <discord9@163.com>

* fix: test sum refactor&fix wrong state types

Signed-off-by: discord9 <discord9@163.com>

* test: refactor avg udaf test

Signed-off-by: discord9 <discord9@163.com>

* refactor: split files

Signed-off-by: discord9 <discord9@163.com>

* refactor: docs&dedup

Signed-off-by: discord9 <discord9@163.com>

* refactor: allow merge to carry extra info

Signed-off-by: discord9 <discord9@163.com>

* chore: rm unused

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* chore: docs&unused

Signed-off-by: discord9 <discord9@163.com>

* refactor: check fields equal

Signed-off-by: discord9 <discord9@163.com>

* test: test count_hash

Signed-off-by: discord9 <discord9@163.com>

* test: more custom udaf

Signed-off-by: discord9 <discord9@163.com>

* chore: clippy

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-17 17:37:40 +00:00
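The state/merge split behind the wrapper above is the standard pattern for distributed aggregation: each node folds rows into a partial state, and a merge step combines states from different nodes. A hedged, dependency-free illustration of the idea (not the real common-function wrapper API):

```rust
#[derive(Default, Clone, Copy, Debug)]
struct AvgState {
    sum: f64,
    count: u64,
}

impl AvgState {
    /// Fold one input row into the partial state (runs on each node).
    fn update(&mut self, v: f64) {
        self.sum += v;
        self.count += 1;
    }
    /// Merge a partial state produced elsewhere (runs at the merge step).
    fn merge(&mut self, other: &AvgState) {
        self.sum += other.sum;
        self.count += other.count;
    }
    fn evaluate(&self) -> Option<f64> {
        (self.count > 0).then(|| self.sum / self.count as f64)
    }
}

fn main() {
    let mut a = AvgState::default();
    [1.0, 2.0].iter().for_each(|v| a.update(*v));
    let mut b = AvgState::default();
    [3.0].iter().for_each(|v| b.update(*v));
    a.merge(&b);
    assert_eq!(a.evaluate(), Some(2.0));
}
```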
Lei, HUANG
0cf25f7b05 feat: add sst file num in region stat (#6537)
* feat/add-sst-file-num-in-region-stat:
 ### Add SST File Count to Region Statistics

 - **Enhancements**:
   - Added `sst_num` to track the number of SST files in region statistics across multiple modules.
   - Updated `RegionStat` and `RegionStatistic` structs in `datanode.rs` and `region_engine.rs` to include `sst_num`.
   - Modified `MitoRegion` and `SstVersion` in `region.rs` and `version.rs` to compute and return the number of SST files.
   - Adjusted test cases in `collect_leader_region_handler.rs`, `failure_handler.rs`, `region_lease_handler.rs`, and `weight_compute.rs` to initialize `sst_num`.
   - Updated `get_region_statistic` in `utils.rs` to sum `sst_num` from metadata and data statistics.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/add-sst-file-num-in-region-stat:
 Add `sst_num` to `region_statistics`

 - Updated `region_statistics.rs` to include a new constant `SST_NUM` and added it to the schema and builder structures.
 - Modified `information_schema.result` to reflect the addition of `sst_num` in the `region_statistics` table.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-17 17:36:20 +00:00
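The statistic added above is just a count over the SST levels surfaced next to the existing region stats. A toy sketch with illustrative field names (not the real `RegionStat`/`SstVersion`):

```rust
struct SstVersion {
    levels: Vec<Vec<String>>, // file handles per level, stand-in type
}

impl SstVersion {
    fn num_files(&self) -> usize {
        self.levels.iter().map(|level| level.len()).sum()
    }
}

struct RegionStat {
    num_rows: u64,
    sst_num: usize, // newly reported alongside the existing stats
}

fn main() {
    let ssts = SstVersion {
        levels: vec![
            vec!["a.parquet".into()],
            vec!["b.parquet".into(), "c.parquet".into()],
        ],
    };
    let stat = RegionStat { num_rows: 1000, sst_num: ssts.num_files() };
    println!("rows={} ssts={}", stat.num_rows, stat.sst_num);
}
```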
Yingwen
c23b26461c feat: add metrics for request wait time and adjust stall metrics (#6540)
* feat: add metric greptime_mito_request_wait_time to observe wait time

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: add worker to wait time metric

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: rename stall gauge to greptime_mito_write_stalling_count

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: change greptime_mito_write_stall_total to total stalled requests

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: merge lazy static blocks

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-17 17:17:51 +00:00
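A wait-time metric like `greptime_mito_request_wait_time` with a per-worker label can be sketched with the `prometheus` crate; the metric name comes from the commit above, while the code around it is illustrative:

```rust
use std::time::Instant;

use lazy_static::lazy_static;
use prometheus::{register_histogram_vec, HistogramVec};

lazy_static! {
    static ref REQUEST_WAIT_TIME: HistogramVec = register_histogram_vec!(
        "greptime_mito_request_wait_time",
        "time a write request waits in a worker queue before being handled",
        &["worker"]
    )
    .unwrap();
}

fn main() {
    let enqueued_at = Instant::now();
    // ... the request sits in the worker's queue here ...
    REQUEST_WAIT_TIME
        .with_label_values(&["worker-0"])
        .observe(enqueued_at.elapsed().as_secs_f64());
}
```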
zyy17
90ababca97 fix: typo for existed -> exited (#6547)
chore: `existed` -> `exited`

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-17 12:27:29 +00:00
dennis zhuang
37dc057423 feat: adds uptime telemetry (#6545)
* feat: adds uptime telemetry

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove seconds and minutes

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-17 09:55:28 +00:00
Weny Xu
8237646055 feat: add table reconciliation utilities (#6519)
* feat: add table reconciliation utilities

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update comment

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-17 08:05:38 +00:00
Lei, HUANG
011dd51495 chore: update opendal dashboard (#6541)
* chore/update-opendal-dashboard:
 ### Update Grafana Dashboard Queries

 - **Enhanced Metrics Queries**: Updated Prometheus queries in `dashboard.json`, `dashboard.md`, and `dashboard.yaml` files for both `cluster` and `standalone` dashboards to include additional operations (`Reader::read`, `Writer::write`, `Writer::close`) in the metrics calculations.
 - **Legend Format Adjustments**: Modified legend formats to include the `operation` field for better clarity in visualizations.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/update-opendal-dashboard:
 Enhance Legend Format in Grafana Dashboards

 - Updated the `legendFormat` in `dashboard.json`, `dashboard.md`, and `dashboard.yaml` files for both `cluster` and `standalone` dashboards to include the `operation` field.
 - This change affects the following files:
   - `grafana/dashboards/metrics/cluster/dashboard.json`
   - `grafana/dashboards/metrics/cluster/dashboard.md`
   - `grafana/dashboards/metrics/cluster/dashboard.yaml`
   - `grafana/dashboards/metrics/standalone/dashboard.json`
   - `grafana/dashboards/metrics/standalone/dashboard.md`
   - `grafana/dashboards/metrics/standalone/dashboard.yaml`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-17 07:17:46 +00:00
Zhenchi
eb99e439c7 feat: MatchesConstTerm displays probes (#6518)
* feat: `MatchesConstTerm` displays probes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix fmt

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-17 07:01:55 +00:00
Zhenchi
50148b25b5 fix: row selection intersection removes trailing rows (#6539)
* fix: row selection intersection removes trailing rows

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix typos

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-16 17:00:10 +00:00
Ruihang Xia
639b3ddc3e feat: update partial execution metrics (#6499)
* feat: update partial execution metrics

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* send data with metrics in distributed mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* only send partial metrics under VERBOSE flag

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* loop to while

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-16 16:59:10 +00:00
discord9
139a36a459 fix: breaking loop when not retryable (#6538)
fix: breaking when not retryable

Signed-off-by: discord9 <discord9@163.com>
2025-07-16 09:22:41 +00:00
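The fix above leaves the retry loop as soon as an error is not retryable, instead of exhausting all attempts on a permanent failure. A minimal sketch with an illustrative error type:

```rust
#[derive(Debug)]
enum OpError {
    Retryable(String),
    Fatal(String),
}

fn try_op(attempt: u32) -> Result<(), OpError> {
    if attempt < 2 {
        Err(OpError::Retryable("transient".into()))
    } else {
        Err(OpError::Fatal("bad request".into()))
    }
}

fn main() {
    let mut last_err = None;
    for attempt in 0..10 {
        match try_op(attempt) {
            Ok(()) => return,
            Err(e @ OpError::Retryable(_)) => last_err = Some(e),
            Err(e) => {
                // Not retryable: break immediately rather than looping on.
                last_err = Some(e);
                break;
            }
        }
    }
    println!("gave up: {last_err:?}");
}
```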
LFC
e6b9d09901 feat: Flight supports RecordBatch with dictionary arrays (#6521)
* feat: Flight supports RecordBatch with dictionary arrays

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-16 09:21:12 +00:00
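For context, a dictionary-encoded column is the kind of batch the Flight encoding above has to support. A sketch of building such a `RecordBatch` with arrow-rs, assuming an `arrow` dependency in `Cargo.toml` (the actual Flight change lives in GreptimeDB's transport layer):

```rust
use std::sync::Arc;

use arrow::array::{DictionaryArray, Int64Array};
use arrow::datatypes::{DataType, Field, Int32Type, Schema};
use arrow::error::ArrowError;
use arrow::record_batch::RecordBatch;

fn main() -> Result<(), ArrowError> {
    // "host" is dictionary encoded: repeated tag values share one dictionary entry.
    let host: DictionaryArray<Int32Type> =
        vec!["web-1", "web-2", "web-1"].into_iter().collect();
    let value = Int64Array::from(vec![10, 20, 30]);

    let schema = Arc::new(Schema::new(vec![
        Field::new(
            "host",
            DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Utf8)),
            false,
        ),
        Field::new("value", DataType::Int64, false),
    ]));
    let batch = RecordBatch::try_new(schema, vec![Arc::new(host), Arc::new(value)])?;
    println!("{} rows", batch.num_rows());
    Ok(())
}
```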
discord9
bbc9f3ea1e refactor: expose flow batching mode constants to config (#6442)
* refactor: make flow batching mode constants configurable

Signed-off-by: discord9 <discord9@163.com>

* docs: config docs

Signed-off-by: discord9 <discord9@163.com>

* docs: update code comment

Signed-off-by: discord9 <discord9@163.com>

* test: fix test_config_api

Signed-off-by: discord9 <discord9@163.com>

* feat: more batch opts

Signed-off-by: discord9 <discord9@163.com>

* fix after rebase

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* per review experimental options

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-16 08:05:20 +00:00
zyy17
fac8c3e62c feat: introduce common event recorder (#6501)
* feat: introduce common event recorder

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: use `backon` as retry lib

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* chore: remove unused error

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add CancellationToken

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `cancel()`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: `cancel()` -> `close()`
chore: `cancel()` -> `close()` and polish some code

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-16 03:59:41 +00:00
dennis zhuang
411eb768b1 feat: supports null response format for http API (#6531)
* feat: supports null response format for http API

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: license header and assertion

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: in seconds

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-16 03:46:39 +00:00
shuiyisong
95d2549007 chore: minor update for using pipeline with prometheus (#6522)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-16 03:03:07 +00:00
Lei, HUANG
6744f5470b fix(grpc): check grpc client unavailable (#6488)
* fix/check-grpc-client-unavailable:
 Improve async handling in `greptime_handler.rs`

 - Updated the `DoPut` response handling to use `await` with `result_sender.send` for better asynchronous operation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/check-grpc-client-unavailable:
 ### Improve Error Handling in `greptime_handler.rs`

 - Enhanced error handling for the `DoPut` operation by switching from `send` to `try_send` for the `result_sender`.
 - Added specific logging for unreachable clients, including `request_id` in the warning message.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-15 14:32:45 +00:00
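The `send` to `try_send` switch above matters because `try_send` fails immediately when the client-side channel is full or closed, which is how an unreachable client gets detected and logged instead of blocking the handler. A sketch with tokio's mpsc channel (the channel and messages are illustrative):

```rust
use tokio::sync::mpsc::{self, error::TrySendError};

#[tokio::main]
async fn main() {
    let (tx, mut rx) = mpsc::channel::<String>(1);
    tx.try_send("first".into()).unwrap(); // fills the buffer

    match tx.try_send("second".into()) {
        Ok(()) => {}
        Err(TrySendError::Full(_)) => {
            // Client is not keeping up; log (e.g. with a request id) instead of blocking.
            eprintln!("client unreachable or slow, dropping response");
        }
        Err(TrySendError::Closed(_)) => eprintln!("client went away"),
    }

    println!("received: {}", rx.recv().await.unwrap());
}
```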
localhost
9d05c7c5fa chore: subdivision of different channels (#6526)
* chore: subdivision of different channels

* chore: add as_str for channel

* chore: fix by pr comment

* chore: fix by pr comment
2025-07-15 09:22:01 +00:00
localhost
9442284fd4 chore: add db label for greptime_table_operator_ingest_rows (#6520)
* chore: add db label for greptime_table_operator_ingest_rows

* chore: add db label for greptime_table_operator_delete_rows
2025-07-15 07:16:44 +00:00
Weny Xu
1065db9518 fix: fix state transition in create table procedure (#6523)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-15 06:06:27 +00:00
Lei, HUANG
2f9a10ab74 refactor: expose bulk symbols (#6467)
* wip

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 ### Commit Message

 Enhance DDL Module Accessibility and Refactor `verify_alter` Function

 - **`statement.rs`**: Made the `ddl` module public to enhance accessibility.
 - **`ddl.rs`**:
   - Made `NAME_PATTERN_REG` public for broader usage.
   - Refactored `verify_alter` function to be a standalone public function, improving modularity and reusability.
   - Made `parse_partitions` function public to allow external access.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 ### Add Parquet Writer and Enhance Row Modifier

 - **Add Parquet Writer Module**: Introduced a new module `parquet_writer.rs` to bridge `opendal` `Writer` with `parquet` `AsyncFileWriter`.
 - **Enhance Row Modifier**: Updated `RowModifier` to use `Default` trait and made `fill_internal_columns` a public static method in `row_modifier.rs`.
 - **Expose Internal Structures**: Made `RowsIter`, `RowIter`, `TablesBuilder`, and `TableBuilder` structs public in `row_modifier.rs` and `prom_row_builder.rs`.
 - **Update Metric Engine**: Changed `RowModifier` instantiation to use `default()` in `engine.rs`.
 - **Modify Table Options Handling**: Added `fill_table_options_for_create` function in `insert.rs` to handle table options based on `AutoCreateTableType`.
 - **Make Constants Public**: Changed `DEFAULT_ROW_GROUP_SIZE` to public in `parquet.rs`.
 - **Expose Functions**: Made `extract_add_columns_expr` public in `expr_helper.rs` and `AutoCreateTableType` public in `insert.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 ### Commit Message

 Enhance HTTP Server and Prometheus Integration

 - **`http.rs`**: Made `extractor` module public to allow external access.
 - **`prom_store.rs`**: Refactored `decode_remote_write_request` to return `TablesBuilder` and adjusted logic for processing requests based on pipeline usage.
 - **`lib.rs`**: Made `metrics` module public for broader accessibility.
 - **`prom_row_builder.rs`**: Exposed `tables` field in `TablesBuilder` for external manipulation.
 - **`proto.rs`**: Changed visibility of `table_data` in `PromWriteRequest` to `pub(crate)` for internal module access.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 ### Add Accessor Methods for Managers and Executors

 - **`src/frontend/src/instance.rs`**: Added accessor methods for `NodeManagerRef`, `PartitionRuleManagerRef`, `CacheInvalidatorRef`, and `ProcedureExecutorRef` to the `Instance` struct.
 - **`src/operator/src/insert.rs`**: Introduced methods to access `NodeManagerRef` and `PartitionRuleManagerRef` in the `Inserter` struct.
 - **`src/operator/src/statement.rs`**: Added methods to retrieve `ProcedureExecutorRef` and `CacheInvalidatorRef` in the `StatementExecutor` struct.

 ### Change HashMap Implementation

 - **`src/servers/src/prom_row_builder.rs`**: Replaced `ahash::HashMap` with `std::collections::HashMap`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 Refactor table option handling in `insert.rs`

 - Replaced `Vec` with `HashMap` for `table_options` to improve efficiency.
 - Extracted logic for filling table options into a new function `fill_table_options_for_create`.
 - Modified `fill_table_options_for_create` to return the engine name based on `create_type`.
 - Simplified the insertion of table options into `create_table_expr` by using `extend` method.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* refactor/expose-bulk-symbols:
 Refactor `insert.rs` to separate engine name logic from table options

 - Updated `Inserter` implementation to determine `engine_name` separately from `fill_table_options_for_create`.
 - Modified `fill_table_options_for_create` to no longer return an engine name, focusing solely on populating table options.
 - Adjusted logic to set `engine_name` based on `AutoCreateTableType`, using `METRIC_ENGINE_NAME` for logical tables and `default_engine()` otherwise.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-14 07:32:01 +00:00
Ruihang Xia
d82bc98717 feat(parser): parse TQL in CTE position (#6456)
* naive implementation

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor to use existing tql parse logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor display logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor column list parsing logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor to remove redundent check logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* set sql cte into Query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-14 06:44:56 +00:00
shuiyisong
582bcc3b14 feat(pipeline): filter processor (#6502)
* feat: add filter processor

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* test: add tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change target to list and use `in` and `not_in`

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: rebase main and fix error

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-13 23:18:42 +00:00
zyy17
e5e10fd362 refactor: add the active_frontends() in PeerLookupService (#6504)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-11 09:42:00 +00:00
zyy17
104d607b3f refactor: support specifying ttl in open_compaction_region() (#6515)
refactor: add `ttl` in `open_compaction_region()` and `CompactionJob`

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-11 09:00:05 +00:00
zyy17
93e3a04aa8 refactor: add row_inserts() and row_inserts_with_hints(). (#6503)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-07-11 08:06:36 +00:00
Weny Xu
c1847e6b6a chore: change log level for region not found during lease renewal (#6513)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-11 07:51:29 +00:00
discord9
d258739c26 refactor(flow): faster time window expr (#6495)
* refactor: faster window expr

Signed-off-by: discord9 <discord9@163.com>

* docs: explain fast path

Signed-off-by: discord9 <discord9@163.com>

* chore: rm unwrap

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-07-11 06:43:14 +00:00
Yan Tingwang
914086668d fix: add system variable max_execution_time (#6511)
add system variable: max_execution_time

Signed-off-by: codephage. <tingwangyan2020@163.com>
2025-07-11 02:11:21 +00:00
localhost
01a8ad1304 chore: add prom store metrics (#6508)
chore: add metrics for db
2025-07-10 17:09:58 +00:00
shuiyisong
1594859957 refactor: replace pipeline::value with vrl::value (#6430)
* chore: pass compile

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: default case

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove and move code

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove serde_value to vrlvalue conversion

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: optimized vrl value related code

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: loki transform using vrl

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: remove unused error

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: fix cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: use from_utf8_lossy_owned

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: CR issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-10 17:08:31 +00:00
Ruihang Xia
351a77a2e5 fix: expand on conditional commutative as well (#6484)
* fix: expand on conditional commutative as well

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>

* add logging to figure test failure

Signed-off-by: discord9 <discord9@163.com>

* revert

Signed-off-by: discord9 <discord9@163.com>

* feat: stream drop record metrics

Signed-off-by: discord9 <discord9@163.com>

* Revert "feat: stream drop record metrics"

This reverts commit 6a16946a5b8ea37557bbb1b600847d24274d6500.

Signed-off-by: discord9 <discord9@163.com>

* feat: stream drop record metrics

Signed-off-by: discord9 <discord9@163.com>

refactor: move logging to drop too

Signed-off-by: discord9 <discord9@163.com>

fix: drop input stream before collect metrics

Signed-off-by: discord9 <discord9@163.com>

* fix: expand differently

Signed-off-by: discord9 <discord9@163.com>

* test: update sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: more dbg

Signed-off-by: discord9 <discord9@163.com>

* Revert "feat: stream drop record metrics"

This reverts commit 3eda4a2257928d95cf9c1328ae44fae84cfbb017.

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness redacted

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: discord9 <discord9@163.com>
2025-07-10 15:13:52 +00:00
shuiyisong
7723cba7da chore: skip calc ts in doc 2 with transform (#6509)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-10 13:10:02 +00:00
localhost
dd7da3d2c2 chore: remove region id to reduce time series (#6506) 2025-07-10 12:33:06 +00:00
Weny Xu
ffe0da0405 fix: correctly update partition key indices during alter table operations (#6494)
* fix: correctly update partition key indices in alter table operations

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add sqlness tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-10 08:08:07 +00:00
Ning Sun
f2c7b09825 fix: add tracing dependencies (#6497) 2025-07-10 03:01:31 +00:00
Yingwen
3583b3204f feat: override batch sequence by the sequence in FileMeta (#6492)
* feat: support overriding read sequence by sequence in the FileMeta

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test for parquet reader

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update need_override_sequence to check all row groups

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-10 02:26:25 +00:00
Copilot
fea8bc5ee7 chore(comments): fix typo and grammar issues (#6496)
* Initial plan

* Fix 5 TODO comments: spelling typos and formatting issues

Co-authored-by: waynexia <15380403+waynexia@users.noreply.github.com>

---------

Co-authored-by: copilot-swe-agent[bot] <198982749+Copilot@users.noreply.github.com>
Co-authored-by: waynexia <15380403+waynexia@users.noreply.github.com>
2025-07-10 02:24:42 +00:00
Yingwen
40363bfc0f fix: range query returns range selector error when table not found (#6481)
* test: add sqlness test for range vector with non-existent metric

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: handle empty metric for matrix selector

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update sqlness result

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: add newline

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-10 01:53:23 +00:00
jeremyhi
85c0136619 fix: greptime timestamp display null (#6469)
* feat: is_overflow method

* feat: check ts overflow
2025-07-10 01:53:00 +00:00
dennis zhuang
b70d998596 feat: improve install script (#6490)
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-09 17:04:20 +00:00
LFC
2f765c8fd4 refactor: remove unnecessary args (#6493)
* x

Signed-off-by: luofucong <luofc@foxmail.com>

* refactor: remove unnecessary args

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-09 13:23:15 +00:00
shuiyisong
d99cd98c01 fix: skip nan in prom remote write pipeline (#6489)
Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-07-09 11:46:07 +00:00
Weny Xu
a858f55257 refactor(meta): separate validation and execution logic in alter logical tables procedure (#6478)
* refactor(meta): separate validation and execution logic in alter logical tables procedure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-09 06:48:27 +00:00
Ning Sun
916967ea59 feat: allow alternative version string (#6472)
* feat: allow alternative version string

* refactor: rename original version function to verbose_version

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-07-09 06:43:01 +00:00
Weny Xu
c58d8aa94a refactor(meta): extract AlterTableExecutor from AlterTableProcedure (#6470)
* refactor(meta): extract `AlterTableExecutor` from `AlterTableProcedure`

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-09 05:13:19 +00:00
Ning Sun
eeb061ca74 feat: allow float number literal in step (#6483)
* chore: allow float number literal as step

Signed-off-by: Ning Sun <sunning@greptime.com>

* chore: switch to released version of promql parser

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-07-09 03:09:09 +00:00
shuiyisong
f7282fde28 chore: sort range query return values (#6474)
* chore: sort range query return values

* chore: add comments

* chore: add is_sorted check

* fix: test
2025-07-09 02:27:12 +00:00
dennis zhuang
a4bd11fb9c fix: empty statements hang (#6480)
* fix: empty statements hang

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* tests: add cases

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-09 02:13:14 +00:00
LFC
6dc9e8ddb4 feat: display extension ranges in "explain" (#6475)
* feat: display extension ranges in "explain"

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-09 02:11:23 +00:00
discord9
af03e89139 fix: stricter win sort condition (#6477)
test: sqlness



test: fix sqlness redacted

Signed-off-by: discord9 <discord9@163.com>
2025-07-08 22:27:17 +00:00
jeremyhi
e7a64b7dc0 chore: refactor register_region method to avoid TOCTOU issues (#6468) 2025-07-08 13:26:38 +00:00
Lin Yihai
29739b556e refactor: split some convert functions into sql-common crate (#6452)
refactor: split some convert functions into the `sql-common` crate

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>
2025-07-08 12:08:33 +00:00
Lei, HUANG
77e50d0e08 chore: expose some config (#6479)
refactor/expose-config:
 ### Make SubCommand and Fields Public in `frontend.rs`

 - Made `subcmd` field in `Command` struct public.
 - Made `SubCommand` enum public.
 - Made `config_file` and `env_prefix` fields in `StartCommand` struct public.

 These changes enhance the accessibility of command-related structures and fields, facilitating external usage and integration.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-08 11:52:23 +00:00
LFC
c2f1447345 refactor: stores the http server builder in Metasrv instance (#6461)
* refactor: stores the http server builder in Metasrv instance

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-07 06:39:05 +00:00
Weny Xu
30f7955d2b feat: add column metadata to response extensions (#6451)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-07 03:38:13 +00:00
fys
3508fddd74 feat: support show triggers (#6465)
* feat: support show triggers

* add enterprise feature

* chore: remove unused error
2025-07-07 03:30:57 +00:00
Weny Xu
351c741c70 fix(metric-engine): handle stale metadata region recovery failures (#6395)
* fix(metric-engine): handle stale metadata region recovery failures

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add unit tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-07 03:30:40 +00:00
liyang
bb43d604a4 ci: use ubuntu 16 core machine in release-cn-artifacts (#6464)
ci: use ubuntu 16 core machine
2025-07-04 18:06:07 +00:00
Lei, HUANG
9576bcb9ae fix: filter empty batch in bulk insert api (#6459)
* fix/filter-empty-batch-in-bulk-insert-api:
 **Add Early Return for Empty Record Batches in `bulk_insert.rs`**

 - Implemented an early return in the `Inserter` implementation to handle cases where `record_batch.num_rows()` is zero, improving efficiency by avoiding unnecessary processing.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 **Improve Bulk Insert Handling**

 - **`handle_bulk_insert.rs`**: Added a check to handle cases where the batch has zero rows, immediately returning and sending a success response with zero rows processed.
 - **`bulk_insert.rs`**: Enhanced logic to skip processing for masks that select none, optimizing the bulk insert operation by avoiding unnecessary iterations.

 These changes improve the efficiency and robustness of the bulk insert process by handling edge cases more effectively.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 ### Refactor and Error Handling Enhancements

 - **Refactored Timestamp Handling**: Introduced `timestamp_array_to_primitive` function in `timestamp.rs` to streamline conversion of timestamp arrays to primitive arrays, reducing redundancy in `handle_bulk_insert.rs` and `bulk_insert.rs`.
 - **Error Handling**: Added `InconsistentTimestampLength` error in `error.rs` to handle mismatched timestamp column lengths in bulk insert operations.
 - **Bulk Insert Logic**: Updated `handle_bulk_insert.rs` to utilize the new timestamp conversion function and added checks for timestamp length consistency.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/filter-empty-batch-in-bulk-insert-api:
 **Refactor `bulk_insert.rs` to streamline imports**

 - Simplified import statements by removing unused timestamp-related arrays and data types from the `arrow` crate in `bulk_insert.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-07-04 13:32:10 +00:00
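The early return described above is a simple guard: an empty batch reports zero affected rows right away instead of running the full decode/partition/write pipeline. A sketch with a toy batch type standing in for an Arrow `RecordBatch`:

```rust
struct RecordBatch {
    num_rows: usize,
}

impl RecordBatch {
    fn num_rows(&self) -> usize {
        self.num_rows
    }
}

fn handle_bulk_insert(batch: &RecordBatch) -> usize {
    // Nothing to do for an empty batch: return immediately with zero rows
    // processed instead of running the full insert path.
    if batch.num_rows() == 0 {
        return 0;
    }
    // ... real insert path would go here ...
    batch.num_rows()
}

fn main() {
    assert_eq!(handle_bulk_insert(&RecordBatch { num_rows: 0 }), 0);
    assert_eq!(handle_bulk_insert(&RecordBatch { num_rows: 3 }), 3);
}
```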
Zhenchi
dc17e6e517 fix: add backward compatibility for SkippingIndexOptions deserialization (#6458)
* fix: add backward compatibility for `SkippingIndexOptions` deserialization

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-04 11:58:27 +00:00
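Serde-based backward compatibility usually means giving newly added option fields a `#[serde(default)]`, so values persisted before the field existed still deserialize. A sketch of the pattern; the field names are illustrative, not the real `SkippingIndexOptions`:

```rust
use serde::{Deserialize, Serialize};

fn default_false_positive_rate() -> f64 {
    0.01
}

#[derive(Debug, Serialize, Deserialize)]
struct SkippingIndexOptions {
    granularity: u32,
    // Old metadata has no such key; fall back to a default instead of failing.
    #[serde(default = "default_false_positive_rate")]
    false_positive_rate: f64,
}

fn main() {
    let old_json = r#"{ "granularity": 8192 }"#;
    let opts: SkippingIndexOptions = serde_json::from_str(old_json).unwrap();
    println!("{opts:?}"); // false_positive_rate falls back to 0.01
}
```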
Yiran
563d25ee04 fix: doc links (#6305)
Signed-off-by: Yiran <cuiyiran3@gmail.com>
2025-07-04 09:52:47 +00:00
Weny Xu
7d17782fd5 feat: persist column ids in table metadata (#6457)
* feat: persist column ids in table metadata

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-04 08:12:29 +00:00
fys
c5360601f5 feat: information table extension (#6434)
* feat: information table extension

* avoid use std HashMap behind cfg feature
2025-07-04 04:37:36 +00:00
discord9
9b5baa965c feat: truly limit time range by split window (#6295)
* feat: actually split window to limit time range

feat: truly limit time range by split window

Update src/flow/src/batching_mode/state.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Signed-off-by: discord9 <discord9@163.com>

* chore: added stalled time window range

Signed-off-by: discord9 <discord9@163.com>

* fix: do not flush all time ranges, as that is too expensive

Signed-off-by: discord9 <discord9@163.com>

* test: make it more robust

Signed-off-by: discord9 <discord9@163.com>

* what

Signed-off-by: discord9 <discord9@163.com>

* feat: defensively handle surplus

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review, explain flush flow

Signed-off-by: discord9 <discord9@163.com>

* chore: per bugbot

Signed-off-by: discord9 <discord9@163.com>

* fix: a temp fix to make mirror insert go first (still needs a better fix to sync with mirror inserts that happen before)

Signed-off-by: discord9 <discord9@163.com>

* chore: add todo

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2025-07-04 03:37:43 +00:00
Yingwen
76a5145def fix: enable max_execution_time for other read-only statements (#6454)
Also disable the timeout when timeout is 0

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-03 13:46:02 +00:00
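Disabling the timeout when it is 0 can be sketched by wrapping the statement future in `tokio::time::timeout` only when `max_execution_time` is non-zero; everything below is an illustrative stand-in for the frontend's actual execution path:

```rust
use std::time::Duration;

async fn run_statement() -> &'static str {
    "ok"
}

async fn execute_with_limit(max_execution_time_ms: u64) -> Result<&'static str, &'static str> {
    if max_execution_time_ms == 0 {
        // 0 means "no limit": run without a timeout at all.
        Ok(run_statement().await)
    } else {
        tokio::time::timeout(Duration::from_millis(max_execution_time_ms), run_statement())
            .await
            .map_err(|_| "statement timed out")
    }
}

#[tokio::main]
async fn main() {
    assert_eq!(execute_with_limit(0).await, Ok("ok"));
    assert_eq!(execute_with_limit(100).await, Ok("ok"));
}
```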
Ruihang Xia
7b2703760b feat: skip rule checker on ingestion (#6453)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-03 13:31:16 +00:00
Ruihang Xia
81ea172ce4 feat!: point matrix based partition rule checker (#6431)
* bare implementation

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* stateful generator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* error report

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix remap checkpoint

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use matrix generator as iterator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* pre-calculate suffix product

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update existing test cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix ut

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-03 06:50:02 +00:00
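The point-matrix checker enumerates one test point per combination of per-dimension candidate values; the pre-calculated suffix products mentioned above turn a flat index into coordinates, which is what makes the generator usable as an iterator. A sketch under those assumptions:

```rust
/// Pre-compute suffix products: suffix[i] = product of dims[i+1..].
fn suffix_products(dims: &[usize]) -> Vec<usize> {
    let mut prod = vec![1; dims.len()];
    for i in (0..dims.len().saturating_sub(1)).rev() {
        prod[i] = prod[i + 1] * dims[i + 1];
    }
    prod
}

/// Map a flat index to one coordinate per dimension (mixed-radix decode).
fn unrank(index: usize, suffix: &[usize], dims: &[usize]) -> Vec<usize> {
    suffix
        .iter()
        .zip(dims)
        .map(|(&s, &d)| (index / s) % d)
        .collect()
}

fn main() {
    let dims = [2, 3]; // e.g. two partition columns with 2 and 3 boundary values
    let suffix = suffix_products(&dims);
    for i in 0..dims.iter().product::<usize>() {
        println!("{:?}", unrank(i, &suffix, &dims));
    }
}
```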
dennis zhuang
f7c363f969 fix: label_replace and label_join functions when used as sub‐expressions (#6443)
* fix: label_replace and label_join functions in expressions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove update_fields

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: tql eval -> TQL EVAL

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: empty regex and not existing source label

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: simplify test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-07-03 05:34:22 +00:00
Ruihang Xia
5f2daae087 fix: remap column indices on overriding logical table partitions (#6446)
* fix: remap column indices on overriding logical table partitions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor map query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-02 12:12:00 +00:00
Yingwen
b1b0d0136f fix: correct MAX_EXECUTION_TIME timeout calculation (#6444)
* feat: implement statement timeout in frontend instance

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fail fast when timeout is 0

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: update start time

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-07-02 08:31:40 +00:00
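Following the later fix (#6454 above), the effective rule is that a zero MAX_EXECUTION_TIME disables the timeout, and the remaining budget is measured from a correctly updated start time. A sketch of that calculation, with the types assumed:

```rust
use std::time::{Duration, Instant};

/// None means "no timeout"; otherwise the remaining budget, saturating at zero.
fn remaining_timeout(timeout_ms: u64, start: Instant) -> Option<Duration> {
    if timeout_ms == 0 {
        return None; // 0 disables the statement timeout
    }
    Some(Duration::from_millis(timeout_ms).saturating_sub(start.elapsed()))
}
```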
Zhenchi
599f289f59 feat: add granularity and false_positive_rate options for indexes (#6416)
* feat: add `granularity` and `false_positive_rate` options for indexes

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* upgrade proto

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-07-02 07:33:39 +00:00
LFC
385f12a62e refactor: extract the common method for errors into tonic status (#6437)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-07-02 02:57:30 +00:00
fys
6b90e2b6b4 fix: allow clippy::print_stdout in cli crate (#6436)
* fix: allow clippy::print_stdout in cli crate

* add clippy lint options
2025-07-02 01:40:58 +00:00
ZonaHe
a4f3e96e96 feat: update dashboard to v0.10.2 (#6433)
Co-authored-by: sunchanglong <sunchanglong@users.noreply.github.com>
2025-07-02 01:27:37 +00:00
Ruihang Xia
2b0f27da51 feat: don't allow creating flow with the same sink and source table (#6435)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-07-01 11:33:09 +00:00
Weny Xu
e0382eeb7c fix: fix dest_keys chunks bug in TombstoneManager (#6432)
* fix(meta): fix dest_keys_chunks bug in TombstoneManager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: fix typo

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix sqlness tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-07-01 09:20:13 +00:00
liyang
4aa6add8dc ci: add check-version script to check whether the latest image is pushed (#6415)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-07-01 07:45:47 +00:00
zyy17
645988975e refactor: add RegionMigrationTriggerReason in RegionMigrationProcedureTask (#6413)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-30 11:16:41 +00:00
LFC
a203909de3 feat: extension range definition (#6386)
* feat: defined extension range

Signed-off-by: luofucong <luofc@foxmail.com>

* remove feature parameters

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-06-30 02:42:40 +00:00
discord9
616e76941a feat: flow query parallel=1&query faster with many windows&min one time window (#6324)
* feat: flow query parallelism=1 & faster queries when there are too many windows & at least one time window

Signed-off-by: discord9 <discord9@163.com>

* chore: default flow query parallelism=1

Signed-off-by: discord9 <discord9@163.com>

* refactor: use query options in flownode per review

Signed-off-by: discord9 <discord9@163.com>

* docs: update comment

Signed-off-by: discord9 <discord9@163.com>

* chore: fix test

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: make config docs

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-06-30 02:17:01 +00:00
Yingwen
bc42d35c2a chore: bump version to 0.16 (#6417)
Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-28 01:46:01 +00:00
fys
524bdfff22 fix: add cfg for DecodeSqlValue error (#6420) 2025-06-28 01:39:06 +00:00
fys
6bed0b6ba0 feat: add trigger-related error code (#6419) 2025-06-28 01:25:20 +00:00
shuiyisong
dec8c52b18 feat(pipeline): support Loki API (#6390)
* chore: use schema_info

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: abstract loki item generator

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: introduce middle item

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* feat: introduce pipeline in loki api

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* test: add tests

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: minor update

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update prefix and test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change recursion to loop

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: cr issue

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-06-28 01:01:08 +00:00
zyy17
753a7e1a24 refactor: pass pipeline name through http header and get db from query context (#6405)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-27 10:43:37 +00:00
Lei, HUANG
6684200fce fix: skip failing nodes when gathering process info (#6412)
* fix/process-manager-skip-fail-nodes:
 - **Enhance Error Handling in `process_manager.rs`:**
   Improved error handling by adding a warning log for failing nodes in the `list_process` method. This ensures that the process listing continues even if some nodes fail to respond.

 - **Add Error Type Import in `process_manager.rs`:**
   Included the `Error` type from the `error` module to handle errors more effectively within the `ProcessManager` implementation.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix: clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/process-manager-skip-fail-nodes:
 **Enhancements to Debugging and Trait Implementation**

 - **`process_manager.rs`**: Improved logging by adding more detailed error messages when skipping failing nodes.
 - **`selector.rs`**: Enhanced the `FrontendClient` trait by adding the `Debug` trait bound to improve debugging capabilities.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-27 08:20:01 +00:00
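A sketch of the skip-and-warn behavior described above; the node listing and the RPC helper are placeholders, not the actual ProcessManager API:

```rust
/// Gather process info from all nodes, skipping any node that fails.
fn list_processes(nodes: &[&str]) -> Vec<String> {
    let mut all = Vec::new();
    for node in nodes {
        match query_node(node) {
            Ok(mut procs) => all.append(&mut procs),
            // A failing node is logged and skipped so the listing continues.
            Err(e) => eprintln!("skipping failing node {node}: {e}"),
        }
    }
    all
}

fn query_node(_node: &str) -> Result<Vec<String>, String> {
    Ok(vec![]) // stand-in for the real RPC call
}
```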
Weny Xu
5fcb97724f chore: correct typo in configuration (#6411)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-27 07:55:13 +00:00
Zhenchi
ff559b2688 fix: complete partial index search results in cache (#6403)
* fix: complete partial index search results in cache

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* add initial tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* cover issue case

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* TestEnv new -> async

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-06-27 07:40:14 +00:00
Ruihang Xia
8473a34fc9 feat: Collider for playing with PartitionRule (#6399)
* skeleton

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* initial impl and tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor and reorganize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* error handling

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* explain naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-27 07:15:33 +00:00
jeremyhi
df0ebf0378 feat: override logical table's partition key indices (#6385)
* feat: Override logical table's partition key indices with physical table's

* chore: by comment
2025-06-27 02:55:56 +00:00
Weny Xu
4a665fd27b refactor: move #[allow(clippy::print_stdout)] to lib level (#6398)
chore: allow cli to print stdout

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-27 02:40:12 +00:00
Yingwen
b4d6441716 refactor: rename test show_processList to show_process_list (#6408)
refactor: rename show_processList to show_process_list

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-27 01:18:54 +00:00
liyang
bdd50a2263 ci: try to fix the job permissions (#6407)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-06-26 17:23:43 +00:00
codephage
f87b12b2aa feat: remove own pow fn (#6404)
feat: remove own pow fn

Signed-off-by: codephage. <381510760@qq.com>
2025-06-26 09:27:30 +00:00
Yiran
07eec083b9 fix: doc issue assignee (#6406)
Signed-off-by: Yiran <cuiyiran3@gmail.com>
2025-06-26 09:18:47 +00:00
Weny Xu
4737285275 feat: implement pause/resume functionality for procedure manager (#6393)
* feat: implement pause/resume functionality for procedure manager

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-26 01:57:12 +00:00
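A minimal sketch of a pause/resume gate, assuming an atomic flag consulted before submitting procedures; what the real manager does with in-flight procedures is not captured here:

```rust
use std::sync::atomic::{AtomicBool, Ordering};

/// Stand-in for the procedure manager's pause/resume switch.
struct ProcedureGate {
    paused: AtomicBool,
}

impl ProcedureGate {
    fn pause(&self) {
        self.paused.store(true, Ordering::SeqCst);
    }

    fn resume(&self) {
        self.paused.store(false, Ordering::SeqCst);
    }

    /// New procedure submissions check this before running.
    fn can_submit(&self) -> bool {
        !self.paused.load(Ordering::SeqCst)
    }
}
```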
codephage
55f5e09885 fix: sqlness_test show_processList (#6401)
fix test sqlness_test show_processList

Signed-off-by: codephage. <381510760@qq.com>
2025-06-26 01:53:55 +00:00
liyang
9ab36e9a6f test: add a test load configuration example for flownode (#6397)
* test: add a test load configuration example for flownode

Signed-off-by: liyang <daviderli614@gmail.com>

* format rust

Signed-off-by: liyang <daviderli614@gmail.com>

* fix cargo clippy

Signed-off-by: liyang <daviderli614@gmail.com>

* refine FlownodeOptions visibility

Signed-off-by: liyang <daviderli614@gmail.com>

* format rust

Signed-off-by: liyang <daviderli614@gmail.com>

---------

Signed-off-by: liyang <daviderli614@gmail.com>
2025-06-26 01:48:53 +00:00
Lei, HUANG
4bb5d00a4b fix(http): apply string validation mode to pipeline processor (#6378)
* fix/apply-string-validation-to-pipeline:
 ### Commit Summary

 - **Refactor `decode_string` Functionality**:
   - Moved `decode_string` logic into `PromValidationMode` as a method `decode_string`.
   - Updated all references to use the new method.
   - Files affected: `http.rs`, `prom_row_builder.rs`, `proto.rs`.

 - **Logging Enhancements**:
   - Added `debug` logging for invalid UTF-8 string values.
   - File affected: `http.rs`.

 - **Test Updates**:
   - Modified tests to use the new `decode_string` method in `PromValidationMode`.
   - File affected: `proto.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix clippy

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-25 18:56:35 +00:00
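A sketch of the refactor's shape: `decode_string` becomes a method on the validation mode, so every call site (http.rs, prom_row_builder.rs, proto.rs) picks strict or lossy behavior from one place. The variant names and exact behavior here are assumptions:

```rust
#[derive(Clone, Copy)]
enum PromValidationMode {
    Strict,
    Lossy,
}

impl PromValidationMode {
    /// Decode raw label/value bytes according to the validation mode.
    fn decode_string(&self, bytes: &[u8]) -> Option<String> {
        match self {
            // Strict rejects invalid UTF-8 (the caller logs it at debug level).
            PromValidationMode::Strict => {
                std::str::from_utf8(bytes).ok().map(|s| s.to_string())
            }
            // Lossy replaces invalid sequences instead of failing.
            PromValidationMode::Lossy => {
                Some(String::from_utf8_lossy(bytes).into_owned())
            }
        }
    }
}
```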
Lei, HUANG
1d07864b29 refactor(object-store): move backends building functions back to object-store (#6400)
refactor/building-backend-in-object-store:
 ### Refactor Object Store Configuration

 - **Centralize Object Store Configurations**: Moved object store configurations (`FileConfig`, `S3Config`, `OssConfig`, `AzblobConfig`, `GcsConfig`) to `object-store/src/config.rs`.
 - **Error Handling Enhancements**: Introduced `object-store/src/error.rs` for improved error handling related to object store operations.
 - **Factory Pattern for Object Store**: Implemented `object-store/src/factory.rs` to create object store instances, consolidating logic from `datanode/src/store.rs`.
 - **Remove Redundant Store Implementations**: Deleted individual store files (`azblob.rs`, `fs.rs`, `gcs.rs`, `oss.rs`, `s3.rs`) from `datanode/src/store/`.
 - **Update Usage of Object Store Config**: Updated references to `ObjectStoreConfig` in `datanode.rs`, `standalone.rs`, `config.rs`, and `error.rs` to use the new centralized configuration.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-25 13:49:55 +00:00
ZonaHe
9be75361a4 feat: update dashboard to v0.10.1 (#6396)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-06-25 12:58:04 +00:00
Ning Sun
9c1df68a5f feat: introduce /v1/health for healthcheck from external (#6388)
Signed-off-by: Ning Sun <sunning@greptime.com>
2025-06-25 12:25:36 +00:00
fys
0209461155 chore: add components for standalone instance (#6383) 2025-06-25 12:17:34 +00:00
Ruihang Xia
e728cb33fb feat: implement count_hash aggr function (#6342)
* feat: implement count_hash aggr function

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change copyright year

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* review changes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-25 12:14:03 +00:00
fys
cde7e11983 refactor: avoid adding feature to parameter (#6391)
* refactor: avoid adding feature to parameter

* avoid `cfg(not(feature = ...))` block
2025-06-25 10:47:20 +00:00
dennis zhuang
944b4b3e49 feat: supports CsvWithNames and CsvWithNamesAndTypes formats (#6384)
* feat: supports CsvWithNames and CsvWithNamesAndTypes formats and object/array types

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: added and fixed tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: fix test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove comments

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* test: add json type csv tests

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove comment

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-06-25 07:28:11 +00:00
fys
7953b090c0 feat: support syntax parsing of drop trigger (#6371)
* feat: trigger drop

* Update Cargo.toml

* Update Cargo.lock

---------

Co-authored-by: Ning Sun <classicning@gmail.com>
2025-06-24 18:48:39 +00:00
liyang
7aa9af5ba6 chore: clarify default OTLP endpoint value (#6381)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-06-24 11:44:45 +00:00
Ruihang Xia
7a9444c85b refactor: remove stale manifest structures (#6382)
* refactor: remove stale manifest structures

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/store-api/src/lib.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-24 09:23:10 +00:00
LFC
bb12be3310 refactor: scan Batches directly (#6369)
* refactor: scan `Batch`es directly

Signed-off-by: luofucong <luofc@foxmail.com>

* fix ci

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-06-24 07:55:49 +00:00
Weny Xu
24019334ee feat: implement automatic region failure detector registrations (#6370)
* feat: implement automatic region failure detector registrations

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remove unused error

Signed-off-by: WenyXu <wenymedia@gmail.com>

* test: add more tests

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: add `region_failure_detector_initialization_delay` option

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update config.md

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: update config.md

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-24 06:12:12 +00:00
codephage
116d5cf82b feat: support mysql flavor show processlist shortcut (#6328) (#6379)
* feat: support mysql flavor show processlist shortcut (#6328)

Signed-off-by: codephage. <381510760@qq.com>

* Refactor SHOW PROCESSLIST handling and add tests

Signed-off-by: codephage. <381510760@qq.com>

* add sqlness test

Signed-off-by: codephage. <381510760@qq.com>

* add sqlness test result

Signed-off-by: codephage. <381510760@qq.com>

* fix sqlness test show_processList

Signed-off-by: codephage. <381510760@qq.com>

---------

Signed-off-by: codephage. <381510760@qq.com>
2025-06-24 03:50:16 +00:00
zyy17
90a3894564 refactor: always write parent_span_id for otlp traces ingestion (#6356)
* refactor: always write `parent_span_id` for otlp traces ingestion

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: support writing None values in row writer

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-23 11:36:57 +00:00
Yingwen
39d3e0651d feat: Support ListMetadataRequest to retrieve regions' metadata (#6348)
* feat: support list metadata in region server

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: add test for list region metadata

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: return null if the region does not exist

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: update greptime-proto

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-23 07:11:20 +00:00
zyy17
a49edc6ca6 refactor: add otlp_export_protocol config to support exporting trace data through gRPC and HTTP protocols (#6357)
* refactor: support http traces

* refactor: add `otlp_export_protocol` config to support exporting trace data through gRPC and HTTP protocols

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* Update src/common/telemetry/src/logging.rs

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-23 02:56:12 +00:00
shuiyisong
7cd6be41ce feat(pipeline): introduce pipeline doc version 2 for combine-transform (#6360)
* chore: init commit of pipeline doc version v2

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove unused code

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: remove unused code

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: add test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: compatible with v1 to remain field in the map during transform

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* refactor: pipeline.exec_mut

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: typo

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: change from v2 to 2 in version setting

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
2025-06-22 00:58:36 +00:00
ZonaHe
15616d0c43 feat: update dashboard to v0.10.0 (#6368)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-06-20 23:48:35 +00:00
dennis zhuang
b43e315c67 fix: test test_tls_file_change_watch (#6366)
* fix: test test_tls_file_change_watch

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: cert_path

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: test

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: revert times

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: debug

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* fix: times

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: remove assertions

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

* chore: use inspect_err

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>

---------

Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
2025-06-20 16:57:07 +00:00
Yingwen
36ab1ceef7 chore: prints a warning when skip_ssl_validation is true (#6367)
chore: warn when skip_ssl_validation is true

We already log all configs when a node starts.

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-20 13:14:50 +00:00
Weny Xu
3fb1b726c6 refactor(cli): simplify metadata command parameters (#6364)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-20 09:00:21 +00:00
Olexandr88
c423bb31fe docs: added YouTube link to documentation (#6362)
Update README.md
2025-06-20 08:16:54 +00:00
rgidda
e026f766d2 feat(storage): Add skip_ssl_validation option for object storage HTTP client (#6358)
* feat(storage): Add skip_ssl_validation option for object storage HTTP client

Signed-off-by: rgidda <rgidda@hitachivantara.com>

* fix(test): broken test case for the skip_ssl_validation option for object storage HTTP client

Signed-off-by: rgidda <rgidda@hitachivantara.com>

* fix: test

* fix: test

---------

Signed-off-by: rgidda <rgidda@hitachivantara.com>
Co-authored-by: rgidda <rgidda@hitachivantara.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2025-06-20 08:08:19 +00:00
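A sketch of what such an option typically wires through, using reqwest's escape hatch; whether the object-store HTTP client is built exactly this way is an assumption:

```rust
/// Build an HTTP client that optionally skips TLS certificate validation.
/// Skipping validation is unsafe outside trusted test environments.
fn build_http_client(skip_ssl_validation: bool) -> reqwest::Result<reqwest::Client> {
    reqwest::Client::builder()
        .danger_accept_invalid_certs(skip_ssl_validation)
        .build()
}
```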
discord9
9d08f2532a feat: dist auto step aggr pushdown (#6268)
* wip: steppable aggr fn

Signed-off-by: discord9 <discord9@163.com>

* poc: step aggr query

Signed-off-by: discord9 <discord9@163.com>

* feat: mvp poc stuff

Signed-off-by: discord9 <discord9@163.com>

* test: sqlness

Signed-off-by: discord9 <discord9@163.com>

* chore: import missing

Signed-off-by: discord9 <discord9@163.com>

* feat: support first/last_value

Signed-off-by: discord9 <discord9@163.com>

* fix: check also includes first/last value

Signed-off-by: discord9 <discord9@163.com>

* chore: clean up after rebase

Signed-off-by: discord9 <discord9@163.com>

* feat: optimize yes!

Signed-off-by: discord9 <discord9@163.com>

* fix: alias qualified

Signed-off-by: discord9 <discord9@163.com>

* test: more testcases

Signed-off-by: discord9 <discord9@163.com>

* chore: qualified column

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* fix: case when can push down

Signed-off-by: discord9 <discord9@163.com>

* feat: udd/hll_merge is also the same

Signed-off-by: discord9 <discord9@163.com>

* fix: udd/hll_merge args

Signed-off-by: discord9 <discord9@163.com>

* tests: fix sqlness

Signed-off-by: discord9 <discord9@163.com>

* tests: fix sqlness

Signed-off-by: discord9 <discord9@163.com>

* fix: udd/hll merge steppable

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* test: REDACTED

Signed-off-by: discord9 <discord9@163.com>

* refactor: per review

Signed-off-by: discord9 <discord9@163.com>

* refactor: more formal transform action

Signed-off-by: discord9 <discord9@163.com>

* feat: support modify child plan too

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-06-20 07:18:55 +00:00
LFC
e072726ea8 refactor: make scanner creation async (#6349)
* refactor: make scanner creation async

Signed-off-by: luofucong <luofc@foxmail.com>

* resolve PR comments

Signed-off-by: luofucong <luofc@foxmail.com>

---------

Signed-off-by: luofucong <luofc@foxmail.com>
2025-06-20 06:44:49 +00:00
Ning Sun
e78c3e1eaa refactor: make metadata region option opt-in (#6350)
* refactor: make metadata region option opt-in

Signed-off-by: Ning Sun <sunning@greptime.com>

* fix: preserve wal_options for metadata region

Signed-off-by: Ning Sun <sunning@greptime.com>

* Update src/metric-engine/src/engine/create.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-20 03:31:16 +00:00
Weny Xu
89e3c8edab fix(meta): enhance mysql election client with timeouts and reconnection (#6341)
* fix(meta): enhance mysql election client with timeouts and reconnection

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat: improve MySQL election client lease management and error handling

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: adjust timeout configurations for election clients

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: remove unused error

Signed-off-by: WenyXu <wenymedia@gmail.com>

* fix: fix unit test

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-19 12:44:23 +00:00
fys
d4826b998d feat: support execute sql in frontend_client (#6355)
* feat: support execute sql in frontend_client

* chore: remove unnecessary clone

* add components for flownode instance

* add feature gate for component

* fix: enterprise feature
2025-06-19 09:47:16 +00:00
Weny Xu
d9faa5c801 feat(cli): add metadata del commands (#6339)
* feat: introduce cli for deleting metadata

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor(cli): flatten metadata command structure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add alias

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor(meta): implement logical deletion for CLI tool

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-19 06:48:00 +00:00
Ning Sun
12c3a3205b chore: security updates (#6351) 2025-06-19 06:43:43 +00:00
Weny Xu
5231505021 fix(metric-engine): properly propagate errors during batch open operation (#6325)
* fix(metric-engine): properly propagate errors during batch open operation

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: add comments

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-19 06:37:54 +00:00
Lei, HUANG
6ece560f8c fix: reordered write cause incorrect kv (#6345)
* fix/reordered-write-cause-incorrect-kv:
 - **Enhance Testing in `partition_tree.rs`**: Added comprehensive test functions such as `kv_region_metadata`, `key_values`, and `collect_kvs` to improve the robustness of key-value operations and ensure correct behavior of the `PartitionTreeMemtable`.
 - **Improve Key Handling in `dict.rs`**: Modified `KeyDictBuilder` to handle both full and sparse keys, ensuring correct mapping and insertion. Added a new test `test_builder_finish_with_sparse_key` to validate the handling of sparse keys.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/reordered-write-cause-incorrect-kv:
 ### Refactor `partition_tree.rs` for Improved Key Handling

 - **Refactored Key Handling**: Simplified the `key_values` function to accept an iterator of keys, removing hardcoded key-value pairs. This change enhances flexibility and reduces redundancy in key management.
 - **Updated Test Cases**: Modified test cases to use the new `key_values` function signature, ensuring they iterate over keys dynamically rather than relying on predefined lists.

 Files affected:
 - `src/mito2/src/memtable/partition_tree.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* fix/reordered-write-cause-incorrect-kv:
 Enhance Testing in `partition_tree.rs`

 - Added assertions to verify key-value collection after `memtable` and `forked` operations.
 - Refactored key-value writing logic for clarity in `forked` operations.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-19 06:32:40 +00:00
fys
2ab08a8f93 chore(deps): switch greptime-proto to official repository (#6347) 2025-06-18 12:52:46 +00:00
Lei, HUANG
086ae9cdcd chore: print series count after wal replay (#6344)
* chore/print-series-count-after-wal-replay:
 ### Add Series Count Functionality and Logging Enhancements

 - **`time_partition.rs`**: Introduced `series_count` method to calculate the total timeseries count across all time partitions.
 - **`opener.rs`**: Enhanced logging to include the total timeseries replayed during WAL replay.
 - **`version.rs`**: Added `series_count` method to `VersionControlData` for approximating timeseries count in the current version.
 - **`handler.rs`**: Added entry and exit logging for the `sql` function to trace execution flow.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/print-series-count-after-wal-replay:
 ### Remove Unused Import

 - **File Modified**: `src/servers/src/http/handler.rs`
 - **Change Summary**: Removed the unused `info` import from `common_telemetry`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-18 12:04:39 +00:00
LFC
6da8e00243 refactor: make finding leader in metasrv client dynamic (#6343)
Signed-off-by: luofucong <luofc@foxmail.com>
2025-06-18 11:29:23 +00:00
Arshdeep
4b04c402b6 fix: add path exist check in copy_table_from (#6182) (#6300)
Signed-off-by: Arshdeep54 <balarsh535@gmail.com>
2025-06-18 09:50:27 +00:00
Lei, HUANG
a59b6c36d2 chore: add metrics for active series and field builders (#6332)
* chore/series-metrics:
 ### Add Metrics for Active Series and Values in Memtable

 - **`simple_bulk_memtable.rs`**: Implemented `Drop` trait for `SimpleBulkMemtable` to decrement `MEMTABLE_ACTIVE_SERIES_COUNT` and `MEMTABLE_ACTIVE_VALUES_COUNT` upon dropping.
 - **`time_series.rs`**:
   - Introduced `SeriesMap` with `Drop` implementation to manage active series and values count.
   - Updated `SeriesSet` and `Iter` to use `SeriesMap`.
   - Added `num_values` method in `Series` to calculate the number of values.
 - **`metrics.rs`**: Added `MEMTABLE_ACTIVE_SERIES_COUNT` and `MEMTABLE_ACTIVE_VALUES_COUNT` metrics to track active series and values in `TimeSeriesMemtable`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/series-metrics:
- Add metrics for active series and field builders
- Update dashboard

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/series-metrics:
 **Add Series Count Tracking in Memtables**

 - **`flush.rs`**: Updated `RegionFlushTask` to track and log the series count during memtable flush operations.
 - **`memtable.rs`**: Introduced `series_count` in `MemtableStats` and added a method to retrieve it.
 - **`partition_tree.rs`, `partition.rs`, `tree.rs`**: Implemented series count calculation in `PartitionTreeMemtable` and its components.
 - **`simple_bulk_memtable.rs`, `time_series.rs`**: Integrated series count tracking in `SimpleBulkMemtable` and `TimeSeriesMemtable` implementations.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* Update src/mito2/src/memtable.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2025-06-18 09:16:45 +00:00
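The gauge bookkeeping described above hinges on a Drop implementation, so the metric can never leak when a memtable goes away. A stand-alone sketch with an atomic standing in for MEMTABLE_ACTIVE_SERIES_COUNT:

```rust
use std::sync::atomic::{AtomicI64, Ordering};

static ACTIVE_SERIES: AtomicI64 = AtomicI64::new(0);

struct SeriesMap {
    series: usize,
}

impl SeriesMap {
    fn new(series: usize) -> Self {
        ACTIVE_SERIES.fetch_add(series as i64, Ordering::Relaxed);
        SeriesMap { series }
    }
}

impl Drop for SeriesMap {
    fn drop(&mut self) {
        // Dropping the map releases its contribution to the gauge,
        // mirroring the Drop impls the commit adds.
        ACTIVE_SERIES.fetch_sub(self.series as i64, Ordering::Relaxed);
    }
}
```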
zyy17
f6ce6fe385 fix(jaeger-api): incorrect find_traces() logic and multiple api compatible issues (#6293)
* fix: use `limit` params in jaeger http

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: only parse `max_duration` and `min_duration` when they are not empty

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: handle the input for empty `limit` string

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: missing the filter for `service_name`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* test: fix ci errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: incorrect behavior of find_traces

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: the logic of `find_traces()`

The correct logic should be:

1. Get all trace ids that match the filters;

2. Get all traces that match the trace ids from the previous query;

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* fix: integration test errors

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add `empty_string_as_none`

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: refine naming

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-18 08:01:36 +00:00
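The corrected two-step `find_traces()` flow spelled out in the commit message, sketched with placeholder query helpers standing in for the real Jaeger queries:

```rust
fn find_traces(filters: &str) -> Vec<String> {
    // 1. Get all trace ids that match the filters.
    let trace_ids = query_trace_ids(filters);
    // 2. Get all traces that match those trace ids.
    query_traces_by_ids(&trace_ids)
}

fn query_trace_ids(_filters: &str) -> Vec<String> {
    vec![] // stand-in for the filtered trace-id query
}

fn query_traces_by_ids(_ids: &[String]) -> Vec<String> {
    vec![] // stand-in for the trace fetch by id
}
```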
Weny Xu
4d4bfb7d8b fix(metric): prevent setting memtable type for metadata region (#6340)
Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-18 03:49:22 +00:00
Ruihang Xia
6e1e8f19e6 feat: support setting FORMAT in TQL ANALYZE/VERBOSE (#6327)
* feat: support setting FORMAT in TQL ANALYZE/VERBOSE

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-18 03:39:12 +00:00
Weny Xu
49cb4da6d2 feat: introduce CLI tool for repairing logical table metadata (#6322)
* feat: introduce logical table metadata repair cli tool

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: deps

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: flatten doctor module structure

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-17 13:53:59 +00:00
Lei, HUANG
0d0236ddab fix: revert string builder initial capacity in TimeSeriesMemtable (#6330)
fix/revert-string-builder-initial-capacity:
 ### Update `time_series.rs` Memory Allocation

 - **Reduced StringBuilder Capacity**: Adjusted the initial capacity of `StringBuilder` in `ValueBuilder` from `(256, 4096)` to `(4, 8)` to optimize memory usage in `time_series.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-17 13:24:52 +00:00
Lei, HUANG
f8edb53b30 fix: carry process id in query ctx (#6335)
fix/carry-process-id-in-query-ctx:
 ### Commit Message

 Enhance query context handling in `instance.rs`

 - Updated `Instance` implementation to include `process_id` in query context initialization, improving traceability and debugging capabilities.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-17 12:04:23 +00:00
Lin Yihai
438791b3e4 feat: Add DROP DEFAULT (#6290)
* feat: Add `DROP DEFAULT`

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

* chore: use `next_token`

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>

---------

Signed-off-by: Yihai Lin <yihai-lin@foxmail.com>
2025-06-17 09:33:56 +00:00
discord9
50e4c916e7 chore: clean up unused impl &standalone use mark dirty (#6331)
Signed-off-by: discord9 <discord9@163.com>
2025-06-17 08:18:17 +00:00
Ruihang Xia
16e7f7b64b fix: override logical table's partition column with physical table's (#6326)
* fix: override logical table's partition column with physical table's

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* naming

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-17 08:00:54 +00:00
localhost
53c4fd478e chore: add skip error for pipeline skip error log (#6318)
* chore: wip

Signed-off-by: paomian <xpaomian@gmail.com>

* chore: add skip error for pipeline skip error log

Signed-off-by: paomian <xpaomian@gmail.com>

* chore: add test and check timestamp must be non null

Signed-off-by: paomian <xpaomian@gmail.com>

* chore: fix test

* chore: fix by pr comment

* fix: typo

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>

---------

Signed-off-by: paomian <xpaomian@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
2025-06-17 06:44:11 +00:00
Lei, HUANG
ecbbd2fbdb feat: handle Ctrl-C command in MySQL client (#6320)
* feat/answer-ctrl-c-in-mysql:
 ## Implement Connection ID-based Query Killing

 ### Key Changes:
 - **Connection ID Management:**
   - Added `connection_id` to `Session` and `QueryContext` in `src/session/src/lib.rs` and `src/session/src/context.rs`.
   - Updated `MysqlInstanceShim` and `MysqlServer` to handle `connection_id` in `src/servers/src/mysql/handler.rs` and `src/servers/src/mysql/server.rs`.

 - **KILL Statement Enhancements:**
   - Introduced `Kill` enum to handle both `ProcessId` and `ConnectionId` in `src/sql/src/statements/kill.rs`.
   - Updated `ParserContext` to parse `KILL QUERY <connection_id>` in `src/sql/src/parser.rs`.
   - Modified `StatementExecutor` to support killing queries by `connection_id` in `src/operator/src/statement/kill.rs`.

 - **Process Management:**
   - Refactored `ProcessManager` to include `connection_id` in `src/catalog/src/process_manager.rs`.
   - Added `kill_local_process` method for local query termination.

 - **Testing:**
   - Added tests for `KILL` statement parsing and execution in `src/sql/src/parser.rs`.

 ### Affected Files:
 - `Cargo.lock`, `Cargo.toml`
 - `src/catalog/src/process_manager.rs`
 - `src/frontend/src/instance.rs`
 - `src/frontend/src/stream_wrapper.rs`
 - `src/operator/src/statement.rs`
 - `src/operator/src/statement/kill.rs`
 - `src/servers/src/mysql/federated.rs`
 - `src/servers/src/mysql/handler.rs`
 - `src/servers/src/mysql/server.rs`
 - `src/servers/src/postgres.rs`
 - `src/session/src/context.rs`
 - `src/session/src/lib.rs`
 - `src/sql/src/parser.rs`
 - `src/sql/src/statements.rs`
 - `src/sql/src/statements/kill.rs`
 - `src/sql/src/statements/statement.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

 Conflicts:
	Cargo.lock
	Cargo.toml

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/answer-ctrl-c-in-mysql:
 ### Enhance Process Management and Execution

 - **`process_manager.rs`**: Added a new method `find_processes_by_connection_id` to filter processes by connection ID, improving process management capabilities.
 - **`kill.rs`**: Refactored the process killing logic to utilize the new `find_processes_by_connection_id` method, streamlining the execution flow and reducing redundant checks.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/answer-ctrl-c-in-mysql:
 ## Commit Message

 ### Update Process ID Type and Refactor Code

 - **Change Process ID Type**: Updated the process ID type from `u64` to `u32` across multiple files to optimize memory usage. Affected files include `process_manager.rs`, `lib.rs`, `database.rs`, `instance.rs`, `server.rs`, `stream_wrapper.rs`, `kill.rs`, `federated.rs`, `handler.rs`, `server.rs`,
 `postgres.rs`, `mysql_server_test.rs`, `context.rs`, `lib.rs`, and `test_util.rs`.

 - **Remove Connection ID**: Removed the `connection_id` field and related logic from `process_manager.rs`, `lib.rs`, `instance.rs`, `server.rs`, `stream_wrapper.rs`, `kill.rs`, `federated.rs`, `handler.rs`, `server.rs`, `postgres.rs`, `mysql_server_test.rs`, `context.rs`, `lib.rs`, and `test_util.rs` to
 simplify the codebase.

 - **Refactor Process Management**: Refactored process management logic to improve clarity and maintainability in `process_manager.rs`, `kill.rs`, and `handler.rs`.

 - **Enhance MySQL Server Handling**: Improved MySQL server handling by integrating process management in `server.rs` and `mysql_server_test.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/answer-ctrl-c-in-mysql:
 ### Add Process Manager to Postgres Server

 - **`src/frontend/src/server.rs`**: Updated server initialization to include `process_manager`.
 - **`src/servers/src/postgres.rs`**: Modified `MakePostgresServerHandler` to accept `process_id` for session creation.
 - **`src/servers/src/postgres/server.rs`**: Integrated `process_manager` into `PostgresServer` for generating `process_id` during connection handling.
 - **`src/servers/tests/postgres/mod.rs`** and **`tests-integration/src/test_util.rs`**: Adjusted test server setup to accommodate optional `process_manager`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/answer-ctrl-c-in-mysql:
 Update `greptime-proto` Dependency

 - Updated the `greptime-proto` dependency to a new revision in both `Cargo.lock` and `Cargo.toml`.
   - `Cargo.lock`: Changed source revision from `d75a56e05a87594fe31ad5c48525e9b2124149ba` to `fdcbe5f1c7c467634c90a1fd1a00a784b92a4e80`.
   - `Cargo.toml`: Updated the `greptime-proto` git revision to match the new commit.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-17 06:36:23 +00:00
fys
3e3a12385c refactor: make flownode gRPC services able to be added dynamically (#6323)
chore: enhance the flownode gRPC servers extension
2025-06-17 06:27:41 +00:00
shuiyisong
079daf5db9 feat: support special labels parsing in prom remote write (#6302)
* feat: support special labels parsing in prom remote write

* chore: change __schema__ to __database__

* fix: test

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* fix: remove the missing type alias

Signed-off-by: shuiyisong <xixing.sys@gmail.com>

* chore: update cr issue

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Signed-off-by: shuiyisong <xixing.sys@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2025-06-17 03:20:33 +00:00
liyang
56b9ab5279 ci: add pr label workflow (#6316)
* ci: add pr label workflow

Signed-off-by: liyang <daviderli614@gmail.com>

* move permissions to jobs

Signed-off-by: liyang <daviderli614@gmail.com>

* add checkout step

Signed-off-by: liyang <daviderli614@gmail.com>

* add job permissions

Signed-off-by: liyang <daviderli614@gmail.com>

* custom sizes

Signed-off-by: liyang <daviderli614@gmail.com>

* Update .github/workflows/pr-labeling.yaml

Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>

---------

Signed-off-by: liyang <daviderli614@gmail.com>
Co-authored-by: shuiyisong <113876041+shuiyisong@users.noreply.github.com>
2025-06-16 17:26:16 +00:00
Ruihang Xia
be4e0d589e feat: support arbitrary constant expression in PromQL function (#6315)
* refactor holt_winters, predict_linear, quantile, round

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* some sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* support some functions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* make all sqlness cases pass

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix other sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* some refactor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-16 15:12:27 +00:00
Yingwen
2a3445c72c fix: ignore missing columns and tables in PromQL (#6285)
* fix: handle table/column not found in or

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: update result

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: drop table after test

Signed-off-by: evenyag <realevenyag@gmail.com>

* test: fix test cases

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: do not return table not found error in series_query

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-16 12:15:38 +00:00
Lei, HUANG
9d997d593c feat: bulk support flow batch (#6291)
* feat/bulk-support-flow-batch:
 ### Refactor and Enhance Timestamp Handling in gRPC and Bulk Insert

 - **Refactor Table Handling**:
   - Updated `put_record_batch` method to use `TableRef` instead of `TableId` in `grpc.rs`, `greptime_handler.rs`, and `grpc.rs`.
   - Modified `handle_bulk_insert` to accept `TableRef` and extract `TableId` internally in `bulk_insert.rs`.

 - **Enhance Timestamp Processing**:
   - Added `compute_timestamp_range` function to calculate timestamp range in `bulk_insert.rs`.
   - Introduced error handling for invalid time index types in `error.rs`.

 - **Test Adjustments**:
   - Updated `DummyInstance` implementation in `tests/mod.rs` to align with new method signatures.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 ### Add Dirty Window Handling in Flow Module

 - **Updated `greptime-proto` Dependency**: Updated the `greptime-proto` dependency to a new revision in `Cargo.lock` and `Cargo.toml`.
 - **Flow Module Enhancements**:
   - Added `DirtyWindowRequest` handling in `flow.rs`, `node_manager.rs`, `test_util.rs`, `flownode_impl.rs`, and `server.rs`.
   - Implemented `handle_mark_window_dirty` function to manage dirty time windows.
 - **Bulk Insert Enhancements**:
   - Modified `bulk_insert.rs` to notify flownodes about dirty time windows using `update_flow_dirty_window`.
 - **Removed Unused Imports**: Cleaned up unused imports in `greptime_handler.rs`, `grpc.rs`, and `mod.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat: mark dirty time window

* feat: metrics

* metrics: more useful metrics batching mode

* feat/bulk-support-flow-batch:
 **Refactor Timestamp Handling and Update Dependencies**

 - **Dependency Update**: Updated `greptime-proto` dependency in `Cargo.lock` and `Cargo.toml` to a new revision.
 - **Batching Engine Refactor**: Modified `src/flow/src/batching_mode/engine.rs` to replace `dirty_time_ranges` with `timestamps` for improved timestamp handling.
 - **Bulk Insert Refactor**: Updated `src/operator/src/bulk_insert.rs` to refactor timestamp extraction and handling. Replaced `compute_timestamp_range` with `extract_timestamps` and adjusted related logic to handle timestamps directly.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 ### Update Metrics in Batching Mode Engine

 - **Modified Metrics**: Replaced `METRIC_FLOW_BATCHING_ENGINE_BULK_MARK_TIME_WINDOW_RANGE` with `METRIC_FLOW_BATCHING_ENGINE_BULK_MARK_TIME_WINDOW` to track the count of time windows instead of their range.
   - Files affected: `engine.rs`, `metrics.rs`
 - **New Method**: Added `len()` method to `DirtyTimeWindows` to return the number of dirty windows.
   - File affected: `state.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 **Refactor and Enhance Timestamp Handling in `bulk_insert.rs`**

 - **Refactored Timestamp Extraction**: Moved timestamp extraction logic to a new method `maybe_update_flow_dirty_window` to improve code readability and maintainability.
 - **Enhanced Flow Update Logic**: Updated the flow dirty window update mechanism to conditionally notify flownodes only if they are configured, using `table_info` and `record_batch`.
 - **Imports Adjusted**: Updated imports to reflect changes in table metadata handling, replacing `TableId` with `TableInfoRef`.

 Files affected:
 - `src/operator/src/bulk_insert.rs`

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 ## Update `handle_mark_window_dirty` Method in `flownode_impl.rs`

 - Replaced `unimplemented!()` with `unreachable!()` in the `handle_mark_window_dirty` method for both `FlowDualEngine` and `StreamingEngine` implementations in `flownode_impl.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/bulk-support-flow-batch:
 Update `greptime-proto` Dependency

 - Updated the `greptime-proto` dependency to a new revision in both `Cargo.lock` and `Cargo.toml`.
   - `Cargo.lock`: Changed the source revision from `f0913f179ee1d2ce428f8b85a9ea12b5f69ad636` to `17971523673f4fbc982510d3c9d6647ff642e16f`.
   - `Cargo.toml`: Updated the `greptime-proto` git revision to `17971523673f4fbc982510d3c9d6647ff642e16f`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: discord9 <discord9@163.com>
2025-06-16 08:19:14 +00:00
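A sketch of the dirty-window bookkeeping named in the commit (`DirtyTimeWindows` with a `len()` method); the window alignment scheme and i64 millisecond timestamps are assumptions:

```rust
use std::collections::BTreeSet;

/// Assumed shape of the dirty-window set a flownode is notified about.
#[derive(Default)]
struct DirtyTimeWindows {
    windows: BTreeSet<i64>, // window start timestamps
}

impl DirtyTimeWindows {
    /// Mark the window containing `ts` as dirty.
    fn mark(&mut self, ts: i64, window_ms: i64) {
        // Align the timestamp down to its window start.
        self.windows.insert(ts - ts.rem_euclid(window_ms));
    }

    /// Number of dirty windows, as added in the commit.
    fn len(&self) -> usize {
        self.windows.len()
    }
}
```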
Weny Xu
10bf9b11f6 fix: handle corner case in catchup where compacted entry id exceeds region last entry id (#6312)
* fix(mito2): handle corner case in catchup where compacted entry id exceeds region last entry id

Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-16 06:36:31 +00:00
localhost
f4f8d65a39 fix: event api content type only checks type and subtype (#6317)
* fix: event api content type only checks type and subtype

Signed-off-by: paomian <xpaomian@gmail.com>

* chore: make clippy happy

Signed-off-by: paomian <xpaomian@gmail.com>

---------

Signed-off-by: paomian <xpaomian@gmail.com>
2025-06-13 18:50:05 +00:00
Lei, HUANG
b31990e881 chore: add connection info to QueryContext (#6319)
chore/add-conn-info-to-query-ctx:
 ### Add Connection Information to Query Context

 - **`src/frontend/src/instance.rs`**: Updated to use `query_ctx.conn_info().to_string()` for connection information instead of a placeholder string.
 - **`src/session/src/context.rs`**: Introduced `conn_info` field in `QueryContext` and added a method `conn_info()` to retrieve it. Updated `QueryContextBuilder` to handle `conn_info`.
 - **`src/session/src/lib.rs`**: Modified `Session` to include `conn_info` in the query context building process.

 These changes enhance the query context by incorporating connection information, allowing for more detailed session management.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-13 18:42:13 +00:00
Lei, HUANG
6da633e70d feat: support killing process (#6309)
* feat/kill-process:
 ### Add Cancellation Support and Enhance Process Management

 - **Cancellation Handle Implementation**: Introduced `CancellationHandle` in `cancellation_handle.rs` to facilitate cancellation of futures and streams.
 - **Process Management Enhancements**:
   - Updated `ProcessManager` in `process_manager.rs` to support cancellable processes using `CancellableProcess`.
   - Added `kill_process` method for terminating processes.
 - **Stream Wrapper Update**:
   - Replaced `StreamWrapper` with `CancellableStreamWrapper` in `stream_wrapper.rs` and `instance.rs` to handle stream cancellation.
 - **Error Handling**:
   - Added `StreamCancelled` error variant in `error.rs` to handle stream cancellation scenarios.
 - **gRPC Handler Update**:
   - Added `kill_process` gRPC method in `frontend_grpc_handler.rs` to allow external process termination.
 - **Dependency Updates**:
   - Updated `Cargo.lock` and `Cargo.toml` to include `common-base` and `tokio-util`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 **Enhancements and Bug Fixes**

 - **Dependency Update**: Updated `greptime-proto` dependency in `Cargo.lock` and `Cargo.toml` to a new revision.
 - **Error Handling Improvements**:
   - Modified error variants in `src/catalog/src/error.rs` and `src/common/frontend/src/error.rs` to improve error messages and handling.
   - Added `FrontendNotFound` error variant for better error specificity.
 - **Process Management Enhancements**:
   - Updated `ProcessManager` in `src/catalog/src/process_manager.rs` to include `kill_process` functionality with server address validation.
   - Enhanced `FrontendClient` trait in `src/common/frontend/src/selector.rs` to support `kill_process` requests.
 - **gRPC Handler Update**:
   - Refactored `FrontendGrpcHandler` in `src/servers/src/grpc/frontend_grpc_handler.rs` to handle `kill_process` requests asynchronously and return process status.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ### Add Kill Process Functionality

 - **`Cargo.lock`, `Cargo.toml`**: Added `common-frontend` as a dependency.
 - **`server.rs`, `builder.rs`, `instance.rs`**: Updated `FrontendInvoker` and `FrontendBuilder` to support process management.
 - **`error.rs`**: Introduced `InvalidProcessId` error for handling invalid process IDs.
 - **`statement.rs`, `kill.rs`**: Implemented `execute_kill` method in `StatementExecutor` to handle the `KILL` statement.
 - **`parser.rs`, `statement.rs`**: Updated SQL parser to recognize and parse the `KILL` statement.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ## Add Cancellation Support to Query Execution

 - **`process_manager.rs`**: Updated `CancellationHandle` initialization to use `default()` method.
 - **`cancellation_handle.rs`**: Implemented `Debug` trait for `CancellationHandle` and added `Cancellation` and `CancellableFuture` structs to support cancellable futures.
 - **`error.rs`**: Introduced `Cancelled` error variant to handle query cancellations.
 - **`instance.rs`**: Integrated `CancellableFuture` to manage query execution with cancellation support.
 - **`stream_wrapper.rs`**: Modified `CancellableStreamWrapper` to use the new `waker()` method for cancellation handling.
 - **`statement.rs`**: Added `#[allow(clippy::too_many_arguments)]` to `StatementExecutor::new` to suppress clippy warnings.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 - **Add `MetaClientMissing` Error**: Introduced a new error variant `MetaClientMissing` in `error.rs` to handle missing meta client scenarios.
 - **Refactor Cancellation Handling**: Merged `cancellation_handle.rs` into `cancellation.rs` and updated related logic in `process_manager.rs`, `instance.rs`, and `stream_wrapper.rs`.
 - **Enhance Process Management**: Improved process management logic in `process_manager.rs` to handle process cancellation more effectively.
 - **Update Tests**: Added and updated tests in `cancellation.rs` and `stream_wrapper.rs` to cover new cancellation logic and error handling.
 - **Cargo.toml Update**: Adjusted workspace settings in `Cargo.toml` for `common-frontend`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 - **Add Tests for Process Management**: Introduced multiple async tests in `process_manager.rs` to verify query registration, deregistration, cancellation, and process killing functionalities.
 - **Update Error Message in SQL Parser**: Modified the expected error message in `parser.rs` to clarify the expected token as a "process id string literal".

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ### Add Process Count Metrics to Catalog

 - **`metrics.rs`**: Introduced a new metric `PROCESS_LIST_COUNT` to track the count of running processes per catalog using `IntGaugeVec`.
 - **`process_manager.rs`**: Updated `CancellableProcess` to increment and decrement `PROCESS_LIST_COUNT` upon creation and destruction, respectively. Added a `Drop` implementation for `CancellableProcess` to handle metric updates.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 ### Fix process removal logic in `process_manager.rs`

 - Corrected the condition for removing an entry from the catalog in `ProcessManager` by using `o.get()` instead of `o.get_mut()`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 - **Error Handling Improvements**:
   - Updated status codes for `Error::FrontendNotFound` and `Error::MetaClientMissing` to `StatusCode::Unexpected` in `src/catalog/src/error.rs`.
   - Changed `InvokeFrontend` error display message and status code in `src/common/frontend/src/error.rs`.
   - Added `ProcessManagerMissing` error in `src/operator/src/error.rs` and updated its handling in `src/operator/src/statement/kill.rs`.

 - **Process Management Enhancements**:
   - Added documentation for `ProcessManager` and `register_query` in `src/catalog/src/process_manager.rs`.
   - Modified `kill_process` response handling in `src/servers/src/grpc/frontend_grpc_handler.rs`.

 - **Cancellation Logic Update**:
   - Improved cancellation logic in `src/common/base/src/cancellation.rs` to use `compare_exchange` for atomic operations (see the sketch after this message).

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
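
 A minimal sketch of the `compare_exchange`-based logic (names assumed): only the first caller observes the false-to-true transition, so the side effects of cancellation run exactly once even when several `KILL` requests race.

use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Debug, Default)]
pub struct CancellationHandle {
    cancelled: AtomicBool,
}

impl CancellationHandle {
    /// Returns true only for the first caller that flips the flag, so
    /// the side effects of cancellation (waking the stream, bumping a
    /// kill counter) run exactly once under concurrent KILLs.
    pub fn cancel(&self) -> bool {
        self.cancelled
            .compare_exchange(false, true, Ordering::AcqRel, Ordering::Acquire)
            .is_ok()
    }

    pub fn is_cancelled(&self) -> bool {
        self.cancelled.load(Ordering::Acquire)
    }
}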

* feat/kill-process:
 ### Add Process Kill Count Metric and Refactor Cancellation Handle

 - **Metrics Update**: Added a new metric `PROCESS_KILL_COUNT` in `metrics.rs` to track the count of completed kill process requests per catalog.
 - **Refactor Cancellation Handle**: Renamed `cancellation_handler` to `cancellation_handle` across multiple files for consistency:
   - `process_manager.rs`
   - `instance.rs`
   - `stream_wrapper.rs`
 - **Process Management**: Updated process management logic in `process_manager.rs` to increment the `PROCESS_KILL_COUNT` metric upon successful process termination.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 Update metric description in `metrics.rs`

 - Changed the description of `PROCESS_KILL_COUNT` to reflect the count of killed processes instead of running processes in `metrics.rs`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* feat/kill-process:
 Update `greptime-proto` Dependency and Fix Response Field

 - **Updated Dependency**: Changed the `greptime-proto` Git revision in `Cargo.lock` and `Cargo.toml` to `f0913f1`.
 - **Code Fix**: Modified `frontend_grpc_handler.rs` to correct the response field from `found` to `success` in `KillProcessResponse`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-13 13:30:25 +00:00
zyy17
9633e794c7 fix: always use linux path style in windows platform unit tests (#6314)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2025-06-13 07:15:53 +00:00
Yingwen
eaf1e1198f refactor: Extract mito codec part into a new crate (#6307)
* chore: add a new crate mito-codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* feat: port necessary mods for primary key codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: use codec utils in mito-codec

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove unused mods

Signed-off-by: evenyag <realevenyag@gmail.com>

* style: fix clippy

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove Partition::is_partition_column()

Signed-off-by: evenyag <realevenyag@gmail.com>

* refactor: remove duplicated test utils

Signed-off-by: evenyag <realevenyag@gmail.com>

* chore: remove unused comment

Signed-off-by: evenyag <realevenyag@gmail.com>

* fix: fix is_partition_column check

Signed-off-by: evenyag <realevenyag@gmail.com>

---------

Signed-off-by: evenyag <realevenyag@gmail.com>
2025-06-13 07:14:29 +00:00
ZonaHe
505bf25505 feat: update dashboard to v0.9.3 (#6311)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2025-06-13 07:13:12 +00:00
Ning Sun
f1b29ece3c feat: process id for session, query context and postgres (#6301)
* feat: process id for session, query context and postgres

Signed-off-by: Ning Sun <sunning@greptime.com>

* feat: add sql functions to retrieve connection/process id

Signed-off-by: Ning Sun <sunning@greptime.com>

---------

Signed-off-by: Ning Sun <sunning@greptime.com>
2025-06-12 16:53:57 +00:00
discord9
74df12e8c0 fix: check for zero parallelism (#6310)
* fix: check for zero parallelism

Signed-off-by: discord9 <discord9@163.com>

* chore: silently use default value

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-06-12 15:58:59 +00:00
discord9
be6a5d2da8 feat: parallelism hint in grpc (#6306)
* feat: parallelism hint in grpc

Signed-off-by: discord9 <discord9@163.com>

* chore: per review

Signed-off-by: discord9 <discord9@163.com>

* chore: comment

Signed-off-by: discord9 <discord9@163.com>

* chore: docs

Signed-off-by: discord9 <discord9@163.com>

---------

Signed-off-by: discord9 <discord9@163.com>
2025-06-12 10:12:45 +00:00
Ruihang Xia
7468a8ab2a feat: organize EXPLAIN ANALYZE VERBOSE's output in JSON format (#6308)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-12 09:55:53 +00:00
Lei, HUANG
5bb0466ff2 feat: introduce file group in compaction (#6261)
* fix/file-group-in-compaction:
 ### Enhance Compaction Logic with File Grouping

 - **`run.rs`**: Introduced `FileGroup` struct to manage groups of `FileHandle` objects, allowing for more efficient compaction operations. Updated `Ranged` and `Item` trait implementations to work with `FileGroup` (a rough sketch follows this message).
 - **`test_util.rs`**: Added `new_file_handle_with_sequence` function to support file handles with sequence numbers, enhancing test utilities.
 - **`twcs.rs`**: Modified `TwcsPicker` to utilize `FileGroup` for managing files within windows, improving compaction logic. Updated `Window` struct to use `HashMap` for storing `FileGroup` objects.
 - **`version_util.rs`**: Updated version control utilities to handle sequence numbers in file metadata, aligning with new compaction logic.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>
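
 A rough sketch of the `FileGroup` shape (field names and the time-range representation are assumptions): files emitted by the same compaction task share a sequence, and the group is ranged over the union of its members' time ranges so the picker can treat it as a single item.

// Illustrative only; the real FileHandle carries far more state.
#[derive(Clone)]
struct FileHandle {
    sequence: Option<u64>,  // compaction task that produced the file
    time_range: (i64, i64), // (min_ts, max_ts) covered by the file
}

struct FileGroup {
    files: Vec<FileHandle>,
}

impl FileGroup {
    /// The group is compacted or skipped as a unit, so its range is
    /// the union of its members' ranges.
    fn range(&self) -> Option<(i64, i64)> {
        let min = self.files.iter().map(|f| f.time_range.0).min()?;
        let max = self.files.iter().map(|f| f.time_range.1).max()?;
        Some((min, max))
    }
}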

* fix/file-group-in-compaction:
 ### Add Test for File Group Assignment in TWCS

 - **Enhancements in `twcs.rs`:**
   - Added a new test `test_assign_file_groups_to_windows` to verify the correct assignment of file groups to windows.
   - Enhanced `test_assign_compacting_to_windows` with a new case to ensure files with overlapping time ranges and the same sequence are treated as one `FileGroup`.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* fix/file-group-in-compaction:
 **Enhance Compaction Task Documentation and Initialization**

 - **`run.rs`**: Added documentation for `FileGroup` to clarify its role in representing a group of files created by the same compaction task.
 - **`twcs.rs`**: Introduced comments in the `Window` struct to explain the mapping of file sequences to file groups, indicating files created from the same compaction task. Simplified the initialization of the `files` hashmap using `HashMap::from`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

---------

Signed-off-by: Lei, HUANG <lhuang@greptime.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
2025-06-12 09:33:40 +00:00
Ruihang Xia
f6db419afd feat: support using expressions as literal in PromQL (#6297)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-12 08:18:10 +00:00
Lei, HUANG
05b708ed2e feat: implement process manager and information_schema.process_list (#5865)
* ### Add Process List Management

 - **Error Handling Enhancements**:

* refactor: Update test IP addresses to include ports in ProcessKey

* feat/show-process-list:
 Refactor Process Management in Meta Module

 - Introduced `ProcessManager` for handling process registration and deregistration.
 - Added methods for managing and querying process states, including `register_query`, `deregister_query`, and `list_all_processes`.
 - Removed redundant process management code from the query module.
 - Updated error handling to reflect changes in process management.
 - Enhanced test coverage for process management functionalities.

* chore: rebase main

* add information schema process list table

* integrate process list table to system catalog

* build ProcessManager on frontend and standalone mode

* feat/show-process-list:
 **Add Process Management Enhancements**

 - **`manager.rs`**: Introduced `process_manager` to `SystemCatalog` and `KvBackendCatalogManager` for improved process handling.
 - **`information_schema.rs`**: Updated table insertion logic to conditionally include `PROCESS_LIST`.
 - **`frontend.rs`, `standalone.rs`**: Enhanced `StartCommand` to clone `process_manager` for better resource management.
 - **`instance.rs`, `builder.rs`**: Integrated `ProcessManager` into `Instance` and `FrontendBuilder` to manage queries.

* feat/show-process-list:
 ### Add Process Listing and Error Handling Enhancements

 - **Error Handling**: Introduced a new error variant `ListProcess` in `error.rs` to handle failures when listing running processes.
 - **Process List Implementation**: Enhanced `InformationSchemaProcessList` in `process_list.rs` to track running queries, including defining column names and implementing the `make_process_list` function to build the process list.
 - **Frontend Builder**: Added a `#[allow(clippy::too_many_arguments)]` attribute in `builder.rs` to suppress Clippy warnings for the `FrontendBuilder::new` function.

 These changes improve error handling and process tracking capabilities within the system.

* feat/show-process-list:
 Refactor imports in `process_list.rs`

 - Updated import paths for `Predicates` and `InformationTable` in `process_list.rs` to align with the new module structure.

* feat/show-process-list:
 Refactor process list generation in `process_list.rs`

 - Simplified the process list generation by removing intermediate row storage and directly building vectors.
 - Updated `process_to_row` function to use a mutable vector for current row data, improving memory efficiency.
 - Removed `rows_to_record_batch` function, integrating its logic directly into the main loop for streamlined processing.

* wip: move ProcessManager to catalog crate

* feat/show-process-list:
 - **Refactor Row Construction**: Updated row construction in multiple files to use references for `Value` objects, improving memory efficiency. Affected files include:
   - `cluster_info.rs`
   - `columns.rs`
   - `flows.rs`
   - `key_column_usage.rs`
   - `partitions.rs`
   - `procedure_info.rs`
   - `process_list.rs`
   - `region_peers.rs`
   - `region_statistics.rs`
   - `schemata.rs`
   - `table_constraints.rs`
   - `tables.rs`
   - `views.rs`
   - `pg_class.rs`
   - `pg_database.rs`
   - `pg_namespace.rs`
 - **Remove Unused Code**: Deleted unused functions and error variants related to process management in `process_list.rs` and `error.rs`.
 - **Predicate Evaluation Update**: Modified predicate evaluation functions in `predicate.rs` to work with references, enhancing performance.

* feat/show-process-list:
 ### Implement Process Management Enhancements

 - **Error Handling Enhancements**:
   - Added new error variants `BumpSequence`, `StartReportTask`, `ReportProcess`, and `BuildProcessManager` in `error.rs` to improve error handling for process management tasks.
   - Updated `ErrorExt` implementations to handle new error types.

 - **Process Manager Improvements**:
   - Introduced `ProcessManager` enhancements in `process_manager.rs` to manage process states using `ProcessWithState` and `ProcessState` enums.
   - Implemented periodic task `ReportTask` to report running queries to the KV backend.
   - Modified `register_query` and `deregister_query` methods to use the new state management system.

 - **Testing and Validation**:
   - Updated tests in `process_manager.rs` to validate new process management logic.
   - Replaced `dump` method with `list_all_processes` for listing processes.

 - **Integration with Frontend and Standalone**:
   - Updated `frontend.rs` and `standalone.rs` to handle `ProcessManager` initialization errors using `BuildProcessManager` error variant.

 - **Schema Adjustments**:
   - Modified `process_list.rs` in `system_schema/information_schema` to use the updated process listing method.

 - **Key-Value Conversion**:
   - Added `TryFrom` implementation for converting `Process` to `KeyValue` in `process_list.rs`.

* chore: remove register

* fix: sqlness tests

* merge main

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 - **Update `greptime-proto` Dependency**: Updated the `greptime-proto` dependency in `Cargo.lock` and `Cargo.toml` to a new revision.
 - **Refactor `ProcessManager`**: Simplified the `ProcessManager` implementation by removing the use of `KvBackendRef` and `SequenceRef`, and replaced them with `AtomicU64` and `RwLock` for managing process IDs and catalogs in `process_manager.rs` (sketched after this message).
 - **Remove Process List Metadata**: Deleted the `process_list.rs` file and removed related metadata key definitions in `key.rs`.
 - **Update Process List Logic**: Modified the process list logic in `process_list.rs` to use the new `ProcessManager` structure.
 - **Adjust Frontend and Standalone Start Commands**: Updated `frontend.rs` and `standalone.rs` to use the new `ProcessManager` constructor.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>
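
 A minimal sketch of that simplified shape (all names assumed): process ids come from a process-local `AtomicU64` and per-catalog process maps sit behind an `RwLock`, with no KV backend involved.

use std::collections::HashMap;
use std::sync::RwLock;
use std::sync::atomic::{AtomicU64, Ordering};

#[derive(Default)]
pub struct ProcessManager {
    next_id: AtomicU64,
    // catalog name -> (process id -> query string), simplified.
    catalogs: RwLock<HashMap<String, HashMap<u64, String>>>,
}

impl ProcessManager {
    pub fn register_query(&self, catalog: &str, query: String) -> u64 {
        let id = self.next_id.fetch_add(1, Ordering::Relaxed);
        let mut catalogs = self.catalogs.write().unwrap();
        catalogs.entry(catalog.to_string()).or_default().insert(id, query);
        id
    }

    pub fn deregister_query(&self, catalog: &str, id: u64) {
        let mut catalogs = self.catalogs.write().unwrap();
        let now_empty = match catalogs.get_mut(catalog) {
            Some(processes) => {
                processes.remove(&id);
                processes.is_empty()
            }
            None => false,
        };
        // Drop the whole catalog entry once its last process is gone.
        if now_empty {
            catalogs.remove(catalog);
        }
    }
}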

* feat/show-process-list:
 - **Update `greptime-proto` Dependency**: Updated the `greptime-proto` dependency version in `Cargo.lock` and `Cargo.toml` to a new commit hash.
 - **Refactor Error Handling**: Removed unused error variants and added a new `ParseProcessId` error in `src/catalog/src/error.rs`.
 - **Enhance Process Management**: Introduced `DisplayProcessId` struct for better process ID representation and parsing in `src/catalog/src/process_manager.rs`.
 - **Revise Process List Schema**: Updated the schema and logic for process listing in `src/catalog/src/system_schema/information_schema/process_list.rs` to include new fields like `client` and `frontend`.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 ### Commit Message

 **Enhancements and Refactoring**

 - **Process Management:**
   - Refactored `ProcessManager` to list local processes with an optional catalog filter in `process_manager.rs`.
   - Updated related tests in `process_manager.rs` and `process_list.rs`.

 - **Client Enhancements:**
   - Added `frontend_client` method in `client.rs` to support gRPC communication with the frontend.

 - **Error Handling:**
   - Extended error handling in `error.rs` to include gRPC and Meta errors.

 - **Frontend Module:**
   - Introduced `selector.rs` for frontend client selection and process listing.
   - Updated `Cargo.toml` to include new dependencies and dev-dependencies.

 - **gRPC Server:**
   - Integrated `FrontendServer` in `builder.rs` for enhanced gRPC server capabilities.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 ### Commit Message

 **Refactor Process Management and Frontend Integration**

 - **Add `common-frontend` Dependency**:
   - Updated `Cargo.lock`, `Cargo.toml` files to include `common-frontend` as a dependency.

 - **Refactor Process Management**:
   - Moved `ProcessManager` trait and `DisplayProcessId` struct to `common-frontend`.
   - Updated `process_manager.rs` to use `MetaProcessManager` and `ProcessManagerRef`.
   - Removed `ParseProcessId` error variant from `error.rs` in `catalog` and `frontend`.

 - **Frontend gRPC Service**:
   - Added `frontend_grpc_handler.rs` to handle gRPC requests for frontend processes.
   - Updated `grpc.rs` and `builder.rs` to integrate `FrontendGrpcHandler`.

 - **Update Tests**:
   - Modified tests in `process_manager.rs` to align with new `ProcessManager` implementation.

 - **Remove Unused Code**:
   - Removed `DisplayProcessId` and related parsing logic from `process_manager.rs`.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 ### Add `MetaClientRef` to `MetaProcessManager` and Update Instantiation

 - **Files Modified**:
   - `src/catalog/src/process_manager.rs`
   - `src/cmd/src/frontend.rs`
   - `src/cmd/src/standalone.rs`

 - **Key Changes**:
   - Added `MetaClientRef` as an optional parameter to the `MetaProcessManager::new` method.
   - Updated instantiation of `MetaProcessManager` to include `MetaClientRef` where applicable.

 ### Update `ProcessManagerRef` Usage

 - **Files Modified**:
   - `src/catalog/src/kvbackend/manager.rs`
   - `src/catalog/src/system_schema/information_schema.rs`
   - `src/catalog/src/system_schema/information_schema/process_list.rs`
   - `src/frontend/src/instance.rs`
   - `src/frontend/src/instance/builder.rs`

 - **Key Changes**:
   - Ensured consistent usage of `ProcessManagerRef` across various modules.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 ## Refactor Process Management

 - **Unified Process Manager**:
   - Replaced `MetaProcessManager` with `ProcessManager` across the codebase.
   - Updated `ProcessManager` to use `Arc` for shared references and introduced a `Ticket` struct for query registration and deregistration (sketched after this entry).
   - Affected files: `manager.rs`, `process_manager.rs`, `frontend.rs`, `standalone.rs`, `frontend_grpc_handler.rs`, `instance.rs`, `builder.rs`, `cluster.rs`, `standalone.rs`.

 - **Stream Wrapper Implementation**:
   - Added `StreamWrapper` to handle record batch streams with process management.
   - Affected file: `stream_wrapper.rs`.

 - **Test Adjustments**:
   - Updated tests to align with the new `ProcessManager` implementation.
   - Affected file: `tests-integration/src/cluster.rs`, `tests-integration/src/standalone.rs`.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>
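
 The `Ticket` plausibly works as an RAII guard: a registered query holds it for its lifetime and deregistration happens in `Drop`, so even a stream dropped mid-flight cleans itself up. A sketch reusing the simplified `ProcessManager` from the earlier sketch (all names assumed):

use std::sync::Arc;

pub struct Ticket {
    catalog: String,
    id: u64,
    manager: Arc<ProcessManager>,
}

impl Ticket {
    pub fn register(manager: &Arc<ProcessManager>, catalog: &str, query: String) -> Ticket {
        let id = manager.register_query(catalog, query);
        Ticket {
            catalog: catalog.to_string(),
            id,
            manager: Arc::clone(manager),
        }
    }
}

impl Drop for Ticket {
    fn drop(&mut self) {
        // Runs however the query ends: success, error, or an abandoned
        // stream, keeping the process list accurate.
        self.manager.deregister_query(&self.catalog, self.id);
    }
}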

* feat/show-process-list:
 ### Add Error Handling and Process Management

 - **Error Handling Enhancements**:
   - Added new error variants `ListProcess` and `CreateChannel` in `error.rs` to handle specific gRPC service invocation failures.
   - Updated error handling in `selector.rs` to use the new error variants for better context and error propagation.

 - **Process Management Integration**:
   - Introduced `process_manager` method in `instance.rs` to access the process manager.
   - Integrated `FrontendGrpcHandler` with process management in `server.rs` to handle gRPC requests related to process management.

 - **gRPC Server Enhancements**:
   - Made `frontend_grpc_handler` public in `grpc.rs` to allow external access and integration with other modules.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 Update `greptime-proto` dependency and enhance process management

 - **Dependency Update**: Updated `greptime-proto` in `Cargo.lock` and `Cargo.toml` to a new revision.
 - **Process Management**:
   - Modified `process_manager.rs` to include catalog filtering in `list_process`.
   - Updated `frontend_grpc_handler.rs` to handle catalog filtering in `list_process` requests.
 - **System Schema**: Added a TODO comment in `process_list.rs` for future user catalog filtering implementation.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 - **Update Workspace Dependencies**:
   - Modified `Cargo.toml` files in `src/catalog`, `src/common/frontend`, and `src/servers` to adjust workspace dependencies.

 - **Refactor `ProcessManager` Logic**:
   - Updated `process_manager.rs` to simplify the condition in the `select` method.

 - **Remove Unused Error Variants**:
   - Deleted `BuildProcessManager` error variant from `error.rs` in `src/cmd`.
   - Removed `InvalidProcessKey` error variant from `error.rs` in `src/common/meta`.

 - **Add License Header**:
   - Added Apache License header to `stream_wrapper.rs` in `src/frontend`.

 - **Update Test Results**:
   - Adjusted expected results in `information_schema.result` to reflect changes in the schema.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 ### Add Error Handling for Process Listing

 - **`src/catalog/src/error.rs`**: Introduced a new error variant `ListProcess` to handle failures in listing frontend nodes.
 - **`src/catalog/src/process_manager.rs`**: Updated `local_processes` and `list_all_processes` methods to return the new error type, adding context for error handling.
 - **`src/catalog/src/system_schema/information_schema/process_list.rs`**: Modified `make_process_list` to propagate errors using the new error handling mechanism.
 - **`src/servers/src/grpc/frontend_grpc_handler.rs`**: Enhanced error handling in the `list_process` method to log errors and return appropriate gRPC status codes.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 Update `greptime-proto` Dependency and Remove `frontend_client` Method

 - **Cargo.lock** and **Cargo.toml**: Updated the `greptime-proto` dependency to a new revision (`5f6119ac7952878d39dcde0343c4bf828d18ffc8`).
 - **src/client/src/client.rs**: Removed the `frontend_client` method from the `Client` implementation.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 ### Add Query Registration with Pre-Generated ID

 - **`process_manager.rs`**: Introduced `register_query_with_id` method to allow registering queries with a pre-generated ID. This includes creating a `ProcessInfo` instance and inserting it into the catalog. Added `next_id` method to generate the next process ID.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 ### Update Process List Retrieval Method

 - **File**: `process_list.rs`
   - Updated the method for retrieving process lists from `local_processes` to `list_all_processes` to support asynchronous operations.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

* feat/show-process-list:
 ### Update error handling in `error.rs`

 - Refined status code handling for `CreateChannel` error by delegating to `source.status_code()`.
 - Separated `ListProcess` and `CreateChannel` error handling for clarity.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

---------

Signed-off-by: Lei, HUANG <lhuang@greptime.com>
2025-06-12 06:55:22 +00:00
Yiran
f4c3950f57 fix: doc links (#6304)
Signed-off-by: Yiran <cuiyiran3@gmail.com>
2025-06-12 03:18:26 +00:00
liyang
88c4409df4 ci: use the new meta backendStorage etcd structure (#6303)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-06-12 03:17:32 +00:00
localhost
c10b8f8474 chore: add failover cache for pipeline table (#6284)
* chore: add second level cache for pipeline table

* chore: change pipeline failover cache name

* chore: add counter metrics for get pipeline operate
2025-06-12 03:15:02 +00:00
shuiyisong
041b683a8d refactor: remove PipelineMap and use Value instead (#6278)
* refactor: remove pipeline_map and use value instead

* chore: remove unused comments

* chore: move error to illegal state
2025-06-11 17:02:32 +00:00
Weny Xu
03bb6e4f28 feat(cli): add metadata get commands (#6299)
* refactor(cli): restructure cli modules and commands

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat(cli): add metadata get commands

Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat(cli): enhance table metadata query capabilities

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: minor refactor

Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2025-06-11 16:33:36 +00:00
discord9
09e5a6580f chore: silence clippy (#6298)
Signed-off-by: discord9 <discord9@163.com>
2025-06-11 14:32:41 +00:00
Lei, HUANG
f9f905ae14 fix: config docs (#6294)
fix/config-docs:
 Update `config.md` to specify default compression mode

 - Added default value `none` for `grpc.flight_compression` in both frontend and datanode sections of `config/config.md`.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>
2025-06-11 07:31:42 +00:00
Lei, HUANG
1d53dd26ae chore: add option for arrow flight compression mode (#6283)
* chore/enable-flight-encoder:
 ### Add Flight Compression Support

 - **Configuration Updates**:
   - Added `grpc.flight_compression` option to `config/config.md`, `config/datanode.example.toml`, and `config/frontend.example.toml` to specify compression modes for Arrow IPC service.

 - **Code Enhancements**:
   - Updated `FlightEncoder` in `src/common/grpc/src/flight.rs` to support compression modes.
   - Modified `RegionServer` and `DatanodeBuilder` in `src/datanode/src/datanode.rs` and `src/datanode/src/region_server.rs` to handle `FlightCompression`.
   - Integrated `FlightCompression` in `src/servers/src/grpc.rs` and `src/servers/src/grpc/flight.rs` to manage compression settings.

 - **Testing and Integration**:
   - Updated test utilities and integration tests in `tests-integration/src/grpc/flight.rs` and `tests-integration/src/test_util.rs` to include `FlightCompression`.

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>

* chore/enable-flight-encoder:
 ### Enable Compression in FlightClient

 - **`client.rs`**: Updated `make_flight_client` to accept `send_compression` and `accept_compression` parameters, enabling Zstd compression for sending and receiving messages (see the sketch after this message).
 - **`client_manager.rs`**: Modified `datanode` method to pass compression settings from `ChannelConfig` to `RegionRequester`.
 - **`database.rs`**: Adjusted calls to `make_flight_client` to include compression parameters.
 - **`region.rs`**: Updated `RegionRequester` to store and utilize compression settings.
 - **`frontend.rs`**: Configured `ChannelConfig` to enable compression based on options.
 - **`channel_manager.rs`**: Added `send_compression` and `accept_compression` fields to `ChannelConfig` with default values and updated tests accordingly.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>
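
 On the wire this likely maps onto tonic's per-client compression toggles; a hedged sketch (assuming the arrow-flight generated client and a tonic build with zstd support, which is a feature-gated, version-dependent capability):

use arrow_flight::flight_service_client::FlightServiceClient;
use tonic::codec::CompressionEncoding;
use tonic::transport::Channel;

// Hypothetical helper mirroring the make_flight_client change: opt into
// zstd independently for each direction.
fn with_compression(
    mut client: FlightServiceClient<Channel>,
    send_compression: bool,
    accept_compression: bool,
) -> FlightServiceClient<Channel> {
    if send_compression {
        // Compress outgoing request messages.
        client = client.send_compressed(CompressionEncoding::Zstd);
    }
    if accept_compression {
        // Advertise that zstd-compressed responses are accepted.
        client = client.accept_compressed(CompressionEncoding::Zstd);
    }
    client
}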

* chore/enable-flight-encoder:
 ### Update Compression Defaults and Documentation

 - **Configuration Files**: Updated `datanode.example.toml` and `frontend.example.toml` to include a default setting comment for `flight_compression`, specifying it defaults to `none`.
 - **gRPC Server Code**: Modified `grpc.rs` to set `None` as the default for `FlightCompression` instead of `ArrowIpc`.

Signed-off-by: Lei, HUANG <lhuang@greptime.com>

---------

Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Signed-off-by: Lei, HUANG <lhuang@greptime.com>
2025-06-11 06:54:10 +00:00
localhost
01796c9cc0 chore: org cli sub command (#6265)
* chore: org cli sub command

* chore: make clippy happy

* chore: fix info command not supporting absolute path

* chore: fix cli test

* Apply suggestions from code review

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* chore: reorganizing the cli tool

* chore: fix limit issue

* chore: add some doc for cli

* chore: format code

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2025-06-11 03:34:56 +00:00
liyang
9469a8f8f2 ci: add signature information when updating downstream repository (#6282)
Signed-off-by: liyang <daviderli614@gmail.com>
2025-06-10 17:18:29 +00:00
Ruihang Xia
2fabe346a1 fix: null value handling on PromQL's join (#6289)
* fix: null value handling on PromQL's join

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2025-06-10 13:24:09 +00:00
Zhenchi
c26138963e refactor: unify function registry (Part 1) (#6262)
* refactor: unify function registry (Part 1)

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: simplify via register_scalar

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2025-06-10 10:11:06 +00:00
2230 changed files with 246119 additions and 65576 deletions

View File

@@ -2,7 +2,7 @@
 linker = "aarch64-linux-gnu-gcc"

 [alias]
-sqlness = "run --bin sqlness-runner --"
+sqlness = "run --bin sqlness-runner --target-dir target/sqlness --"

 [unstable.git]
 shallow_index = true
@@ -12,3 +12,6 @@ fetch = true
 checkout = true
 list_files = true
 internal_use_git2 = false
+
+[env]
+CARGO_WORKSPACE_DIR = { value = "", relative = true }

.github/CODEOWNERS (vendored): 22 changes
View File

@@ -5,23 +5,23 @@
 * @GreptimeTeam/db-approver

 ## [Module] Database Engine
-/src/index @zhongzc
+/src/index @evenyag @discord9 @WenyXu
 /src/mito2 @evenyag @v0y4g3r @waynexia
-/src/query @evenyag
+/src/query @evenyag @waynexia @discord9

 ## [Module] Distributed
-/src/common/meta @MichaelScofield
-/src/common/procedure @MichaelScofield
-/src/meta-client @MichaelScofield
-/src/meta-srv @MichaelScofield
+/src/common/meta @MichaelScofield @WenyXu
+/src/common/procedure @MichaelScofield @WenyXu
+/src/meta-client @MichaelScofield @WenyXu
+/src/meta-srv @MichaelScofield @WenyXu

 ## [Module] Write Ahead Log
-/src/log-store @v0y4g3r
-/src/store-api @v0y4g3r
+/src/log-store @v0y4g3r @WenyXu
+/src/store-api @v0y4g3r @evenyag

 ## [Module] Metrics Engine
-/src/metric-engine @waynexia
-/src/promql @waynexia
+/src/metric-engine @waynexia @WenyXu
+/src/promql @waynexia @evenyag @discord9

 ## [Module] Flow
-/src/flow @zhongzc @waynexia
+/src/flow @discord9 @waynexia

View File

@@ -32,9 +32,23 @@ inputs:
     description: Image Registry
     required: false
     default: 'docker.io'
+  large-page-size:
+    description: Build GreptimeDB with large page size (65536).
+    required: false
+    default: 'false'

 runs:
   using: composite
   steps:
+    - name: Set extra build environment variables
+      shell: bash
+      run: |
+        if [[ '${{ inputs.large-page-size }}' == 'true' ]]; then
+          echo 'EXTRA_BUILD_ENVS="JEMALLOC_SYS_WITH_LG_PAGE=16"' >> $GITHUB_ENV
+        else
+          echo 'EXTRA_BUILD_ENVS=' >> $GITHUB_ENV
+        fi
+
     - name: Build greptime binary
       shell: bash
       if: ${{ inputs.build-android-artifacts == 'false' }}
@@ -45,7 +59,8 @@ runs:
           FEATURES=${{ inputs.features }} \
           BASE_IMAGE=${{ inputs.base-image }} \
           IMAGE_NAMESPACE=${{ inputs.image-namespace }} \
-          IMAGE_REGISTRY=${{ inputs.image-registry }}
+          IMAGE_REGISTRY=${{ inputs.image-registry }} \
+          EXTRA_BUILD_ENVS=$EXTRA_BUILD_ENVS
     - name: Upload artifacts
       uses: ./.github/actions/upload-artifacts

View File

@@ -27,6 +27,10 @@ inputs:
     description: Working directory to build the artifacts
     required: false
     default: .
+  large-page-size:
+    description: Build GreptimeDB with large page size (65536).
+    required: false
+    default: 'false'

 runs:
   using: composite
   steps:
@@ -59,6 +63,7 @@ runs:
         working-dir: ${{ inputs.working-dir }}
         image-registry: ${{ inputs.image-registry }}
         image-namespace: ${{ inputs.image-namespace }}
+        large-page-size: ${{ inputs.large-page-size }}

     - name: Clean up the target directory # Clean up the target directory for the centos7 base image, or it will still use the objects of last build.
       shell: bash
@@ -77,6 +82,7 @@ runs:
         working-dir: ${{ inputs.working-dir }}
         image-registry: ${{ inputs.image-registry }}
         image-namespace: ${{ inputs.image-namespace }}
+        large-page-size: ${{ inputs.large-page-size }}

     - name: Build greptime on android base image
       uses: ./.github/actions/build-greptime-binary
@@ -89,3 +95,4 @@ runs:
         build-android-artifacts: true
         image-registry: ${{ inputs.image-registry }}
         image-namespace: ${{ inputs.image-namespace }}
+        large-page-size: ${{ inputs.large-page-size }}

View File

@@ -12,7 +12,7 @@ runs:
   steps:
     - name: Install Etcd cluster
       shell: bash
-      run: |
+      run: |
        helm upgrade \
          --install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
          --set replicaCount=${{ inputs.etcd-replicas }} \
@@ -24,4 +24,9 @@ runs:
          --set auth.rbac.token.enabled=false \
          --set persistence.size=2Gi \
          --create-namespace \
+         --set global.security.allowInsecureImages=true \
+         --set image.registry=docker.io \
+         --set image.repository=greptime/etcd \
+         --set image.tag=3.6.1-debian-12-r3 \
+         --version 12.0.8 \
          -n ${{ inputs.namespace }}

View File

@@ -10,13 +10,13 @@ inputs:
   meta-replicas:
     default: 2
     description: "Number of Metasrv replicas"
-  image-registry:
+  image-registry:
     default: "docker.io"
     description: "Image registry"
-  image-repository:
+  image-repository:
     default: "greptime/greptimedb"
     description: "Image repository"
-  image-tag:
+  image-tag:
     default: "latest"
     description: 'Image tag'
   etcd-endpoints:
@@ -32,12 +32,12 @@ runs:
   steps:
     - name: Install GreptimeDB operator
      uses: nick-fields/retry@v3
-     with:
+     with:
        timeout_minutes: 3
        max_attempts: 3
        shell: bash
        command: |
-         helm repo add greptime https://greptimeteam.github.io/helm-charts/
+         helm repo add greptime https://greptimeteam.github.io/helm-charts/
         helm repo update
         helm upgrade \
           --install \
@@ -48,10 +48,10 @@ runs:
           --wait-for-jobs
     - name: Install GreptimeDB cluster
      shell: bash
-     run: |
+     run: |
       helm upgrade \
         --install my-greptimedb \
-        --set meta.etcdEndpoints=${{ inputs.etcd-endpoints }} \
+        --set 'meta.backendStorage.etcd.endpoints[0]=${{ inputs.etcd-endpoints }}' \
         --set meta.enableRegionFailover=${{ inputs.enable-region-failover }} \
         --set image.registry=${{ inputs.image-registry }} \
         --set image.repository=${{ inputs.image-repository }} \
@@ -72,7 +72,7 @@ runs:
     - name: Wait for GreptimeDB
      shell: bash
      run: |
-      while true; do
+      while true; do
        PHASE=$(kubectl -n my-greptimedb get gtc my-greptimedb -o jsonpath='{.status.clusterPhase}')
        if [ "$PHASE" == "Running" ]; then
          echo "Cluster is ready"
@@ -86,10 +86,10 @@ runs:
     - name: Print GreptimeDB info
      if: always()
      shell: bash
-     run: |
+     run: |
       kubectl get all --show-labels -n my-greptimedb
     - name: Describe Nodes
      if: always()
      shell: bash
-     run: |
+     run: |
       kubectl describe nodes

View File

@@ -1,3 +1,8 @@
+logging:
+  level: "info"
+  format: "json"
+  filters:
+    - log_store=debug
 meta:
   configData: |-
     [runtime]

View File

@@ -12,7 +12,7 @@ runs:
   steps:
     - name: Install Kafka cluster
       shell: bash
-      run: |
+      run: |
        helm upgrade \
          --install kafka oci://registry-1.docker.io/bitnamicharts/kafka \
          --set controller.replicaCount=${{ inputs.controller-replicas }} \
@@ -23,4 +23,8 @@ runs:
          --set listeners.controller.protocol=PLAINTEXT \
          --set listeners.client.protocol=PLAINTEXT \
          --create-namespace \
+         --set image.registry=docker.io \
+         --set image.repository=greptime/kafka \
+         --set image.tag=3.9.0-debian-12-r1 \
+         --version 31.0.0 \
          -n ${{ inputs.namespace }}

View File

@@ -6,9 +6,7 @@ inputs:
     description: "Number of PostgreSQL replicas"
   namespace:
     default: "postgres-namespace"
-  postgres-version:
-    default: "14.2"
-    description: "PostgreSQL version"
+    description: "The PostgreSQL namespace"
   storage-size:
     default: "1Gi"
     description: "Storage size for PostgreSQL"
@@ -22,7 +20,11 @@ runs:
        helm upgrade \
          --install postgresql oci://registry-1.docker.io/bitnamicharts/postgresql \
          --set replicaCount=${{ inputs.postgres-replicas }} \
-         --set image.tag=${{ inputs.postgres-version }} \
+         --set global.security.allowInsecureImages=true \
+         --set image.registry=docker.io \
+         --set image.repository=greptime/postgresql \
+         --set image.tag=17.5.0-debian-12-r3 \
+         --version 16.7.4 \
          --set persistence.size=${{ inputs.storage-size }} \
          --set postgresql.username=greptimedb \
          --set postgresql.password=admin \

.github/labeler.yaml (vendored, new file): 15 changes
View File

@@ -0,0 +1,15 @@
ci:
  - changed-files:
      - any-glob-to-any-file: .github/**

docker:
  - changed-files:
      - any-glob-to-any-file: docker/**

documentation:
  - changed-files:
      - any-glob-to-any-file: docs/**

dashboard:
  - changed-files:
      - any-glob-to-any-file: grafana/**

.github/scripts/check-version.sh (vendored, new executable file): 42 changes
View File

@@ -0,0 +1,42 @@
#!/bin/bash

# Get current version
CURRENT_VERSION=$1
if [ -z "$CURRENT_VERSION" ]; then
  echo "Error: Failed to get current version"
  exit 1
fi

# Get the latest version from GitHub Releases
API_RESPONSE=$(curl -s "https://api.github.com/repos/GreptimeTeam/greptimedb/releases/latest")
if [ -z "$API_RESPONSE" ] || [ "$(echo "$API_RESPONSE" | jq -r '.message')" = "Not Found" ]; then
  echo "Error: Failed to fetch latest version from GitHub"
  exit 1
fi

# Get the latest version
LATEST_VERSION=$(echo "$API_RESPONSE" | jq -r '.tag_name')
if [ -z "$LATEST_VERSION" ] || [ "$LATEST_VERSION" = "null" ]; then
  echo "Error: No valid version found in GitHub releases"
  exit 1
fi

# Cleaned up version number format (removed possible 'v' prefix and -nightly suffix)
CLEAN_CURRENT=$(echo "$CURRENT_VERSION" | sed 's/^v//' | sed 's/-nightly-.*//')
CLEAN_LATEST=$(echo "$LATEST_VERSION" | sed 's/^v//' | sed 's/-nightly-.*//')

echo "Current version: $CLEAN_CURRENT"
echo "Latest release version: $CLEAN_LATEST"

# Use sort -V to compare versions
HIGHER_VERSION=$(printf "%s\n%s" "$CLEAN_CURRENT" "$CLEAN_LATEST" | sort -V | tail -n1)
if [ "$HIGHER_VERSION" = "$CLEAN_CURRENT" ]; then
  echo "Current version ($CLEAN_CURRENT) is NEWER than or EQUAL to latest ($CLEAN_LATEST)"
  echo "is-current-version-latest=true" >> $GITHUB_OUTPUT
else
  echo "Current version ($CLEAN_CURRENT) is OLDER than latest ($CLEAN_LATEST)"
  echo "is-current-version-latest=false" >> $GITHUB_OUTPUT
fi

View File

@@ -49,6 +49,17 @@ function create_version() {
       echo "GITHUB_REF_NAME is empty in push event" >&2
       exit 1
     fi
+
+    # For tag releases, ensure GITHUB_REF_NAME matches the version in Cargo.toml
+    CARGO_VERSION=$(grep '^version = ' Cargo.toml | cut -d '"' -f 2 | head -n 1)
+    EXPECTED_REF_NAME="v${CARGO_VERSION}"
+    if [ "$GITHUB_REF_NAME" != "$EXPECTED_REF_NAME" ]; then
+      echo "Error: GITHUB_REF_NAME '$GITHUB_REF_NAME' does not match Cargo.toml version 'v${CARGO_VERSION}'" >&2
+      echo "Expected tag name: '$EXPECTED_REF_NAME'" >&2
+      exit 1
+    fi
+
     echo "$GITHUB_REF_NAME"
   elif [ "$GITHUB_EVENT_NAME" = workflow_dispatch ]; then
     echo "$NEXT_RELEASE_VERSION-$(git rev-parse --short HEAD)-$(date "+%Y%m%d-%s")"

View File

@@ -3,12 +3,16 @@
 set -e
 set -o pipefail

-KUBERNETES_VERSION="${KUBERNETES_VERSION:-v1.24.0}"
+KUBERNETES_VERSION="${KUBERNETES_VERSION:-v1.32.0}"
 ENABLE_STANDALONE_MODE="${ENABLE_STANDALONE_MODE:-true}"
 DEFAULT_INSTALL_NAMESPACE=${DEFAULT_INSTALL_NAMESPACE:-default}
 GREPTIMEDB_IMAGE_TAG=${GREPTIMEDB_IMAGE_TAG:-latest}
-ETCD_CHART="oci://registry-1.docker.io/bitnamicharts/etcd"
+GREPTIMEDB_OPERATOR_IMAGE_TAG=${GREPTIMEDB_OPERATOR_IMAGE_TAG:-v0.5.1}
+GREPTIMEDB_INITIALIZER_IMAGE_TAG="${GREPTIMEDB_OPERATOR_IMAGE_TAG}"
 GREPTIME_CHART="https://greptimeteam.github.io/helm-charts/"
+ETCD_CHART="oci://registry-1.docker.io/bitnamicharts/etcd"
+ETCD_CHART_VERSION="${ETCD_CHART_VERSION:-12.0.8}"
+ETCD_IMAGE_TAG="${ETCD_IMAGE_TAG:-3.6.1-debian-12-r3}"

 # Create a cluster with 1 control-plane node and 5 workers.
 function create_kind_cluster() {
@@ -35,10 +39,16 @@ function add_greptime_chart() {
 function deploy_etcd_cluster() {
   local namespace="$1"

-  helm install etcd "$ETCD_CHART" \
+  helm upgrade --install etcd "$ETCD_CHART" \
+    --version "$ETCD_CHART_VERSION" \
+    --create-namespace \
     --set replicaCount=3 \
     --set auth.rbac.create=false \
     --set auth.rbac.token.enabled=false \
+    --set global.security.allowInsecureImages=true \
+    --set image.registry=docker.io \
+    --set image.repository=greptime/etcd \
+    --set image.tag="$ETCD_IMAGE_TAG" \
     -n "$namespace"

   # Wait for etcd cluster to be ready.
@@ -48,8 +58,9 @@ function deploy_etcd_cluster() {
 # Deploy greptimedb-operator.
 function deploy_greptimedb_operator() {
   # Use the latest chart and image.
-  helm install greptimedb-operator greptime/greptimedb-operator \
-    --set image.tag=latest \
+  helm upgrade --install greptimedb-operator greptime/greptimedb-operator \
+    --create-namespace \
+    --set image.tag="$GREPTIMEDB_OPERATOR_IMAGE_TAG" \
     -n "$DEFAULT_INSTALL_NAMESPACE"

   # Wait for greptimedb-operator to be ready.
@@ -66,9 +77,12 @@ function deploy_greptimedb_cluster() {
   deploy_etcd_cluster "$install_namespace"

-  helm install "$cluster_name" greptime/greptimedb-cluster \
+  helm upgrade --install "$cluster_name" greptime/greptimedb-cluster \
+    --create-namespace \
     --set image.tag="$GREPTIMEDB_IMAGE_TAG" \
-    --set meta.etcdEndpoints="etcd.$install_namespace:2379" \
+    --set initializer.tag="$GREPTIMEDB_INITIALIZER_IMAGE_TAG" \
+    --set "meta.backendStorage.etcd.endpoints[0]=etcd.$install_namespace.svc.cluster.local:2379" \
+    --set meta.backendStorage.etcd.storeKeyPrefix="$cluster_name" \
     -n "$install_namespace"

   # Wait for greptimedb cluster to be ready.
@@ -101,15 +115,18 @@ function deploy_greptimedb_cluster_with_s3_storage() {
   deploy_etcd_cluster "$install_namespace"

-  helm install "$cluster_name" greptime/greptimedb-cluster -n "$install_namespace" \
+  helm upgrade --install "$cluster_name" greptime/greptimedb-cluster -n "$install_namespace" \
+    --create-namespace \
     --set image.tag="$GREPTIMEDB_IMAGE_TAG" \
-    --set meta.etcdEndpoints="etcd.$install_namespace:2379" \
-    --set storage.s3.bucket="$AWS_CI_TEST_BUCKET" \
-    --set storage.s3.region="$AWS_REGION" \
-    --set storage.s3.root="$DATA_ROOT" \
-    --set storage.credentials.secretName=s3-credentials \
-    --set storage.credentials.accessKeyId="$AWS_ACCESS_KEY_ID" \
-    --set storage.credentials.secretAccessKey="$AWS_SECRET_ACCESS_KEY"
+    --set initializer.tag="$GREPTIMEDB_INITIALIZER_IMAGE_TAG" \
+    --set "meta.backendStorage.etcd.endpoints[0]=etcd.$install_namespace.svc.cluster.local:2379" \
+    --set meta.backendStorage.etcd.storeKeyPrefix="$cluster_name" \
+    --set objectStorage.s3.bucket="$AWS_CI_TEST_BUCKET" \
+    --set objectStorage.s3.region="$AWS_REGION" \
+    --set objectStorage.s3.root="$DATA_ROOT" \
+    --set objectStorage.credentials.secretName=s3-credentials \
+    --set objectStorage.credentials.accessKeyId="$AWS_ACCESS_KEY_ID" \
+    --set objectStorage.credentials.secretAccessKey="$AWS_SECRET_ACCESS_KEY"

   # Wait for greptimedb cluster to be ready.
   while true; do
@@ -134,7 +151,8 @@ function deploy_greptimedb_cluster_with_s3_storage() {
 # Deploy standalone greptimedb.
 # It will expose cluster service ports as '34000', '34001', '34002', '34003' to local access.
 function deploy_standalone_greptimedb() {
-  helm install greptimedb-standalone greptime/greptimedb-standalone \
+  helm upgrade --install greptimedb-standalone greptime/greptimedb-standalone \
+    --create-namespace \
     --set image.tag="$GREPTIMEDB_IMAGE_TAG" \
     -n "$DEFAULT_INSTALL_NAMESPACE"

.github/scripts/package-lock.json (generated, vendored, new file): 507 changes
View File

@@ -0,0 +1,507 @@
{
"name": "greptimedb-github-scripts",
"version": "1.0.0",
"lockfileVersion": 3,
"requires": true,
"packages": {
"": {
"name": "greptimedb-github-scripts",
"version": "1.0.0",
"dependencies": {
"@octokit/rest": "^21.0.0",
"axios": "^1.7.0"
}
},
"node_modules/@octokit/auth-token": {
"version": "5.1.2",
"resolved": "https://registry.npmjs.org/@octokit/auth-token/-/auth-token-5.1.2.tgz",
"integrity": "sha512-JcQDsBdg49Yky2w2ld20IHAlwr8d/d8N6NiOXbtuoPCqzbsiJgF633mVUw3x4mo0H5ypataQIX7SFu3yy44Mpw==",
"license": "MIT",
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/core": {
"version": "6.1.6",
"resolved": "https://registry.npmjs.org/@octokit/core/-/core-6.1.6.tgz",
"integrity": "sha512-kIU8SLQkYWGp3pVKiYzA5OSaNF5EE03P/R8zEmmrG6XwOg5oBjXyQVVIauQ0dgau4zYhpZEhJrvIYt6oM+zZZA==",
"license": "MIT",
"dependencies": {
"@octokit/auth-token": "^5.0.0",
"@octokit/graphql": "^8.2.2",
"@octokit/request": "^9.2.3",
"@octokit/request-error": "^6.1.8",
"@octokit/types": "^14.0.0",
"before-after-hook": "^3.0.2",
"universal-user-agent": "^7.0.0"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/endpoint": {
"version": "10.1.4",
"resolved": "https://registry.npmjs.org/@octokit/endpoint/-/endpoint-10.1.4.tgz",
"integrity": "sha512-OlYOlZIsfEVZm5HCSR8aSg02T2lbUWOsCQoPKfTXJwDzcHQBrVBGdGXb89dv2Kw2ToZaRtudp8O3ZIYoaOjKlA==",
"license": "MIT",
"dependencies": {
"@octokit/types": "^14.0.0",
"universal-user-agent": "^7.0.2"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/graphql": {
"version": "8.2.2",
"resolved": "https://registry.npmjs.org/@octokit/graphql/-/graphql-8.2.2.tgz",
"integrity": "sha512-Yi8hcoqsrXGdt0yObxbebHXFOiUA+2v3n53epuOg1QUgOB6c4XzvisBNVXJSl8RYA5KrDuSL2yq9Qmqe5N0ryA==",
"license": "MIT",
"dependencies": {
"@octokit/request": "^9.2.3",
"@octokit/types": "^14.0.0",
"universal-user-agent": "^7.0.0"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/openapi-types": {
"version": "25.1.0",
"resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-25.1.0.tgz",
"integrity": "sha512-idsIggNXUKkk0+BExUn1dQ92sfysJrje03Q0bv0e+KPLrvyqZF8MnBpFz8UNfYDwB3Ie7Z0TByjWfzxt7vseaA==",
"license": "MIT"
},
"node_modules/@octokit/plugin-paginate-rest": {
"version": "11.6.0",
"resolved": "https://registry.npmjs.org/@octokit/plugin-paginate-rest/-/plugin-paginate-rest-11.6.0.tgz",
"integrity": "sha512-n5KPteiF7pWKgBIBJSk8qzoZWcUkza2O6A0za97pMGVrGfPdltxrfmfF5GucHYvHGZD8BdaZmmHGz5cX/3gdpw==",
"license": "MIT",
"dependencies": {
"@octokit/types": "^13.10.0"
},
"engines": {
"node": ">= 18"
},
"peerDependencies": {
"@octokit/core": ">=6"
}
},
"node_modules/@octokit/plugin-paginate-rest/node_modules/@octokit/openapi-types": {
"version": "24.2.0",
"resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-24.2.0.tgz",
"integrity": "sha512-9sIH3nSUttelJSXUrmGzl7QUBFul0/mB8HRYl3fOlgHbIWG+WnYDXU3v/2zMtAvuzZ/ed00Ei6on975FhBfzrg==",
"license": "MIT"
},
"node_modules/@octokit/plugin-paginate-rest/node_modules/@octokit/types": {
"version": "13.10.0",
"resolved": "https://registry.npmjs.org/@octokit/types/-/types-13.10.0.tgz",
"integrity": "sha512-ifLaO34EbbPj0Xgro4G5lP5asESjwHracYJvVaPIyXMuiuXLlhic3S47cBdTb+jfODkTE5YtGCLt3Ay3+J97sA==",
"license": "MIT",
"dependencies": {
"@octokit/openapi-types": "^24.2.0"
}
},
"node_modules/@octokit/plugin-request-log": {
"version": "5.3.1",
"resolved": "https://registry.npmjs.org/@octokit/plugin-request-log/-/plugin-request-log-5.3.1.tgz",
"integrity": "sha512-n/lNeCtq+9ofhC15xzmJCNKP2BWTv8Ih2TTy+jatNCCq/gQP/V7rK3fjIfuz0pDWDALO/o/4QY4hyOF6TQQFUw==",
"license": "MIT",
"engines": {
"node": ">= 18"
},
"peerDependencies": {
"@octokit/core": ">=6"
}
},
"node_modules/@octokit/plugin-rest-endpoint-methods": {
"version": "13.5.0",
"resolved": "https://registry.npmjs.org/@octokit/plugin-rest-endpoint-methods/-/plugin-rest-endpoint-methods-13.5.0.tgz",
"integrity": "sha512-9Pas60Iv9ejO3WlAX3maE1+38c5nqbJXV5GrncEfkndIpZrJ/WPMRd2xYDcPPEt5yzpxcjw9fWNoPhsSGzqKqw==",
"license": "MIT",
"dependencies": {
"@octokit/types": "^13.10.0"
},
"engines": {
"node": ">= 18"
},
"peerDependencies": {
"@octokit/core": ">=6"
}
},
"node_modules/@octokit/plugin-rest-endpoint-methods/node_modules/@octokit/openapi-types": {
"version": "24.2.0",
"resolved": "https://registry.npmjs.org/@octokit/openapi-types/-/openapi-types-24.2.0.tgz",
"integrity": "sha512-9sIH3nSUttelJSXUrmGzl7QUBFul0/mB8HRYl3fOlgHbIWG+WnYDXU3v/2zMtAvuzZ/ed00Ei6on975FhBfzrg==",
"license": "MIT"
},
"node_modules/@octokit/plugin-rest-endpoint-methods/node_modules/@octokit/types": {
"version": "13.10.0",
"resolved": "https://registry.npmjs.org/@octokit/types/-/types-13.10.0.tgz",
"integrity": "sha512-ifLaO34EbbPj0Xgro4G5lP5asESjwHracYJvVaPIyXMuiuXLlhic3S47cBdTb+jfODkTE5YtGCLt3Ay3+J97sA==",
"license": "MIT",
"dependencies": {
"@octokit/openapi-types": "^24.2.0"
}
},
"node_modules/@octokit/request": {
"version": "9.2.4",
"resolved": "https://registry.npmjs.org/@octokit/request/-/request-9.2.4.tgz",
"integrity": "sha512-q8ybdytBmxa6KogWlNa818r0k1wlqzNC+yNkcQDECHvQo8Vmstrg18JwqJHdJdUiHD2sjlwBgSm9kHkOKe2iyA==",
"license": "MIT",
"dependencies": {
"@octokit/endpoint": "^10.1.4",
"@octokit/request-error": "^6.1.8",
"@octokit/types": "^14.0.0",
"fast-content-type-parse": "^2.0.0",
"universal-user-agent": "^7.0.2"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/request-error": {
"version": "6.1.8",
"resolved": "https://registry.npmjs.org/@octokit/request-error/-/request-error-6.1.8.tgz",
"integrity": "sha512-WEi/R0Jmq+IJKydWlKDmryPcmdYSVjL3ekaiEL1L9eo1sUnqMJ+grqmC9cjk7CA7+b2/T397tO5d8YLOH3qYpQ==",
"license": "MIT",
"dependencies": {
"@octokit/types": "^14.0.0"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/rest": {
"version": "21.1.1",
"resolved": "https://registry.npmjs.org/@octokit/rest/-/rest-21.1.1.tgz",
"integrity": "sha512-sTQV7va0IUVZcntzy1q3QqPm/r8rWtDCqpRAmb8eXXnKkjoQEtFe3Nt5GTVsHft+R6jJoHeSiVLcgcvhtue/rg==",
"license": "MIT",
"dependencies": {
"@octokit/core": "^6.1.4",
"@octokit/plugin-paginate-rest": "^11.4.2",
"@octokit/plugin-request-log": "^5.3.1",
"@octokit/plugin-rest-endpoint-methods": "^13.3.0"
},
"engines": {
"node": ">= 18"
}
},
"node_modules/@octokit/types": {
"version": "14.1.0",
"resolved": "https://registry.npmjs.org/@octokit/types/-/types-14.1.0.tgz",
"integrity": "sha512-1y6DgTy8Jomcpu33N+p5w58l6xyt55Ar2I91RPiIA0xCJBXyUAhXCcmZaDWSANiha7R9a6qJJ2CRomGPZ6f46g==",
"license": "MIT",
"dependencies": {
"@octokit/openapi-types": "^25.1.0"
}
},
"node_modules/asynckit": {
"version": "0.4.0",
"resolved": "https://registry.npmjs.org/asynckit/-/asynckit-0.4.0.tgz",
"integrity": "sha512-Oei9OH4tRh0YqU3GxhX79dM/mwVgvbZJaSNaRk+bshkj0S5cfHcgYakreBjrHwatXKbz+IoIdYLxrKim2MjW0Q==",
"license": "MIT"
},
"node_modules/axios": {
"version": "1.12.2",
"resolved": "https://registry.npmjs.org/axios/-/axios-1.12.2.tgz",
"integrity": "sha512-vMJzPewAlRyOgxV2dU0Cuz2O8zzzx9VYtbJOaBgXFeLc4IV/Eg50n4LowmehOOR61S8ZMpc2K5Sa7g6A4jfkUw==",
"license": "MIT",
"dependencies": {
"follow-redirects": "^1.15.6",
"form-data": "^4.0.4",
"proxy-from-env": "^1.1.0"
}
},
"node_modules/before-after-hook": {
"version": "3.0.2",
"resolved": "https://registry.npmjs.org/before-after-hook/-/before-after-hook-3.0.2.tgz",
"integrity": "sha512-Nik3Sc0ncrMK4UUdXQmAnRtzmNQTAAXmXIopizwZ1W1t8QmfJj+zL4OA2I7XPTPW5z5TDqv4hRo/JzouDJnX3A==",
"license": "Apache-2.0"
},
"node_modules/call-bind-apply-helpers": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/call-bind-apply-helpers/-/call-bind-apply-helpers-1.0.2.tgz",
"integrity": "sha512-Sp1ablJ0ivDkSzjcaJdxEunN5/XvksFJ2sMBFfq6x0ryhQV/2b/KwFe21cMpmHtPOSij8K99/wSfoEuTObmuMQ==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/combined-stream": {
"version": "1.0.8",
"resolved": "https://registry.npmjs.org/combined-stream/-/combined-stream-1.0.8.tgz",
"integrity": "sha512-FQN4MRfuJeHf7cBbBMJFXhKSDq+2kAArBlmRBvcvFE5BB1HZKXtSFASDhdlz9zOYwxh8lDdnvmMOe/+5cdoEdg==",
"license": "MIT",
"dependencies": {
"delayed-stream": "~1.0.0"
},
"engines": {
"node": ">= 0.8"
}
},
"node_modules/delayed-stream": {
"version": "1.0.0",
"resolved": "https://registry.npmjs.org/delayed-stream/-/delayed-stream-1.0.0.tgz",
"integrity": "sha512-ZySD7Nf91aLB0RxL4KGrKHBXl7Eds1DAmEdcoVawXnLD7SDhpNgtuII2aAkg7a7QS41jxPSZ17p4VdGnMHk3MQ==",
"license": "MIT",
"engines": {
"node": ">=0.4.0"
}
},
"node_modules/dunder-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/dunder-proto/-/dunder-proto-1.0.1.tgz",
"integrity": "sha512-KIN/nDJBQRcXw0MLVhZE9iQHmG68qAVIBg9CqmUYjmQIhgij9U5MFvrqkUL5FbtyyzZuOeOt0zdeRe4UY7ct+A==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.1",
"es-errors": "^1.3.0",
"gopd": "^1.2.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-define-property": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/es-define-property/-/es-define-property-1.0.1.tgz",
"integrity": "sha512-e3nRfgfUZ4rNGL232gUgX06QNyyez04KdjFrF+LTRoOXmrOgFKDg4BCdsjW8EnT69eqdYGmRpJwiPVYNrCaW3g==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-errors": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/es-errors/-/es-errors-1.3.0.tgz",
"integrity": "sha512-Zf5H2Kxt2xjTvbJvP2ZWLEICxA6j+hAmMzIlypy4xcBg1vKVnx89Wy0GbS+kf5cwCVFFzdCFh2XSCFNULS6csw==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-object-atoms": {
"version": "1.1.1",
"resolved": "https://registry.npmjs.org/es-object-atoms/-/es-object-atoms-1.1.1.tgz",
"integrity": "sha512-FGgH2h8zKNim9ljj7dankFPcICIK9Cp5bm+c2gQSYePhpaG5+esrLODihIorn+Pe6FGJzWhXQotPv73jTaldXA==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/es-set-tostringtag": {
"version": "2.1.0",
"resolved": "https://registry.npmjs.org/es-set-tostringtag/-/es-set-tostringtag-2.1.0.tgz",
"integrity": "sha512-j6vWzfrGVfyXxge+O0x5sh6cvxAog0a/4Rdd2K36zCMV5eJ+/+tOAngRO8cODMNWbVRdVlmGZQL2YS3yR8bIUA==",
"license": "MIT",
"dependencies": {
"es-errors": "^1.3.0",
"get-intrinsic": "^1.2.6",
"has-tostringtag": "^1.0.2",
"hasown": "^2.0.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/fast-content-type-parse": {
"version": "2.0.1",
"resolved": "https://registry.npmjs.org/fast-content-type-parse/-/fast-content-type-parse-2.0.1.tgz",
"integrity": "sha512-nGqtvLrj5w0naR6tDPfB4cUmYCqouzyQiz6C5y/LtcDllJdrcc6WaWW6iXyIIOErTa/XRybj28aasdn4LkVk6Q==",
"funding": [
{
"type": "github",
"url": "https://github.com/sponsors/fastify"
},
{
"type": "opencollective",
"url": "https://opencollective.com/fastify"
}
],
"license": "MIT"
},
"node_modules/follow-redirects": {
"version": "1.15.11",
"resolved": "https://registry.npmjs.org/follow-redirects/-/follow-redirects-1.15.11.tgz",
"integrity": "sha512-deG2P0JfjrTxl50XGCDyfI97ZGVCxIpfKYmfyrQ54n5FO/0gfIES8C/Psl6kWVDolizcaaxZJnTS0QSMxvnsBQ==",
"funding": [
{
"type": "individual",
"url": "https://github.com/sponsors/RubenVerborgh"
}
],
"license": "MIT",
"engines": {
"node": ">=4.0"
},
"peerDependenciesMeta": {
"debug": {
"optional": true
}
}
},
"node_modules/form-data": {
"version": "4.0.4",
"resolved": "https://registry.npmjs.org/form-data/-/form-data-4.0.4.tgz",
"integrity": "sha512-KrGhL9Q4zjj0kiUt5OO4Mr/A/jlI2jDYs5eHBpYHPcBEVSiipAvn2Ko2HnPe20rmcuuvMHNdZFp+4IlGTMF0Ow==",
"license": "MIT",
"dependencies": {
"asynckit": "^0.4.0",
"combined-stream": "^1.0.8",
"es-set-tostringtag": "^2.1.0",
"hasown": "^2.0.2",
"mime-types": "^2.1.12"
},
"engines": {
"node": ">= 6"
}
},
"node_modules/function-bind": {
"version": "1.1.2",
"resolved": "https://registry.npmjs.org/function-bind/-/function-bind-1.1.2.tgz",
"integrity": "sha512-7XHNxH7qX9xG5mIwxkhumTox/MIRNcOgDrxWsMt2pAr23WHp6MrRlN7FBSFpCpr+oVO0F744iUgR82nJMfG2SA==",
"license": "MIT",
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-intrinsic": {
"version": "1.3.0",
"resolved": "https://registry.npmjs.org/get-intrinsic/-/get-intrinsic-1.3.0.tgz",
"integrity": "sha512-9fSjSaos/fRIVIp+xSJlE6lfwhES7LNtKaCBIamHsjr2na1BiABJPo0mOjjz8GJDURarmCPGqaiVg5mfjb98CQ==",
"license": "MIT",
"dependencies": {
"call-bind-apply-helpers": "^1.0.2",
"es-define-property": "^1.0.1",
"es-errors": "^1.3.0",
"es-object-atoms": "^1.1.1",
"function-bind": "^1.1.2",
"get-proto": "^1.0.1",
"gopd": "^1.2.0",
"has-symbols": "^1.1.0",
"hasown": "^2.0.2",
"math-intrinsics": "^1.1.0"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/get-proto": {
"version": "1.0.1",
"resolved": "https://registry.npmjs.org/get-proto/-/get-proto-1.0.1.tgz",
"integrity": "sha512-sTSfBjoXBp89JvIKIefqw7U2CCebsc74kiY6awiGogKtoSGbgjYE/G/+l9sF3MWFPNc9IcoOC4ODfKHfxFmp0g==",
"license": "MIT",
"dependencies": {
"dunder-proto": "^1.0.1",
"es-object-atoms": "^1.0.0"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/gopd": {
"version": "1.2.0",
"resolved": "https://registry.npmjs.org/gopd/-/gopd-1.2.0.tgz",
"integrity": "sha512-ZUKRh6/kUFoAiTAtTYPZJ3hw9wNxx+BIBOijnlG9PnrJsCcSjs1wyyD6vJpaYtgnzDrKYRSqf3OO6Rfa93xsRg==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-symbols": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/has-symbols/-/has-symbols-1.1.0.tgz",
"integrity": "sha512-1cDNdwJ2Jaohmb3sg4OmKaMBwuC48sYni5HUw2DvsC8LjGTLK9h+eb1X6RyuOHe4hT0ULCW68iomhjUoKUqlPQ==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/has-tostringtag": {
"version": "1.0.2",
"resolved": "https://registry.npmjs.org/has-tostringtag/-/has-tostringtag-1.0.2.tgz",
"integrity": "sha512-NqADB8VjPFLM2V0VvHUewwwsw0ZWBaIdgo+ieHtK3hasLz4qeCRjYcqfB6AQrBggRKppKF8L52/VqdVsO47Dlw==",
"license": "MIT",
"dependencies": {
"has-symbols": "^1.0.3"
},
"engines": {
"node": ">= 0.4"
},
"funding": {
"url": "https://github.com/sponsors/ljharb"
}
},
"node_modules/hasown": {
"version": "2.0.2",
"resolved": "https://registry.npmjs.org/hasown/-/hasown-2.0.2.tgz",
"integrity": "sha512-0hJU9SCPvmMzIBdZFqNPXWa6dqh7WdH0cII9y+CyS8rG3nL48Bclra9HmKhVVUHyPWNH5Y7xDwAB7bfgSjkUMQ==",
"license": "MIT",
"dependencies": {
"function-bind": "^1.1.2"
},
"engines": {
"node": ">= 0.4"
}
},
"node_modules/math-intrinsics": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/math-intrinsics/-/math-intrinsics-1.1.0.tgz",
"integrity": "sha512-/IXtbwEk5HTPyEwyKX6hGkYXxM9nbj64B+ilVJnC/R6B0pH5G4V3b0pVbL7DBj4tkhBAppbQUlf6F6Xl9LHu1g==",
"license": "MIT",
"engines": {
"node": ">= 0.4"
}
},
"node_modules/mime-db": {
"version": "1.52.0",
"resolved": "https://registry.npmjs.org/mime-db/-/mime-db-1.52.0.tgz",
"integrity": "sha512-sPU4uV7dYlvtWJxwwxHD0PuihVNiE7TyAbQ5SWxDCB9mUYvOgroQOwYQQOKPJ8CIbE+1ETVlOoK1UC2nU3gYvg==",
"license": "MIT",
"engines": {
"node": ">= 0.6"
}
},
"node_modules/mime-types": {
"version": "2.1.35",
"resolved": "https://registry.npmjs.org/mime-types/-/mime-types-2.1.35.tgz",
"integrity": "sha512-ZDY+bPm5zTTF+YpCrAU9nK0UgICYPT0QtT1NZWFv4s++TNkcgVaT0g6+4R2uI4MjQjzysHB1zxuWL50hzaeXiw==",
"license": "MIT",
"dependencies": {
"mime-db": "1.52.0"
},
"engines": {
"node": ">= 0.6"
}
},
"node_modules/proxy-from-env": {
"version": "1.1.0",
"resolved": "https://registry.npmjs.org/proxy-from-env/-/proxy-from-env-1.1.0.tgz",
"integrity": "sha512-D+zkORCbA9f1tdWRK0RaCR3GPv50cMxcrz4X8k5LTSUD1Dkw47mKJEZQNunItRTkWwgtaUSo1RVFRIG9ZXiFYg==",
"license": "MIT"
},
"node_modules/universal-user-agent": {
"version": "7.0.3",
"resolved": "https://registry.npmjs.org/universal-user-agent/-/universal-user-agent-7.0.3.tgz",
"integrity": "sha512-TmnEAEAsBJVZM/AADELsK76llnwcf9vMKuPz8JflO1frO8Lchitr0fNaN9d+Ap0BjKtqWqd/J17qeDnXh8CL2A==",
"license": "ISC"
}
}
}

.github/scripts/package.json (vendored, new file): 10 changes
View File

@@ -0,0 +1,10 @@
{
"name": "greptimedb-github-scripts",
"version": "1.0.0",
"type": "module",
"description": "GitHub automation scripts for GreptimeDB",
"dependencies": {
"@octokit/rest": "^21.0.0",
"axios": "^1.7.0"
}
}

.github/scripts/pr-review-reminder.js vendored Normal file

@@ -0,0 +1,152 @@
// Daily PR Review Reminder Script
// Fetches open PRs from GreptimeDB repository and sends Slack notifications
// to PR owners and assigned reviewers to keep review process moving.
(async () => {
const { Octokit } = await import("@octokit/rest");
const { default: axios } = await import('axios');
// Configuration
const GITHUB_TOKEN = process.env.GITHUB_TOKEN;
const SLACK_WEBHOOK_URL = process.env.SLACK_PR_REVIEW_WEBHOOK_URL;
const REPO_OWNER = "GreptimeTeam";
const REPO_NAME = "greptimedb";
const GITHUB_TO_SLACK = JSON.parse(process.env.GITHUBID_SLACKID_MAPPING || '{}');
// Debug: Print environment variable status
console.log("=== Environment Variables Debug ===");
console.log(`GITHUB_TOKEN: ${GITHUB_TOKEN ? 'Set ✓' : 'NOT SET ✗'}`);
console.log(`SLACK_PR_REVIEW_WEBHOOK_URL: ${SLACK_WEBHOOK_URL ? 'Set ✓' : 'NOT SET ✗'}`);
console.log(`GITHUBID_SLACKID_MAPPING: ${process.env.GITHUBID_SLACKID_MAPPING ? `Set ✓ (${Object.keys(GITHUB_TO_SLACK).length} mappings)` : 'NOT SET ✗'}`);
console.log("===================================\n");
const octokit = new Octokit({
auth: GITHUB_TOKEN
});
// Fetch all open PRs from the repository
async function fetchOpenPRs() {
try {
const prs = await octokit.pulls.list({
owner: REPO_OWNER,
repo: REPO_NAME,
state: "open",
per_page: 100,
sort: "created",
direction: "asc"
});
return prs.data.filter((pr) => !pr.draft);
} catch (error) {
console.error("Error fetching PRs:", error);
return [];
}
}
// Convert GitHub username to Slack mention or fallback to GitHub username
function toSlackMention(githubUser) {
const slackUserId = GITHUB_TO_SLACK[githubUser];
return slackUserId ? `<@${slackUserId}>` : `@${githubUser}`;
}
// Calculate days since PR was opened
function getDaysOpen(createdAt) {
const created = new Date(createdAt);
const now = new Date();
const diffMs = now - created;
const days = Math.floor(diffMs / (1000 * 60 * 60 * 24));
return days;
}
// Build Slack notification message from PR list
function buildSlackMessage(prs) {
if (prs.length === 0) {
return "*🎉 Great job! No pending PRs for review.*";
}
// Separate PRs by age threshold (14 days)
const criticalPRs = [];
const recentPRs = [];
prs.forEach(pr => {
const daysOpen = getDaysOpen(pr.created_at);
if (daysOpen >= 14) {
criticalPRs.push(pr);
} else {
recentPRs.push(pr);
}
});
const lines = [
`*🔍 Daily PR Review Reminder 🔍*`,
`Found *${criticalPRs.length}* critical PR(s) (14+ days old)\n`
];
// Show critical PRs (14+ days) in detail
if (criticalPRs.length > 0) {
criticalPRs.forEach((pr, index) => {
const owner = toSlackMention(pr.user.login);
const reviewers = pr.requested_reviewers || [];
const reviewerMentions = reviewers.map(r => toSlackMention(r.login)).join(", ");
const daysOpen = getDaysOpen(pr.created_at);
const prInfo = `${index + 1}. <${pr.html_url}|#${pr.number}: ${pr.title}>`;
const ageInfo = ` 🔴 Opened *${daysOpen}* day(s) ago`;
const ownerInfo = ` 👤 Owner: ${owner}`;
const reviewerInfo = reviewers.length > 0
? ` 👁️ Reviewers: ${reviewerMentions}`
: ` 👁️ Reviewers: _Not assigned yet_`;
lines.push(prInfo);
lines.push(ageInfo);
lines.push(ownerInfo);
lines.push(reviewerInfo);
lines.push(""); // Empty line between PRs
});
}
lines.push("_Let's keep the code review process moving! 🚀_");
return lines.join("\n");
}
// Send notification to Slack webhook
async function sendSlackNotification(message) {
if (!SLACK_WEBHOOK_URL) {
console.log("⚠️ SLACK_PR_REVIEW_WEBHOOK_URL not configured. Message preview:");
console.log("=".repeat(60));
console.log(message);
console.log("=".repeat(60));
return;
}
try {
const response = await axios.post(SLACK_WEBHOOK_URL, {
text: message
});
if (response.status !== 200) {
throw new Error(`Slack API returned status ${response.status}`);
}
console.log("Slack notification sent successfully.");
} catch (error) {
console.error("Error sending Slack notification:", error);
throw error;
}
}
// Main execution flow
async function run() {
console.log(`Fetching open PRs from ${REPO_OWNER}/${REPO_NAME}...`);
const prs = await fetchOpenPRs();
console.log(`Found ${prs.length} open PR(s).`);
const message = buildSlackMessage(prs);
console.log("Sending Slack notification...");
await sendSlackNotification(message);
}
run().catch(error => {
console.error("Script execution failed:", error);
process.exit(1);
});
})();
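For reference, the `GITHUBID_SLACKID_MAPPING` variable is expected to hold a JSON object mapping GitHub logins to Slack member IDs; `toSlackMention` falls back to a plain `@github-login` for anyone missing from the map. A minimal sketch (both the logins and the Slack IDs below are made up):

export GITHUBID_SLACKID_MAPPING='{"some-gh-user": "U0123ABCD", "other-gh-user": "U0456EFGH"}'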

.github/scripts/pull-test-deps-images.sh vendored Executable file

@@ -0,0 +1,34 @@
#!/bin/bash
# Pull the test dependency images stored in public ECR one by one to avoid rate limiting.
set -e
MAX_RETRIES=3
IMAGES=(
"greptime/zookeeper:3.7"
"greptime/kafka:3.9.0-debian-12-r1"
"greptime/etcd:3.6.1-debian-12-r3"
"greptime/minio:2024"
"greptime/mysql:5.7"
)
for image in "${IMAGES[@]}"; do
for ((attempt=1; attempt<=MAX_RETRIES; attempt++)); do
if docker pull "$image"; then
# Successfully pulled the image.
break
else
# Use a simple linear backoff between attempts to avoid rate limiting.
if [ $attempt -lt $MAX_RETRIES ]; then
sleep_seconds=$((attempt * 5))
echo "Attempt $attempt failed for $image, waiting $sleep_seconds seconds"
sleep $sleep_seconds # 5s, 10s delays
else
echo "Failed to pull $image after $MAX_RETRIES attempts"
exit 1
fi
fi
done
done
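For context, the CI workflows below invoke this script immediately before bringing up the fixtures, e.g. `../../.github/scripts/pull-test-deps-images.sh && docker compose up -d --wait kafka`, so the compose step starts with the images already cached locally.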


@@ -21,7 +21,7 @@ update_dev_builder_version() {
# Commit the changes.
git add Makefile
git commit -m "ci: update dev-builder image tag"
git commit -s -m "ci: update dev-builder image tag"
git push origin $BRANCH_NAME
# Create a Pull Request.


@@ -30,7 +30,7 @@ update_helm_charts_version() {
# Commit the changes.
git add .
git commit -m "chore: Update GreptimeDB version to ${VERSION}"
git commit -s -m "chore: Update GreptimeDB version to ${VERSION}"
git push origin $BRANCH_NAME
# Create a Pull Request.
@@ -39,8 +39,11 @@ update_helm_charts_version() {
--body "This PR updates the GreptimeDB version." \
--base main \
--head $BRANCH_NAME \
--reviewer zyy17 \
--reviewer daviderli614
--reviewer sunng87 \
--reviewer daviderli614 \
--reviewer killme2008 \
--reviewer evenyag \
--reviewer fengjiachun
}
update_helm_charts_version


@@ -26,7 +26,7 @@ update_homebrew_greptime_version() {
# Commit the changes.
git add .
git commit -m "chore: Update GreptimeDB version to ${VERSION}"
git commit -s -m "chore: Update GreptimeDB version to ${VERSION}"
git push origin $BRANCH_NAME
# Create a Pull Request.
@@ -35,8 +35,11 @@ update_homebrew_greptime_version() {
--body "This PR updates the GreptimeDB version." \
--base main \
--head $BRANCH_NAME \
--reviewer zyy17 \
--reviewer daviderli614
--reviewer sunng87 \
--reviewer daviderli614 \
--reviewer killme2008 \
--reviewer evenyag \
--reviewer fengjiachun
}
update_homebrew_greptime_version

.github/workflows/check-git-deps.yml vendored Normal file

@@ -0,0 +1,154 @@
name: Check Git Dependencies on Main Branch
on:
pull_request:
branches: [main]
paths:
- 'Cargo.toml'
push:
branches: [main]
paths:
- 'Cargo.toml'
jobs:
check-git-deps:
runs-on: ubuntu-latest
steps:
- name: Checkout repository
uses: actions/checkout@v6
- name: Check git dependencies
env:
WHITELIST_DEPS: "greptime-proto,meter-core,meter-macros"
run: |
#!/bin/bash
set -e
echo "Checking whitelisted git dependencies..."
# Function to check if a commit is on main branch
check_commit_on_main() {
local repo_url="$1"
local commit="$2"
local repo_name=$(basename "$repo_url" .git)
echo "Checking $repo_name"
echo "Repo: $repo_url"
echo "Commit: $commit"
# Create a temporary directory for cloning
local temp_dir=$(mktemp -d)
# Clone the repository
if git clone "$repo_url" "$temp_dir" 2>/dev/null; then
cd "$temp_dir"
# Try to determine the main branch name
local main_branch="main"
if ! git rev-parse --verify origin/main >/dev/null 2>&1; then
if git rev-parse --verify origin/master >/dev/null 2>&1; then
main_branch="master"
else
# Try to get the default branch
main_branch=$(git symbolic-ref refs/remotes/origin/HEAD | sed 's@^refs/remotes/origin/@@')
fi
fi
echo "Main branch: $main_branch"
# Check if commit exists
if git cat-file -e "$commit" 2>/dev/null; then
# Check if commit is on main branch
if git merge-base --is-ancestor "$commit" "origin/$main_branch" 2>/dev/null; then
echo "PASS: Commit $commit is on $main_branch branch"
cd - >/dev/null
rm -rf "$temp_dir"
return 0
else
echo "FAIL: Commit $commit is NOT on $main_branch branch"
# Try to find which branch contains this commit
local branch_name=$(git branch -r --contains "$commit" 2>/dev/null | head -1 | sed 's/^[[:space:]]*origin\///' | sed 's/[[:space:]]*$//')
if [[ -n "$branch_name" ]]; then
echo "Found on branch: $branch_name"
fi
cd - >/dev/null
rm -rf "$temp_dir"
return 1
fi
else
echo "FAIL: Commit $commit not found in repository"
cd - >/dev/null
rm -rf "$temp_dir"
return 1
fi
else
echo "FAIL: Failed to clone $repo_url"
rm -rf "$temp_dir"
return 1
fi
}
# Extract whitelisted git dependencies from Cargo.toml
echo "Extracting git dependencies from Cargo.toml..."
# Create temporary array to store dependencies
declare -a deps=()
# Build awk pattern from whitelist
IFS=',' read -ra WHITELIST <<< "$WHITELIST_DEPS"
awk_pattern=""
for dep in "${WHITELIST[@]}"; do
if [[ -n "$awk_pattern" ]]; then
awk_pattern="$awk_pattern|"
fi
awk_pattern="$awk_pattern$dep"
done
# Extract whitelisted dependencies
while IFS= read -r line; do
if [[ -n "$line" ]]; then
deps+=("$line")
fi
done < <(awk -v pattern="$awk_pattern" '
$0 ~ pattern ".*git = \"https:/" {
match($0, /git = "([^"]+)"/, arr)
git_url = arr[1]
if (match($0, /rev = "([^"]+)"/, rev_arr)) {
rev = rev_arr[1]
print git_url " " rev
} else {
# Check next line for rev
getline
if (match($0, /rev = "([^"]+)"/, rev_arr)) {
rev = rev_arr[1]
print git_url " " rev
}
}
}
' Cargo.toml)
echo "Found ${#deps[@]} dependencies to check:"
for dep in "${deps[@]}"; do
echo " $dep"
done
failed=0
for dep in "${deps[@]}"; do
read -r repo_url commit <<< "$dep"
if ! check_commit_on_main "$repo_url" "$commit"; then
failed=1
fi
done
echo "Check completed."
if [[ $failed -eq 1 ]]; then
echo "ERROR: Some git dependencies are not on their main branches!"
echo "Please update the commits to point to main branch commits."
exit 1
else
echo "SUCCESS: All git dependencies are on their main branches!"
fi
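For context, the awk program above matches whitelisted workspace entries of the shape below (copied from this repository's Cargo.toml) and prints the git URL and rev as a pair, peeking at the next line when `rev` is not on the same line. One caveat worth noting: the three-argument `match($0, regex, arr)` form is a GNU awk extension, so the check relies on `awk` resolving to gawk on the runner.

greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "173efe5ec62722089db7c531c0b0d470a072b915" }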


@@ -4,10 +4,11 @@ name: GreptimeDB Development Build
on:
workflow_dispatch: # Allows you to run this workflow manually.
inputs:
repository:
description: The public repository to build
large-page-size:
description: Build GreptimeDB with large page size (65536).
type: boolean
required: false
default: GreptimeTeam/greptimedb
default: false
commit: # Note: We only pull the source code and use the current workflow to build the artifacts.
description: The commit to build
required: true
@@ -181,6 +182,7 @@ jobs:
working-dir: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}
large-page-size: ${{ inputs.large-page-size }}
build-linux-arm64-artifacts:
name: Build linux-arm64 artifacts
@@ -214,6 +216,7 @@ jobs:
working-dir: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}
large-page-size: ${{ inputs.large-page-size }}
release-images-to-dockerhub:
name: Build and push images to DockerHub


@@ -12,6 +12,7 @@ on:
- 'docker/**'
- '.gitignore'
- 'grafana/**'
- 'Makefile'
workflow_dispatch:
name: CI
@@ -612,15 +613,20 @@ jobs:
- name: "MySQL Kvbackend"
opts: "--setup-mysql"
kafka: false
- name: "Flat format"
opts: "--enable-flat-format"
kafka: false
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- if: matrix.mode.kafka
name: Setup kafka server
working-directory: tests-integration/fixtures
run: docker compose up -d --wait kafka
run: ../../.github/scripts/pull-test-deps-images.sh && docker compose up -d --wait kafka
- name: Download pre-built binaries
uses: actions/download-artifact@v4
with:
@@ -629,7 +635,7 @@ jobs:
- name: Unzip binaries
run: tar -xvf ./bins.tar.gz
- name: Run sqlness
run: RUST_BACKTRACE=1 ./bins/sqlness-runner ${{ matrix.mode.opts }} -c ./tests/cases --bins-dir ./bins --preserve-state
run: RUST_BACKTRACE=1 ./bins/sqlness-runner bare ${{ matrix.mode.opts }} -c ./tests/cases --bins-dir ./bins --preserve-state
- name: Upload sqlness logs
if: failure()
uses: actions/upload-artifact@v4
@@ -682,6 +688,30 @@ jobs:
- name: Run cargo clippy
run: make clippy
check-udeps:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
name: Check Unused Dependencies
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: actions-rust-lang/setup-rust-toolchain@v1
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
shared-key: "check-udeps"
cache-all-crates: "true"
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Install cargo-udeps
run: cargo install cargo-udeps --locked
- name: Check unused dependencies
run: make check-udeps
conflict-check:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
name: Check for conflict
@@ -697,7 +727,7 @@ jobs:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'merge_group' }}
runs-on: ubuntu-22.04-arm
timeout-minutes: 60
needs: [conflict-check, clippy, fmt]
needs: [conflict-check, clippy, fmt, check-udeps]
steps:
- uses: actions/checkout@v4
with:
@@ -709,7 +739,7 @@ jobs:
- name: Install toolchain
uses: actions-rust-lang/setup-rust-toolchain@v1
with:
cache: false
- name: Rust Cache
uses: Swatinem/rust-cache@v2
with:
@@ -719,9 +749,11 @@ jobs:
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Install latest nextest release
uses: taiki-e/install-action@nextest
- name: Setup external services
working-directory: tests-integration/fixtures
run: docker compose up -d --wait
run: ../../.github/scripts/pull-test-deps-images.sh && docker compose up -d --wait
- name: Run nextest cases
run: cargo nextest run --workspace -F dashboard -F pg_kvbackend -F mysql_kvbackend
env:
@@ -738,8 +770,11 @@ jobs:
GT_MINIO_ACCESS_KEY: superpower_password
GT_MINIO_REGION: us-west-2
GT_MINIO_ENDPOINT_URL: http://127.0.0.1:9000
GT_ETCD_TLS_ENDPOINTS: https://127.0.0.1:2378
GT_ETCD_ENDPOINTS: http://127.0.0.1:2379
GT_POSTGRES_ENDPOINTS: postgres://greptimedb:admin@127.0.0.1:5432/postgres
GT_POSTGRES15_ENDPOINTS: postgres://test_user:test_password@127.0.0.1:5433/postgres
GT_POSTGRES15_SCHEMA: test_schema
GT_MYSQL_ENDPOINTS: mysql://greptimedb:admin@127.0.0.1:3306/mysql
GT_KAFKA_ENDPOINTS: 127.0.0.1:9092
GT_KAFKA_SASL_ENDPOINTS: 127.0.0.1:9093
@@ -772,9 +807,11 @@ jobs:
uses: taiki-e/install-action@nextest
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Setup external services
working-directory: tests-integration/fixtures
run: docker compose up -d --wait
run: ../../.github/scripts/pull-test-deps-images.sh && docker compose up -d --wait
- name: Run nextest cases
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F dashboard -F pg_kvbackend -F mysql_kvbackend
env:
@@ -790,8 +827,11 @@ jobs:
GT_MINIO_ACCESS_KEY: superpower_password
GT_MINIO_REGION: us-west-2
GT_MINIO_ENDPOINT_URL: http://127.0.0.1:9000
GT_ETCD_TLS_ENDPOINTS: https://127.0.0.1:2378
GT_ETCD_ENDPOINTS: http://127.0.0.1:2379
GT_POSTGRES_ENDPOINTS: postgres://greptimedb:admin@127.0.0.1:5432/postgres
GT_POSTGRES15_ENDPOINTS: postgres://test_user:test_password@127.0.0.1:5433/postgres
GT_POSTGRES15_SCHEMA: test_schema
GT_MYSQL_ENDPOINTS: mysql://greptimedb:admin@127.0.0.1:3306/mysql
GT_KAFKA_ENDPOINTS: 127.0.0.1:9092
GT_KAFKA_SASL_ENDPOINTS: 127.0.0.1:9093


@@ -10,6 +10,7 @@ on:
- 'docker/**'
- '.gitignore'
- 'grafana/**'
- 'Makefile'
push:
branches:
- main
@@ -21,6 +22,7 @@ on:
- 'docker/**'
- '.gitignore'
- 'grafana/**'
- 'Makefile'
workflow_dispatch:
name: CI
@@ -65,6 +67,12 @@ jobs:
steps:
- run: 'echo "No action required"'
check-udeps:
name: Unused Dependencies
runs-on: ubuntu-latest
steps:
- run: 'echo "No action required"'
coverage:
runs-on: ubuntu-latest
steps:
@@ -84,5 +92,6 @@ jobs:
mode:
- name: "Basic"
- name: "Remote WAL"
- name: "Flat format"
steps:
- run: 'echo "No action required"'

.github/workflows/multi-lang-tests.yml vendored Normal file

@@ -0,0 +1,57 @@
name: Multi-language Integration Tests
on:
push:
branches:
- main
workflow_dispatch:
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
jobs:
build-greptimedb:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
name: Build GreptimeDB binary
runs-on: ubuntu-latest
timeout-minutes: 60
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: arduino/setup-protoc@v3
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: actions-rust-lang/setup-rust-toolchain@v1
- uses: Swatinem/rust-cache@v2
with:
shared-key: "multi-lang-build"
cache-all-crates: "true"
save-if: ${{ github.ref == 'refs/heads/main' }}
- name: Install cargo-gc-bin
shell: bash
run: cargo install cargo-gc-bin --force
- name: Build greptime binary
shell: bash
run: cargo gc -- --bin greptime --features "pg_kvbackend,mysql_kvbackend"
- name: Pack greptime binary
shell: bash
run: |
mkdir bin && \
mv ./target/debug/greptime bin
- name: Print greptime binary info
run: ls -lh bin
- name: Upload greptime binary
uses: actions/upload-artifact@v4
with:
name: greptime-bin
path: bin/
retention-days: 1
run-multi-lang-tests:
name: Run Multi-language SDK Tests
needs: build-greptimedb
uses: ./.github/workflows/run-multi-lang-tests.yml
with:
artifact-name: greptime-bin


@@ -174,6 +174,18 @@ jobs:
image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}
run-multi-lang-tests:
name: Run Multi-language SDK Tests
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
]
uses: ./.github/workflows/run-multi-lang-tests.yml
with:
artifact-name: greptime-linux-amd64-${{ needs.allocate-runners.outputs.version }}
artifact-is-tarball: true
release-images-to-dockerhub:
name: Build and push images to DockerHub
if: ${{ inputs.release_images || github.event_name == 'schedule' }}
@@ -301,7 +313,8 @@ jobs:
if: ${{ github.repository == 'GreptimeTeam/greptimedb' && always() }} # Not requiring successful dependent jobs, always run.
name: Send notification to Greptime team
needs: [
release-images-to-dockerhub
release-images-to-dockerhub,
run-multi-lang-tests,
]
runs-on: ubuntu-latest
permissions:
@@ -319,17 +332,17 @@ jobs:
run: pnpm tsx bin/report-ci-failure.ts
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
CI_REPORT_STATUS: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' }}
CI_REPORT_STATUS: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' && (needs.run-multi-lang-tests.result == 'success' || needs.run-multi-lang-tests.result == 'skipped') }}
- name: Notify nightly build successful result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' }}
if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' && (needs.run-multi-lang-tests.result == 'success' || needs.run-multi-lang-tests.result == 'skipped') }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has completed successfully."}
- name: Notify nightly build failed result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result != 'success' }}
if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result != 'success' || needs.run-multi-lang-tests.result == 'failure' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}

.github/workflows/pr-labeling.yaml vendored Normal file

@@ -0,0 +1,42 @@
name: 'PR Labeling'
on:
pull_request_target:
types:
- opened
- synchronize
- reopened
permissions:
contents: read
pull-requests: write
issues: write
jobs:
labeler:
runs-on: ubuntu-latest
steps:
- name: Checkout sources
uses: actions/checkout@v4
- uses: actions/labeler@v5
with:
configuration-path: ".github/labeler.yaml"
repo-token: "${{ secrets.GITHUB_TOKEN }}"
size-label:
runs-on: ubuntu-latest
steps:
- uses: pascalgn/size-label-action@v0.5.5
env:
GITHUB_TOKEN: "${{ secrets.GITHUB_TOKEN }}"
with:
sizes: >
{
"0": "XS",
"100": "S",
"300": "M",
"1000": "L",
"1500": "XL",
"2000": "XXL"
}
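(For clarity: in size-label-action the keys appear to be minimum changed-line thresholds, so with this configuration a PR touching, say, 450 lines in total would fall in the 300..999 bucket and receive the "M" size label.)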


@@ -0,0 +1,36 @@
name: PR Review Reminder
on:
schedule:
# Run at 9:00 AM UTC+8 (01:00 AM UTC) on Monday, Wednesday, Friday
- cron: '0 1 * * 1,3,5'
workflow_dispatch:
jobs:
pr-review-reminder:
name: Send PR Review Reminders
runs-on: ubuntu-latest
permissions:
contents: read
pull-requests: read
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
steps:
- name: Checkout repository
uses: actions/checkout@v4
- name: Setup Node.js
uses: actions/setup-node@v4
with:
node-version: '20'
- name: Install dependencies
working-directory: .github/scripts
run: npm ci
- name: Run PR review reminder
working-directory: .github/scripts
run: node pr-review-reminder.js
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
SLACK_PR_REVIEW_WEBHOOK_URL: ${{ vars.SLACK_PR_REVIEW_WEBHOOK_URL }}
GITHUBID_SLACKID_MAPPING: ${{ vars.GITHUBID_SLACKID_MAPPING }}


@@ -49,14 +49,9 @@ on:
description: Do not run integration tests during the build
type: boolean
default: true
build_linux_amd64_artifacts:
build_linux_artifacts:
type: boolean
description: Build linux-amd64 artifacts
required: false
default: false
build_linux_arm64_artifacts:
type: boolean
description: Build linux-arm64 artifacts
description: Build linux artifacts (both amd64 and arm64)
required: false
default: false
build_macos_artifacts:
@@ -110,6 +105,9 @@ jobs:
# The 'version' use as the global tag name of the release workflow.
version: ${{ steps.create-version.outputs.version }}
# The 'is-current-version-latest' determines whether to update 'latest' Docker tags and downstream repositories.
is-current-version-latest: ${{ steps.check-version.outputs.is-current-version-latest }}
steps:
- name: Checkout
uses: actions/checkout@v4
@@ -135,8 +133,13 @@ jobs:
GITHUB_REF_NAME: ${{ github.ref_name }}
NIGHTLY_RELEASE_PREFIX: ${{ env.NIGHTLY_RELEASE_PREFIX }}
- name: Check version
id: check-version
run: |
./.github/scripts/check-version.sh "${{ steps.create-version.outputs.version }}"
- name: Allocate linux-amd64 runner
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
if: ${{ inputs.build_linux_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
id: start-linux-amd64-runner
with:
@@ -150,7 +153,7 @@ jobs:
subnet-id: ${{ vars.EC2_RUNNER_SUBNET_ID }}
- name: Allocate linux-arm64 runner
if: ${{ inputs.build_linux_arm64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
if: ${{ inputs.build_linux_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
id: start-linux-arm64-runner
with:
@@ -165,7 +168,7 @@ jobs:
build-linux-amd64-artifacts:
name: Build linux-amd64 artifacts
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
if: ${{ inputs.build_linux_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [
allocate-runners,
]
@@ -187,7 +190,7 @@ jobs:
build-linux-arm64-artifacts:
name: Build linux-arm64 artifacts
if: ${{ inputs.build_linux_arm64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
if: ${{ inputs.build_linux_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [
allocate-runners,
]
@@ -207,6 +210,18 @@ jobs:
image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}
run-multi-lang-tests:
name: Run Multi-language SDK Tests
if: ${{ inputs.build_linux_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
]
uses: ./.github/workflows/run-multi-lang-tests.yml
with:
artifact-name: greptime-linux-amd64-${{ needs.allocate-runners.outputs.version }}
artifact-is-tarball: true
build-macos-artifacts:
name: Build macOS artifacts
strategy:
@@ -295,6 +310,7 @@ jobs:
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
run-multi-lang-tests,
]
runs-on: ubuntu-latest
outputs:
@@ -314,7 +330,7 @@ jobs:
image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
push-latest-tag: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
push-latest-tag: ${{ needs.allocate-runners.outputs.is-current-version-latest == 'true' && github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
- name: Set build image result
id: set-build-image-result
@@ -332,7 +348,7 @@ jobs:
build-windows-artifacts,
release-images-to-dockerhub,
]
runs-on: ubuntu-latest
runs-on: ubuntu-latest-16-cores
# When we push to ACR, it's easy to fail due to some unknown network issues.
# However, we don't want to fail the whole workflow because of this.
# The ACR has a daily sync with DockerHub, so don't worry about the image not being updated.
@@ -361,11 +377,22 @@ jobs:
dev-mode: false
upload-to-s3: true
update-version-info: true
push-latest-tag: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
push-latest-tag: ${{ needs.allocate-runners.outputs.is-current-version-latest == 'true' && github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
publish-github-release:
name: Create GitHub release and upload artifacts
if: ${{ inputs.publish_github_release || github.event_name == 'push' || github.event_name == 'schedule' }}
# Use always() to run even when optional jobs (macos, windows) are skipped.
# Then check that required jobs succeeded and optional jobs didn't fail.
if: |
always() &&
(inputs.publish_github_release || github.event_name == 'push' || github.event_name == 'schedule') &&
needs.allocate-runners.result == 'success' &&
(needs.build-linux-amd64-artifacts.result == 'success' || needs.build-linux-amd64-artifacts.result == 'skipped') &&
(needs.build-linux-arm64-artifacts.result == 'success' || needs.build-linux-arm64-artifacts.result == 'skipped') &&
(needs.build-macos-artifacts.result == 'success' || needs.build-macos-artifacts.result == 'skipped') &&
(needs.build-windows-artifacts.result == 'success' || needs.build-windows-artifacts.result == 'skipped') &&
(needs.release-images-to-dockerhub.result == 'success' || needs.release-images-to-dockerhub.result == 'skipped') &&
(needs.run-multi-lang-tests.result == 'success' || needs.run-multi-lang-tests.result == 'skipped')
needs: [ # The job has to wait until all the artifacts are built.
allocate-runners,
build-linux-amd64-artifacts,
@@ -373,6 +400,7 @@ jobs:
build-macos-artifacts,
build-windows-artifacts,
release-images-to-dockerhub,
run-multi-lang-tests,
]
runs-on: ubuntu-latest
steps:
@@ -469,7 +497,7 @@ jobs:
bump-helm-charts-version:
name: Bump helm charts version
if: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
if: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' && needs.allocate-runners.outputs.is-current-version-latest == 'true' }}
needs: [allocate-runners, publish-github-release]
runs-on: ubuntu-latest
permissions:
@@ -490,7 +518,7 @@ jobs:
bump-homebrew-greptime-version:
name: Bump homebrew greptime version
if: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' }}
if: ${{ github.ref_type == 'tag' && !contains(github.ref_name, 'nightly') && github.event_name != 'schedule' && needs.allocate-runners.outputs.is-current-version-latest == 'true' }}
needs: [allocate-runners, publish-github-release]
runs-on: ubuntu-latest
permissions:


@@ -0,0 +1,194 @@
# Reusable workflow for running multi-language SDK tests against GreptimeDB
# Used by: multi-lang-tests.yml, release.yml, nightly-build.yml
# Supports both direct binary artifacts and tarball artifacts
name: Run Multi-language SDK Tests
on:
workflow_call:
inputs:
artifact-name:
required: true
type: string
description: 'Name of the artifact containing greptime binary'
http-port:
required: false
type: string
default: '4000'
description: 'HTTP server port'
mysql-port:
required: false
type: string
default: '4002'
description: 'MySQL server port'
postgres-port:
required: false
type: string
default: '4003'
description: 'PostgreSQL server port'
db-name:
required: false
type: string
default: 'test_db'
description: 'Test database name'
username:
required: false
type: string
default: 'greptime_user'
description: 'Authentication username'
password:
required: false
type: string
default: 'greptime_pwd'
description: 'Authentication password'
timeout-minutes:
required: false
type: number
default: 30
description: 'Job timeout in minutes'
artifact-is-tarball:
required: false
type: boolean
default: false
description: 'Whether the artifact is a tarball (tar.gz) that needs to be extracted'
jobs:
run-tests:
name: Run Multi-language SDK Tests
runs-on: ubuntu-latest
timeout-minutes: ${{ inputs.timeout-minutes }}
steps:
- name: Checkout greptimedb-tests repository
uses: actions/checkout@v4
with:
repository: GreptimeTeam/greptimedb-tests
persist-credentials: false
- name: Download pre-built greptime binary
uses: actions/download-artifact@v4
with:
name: ${{ inputs.artifact-name }}
path: artifact
- name: Setup greptime binary
run: |
mkdir -p bin
if [ "${{ inputs.artifact-is-tarball }}" = "true" ]; then
# Extract tarball and find greptime binary
tar -xzf artifact/*.tar.gz -C artifact
find artifact -name "greptime" -type f -exec cp {} bin/greptime \;
else
# Direct binary format
if [ -f artifact/greptime ]; then
cp artifact/greptime bin/greptime
else
cp artifact/* bin/greptime
fi
fi
chmod +x ./bin/greptime
ls -lh ./bin/greptime
./bin/greptime --version
- name: Setup Java 17
uses: actions/setup-java@v4
with:
distribution: 'temurin'
java-version: '17'
cache: 'maven'
- name: Setup Python 3.8
uses: actions/setup-python@v5
with:
python-version: '3.8'
- name: Setup Go 1.24
uses: actions/setup-go@v5
with:
go-version: '1.24'
cache: true
cache-dependency-path: go-tests/go.sum
- name: Set up Node.js
uses: actions/setup-node@v4
with:
node-version: '18'
- name: Install Python dependencies
run: |
pip install mysql-connector-python psycopg2-binary
python3 -c "import mysql.connector; print(f'mysql-connector-python {mysql.connector.__version__}')"
python3 -c "import psycopg2; print(f'psycopg2 {psycopg2.__version__}')"
- name: Install Go dependencies
working-directory: go-tests
run: |
go mod download
go mod verify
go version
- name: Kill existing GreptimeDB processes
run: |
pkill -f greptime || true
sleep 2
- name: Start GreptimeDB standalone
run: |
./bin/greptime standalone start \
--http-addr 0.0.0.0:${{ inputs.http-port }} \
--rpc-addr 0.0.0.0:4001 \
--mysql-addr 0.0.0.0:${{ inputs.mysql-port }} \
--postgres-addr 0.0.0.0:${{ inputs.postgres-port }} \
--user-provider=static_user_provider:cmd:${{ inputs.username }}=${{ inputs.password }} > /tmp/greptimedb.log 2>&1 &
- name: Wait for GreptimeDB to be ready
run: |
echo "Waiting for GreptimeDB..."
for i in {1..60}; do
if curl -sf http://localhost:${{ inputs.http-port }}/health > /dev/null; then
echo "✅ GreptimeDB is ready"
exit 0
fi
sleep 2
done
echo "❌ GreptimeDB failed to start"
cat /tmp/greptimedb.log
exit 1
- name: Run multi-language tests
env:
DB_NAME: ${{ inputs.db-name }}
MYSQL_HOST: 127.0.0.1
MYSQL_PORT: ${{ inputs.mysql-port }}
POSTGRES_HOST: 127.0.0.1
POSTGRES_PORT: ${{ inputs.postgres-port }}
HTTP_HOST: 127.0.0.1
HTTP_PORT: ${{ inputs.http-port }}
GREPTIME_USERNAME: ${{ inputs.username }}
GREPTIME_PASSWORD: ${{ inputs.password }}
run: |
chmod +x ./run_tests.sh
./run_tests.sh
- name: Collect logs on failure
if: failure()
run: |
echo "=== GreptimeDB Logs ==="
cat /tmp/greptimedb.log || true
- name: Upload test logs on failure
if: failure()
uses: actions/upload-artifact@v4
with:
name: test-logs
path: |
/tmp/greptimedb.log
java-tests/target/surefire-reports/
python-tests/.pytest_cache/
go-tests/*.log
**/test-output/
retention-days: 7
- name: Cleanup
if: always()
run: |
pkill -f greptime || true


@@ -1,7 +1,7 @@
name: "Semantic Pull Request"
on:
pull_request:
pull_request_target:
types:
- opened
- reopened
@@ -11,17 +11,17 @@ concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
permissions:
contents: read
pull-requests: write
issues: write
jobs:
check:
runs-on: ubuntu-latest
permissions:
pull-requests: write # Add permissions to modify PRs
issues: write
timeout-minutes: 10
steps:
- uses: actions/checkout@v4
with:
persist-credentials: false
- uses: ./.github/actions/setup-cyborg
- name: Check Pull Request
working-directory: cyborg

.gitignore vendored

@@ -52,6 +52,9 @@ venv/
tests-fuzz/artifacts/
tests-fuzz/corpus/
# cargo-udeps reports
udeps-report.json
# Nix
.direnv
.envrc
@@ -60,4 +63,7 @@ tests-fuzz/corpus/
greptimedb_data
# github
!/.github
# Claude code
CLAUDE.md


@@ -2,43 +2,41 @@
## Individual Committers (in alphabetical order)
* [CookiePieWw](https://github.com/CookiePieWw)
* [etolbakov](https://github.com/etolbakov)
* [irenjj](https://github.com/irenjj)
* [KKould](https://github.com/KKould)
* [Lanqing Yang](https://github.com/lyang24)
* [NiwakaDev](https://github.com/NiwakaDev)
* [tisonkun](https://github.com/tisonkun)
- [apdong2022](https://github.com/apdong2022)
- [beryl678](https://github.com/beryl678)
- [CookiePieWw](https://github.com/CookiePieWw)
- [etolbakov](https://github.com/etolbakov)
- [irenjj](https://github.com/irenjj)
- [KKould](https://github.com/KKould)
- [Lanqing Yang](https://github.com/lyang24)
- [nicecui](https://github.com/nicecui)
- [NiwakaDev](https://github.com/NiwakaDev)
- [paomian](https://github.com/paomian)
- [tisonkun](https://github.com/tisonkun)
- [Wenjie0329](https://github.com/Wenjie0329)
- [zhaoyingnan01](https://github.com/zhaoyingnan01)
- [zhongzc](https://github.com/zhongzc)
- [ZonaHex](https://github.com/ZonaHex)
- [zyy17](https://github.com/zyy17)
## Team Members (in alphabetical order)
* [apdong2022](https://github.com/apdong2022)
* [beryl678](https://github.com/beryl678)
* [Breeze-P](https://github.com/Breeze-P)
* [daviderli614](https://github.com/daviderli614)
* [discord9](https://github.com/discord9)
* [evenyag](https://github.com/evenyag)
* [fengjiachun](https://github.com/fengjiachun)
* [fengys1996](https://github.com/fengys1996)
* [GrepTime](https://github.com/GrepTime)
* [holalengyu](https://github.com/holalengyu)
* [killme2008](https://github.com/killme2008)
* [MichaelScofield](https://github.com/MichaelScofield)
* [nicecui](https://github.com/nicecui)
* [paomian](https://github.com/paomian)
* [shuiyisong](https://github.com/shuiyisong)
* [sunchanglong](https://github.com/sunchanglong)
* [sunng87](https://github.com/sunng87)
* [v0y4g3r](https://github.com/v0y4g3r)
* [waynexia](https://github.com/waynexia)
* [Wenjie0329](https://github.com/Wenjie0329)
* [WenyXu](https://github.com/WenyXu)
* [xtang](https://github.com/xtang)
* [zhaoyingnan01](https://github.com/zhaoyingnan01)
* [zhongzc](https://github.com/zhongzc)
* [ZonaHex](https://github.com/ZonaHex)
* [zyy17](https://github.com/zyy17)
- [daviderli614](https://github.com/daviderli614)
- [discord9](https://github.com/discord9)
- [evenyag](https://github.com/evenyag)
- [fengjiachun](https://github.com/fengjiachun)
- [fengys1996](https://github.com/fengys1996)
- [GrepTime](https://github.com/GrepTime)
- [holalengyu](https://github.com/holalengyu)
- [killme2008](https://github.com/killme2008)
- [MichaelScofield](https://github.com/MichaelScofield)
- [shuiyisong](https://github.com/shuiyisong)
- [sunchanglong](https://github.com/sunchanglong)
- [sunng87](https://github.com/sunng87)
- [v0y4g3r](https://github.com/v0y4g3r)
- [waynexia](https://github.com/waynexia)
- [WenyXu](https://github.com/WenyXu)
- [xtang](https://github.com/xtang)
## All Contributors


@@ -55,14 +55,18 @@ GreptimeDB uses the [Apache 2.0 license](https://github.com/GreptimeTeam/greptim
- To ensure that the community is free and confident in its ability to use your contributions, please sign the Contributor License Agreement (CLA), which will be incorporated in the pull request process.
- Make sure all files have a proper license header (run `docker run --rm -v $(pwd):/github/workspace ghcr.io/korandoru/hawkeye-native:v3 format` from the project root).
- Make sure all your code is formatted and follows the [coding style](https://pingcap.github.io/style-guide/rust/) and [style guide](docs/style-guide.md).
- Make sure all unit tests are passed using [nextest](https://nexte.st/index.html) `cargo nextest run`.
- Make sure all clippy warnings are fixed (you can check it locally by running `cargo clippy --workspace --all-targets -- -D warnings`).
- Make sure all unit tests pass using [nextest](https://nexte.st/index.html) `cargo nextest run --workspace --features pg_kvbackend,mysql_kvbackend` or `make test`.
- Make sure all clippy warnings are fixed (you can check locally by running `cargo clippy --workspace --all-targets -- -D warnings` or `make clippy`).
- Ensure there are no unused dependencies by running `make check-udeps` (clean them up with `make fix-udeps` if reported).
- If you must keep a target-specific dependency (e.g. under `[target.'cfg(...)'.dev-dependencies]`), add a cargo-udeps ignore entry in the same `Cargo.toml`, for example:
`[package.metadata.cargo-udeps.ignore]` with `development = ["rexpect"]` (or `dependencies`/`build` as appropriate); a sketch follows this list.
- When modifying sample configuration files in `config/`, run `make config-docs` (which requires Docker to be installed) to update the configuration documentation and include it in your commit.
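A minimal sketch of such an ignore entry (the cfg target and the `rexpect` version are illustrative):

[target.'cfg(unix)'.dev-dependencies]
rexpect = "0.5"

[package.metadata.cargo-udeps.ignore]
development = ["rexpect"]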
#### `pre-commit` Hooks
You can set up the [`pre-commit`](https://pre-commit.com/#plugins) hooks to run these checks on every commit automatically.
1. Install `pre-commit`
pip install pre-commit
@@ -70,7 +74,7 @@ You could setup the [`pre-commit`](https://pre-commit.com/#plugins) hooks to run
brew install pre-commit
2. Install the `pre-commit` hooks
$ pre-commit install
pre-commit installed at .git/hooks/pre-commit

Cargo.lock generated (5711 changed lines; file diff suppressed because it is too large)


@@ -13,6 +13,7 @@ members = [
"src/common/datasource",
"src/common/decimal",
"src/common/error",
"src/common/event-recorder",
"src/common/frontend",
"src/common/function",
"src/common/greptimedb-telemetry",
@@ -20,6 +21,7 @@ members = [
"src/common/grpc-expr",
"src/common/macro",
"src/common/mem-prof",
"src/common/memory-manager",
"src/common/meta",
"src/common/options",
"src/common/plugins",
@@ -30,6 +32,7 @@ members = [
"src/common/recordbatch",
"src/common/runtime",
"src/common/session",
"src/common/sql",
"src/common/stat",
"src/common/substrait",
"src/common/telemetry",
@@ -49,6 +52,7 @@ members = [
"src/meta-client",
"src/meta-srv",
"src/metric-engine",
"src/mito-codec",
"src/mito2",
"src/object-store",
"src/operator",
@@ -58,6 +62,7 @@ members = [
"src/promql",
"src/puffin",
"src/query",
"src/standalone",
"src/servers",
"src/session",
"src/sql",
@@ -70,11 +75,13 @@ members = [
resolver = "2"
[workspace.package]
version = "0.15.0"
edition = "2021"
version = "1.0.0-beta.3"
edition = "2024"
license = "Apache-2.0"
[workspace.lints]
clippy.print_stdout = "warn"
clippy.print_stderr = "warn"
clippy.dbg_macro = "warn"
clippy.implicit_clone = "warn"
clippy.result_large_err = "allow"
@@ -93,11 +100,12 @@ rust.unexpected_cfgs = { level = "warn", check-cfg = ['cfg(tokio_unstable)'] }
# See for more details: https://github.com/rust-lang/cargo/issues/11329
ahash = { version = "0.8", features = ["compile-time-rng"] }
aquamarine = "0.6"
arrow = { version = "54.2", features = ["prettyprint"] }
arrow-array = { version = "54.2", default-features = false, features = ["chrono-tz"] }
arrow-flight = "54.2"
arrow-ipc = { version = "54.2", default-features = false, features = ["lz4", "zstd"] }
arrow-schema = { version = "54.2", features = ["serde"] }
arrow = { version = "56.2", features = ["prettyprint"] }
arrow-array = { version = "56.2", default-features = false, features = ["chrono-tz"] }
arrow-buffer = "56.2"
arrow-flight = "56.2"
arrow-ipc = { version = "56.2", default-features = false, features = ["lz4", "zstd"] }
arrow-schema = { version = "56.2", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
# Remember to update axum-extra, axum-macros when updating axum
@@ -111,29 +119,38 @@ bitflags = "2.4.1"
bytemuck = "1.12"
bytes = { version = "1.7", features = ["serde"] }
chrono = { version = "0.4", features = ["serde"] }
chrono-tz = "0.10.1"
chrono-tz = { version = "0.10.1", features = ["case-insensitive"] }
clap = { version = "4.4", features = ["derive"] }
config = "0.13.0"
const_format = "0.2"
crossbeam-utils = "0.8"
dashmap = "6.1"
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion-functions = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion-physical-plan = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "e104c7cf62b11dd5fe41461b82514978234326b4" }
datafusion = "50"
datafusion-common = "50"
datafusion-expr = "50"
datafusion-functions = "50"
datafusion-functions-aggregate-common = "50"
datafusion-optimizer = "50"
datafusion-orc = "0.5"
datafusion-pg-catalog = "0.12.3"
datafusion-physical-expr = "50"
datafusion-physical-plan = "50"
datafusion-sql = "50"
datafusion-substrait = "50"
deadpool = "0.12"
deadpool-postgres = "0.14"
derive_builder = "0.20"
derive_more = { version = "2.1", features = ["full"] }
dotenv = "0.15"
etcd-client = "0.14"
either = "1.15"
etcd-client = { version = "0.16.1", features = [
"tls",
"tls-roots",
] }
fst = "0.4.7"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "454c52634c3bac27de10bf0d85d5533eed1cf03f" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "173efe5ec62722089db7c531c0b0d470a072b915" }
hex = "0.4"
http = "1"
humantime = "2.1"
@@ -144,7 +161,7 @@ itertools = "0.14"
jsonb = { git = "https://github.com/databendlabs/jsonb.git", rev = "8c8d2fc294a39f3ff08909d60f718639cfba3875", default-features = false }
lazy_static = "1.4"
local-ip-address = "0.6"
loki-proto = { git = "https://github.com/GreptimeTeam/loki-proto.git", rev = "1434ecf23a2654025d86188fb5205e7a74b225d3" }
loki-proto = { git = "https://github.com/GreptimeTeam/loki-proto.git", rev = "3b7cd33234358b18ece977bf689dc6fb760f29ab" }
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "5618e779cf2bb4755b499c630fba4c35e91898cb" }
mockall = "0.13"
moka = "0.12"
@@ -152,28 +169,32 @@ nalgebra = "0.33"
nix = { version = "0.30.1", default-features = false, features = ["event", "fs", "process"] }
notify = "8.0"
num_cpus = "1.16"
object_store_opendal = "0.50"
object_store_opendal = "0.54"
once_cell = "1.18"
opentelemetry-proto = { version = "0.27", features = [
opentelemetry-proto = { version = "0.30", features = [
"gen-tonic",
"metrics",
"trace",
"with-serde",
"logs",
] }
ordered-float = { version = "4.3", features = ["serde"] }
otel-arrow-rust = { git = "https://github.com/GreptimeTeam/otel-arrow", rev = "2d64b7c0fa95642028a8205b36fe9ea0b023ec59", features = [
"server",
] }
parking_lot = "0.12"
parquet = { version = "54.2", default-features = false, features = ["arrow", "async", "object_store"] }
parquet = { version = "56.2", default-features = false, features = ["arrow", "async", "object_store"] }
paste = "1.0"
pin-project = "1.0"
pretty_assertions = "1.4.0"
prometheus = { version = "0.13.3", features = ["process"] }
promql-parser = { git = "https://github.com/GreptimeTeam/promql-parser.git", rev = "0410e8b459dda7cb222ce9596f8bf3971bd07bd2", features = [
"ser",
] }
promql-parser = { version = "0.6", features = ["ser"] }
prost = { version = "0.13", features = ["no-recursion-limit"] }
prost-types = "0.13"
raft-engine = { version = "0.4.1", default-features = false }
rand = "0.9"
ratelimit = "0.10"
regex = "1.8"
regex = "1.12"
regex-automata = "0.4"
reqwest = { version = "0.12", default-features = false, features = [
"json",
@@ -181,7 +202,8 @@ reqwest = { version = "0.12", default-features = false, features = [
"stream",
"multipart",
] }
rskafka = { git = "https://github.com/influxdata/rskafka.git", rev = "8dbd01ed809f5a791833a594e85b144e36e45820", features = [
# Branch: feat/request-timeout
rskafka = { git = "https://github.com/GreptimeTeam/rskafka.git", rev = "f5688f83e7da591cda3f2674c2408b4c0ed4ed50", features = [
"transport-tls",
] }
rstest = "0.25"
@@ -189,40 +211,37 @@ rstest_reuse = "0.7"
rust_decimal = "1.33"
rustc-hash = "2.0"
# It is worth noting that we should try to avoid using aws-lc-rs until it can be compiled on various platforms.
hostname = "0.4.0"
rustls = { version = "0.23.25", default-features = false }
sea-query = "0.32"
serde = { version = "1.0", features = ["derive"] }
serde_json = { version = "1.0", features = ["float_roundtrip"] }
serde_with = "3"
shadow-rs = "1.1"
simd-json = "0.15"
similar-asserts = "1.6.0"
smallvec = { version = "1", features = ["serde"] }
snafu = "0.8"
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "0cf6c04490d59435ee965edd2078e8855bd8471e", features = [
"visitor",
"serde",
] } # branch = "v0.54.x"
sqlx = { version = "0.8", features = [
"runtime-tokio-rustls",
"mysql",
"postgres",
"chrono",
] }
sqlparser = { version = "0.58.0", default-features = false, features = ["std", "visitor", "serde"] }
sqlx = { version = "0.8", default-features = false, features = ["any", "macros", "json", "runtime-tokio-rustls"] }
strum = { version = "0.27", features = ["derive"] }
sysinfo = "0.33"
tempfile = "3"
tokio = { version = "1.40", features = ["full"] }
tokio = { version = "1.47", features = ["full"] }
tokio-postgres = "0.7"
tokio-rustls = { version = "0.26.2", default-features = false }
tokio-stream = "0.1"
tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.8.8"
tonic = { version = "0.12", features = ["tls", "gzip", "zstd"] }
tonic = { version = "0.13", features = ["tls-ring", "gzip", "zstd"] }
tower = "0.5"
tower-http = "0.6"
tracing = "0.1"
tracing-appender = "0.2"
tracing-opentelemetry = "0.31.0"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json", "fmt"] }
typetag = "0.2"
uuid = { version = "1.7", features = ["serde", "v4", "fast-rng"] }
uuid = { version = "1.17", features = ["serde", "v4", "fast-rng"] }
vrl = "0.25"
zstd = "0.13"
# DO_NOT_REMOVE_THIS: END_OF_EXTERNAL_DEPENDENCIES
@@ -240,6 +259,7 @@ common-config = { path = "src/common/config" }
common-datasource = { path = "src/common/datasource" }
common-decimal = { path = "src/common/decimal" }
common-error = { path = "src/common/error" }
common-event-recorder = { path = "src/common/event-recorder" }
common-frontend = { path = "src/common/frontend" }
common-function = { path = "src/common/function" }
common-greptimedb-telemetry = { path = "src/common/greptimedb-telemetry" }
@@ -247,6 +267,7 @@ common-grpc = { path = "src/common/grpc" }
common-grpc-expr = { path = "src/common/grpc-expr" }
common-macro = { path = "src/common/macro" }
common-mem-prof = { path = "src/common/mem-prof" }
common-memory-manager = { path = "src/common/memory-manager" }
common-meta = { path = "src/common/meta" }
common-options = { path = "src/common/options" }
common-plugins = { path = "src/common/plugins" }
@@ -257,6 +278,8 @@ common-query = { path = "src/common/query" }
common-recordbatch = { path = "src/common/recordbatch" }
common-runtime = { path = "src/common/runtime" }
common-session = { path = "src/common/session" }
common-sql = { path = "src/common/sql" }
common-stat = { path = "src/common/stat" }
common-telemetry = { path = "src/common/telemetry" }
common-test-util = { path = "src/common/test-util" }
common-time = { path = "src/common/time" }
@@ -274,12 +297,10 @@ log-store = { path = "src/log-store" }
meta-client = { path = "src/meta-client" }
meta-srv = { path = "src/meta-srv" }
metric-engine = { path = "src/metric-engine" }
mito-codec = { path = "src/mito-codec" }
mito2 = { path = "src/mito2" }
object-store = { path = "src/object-store" }
operator = { path = "src/operator" }
otel-arrow-rust = { git = "https://github.com/open-telemetry/otel-arrow", rev = "5d551412d2a12e689cde4d84c14ef29e36784e51", features = [
"server",
] }
partition = { path = "src/partition" }
pipeline = { path = "src/pipeline" }
plugins = { path = "src/plugins" }
@@ -289,7 +310,7 @@ query = { path = "src/query" }
servers = { path = "src/servers" }
session = { path = "src/session" }
sql = { path = "src/sql" }
stat = { path = "src/common/stat" }
standalone = { path = "src/standalone" }
store-api = { path = "src/store-api" }
substrait = { path = "src/common/substrait" }
table = { path = "src/table" }
@@ -298,6 +319,21 @@ table = { path = "src/table" }
git = "https://github.com/GreptimeTeam/greptime-meter.git"
rev = "5618e779cf2bb4755b499c630fba4c35e91898cb"
[patch.crates-io]
datafusion = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-functions = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-functions-aggregate-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-optimizer = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-physical-expr = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-physical-expr-common = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-physical-plan = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-datasource = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-sql = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
datafusion-substrait = { git = "https://github.com/GreptimeTeam/datafusion.git", rev = "fd4b2abcf3c3e43e94951bda452c9fd35243aab0" }
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "4b519a5caa95472cc3988f5556813a583dd35af1" } # branch = "v0.58.x"
[profile.release]
debug = 1


@@ -8,7 +8,7 @@ CARGO_BUILD_OPTS := --locked
IMAGE_REGISTRY ?= docker.io
IMAGE_NAMESPACE ?= greptime
IMAGE_TAG ?= latest
DEV_BUILDER_IMAGE_TAG ?= 2025-05-19-b2377d4b-20250520045554
DEV_BUILDER_IMAGE_TAG ?= 2025-10-01-8fe17d43-20251011080129
BUILDX_MULTI_PLATFORM_BUILD ?= false
BUILDX_BUILDER_NAME ?= gtbuilder
BASE_IMAGE ?= ubuntu
@@ -17,12 +17,14 @@ CARGO_REGISTRY_CACHE ?= ${HOME}/.cargo/registry
ARCH := $(shell uname -m | sed 's/x86_64/amd64/' | sed 's/aarch64/arm64/')
OUTPUT_DIR := $(shell if [ "$(RELEASE)" = "true" ]; then echo "release"; elif [ ! -z "$(CARGO_PROFILE)" ]; then echo "$(CARGO_PROFILE)" ; else echo "debug"; fi)
SQLNESS_OPTS ?=
EXTRA_BUILD_ENVS ?=
ASSEMBLED_EXTRA_BUILD_ENV := $(foreach var,$(EXTRA_BUILD_ENVS),-e $(var))
# The arguments for running integration tests.
ETCD_VERSION ?= v3.5.9
ETCD_IMAGE ?= quay.io/coreos/etcd:${ETCD_VERSION}
RETRY_COUNT ?= 3
NEXTEST_OPTS := --retries ${RETRY_COUNT}
NEXTEST_OPTS := --retries ${RETRY_COUNT} --features pg_kvbackend,mysql_kvbackend
BUILD_JOBS ?= $(shell which nproc 1>/dev/null && expr $$(nproc) / 2) # If nproc is not available, we don't set the build jobs.
ifeq ($(BUILD_JOBS), 0) # If the number of cores is less than 2, set the build jobs to 1.
BUILD_JOBS := 1
@@ -83,6 +85,7 @@ build: ## Build debug version greptime.
.PHONY: build-by-dev-builder
build-by-dev-builder: ## Build greptime by dev-builder.
docker run --network=host \
${ASSEMBLED_EXTRA_BUILD_ENV} \
-v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry \
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:${DEV_BUILDER_IMAGE_TAG} \
make build \
@@ -169,7 +172,7 @@ nextest: ## Install nextest tools.
.PHONY: sqlness-test
sqlness-test: ## Run sqlness test.
cargo sqlness ${SQLNESS_OPTS}
cargo sqlness bare ${SQLNESS_OPTS}
RUNS ?= 1
FUZZ_TARGET ?= fuzz_alter_table
@@ -193,6 +196,17 @@ clippy: ## Check clippy rules.
fix-clippy: ## Fix clippy violations.
cargo clippy --workspace --all-targets --all-features --fix
.PHONY: check-udeps
check-udeps: ## Check unused dependencies.
cargo udeps --workspace --all-targets
.PHONY: fix-udeps
fix-udeps: ## Remove unused dependencies automatically.
@echo "Running cargo-udeps to find unused dependencies..."
@cargo udeps --workspace --all-targets --output json > udeps-report.json || true
@echo "Removing unused dependencies..."
@python3 scripts/fix-udeps.py udeps-report.json
.PHONY: fmt-check
fmt-check: ## Check code format.
cargo fmt --all -- --check
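Taken together, the intended flow for the new udeps targets is (a sketch; the report format is whatever `cargo udeps --output json` emits):

make check-udeps   # fail if cargo-udeps finds unused dependencies
make fix-udeps     # write udeps-report.json, then let scripts/fix-udeps.py remove them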


@@ -12,8 +12,7 @@
<div align="center">
<h3 align="center">
<a href="https://greptime.com/product/cloud">GreptimeCloud</a> |
<a href="https://docs.greptime.com/">User Guide</a> |
<a href="https://docs.greptime.com/user-guide/overview/">User Guide</a> |
<a href="https://greptimedb.rs/">API Docs</a> |
<a href="https://github.com/GreptimeTeam/greptimedb/issues/5446">Roadmap 2025</a>
</h4>
@@ -67,17 +66,24 @@
## Introduction
**GreptimeDB** is an open-source, cloud-native database purpose-built for the unified collection and analysis of observability data (metrics, logs, and traces). Whether you're operating on the edge, in the cloud, or across hybrid environments, GreptimeDB empowers real-time insights at massive scale — all in one system.
**GreptimeDB** is an open-source, cloud-native database that unifies metrics, logs, and traces, enabling real-time observability at any scale — across edge, cloud, and hybrid environments.
## Features
| Feature | Description |
| --------- | ----------- |
| [Unified Observability Data](https://docs.greptime.com/user-guide/concepts/why-greptimedb) | Store metrics, logs, and traces as timestamped, contextual wide events. Query via [SQL](https://docs.greptime.com/user-guide/query-data/sql), [PromQL](https://docs.greptime.com/user-guide/query-data/promql), and [streaming](https://docs.greptime.com/user-guide/flow-computation/overview). |
| [High Performance & Cost Effective](https://docs.greptime.com/user-guide/manage-data/data-index) | Written in Rust, with a distributed query engine, [rich indexing](https://docs.greptime.com/user-guide/manage-data/data-index), and optimized columnar storage, delivering sub-second responses at PB scale. |
| [Cloud-Native Architecture](https://docs.greptime.com/user-guide/concepts/architecture) | Designed for [Kubernetes](https://docs.greptime.com/user-guide/deployments/deploy-on-kubernetes/greptimedb-operator-management), with compute/storage separation, native object storage (AWS S3, Azure Blob, etc.) and seamless cross-cloud access. |
| [Developer-Friendly](https://docs.greptime.com/user-guide/protocols/overview) | Access via SQL/PromQL interfaces, REST API, MySQL/PostgreSQL protocols, and popular ingestion [protocols](https://docs.greptime.com/user-guide/protocols/overview). |
| [Flexible Deployment](https://docs.greptime.com/user-guide/deployments/overview) | Deploy anywhere: edge (including ARM/[Android](https://docs.greptime.com/user-guide/deployments/run-on-android)) or cloud, with unified APIs and efficient data sync. |
| [All-in-One Observability](https://docs.greptime.com/user-guide/concepts/why-greptimedb) | OpenTelemetry-native platform unifying metrics, logs, and traces. Query via [SQL](https://docs.greptime.com/user-guide/query-data/sql), [PromQL](https://docs.greptime.com/user-guide/query-data/promql), and [Flow](https://docs.greptime.com/user-guide/flow-computation/overview). |
| [High Performance](https://docs.greptime.com/user-guide/manage-data/data-index) | Written in Rust with [rich indexing](https://docs.greptime.com/user-guide/manage-data/data-index) (inverted, fulltext, skipping, vector), delivering sub-second responses at PB scale. |
| [Cost Efficiency](https://docs.greptime.com/user-guide/concepts/architecture) | 50x lower operational and storage costs with compute-storage separation and native object storage (S3, Azure Blob, etc.). |
| [Cloud-Native & Scalable](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/greptimedb-operator-management) | Purpose-built for [Kubernetes](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/greptimedb-operator-management) with unlimited cross-cloud scaling, handling hundreds of thousands of concurrent requests. |
| [Developer-Friendly](https://docs.greptime.com/user-guide/protocols/overview) | SQL/PromQL interfaces, built-in web dashboard, REST API, MySQL/PostgreSQL protocol compatibility, and native [OpenTelemetry](https://docs.greptime.com/user-guide/ingest-data/for-observability/opentelemetry/) support. |
| [Flexible Deployment](https://docs.greptime.com/user-guide/deployments-administration/overview) | Deploy anywhere from ARM-based edge devices (including [Android](https://docs.greptime.com/user-guide/deployments-administration/run-on-android)) to cloud, with unified APIs and efficient data sync. |
**Perfect for:**
- Unified observability stack replacing Prometheus + Loki + Tempo
- Large-scale metrics with high cardinality (millions to billions of time series)
- Large-scale observability platform requiring cost efficiency and scalability
- IoT and edge computing with resource and bandwidth constraints
Learn more in [Why GreptimeDB](https://docs.greptime.com/user-guide/concepts/why-greptimedb) and [Observability 2.0 and the Database for It](https://greptime.com/blogs/2025-04-25-greptimedb-observability2-new-database).
@@ -86,10 +92,10 @@ Learn more in [Why GreptimeDB](https://docs.greptime.com/user-guide/concepts/why
| Feature | GreptimeDB | Traditional TSDB | Log Stores |
|----------------------------------|-----------------------|--------------------|-----------------|
| Data Types | Metrics, Logs, Traces | Metrics only | Logs only |
| Query Language | SQL, PromQL, Streaming| Custom/PromQL | Custom/DSL |
| Query Language | SQL, PromQL | Custom/PromQL | Custom/DSL |
| Deployment | Edge + Cloud | Cloud/On-prem | Mostly central |
| Indexing & Performance | PB-Scale, Sub-second | Varies | Varies |
| Integration | REST, SQL, Common protocols | Varies | Varies |
| Integration | REST API, SQL, Common protocols | Varies | Varies |
**Performance:**
* [GreptimeDB tops JSONBench's billion-record cold run test!](https://greptime.com/blogs/2025-03-18-jsonbench-greptimedb-performance)
@@ -99,22 +105,18 @@ Read [more benchmark reports](https://docs.greptime.com/user-guide/concepts/feat
## Architecture
* Read the [architecture](https://docs.greptime.com/contributor-guide/overview/#architecture) document.
* [DeepWiki](https://deepwiki.com/GreptimeTeam/greptimedb/1-overview) provides an in-depth look at GreptimeDB:
GreptimeDB can run in two modes:
* **Standalone Mode** - Single binary for development and small deployments
* **Distributed Mode** - Separate components for production scale:
- Frontend: Query processing and protocol handling
- Datanode: Data storage and retrieval
- Metasrv: Metadata management and coordination
Read the [architecture](https://docs.greptime.com/contributor-guide/overview/#architecture) document. [DeepWiki](https://deepwiki.com/GreptimeTeam/greptimedb/1-overview) provides an in-depth look at GreptimeDB:
<img alt="GreptimeDB System Overview" src="docs/architecture.png">
## Try GreptimeDB
### 1. [Live Demo](https://greptime.com/playground)
Experience GreptimeDB directly in your browser.
### 2. [GreptimeCloud](https://console.greptime.cloud/)
Start instantly with a free cluster.
### 3. Docker (Local Quickstart)
```shell
docker pull greptime/greptimedb
```
@@ -130,7 +132,8 @@ docker run -p 127.0.0.1:4000-4003:4000-4003 \
--postgres-addr 0.0.0.0:4003
```
Dashboard: [http://localhost:4000/dashboard](http://localhost:4000/dashboard)
[Full Install Guide](https://docs.greptime.com/getting-started/installation/overview)
Read more in the [full Install Guide](https://docs.greptime.com/getting-started/installation/overview).
**Troubleshooting:**
* Cannot connect to the database? Ensure that ports `4000`, `4001`, `4002`, and `4003` are not blocked by a firewall or used by other services.
@@ -159,21 +162,26 @@ cargo run -- standalone start
## Tools & Extensions
- **Kubernetes:** [GreptimeDB Operator](https://github.com/GrepTimeTeam/greptimedb-operator)
- **Helm Charts:** [Greptime Helm Charts](https://github.com/GreptimeTeam/helm-charts)
- **Dashboard:** [Web UI](https://github.com/GreptimeTeam/dashboard)
- **SDKs/Ingester:** [Go](https://github.com/GreptimeTeam/greptimedb-ingester-go), [Java](https://github.com/GreptimeTeam/greptimedb-ingester-java), [C++](https://github.com/GreptimeTeam/greptimedb-ingester-cpp), [Erlang](https://github.com/GreptimeTeam/greptimedb-ingester-erl), [Rust](https://github.com/GreptimeTeam/greptimedb-ingester-rust), [JS](https://github.com/GreptimeTeam/greptimedb-ingester-js)
- **Grafana**: [Official Dashboard](https://github.com/GreptimeTeam/greptimedb/blob/main/grafana/README.md)
- **Kubernetes**: [GreptimeDB Operator](https://github.com/GrepTimeTeam/greptimedb-operator)
- **Helm Charts**: [Greptime Helm Charts](https://github.com/GreptimeTeam/helm-charts)
- **Dashboard**: [Web UI](https://github.com/GreptimeTeam/dashboard)
- **gRPC Ingester**: [Go](https://github.com/GreptimeTeam/greptimedb-ingester-go), [Java](https://github.com/GreptimeTeam/greptimedb-ingester-java), [C++](https://github.com/GreptimeTeam/greptimedb-ingester-cpp), [Erlang](https://github.com/GreptimeTeam/greptimedb-ingester-erl), [Rust](https://github.com/GreptimeTeam/greptimedb-ingester-rust)
- **Grafana Data Source**: [GreptimeDB Grafana data source plugin](https://github.com/GreptimeTeam/greptimedb-grafana-datasource)
- **Grafana Dashboard**: [Official Dashboard for monitoring](https://github.com/GreptimeTeam/greptimedb/blob/main/grafana/README.md)
## Project Status
> **Status:** Beta.
> **GA (v1.0):** Targeted for mid 2025.
> **Status:** Beta — marching toward v1.0 GA!
> **GA (v1.0):** January 10, 2026
- Being used in production by early adopters
- Deployed in production by open-source projects and commercial users
- Stable, actively maintained, with regular releases ([version info](https://docs.greptime.com/nightly/reference/about-greptimedb-version))
- Suitable for evaluation and pilot deployments
GreptimeDB v1.0 represents a major milestone toward maturity — marking stable APIs, production readiness, and proven performance.
**Roadmap:** Beta1 (Nov 10) → Beta2 (Nov 24) → RC1 (Dec 8) → GA (Jan 10, 2026). See [v1.0 highlights and release plan](https://greptime.com/blogs/2025-11-05-greptimedb-v1-highlights) for details.
For production use, we recommend using the latest stable release.
[![Star History Chart](https://api.star-history.com/svg?repos=GreptimeTeam/GreptimeDB&type=Date)](https://www.star-history.com/#GreptimeTeam/GreptimeDB&Date)
@@ -189,7 +197,8 @@ We invite you to engage and contribute!
- [Official Website](https://greptime.com/)
- [Blog](https://greptime.com/blogs/)
- [LinkedIn](https://www.linkedin.com/company/greptime/)
- [Twitter](https://twitter.com/greptime)
- [X (Twitter)](https://x.com/greptime)
- [YouTube](https://www.youtube.com/@greptime)
## License
@@ -213,5 +222,5 @@ Special thanks to all contributors! See [AUTHORS.md](https://github.com/Greptime
- Uses [Apache Arrow™](https://arrow.apache.org/) (memory model)
- [Apache Parquet™](https://parquet.apache.org/) (file storage)
- [Apache Arrow DataFusion™](https://arrow.apache.org/datafusion/) (query engine)
- [Apache DataFusion™](https://arrow.apache.org/datafusion/) (query engine)
- [Apache OpenDAL™](https://opendal.apache.org/) (data access abstraction)

View File

@@ -13,9 +13,10 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `default_timezone` | String | Unset | The default timezone of the server. |
| `default_column_prefix` | String | Unset | The default column prefix for auto-created time index and value columns. |
| `init_regions_in_background` | Bool | `false` | Initialize all regions in the background during the startup.<br/>By default, it provides services after all regions have been initialized. |
| `init_regions_parallelism` | Integer | `16` | Parallelism of initializing regions. |
| `max_concurrent_queries` | Integer | `0` | The maximum number of concurrent queries allowed to execute. Zero means unlimited. |
| `max_concurrent_queries` | Integer | `0` | The maximum number of concurrent queries allowed to execute. Zero means unlimited.<br/>NOTE: This setting affects scan_memory_limit's privileged tier allocation.<br/>When set, 70% of queries get privileged memory access (full scan_memory_limit).<br/>The remaining 30% get standard tier access (70% of scan_memory_limit). |
| `enable_telemetry` | Bool | `true` | Enable telemetry to collect anonymous usage data. Enabled by default. |
| `max_in_flight_write_bytes` | String | Unset | The maximum in-flight write bytes. |
| `runtime` | -- | -- | The runtime options. |
@@ -25,12 +26,15 @@
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
| `http.timeout` | String | `0s` | HTTP request timeout. Set to 0 to disable timeout. |
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.<br/>Set to 0 to disable limit. |
| `http.max_total_body_memory` | String | Unset | Maximum total memory for all concurrent HTTP request bodies.<br/>Set to 0 to disable the limit. Default: "0" (unlimited) |
| `http.enable_cors` | Bool | `true` | HTTP CORS support, enabled by default.<br/>This allows browsers to access HTTP APIs without CORS restrictions. |
| `http.cors_allowed_origins` | Array | Unset | Customize allowed origins for HTTP CORS. |
| `http.prom_validation_mode` | String | `strict` | Whether to enable validation for Prometheus remote write requests.<br/>Available options:<br/>- strict: deny invalid UTF-8 strings (default).<br/>- lossy: allow invalid UTF-8 strings, replace invalid characters with REPLACEMENT_CHARACTER(U+FFFD).<br/>- unchecked: do not validate strings. |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.max_total_message_memory` | String | Unset | Maximum total memory for all concurrent gRPC request messages.<br/>Set to 0 to disable the limit. Default: "0" (unlimited) |
| `grpc.max_connection_age` | String | Unset | The maximum connection age for gRPC connection.<br/>The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.<br/>Refer to https://grpc.io/docs/guides/keepalive/ for more details. |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
@@ -41,6 +45,7 @@
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
| `mysql.runtime_size` | Integer | `2` | The number of server worker threads. |
| `mysql.keep_alive` | String | `0s` | Server-side keep-alive time.<br/>Set to 0 (default) to disable. |
| `mysql.prepared_stmt_cache_size` | Integer | `10000` | Maximum entries in the MySQL prepared statement cache; default is 10,000. |
| `mysql.tls` | -- | -- | -- |
| `mysql.tls.mode` | String | `disable` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- `disable` (default value)<br/>- `prefer`<br/>- `require`<br/>- `verify-ca`<br/>- `verify-full` |
| `mysql.tls.cert_path` | String | Unset | Certificate file path. |
@@ -78,6 +83,8 @@
| `wal.sync_period` | String | `10s` | Duration for fsyncing log files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.recovery_parallelism` | Integer | `2` | Parallelism during WAL recovery. |
| `wal.broker_endpoints` | Array | -- | The Kafka broker endpoints.<br/>**It's only used when the provider is `kafka`**. |
| `wal.connect_timeout` | String | `3s` | The connect timeout for the Kafka client.<br/>**It's only used when the provider is `kafka`**. |
| `wal.timeout` | String | `3s` | The timeout for the Kafka client.<br/>**It's only used when the provider is `kafka`**. |
| `wal.auto_create_topics` | Bool | `true` | Automatically create topics for WAL.<br/>Set to `true` to automatically create topics for WAL.<br/>Otherwise, use topics named `topic_name_prefix_[0..num_topics)` |
| `wal.num_topics` | Integer | `64` | Number of topics.<br/>**It's only used when the provider is `kafka`**. |
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default)<br/>**It's only used when the provider is `kafka`**. |
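A hedged sketch of the Kafka WAL client options above as they might appear in a standalone configuration; the broker addresses are placeholders:

```toml
[wal]
provider = "kafka"
broker_endpoints = ["kafka-0:9092", "kafka-1:9092"]  # placeholder brokers
connect_timeout = "3s"        # Kafka client connect timeout
timeout = "3s"                # Kafka client request timeout
auto_create_topics = true     # or use pre-created topic_name_prefix_[0..num_topics)
num_topics = 64
selector_type = "round_robin"
```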
@@ -99,11 +106,10 @@
| `flow.num_workers` | Integer | `0` | The number of flow workers in the flownode.<br/>If unset (or set to 0), it defaults to the number of CPU cores divided by 2. |
| `query` | -- | -- | The query engine options. |
| `query.parallelism` | Integer | `0` | Parallelism of the query engine.<br/>Default to 0, which means the number of CPU cores. |
| `query.memory_pool_size` | String | `50%` | Memory pool size for query execution operators (aggregation, sorting, join).<br/>Supports absolute size (e.g., "2GB", "4GB") or percentage of system memory (e.g., "20%").<br/>Setting it to 0 disables the limit (unbounded, default behavior).<br/>When this limit is reached, queries will fail with ResourceExhausted error.<br/>NOTE: This does NOT limit memory used by table scans. |
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `./greptimedb_data` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
| `storage.cache_path` | String | Unset | Read cache path for object storage such as `S3`. It's configured by default when using object storage, and recommended for better performance.<br/>A local file directory, defaults to `{data_home}`. An empty string means disabling. |
| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger. |
| `storage.bucket` | String | Unset | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
| `storage.root` | String | Unset | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.<br/>**It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
| `storage.access_key_id` | String | Unset | The access key id of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3` and `Oss`**. |
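As a worked example of the storage rows above, a plausible S3 block; the bucket and prefix are placeholders, and per the note it is preferable to rely on IAM roles rather than hardcoding credentials:

```toml
[storage]
data_home = "./greptimedb_data"
type = "S3"
bucket = "my-greptimedb-bucket"            # placeholder bucket name
root = "greptimedb"                        # data lands under s3://my-greptimedb-bucket/greptimedb
cache_path = "./greptimedb_data/s3_cache"  # local read cache; empty string disables it
cache_capacity = "10GiB"
```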
@@ -123,6 +129,7 @@
| `storage.http_client.connect_timeout` | String | `30s` | The timeout for only the connect phase of a http client. |
| `storage.http_client.timeout` | String | `30s` | The total request timeout, applied from when the request starts connecting until the response body has finished.<br/>Also considered a total deadline. |
| `storage.http_client.pool_idle_timeout` | String | `90s` | The timeout for idle sockets being kept-alive. |
| `storage.http_client.skip_ssl_validation` | Bool | `false` | To skip the ssl verification<br/>**Security Notice**: Setting `skip_ssl_validation = true` disables certificate verification, making connections vulnerable to man-in-the-middle attacks. Only use this in development or trusted private networks. |
| `[[region_engine]]` | -- | -- | The region engine options. You can configure multiple region engines. |
| `region_engine.mito` | -- | -- | The Mito engine options. |
| `region_engine.mito.num_workers` | Integer | `8` | Number of region workers. |
@@ -133,6 +140,8 @@
| `region_engine.mito.max_background_flushes` | Integer | Auto | Max number of running background flush jobs (default: 1/2 of cpu cores). |
| `region_engine.mito.max_background_compactions` | Integer | Auto | Max number of running background compaction jobs (default: 1/4 of cpu cores). |
| `region_engine.mito.max_background_purges` | Integer | Auto | Max number of running background purge jobs (default: number of cpu cores). |
| `region_engine.mito.experimental_compaction_memory_limit` | String | 0 | Memory budget for compaction tasks. Setting it to 0 or "unlimited" disables the limit. |
| `region_engine.mito.experimental_compaction_on_exhausted` | String | wait | Behavior when compaction cannot acquire memory from the budget.<br/>Options: "wait" (default, 10s), "wait(<duration>)", "fail" |
| `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
| `region_engine.mito.global_write_buffer_size` | String | Auto | Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB. |
| `region_engine.mito.global_write_buffer_reject_size` | String | Auto | Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size`. |
@@ -144,10 +153,17 @@
| `region_engine.mito.write_cache_path` | String | `""` | File system path for write cache, defaults to `{data_home}`. |
| `region_engine.mito.write_cache_size` | String | `5GiB` | Capacity for write cache. If your disk space is sufficient, it is recommended to set it larger. |
| `region_engine.mito.write_cache_ttl` | String | Unset | TTL for write cache. |
| `region_engine.mito.preload_index_cache` | Bool | `true` | Preload index (puffin) files into cache on region open (default: true).<br/>When enabled, index files are loaded into the write cache during region initialization,<br/>which can improve query performance at the cost of longer startup times. |
| `region_engine.mito.index_cache_percent` | Integer | `20` | Percentage of write cache capacity allocated for index (puffin) files (default: 20).<br/>The remaining capacity is used for data (parquet) files.<br/>Must be between 0 and 100 (exclusive). For example, with a 5GiB write cache and 20% allocation,<br/>1GiB is reserved for index files and 4GiB for data files. |
| `region_engine.mito.enable_refill_cache_on_read` | Bool | `true` | Enable refilling cache on read operations (default: true).<br/>When disabled, cache refilling on read won't happen. |
| `region_engine.mito.manifest_cache_size` | String | `256MB` | Capacity for manifest cache (default: 256MB). |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
| `region_engine.mito.max_concurrent_scan_files` | Integer | `384` | Maximum number of SST files to scan concurrently. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.scan_memory_limit` | String | `50%` | Memory limit for table scans across all queries.<br/>Supports absolute size (e.g., "2GB") or percentage of system memory (e.g., "20%").<br/>Setting it to 0 disables the limit.<br/>NOTE: Works with max_concurrent_queries for tiered memory allocation.<br/>- If max_concurrent_queries is set: 70% of queries get full access, 30% get 70% access.<br/>- If max_concurrent_queries is 0 (unlimited): first 20 queries get full access, rest get 70% access. |
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
| `region_engine.mito.default_experimental_flat_format` | Bool | `false` | Whether to enable experimental flat format as the default format. |
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
| `region_engine.mito.index.aux_path` | String | `""` | Auxiliary directory path for the index in filesystem, used to store intermediate files for<br/>creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.<br/>The default name for this directory is `index_intermediate` for backward compatibility.<br/><br/>This path contains two subdirectories:<br/>- `__intm`: for storing intermediate files used during creating index.<br/>- `staging`: for storing staging files used during searching index. |
| `region_engine.mito.index.staging_size` | String | `2GB` | The max capacity of the staging directory. |
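To make the tiered scan-memory interaction concrete, a hedged sketch combining the options above. Reading the notes literally, with these illustrative values 70% of the 100 query slots get the full 4GB scan budget and the remaining 30% are capped at 70% of it (about 2.8GB):

```toml
max_concurrent_queries = 100   # enables the privileged/standard memory tiers

[[region_engine]]
[region_engine.mito]
scan_memory_limit = "4GB"      # shared budget for table scans across all queries
```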
@@ -179,32 +195,28 @@
| `region_engine.mito.memtable.fork_dictionary_bytes` | String | `1GiB` | Max dictionary bytes.<br/>Only available for `partition_tree` memtable. |
| `region_engine.file` | -- | -- | Enable the file engine. |
| `region_engine.metric` | -- | -- | Metric engine options. |
| `region_engine.metric.experimental_sparse_primary_key_encoding` | Bool | `false` | Whether to enable the experimental sparse primary key encoding. |
| `region_engine.metric.sparse_primary_key_encoding` | Bool | `true` | Whether to use sparse primary key encoding. |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`; 1 means all traces are sampled, 0 means no traces are sampled. The default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP over HTTP. |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`; 1 means all traces are sampled, 0 means no traces are sampled. The default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `slow_query` | -- | -- | The slow query log options. |
| `slow_query.enable` | Bool | `false` | Whether to enable slow query log. |
| `slow_query.record_type` | String | Unset | The record type of slow queries. It can be `system_table` or `log`. |
| `slow_query.threshold` | String | Unset | The threshold of slow query. |
| `slow_query.sample_ratio` | Float | Unset | The sampling ratio of slow query log. The value should be in the range of (0, 1]. |
| `export_metrics` | -- | -- | The standalone instance can export its metrics to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from Prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval of exporting metrics. |
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended to collect metrics generated by itself.<br/>You must create the database before enabling it. |
| `export_metrics.self_import.db` | String | Unset | -- |
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The Prometheus remote write endpoint that the metrics are sent to. Example: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers to carry in Prometheus remote-write requests. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
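As a worked example of the `self_import` path described above; the database name is a placeholder and, per the note, must be created before enabling:

```toml
[export_metrics]
enable = true
write_interval = "30s"

# Standalone mode: write metrics back into a local database.
[export_metrics.self_import]
db = "greptime_metrics"   # placeholder; create this database first
```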
## Distributed Mode
@@ -214,6 +226,7 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `default_timezone` | String | Unset | The default timezone of the server. |
| `default_column_prefix` | String | Unset | The default column prefix for auto-created time index and value columns. |
| `max_in_flight_write_bytes` | String | Unset | The maximum in-flight write bytes. |
| `runtime` | -- | -- | The runtime options. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
@@ -225,6 +238,7 @@
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
| `http.timeout` | String | `0s` | HTTP request timeout. Set to 0 to disable timeout. |
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.<br/>Set to 0 to disable limit. |
| `http.max_total_body_memory` | String | Unset | Maximum total memory for all concurrent HTTP request bodies.<br/>Set to 0 to disable the limit. Default: "0" (unlimited) |
| `http.enable_cors` | Bool | `true` | HTTP CORS support, enabled by default.<br/>This allows browsers to access HTTP APIs without CORS restrictions. |
| `http.cors_allowed_origins` | Array | Unset | Customize allowed origins for HTTP CORS. |
| `http.prom_validation_mode` | String | `strict` | Whether to enable validation for Prometheus remote write requests.<br/>Available options:<br/>- strict: deny invalid UTF-8 strings (default).<br/>- lossy: allow invalid UTF-8 strings, replace invalid characters with REPLACEMENT_CHARACTER(U+FFFD).<br/>- unchecked: do not validate strings. |
@@ -232,16 +246,30 @@
| `grpc.bind_addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
| `grpc.server_addr` | String | `127.0.0.1:4001` | The address advertised to the metasrv, and used for connections from outside the host.<br/>If left empty or unset, the server will automatically use the IP address of the first network interface<br/>on the host, with the same port number as the one specified in `grpc.bind_addr`. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.max_total_message_memory` | String | Unset | Maximum total memory for all concurrent gRPC request messages.<br/>Set to 0 to disable the limit. Default: "0" (unlimited) |
| `grpc.flight_compression` | String | `arrow_ipc` | Compression mode for the frontend side Arrow IPC service. Available options:<br/>- `none`: disable all compression<br/>- `transport`: only enable gRPC transport compression (zstd)<br/>- `arrow_ipc`: only enable Arrow IPC compression (lz4)<br/>- `all`: enable all compression.<br/>Defaults to `arrow_ipc`. |
| `grpc.max_connection_age` | String | Unset | The maximum connection age for gRPC connection.<br/>The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.<br/>Refer to https://grpc.io/docs/guides/keepalive/ for more details. |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
| `grpc.tls.key_path` | String | Unset | Private key file path. |
| `grpc.tls.watch` | Bool | `false` | Watch for certificate and key file changes and auto-reload.<br/>For now, gRPC TLS config does not support auto reload. |
| `internal_grpc` | -- | -- | The internal gRPC server options. Internal gRPC port for nodes inside cluster to access frontend. |
| `internal_grpc.bind_addr` | String | `127.0.0.1:4010` | The address to bind the gRPC server. |
| `internal_grpc.server_addr` | String | `127.0.0.1:4010` | The address advertised to the metasrv, and used for connections from outside the host.<br/>If left empty or unset, the server will automatically use the IP address of the first network interface<br/>on the host, with the same port number as the one specified in `grpc.bind_addr`. |
| `internal_grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `internal_grpc.flight_compression` | String | `arrow_ipc` | Compression mode for the frontend side Arrow IPC service. Available options:<br/>- `none`: disable all compression<br/>- `transport`: only enable gRPC transport compression (zstd)<br/>- `arrow_ipc`: only enable Arrow IPC compression (lz4)<br/>- `all`: enable all compression.<br/>Defaults to `arrow_ipc`. |
| `internal_grpc.tls` | -- | -- | internal gRPC server TLS options, see `mysql.tls` section. |
| `internal_grpc.tls.mode` | String | `disable` | TLS mode. |
| `internal_grpc.tls.cert_path` | String | Unset | Certificate file path. |
| `internal_grpc.tls.key_path` | String | Unset | Private key file path. |
| `internal_grpc.tls.watch` | Bool | `false` | Watch for certificate and key file changes and auto-reload.<br/>For now, gRPC TLS config does not support auto reload. |
| `mysql` | -- | -- | MySQL server options. |
| `mysql.enable` | Bool | `true` | Whether to enable. |
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
| `mysql.runtime_size` | Integer | `2` | The number of server worker threads. |
| `mysql.keep_alive` | String | `0s` | Server-side keep-alive time.<br/>Set to 0 (default) to disable. |
| `mysql.prepared_stmt_cache_size` | Integer | `10000` | Maximum entries in the MySQL prepared statement cache; default is 10,000. |
| `mysql.tls` | -- | -- | -- |
| `mysql.tls.mode` | String | `disable` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- `disable` (default value)<br/>- `prefer`<br/>- `require`<br/>- `verify-ca`<br/>- `verify-full` |
| `mysql.tls.cert_path` | String | Unset | Certificate file path. |
@@ -269,7 +297,6 @@
| `meta_client` | -- | -- | The metasrv client options. |
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
| `meta_client.timeout` | String | `3s` | Operation timeout. |
| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
@@ -278,6 +305,8 @@
| `meta_client.metadata_cache_tti` | String | `5m` | -- |
| `query` | -- | -- | The query engine options. |
| `query.parallelism` | Integer | `0` | Parallelism of the query engine.<br/>Default to 0, which means the number of CPU cores. |
| `query.allow_query_fallback` | Bool | `false` | Whether to allow query fallback when push-down optimization fails.<br/>Defaults to false, meaning an error message is returned when push-down optimization fails. |
| `query.memory_pool_size` | String | `50%` | Memory pool size for query execution operators (aggregation, sorting, join).<br/>Supports absolute size (e.g., "4GB", "8GB") or percentage of system memory (e.g., "30%").<br/>Setting it to 0 disables the limit (unbounded, default behavior).<br/>When this limit is reached, queries will fail with ResourceExhausted error.<br/>NOTE: This does NOT limit memory used by table scans (only applies to datanodes). |
| `datanode` | -- | -- | Datanode options. |
| `datanode.client` | -- | -- | Datanode client options. |
| `datanode.client.connect_timeout` | String | `10s` | -- |
@@ -286,26 +315,26 @@
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`; 1 means all traces are sampled, 0 means no traces are sampled. The default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP over HTTP. |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`; 1 means all traces are sampled, 0 means no traces are sampled. The default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `slow_query` | -- | -- | The slow query log options. |
| `slow_query.enable` | Bool | `true` | Whether to enable slow query log. |
| `slow_query.record_type` | String | `system_table` | The record type of slow queries. It can be `system_table` or `log`.<br/>If `system_table` is selected, the slow queries will be recorded in a system table `greptime_private.slow_queries`.<br/>If `log` is selected, the slow queries will be logged in a log file `greptimedb-slow-queries.*`. |
| `slow_query.threshold` | String | `30s` | The threshold of slow query. It can be human readable time string, for example: `10s`, `100ms`, `1s`. |
| `slow_query.sample_ratio` | Float | `1.0` | The sampling ratio of slow query log. The value should be in the range of (0, 1]. For example, `0.1` means 10% of the slow queries will be logged and `1.0` means all slow queries will be logged. |
| `slow_query.ttl` | String | `30d` | The TTL of the `slow_queries` system table. Default is `30d` when `record_type` is `system_table`. |
| `export_metrics` | -- | -- | The frontend can export its metrics to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from Prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval of exporting metrics. |
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The Prometheus remote write endpoint that the metrics are sent to. Example: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers to carry in Prometheus remote-write requests. |
| `slow_query.ttl` | String | `90d` | The TTL of the `slow_queries` system table. Default is `90d` when `record_type` is `system_table`. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
| `event_recorder` | -- | -- | Configuration options for the event recorder. |
| `event_recorder.ttl` | String | `90d` | TTL for the events table that will be used to store the events. Default is `90d`. |
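A small illustrative slow-query block assembled from the rows above; the threshold and sampling ratio are examples only:

```toml
[slow_query]
enable = true
record_type = "system_table"  # or "log" for greptimedb-slow-queries.* files
threshold = "10s"             # example threshold
sample_ratio = 0.5            # record 50% of slow queries
ttl = "90d"                   # retention of greptime_private.slow_queries
```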
### Metasrv
@@ -313,26 +342,40 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `data_home` | String | `./greptimedb_data` | The working home directory. |
| `store_addrs` | Array | -- | Store server address, defaulting to the etcd store.<br/>For postgres store, the format is:<br/>"password=password dbname=postgres user=postgres host=localhost port=5432"<br/>For etcd store, the format is:<br/>"127.0.0.1:2379" |
| `store_addrs` | Array | -- | Store server address(es). The format depends on the selected backend.<br/><br/>For etcd: a list of "host:port" endpoints.<br/>e.g. ["192.168.1.1:2379", "192.168.1.2:2379"]<br/><br/>For PostgreSQL: a connection string in libpq format or URI.<br/>e.g.<br/>- "host=localhost port=5432 user=postgres password=<PASSWORD> dbname=postgres"<br/>- "postgresql://user:password@localhost:5432/mydb?connect_timeout=10"<br/>For details, see: https://docs.rs/tokio-postgres/latest/tokio_postgres/config/struct.Config.html<br/><br/>For MySQL: a connection URL.<br/>e.g. "mysql://user:password@localhost:3306/greptime_meta?ssl-mode=VERIFY_CA&ssl-ca=/path/to/ca.pem" |
| `store_key_prefix` | String | `""` | If it's not empty, the metasrv will store all data with this key prefix. |
| `backend` | String | `etcd_store` | The datastore for meta server.<br/>Available values:<br/>- `etcd_store` (default value)<br/>- `memory_store`<br/>- `postgres_store`<br/>- `mysql_store` |
| `meta_table_name` | String | `greptime_metakv` | Table name in RDS to store metadata. Effective when using an RDS kvbackend.<br/>**Only used when backend is `postgres_store`.** |
| `meta_schema_name` | String | `greptime_schema` | Optional PostgreSQL schema for metadata table and election table name qualification.<br/>When the PostgreSQL public schema is not writable (e.g., PostgreSQL 15+ with restricted public),<br/>set this to a writable schema. GreptimeDB will use `meta_schema_name`.`meta_table_name`.<br/>GreptimeDB will NOT create the schema automatically; please ensure it exists or the user has permission.<br/>**Only used when backend is `postgres_store`.** |
| `meta_election_lock_id` | Integer | `1` | Advisory lock id in PostgreSQL for election. Effective when using PostgreSQL as the kvbackend.<br/>**Only used when backend is `postgres_store`.** |
| `selector` | String | `round_robin` | Datanode selector type.<br/>- `round_robin` (default value)<br/>- `lease_based`<br/>- `load_based`<br/>For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector". |
| `use_memory_store` | Bool | `false` | Store data in memory. |
| `enable_region_failover` | Bool | `false` | Whether to enable region failover.<br/>This feature is only available on GreptimeDB running on cluster mode and<br/>- Using Remote WAL<br/>- Using shared storage (e.g., s3). |
| `region_failure_detector_initialization_delay` | String | `10m` | The delay before starting region failure detection.<br/>This delay helps prevent Metasrv from triggering unnecessary region failovers before all Datanodes are fully started.<br/>Especially useful when the cluster is not deployed with GreptimeDB Operator and maintenance mode is not enabled. |
| `allow_region_failover_on_local_wal` | Bool | `false` | Whether to allow region failover on local WAL.<br/>**This option is not recommended to be set to true, because it may lead to data loss during failover.** |
| `node_max_idle_time` | String | `24hours` | Max allowed idle time before removing node info from metasrv memory. |
| `heartbeat_interval` | String | `3s` | Base heartbeat interval for calculating distributed time constants.<br/>The frontend heartbeat interval is 6 times the base heartbeat interval.<br/>The flownode/datanode heartbeat interval is equal to the base heartbeat interval.<br/>e.g., if the base heartbeat interval is 3s, the frontend heartbeat interval is 18s and the flownode/datanode heartbeat interval is 3s.<br/>If you change this value, you need to change the heartbeat interval of the flownode/frontend/datanode accordingly. |
| `enable_telemetry` | Bool | `true` | Whether to enable greptimedb telemetry. Enabled by default. |
| `runtime` | -- | -- | The runtime options. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for global write operations. |
| `backend_tls` | -- | -- | TLS configuration for kv store backend (applicable for etcd, PostgreSQL, and MySQL backends)<br/>When using etcd, PostgreSQL, or MySQL as metadata store, you can configure TLS here<br/><br/>Note: if TLS is configured in both this section and the `store_addrs` connection string, the<br/>settings here will override the TLS settings in `store_addrs`. |
| `backend_tls.mode` | String | `prefer` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- "disable" - No TLS<br/>- "prefer" (default) - Try TLS, fallback to plain<br/>- "require" - Require TLS<br/>- "verify_ca" - Require TLS and verify CA<br/>- "verify_full" - Require TLS and verify hostname |
| `backend_tls.cert_path` | String | `""` | Path to client certificate file (for client authentication)<br/>Like "/path/to/client.crt" |
| `backend_tls.key_path` | String | `""` | Path to client private key file (for client authentication)<br/>Like "/path/to/client.key" |
| `backend_tls.ca_cert_path` | String | `""` | Path to CA certificate file (for server certificate verification)<br/>Required when using custom CAs or self-signed certificates<br/>Leave empty to use system root certificates only<br/>Like "/path/to/ca.crt" |
| `backend_client` | -- | -- | The backend client options.<br/>Currently, only applicable when using etcd as the metadata store. |
| `backend_client.keep_alive_timeout` | String | `3s` | The keep alive timeout for backend client. |
| `backend_client.keep_alive_interval` | String | `10s` | The keep alive interval for backend client. |
| `backend_client.connect_timeout` | String | `3s` | The connect timeout for backend client. |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:3002` | The address to bind the gRPC server. |
| `grpc.server_addr` | String | `127.0.0.1:3002` | The communication server address for the frontend and datanode to connect to metasrv.<br/>If left empty or unset, the server will automatically use the IP address of the first network interface<br/>on the host, with the same port number as the one specified in `bind_addr`. |
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.max_recv_message_size` | String | `512MB` | The maximum receive message size for gRPC server. |
| `grpc.max_send_message_size` | String | `512MB` | The maximum send message size for gRPC server. |
| `grpc.http2_keep_alive_interval` | String | `10s` | The server side HTTP/2 keep-alive interval |
| `grpc.http2_keep_alive_timeout` | String | `3s` | The server side HTTP/2 keep-alive timeout. |
| `http` | -- | -- | The HTTP server options. |
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
| `http.timeout` | String | `0s` | HTTP request timeout. Set to 0 to disable timeout. |
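Pulling together the metadata-store rows earlier in this table (`backend`, `store_addrs`, `meta_table_name`, `backend_tls`), a hedged PostgreSQL-backed metasrv sketch; the connection string and paths are placeholders:

```toml
backend = "postgres_store"
store_addrs = ["host=localhost port=5432 user=postgres password=<PASSWORD> dbname=postgres"]
meta_table_name = "greptime_metakv"
meta_schema_name = "greptime_schema"   # must already exist; not created automatically

# These settings override any TLS options embedded in store_addrs.
[backend_tls]
mode = "verify_full"
ca_cert_path = "/path/to/ca.crt"       # placeholder path
```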
@@ -343,10 +386,9 @@
| `procedure.max_metadata_value_size` | String | `1500KiB` | Automatically split large values.<br/>GreptimeDB procedures use etcd as the default metadata storage backend.<br/>In etcd, the maximum size of any request is 1.5 MiB.<br/>1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key).<br/>Comment out `max_metadata_value_size` to disable splitting large values (no limit). |
| `procedure.max_running_procedures` | Integer | `128` | Max running procedures.<br/>The maximum number of procedures that can be running at the same time.<br/>If the number of running procedures exceeds this limit, the procedure will be rejected. |
| `failure_detector` | -- | -- | -- |
| `failure_detector.threshold` | Float | `8.0` | The threshold value used by the failure detector to determine failure conditions. |
| `failure_detector.min_std_deviation` | String | `100ms` | The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations. |
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable. |
| `failure_detector.first_heartbeat_estimate` | String | `1000ms` | The initial estimate of the heartbeat interval used by the failure detector. |
| `failure_detector.threshold` | Float | `8.0` | Maximum acceptable φ before the peer is treated as failed.<br/>Lower values react faster but yield more false positives. |
| `failure_detector.min_std_deviation` | String | `100ms` | The minimum standard deviation of the heartbeat intervals, so tiny variations don't make φ explode.<br/>Prevents hypersensitivity when heartbeat intervals barely vary. |
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | The acceptable pause duration between heartbeats.<br/>An extra grace period added to the learned mean interval before φ rises, absorbing temporary network hiccups or GC pauses. |
| `datanode` | -- | -- | Datanode options. |
| `datanode.client` | -- | -- | Datanode client options. |
| `datanode.client.timeout` | String | `10s` | Operation timeout. |
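For background, the `failure_detector` knobs above map onto the φ-accrual failure detector (Hayashibara et al.). In the common formulation (a sketch, not necessarily GreptimeDB's exact implementation), the suspicion level for a peer whose last heartbeat arrived at time $t_{\text{last}}$ is

$$
\varphi(t) = -\log_{10}\left(1 - F_{\mu + P,\ \sigma}(t - t_{\text{last}})\right)
$$

where $F$ is a normal CDF whose mean is the learned heartbeat interval $\mu$ plus the `acceptable_heartbeat_pause` $P$, and $\sigma$ is the observed standard deviation floored at `min_std_deviation`; the peer is suspected once $\varphi$ exceeds `threshold`.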
@@ -354,34 +396,38 @@
| `datanode.client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
| `wal` | -- | -- | -- |
| `wal.provider` | String | `raft_engine` | -- |
| `wal.broker_endpoints` | Array | -- | The broker endpoints of the Kafka cluster. |
| `wal.auto_create_topics` | Bool | `true` | Automatically create topics for WAL.<br/>Set to `true` to automatically create topics for WAL.<br/>Otherwise, use topics named `topic_name_prefix_[0..num_topics)` |
| `wal.auto_prune_interval` | String | `0s` | Interval of automatic WAL pruning.<br/>Set to `0s` to disable automatic WAL pruning, which periodically deletes unused remote WAL entries. |
| `wal.trigger_flush_threshold` | Integer | `0` | The threshold to trigger a flush operation of a region during automatic WAL pruning.<br/>Metasrv will send a flush request to flush the region when:<br/>`trigger_flush_threshold` + `prunable_entry_id` < `max_prunable_entry_id`<br/>where:<br/>- `prunable_entry_id` is the maximum entry id that can be pruned of the region.<br/>- `max_prunable_entry_id` is the maximum prunable entry id among all regions in the same topic.<br/>Set to `0` to disable the flush operation. |
| `wal.auto_prune_parallelism` | Integer | `10` | Concurrent task limit for automatic WAL pruning. |
| `wal.num_topics` | Integer | `64` | Number of topics. |
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default) |
| `wal.topic_name_prefix` | String | `greptimedb_wal_topic` | A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.<br/>Only accepts strings that match the following regular expression pattern:<br/>[a-zA-Z_:-][a-zA-Z0-9_:\-\.@#]*<br/>e.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1. |
| `wal.replication_factor` | Integer | `1` | Expected number of replicas of each partition. |
| `wal.create_topic_timeout` | String | `30s` | The duration above which a topic creation operation is cancelled. |
| `wal.broker_endpoints` | Array | -- | The broker endpoints of the Kafka cluster.<br/><br/>**It's only used when the provider is `kafka`**. |
| `wal.auto_create_topics` | Bool | `true` | Automatically create topics for WAL.<br/>Set to `true` to automatically create topics for WAL.<br/>Otherwise, use topics named `topic_name_prefix_[0..num_topics)`<br/>**It's only used when the provider is `kafka`**. |
| `wal.auto_prune_interval` | String | `30m` | Interval of automatic WAL pruning.<br/>Set to `0s` to disable automatic WAL pruning, which periodically deletes unused remote WAL entries.<br/>**It's only used when the provider is `kafka`**. |
| `wal.flush_trigger_size` | String | `512MB` | Estimated size threshold to trigger a flush when using Kafka remote WAL.<br/>Since multiple regions may share a Kafka topic, the estimated size is calculated as:<br/> (latest_entry_id - flushed_entry_id) * avg_record_size<br/>MetaSrv triggers a flush for a region when this estimated size exceeds `flush_trigger_size`.<br/>- `latest_entry_id`: The latest entry ID in the topic.<br/>- `flushed_entry_id`: The last flushed entry ID for the region.<br/>Set to "0" to let the system decide the flush trigger size.<br/>**It's only used when the provider is `kafka`**. |
| `wal.checkpoint_trigger_size` | String | `128MB` | Estimated size threshold to trigger a checkpoint when using Kafka remote WAL.<br/>The estimated size is calculated as:<br/> (latest_entry_id - last_checkpoint_entry_id) * avg_record_size<br/>MetaSrv triggers a checkpoint for a region when this estimated size exceeds `checkpoint_trigger_size`.<br/>Set to "0" to let the system decide the checkpoint trigger size.<br/>**It's only used when the provider is `kafka`**. |
| `wal.auto_prune_parallelism` | Integer | `10` | Concurrent task limit for automatic WAL pruning.<br/>**It's only used when the provider is `kafka`**. |
| `wal.num_topics` | Integer | `64` | Number of topics used for remote WAL.<br/>**It's only used when the provider is `kafka`**. |
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default)<br/>**It's only used when the provider is `kafka`**. |
| `wal.topic_name_prefix` | String | `greptimedb_wal_topic` | A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.<br/>Only accepts strings that match the following regular expression pattern:<br/>[a-zA-Z_:-][a-zA-Z0-9_:\-\.@#]*<br/>e.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1.<br/>**It's only used when the provider is `kafka`**. |
| `wal.replication_factor` | Integer | `1` | Expected number of replicas of each partition.<br/>**It's only used when the provider is `kafka`**. |
| `wal.create_topic_timeout` | String | `30s` | The timeout for creating a Kafka topic.<br/>**It's only used when the provider is `kafka`**. |
| `event_recorder` | -- | -- | Configuration options for the event recorder. |
| `event_recorder.ttl` | String | `90d` | TTL for the events table that will be used to store the events. Default is `90d`. |
| `stats_persistence` | -- | -- | Configuration options for the stats persistence. |
| `stats_persistence.ttl` | String | `0s` | TTL for the stats table that will be used to store the stats.<br/>Set to `0s` to disable stats persistence.<br/>Default is `0s`.<br/>If you want to enable stats persistence, set the TTL to a value greater than 0.<br/>It is recommended to set a small value, e.g., `3h`. |
| `stats_persistence.interval` | String | `10m` | The interval to persist the stats. Default is `10m`.<br/>The minimum value is `10m`, if the value is less than `10m`, it will be overridden to `10m`. |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`; 1 means all traces are sampled, 0 means no traces are sampled. The default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP over HTTP. |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`; 1 means all traces are sampled, 0 means no traces are sampled. The default value is 1.<br/>Ratios > 1 are treated as 1; ratios < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `export_metrics` | -- | -- | The metasrv can export its metrics to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from Prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval of exporting metrics. |
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The Prometheus remote write endpoint that the metrics are sent to. Example: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers to carry in Prometheus remote-write requests. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
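Illustrative metasrv remote-WAL housekeeping settings drawn from the `wal.*` rows earlier in this table; the sizes are examples, and both triggers accept "0" to let the system decide:

```toml
[wal]
provider = "kafka"
broker_endpoints = ["kafka-0:9092"]  # placeholder broker
auto_prune_interval = "30m"          # prune unused remote WAL entries every 30 minutes
auto_prune_parallelism = 10
flush_trigger_size = "512MB"         # ~ (latest_entry_id - flushed_entry_id) * avg_record_size
checkpoint_trigger_size = "128MB"    # ~ (latest_entry_id - last_checkpoint_entry_id) * avg_record_size
```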
### Datanode
@@ -389,10 +435,11 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `node_id` | Integer | Unset | The datanode identifier; it should be unique in the cluster. |
| `default_column_prefix` | String | Unset | The default column prefix for auto-created time index and value columns. |
| `require_lease_before_startup` | Bool | `false` | Start services after regions have obtained leases.<br/>It will block the datanode start if it can't receive leases in the heartbeat from metasrv. |
| `init_regions_in_background` | Bool | `false` | Initialize all regions in the background during the startup.<br/>By default, it provides services after all regions have been initialized. |
| `init_regions_parallelism` | Integer | `16` | Parallelism of initializing regions. |
| `max_concurrent_queries` | Integer | `0` | The maximum number of concurrent queries allowed to be executed. Zero means unlimited. |
| `max_concurrent_queries` | Integer | `0` | The maximum number of concurrent queries allowed to be executed. Zero means unlimited.<br/>NOTE: This setting affects scan_memory_limit's privileged tier allocation.<br/>When set, 70% of queries get privileged memory access (full scan_memory_limit).<br/>The remaining 30% get standard tier access (70% of scan_memory_limit). |
| `enable_telemetry` | Bool | `true` | Enable telemetry to collect anonymous usage data. Enabled by default. |
| `http` | -- | -- | The HTTP server options. |
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
@@ -404,6 +451,7 @@
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
| `grpc.max_recv_message_size` | String | `512MB` | The maximum receive message size for gRPC server. |
| `grpc.max_send_message_size` | String | `512MB` | The maximum send message size for gRPC server. |
| `grpc.flight_compression` | String | `arrow_ipc` | Compression mode for datanode side Arrow IPC service. Available options:<br/>- `none`: disable all compression<br/>- `transport`: only enable gRPC transport compression (zstd)<br/>- `arrow_ipc`: only enable Arrow IPC compression (lz4)<br/>- `all`: enable all compression.<br/>Default to `none` |
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
| `grpc.tls.mode` | String | `disable` | TLS mode. |
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
@@ -418,7 +466,6 @@
| `meta_client` | -- | -- | The metasrv client options. |
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
| `meta_client.timeout` | String | `3s` | Operation timeout. |
| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
@@ -426,11 +473,11 @@
| `meta_client.metadata_cache_ttl` | String | `10m` | TTL of the metadata cache. |
| `meta_client.metadata_cache_tti` | String | `5m` | -- |
| `wal` | -- | -- | The WAL options. |
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the WAL is stored in the local file system by raft-engine.<br/>- `kafka`: remote WAL whose data is stored in Kafka. |
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the WAL is stored in the local file system by raft-engine.<br/>- `kafka`: remote WAL whose data is stored in Kafka.<br/>- `noop`: a no-op WAL provider that does not store any WAL data.<br/>**Note: any unflushed data will be lost when the datanode is shut down.** |
| `wal.dir` | String | Unset | The directory to store the WAL files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.file_size` | String | `128MB` | The size of the WAL segment file.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_threshold` | String | `1GB` | The threshold of the WAL size to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_interval` | String | `1m` | The interval to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_threshold` | String | `1GB` | The threshold of the WAL size to trigger a purge.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.purge_interval` | String | `1m` | The interval to trigger a purge.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.read_batch_size` | Integer | `128` | The read batch size.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.sync_write` | Bool | `false` | Whether to use sync write.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.enable_log_recycle` | Bool | `true` | Whether to reuse logically truncated log files.<br/>**It's only used when the provider is `raft_engine`**. |
@@ -438,6 +485,8 @@
| `wal.sync_period` | String | `10s` | Duration for fsyncing log files.<br/>**It's only used when the provider is `raft_engine`**. |
| `wal.recovery_parallelism` | Integer | `2` | Parallelism during WAL recovery. |
| `wal.broker_endpoints` | Array | -- | The Kafka broker endpoints.<br/>**It's only used when the provider is `kafka`**. |
| `wal.connect_timeout` | String | `3s` | The connect timeout for kafka client.<br/>**It's only used when the provider is `kafka`**. |
| `wal.timeout` | String | `3s` | The timeout for kafka client.<br/>**It's only used when the provider is `kafka`**. |
| `wal.max_batch_bytes` | String | `1MB` | The max size of a single producer batch.<br/>Warning: Kafka has a default limit of 1MB per message in a topic.<br/>**It's only used when the provider is `kafka`**. |
| `wal.consumer_wait_timeout` | String | `100ms` | The consumer wait timeout.<br/>**It's only used when the provider is `kafka`**. |
| `wal.create_index` | Bool | `true` | Whether to enable WAL index creation.<br/>**It's only used when the provider is `kafka`**. |
@@ -445,11 +494,10 @@
| `wal.overwrite_entry_start_id` | Bool | `false` | Ignore missing entries during read WAL.<br/>**It's only used when the provider is `kafka`**.<br/><br/>This option ensures that when Kafka messages are deleted, the system<br/>can still successfully replay memtable data without throwing an<br/>out-of-range error.<br/>However, enabling this option might lead to unexpected data loss,<br/>as the system will skip over missing entries instead of treating<br/>them as critical errors. |
| `query` | -- | -- | The query engine options. |
| `query.parallelism` | Integer | `0` | Parallelism of the query engine.<br/>Default to 0, which means the number of CPU cores. |
| `query.memory_pool_size` | String | `50%` | Memory pool size for query execution operators (aggregation, sorting, join).<br/>Supports absolute size (e.g., "2GB", "4GB") or percentage of system memory (e.g., "20%").<br/>Setting it to 0 disables the limit (unbounded, default behavior).<br/>When this limit is reached, queries will fail with ResourceExhausted error.<br/>NOTE: This does NOT limit memory used by table scans. |
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `./greptimedb_data` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
| `storage.cache_path` | String | Unset | Read cache configuration for object storage such as 'S3'. It is configured by default when using object storage, and configuring it is recommended for better performance.<br/>A local file directory, defaults to `{data_home}`. An empty string means disabling. |
| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger. |
| `storage.bucket` | String | Unset | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
| `storage.root` | String | Unset | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.<br/>**It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
| `storage.access_key_id` | String | Unset | The access key id of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3` and `Oss`**. |
@@ -469,16 +517,21 @@
| `storage.http_client.connect_timeout` | String | `30s` | The timeout for only the connect phase of a http client. |
| `storage.http_client.timeout` | String | `30s` | The total request timeout, applied from when the request starts connecting until the response body has finished.<br/>Also considered a total deadline. |
| `storage.http_client.pool_idle_timeout` | String | `90s` | The timeout for idle sockets being kept-alive. |
| `storage.http_client.skip_ssl_validation` | Bool | `false` | Whether to skip SSL certificate verification.<br/>**Security Notice**: Setting `skip_ssl_validation = true` disables certificate verification, making connections vulnerable to man-in-the-middle attacks. Only use this in development or trusted private networks. |
| `[[region_engine]]` | -- | -- | The region engine options. You can configure multiple region engines. |
| `region_engine.mito` | -- | -- | The Mito engine options. |
| `region_engine.mito.num_workers` | Integer | `8` | Number of region workers. |
| `region_engine.mito.worker_channel_size` | Integer | `128` | Request channel size of each worker. |
| `region_engine.mito.worker_request_batch_size` | Integer | `64` | Max batch size for a worker to handle requests. |
| `region_engine.mito.manifest_checkpoint_distance` | Integer | `10` | Number of meta action updated to trigger a new checkpoint for the manifest. |
| `region_engine.mito.experimental_manifest_keep_removed_file_count` | Integer | `256` | Number of removed files to keep in the manifest's `removed_files` field before also<br/>removing them from `removed_files`. Mostly for debugging purposes.<br/>If set to 0, only `keep_removed_file_ttl` decides when to remove files<br/>from the `removed_files` field. |
| `region_engine.mito.experimental_manifest_keep_removed_file_ttl` | String | `1h` | How long to keep removed files in the `removed_files` field of the manifest<br/>after they are removed from the manifest.<br/>Files are only removed from the `removed_files` field<br/>when both `keep_removed_file_count` and `keep_removed_file_ttl` are reached. |
| `region_engine.mito.compress_manifest` | Bool | `false` | Whether to compress manifest and checkpoint file by gzip (default false). |
| `region_engine.mito.max_background_flushes` | Integer | Auto | Max number of running background flush jobs (default: 1/2 of cpu cores). |
| `region_engine.mito.max_background_compactions` | Integer | Auto | Max number of running background compaction jobs (default: 1/4 of cpu cores). |
| `region_engine.mito.max_background_purges` | Integer | Auto | Max number of running background purge jobs (default: number of cpu cores). |
| `region_engine.mito.experimental_compaction_memory_limit` | String | `0` | Memory budget for compaction tasks. Setting it to 0 or "unlimited" disables the limit. |
| `region_engine.mito.experimental_compaction_on_exhausted` | String | `wait` | Behavior when compaction cannot acquire memory from the budget.<br/>Options: "wait" (default, 10s), "wait(<duration>)", "fail" |
| `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
| `region_engine.mito.global_write_buffer_size` | String | Auto | Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB. |
| `region_engine.mito.global_write_buffer_reject_size` | String | Auto | Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size` |
@@ -490,10 +543,17 @@
| `region_engine.mito.write_cache_path` | String | `""` | File system path for write cache, defaults to `{data_home}`. |
| `region_engine.mito.write_cache_size` | String | `5GiB` | Capacity for write cache. If your disk space is sufficient, it is recommended to set it larger. |
| `region_engine.mito.write_cache_ttl` | String | Unset | TTL for write cache. |
| `region_engine.mito.preload_index_cache` | Bool | `true` | Preload index (puffin) files into cache on region open (default: true).<br/>When enabled, index files are loaded into the write cache during region initialization,<br/>which can improve query performance at the cost of longer startup times. |
| `region_engine.mito.index_cache_percent` | Integer | `20` | Percentage of write cache capacity allocated for index (puffin) files (default: 20).<br/>The remaining capacity is used for data (parquet) files.<br/>Must be between 0 and 100 (exclusive). For example, with a 5GiB write cache and 20% allocation,<br/>1GiB is reserved for index files and 4GiB for data files. |
| `region_engine.mito.enable_refill_cache_on_read` | Bool | `true` | Enable refilling cache on read operations (default: true).<br/>When disabled, cache refilling on read won't happen. |
| `region_engine.mito.manifest_cache_size` | String | `256MB` | Capacity for manifest cache (default: 256MB). |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
| `region_engine.mito.max_concurrent_scan_files` | Integer | `384` | Maximum number of SST files to scan concurrently. |
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
| `region_engine.mito.scan_memory_limit` | String | `50%` | Memory limit for table scans across all queries.<br/>Supports absolute size (e.g., "2GB") or percentage of system memory (e.g., "20%").<br/>Setting it to 0 disables the limit.<br/>NOTE: Works with max_concurrent_queries for tiered memory allocation.<br/>- If max_concurrent_queries is set: 70% of queries get full access, 30% get 70% access.<br/>- If max_concurrent_queries is 0 (unlimited): first 20 queries get full access, rest get 70% access. |
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
| `region_engine.mito.default_experimental_flat_format` | Bool | `false` | Whether to enable experimental flat format as the default format. |
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
| `region_engine.mito.index.aux_path` | String | `""` | Auxiliary directory path for the index in filesystem, used to store intermediate files for<br/>creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.<br/>The default name for this directory is `index_intermediate` for backward compatibility.<br/><br/>This path contains two subdirectories:<br/>- `__intm`: for storing intermediate files used during creating index.<br/>- `staging`: for storing staging files used during searching index. |
| `region_engine.mito.index.staging_size` | String | `2GB` | The max capacity of the staging directory. |
@@ -525,25 +585,23 @@
| `region_engine.mito.memtable.fork_dictionary_bytes` | String | `1GiB` | Max dictionary bytes.<br/>Only available for `partition_tree` memtable. |
| `region_engine.file` | -- | -- | Enable the file engine. |
| `region_engine.metric` | -- | -- | Metric engine options. |
| `region_engine.metric.experimental_sparse_primary_key_encoding` | Bool | `false` | Whether to enable the experimental sparse primary key encoding. |
| `region_engine.metric.sparse_primary_key_encoding` | Bool | `true` | Whether to use sparse primary key encoding. |
| `logging` | -- | -- | The logging options. |
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are sampled; the default value is 1.<br/>Values > 1 are treated as 1 and values < 0 are treated as 0. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers; only valid when using OTLP over HTTP. |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are sampled; the default value is 1.<br/>Values > 1 are treated as 1 and values < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `export_metrics` | -- | -- | The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from Prometheus scrape. |
| `export_metrics.enable` | Bool | `false` | Whether to enable export metrics. |
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
| `export_metrics.remote_write` | -- | -- | -- |
| `export_metrics.remote_write.url` | String | `""` | The Prometheus remote-write endpoint that the metrics are sent to. An example url: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers to carry in Prometheus remote-write requests. |
| `tracing` | -- | -- | The tracing options. Only effective when compiled with the `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
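
To make the tiered scan-memory allocation above concrete, here is a hedged sketch (the numbers are illustrative, not recommendations): with 100 concurrent query slots and a 10GB scan budget, roughly 70 queries may each draw on the full 10GB limit while the remaining 30 are capped at 7GB.

```toml
# Illustrative datanode sketch of the tiered scan-memory settings.
max_concurrent_queries = 100   # 70% of slots (70 queries) get privileged access

[[region_engine]]
[region_engine.mito]
scan_memory_limit = "10GB"     # privileged tier: 10GB; standard tier: 70% = 7GB
```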
### Flownode
@@ -553,6 +611,22 @@
| `node_id` | Integer | Unset | The flownode identifier; it should be unique in the cluster. |
| `flow` | -- | -- | flow engine options. |
| `flow.num_workers` | Integer | `0` | The number of flow workers in the flownode.<br/>If not set (or set to 0), the number of CPU cores divided by 2 is used. |
| `flow.batching_mode` | -- | -- | -- |
| `flow.batching_mode.query_timeout` | String | `600s` | The default batching engine query timeout is 10 minutes. |
| `flow.batching_mode.slow_query_threshold` | String | `60s` | Outputs a warning log for any query that runs longer than this threshold. |
| `flow.batching_mode.experimental_min_refresh_duration` | String | `5s` | The minimum duration between two query executions by a batching mode task. |
| `flow.batching_mode.grpc_conn_timeout` | String | `5s` | The gRPC connection timeout. |
| `flow.batching_mode.experimental_grpc_max_retries` | Integer | `3` | The maximum number of gRPC retries. |
| `flow.batching_mode.experimental_frontend_scan_timeout` | String | `30s` | Timeout for the flownode to wait for an available frontend.<br/>If no available frontend is found after `frontend_scan_timeout` elapses, an error is returned,<br/>which prevents the flownode from starting. |
| `flow.batching_mode.experimental_frontend_activity_timeout` | String | `60s` | Frontend activity timeout.<br/>If a frontend is down (not sending heartbeats) for more than `frontend_activity_timeout`,<br/>it will be removed from the list of frontends the flownode connects to. |
| `flow.batching_mode.experimental_max_filter_num_per_query` | Integer | `20` | Maximum number of filters allowed in a single query. |
| `flow.batching_mode.experimental_time_window_merge_threshold` | Integer | `3` | Time window merge distance. |
| `flow.batching_mode.read_preference` | String | `Leader` | Read preference of the Frontend client. |
| `flow.batching_mode.frontend_tls` | -- | -- | -- |
| `flow.batching_mode.frontend_tls.enabled` | Bool | `false` | Whether to enable TLS for client. |
| `flow.batching_mode.frontend_tls.server_ca_cert_path` | String | Unset | Server Certificate file path. |
| `flow.batching_mode.frontend_tls.client_cert_path` | String | Unset | Client Certificate file path. |
| `flow.batching_mode.frontend_tls.client_key_path` | String | Unset | Client Private key file path. |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:6800` | The address to bind the gRPC server. |
| `grpc.server_addr` | String | `127.0.0.1:6800` | The address advertised to the metasrv,<br/>and used for connections from outside the host |
@@ -566,7 +640,6 @@
| `meta_client` | -- | -- | The metasrv client options. |
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
| `meta_client.timeout` | String | `3s` | Operation timeout. |
| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
@@ -580,11 +653,18 @@
| `logging.dir` | String | `./greptimedb_data/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
| `logging.otlp_endpoint` | String | `http://localhost:4318/v1/traces` | The OTLP tracing endpoint. |
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are sampled; the default value is 1.<br/>Values > 1 are treated as 1 and values < 0 are treated as 0. |
| `logging.otlp_export_protocol` | String | `http` | The OTLP tracing export protocol. Can be `grpc`/`http`. |
| `logging.otlp_headers` | -- | -- | Additional OTLP headers; only valid when using OTLP over HTTP. |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are sampled; the default value is 1.<br/>Values > 1 are treated as 1 and values < 0 are treated as 0. |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
| `tracing` | -- | -- | The tracing options. Only effective when compiled with the `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `query` | -- | -- | -- |
| `query.parallelism` | Integer | `1` | Parallelism of the query engine for queries sent by the flownode.<br/>Defaults to 1 so that it won't use too much CPU or memory. |
| `query.memory_pool_size` | String | `50%` | Memory pool size for query execution operators (aggregation, sorting, join).<br/>Supports absolute size (e.g., "1GB", "2GB") or percentage of system memory (e.g., "20%").<br/>Setting it to 0 disables the limit (unbounded, default behavior).<br/>When this limit is reached, queries will fail with ResourceExhausted error.<br/>NOTE: This does NOT limit memory used by table scans. |
| `memory` | -- | -- | The memory options. |
| `memory.enable_heap_profiling` | Bool | `true` | Whether to enable heap profiling activation during startup.<br/>When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable<br/>is set to "prof:true,prof_active:false". The official image adds this env variable.<br/>Default is true. |
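
For example, a flownode that connects to TLS-enabled frontends might combine the batching-mode options above as follows (the certificate paths are placeholders):

```toml
# Illustrative flownode snippet: TLS toward frontends, leader reads.
[flow.batching_mode]
read_preference = "Leader"

[flow.batching_mode.frontend_tls]
enabled = true
server_ca_cert_path = "/path/to/ca.crt"    # placeholder paths
client_cert_path = "/path/to/client.crt"
client_key_path = "/path/to/client.key"
```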


@@ -2,6 +2,10 @@
## @toml2docs:none-default
node_id = 42
## The default column prefix for auto-created time index and value columns.
## @toml2docs:none-default
default_column_prefix = "greptime"
## Start services after regions have obtained leases.
## It will block the datanode from starting if it can't receive leases in the heartbeat from metasrv.
require_lease_before_startup = false
@@ -14,6 +18,9 @@ init_regions_in_background = false
init_regions_parallelism = 16
## The maximum number of concurrent queries allowed to be executed. Zero means unlimited.
## NOTE: This setting affects scan_memory_limit's privileged tier allocation.
## When set, 70% of queries get privileged memory access (full scan_memory_limit).
## The remaining 30% get standard tier access (70% of scan_memory_limit).
max_concurrent_queries = 0
## Enable telemetry to collect anonymous usage data. Enabled by default.
@@ -44,6 +51,13 @@ runtime_size = 8
max_recv_message_size = "512MB"
## The maximum send message size for gRPC server.
max_send_message_size = "512MB"
## Compression mode for datanode side Arrow IPC service. Available options:
## - `none`: disable all compression
## - `transport`: only enable gRPC transport compression (zstd)
## - `arrow_ipc`: only enable Arrow IPC compression (lz4)
## - `all`: enable all compression.
## Default to `none`
flight_compression = "arrow_ipc"
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]
@@ -85,9 +99,6 @@ metasrv_addrs = ["127.0.0.1:3002"]
## Operation timeout.
timeout = "3s"
## Heartbeat timeout.
heartbeat_timeout = "500ms"
## DDL timeout.
ddl_timeout = "10s"
@@ -111,6 +122,7 @@ metadata_cache_tti = "5m"
## The provider of the WAL.
## - `raft_engine`: the WAL is stored in the local file system by raft-engine.
## - `kafka`: remote WAL whose data is stored in Kafka.
## - `noop`: a no-op WAL provider that does not store any WAL data.
##   **Note: any unflushed data will be lost when the datanode is shut down.**
provider = "raft_engine"
## The directory to store the WAL files.
@@ -122,11 +134,11 @@ dir = "./greptimedb_data/wal"
## **It's only used when the provider is `raft_engine`**.
file_size = "128MB"
## The threshold of the WAL size to trigger a flush.
## The threshold of the WAL size to trigger a purge.
## **It's only used when the provider is `raft_engine`**.
purge_threshold = "1GB"
## The interval to trigger a flush.
## The interval to trigger a purge.
## **It's only used when the provider is `raft_engine`**.
purge_interval = "1m"
@@ -157,6 +169,14 @@ recovery_parallelism = 2
## **It's only used when the provider is `kafka`**.
broker_endpoints = ["127.0.0.1:9092"]
## The connect timeout for kafka client.
## **It's only used when the provider is `kafka`**.
#+ connect_timeout = "3s"
## The timeout for kafka client.
## **It's only used when the provider is `kafka`**.
#+ timeout = "3s"
## The max size of a single producer batch.
## Warning: Kafka has a default limit of 1MB per message in a topic.
## **It's only used when the provider is `kafka`**.
@@ -213,6 +233,7 @@ overwrite_entry_start_id = false
# endpoint = "https://s3.amazonaws.com"
# region = "us-west-2"
# enable_virtual_host_style = false
# disable_ec2_metadata = false
# Example of using Oss as the storage.
# [storage]
@@ -249,6 +270,13 @@ overwrite_entry_start_id = false
## Default to 0, which means the number of CPU cores.
parallelism = 0
## Memory pool size for query execution operators (aggregation, sorting, join).
## Supports absolute size (e.g., "2GB", "4GB") or percentage of system memory (e.g., "20%").
## Setting it to 0 disables the limit (unbounded, default behavior).
## When this limit is reached, queries will fail with ResourceExhausted error.
## NOTE: This does NOT limit memory used by table scans.
memory_pool_size = "50%"
## The data storage options.
[storage]
## The working home directory.
@@ -262,15 +290,6 @@ data_home = "./greptimedb_data"
## - `Oss`: the data is stored in the Aliyun OSS.
type = "File"
## Read cache configuration for object storage such as 'S3'. It is configured by default when using object storage, and configuring it is recommended for better performance.
## A local file directory, defaults to `{data_home}`. An empty string means disabling.
## @toml2docs:none-default
#+ cache_path = ""
## The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger.
## @toml2docs:none-default
cache_capacity = "5GiB"
## The S3 bucket name.
## **It's only used when the storage type is `S3`, `Oss` and `Gcs`**.
## @toml2docs:none-default
@@ -360,6 +379,10 @@ timeout = "30s"
## The timeout for idle sockets being kept-alive.
pool_idle_timeout = "90s"
## Whether to skip SSL certificate verification.
## **Security Notice**: Setting `skip_ssl_validation = true` disables certificate verification, making connections vulnerable to man-in-the-middle attacks. Only use this in development or trusted private networks.
skip_ssl_validation = false
# Custom storage options
# [[storage.providers]]
# name = "S3"
@@ -398,6 +421,19 @@ worker_request_batch_size = 64
## Number of meta action updated to trigger a new checkpoint for the manifest.
manifest_checkpoint_distance = 10
## Number of removed files to keep in the manifest's `removed_files` field before also
## removing them from `removed_files`. Mostly for debugging purposes.
## If set to 0, only `keep_removed_file_ttl` decides when to remove files
## from the `removed_files` field.
experimental_manifest_keep_removed_file_count = 256
## How long to keep removed files in the `removed_files` field of the manifest
## after they are removed from the manifest.
## Files are only removed from the `removed_files` field
## when both `keep_removed_file_count` and `keep_removed_file_ttl` are reached.
experimental_manifest_keep_removed_file_ttl = "1h"
## Whether to compress manifest and checkpoint file by gzip (default false).
compress_manifest = false
@@ -413,6 +449,15 @@ compress_manifest = false
## @toml2docs:none-default="Auto"
#+ max_background_purges = 8
## Memory budget for compaction tasks. Setting it to 0 or "unlimited" disables the limit.
## @toml2docs:none-default="0"
#+ experimental_compaction_memory_limit = "0"
## Behavior when compaction cannot acquire memory from the budget.
## Options: "wait" (default, 10s), "wait(<duration>)", "fail"
## @toml2docs:none-default="wait"
#+ experimental_compaction_on_exhausted = "wait"
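# Example (illustrative): cap compaction memory at 2GB and wait up to 30s
# for budget instead of the default 10s.
# experimental_compaction_memory_limit = "2GB"
# experimental_compaction_on_exhausted = "wait(30s)"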
## Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
@@ -457,19 +502,51 @@ write_cache_size = "5GiB"
## @toml2docs:none-default
write_cache_ttl = "8h"
## Preload index (puffin) files into cache on region open (default: true).
## When enabled, index files are loaded into the write cache during region initialization,
## which can improve query performance at the cost of longer startup times.
preload_index_cache = true
## Percentage of write cache capacity allocated for index (puffin) files (default: 20).
## The remaining capacity is used for data (parquet) files.
## Must be between 0 and 100 (exclusive). For example, with a 5GiB write cache and 20% allocation,
## 1GiB is reserved for index files and 4GiB for data files.
index_cache_percent = 20
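# Worked example (illustrative): with write_cache_size = "10GiB" and
# index_cache_percent = 30, 3GiB would be reserved for puffin index files
# and the remaining 7GiB for parquet data files.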
## Enable refilling cache on read operations (default: true).
## When disabled, cache refilling on read won't happen.
enable_refill_cache_on_read = true
## Capacity for manifest cache (default: 256MB).
manifest_cache_size = "256MB"
## Buffer size for SST writing.
sst_write_buffer_size = "8MB"
## Capacity of the channel to send data from parallel scan tasks to the main task.
parallel_scan_channel_size = 32
## Maximum number of SST files to scan concurrently.
max_concurrent_scan_files = 384
## Whether to allow stale WAL entries read during replay.
allow_stale_entries = false
## Memory limit for table scans across all queries.
## Supports absolute size (e.g., "2GB") or percentage of system memory (e.g., "20%").
## Setting it to 0 disables the limit.
## NOTE: Works with max_concurrent_queries for tiered memory allocation.
## - If max_concurrent_queries is set: 70% of queries get full access, 30% get 70% access.
## - If max_concurrent_queries is 0 (unlimited): first 20 queries get full access, rest get 70% access.
scan_memory_limit = "50%"
## Minimum time interval between two compactions.
## To align with the old behavior, the default value is 0 (no restrictions).
min_compaction_interval = "0m"
## Whether to enable experimental flat format as the default format.
default_experimental_flat_format = false
## The options for index in Mito engine.
[region_engine.mito.index]
@@ -602,8 +679,8 @@ fork_dictionary_bytes = "1GiB"
[[region_engine]]
## Metric engine options.
[region_engine.metric]
## Whether to enable the experimental sparse primary key encoding.
experimental_sparse_primary_key_encoding = false
## Whether to use sparse primary key encoding.
sparse_primary_key_encoding = true
## The logging options.
[logging]
@@ -618,7 +695,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4317"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -629,29 +706,32 @@ log_format = "text"
## The maximum amount of log files.
max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers; only valid when using OTLP over HTTP.
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of tracing that will be sampled and exported.
## Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are sampled; the default value is 1.
## Values > 1 are treated as 1 and values < 0 are treated as 0.
[logging.tracing_sample_ratio]
default_ratio = 1.0
## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally. It's different from Prometheus scrape.
[export_metrics]
## Whether to enable export metrics.
enable = false
## The interval of export metrics.
write_interval = "30s"
[export_metrics.remote_write]
## The Prometheus remote-write endpoint that the metrics are sent to. An example url: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
url = ""
## HTTP headers to carry in Prometheus remote-write requests.
headers = { }
## The tracing options. Only effective when compiled with the `tokio-console` feature.
#+ [tracing]
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true


@@ -7,6 +7,43 @@ node_id = 14
## The number of flow workers in the flownode.
## If not set (or set to 0), the number of CPU cores divided by 2 is used.
#+num_workers=0
[flow.batching_mode]
## The default batching engine query timeout is 10 minutes.
#+query_timeout="600s"
## Outputs a warning log for any query that runs longer than this threshold.
#+slow_query_threshold="60s"
## The minimum duration between two query executions by a batching mode task.
#+experimental_min_refresh_duration="5s"
## The gRPC connection timeout
#+grpc_conn_timeout="5s"
## The maximum number of gRPC retries.
#+experimental_grpc_max_retries=3
## Timeout for the flownode to wait for an available frontend.
## If no available frontend is found after frontend_scan_timeout elapses, an error is returned,
## which prevents the flownode from starting.
#+experimental_frontend_scan_timeout="30s"
## Frontend activity timeout.
## If a frontend is down (not sending heartbeats) for more than frontend_activity_timeout,
## it will be removed from the list of frontends the flownode connects to.
#+experimental_frontend_activity_timeout="60s"
## Maximum number of filters allowed in a single query
#+experimental_max_filter_num_per_query=20
## Time window merge distance
#+experimental_time_window_merge_threshold=3
## Read preference of the Frontend client.
#+read_preference="Leader"
[flow.batching_mode.frontend_tls]
## Whether to enable TLS for client.
#+enabled=false
## Server Certificate file path.
## @toml2docs:none-default
#+server_ca_cert_path=""
## Client Certificate file path.
## @toml2docs:none-default
#+client_cert_path=""
## Client Private key file path.
## @toml2docs:none-default
#+client_key_path=""
## The gRPC server options.
[grpc]
@@ -41,9 +78,6 @@ metasrv_addrs = ["127.0.0.1:3002"]
## Operation timeout.
timeout = "3s"
## Heartbeat timeout.
heartbeat_timeout = "500ms"
## DDL timeout.
ddl_timeout = "10s"
@@ -83,7 +117,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4317"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -94,6 +128,16 @@ log_format = "text"
## The maximum amount of log files.
max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers; only valid when using OTLP over HTTP.
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of tracing that will be sampled and exported.
## Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are sampled; the default value is 1.
## Values > 1 are treated as 1 and values < 0 are treated as 0.
@@ -105,3 +149,23 @@ default_ratio = 1.0
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
[query]
## Parallelism of the query engine for queries sent by the flownode.
## Defaults to 1 so that it won't use too much CPU or memory.
parallelism = 1
## Memory pool size for query execution operators (aggregation, sorting, join).
## Supports absolute size (e.g., "1GB", "2GB") or percentage of system memory (e.g., "20%").
## Setting it to 0 disables the limit (unbounded, default behavior).
## When this limit is reached, queries will fail with ResourceExhausted error.
## NOTE: This does NOT limit memory used by table scans.
memory_pool_size = "50%"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true


@@ -2,6 +2,10 @@
## @toml2docs:none-default
default_timezone = "UTC"
## The default column prefix for auto-created time index and value columns.
## @toml2docs:none-default
default_column_prefix = "greptime"
## The maximum in-flight write bytes.
## @toml2docs:none-default
#+ max_in_flight_write_bytes = "500MB"
@@ -31,6 +35,10 @@ timeout = "0s"
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
## Set to 0 to disable limit.
body_limit = "64MB"
## Maximum total memory for all concurrent HTTP request bodies.
## Set to 0 to disable the limit. Default: "0" (unlimited)
## @toml2docs:none-default
#+ max_total_body_memory = "1GB"
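# Example (illustrative): with these bounds, any single body may use up to 64MB
# while all in-flight bodies together stay under 1GB.
# body_limit = "64MB"
# max_total_body_memory = "1GB"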
## HTTP CORS support; it's turned on by default.
## This allows browsers to access HTTP APIs without CORS restrictions.
enable_cors = true
@@ -54,6 +62,22 @@ bind_addr = "127.0.0.1:4001"
server_addr = "127.0.0.1:4001"
## The number of server worker threads.
runtime_size = 8
## Maximum total memory for all concurrent gRPC request messages.
## Set to 0 to disable the limit. Default: "0" (unlimited)
## @toml2docs:none-default
#+ max_total_message_memory = "1GB"
## Compression mode for frontend side Arrow IPC service. Available options:
## - `none`: disable all compression
## - `transport`: only enable gRPC transport compression (zstd)
## - `arrow_ipc`: only enable Arrow IPC compression (lz4)
## - `all`: enable all compression.
## Default to `none`
flight_compression = "arrow_ipc"
## The maximum connection age for gRPC connection.
## The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.
## Refer to https://grpc.io/docs/guides/keepalive/ for more details.
## @toml2docs:none-default
#+ max_connection_age = "10m"
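# Example (illustrative): cap connection lifetime at 10 minutes so clients
# periodically reconnect (see the gRPC keepalive guide linked above).
# max_connection_age = "10m"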
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]
@@ -72,6 +96,41 @@ key_path = ""
## For now, gRPC tls config does not support auto reload.
watch = false
## The internal gRPC server options. Internal gRPC port for nodes inside the cluster to access the frontend.
[internal_grpc]
## The address to bind the gRPC server.
bind_addr = "127.0.0.1:4010"
## The address advertised to the metasrv, and used for connections from outside the host.
## If left empty or unset, the server will automatically use the IP address of the first network interface
## on the host, with the same port number as the one specified in `grpc.bind_addr`.
server_addr = "127.0.0.1:4010"
## The number of server worker threads.
runtime_size = 8
## Compression mode for frontend side Arrow IPC service. Available options:
## - `none`: disable all compression
## - `transport`: only enable gRPC transport compression (zstd)
## - `arrow_ipc`: only enable Arrow IPC compression (lz4)
## - `all`: enable all compression.
## Default to `none`
flight_compression = "arrow_ipc"
## internal gRPC server TLS options, see `mysql.tls` section.
[internal_grpc.tls]
## TLS mode.
mode = "disable"
## Certificate file path.
## @toml2docs:none-default
cert_path = ""
## Private key file path.
## @toml2docs:none-default
key_path = ""
## Watch for Certificate and key file change and auto reload.
## For now, gRPC tls config does not support auto reload.
watch = false
## MySQL server options.
[mysql]
## Whether to enable.
@@ -83,6 +142,8 @@ runtime_size = 2
## Server-side keep-alive time.
## Set to 0 (default) to disable.
keep_alive = "0s"
## Maximum entries in the MySQL prepared statement cache; default is 10,000.
prepared_stmt_cache_size = 10000
# MySQL server TLS options.
[mysql.tls]
@@ -164,9 +225,6 @@ metasrv_addrs = ["127.0.0.1:3002"]
## Operation timeout.
timeout = "3s"
## Heartbeat timeout.
heartbeat_timeout = "500ms"
## DDL timeout.
ddl_timeout = "10s"
@@ -190,6 +248,16 @@ metadata_cache_tti = "5m"
## Parallelism of the query engine.
## Default to 0, which means the number of CPU cores.
parallelism = 0
## Whether to allow query fallback when push-down optimization fails.
## Defaults to false, meaning an error message is returned when push-down optimization fails.
allow_query_fallback = false
## Memory pool size for query execution operators (aggregation, sorting, join).
## Supports absolute size (e.g., "4GB", "8GB") or percentage of system memory (e.g., "30%").
## Setting it to 0 disables the limit (unbounded, default behavior).
## When this limit is reached, queries will fail with ResourceExhausted error.
## NOTE: This does NOT limit memory used by table scans (only applies to datanodes).
memory_pool_size = "50%"
## Datanode options.
[datanode]
@@ -211,7 +279,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4317"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -222,6 +290,16 @@ log_format = "text"
## The maximum amount of log files.
max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers; only valid when using OTLP over HTTP.
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of tracing that will be sampled and exported.
## Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are sampled; the default value is 1.
## Values > 1 are treated as 1 and values < 0 are treated as 0.
@@ -244,26 +322,24 @@ threshold = "30s"
## The sampling ratio of slow query log. The value should be in the range of (0, 1]. For example, `0.1` means 10% of the slow queries will be logged and `1.0` means all slow queries will be logged.
sample_ratio = 1.0
## The TTL of the `slow_queries` system table. Default is `30d` when `record_type` is `system_table`.
ttl = "30d"
## The frontend can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally. It's different from Prometheus scrape.
[export_metrics]
## Whether to enable export metrics.
enable = false
## The interval of export metrics.
write_interval = "30s"
[export_metrics.remote_write]
## The Prometheus remote-write endpoint that the metrics are sent to. An example url: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
url = ""
## HTTP headers to carry in Prometheus remote-write requests.
headers = { }
## The TTL of the `slow_queries` system table. Default is `90d` when `record_type` is `system_table`.
ttl = "90d"
## The tracing options. Only effective when compiled with the `tokio-console` feature.
#+ [tracing]
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true
## Configuration options for the event recorder.
[event_recorder]
## TTL for the events table that will be used to store the events. Default is `90d`.
ttl = "90d"


@@ -1,11 +1,19 @@
## The working home directory.
data_home = "./greptimedb_data"
## Store server address default to etcd store.
## For postgres store, the format is:
## "password=password dbname=postgres user=postgres host=localhost port=5432"
## For etcd store, the format is:
## "127.0.0.1:2379"
## Store server address(es). The format depends on the selected backend.
##
## For etcd: a list of "host:port" endpoints.
## e.g. ["192.168.1.1:2379", "192.168.1.2:2379"]
##
## For PostgreSQL: a connection string in libpq format or URI.
## e.g.
## - "host=localhost port=5432 user=postgres password=<PASSWORD> dbname=postgres"
## - "postgresql://user:password@localhost:5432/mydb?connect_timeout=10"
## For details, see: https://docs.rs/tokio-postgres/latest/tokio_postgres/config/struct.Config.html
##
## For mysql store, the format is a MySQL connection URL.
## e.g. "mysql://user:password@localhost:3306/greptime_meta?ssl-mode=VERIFY_CA&ssl-ca=/path/to/ca.pem"
store_addrs = ["127.0.0.1:2379"]
## If it's not empty, the metasrv will store all data with this key prefix.
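# Example (illustrative): a PostgreSQL-backed metadata store; the credentials
# and database name are placeholders.
# backend = "postgres_store"
# store_addrs = ["postgresql://user:password@localhost:5432/greptime_meta"]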
@@ -23,6 +31,14 @@ backend = "etcd_store"
## **Only used when backend is `postgres_store`.**
meta_table_name = "greptime_metakv"
## Optional PostgreSQL schema for metadata table and election table name qualification.
## When PostgreSQL public schema is not writable (e.g., PostgreSQL 15+ with restricted public),
## set this to a writable schema. GreptimeDB will use `meta_schema_name`.`meta_table_name`.
## GreptimeDB will NOT create the schema automatically; please ensure it exists or the user has permission.
## **Only used when backend is `postgres_store`.**
meta_schema_name = "greptime_schema"
## Advisory lock id in PostgreSQL for election. Effective when using PostgreSQL as the kv backend.
## Only used when backend is `postgres_store`.
meta_election_lock_id = 1
@@ -43,6 +59,11 @@ use_memory_store = false
## - Using shared storage (e.g., s3).
enable_region_failover = false
## The delay before starting region failure detection.
## This delay helps prevent Metasrv from triggering unnecessary region failovers before all Datanodes are fully started.
## Especially useful when the cluster is not deployed with GreptimeDB Operator and maintenance mode is not enabled.
region_failure_detector_initialization_delay = '10m'
## Whether to allow region failover on local WAL.
## **This option is not recommended to be set to true, because it may lead to data loss during failover.**
allow_region_failover_on_local_wal = false
@@ -50,6 +71,13 @@ allow_region_failover_on_local_wal = false
## Max allowed idle time before removing node info from metasrv memory.
node_max_idle_time = "24hours"
## Base heartbeat interval for calculating distributed time constants.
## The frontend heartbeat interval is 6 times the base heartbeat interval.
## The flownode/datanode heartbeat interval equals the base heartbeat interval.
## e.g., if the base heartbeat interval is 3s, the frontend heartbeat interval is 18s and the flownode/datanode heartbeat interval is 3s.
## If you change this value, you need to change the heartbeat interval of the flownode/frontend/datanode accordingly.
#+ heartbeat_interval = "3s"
## Whether to enable greptimedb telemetry. Enabled by default.
#+ enable_telemetry = true
@@ -60,6 +88,44 @@ node_max_idle_time = "24hours"
## The number of threads to execute the runtime for global write operations.
#+ compact_rt_size = 4
## TLS configuration for kv store backend (applicable for etcd, PostgreSQL, and MySQL backends)
## When using etcd, PostgreSQL, or MySQL as metadata store, you can configure TLS here
##
## Note: if TLS is configured in both this section and the `store_addrs` connection string, the
## settings here will override the TLS settings in `store_addrs`.
[backend_tls]
## TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
## - "disable" - No TLS
## - "prefer" (default) - Try TLS, fallback to plain
## - "require" - Require TLS
## - "verify_ca" - Require TLS and verify CA
## - "verify_full" - Require TLS and verify hostname
mode = "prefer"
## Path to client certificate file (for client authentication)
## Like "/path/to/client.crt"
cert_path = ""
## Path to client private key file (for client authentication)
## Like "/path/to/client.key"
key_path = ""
## Path to CA certificate file (for server certificate verification)
## Required when using custom CAs or self-signed certificates
## Leave empty to use system root certificates only
## Like "/path/to/ca.crt"
ca_cert_path = ""
## The backend client options.
## Currently, only applicable when using etcd as the metadata store.
#+ [backend_client]
## The keep alive timeout for backend client.
#+ keep_alive_timeout = "3s"
## The keep alive interval for backend client.
#+ keep_alive_interval = "10s"
## The connect timeout for backend client.
#+ connect_timeout = "3s"
## The gRPC server options.
[grpc]
## The address to bind the gRPC server.
@@ -74,6 +140,10 @@ runtime_size = 8
max_recv_message_size = "512MB"
## The maximum send message size for gRPC server.
max_send_message_size = "512MB"
## The server side HTTP/2 keep-alive interval
#+ http2_keep_alive_interval = "10s"
## The server side HTTP/2 keep-alive timeout.
#+ http2_keep_alive_timeout = "3s"
## The HTTP server options.
[http]
@@ -108,20 +178,18 @@ max_metadata_value_size = "1500KiB"
max_running_procedures = 128
# Failure detectors options.
# GreptimeDB uses the Phi Accrual Failure Detector algorithm to detect datanode failures.
[failure_detector]
## The threshold value used by the failure detector to determine failure conditions.
## Maximum acceptable φ before the peer is treated as failed.
## Lower values react faster but yield more false positives.
threshold = 8.0
## The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations.
## The minimum standard deviation of the heartbeat intervals.
## So tiny variations don't make φ explode. This prevents hypersensitivity when heartbeat intervals barely vary.
min_std_deviation = "100ms"
## The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable.
## The acceptable pause duration between heartbeats.
## Additional extra grace period to the learned mean interval before φ rises, absorbing temporary network hiccups or GC pauses.
acceptable_heartbeat_pause = "10000ms"
## The initial estimate of the heartbeat interval used by the failure detector.
first_heartbeat_estimate = "1000ms"
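# Example (illustrative): react faster to failures, at the cost of more false
# positives, by lowering the threshold and the acceptable pause.
# threshold = 6.0
# acceptable_heartbeat_pause = "5000ms"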
## Datanode options.
[datanode]
@@ -143,50 +211,69 @@ tcp_nodelay = true
# - `kafka`: metasrv **has to be** configured with kafka wal config when using kafka wal provider in datanode.
provider = "raft_engine"
# Kafka wal config.
## The broker endpoints of the Kafka cluster.
##
## **It's only used when the provider is `kafka`**.
broker_endpoints = ["127.0.0.1:9092"]
## Automatically create topics for WAL.
## Set to `true` to automatically create topics for WAL.
## Otherwise, use topics named `topic_name_prefix_[0..num_topics)`
## **It's only used when the provider is `kafka`**.
auto_create_topics = true
## Interval of automatic WAL pruning.
## Set to `0s` to disable automatic WAL pruning, which periodically deletes unused remote WAL entries.
auto_prune_interval = "0s"
## **It's only used when the provider is `kafka`**.
auto_prune_interval = "30m"
## The threshold to trigger a flush operation of a region in automatically WAL pruning.
## Metasrv will send a flush request to flush the region when:
## `trigger_flush_threshold` + `prunable_entry_id` < `max_prunable_entry_id`
## where:
## - `prunable_entry_id` is the maximum entry id that can be pruned of the region.
## - `max_prunable_entry_id` is the maximum prunable entry id among all regions in the same topic.
## Set to `0` to disable the flush operation.
trigger_flush_threshold = 0
## Estimated size threshold to trigger a flush when using Kafka remote WAL.
## Since multiple regions may share a Kafka topic, the estimated size is calculated as:
## (latest_entry_id - flushed_entry_id) * avg_record_size
## MetaSrv triggers a flush for a region when this estimated size exceeds `flush_trigger_size`.
## - `latest_entry_id`: The latest entry ID in the topic.
## - `flushed_entry_id`: The last flushed entry ID for the region.
## Set to "0" to let the system decide the flush trigger size.
## **It's only used when the provider is `kafka`**.
flush_trigger_size = "512MB"
## Estimated size threshold to trigger a checkpoint when using Kafka remote WAL.
## The estimated size is calculated as:
## (latest_entry_id - last_checkpoint_entry_id) * avg_record_size
## MetaSrv triggers a checkpoint for a region when this estimated size exceeds `checkpoint_trigger_size`.
## Set to "0" to let the system decide the checkpoint trigger size.
## **It's only used when the provider is `kafka`**.
checkpoint_trigger_size = "128MB"
## Concurrent task limit for automatic WAL pruning.
## **It's only used when the provider is `kafka`**.
auto_prune_parallelism = 10
## Number of topics.
## Number of topics used for remote WAL.
## **It's only used when the provider is `kafka`**.
num_topics = 64
## Topic selector type.
## Available selector types:
## - `round_robin` (default)
## **It's only used when the provider is `kafka`**.
selector_type = "round_robin"
## A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.
## Only accepts strings that match the following regular expression pattern:
## [a-zA-Z_:-][a-zA-Z0-9_:\-\.@#]*
## e.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1.
## **It's only used when the provider is `kafka`**.
topic_name_prefix = "greptimedb_wal_topic"
## Expected number of replicas of each partition.
## **It's only used when the provider is `kafka`**.
replication_factor = 1
## Above which a topic creation operation will be cancelled.
## The timeout for creating a Kafka topic.
## **It's only used when the provider is `kafka`**.
create_topic_timeout = "30s"
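# Example (illustrative): a minimal Kafka remote-WAL layout; the broker
# addresses are placeholders.
# provider = "kafka"
# broker_endpoints = ["10.0.0.1:9092", "10.0.0.2:9092"]
# auto_create_topics = true
# num_topics = 64
# topic_name_prefix = "greptimedb_wal_topic"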
# The Kafka SASL configuration.
@@ -207,6 +294,23 @@ create_topic_timeout = "30s"
# client_cert_path = "/path/to/client_cert"
# client_key_path = "/path/to/key"
## Configuration options for the event recorder.
[event_recorder]
## TTL for the events table that will be used to store the events. Default is `90d`.
ttl = "90d"
## Configuration options for the stats persistence.
[stats_persistence]
## TTL for the stats table that will be used to store the stats.
## Set to `0s` to disable stats persistence.
## Default is `0s`.
## If you want to enable stats persistence, set the TTL to a value greater than 0.
## It is recommended to set a small value, e.g., `3h`.
ttl = "0s"
## The interval to persist the stats. Default is `10m`.
## The minimum value is `10m`; any smaller value is overridden to `10m`.
interval = "10m"
## The logging options.
[logging]
## The directory to store the log files. If set to empty, logs will not be written to files.
@@ -220,7 +324,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4317"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -231,29 +335,33 @@ log_format = "text"
## The maximum amount of log files.
max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers, only valid when using OTLP http
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of traces that will be sampled and exported.
## Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are; the default value is 1.
## Ratios > 1 are treated as 1; values < 0 are treated as 0.
[logging.tracing_sample_ratio]
default_ratio = 1.0
## The metasrv can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally. It's different from a Prometheus scrape.
[export_metrics]
## Whether to enable export metrics.
enable = false
## The interval of export metrics.
write_interval = "30s"
[export_metrics.remote_write]
## The Prometheus remote-write endpoint that the metrics are sent to. An example URL: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
url = ""
## HTTP headers carried by the Prometheus remote-write requests.
headers = { }
## The tracing options. Only effective when compiled with the `tokio-console` feature.
#+ [tracing]
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true

View File

@@ -2,6 +2,10 @@
## @toml2docs:none-default
default_timezone = "UTC"
## The default column prefix for auto-created time index and value columns.
## @toml2docs:none-default
default_column_prefix = "greptime"
## Initialize all regions in the background during the startup.
## By default, it provides services after all regions have been initialized.
init_regions_in_background = false
@@ -10,6 +14,9 @@ init_regions_in_background = false
init_regions_parallelism = 16
## The maximum number of concurrent queries allowed to be executed. Zero means unlimited.
## NOTE: This setting affects scan_memory_limit's privileged tier allocation.
## When set, 70% of queries get privileged memory access (full scan_memory_limit).
## The remaining 30% get standard tier access (70% of scan_memory_limit).
max_concurrent_queries = 0
## Enable telemetry to collect anonymous usage data. Enabled by default.
@@ -36,6 +43,10 @@ timeout = "0s"
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
## Set to 0 to disable limit.
body_limit = "64MB"
## Maximum total memory for all concurrent HTTP request bodies.
## Set to 0 to disable the limit. Default: "0" (unlimited)
## @toml2docs:none-default
#+ max_total_body_memory = "1GB"
## HTTP CORS support; it's turned on by default.
## This allows browsers to access HTTP APIs without CORS restrictions.
enable_cors = true
@@ -56,6 +67,15 @@ prom_validation_mode = "strict"
bind_addr = "127.0.0.1:4001"
## The number of server worker threads.
runtime_size = 8
## Maximum total memory for all concurrent gRPC request messages.
## Set to 0 to disable the limit. Default: "0" (unlimited)
## @toml2docs:none-default
#+ max_total_message_memory = "1GB"
## The maximum connection age for gRPC connection.
## The value can be a human-readable time string. For example: `10m` for ten minutes or `1h` for one hour.
## Refer to https://grpc.io/docs/guides/keepalive/ for more details.
## @toml2docs:none-default
#+ max_connection_age = "10m"
## gRPC server TLS options, see `mysql.tls` section.
[grpc.tls]
@@ -85,7 +105,8 @@ runtime_size = 2
## Server-side keep-alive time.
## Set to 0 (default) to disable.
keep_alive = "0s"
## Maximum entries in the MySQL prepared statement cache; default is 10,000.
prepared_stmt_cache_size = 10000
# MySQL server TLS options.
[mysql.tls]
@@ -209,6 +230,14 @@ recovery_parallelism = 2
## **It's only used when the provider is `kafka`**.
broker_endpoints = ["127.0.0.1:9092"]
## The connect timeout for kafka client.
## **It's only used when the provider is `kafka`**.
#+ connect_timeout = "3s"
## The timeout for kafka client.
## **It's only used when the provider is `kafka`**.
#+ timeout = "3s"
## Set to `true` to automatically create topics for WAL.
## Otherwise, use topics named `topic_name_prefix_[0..num_topics)`
@@ -311,6 +340,7 @@ max_running_procedures = 128
# endpoint = "https://s3.amazonaws.com"
# region = "us-west-2"
# enable_virtual_host_style = false
# disable_ec2_metadata = false
# Example of using Oss as the storage.
# [storage]
@@ -347,6 +377,13 @@ max_running_procedures = 128
## Defaults to 0, which means the number of CPU cores.
parallelism = 0
## Memory pool size for query execution operators (aggregation, sorting, join).
## Supports absolute size (e.g., "2GB", "4GB") or percentage of system memory (e.g., "20%").
## Setting it to 0 disables the limit (unbounded, default behavior).
## When this limit is reached, queries will fail with ResourceExhausted error.
## NOTE: This does NOT limit memory used by table scans.
memory_pool_size = "50%"
## The data storage options.
[storage]
## The working home directory.
@@ -360,15 +397,6 @@ data_home = "./greptimedb_data"
## - `Oss`: the data is stored in the Aliyun OSS.
type = "File"
## Read cache configuration for object storage such as 'S3'. It's enabled by default when using object storage, and configuring it is recommended for better performance.
## A local file directory, defaults to `{data_home}`. An empty string disables the cache.
## @toml2docs:none-default
#+ cache_path = ""
## The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger.
## @toml2docs:none-default
cache_capacity = "5GiB"
## The S3 bucket name.
## **It's only used when the storage type is `S3`, `Oss` or `Gcs`**.
## @toml2docs:none-default
@@ -458,6 +486,10 @@ timeout = "30s"
## The timeout for idle sockets being kept-alive.
pool_idle_timeout = "90s"
## Whether to skip SSL verification.
## **Security Notice**: Setting `skip_ssl_validation = true` disables certificate verification, making connections vulnerable to man-in-the-middle attacks. Only use this in development or trusted private networks.
skip_ssl_validation = false
# Custom storage options
# [[storage.providers]]
# name = "S3"
@@ -511,6 +543,15 @@ compress_manifest = false
## @toml2docs:none-default="Auto"
#+ max_background_purges = 8
## Memory budget for compaction tasks. Setting it to 0 or "unlimited" disables the limit.
## @toml2docs:none-default="0"
#+ experimental_compaction_memory_limit = "0"
## Behavior when compaction cannot acquire memory from the budget.
## Options: "wait" (default, 10s), "wait(<duration>)", "fail"
## @toml2docs:none-default="wait"
#+ experimental_compaction_on_exhausted = "wait"
## Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
@@ -555,19 +596,51 @@ write_cache_size = "5GiB"
## @toml2docs:none-default
write_cache_ttl = "8h"
## Preload index (puffin) files into cache on region open (default: true).
## When enabled, index files are loaded into the write cache during region initialization,
## which can improve query performance at the cost of longer startup times.
preload_index_cache = true
## Percentage of write cache capacity allocated for index (puffin) files (default: 20).
## The remaining capacity is used for data (parquet) files.
## Must be between 0 and 100 (exclusive). For example, with a 5GiB write cache and 20% allocation,
## 1GiB is reserved for index files and 4GiB for data files.
index_cache_percent = 20
## Enable refilling cache on read operations (default: true).
## When disabled, cache refilling on read won't happen.
enable_refill_cache_on_read = true
## Capacity for manifest cache (default: 256MB).
manifest_cache_size = "256MB"
## Buffer size for SST writing.
sst_write_buffer_size = "8MB"
## Capacity of the channel to send data from parallel scan tasks to the main task.
parallel_scan_channel_size = 32
## Maximum number of SST files to scan concurrently.
max_concurrent_scan_files = 384
## Whether to allow stale WAL entries to be read during replay.
allow_stale_entries = false
## Memory limit for table scans across all queries.
## Supports absolute size (e.g., "2GB") or percentage of system memory (e.g., "20%").
## Setting it to 0 disables the limit.
## NOTE: Works with max_concurrent_queries for tiered memory allocation.
## - If max_concurrent_queries is set: 70% of queries get full access, 30% get 70% access.
## - If max_concurrent_queries is 0 (unlimited): first 20 queries get full access, rest get 70% access.
scan_memory_limit = "50%"
## Minimum time interval between two compactions.
## To align with the old behavior, the default value is 0 (no restrictions).
min_compaction_interval = "0m"
## Whether to enable experimental flat format as the default format.
default_experimental_flat_format = false
## The options for index in Mito engine.
[region_engine.mito.index]
@@ -700,8 +773,8 @@ fork_dictionary_bytes = "1GiB"
[[region_engine]]
## Metric engine options.
[region_engine.metric]
## Whether to use sparse primary key encoding.
sparse_primary_key_encoding = true
## The logging options.
[logging]
@@ -716,7 +789,7 @@ level = "info"
enable_otlp_tracing = false
## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4317"
otlp_endpoint = "http://localhost:4318/v1/traces"
## Whether to append logs to stdout.
append_stdout = true
@@ -727,6 +800,16 @@ log_format = "text"
## The maximum amount of log files.
max_log_files = 720
## The OTLP tracing export protocol. Can be `grpc`/`http`.
otlp_export_protocol = "http"
## Additional OTLP headers, only valid when using OTLP http
[logging.otlp_headers]
## @toml2docs:none-default
#Authorization = "Bearer my-token"
## @toml2docs:none-default
#Database = "My database"
## The percentage of traces that will be sampled and exported.
## Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are; the default value is 1.
## Ratios > 1 are treated as 1; values < 0 are treated as 0.
@@ -750,29 +833,16 @@ default_ratio = 1.0
## @toml2docs:none-default
#+ sample_ratio = 1.0
## The standalone can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb`) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally. It's different from a Prometheus scrape.
[export_metrics]
## Whether to enable export metrics.
enable = false
## The interval of export metrics.
write_interval = "30s"
## For `standalone` mode, `self_import` is recommended to collect metrics generated by itself
## You must create the database before enabling it.
[export_metrics.self_import]
## @toml2docs:none-default
db = "greptime_metrics"
[export_metrics.remote_write]
## The Prometheus remote-write endpoint that the metrics are sent to. An example URL: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
url = ""
## HTTP headers carried by the Prometheus remote-write requests.
headers = { }
## The tracing options. Only effective when compiled with the `tokio-console` feature.
#+ [tracing]
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
## The memory options.
[memory]
## Whether to enable heap profiling activation during startup.
## When enabled, heap profiling will be activated if the `MALLOC_CONF` environment variable
## is set to "prof:true,prof_active:false". The official image adds this env variable.
## Default is true.
enable_heap_profiling = true

View File

@@ -55,12 +55,25 @@ async function main() {
await client.rest.issues.addLabels({
owner, repo, issue_number: number, labels: [labelDocsRequired],
})
// Get available assignees for the docs repo
const assigneesResponse = await docsClient.rest.issues.listAssignees({
owner: 'GreptimeTeam',
repo: 'docs',
})
const validAssignees = assigneesResponse.data.map(assignee => assignee.login)
core.info(`Available assignees: ${validAssignees.join(', ')}`)
// Check if the actor is a valid assignee, otherwise fallback to fengjiachun
const assignee = validAssignees.includes(actor) ? actor : 'fengjiachun'
core.info(`Assigning issue to: ${assignee}`)
await docsClient.rest.issues.create({
owner: 'GreptimeTeam',
repo: 'docs',
title: `Update docs for ${title}`,
body: `A document change request is generated from ${html_url}`,
assignee: assignee,
}).then((res) => {
core.info(`Created issue ${res.data}`)
})

View File

@@ -1,10 +1,10 @@
FROM centos:7 AS builder
ARG CARGO_PROFILE
ARG FEATURES
ARG OUTPUT_DIR
ENV LANG=en_US.utf8
WORKDIR /greptimedb
# Install dependencies
@@ -22,7 +22,7 @@ RUN unzip protoc-3.15.8-linux-x86_64.zip -d /usr/local/
# Install Rust
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH=/usr/local/bin:/root/.cargo/bin/:$PATH
# Build the project in release mode.
RUN --mount=target=.,rw \
@@ -33,7 +33,7 @@ RUN --mount=target=.,rw \
TARGET_DIR=/out/target
# Export the binary to the clean image.
FROM centos:7 AS base
ARG OUTPUT_DIR
@@ -45,6 +45,8 @@ RUN yum install -y epel-release \
WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/
ENV PATH=/greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["greptime"]

View File

@@ -0,0 +1,65 @@
FROM ubuntu:22.04 AS builder
ARG CARGO_PROFILE
ARG FEATURES
ARG OUTPUT_DIR
ENV LANG=en_US.utf8
WORKDIR /greptimedb
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common
# Install dependencies.
RUN --mount=type=cache,target=/var/cache/apt \
apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
git \
build-essential \
pkg-config
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH=/root/.cargo/bin/:$PATH
# Build the project in release mode.
RUN --mount=target=. \
--mount=type=cache,target=/root/.cargo/registry \
make build \
CARGO_PROFILE=${CARGO_PROFILE} \
FEATURES=${FEATURES} \
TARGET_DIR=/out/target
FROM ubuntu:22.04 AS libs
ARG TARGETARCH
# Copy required library dependencies based on architecture
RUN if [ "$TARGETARCH" = "amd64" ]; then \
cp /lib/x86_64-linux-gnu/libz.so.1.2.11 /lib/x86_64-linux-gnu/libz.so.1; \
elif [ "$TARGETARCH" = "arm64" ]; then \
cp /lib/aarch64-linux-gnu/libz.so.1.2.11 /lib/aarch64-linux-gnu/libz.so.1; \
else \
echo "Unsupported architecture: $TARGETARCH" && exit 1; \
fi
# Export the binary to the clean distroless image.
FROM gcr.io/distroless/cc-debian12:latest AS base
ARG OUTPUT_DIR
ARG TARGETARCH
# Copy required library dependencies
COPY --from=libs /lib /lib
COPY --from=busybox:stable /bin/busybox /bin/busybox
WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/greptime
ENV PATH=/greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["greptime"]

View File

@@ -1,10 +1,10 @@
FROM ubuntu:22.04 AS builder
ARG CARGO_PROFILE
ARG FEATURES
ARG OUTPUT_DIR
ENV LANG=en_US.utf8
WORKDIR /greptimedb
RUN apt-get update && \
@@ -23,7 +23,7 @@ RUN --mount=type=cache,target=/var/cache/apt \
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH=/root/.cargo/bin/:$PATH
# Build the project in release mode.
RUN --mount=target=. \
@@ -35,7 +35,7 @@ RUN --mount=target=. \
# Export the binary to the clean image.
# TODO(zyy17): Maybe should use the more secure container image.
FROM ubuntu:22.04 AS base
ARG OUTPUT_DIR
@@ -45,6 +45,8 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get \
WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/
ENV PATH=/greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["greptime"]

View File

@@ -13,6 +13,8 @@ ARG TARGETARCH
ADD $TARGETARCH/greptime /greptime/bin/
ENV PATH=/greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["greptime"]

View File

@@ -0,0 +1,40 @@
FROM ubuntu:22.04 AS libs
ARG TARGETARCH
# Copy required library dependencies based on architecture
# TARGETARCH values: amd64, arm64
# Ubuntu library paths: x86_64-linux-gnu, aarch64-linux-gnu
RUN if [ "$TARGETARCH" = "amd64" ]; then \
mkdir -p /output/x86_64-linux-gnu && \
cp /lib/x86_64-linux-gnu/libz.so.1.2.11 /output/x86_64-linux-gnu/libz.so.1; \
elif [ "$TARGETARCH" = "arm64" ]; then \
mkdir -p /output/aarch64-linux-gnu && \
cp /lib/aarch64-linux-gnu/libz.so.1.2.11 /output/aarch64-linux-gnu/libz.so.1; \
else \
echo "Unsupported architecture: $TARGETARCH" && exit 1; \
fi
FROM gcr.io/distroless/cc-debian12:latest
# The root path under which contains all the dependencies to build this Dockerfile.
ARG DOCKER_BUILD_ROOT=.
# The binary name of GreptimeDB executable.
# Defaults to "greptime", but sometimes in other projects it might be different.
ARG TARGET_BIN=greptime
ARG TARGETARCH
# Copy required library dependencies
COPY --from=libs /output /lib
COPY --from=busybox:stable /bin/busybox /bin/busybox
ADD $TARGETARCH/$TARGET_BIN /greptime/bin/
ENV PATH=/greptime/bin/:$PATH
ENV TARGET_BIN=$TARGET_BIN
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["greptime"]

View File

@@ -14,8 +14,10 @@ ARG TARGETARCH
ADD $TARGETARCH/$TARGET_BIN /greptime/bin/
ENV PATH=/greptime/bin/:$PATH
ENV TARGET_BIN=$TARGET_BIN
ENV MALLOC_CONF="prof:true,prof_active:false"
ENTRYPOINT ["sh", "-c", "exec $TARGET_BIN \"$@\"", "--"]

View File

@@ -13,7 +13,8 @@ RUN apt-get update && apt-get install -y \
git \
unzip \
build-essential \
pkg-config \
openssh-client
# Install protoc
ARG PROTOBUF_VERSION=29.3

View File

@@ -19,7 +19,7 @@ ARG PROTOBUF_VERSION=29.3
RUN curl -OL https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOBUF_VERSION}/protoc-${PROTOBUF_VERSION}-linux-x86_64.zip && \
unzip protoc-${PROTOBUF_VERSION}-linux-x86_64.zip -d protoc3;
RUN mv protoc3/bin/* /usr/local/bin/
RUN mv protoc3/include/* /usr/local/include/

View File

@@ -34,6 +34,48 @@ services:
networks:
- greptimedb
etcd-tls:
<<: *etcd_common_settings
container_name: etcd-tls
ports:
- 2378:2378
- 2381:2381
command:
- --name=etcd-tls
- --data-dir=/var/lib/etcd
- --initial-advertise-peer-urls=https://etcd-tls:2381
- --listen-peer-urls=https://0.0.0.0:2381
- --listen-client-urls=https://0.0.0.0:2378
- --advertise-client-urls=https://etcd-tls:2378
- --heartbeat-interval=250
- --election-timeout=1250
- --initial-cluster=etcd-tls=https://etcd-tls:2381
- --initial-cluster-state=new
- --initial-cluster-token=etcd-tls-cluster
- --cert-file=/certs/server.crt
- --key-file=/certs/server-key.pem
- --peer-cert-file=/certs/server.crt
- --peer-key-file=/certs/server-key.pem
- --trusted-ca-file=/certs/ca.crt
- --peer-trusted-ca-file=/certs/ca.crt
- --client-cert-auth
- --peer-client-cert-auth
volumes:
- ./greptimedb-cluster-docker-compose/etcd-tls:/var/lib/etcd
- ./greptimedb-cluster-docker-compose/certs:/certs:ro
environment:
- ETCDCTL_API=3
- ETCDCTL_CACERT=/certs/ca.crt
- ETCDCTL_CERT=/certs/server.crt
- ETCDCTL_KEY=/certs/server-key.pem
healthcheck:
test: [ "CMD", "etcdctl", "--endpoints=https://etcd-tls:2378", "--cacert=/certs/ca.crt", "--cert=/certs/server.crt", "--key=/certs/server-key.pem", "endpoint", "health" ]
interval: 10s
timeout: 5s
retries: 5
networks:
- greptimedb
metasrv:
image: *greptimedb_image
container_name: metasrv

View File

@@ -48,4 +48,4 @@ Please refer to [SQL query](./query.sql) for GreptimeDB and Clickhouse, and [que
## Addition
- You can tune GreptimeDB's configuration to get better performance.
- You can set up GreptimeDB to use S3 as storage, see [here](https://docs.greptime.com/user-guide/deployments-administration/configuration#storage-options).

View File

@@ -13,4 +13,19 @@ Log Level changed from Some("info") to "trace,flow=debug"%
The data is a string in the format of `global_level,module1=level1,module2=level2,...` that follows the same rule of `RUST_LOG`.
The module is the module name of the log, and the level is the log level. The log level can be one of the following: `trace`, `debug`, `info`, `warn`, `error`, `off` (case insensitive).
# Enable/Disable Trace on the Fly
## HTTP API
Example:
```bash
curl --data "true" 127.0.0.1:4000/debug/enable_trace
```
And the database will reply with something like:
```
trace enabled%
```
Possible values are "true" or "false".

View File

@@ -30,6 +30,8 @@ curl https://raw.githubusercontent.com/brendangregg/FlameGraph/master/flamegraph
## Profiling
### Enable memory profiling for greptimedb binary
Start GreptimeDB instance with environment variables:
```bash
@@ -40,6 +42,48 @@ MALLOC_CONF=prof:true ./target/debug/greptime standalone start
_RJEM_MALLOC_CONF=prof:true ./target/debug/greptime standalone start
```
### Memory profiling for greptimedb docker image
We have memory profiling enabled and activated by default in our official docker
image.
This behavior is controlled by configuration `enable_heap_profiling`:
```toml
[memory]
# Whether to enable heap profiling activation during startup.
# Default is true.
enable_heap_profiling = true
```
To disable memory profiling, set `enable_heap_profiling` to `false`.
### Memory profiling control
You can control heap profiling activation using the new HTTP APIs:
```bash
# Check current profiling status
curl -X GET localhost:4000/debug/prof/mem/status
# Activate heap profiling (if not already active)
curl -X POST localhost:4000/debug/prof/mem/activate
# Deactivate heap profiling
curl -X POST localhost:4000/debug/prof/mem/deactivate
# Activate the gdump feature, which dumps memory profiling data every time virtual memory usage exceeds its previous maximum.
curl -X POST localhost:4000/debug/prof/mem/gdump -d 'activate=true'
# Deactivate gdump.
curl -X POST localhost:4000/debug/prof/mem/gdump -d 'activate=false'
# Retrieve current gdump status.
curl -X GET localhost:4000/debug/prof/mem/gdump
```
### Dump memory profiling data
Dump memory profiling data through HTTP API:
```bash

View File

@@ -1,72 +0,0 @@
Currently, our query engine is based on DataFusion, so all aggregate functions are executed by DataFusion through its UDAF interface. You can find DataFusion's UDAF example [here](https://github.com/apache/arrow-datafusion/blob/arrow2/datafusion-examples/examples/simple_udaf.rs). Basically, we provide the same way as DataFusion to write aggregate functions: both are centered on a struct called "Accumulator" that accumulates states along the way during aggregation.
However, DataFusion's UDAF implementation has a huge restriction: it requires the user to provide a concrete "Accumulator". Take the `Median` aggregate function for example: to aggregate a `u32` datatype column, you have to write a `MedianU32` and use `SELECT MEDIANU32(x)` in SQL. `MedianU32` cannot be used to aggregate an `i32` datatype column. Alternatively, you can use a special type that can hold all kinds of data (like our `Value` enum or Arrow's `ScalarValue`) and `match` all the way up to do the aggregate calculations. It might work, though it's rather tedious. (But I think it's DataFusion's preferred way to write UDAFs.)
So is there a way we can make an aggregate function automatically match the input data's type? For example, a `Median` aggregator that can work on both `u32` and `i32` columns? The answer is yes, once we find a way to bypass DataFusion's restriction that it simply doesn't pass the input data's type when creating an Accumulator.
> There's an example in `my_sum_udaf_example.rs`, take that as quick start.
# 1. Impl `AggregateFunctionCreator` trait for your accumulator creator.
You must first define a struct that will be used to create your accumulator. For example,
```Rust
#[as_aggr_func_creator]
#[derive(Debug, AggrFuncTypeStore)]
struct MySumAccumulatorCreator {}
```
Attribute macro `#[as_aggr_func_creator]` and derive macro `#[derive(Debug, AggrFuncTypeStore)]` must both be annotated on the struct. They work together to provide storage for the aggregate function's input data types, which are needed for creating a generic accumulator later.
> Note that the `as_aggr_func_creator` macro will add fields to the struct, so the struct cannot be defined as a fieldless struct like `struct Foo;`, nor as a newtype like `struct Foo(bar)`.
Then impl `AggregateFunctionCreator` trait on it. The definition of the trait is:
```Rust
pub trait AggregateFunctionCreator: Send + Sync + Debug {
fn creator(&self) -> AccumulatorCreatorFunction;
fn output_type(&self) -> ConcreteDataType;
fn state_types(&self) -> Vec<ConcreteDataType>;
}
```
You can use input data's type in methods that return output type and state types (just invoke `input_types()`).
The output type is aggregate function's output data's type. For example, `SUM` aggregate function's output type is `u64` for a `u32` datatype column. The state types are accumulator's internal states' types. Take `AVG` aggregate function on a `i32` column as example, its state types are `i64` (for sum) and `u64` (for count).
The `creator` function is where you define how an accumulator (that will be used in DataFusion) is created. You define "how" to create the accumulator (instead of "what" to create), using the input data's type as arguments. With input datatype known, you can create accumulator generically.
# 2. Impl `Accumulator` trait for your accumulator.
The accumulator is where you store the aggregate calculation states and evaluate a result. You must impl `Accumulator` trait for it. The trait's definition is:
```Rust
pub trait Accumulator: Send + Sync + Debug {
fn state(&self) -> Result<Vec<Value>>;
fn update_batch(&mut self, values: &[VectorRef]) -> Result<()>;
fn merge_batch(&mut self, states: &[VectorRef]) -> Result<()>;
fn evaluate(&self) -> Result<Value>;
}
```
DataFusion basically executes an aggregation like this:
1. Partitioning all input data for aggregate. Create an accumulator for each part.
2. Call `update_batch` on each accumulator with partitioned data, to let you update your aggregate calculation.
3. Call `state` to get each accumulator's internal state, the intermediate calculation result.
4. Call `merge_batch` to merge all accumulator's internal state to one.
5. Execute `evaluate` on the chosen one to get the final calculation result.
Once you know the meaning of each method, you can easily write your accumulator. You can refer to `Median` accumulator or `SUM` accumulator defined in file `my_sum_udaf_example.rs` for more details.
# 3. Register your aggregate function to our query engine.
You can call `register_aggregate_function` method in query engine to register your aggregate function. To do that, you have to new an instance of struct `AggregateFunctionMeta`. The struct has three fields, first is the name of your aggregate function's name. The function name is case-sensitive due to DataFusion's restriction. We strongly recommend using lowercase for your name. If you have to use uppercase name, wrap your aggregate function with quotation marks. For example, if you define an aggregate function named "my_aggr", you can use "`SELECT MY_AGGR(x)`"; if you define "my_AGGR", you have to use "`SELECT "my_AGGR"(x)`".
The second field is arg_counts ,the count of the arguments. Like accumulator `percentile`, calculating the p_number of the column. We need to input the value of column and the value of p to calculate, and so the count of the arguments is two.
The third field is a function about how to create your accumulator creator that you defined in step 1 above. Create creator, that's a bit intertwined, but it is how we make DataFusion use a newly created aggregate function each time it executes a SQL, preventing the stored input types from affecting each other. The key detail can be starting looking at our `DfContextProviderAdapter` struct's `get_aggregate_meta` method.
# (Optional) 4. Make your aggregate function automatically registered.
If you've written a great aggregate function that wants to let everyone use it, you can make it automatically register to our query engine at start time. It's quick and simple, just refer to the `AggregateFunctions::register` function in `common/function/src/scalars/aggregate/mod.rs`.

View File

@@ -76,7 +76,7 @@ pub trait CompactionStrategy {
```
The most suitable compaction strategy for time-series scenario would be
a hybrid strategy that combines time window compaction with size-tiered compaction, just like [Cassandra](https://cassandra.apache.org/doc/latest/cassandra/managing/operating/compaction/twcs.html) and [ScyllaDB](https://docs.scylladb.com/stable/architecture/compaction/compaction-strategies.html#time-window-compaction-strategy-twcs) do.
We can first group SSTs in level n into buckets according to some predefined time window. Within that window,
SSTs are compacted in a size-tiered manner (find SSTs with similar sizes and compact them to level n+1).

View File

@@ -28,7 +28,7 @@ In order to do those things while maintaining a low memory footprint, you need t
- Greptime Flow is built on top of [Hydroflow](https://github.com/hydro-project/hydroflow).
- We have three choices for the Dataflow/Streaming process framework for our simple continuous aggregation feature:
1. Based on the timely/differential dataflow crate that [materialize](https://github.com/MaterializeInc/materialize) based on. Later, it's proved too obscure for a simple usage, and is hard to customize memory usage control.
2. Based on a simple dataflow framework that we write from the ground up, like what [arroyo](https://www.arroyo.dev/) or [risingwave](https://www.risingwave.dev/) did; for example, the core streaming logic of [arroyo](https://github.com/ArroyoSystems/arroyo/blob/master/crates/arroyo-datastream/src/lib.rs) only takes up to 2000 lines of code. However, it means maintaining another layer of dataflow framework, which might seem easy in the beginning, but I fear it might be too burdensome to maintain once we need more features.
3. Based on a simple and lower level dataflow framework that someone else write, like [hydroflow](https://github.com/hydro-project/hydroflow), this approach combines the best of both worlds. Firstly, it boasts ease of comprehension and customization. Secondly, the dataflow framework offers precisely the necessary features for crafting uncomplicated single-node dataflow programs while delivering decent performance.
Hence, we choose the third option, and use a simple logical plan that's agnostic to the underlying dataflow framework, as it only describes what the dataflow graph should be doing, not how it does it. We built operators in hydroflow to execute the plan, and the resulting hydroflow graph is wrapped in an engine that only supports data in/out and a tick event to flush and compute the result. This provides a thin middle layer that's easy to maintain and allows switching to another dataflow framework if necessary.

View File

@@ -0,0 +1,154 @@
---
Feature Name: Repartition
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/6558
Date: 2025-06-20
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
# Summary
This RFC proposes a method for repartitioning a table, to adjust the partition rule and data distribution.
# Motivation
With time passing, the data distribution and skew pattern of a table might change. We need a way to repartition the table to suit the new pattern.
# Details
Here is a rough workflow diagram of the entire repartition process, each step is described in detail below.
```mermaid
sequenceDiagram
participant Frontend
participant Metasrv
participant Datanodes
participant Region0 as Region 0
Frontend->>Frontend: Process request, validation etc.
Frontend->>Metasrv: Submit procedure
Metasrv->>Metasrv: Compute diff and generate migration plan
Metasrv->>Metasrv: Allocate necessary region resources (with PaaS)
Metasrv->>Datanodes: Stop compaction and snapshot
rect rgb(255, 225, 225)
note over Frontend, Region0: No Ingestion Period
Metasrv->>Frontend: Stop processing write requests
Metasrv->>Metasrv: Update metadata
Metasrv->>Frontend: Start processing read requests
end
Metasrv->>Datanodes: Update region rule, stage version changes from now on
Region0->>Region0: Compute new manifests for all regions
Region0->>Datanodes: Submit manifest changes
Metasrv->>Datanodes: Recover compaction and snapshot, make staged changes visible
note over Frontend, Datanodes: Reload Cache
Metasrv->>Metasrv: Release resources (with PaaS)
Metasrv->>Metasrv: Schedule optional compaction (to remote compactor)
```
## Preprocessing
This phase is for static analysis of the new partition rule. The server can know whether the repartitioning is possible, how to do the repartitioning, and how much resources are needed.
In theory, the input and output partition rules for repartitioning can be completely unrelated. But in practice, to avoid a very large change set, we'll only allow two simple kinds of change. One splits one region into two regions (region split) and another merges two regions into one (region merge).
After validating the new partition rule using the same validation logic as table creation, we compute the difference between the old and new partition rules. The resulting diff may contain several independent groups of changes. During subsequent processing, each group of changes can be handled independently and can succeed or fail without affecting other groups or creating non-idempotently retryable scenarios.
Next, we generate a repartition plan for each group of changes. Each plan contains the information for all regions involved in that particular plan, and one target region will only be referenced by a single plan.
With those plans, we can determine the resource requirements for the repartition operation, where resources here primarily refer to Regions. Metasrv will coordinate with PaaS layer to pre-allocate the necessary regions at this stage. These new regions start completely empty, and their metadata and manifests will be populated during subsequent modification steps.
## Data Processing
This phase is primarily for region's change, including region's metadata (route table and the corresponding rule) and manifest.
Once we start processing one plan through a procedure, we'll first stop the region's compaction and snapshot. This is to avoid any state being removed due to compaction (which may remove old SST files) or snapshot (which may remove old manifest files).
Metasrv will try to update the metadata of the partition, i.e., the region route table (related to `PartitionRuleManager`). This step is in the "no ingestion" scope, so no new data will be ingested. Since this won't take much time, the impact on the cluster is minimized. Metasrv will also update the region rule on the corresponding regions on Datanodes.
Every region and all the ingestion requests to the region server will carry a version of the region rule, to identify under which rule a request is processed. The version can be something like `hash(region_rule)`. Once the region rule on the region server is updated, all ingestion requests with the old rule will be rejected, and all requests with the new rule will be accepted but not made visible. They can still be flushed to persisted storage, but their version change (new manifest) will be staged.
Then region 0 (or let metasrv to pick any operational region) will compute the new manifests for all target regions. This step is done by first reading all old manifests, and remapping the files with new partition rule, to get the content of new manifests. Notice this step only handles the manifests before region rule change on region server, and won't touch those staged manifests, as they are already with the new rule.
Those new manifests will be submitted to the corresponding target regions by region 0 via a `RegionEdit` request. If this request fails after a few retries, region 0 will try to roll back the change by directly overwriting the manifest on object storage, report the failure to metasrv, and let the entire repartition procedure fail. We can also optionally compute the new manifests for those staged version changes (like another repartition) and submit them to the target regions, making them visible even if the repartition fails.
On the other hand, a successful `RegionEdit` request also acknowledges those staged version changes and makes them visible.
After this step, the repartition is done in the data plane. We can start to process compaction and snapshot again.
## Postprocessing
After the main processing is done, we can do some extra postprocessing to reduce the performance impact of repartition. Including reloading caches in frontend's route table, metasrv's kv cache and datanode's read/write/page cache etc.
We can also schedule an optional compaction to reorganize all the data file under the new partition rule to reduce potential fragmentation or read amplification.
## Procedure
Here describe the repartition procedure step by step:
- <on frontend> Validating repartition request
- <on frontend> Initialize the repartition procedure
- Calculate rule diff and repartition plan group
- Allocate necessary new regions
- Lock the table key
- For each repartition subprocedure
- Stop compaction and snapshot
- Forbid new ingestion requests, update metadata, allow ingestion requests.
- Update region rule to regions
- Pick one region to calculate new manifest for all regions in this repartition group
- Let that region to apply new manifest to each region via `RegionEdit`
- If failed after some retries, revert this manifest change to other succeeded regions and mark this failure.
- If all succeeded, acknowledge those staged version changes and make them visible.
- Return result
- Collect results from subprocedure.
- For those who failed, we need to restart those regions to force reconstruct their status from manifests
- For those who succeeded, collect and merge their rule diff
- Unlock the table key
- Report the result to user.
- <in background> Reload cache
- <in background> Maybe trigger a special compaction
In addition to the sequential steps, rollback is also an important part of this procedure. There are three steps that can be rolled back when an unrecoverable failure occurs.
If the metadata update is not committed, we can overwrite the metadata to previous version. This step is scoped in the "no ingestion" period, so no new data will be ingested and the status of both datanode and metasrv will be consistent.
If the `RegionEdit` to other regions is not acknowledged, or only partially acknowledged, we can directly overwrite the manifest on object storage from the central region (which computed the new manifest), and force the region server to reload the corresponding region so it recovers its state from object storage.
If the staged version changes are not acknowledged, we can re-compute manifest based on old rule for staged data, and apply them directly like above. This is like another smaller repartition for those staged data.
## Region rule validation and diff calculation
In the current codebase, the rule checker is not complete. It can't check uniqueness and completeness of the rule. This RFC also propose a new way to validate the rule.
The proposed validation way is based on a check-point system, which first generates a group of check-points from the rule, and then check if all the point is covered and only covered by one rule.
All partition rule expressions are limited to the form `<column> <operator> <value>`, and the operator is limited to comparison operators. These expressions are allowed to be nested with `AND` and `OR` operators. Based on this, we can first extract all the unique values on each column, adding and subtracting a little epsilon to cover each value's left and right boundary.
Since we accept integer, float and string as value types, computing on them directly is not convenient. So we first normalize them to a common type that only needs to preserve the relative partial ordering. This also avoids the problem of "what is the next/previous value" for strings and "what's a good precision" for floats.
After normalization, we get a set of scatter points for each column. Then we can generate a set of check-points by combining all the scatter points, like building a cartesian product. This might produce a large number of check-points, so we can do a prune optimization to remove some of them by merging some of the expression zones. Expressions that have identical N-1 edge sub-expressions with one adjacent edge can be merged together. This prune check has a time complexity of O(N * M * log(M)), where N is the number of active dimensions and M is the number of expression zones. Diff calculation is also done by finding different expression zones between the old and new rule set, and checking whether we can transform one into the other by merging some of the expression zones.
The step that validates the check-point set against expressions can be treated as a tiny evaluation of `PhysicalExpr`. This evaluation gives a boolean matrix of K*M shape, where K is the number of check-points. We then check, in each row of the matrix, that there is one and only one true value.
## Compute and use new manifest
We can generate a new set of manifest files based on the old manifest and the two versions of the rule. From the rule processing part above, we can tell how a new rule & region derives from the previous one. So a simple way to get the new manifest is to also apply those change steps to the manifest files. E.g., if region A is from regions B and C, we simply combine all file IDs from B and C to generate the content of A.
If necessary, we can do this better by involving some metadata related to data, like min-max statistics of each file, and pre-evaluate over min-max to filter out unneeded files when generating new manifest.
The way to use new manifest needs one more extra step based on the current implementation. We'll need to record either in manifest or in file metadata, of what rule is used when generating (flush or compaction) a SST file. Then in every single read request, we need to append the current region rule as predicate to the read request, to ensure no data belong to other regions will be read. We can use the stored region rule to reduce the number of new predicates to apply, by removing the identical predicate between the current region rule and the stored region rule. So ideally in a table that has not been repartitioned recently, the overhead of checking region rule is minimal.
## Pre-required tasks
In above steps, we assume some functionalities are implemented. Here list them with where they are used and how to implement them.
### Cross-region read
The current data directory structure is `{table_id}/{region_id}/[data/metadata]/{file_id}`, every region can only access files under their own directory. After repartition, data file may be placed in other previous old regions. So we need to support cross-region read. This new access method allows region to access any file under the same table. Related tracking issue is <https://github.com/GreptimeTeam/greptimedb/issues/6409>.
### Global GC worker
This is to simplify state management of data files, as one file may be referenced in multiple manifests, or in no manifest at all. With this, every region and the repartition process only need to care about generating and using new files, without tracking whether a file should be deleted. Deletion is left to the global GC worker, which basically works by counting references from manifest files and removing unused ones. Related tracking issue is **TBD**.
# Alternatives
In the "Data Processing" section, we can enlarge the "no ingestion" period to include almost all the steps. This can simplify the entire procedure by a lot, but will bring a longer time of ingestion pause which may not be acceptable.

View File

@@ -0,0 +1,151 @@
---
Feature Name: Compatibility Test Framework
Tracking Issue: TBD
Date: 2025-07-04
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
# Summary
This RFC proposes a compatibility test framework for GreptimeDB to ensure backward/forward compatibility for different versions of GreptimeDB.
# Motivation
In current practice, we don't have a systematic way to test and ensure the compatibility of different versions of GreptimeDB. Each time we release a new version, we need to manually test the compatibility with ad-hoc cases. This is not only time-consuming, but also error-prone and unmaintainable, and it relies heavily on the release manager to ensure the compatibility of different versions of GreptimeDB.
We don't have a detailed guide in the release SoP on how to test and ensure the compatibility of a new version, and we have broken compatibility many times (`v0.14.1` and `v0.15.1` are two examples, both released right after a major release).
# Details
This RFC proposes a compatibility test framework that is easy to maintain, extend and run. It can tell the compatibility between any given two versions of GreptimeDB, both backward and forward. It's based on the Sqlness library but used in a different way.
Generally speaking, the framework is composed of two parts:
1. Test cases: A set of test cases that are maintained dedicatedly for the compatibility test. Still in the `.sql` and `.result` format.
2. Test framework: A new sqlness runner that is used to run the test cases, with some new features that are not required by the integration sqlness test.
## Test Cases
### Structure
The case set is organized in three parts:
- `1.feature`: Use a new feature
- `2.verify`: Verify database behavior
- `3.cleanup`: Paired with `1.feature`, cleanup the test environment.
These three parts are organized in a tree structure, and should be run in sequence:
```
compatibility_test/
├── 1.feature/
│ ├── feature-a/
│ ├── feature-b/
│ └── feature-c/
├── 2.verify/
│ ├── verify-metadata/
│ ├── verify-data/
│ └── verify-schema/
└── 3.cleanup/
├── cleanup-a/
├── cleanup-b/
└── cleanup-c/
```
### Example
For example, for a new feature like adding new index option ([#6416](https://github.com/GreptimeTeam/greptimedb/pull/6416)), we (who implement the feature) create a new test case like this:
```sql
-- path: compatibility_test/1.feature/index-option/granularity_and_false_positive_rate.sql
-- SQLNESS ARG since=0.15.0
-- SQLNESS IGNORE_RESULT
CREATE TABLE granularity_and_false_positive_rate (ts timestamp time index, val double) with ("index.granularity" = "8192", "index.false_positive_rate" = "0.01");
```
And
```sql
-- path: compatibility_test/3.cleanup/index-option/granularity_and_false_positive_rate.sql
drop table granularity_and_false_positive_rate;
```
Since this new feature doesn't require a special way to verify the database behavior, we can reuse existing test cases in `2.verify/` to verify it. For example, we can reuse the `verify-metadata` test case to verify the metadata of the table.
```sql
-- path: compatibility_test/2.verify/verify-metadata/show-create-table.sql
-- SQLNESS TEMPLATE TABLE="SHOW TABLES";
SHOW CREATE TABLE $TABLE;
```
In this example, we use some new sqlness features that will be introduced in the next section (`since`, `IGNORE_RESULT`, `TEMPLATE`).
### Maintenance
Each time implement a new feature that should be covered by the compatibility test, we should create a new test case in `1.feature/` and `3.cleanup/` for them. And check if existing cases in `2.verify/` can be reused to verify the database behavior.
This simulates an enthusiastic user who uses all the new features right away. The only new maintenance burden is on the feature implementer, who writes one more test case for the new feature to "fixate" its behavior. Once there is a breaking change in the future, it can be detected by the compatibility test framework automatically.
Another topic is deprecation. If a feature is deprecated, we should also mark it in the test case. Still using the above example, assume we deprecate the `index.granularity` and `index.false_positive_rate` index options in `v0.99.0`; we can mark them as:
```sql
-- SQLNESS ARG since=0.15.0 till=0.99.0
...
```
This tells the framework to ignore this feature in version `v0.99.0` and later. We currently have many experimental features that are scheduled to be broken in the future, and this is a good way to mark them.
## Test Framework
This section is about new sqlness features required by this framework.
### Since and Till
Following the `ARG` interceptor in sqlness, we can mark a feature as available between two given versions. Only `since` is required:
```sql
-- SQLNESS ARG since=VERSION_STRING [till=VERSION_STRING]
```
### IGNORE_RESULT
`IGNORE_RESULT` is a new interceptor, it tells the runner to ignore the result of the query, only check whether the query is executed successfully.
This is useful to reduce the maintenance burden of the test cases: unlike the integration sqlness test, in most cases we don't care about the result of the query; we only need to make sure the query executes successfully.
### TEMPLATE
`TEMPLATE` is another new interceptor; it can generate queries from a template based on runtime data.
In above example, we need to run the `SHOW CREATE TABLE` query for all existing tables, so we can use the `TEMPLATE` interceptor to generate the query with a dynamic table list.
### RUNNER
There are also some extra requirements for the runner itself:
- It should run the test cases in sequence, first `1.feature/`, then `2.verify/`, and finally `3.cleanup/`.
- It should be able to fetch required version automatically to finish the test.
- It should handle the `since` and `till` properly.
In the `1.feature` phase, the runner needs to identify all features that need to be tested by version number, and then restart with a new version (the `to` version) to run the `2.verify/` and `3.cleanup/` phases.
## Test Report
Finally, we can run the compatibility test to verify the compatibility between any given two versions of GreptimeDB, for example:
```bash
# check backward compatibility between v0.15.0 and v0.16.0 when releasing v0.16.0
./sqlness run --from=0.15.0 --to=0.16.0
# check forward compatibility when downgrading from v0.15.0 to v0.13.0
./sqlness run --from=0.15.0 --to=0.13.0
```
We can also use a script to run the compatibility test for all the versions in a given range to give a quick report with all versions we need.
And we always bump the version in `Cargo.toml` to the next major release version, so the next major release version can be used as "latest" unpublished version for scenarios like local testing.
# Alternatives
There was a previous attempt to implement a compatibility test framework that was disabled for various reasons: [#3728](https://github.com/GreptimeTeam/greptimedb/issues/3728).

View File

@@ -0,0 +1,188 @@
---
Feature Name: "global-gc-worker"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/6571
Date: 2025-07-23
Author: "discord9 <discord9@163.com>"
---
# Global GC Worker
## Summary
This RFC proposes the integration of a garbage collection (GC) mechanism within the Compaction process. This mechanism aims to manage and remove stale files that are no longer actively used by any system component, thereby reclaiming storage space.
## Motivation
With the introduction of features such as table repartitioning, a substantial number of Parquet files can become obsolete. Furthermore, failures during manifest updates may result in orphaned files that are never referenced by the system. Therefore, a periodic garbage collection mechanism is essential to reclaim storage space by systematically removing these unused files.
## Details
### Overview
The garbage collection process will be integrated directly into the Compaction process. Upon the completion of a Compaction for a given region, the GC worker will be automatically triggered. Its primary function will be to identify and subsequently delete obsolete files that have persisted beyond their designated retention period. This integration ensures that garbage collection is performed in close conjunction with data lifecycle management, effectively leveraging the compaction process's inherent knowledge of file states.
This design prioritizes correctness and safety by explicitly linking GC execution to a well-defined operational boundary: the successful completion of a compaction cycle.
### Terminology
- **Unused File**: Refers to a file present in the storage directory that has never been formally recorded in any manifest. A common scenario for this includes cases where a new SST file is successfully written to storage, but the subsequent update to the manifest fails, leaving the file unreferenced.
- **Obsolete File**: Denotes a file that was previously recorded in a manifest but has since been explicitly marked for removal. This typically occurs following operations such as data repartitioning or compaction.
### GC Worker Process
The GC worker operates as an integral part of the Compaction process. Once a Compaction for a specific region is completed, the GC worker is automatically triggered. Executing this process on a `datanode` is preferred to eliminate the overhead associated with having to set object storage configurations in the `metasrv`.
The detailed process is as follows:
1. **Invocation**: Upon the successful completion of a Compaction for a region, the GC worker is invoked.
2. **Manifest Reading**: The worker reads the region's primary manifest to obtain a comprehensive list of all files marked as obsolete. Concurrently, it reads any temporary manifests generated by long-running queries to identify files that are currently in active use, thereby preventing their premature deletion.
3. **Lingering Time Check (Obsolete Files)**: For each identified obsolete file, the GC worker evaluates its "lingering time", which is the time elapsed since it was removed from the manifest.
4. **Deletion Marking (Obsolete Files)**: Files that have exceeded their maximum configurable lingering time and are not referenced by any active temporary manifests are marked for deletion.
5. **Lingering Time (Unused Files)**: Unused files (those never recorded in any manifest) are also subject to a configurable maximum lingering time before they are eligible for deletion.
The following flowchart illustrates the GC worker's process:
```mermaid
flowchart TD
A[Compaction Completed] --> B[Trigger GC Worker]
B --> C[Scan Region Manifest]
C --> D[Identify File Types]
D --> E[Unused Files<br/>Never recorded in manifest]
D --> F[Obsolete Files<br/>Previously in manifest<br/>but marked for removal]
E --> G[Check Lingering Time]
F --> G
G --> H{File exceeds<br/>configured lingering time?}
H -->|No| I[Skip deletion]
H -->|Yes| J[Check Temporary Manifest]
J --> K{File in use by<br/>active queries?}
K -->|Yes| L[Retain file<br/>Wait for next GC cycle]
K -->|No| M[Safely delete file]
I --> N[End GC cycle]
L --> N
M --> O[Update Manifest]
O --> N
N --> P[Wait for next Compaction]
P --> A
style A fill:#e1f5fe
style B fill:#f3e5f5
style M fill:#e8f5e8
style L fill:#fff3e0
```
#### Handling Obsolete Files
An obsolete file is permanently deleted only if two conditions are met:
1. The time elapsed since its removal from the manifest (its obsolescence timestamp) exceeds a configurable threshold.
2. It is not currently referenced by any active temporary manifests.
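A minimal sketch of this check, assuming hypothetical names for the removal timestamp, the configured lingering time, and the set of file ids gathered from active temporary manifests:
```rust
use std::collections::HashSet;
use std::time::{Duration, SystemTime};

/// Illustrative sketch; names are hypothetical, not the actual implementation.
/// Returns true when an obsolete file may be permanently deleted: it has
/// lingered past the configured threshold since its removal from the
/// manifest, and no active temporary manifest references it.
fn can_delete_obsolete_file(
    file_id: &str,
    removed_from_manifest_at: SystemTime,
    lingering_time: Duration,
    temp_manifest_refs: &HashSet<String>,
) -> bool {
    let lingered = SystemTime::now()
        .duration_since(removed_from_manifest_at)
        .map(|elapsed| elapsed > lingering_time)
        .unwrap_or(false);
    lingered && !temp_manifest_refs.contains(file_id)
}
```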
#### Handling Unused Files
With the GC worker integrated into the Compaction process, the risk of accidentally deleting newly created SST files that have not yet been recorded in the manifest is significantly mitigated, so "Unused Files" largely cease to be a category at risk of accidental deletion. Any files that are genuinely unused (i.e., never referenced by any manifest, including temporary ones) can be safely deleted after a configurable maximum lingering time.
For debugging and auditing purposes, a comprehensive list of recently deleted files can be maintained.
### Ensuring Read Consistency
To prevent the GC worker from inadvertently deleting files that are actively being utilized by long-running analytical queries, a robust protection mechanism is introduced. This mechanism relies on temporary manifests that are actively kept "alive" by the queries using them.
When a long-running query is detected (e.g., by a slow query recorder), it will write a temporary manifest to the region's manifest directory. This manifest lists all files required for the query. However, simply creating this file is not enough, as a query runner might crash, leaving the temporary manifest orphaned and preventing garbage collection indefinitely.
To address this, the following "heartbeat" mechanism is implemented:
1. **Periodic Updates**: The process executing the long-running query is responsible for periodically updating the modification timestamp of its temporary manifest file (i.e., "touching" the file). This serves as a heartbeat, signaling that the query is still active.
2. **GC Worker Verification**: When the GC worker runs, it scans for temporary manifests. For each one it finds, it checks the file's last modification time.
3. **Stale File Handling**: If a temporary manifest's last modification time is older than a configurable threshold, the GC worker considers it stale (left over from a crashed or terminated query). The GC worker will then delete this stale temporary manifest. Files that were protected only by this stale manifest are no longer shielded from garbage collection.
This approach ensures that only files for genuinely active queries are protected. The lifecycle of the temporary manifest is managed dynamically: it is created when a long query starts, kept alive through periodic updates, and is either deleted by the query upon normal completion or automatically cleaned up by the GC worker if the query terminates unexpectedly.
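A sketch of the staleness check under the same assumptions (the threshold is illustrative; the real worker would read the manifest's modification time from object storage):
```rust
use std::time::{Duration, SystemTime};

/// Illustrative sketch with hypothetical names. A temporary manifest is
/// considered stale when its last heartbeat ("touch") is older than the
/// configured threshold, indicating the owning query crashed or terminated.
fn is_temp_manifest_stale(last_modified: SystemTime, stale_threshold: Duration) -> bool {
    SystemTime::now()
        .duration_since(last_modified)
        .map(|age| age > stale_threshold)
        .unwrap_or(false)
}
```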
This mechanism may be too complex to implement at once. We can consider a two-phased approach:
1. **Phase 1 (Simple Time-Based Deletion)**: Initially, implement a simpler GC strategy that deletes obsolete files based solely on a configurable lingering time. This provides a baseline for space reclamation without the complexity of temporary manifests.
2. **Phase 2 (Consistency-Aware GC)**: Based on the practical effectiveness and observed issues from Phase 1, we can then decide whether to implement the full temporary manifest and heartbeat mechanism to handle long-running queries. This iterative approach allows for a quicker initial implementation while gathering real-world data to justify the need for a more complex solution.
## Drawbacks
- **Dependency on Compaction Frequency**: The integration of the GC worker with Compaction means that GC cycles are directly tied to the frequency of compactions. In environments with infrequent compaction operations, obsolete files may accumulate for extended periods before being reclaimed, potentially leading to increased storage consumption.
- **Race Condition with Long-Running Queries**: A potential race condition exists if a long-running query starts but has not yet written its temporary manifest when a concurrent compaction marks files used by that query as obsolete. This scenario could lead to the premature deletion of files still required by the active query. To mitigate this, the threshold time for writing a temporary manifest should be significantly shorter than the lingering time configured for obsolete files, ensuring that subsequent GC runs do not delete files that are now referenced by a temporary manifest while the query is still running.
A read replica's manifest version must also not lag behind by more than the lingering time of obsolete files; otherwise it might reference files that the GC worker has already deleted.
- **Temporary Manifest Upload**: The temporary manifest needs to be uploaded to object storage, which may introduce additional complexity and potential performance overhead. However, since long-running queries are typically infrequent, the performance impact is expected to be minimal.
One potential race condition with region migration is illustrated below:
```mermaid
sequenceDiagram
participant gc_worker as GC Worker (same dn as region 1)
participant region1 as Region 1 (Leader → Follower)
participant region2 as Region 2 (Follower → Leader)
participant region_dir as Region Directory
gc_worker->>region1: Start GC, get region manifest
activate region1
region1-->>gc_worker: Region 1 manifest
deactivate region1
gc_worker->>region_dir: Scan region directory
Note over region1,region2: Region Migration Occurs
region1-->>region2: Downgrade to Follower
region2-->>region1: Becomes Leader
region2->>region_dir: Add new file
gc_worker->>region_dir: Continue scanning
gc_worker-->>region_dir: Discovers new file
Note over gc_worker: New file not in Region 1's manifest
gc_worker->>gc_worker: Mark file as orphan (incorrectly)
```
This could cause the GC worker to incorrectly mark the new file as an orphan and delete it, if the configured lingering time for orphan files (files not referenced by any manifest, used or unused) is not long enough.
A sufficient solution is to take a lock that prevents the GC worker from running on a region while region migration is in progress on that region, and vice versa.
The race condition between the GC worker and repartitioning also needs to be considered carefully. For now, acquiring locks for both region migration and repartitioning during the GC worker process is a simple solution.
## Conclusion and Rationale
This section summarizes the key aspects and trade-offs of the proposed integrated GC worker, highlighting its advantages and potential challenges.
| Aspect | Current Proposal (Integrated GC) |
| :--- | :--- |
| **Implementation Complexity** | **Medium**. Requires careful integration with the compaction process and the slow query recorder for temporary manifest management. |
| **Reliability** | **High**. Integration with compaction and leveraging temporary manifests from long-running queries significantly mitigates the risk of incorrect deletion. Accurate management of lingering times for obsolete files and prevention of accidental deletion of newly created SSTs enhance data safety. |
| **Performance Overhead** | **Low to Medium**. The GC worker runs post-compaction, minimizing direct impact on write paths. Overhead from temporary manifest management by the slow query recorder is expected to be acceptable for long-running queries. |
| **Impact on Other Components** | **Moderate**. Requires modifications to the compaction process to trigger GC and the slow query recorder to manage temporary manifests. This introduces some coupling but enhances overall data safety. |
| **Deletion Strategy** | **State- and Time-Based**. Obsolete files are deleted based on a configurable lingering time, which is paused if the file is referenced by a temporary manifest. Unused files (never in a manifest) are also subject to a lingering time. |
## Unresolved Questions and Future Work
This section outlines key areas requiring further discussion and defines potential avenues for future development.
* **Slow Query Recorder Implementation**: Detailed specifications for modifying the slow query recorder's implementation and its precise interaction with temporary manifests are needed.
* **Configurable Lingering Times**: Establish and make configurable the specific lingering times for both obsolete and unused files to optimize storage reclamation and data availability.
## Alternatives
### 1. Standalone GC Service
Instead of integrating the GC worker directly into the Compaction process, a standalone GC service could be implemented. This service would operate independently, periodically scanning the storage for obsolete and unused files based on manifest information and predefined retention policies.
**Pros:**
* **Decoupling**: Separates GC logic from compaction, allowing independent scaling and deployment.
* **Flexibility**: Can be configured to run at different frequencies and with different strategies than compaction.
**Cons:**
* **Increased Complexity**: Requires a separate service to manage, monitor, and coordinate with other components.
* **Potential for Redundancy**: May duplicate some file scanning logic already present in compaction.
* **Consistency Challenges**: Ensuring read consistency would require more complex coordination mechanisms between the standalone GC service and active queries, potentially involving a distributed lock manager or a more sophisticated temporary manifest system.
This alternative could be implemented in the future if the integrated GC worker proves insufficient or if there is a need for more advanced GC strategies.
### 2. Manifest-Driven Deletion (No Lingering Time)
This alternative would involve immediate deletion of files once they are removed from the manifest, without a lingering time.
**Pros:**
* **Simplicity**: Simplifies the GC logic by removing the need for lingering time management.
* **Immediate Space Reclamation**: Storage space is reclaimed as soon as files are marked for deletion.
**Cons:**
* **Increased Risk of Data Loss**: Higher risk of deleting files still in use by long-running queries or other processes if not perfectly synchronized.
* **Complex Read Consistency**: Requires extremely robust and immediate mechanisms to ensure that no active queries are referencing files marked for deletion, potentially leading to performance bottlenecks or complex error handling.
* **Debugging Challenges**: Difficult to debug issues related to premature file deletion due to the immediate nature of the operation.


@@ -0,0 +1,112 @@
---
Feature Name: Async Index Build
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/6756
Date: 2025-08-16
Author: "SNC123 <sinhco@outlook.com>"
---
# Summary
This RFC proposes an asynchronous index build mechanism in the database, with a configuration option to choose between synchronous and asynchronous modes, aiming to improve flexibility and adapt to different workload requirements.
# Motivation
Currently, index creation is performed synchronously, which may lead to prolonged write suspension and impact business continuity. As data volume grows, the time required for index building increases significantly. An asynchronous solution is urgently needed to enhance user experience and system throughput.
# Details
## Overview
The following table highlights the difference between async and sync index approach:
| Approach | Trigger | Data Source | Additional Index Metadata Installation | Fine-grained `FileMeta` Index |
| :--- | :--- | :--- | :--- | :--- |
| Sync Index | On `write_sst` | Memory (on flush) / Disk (on compact) | Not required (already installed synchronously) | Not required |
| Async Index | 4 trigger types | Disk | Required | Required |
The index build mode (synchronous or asynchronous) can be selected via configuration file.
### Four Trigger Types
This RFC introduces four `IndexBuildType`s to trigger index building:
- **Manual Rebuild**: Triggered by the user via `ADMIN build_index("table_name")`, for scenarios like recovering from failed builds or migrating data. SST files whose `ColumnIndexMetadata` (see below) is already consistent with the `RegionMetadata` will be skipped.
- **Schema Change**: Automatically triggered when the schema of an indexed column is altered.
- **Flush**: Automatically builds indexes for new SST files created by a flush.
- **Compact**: Automatically builds indexes for new SST files created by a compaction.
### Additional Index Metadata Installation
Previously, index information in the in-memory `FileMeta` was updated synchronously. The async approach requires an explicit installation step.
A race condition can occur when compaction and index building run concurrently, leading to:
1. Building an index for a file that is about to be deleted by compaction.
2. Creating an unnecessary index file and an incorrect manifest record.
3. On restart, replaying the manifest could load metadata for a non-existent file.
To prevent this, the system checks if a file's `FileMeta` is in a `compacting` state before updating the manifest. If it is, the installation is aborted.
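A hedged sketch of this guard; the struct fields and names below are stand-ins for the real mito types:
```rust
use std::collections::HashSet;

// Minimal stand-ins for illustration only; real types live in mito.
struct FileMeta {
    file_id: String,
    compacting: bool,
}
struct RegionVersion {
    live_files: HashSet<String>,
}

/// Install index metadata only if the file is still part of the current
/// version and is not being compacted; otherwise abort the installation.
fn should_install_index(meta: &FileMeta, version: &RegionVersion) -> bool {
    version.live_files.contains(&meta.file_id) && !meta.compacting
}
```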
### Fine-grained `FileMeta` Index
The original `FileMeta` only stored file-level index information. However, manual rebuilds require column-level details to identify files inconsistent with the current DDL. Therefore, the `indexes` field in `FileMeta` is updated as follows:
```rust
struct FileMeta {
...
// From file-level:
// available_indexes: SmallVec<[IndexType; 4]>
// To column-level:
indexes: Vec<ColumnIndexMetadata>,
...
}
pub struct ColumnIndexMetadata {
pub column_id: ColumnId,
pub created_indexes: IndexTypes,
}
```
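For illustration, a sketch of how a manual rebuild could use this column-level metadata to skip files already consistent with the region's expected indexes (`IndexTypes` is modeled as a bitset here; all names are assumptions):
```rust
use std::collections::HashMap;

type ColumnId = u32;
type IndexTypes = u8; // stand-in bitset of index kinds

struct ColumnIndexMetadata {
    column_id: ColumnId,
    created_indexes: IndexTypes,
}

/// Hypothetical consistency check: a file can be skipped when every column
/// that should be indexed already has all of its expected index types built.
fn is_consistent(
    file_indexes: &[ColumnIndexMetadata],
    expected: &HashMap<ColumnId, IndexTypes>,
) -> bool {
    expected.iter().all(|(col, want)| {
        file_indexes
            .iter()
            .find(|m| m.column_id == *col)
            .map_or(false, |m| m.created_indexes & want == *want)
    })
}
```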
## Process
The index building process is similar to a flush and is illustrated below:
```mermaid
sequenceDiagram
Region0->>Region0: Triggered by one of 4 conditions, targets specific files
loop For each target file
Region0->>IndexBuildScheduler: Submits an index build task
end
IndexBuildScheduler->>IndexBuildTask: Executes the task
IndexBuildTask->>Storage Interfaces: Reads SST data from disk
IndexBuildTask->>IndexBuildTask: Builds the index file
alt Index file size > 0
IndexBuildTask->>Region0: Sends IndexBuildFinished notification
end
alt File exists in Version and is not compacting
Region0->>Storage Interfaces: Updates manifest and Version
end
```
### Task Triggering and Scheduling
The process starts with one of the four `IndexBuildType` triggers. In `handle_rebuild_index`, the `RegionWorkerLoop` identifies target SSTs from the request or the current region version. It then creates an `IndexBuildTask` for each file and submits it to the `index_build_scheduler`.
Similar to Flush and Compact operations, index build tasks are ultimately dispatched to the LocalScheduler. Resource usage can be adjusted via configuration files. Since asynchronous index tasks are both memory-intensive and IO-intensive but have lower priority, it is recommended to allocate fewer resources to them compared to compaction and flush tasks—for example, limiting them to 1/8 of the CPU cores.
### Index Building and Notification
The scheduled `IndexBuildTask` executes its `index_build` method. It uses an `indexer_builder` to create an `Indexer` that reads SST data and builds the index. If a new index file is created (`IndexOutput.file_size > 0`), the task sends an `IndexBuildFinished` notification back to the `RegionWorkerLoop`.
### Index Metadata Installation
Upon receiving the `IndexBuildFinished` notification in `handle_index_build_finished`, the `RegionWorkerLoop` verifies that the file still exists in the current `version` and is not being compacted. If the check passes, it calls `manifest_ctx.update_manifest` to apply a `RegionEdit` with the new index information, completing the installation.
# Drawbacks
Asynchronous index building may consume extra system resources, potentially affecting overall performance during peak periods.
There may be a delay before the new index becomes available for queries, which could impact certain use cases.
# Unresolved Questions and Future Work
**Resource Management and Throttling**: The resource consumption (CPU, I/O) of background index building can be managed and limited to some extent by configuring a dedicated background thread pool. However, this approach cannot fully eliminate resource contention, especially under heavy workloads or when I/O is highly competitive. Additional throttling mechanisms or dynamic prioritization may still be necessary to avoid impacting foreground operations.
# Alternatives
Instead of being triggered by events like Flush or Compact, index building could be performed in batches during scheduled maintenance windows. This offers predictable resource usage but delays index availability.


@@ -0,0 +1,463 @@
---
Feature Name: "laminar-flow"
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/TBD
Date: 2025-09-08
Author: "discord9 <discord9@163.com>"
---
# laminar Flow
## Summary
This RFC proposes a redesign of the flow architecture where flownode becomes a lightweight in-memory state management node with an embedded frontend for direct computation. This approach optimizes resource utilization and improves scalability by eliminating network hops while maintaining clear separation between coordination and computation tasks.
## Motivation
The current flow architecture has several limitations:
1. **Resource Inefficiency**: Flownodes perform both state management and computation, leading to resource duplication and inefficient utilization.
2. **Scalability Constraints**: Computation resources are tied to flownode instances, limiting horizontal scaling capabilities.
3. **State Management Complexity**: Mixing computation with state management makes the system harder to maintain and debug.
4. **Network Overhead**: Additional network hops between flownode and separate frontend nodes add latency.
The laminar Flow architecture addresses these issues by:
- Consolidating computation within flownode through embedded frontend
- Eliminating network overhead by removing separate frontend node communication
- Simplifying state management by focusing flownode on its core responsibility
- Improving system scalability and maintainability
## Details
### Architecture Overview
The laminar Flow architecture transforms flownode into a lightweight coordinator that maintains flow state with an embedded frontend for computation. The key components involved are:
1. **Flownode**: Maintains in-memory state, coordinates computation, and includes an embedded frontend for query execution
2. **Embedded Frontend**: Executes **incremental** computations within the flownode
3. **Datanode**: Stores final results and source data
```mermaid
graph TB
subgraph "laminar Flow Architecture"
subgraph Flownode["Flownode (State Manager + Embedded Frontend)"]
StateMap["Flow State Map<br/>Map<Timestamp, (Map<Key, Value>, Sequence)>"]
Coordinator["Computation Coordinator"]
subgraph EmbeddedFrontend["Embedded Frontend"]
QueryEngine["Query Engine"]
AggrState["__aggr_state Executor"]
end
end
subgraph Datanode["Datanode"]
Storage["Data Storage"]
Results["Result Tables"]
end
end
Coordinator -->|Internal Query| EmbeddedFrontend
EmbeddedFrontend -->|Incremental States| Coordinator
Flownode -->|Incremental Results| Datanode
EmbeddedFrontend -.->|Read Data| Datanode
```
### Core Components
#### 1. Flow State Management
Flownode maintains a state map for each flow:
```rust
type FlowState = Map<Timestamp, (Map<Key, Value>, Sequence)>;
```
Where:
- **Timestamp**: Time window identifier for aggregation groups
- **Key**: Aggregation group expressions (`group_exprs`)
- **Value**: Aggregation expressions results (`aggr_exprs`)
- **Sequence**: Computation progress marker for incremental updates
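A minimal sketch of this state map and an incremental merge, with simplified key/value types (a real aggregate state would be an opaque, mergeable blob rather than an `f64`):
```rust
use std::collections::HashMap;

type Timestamp = i64;
type Key = String;   // serialized group_exprs (illustrative)
type Value = f64;    // stand-in for a mergeable aggregate state
type Sequence = u64;

type FlowState = HashMap<Timestamp, (HashMap<Key, Value>, Sequence)>;

/// Merge a batch of partial aggregation states for one time window into
/// the flow state and advance that window's sequence marker.
fn merge_partial(
    state: &mut FlowState,
    window: Timestamp,
    partial: HashMap<Key, Value>,
    seq: Sequence,
) {
    let entry = state.entry(window).or_insert_with(|| (HashMap::new(), 0));
    for (key, value) in partial {
        // Sum-like merge for illustration; the real merge depends on the aggregate.
        *entry.0.entry(key).or_insert(0.0) += value;
    }
    entry.1 = entry.1.max(seq);
}
```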
#### 2. Incremental Computation Process
The computation process follows these steps:
1. **Trigger Evaluation**: Flownode determines when to trigger computation based on:
- Time intervals (periodic updates)
- Data volume thresholds
- Sequence progress requirements
2. **Query Execution**: Flownode executes `__aggr_state` queries using its embedded frontend with:
- Time window filters
- Sequence range constraints
3. **State Update**: Flownode receives partial state results and updates its internal state:
- Merges new values with existing aggregation state
- Updates sequence markers to track progress
- Identifies changed time windows for result computation
4. **Result Materialization**: Flownode computes final results using `__aggr_merge` operations:
- Processes only updated time windows (and time series) for efficiency
- Writes results back to datanode directly through its embedded frontend
### Detailed Workflow
#### Incremental State Query
```sql
-- Example incremental state query executed by embedded frontend
SELECT
__aggr_state(avg(value)) as state,
time_window,
group_key
FROM source_table
WHERE
timestamp >= :window_start
AND timestamp < :window_end
AND __sequence >= :last_sequence
AND __sequence < :current_sequence
-- sequence range is actually written in grpc header, but shown here for clarity
GROUP BY time_window, group_key;
```
#### State Merge Process
```mermaid
sequenceDiagram
participant F as Flownode (Coordinator)
participant EF as Embedded Frontend (Lightweight)
participant DN as Datanode (Heavy Computation)
F->>F: Evaluate trigger conditions
F->>EF: Execute __aggr_state query with sequence range
EF->>DN: Send query to datanode (Heavy scan & aggregation)
DN->>DN: Scan data and compute partial aggregation state (Heavy CPU/I/O)
DN->>EF: Return aggregated state results
EF->>F: Forward state results (Lightweight merge)
F->>F: Merge with existing state
F->>F: Update sequence markers (Lightweight)
F->>EF: Compute incremental results with __aggr_merge
EF->>DN: Write incremental results to datanode
```
### Refill Implementation and State Management
#### Refill Process
Refill is implemented as a straightforward `__aggr_state` query with time and sequence constraints:
```sql
-- Refill query for flow state recovery
SELECT
__aggr_state(aggregation_functions) as state,
time_window,
group_keys
FROM source_table
WHERE
timestamp >= :refill_start_time
AND timestamp < :refill_end_time
AND __sequence >= :start_sequence
AND __sequence < :end_sequence
-- sequence range is actually written in grpc header, but shown here for clarity
GROUP BY time_window, group_keys;
```
#### State Recovery Strategy
1. **Recent Data (Stream Mode)**: For recent time windows, flownode refills state using incremental queries
2. **Historical Data (Batch Mode)**: For older time windows, flownode triggers batch computation directly, so there is no need to refill state
3. **Hybrid Approach**: Combines stream and batch processing based on data age and availability
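A sketch of this age-based mode selection, with a hypothetical `stream_horizon_ms` cutoff:
```rust
enum ProcessingMode {
    Stream, // recent windows: refill state via incremental queries
    Batch,  // old windows: recompute results directly, no state refill
}

/// Illustrative selector: stream mode for windows newer than the horizon,
/// batch mode otherwise. Names and the cutoff parameter are assumptions.
fn choose_mode(window_end_ms: i64, now_ms: i64, stream_horizon_ms: i64) -> ProcessingMode {
    if now_ms - window_end_ms <= stream_horizon_ms {
        ProcessingMode::Stream
    } else {
        ProcessingMode::Batch
    }
}
```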
#### Mirror Write Optimization
Mirror writes are simplified to only transmit timestamps to flownode:
```rust
struct MirrorWrite {
timestamps: Vec<Timestamp>,
// Removed: actual data payload
}
```
This optimization:
- Eliminates network overhead by using embedded frontend
- Enables flownode to track pending time windows efficiently
- Allows flownode to decide processing mode (stream vs batch) based on timestamp age
Another optimization is to send only the dirty time window ranges for each flow directly to the flownode, instead of sending timestamps one by one.
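A sketch of that variant; the field names are illustrative:
```rust
/// Dirty-range mirror write: one entry per flow instead of raw timestamps.
struct DirtyWindows {
    flow_id: u64,
    /// Half-open [start, end) time ranges touched by recent writes.
    ranges: Vec<(i64, i64)>,
}
```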
### Query Optimization Strategies
#### Sequence-Based Incremental Processing
The core optimization relies on sequence-constrained queries:
```sql
-- Optimized incremental query
SELECT __aggr_state(expr)
FROM table
WHERE time_range AND sequence_range
```
Benefits:
- **Reduced Scan Volume**: Only processes data since last computation
- **Efficient Resource Usage**: Minimizes CPU and I/O overhead
- **Predictable Performance**: Query cost scales with incremental data size
#### Time Window Partitioning
```mermaid
graph LR
subgraph "Time Windows"
W1["Window 1<br/>09:00-09:05"]
W2["Window 2<br/>09:05-09:10"]
W3["Window 3<br/>09:10-09:15"]
end
subgraph "Processing Strategy"
W1 --> Batch["Batch Mode<br/>(Old Data)"]
W2 --> Stream["Stream Mode<br/>(Recent Data)"]
W3 --> Stream2["Stream Mode<br/>(Current Data)"]
end
```
### Performance Characteristics
#### Memory Usage
- **Flownode**: O(active_time_windows × group_cardinality) for state storage
- **Embedded Frontend**: O(query_batch_size) for temporary computation
- **Overall**: Significantly reduced compared to current architecture
#### Computation Distribution
- **Direct Processing**: Queries processed directly within flownode's embedded frontend
- **Fault Tolerance**: Simplified error handling with fewer distributed components
- **Scalability**: Computation capacity scales with flownode instances
#### Network Optimization
- **Reduced Payload**: Mirror writes only contain timestamps
- **Efficient Queries**: Sequence constraints minimize data transfer
- **Result Caching**: State results cached in flownode memory
### Sequential Read Implementation for Incremental Queries
#### Sequence Management
Flow maintains two critical sequences to track incremental query progress for each region:
- **`memtable_last_seq`**: Tracks the latest sequence number read from the memtable
- **`sst_last_seq`**: Tracks the latest sequence number read from SST files
These sequences enable precise incremental data processing by defining the exact range of data to query in subsequent iterations.
#### Query Protocol
When executing incremental queries, flownode provides both sequence parameters to datanode:
```rust
struct GrpcHeader {
...
// Sequence tracking for incremental reads
memtable_last_seq: HashMap<RegionId, SequenceNumber>,
sst_last_seqs: HashMap<RegionId, SequenceNumber>,
}
```
The datanode processes these parameters to return only the data within the specified sequence ranges, ensuring efficient incremental processing.
#### Sequence Invalidation and Refill Mechanism
A critical challenge occurs when data referenced by `memtable_last_seq` gets flushed from memory to disk. Since SST files only maintain a single maximum sequence number for the entire file (rather than per-record sequence tracking), precise incremental queries become impossible for the affected time ranges.
**Detection of Invalidation:**
```rust
// Hypothetical sketch: SST files only record a single max sequence per file,
// so once the flush watermark passes our memtable cursor, the range the cursor
// tracked can no longer be queried incrementally and a refill is required
// for the affected time ranges.
fn needs_refill(memtable_last_seq: u64, flushed_up_to_seq: u64) -> bool {
    memtable_last_seq < flushed_up_to_seq
}
```
**Refill Process:**
1. **Identify Affected Time Range**: Query the time range corresponding to the flushed `memtable_last_seq` data
2. **Full Recomputation**: Execute a complete aggregation query for the affected time windows
3. **State Replacement**: Replace the existing flow state for these time ranges with newly computed values
4. **Sequence Update**: Update `memtable_last_seq` to the current latest sequence, while `sst_last_seq` continues normal incremental updates
```sql
-- Refill query when memtable data has been flushed
SELECT
__aggr_state(aggregation_functions) as state,
time_window,
group_keys
FROM source_table
WHERE
timestamp >= :affected_time_start
AND timestamp < :affected_time_end
-- Full scan required since sequence precision is lost in SST
GROUP BY time_window, group_keys;
```
#### Datanode Implementation Requirements
Datanode must implement enhanced query processing capabilities to support sequence-based incremental reads:
**Input Processing:**
- Accept `memtable_last_seq` and `sst_last_seq` parameters in query requests
- Filter data based on sequence ranges across both memtable and SST storage layers
**Output Enhancement:**
```rust
struct OutputMeta {
pub plan: Option<Arc<dyn ExecutionPlan>>,
pub cost: OutputCost,
pub sequence_info: HashMap<RegionId, SequenceInfo>, // New field: sequence tracking per region involved in the query
}
struct SequenceInfo {
// Sequence tracking for next iteration
max_memtable_seq: SequenceNumber, // Highest sequence from memtable in this result
max_sst_seq: SequenceNumber, // Highest sequence from SST in this result
}
```
**Sequence Tracking Logic:**
The datanode already implements `max_sst_seq` tracking for leader range reads; similar logic can be reused for `max_memtable_seq`.
#### Sequence Update Strategy
**Normal Incremental Updates:**
- Update both `memtable_last_seq` and `sst_last_seq` after successful query execution
- Use returned `max_memtable_seq` and `max_sst_seq` values for next iteration
**Refill Scenario:**
- Reset `memtable_last_seq` to current maximum after refill completion
- Continue normal `sst_last_seq` updates based on successful query responses
- Maintain separate tracking to detect future flush events
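A small sketch of the per-region cursor update under these rules (names hypothetical):
```rust
#[derive(Default)]
struct RegionCursor {
    memtable_last_seq: u64,
    sst_last_seq: u64,
}

impl RegionCursor {
    /// Normal incremental update after a successful query, using the
    /// `max_memtable_seq` / `max_sst_seq` values returned by the datanode.
    fn advance(&mut self, max_memtable_seq: u64, max_sst_seq: u64) {
        self.memtable_last_seq = self.memtable_last_seq.max(max_memtable_seq);
        self.sst_last_seq = self.sst_last_seq.max(max_sst_seq);
    }

    /// After a refill, reset the memtable cursor to the current maximum;
    /// the SST cursor keeps advancing normally.
    fn reset_after_refill(&mut self, current_max_memtable_seq: u64) {
        self.memtable_last_seq = current_max_memtable_seq;
    }
}
```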
#### Performance Considerations
**Sequence Range Optimization:**
- Minimize sequence range spans to reduce scan overhead
- Batch multiple small incremental updates when beneficial
- Balance between query frequency and processing efficiency
**Memory Management:**
- Monitor memtable flush frequency to predict refill requirements
- Implement adaptive query scheduling based on flush patterns
- Optimize state storage to handle frequent updates efficiently
This sequential read implementation ensures reliable incremental processing while gracefully handling the complexities of storage architecture, maintaining both correctness and performance in the face of background compaction and flush operations.
## Implementation Plan
### Phase 1: Core Infrastructure
1. **State Management**: Implement in-memory state map in flownode
2. **Query Interface**: Integrate the `__aggr_state` query interface in the embedded frontend (already done in previous query pushdown optimizer work)
3. **Basic Coordination**: Implement query dispatch and result collection
4. **Sequence Tracking**: Implement sequence-based incremental processing (an interface similar to the one used by leader range reads can be reused)
After phase 1, the system should support basic flow operations with incremental updates.
### Phase 2: Optimization Features
1. **Refill Logic**: Develop state recovery mechanisms
2. **Mirror Write Optimization**: Simplify mirror write protocol
### Phase 3: Advanced Features
1. **Load Balancing**: Implement intelligent resource allocation for partitioned flows (a flow executed across multiple flownodes)
2. **Fault Tolerance**: Add retry mechanisms and error handling
3. **Performance Tuning**: Optimize query batching and state management
## Benefits
### Reduced Network Communication
- **Eliminated Hops**: Direct communication between flownode and datanode through the embedded frontend
- **Reduced Latency**: No separate frontend node communication overhead
- **Simplified Network Topology**: Fewer network dependencies and failure points
## Drawbacks
### Complexity in Error Handling
- **Distributed Failures**: Need to handle failures across multiple components
- **State Consistency**: Ensuring state consistency during partial failures
- **Recovery Complexity**: More complex recovery procedures
### Datanode Resource Requirements
- **Computation Load**: Datanode handles the heavy computational workload for flow queries
- **Query Interference**: Flow queries may impact regular query performance on datanode
- **Resource Contention**: Need careful resource management and isolation on datanode
## Alternatives
### Alternative 1: Enhanced Current Architecture
Keep computation in flownode but optimize through:
- Better resource management
- Improved query optimization
- Enhanced state persistence
**Pros:**
- Simpler architecture
- Fewer network hops
- Easier debugging
**Cons:**
- Limited scalability
- Resource inefficiency
- Harder to optimize computation distribution
### Alternative 2: Embedded Computation
Embed lightweight computation engines within flownode:
**Pros:**
- Reduced network communication
- Better performance for simple queries
- Simpler deployment
**Cons:**
- Limited scalability
- Resource constraints
- Harder to leverage existing frontend optimizations
## Future Work
### Advanced Query Optimization
- **Parallel Processing**: Enable parallel execution of flow queries
- **Query Caching**: Cache frequently executed query patterns
### Enhanced State Management
- **State Compression**: Implement efficient state serialization
- **Distributed State**: Support state distribution across multiple flownodes
- **State Persistence**: Add optional state persistence for durability
### Monitoring and Observability
- **Performance Metrics**: Track query execution times and resource usage
- **State Visualization**: Provide tools for state inspection and debugging
- **Health Monitoring**: Monitor system health and performance characteristics
### Integration Improvements
- **Embedded Frontend Optimization**: Optimize embedded frontend query planning and execution
- **Datanode Optimization**: Optimize result writing from flownode
- **Metasrv Coordination**: Enhanced metadata management and coordination
## Conclusion
The laminar Flow architecture represents a significant improvement over the current flow system by separating state management from computation execution. This design enables better resource utilization, improved scalability, and simplified maintenance while maintaining the core functionality of continuous aggregation.
The key benefits include:
1. **Improved Scalability**: Computation can scale independently of state management
2. **Better Resource Utilization**: Eliminates network overhead and leverages embedded frontend infrastructure
3. **Simplified Architecture**: Clear separation of concerns between components
4. **Enhanced Performance**: Sequence-based incremental processing reduces computational overhead
While the architecture introduces some complexity in terms of distributed coordination and error handling, the benefits significantly outweigh the drawbacks, making it a compelling evolution of the flow system.

flake.lock (generated)

@@ -8,11 +8,11 @@
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
"lastModified": 1745735608,
"narHash": "sha256-L0jzm815XBFfF2wCFmR+M1CF+beIEFj6SxlqVKF59Ec=",
"lastModified": 1765252472,
"narHash": "sha256-byMt/uMi7DJ8tRniFopDFZMO3leSjGp6GS4zWOFT+uQ=",
"owner": "nix-community",
"repo": "fenix",
"rev": "c39a78eba6ed2a022cc3218db90d485077101496",
"rev": "8456b985f6652e3eef0632ee9992b439735c5544",
"type": "github"
},
"original": {
@@ -41,16 +41,16 @@
},
"nixpkgs": {
"locked": {
"lastModified": 1748162331,
"narHash": "sha256-rqc2RKYTxP3tbjA+PB3VMRQNnjesrT0pEofXQTrMsS8=",
"lastModified": 1764983851,
"narHash": "sha256-y7RPKl/jJ/KAP/VKLMghMgXTlvNIJMHKskl8/Uuar7o=",
"owner": "NixOS",
"repo": "nixpkgs",
"rev": "7c43f080a7f28b2774f3b3f43234ca11661bf334",
"rev": "d9bc5c7dceb30d8d6fafa10aeb6aa8a48c218454",
"type": "github"
},
"original": {
"owner": "NixOS",
"ref": "nixos-25.05",
"ref": "nixos-25.11",
"repo": "nixpkgs",
"type": "github"
}
@@ -65,11 +65,11 @@
"rust-analyzer-src": {
"flake": false,
"locked": {
"lastModified": 1745694049,
"narHash": "sha256-fxvRYH/tS7hGQeg9zCVh5RBcSWT+JGJet7RA8Ss+rC0=",
"lastModified": 1765120009,
"narHash": "sha256-nG76b87rkaDzibWbnB5bYDm6a52b78A+fpm+03pqYIw=",
"owner": "rust-lang",
"repo": "rust-analyzer",
"rev": "d8887c0758bbd2d5f752d5bd405d4491e90e7ed6",
"rev": "5e3e9c4e61bba8a5e72134b9ffefbef8f531d008",
"type": "github"
},
"original": {


@@ -2,7 +2,7 @@
description = "Development environment flake";
inputs = {
nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
fenix = {
url = "github:nix-community/fenix";
inputs.nixpkgs.follows = "nixpkgs";
@@ -15,13 +15,11 @@
let
pkgs = nixpkgs.legacyPackages.${system};
buildInputs = with pkgs; [
libgit2
libz
];
lib = nixpkgs.lib;
rustToolchain = fenix.packages.${system}.fromToolchainName {
name = (lib.importTOML ./rust-toolchain.toml).toolchain.channel;
sha256 = "sha256-tJJr8oqX3YD+ohhPK7jlt/7kvKBnBqJVjYtoFr520d4=";
sha256 = "sha256-GCGEXGZeJySLND0KU5TdtTrqFV76TF3UdvAHSUegSsk=";
};
in
{
@@ -50,7 +48,7 @@
gnuplot ## for cargo bench
];
LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath buildInputs;
buildInputs = buildInputs;
NIX_HARDENING_ENABLE = "";
};
});


@@ -9,7 +9,7 @@ We highly recommend using the self-monitoring feature provided by [GreptimeDB Op
- **Metrics Dashboards**
- `dashboards/metrics/cluster/dashboard.json`: The Grafana dashboard for the GreptimeDB cluster. Read the [dashboard.md](./dashboards/metrics/cluster/dashboard.md) for more details.
- `dashboards/metrics/standalone/dashboard.json`: The Grafana dashboard for the standalone GreptimeDB instance. **It's generated from the `cluster/dashboard.json` by removing the instance filter through the `make dashboards` command**. Read the [dashboard.md](./dashboards/metrics/standalone/dashboard.md) for more details.
- **Logs Dashboard**
@@ -83,7 +83,7 @@ If you use the [Helm Chart](https://github.com/GreptimeTeam/helm-charts) to depl
- `monitoring.enabled=true`: Deploys a standalone GreptimeDB instance dedicated to monitoring the cluster;
- `grafana.enabled=true`: Deploys Grafana and automatically imports the monitoring dashboard;
The standalone GreptimeDB instance will collect metrics from your cluster, and the dashboard will be available in the Grafana UI. For detailed deployment instructions, please refer to our [Kubernetes deployment guide](https://docs.greptime.com/nightly/user-guide/deployments/deploy-on-kubernetes/getting-started).
The standalone GreptimeDB instance will collect metrics from your cluster, and the dashboard will be available in the Grafana UI. For detailed deployment instructions, please refer to our [Kubernetes deployment guide](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/overview).
### Self-host Prometheus and import dashboards manually

File diff suppressed because it is too large


@@ -21,14 +21,14 @@
# Resources
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$frontend"}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$metasrv"}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$flownode"}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$datanode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$datanode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$datanode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$frontend"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$frontend"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$frontend"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$metasrv"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$metasrv"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$metasrv"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{instance=~"$flownode"})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{instance=~"$flownode"}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{instance=~"$flownode"})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
# Frontend Requests
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -70,21 +70,30 @@
| Inflight Flush | `greptime_mito_inflight_flush_count` | `timeseries` | Ongoing flush task count | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]` |
| Compaction Input/Output Bytes | `sum by(instance, pod) (greptime_mito_compaction_input_bytes)`<br/>`sum by(instance, pod) (greptime_mito_compaction_output_bytes)` | `timeseries` | Compaction input/output bytes | `prometheus` | `bytes` | `[{{instance}}]-[{{pod}}]-input` |
| Region Worker Handle Bulk Insert Requests | `histogram_quantile(0.95, sum by(le,instance, stage, pod) (rate(greptime_region_worker_handle_write_bucket[$__rate_interval])))`<br/>`sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_sum[$__rate_interval]))/sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to handle bulk insert region requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Active Series and Field Builders Count | `sum by(instance, pod) (greptime_mito_memtable_active_series_count)`<br/>`sum by(instance, pod) (greptime_mito_memtable_field_builder_count)` | `timeseries` | Active series and field builder counts in memtables | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]-series` |
| Region Worker Convert Requests | `histogram_quantile(0.95, sum by(le, instance, stage, pod) (rate(greptime_datanode_convert_region_request_bucket[$__rate_interval])))`<br/>`sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_sum[$__rate_interval]))/sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to decode requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Cache Miss | `sum by (instance,pod, type) (rate(greptime_mito_cache_miss{instance=~"$datanode"}[$__rate_interval]))` | `timeseries` | The local cache miss of the datanode. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{type}}]` |
# OpenDAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode"}[$__rate_interval]))` | `timeseries` | QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Read QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="read"}[$__rate_interval]))` | `timeseries` | Read QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Read P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode",operation="read"}[$__rate_interval])))` | `timeseries` | Read P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-{{scheme}}` |
| Write QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="write"}[$__rate_interval]))` | `timeseries` | Write QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-{{scheme}}` |
| Write P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation="write"}[$__rate_interval])))` | `timeseries` | Write P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Read QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation=~"read\|Reader::read"}[$__rate_interval]))` | `timeseries` | Read QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Read P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode",operation=~"read\|Reader::read"}[$__rate_interval])))` | `timeseries` | Read P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Write QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation=~"write\|Writer::write\|Writer::close"}[$__rate_interval]))` | `timeseries` | Write QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Write P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation =~ "Writer::write\|Writer::close\|write"}[$__rate_interval])))` | `timeseries` | Write P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| List QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="list"}[$__rate_interval]))` | `timeseries` | List QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| List P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation="list"}[$__rate_interval])))` | `timeseries` | List P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Other Requests per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode",operation!~"read\|write\|list\|stat"}[$__rate_interval]))` | `timeseries` | Other Requests per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read\|write\|list"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read\|write\|list\|Writer::write\|Writer::close\|Reader::read"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Opendal traffic | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_bytes_sum{instance=~"$datanode"}[$__rate_interval]))` | `timeseries` | Total traffic as in bytes by instance and operation | `prometheus` | `decbytes` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| OpenDAL errors per Instance | `sum by(instance, pod, scheme, operation, error) (rate(opendal_operation_errors_total{instance=~"$datanode", error!="NotFound"}[$__rate_interval]))` | `timeseries` | OpenDAL error counts per Instance. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]` |
# Remote WAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Triggered region flush total | `meta_triggered_region_flush_total` | `timeseries` | Triggered region flush total | `prometheus` | `none` | `{{pod}}-{{topic_name}}` |
| Triggered region checkpoint total | `meta_triggered_region_checkpoint_total` | `timeseries` | Triggered region checkpoint total | `prometheus` | `none` | `{{pod}}-{{topic_name}}` |
| Topic estimated replay size | `meta_topic_estimated_replay_size` | `timeseries` | Topic estimated max replay size | `prometheus` | `bytes` | `{{pod}}-{{topic_name}}` |
| Kafka logstore's bytes traffic | `rate(greptime_logstore_kafka_client_bytes_total[$__rate_interval])` | `timeseries` | Kafka logstore's bytes traffic | `prometheus` | `bytes` | `{{pod}}-{{logstore}}` |
# Metasrv
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -101,6 +110,8 @@
| Meta KV Ops Latency | `histogram_quantile(0.99, sum by(pod, le, op, target) (greptime_meta_kv_request_elapsed_bucket))` | `timeseries` | P99 latency of metasrv KV operations, by operation and target. | `prometheus` | `s` | `{{pod}}-{{op}} p99` |
| Rate of meta KV Ops | `rate(greptime_meta_kv_request_elapsed_count[$__rate_interval])` | `timeseries` | Rate of metasrv KV operations. | `prometheus` | `none` | `{{pod}}-{{op}} p99` |
| DDL Latency | `histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_tables_bucket))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_table))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_view))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_flow))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_drop_table))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_alter_table))` | `timeseries` | P90 latency of DDL procedures, by procedure and step. | `prometheus` | `s` | `CreateLogicalTables-{{step}} p90` |
| Reconciliation stats | `greptime_meta_reconciliation_stats` | `timeseries` | Reconciliation stats | `prometheus` | `s` | `{{pod}}-{{table_type}}-{{type}}` |
| Reconciliation steps | `histogram_quantile(0.9, greptime_meta_reconciliation_procedure_bucket)` | `timeseries` | Elapsed of Reconciliation steps | `prometheus` | `s` | `{{procedure_name}}-{{step}}-P90` |
# Flownode
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |


@@ -180,13 +180,18 @@ groups:
- title: Datanode Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{instance=~"$datanode"}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{instance=~"$datanode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Datanode CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -197,16 +202,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$datanode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{instance=~"$frontend"}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{instance=~"$frontend"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -217,16 +232,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-cpu'
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$frontend"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Metasrv Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{instance=~"$metasrv"}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-resident'
- expr: max(greptime_memory_limit_in_bytes{instance=~"$metasrv"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Metasrv CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -237,16 +262,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$metasrv"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Flownode Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{instance=~"$flownode"}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{instance=~"$flownode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Flownode CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -257,6 +292,11 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{instance=~"$flownode"})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend Requests
panels:
- title: HTTP QPS per Instance
@@ -612,6 +652,21 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{stage}}]-AVG'
- title: Active Series and Field Builders Count
type: timeseries
description: Active series and field builder counts in memtables
unit: none
queries:
- expr: sum by(instance, pod) (greptime_mito_memtable_active_series_count)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-series'
- expr: sum by(instance, pod) (greptime_mito_memtable_field_builder_count)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-field_builders'
- title: Region Worker Convert Requests
type: timeseries
description: Per-stage elapsed time for region worker to decode requests.
@@ -627,6 +682,15 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{stage}}]-AVG'
- title: Cache Miss
type: timeseries
description: Local cache misses of the datanode.
queries:
- expr: sum by (instance,pod, type) (rate(greptime_mito_cache_miss{instance=~"$datanode"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{type}}]'
- title: OpenDAL
panels:
- title: QPS per Instance
@@ -644,41 +708,41 @@ groups:
description: Read QPS per Instance.
unit: ops
queries:
- expr: sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="read"}[$__rate_interval]))
- expr: sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation=~"read|Reader::read"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Read P99 per Instance
type: timeseries
description: Read P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode",operation="read"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode",operation=~"read|Reader::read"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-{{scheme}}'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Write QPS per Instance
type: timeseries
description: Write QPS per Instance.
unit: ops
queries:
- expr: sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation="write"}[$__rate_interval]))
- expr: sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{instance=~"$datanode", operation=~"write|Writer::write|Writer::close"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-{{scheme}}'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Write P99 per Instance
type: timeseries
description: Write P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation="write"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation =~ "Writer::write|Writer::close|write"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: List QPS per Instance
type: timeseries
description: List QPS per Instance.
@@ -714,7 +778,7 @@ groups:
description: Other Request P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read|write|list"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{instance=~"$datanode", operation!~"read|write|list|Writer::write|Writer::close|Reader::read"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
@@ -738,6 +802,48 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]'
- title: Remote WAL
panels:
- title: Triggered region flush total
type: timeseries
description: Triggered region flush total
unit: none
queries:
- expr: meta_triggered_region_flush_total
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Triggered region checkpoint total
type: timeseries
description: Triggered region checkpoint total
unit: none
queries:
- expr: meta_triggered_region_checkpoint_total
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Topic estimated replay size
type: timeseries
description: Topic estimated max replay size
unit: bytes
queries:
- expr: meta_topic_estimated_replay_size
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Kafka logstore's bytes traffic
type: timeseries
description: Kafka logstore's bytes traffic
unit: bytes
queries:
- expr: rate(greptime_logstore_kafka_client_bytes_total[$__rate_interval])
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{logstore}}'
- title: Metasrv
panels:
- title: Region migration datanode
@@ -884,6 +990,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: AlterTable-{{step}} p90
- title: Reconciliation stats
type: timeseries
description: Reconciliation stats
unit: s
queries:
- expr: greptime_meta_reconciliation_stats
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{table_type}}-{{type}}'
- title: Reconciliation steps
type: timeseries
description: Elapsed time of reconciliation steps
unit: s
queries:
- expr: histogram_quantile(0.9, greptime_meta_reconciliation_procedure_bucket)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{procedure_name}}-{{step}}-P90'
- title: Flownode
panels:
- title: Flow Ingest / Output Rate

File diff suppressed because it is too large

View File

@@ -21,14 +21,14 @@
# Resources
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)` | `timeseries` | Current memory usage by instance | `prometheus` | `decbytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Datanode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{instance}}]-[{{ pod }}]` |
| Datanode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Frontend CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]-cpu` |
| Metasrv Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]-resident` |
| Metasrv CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode Memory per Instance | `sum(process_resident_memory_bytes{}) by (instance, pod)`<br/>`max(greptime_memory_limit_in_bytes{})` | `timeseries` | Current memory usage by instance | `prometheus` | `bytes` | `[{{ instance }}]-[{{ pod }}]` |
| Flownode CPU Usage per Instance | `sum(rate(process_cpu_seconds_total{}[$__rate_interval]) * 1000) by (instance, pod)`<br/>`max(greptime_cpu_limit_in_millicores{})` | `timeseries` | Current cpu usage by instance | `prometheus` | `none` | `[{{ instance }}]-[{{ pod }}]` |
# Frontend Requests
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -70,21 +70,30 @@
| Inflight Flush | `greptime_mito_inflight_flush_count` | `timeseries` | Ongoing flush task count | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]` |
| Compaction Input/Output Bytes | `sum by(instance, pod) (greptime_mito_compaction_input_bytes)`<br/>`sum by(instance, pod) (greptime_mito_compaction_output_bytes)` | `timeseries` | Compaction input/output bytes | `prometheus` | `bytes` | `[{{instance}}]-[{{pod}}]-input` |
| Region Worker Handle Bulk Insert Requests | `histogram_quantile(0.95, sum by(le,instance, stage, pod) (rate(greptime_region_worker_handle_write_bucket[$__rate_interval])))`<br/>`sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_sum[$__rate_interval]))/sum by(instance, stage, pod) (rate(greptime_region_worker_handle_write_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to handle bulk insert region requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Active Series and Field Builders Count | `sum by(instance, pod) (greptime_mito_memtable_active_series_count)`<br/>`sum by(instance, pod) (greptime_mito_memtable_field_builder_count)` | `timeseries` | Active series and field builder counts in memtables | `prometheus` | `none` | `[{{instance}}]-[{{pod}}]-series` |
| Region Worker Convert Requests | `histogram_quantile(0.95, sum by(le, instance, stage, pod) (rate(greptime_datanode_convert_region_request_bucket[$__rate_interval])))`<br/>`sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_sum[$__rate_interval]))/sum by(le,instance, stage, pod) (rate(greptime_datanode_convert_region_request_count[$__rate_interval]))` | `timeseries` | Per-stage elapsed time for region worker to decode requests. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{stage}}]-P95` |
| Cache Miss | `sum by (instance,pod, type) (rate(greptime_mito_cache_miss{}[$__rate_interval]))` | `timeseries` | Local cache misses of the datanode. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{type}}]` |
# OpenDAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{}[$__rate_interval]))` | `timeseries` | QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Read QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="read"}[$__rate_interval]))` | `timeseries` | Read QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Read P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{operation="read"}[$__rate_interval])))` | `timeseries` | Read P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-{{scheme}}` |
| Write QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="write"}[$__rate_interval]))` | `timeseries` | Write QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-{{scheme}}` |
| Write P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{ operation="write"}[$__rate_interval])))` | `timeseries` | Write P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Read QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~"read\|Reader::read"}[$__rate_interval]))` | `timeseries` | Read QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Read P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{operation=~"read\|Reader::read"}[$__rate_interval])))` | `timeseries` | Read P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Write QPS per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~"write\|Writer::write\|Writer::close"}[$__rate_interval]))` | `timeseries` | Write QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Write P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation =~ "Writer::write\|Writer::close\|write"}[$__rate_interval])))` | `timeseries` | Write P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| List QPS per Instance | `sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="list"}[$__rate_interval]))` | `timeseries` | List QPS per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| List P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{ operation="list"}[$__rate_interval])))` | `timeseries` | List P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]` |
| Other Requests per Instance | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{operation!~"read\|write\|list\|stat"}[$__rate_interval]))` | `timeseries` | Other Requests per Instance. | `prometheus` | `ops` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read\|write\|list"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Other Request P99 per Instance | `histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read\|write\|list\|Writer::write\|Writer::close\|Reader::read"}[$__rate_interval])))` | `timeseries` | Other Request P99 per Instance. | `prometheus` | `s` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| Opendal traffic | `sum by(instance, pod, scheme, operation) (rate(opendal_operation_bytes_sum{}[$__rate_interval]))` | `timeseries` | Total traffic in bytes by instance and operation | `prometheus` | `decbytes` | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]` |
| OpenDAL errors per Instance | `sum by(instance, pod, scheme, operation, error) (rate(opendal_operation_errors_total{ error!="NotFound"}[$__rate_interval]))` | `timeseries` | OpenDAL error counts per Instance. | `prometheus` | -- | `[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]` |
# Remote WAL
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
| Triggered region flush total | `meta_triggered_region_flush_total` | `timeseries` | Triggered region flush total | `prometheus` | `none` | `{{pod}}-{{topic_name}}` |
| Triggered region checkpoint total | `meta_triggered_region_checkpoint_total` | `timeseries` | Triggered region checkpoint total | `prometheus` | `none` | `{{pod}}-{{topic_name}}` |
| Topic estimated replay size | `meta_topic_estimated_replay_size` | `timeseries` | Topic estimated max replay size | `prometheus` | `bytes` | `{{pod}}-{{topic_name}}` |
| Kafka logstore's bytes traffic | `rate(greptime_logstore_kafka_client_bytes_total[$__rate_interval])` | `timeseries` | Kafka logstore's bytes traffic | `prometheus` | `bytes` | `{{pod}}-{{logstore}}` |
# Metasrv
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |
@@ -101,6 +110,8 @@
| Meta KV Ops Latency | `histogram_quantile(0.99, sum by(pod, le, op, target) (greptime_meta_kv_request_elapsed_bucket))` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `s` | `{{pod}}-{{op}} p99` |
| Rate of meta KV Ops | `rate(greptime_meta_kv_request_elapsed_count[$__rate_interval])` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `none` | `{{pod}}-{{op}} p99` |
| DDL Latency | `histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_tables_bucket))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_table))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_view))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_create_flow))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_drop_table))`<br/>`histogram_quantile(0.9, sum by(le, pod, step) (greptime_meta_procedure_alter_table))` | `timeseries` | Gauge of load information of each datanode, collected via heartbeat between datanode and metasrv. This information is for metasrv to schedule workloads. | `prometheus` | `s` | `CreateLogicalTables-{{step}} p90` |
| Reconciliation stats | `greptime_meta_reconciliation_stats` | `timeseries` | Reconciliation stats | `prometheus` | `s` | `{{pod}}-{{table_type}}-{{type}}` |
| Reconciliation steps | `histogram_quantile(0.9, greptime_meta_reconciliation_procedure_bucket)` | `timeseries` | Elapsed time of reconciliation steps | `prometheus` | `s` | `{{procedure_name}}-{{step}}-P90` |
# Flownode
| Title | Query | Type | Description | Datasource | Unit | Legend Format |
| --- | --- | --- | --- | --- | --- | --- |

View File

@@ -180,13 +180,18 @@ groups:
- title: Datanode Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Datanode CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -197,16 +202,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -217,16 +232,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-cpu'
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Metasrv Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]-resident'
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Metasrv CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -237,16 +262,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Flownode Memory per Instance
type: timeseries
description: Current memory usage by instance
unit: decbytes
unit: bytes
queries:
- expr: sum(process_resident_memory_bytes{}) by (instance, pod)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_memory_limit_in_bytes{})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Flownode CPU Usage per Instance
type: timeseries
description: Current cpu usage by instance
@@ -257,6 +292,11 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{ instance }}]-[{{ pod }}]'
- expr: max(greptime_cpu_limit_in_millicores{})
datasource:
type: prometheus
uid: ${metrics}
legendFormat: limit
- title: Frontend Requests
panels:
- title: HTTP QPS per Instance
@@ -612,6 +652,21 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{stage}}]-AVG'
- title: Active Series and Field Builders Count
type: timeseries
description: Active series and field builder counts in memtables
unit: none
queries:
- expr: sum by(instance, pod) (greptime_mito_memtable_active_series_count)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-series'
- expr: sum by(instance, pod) (greptime_mito_memtable_field_builder_count)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-field_builders'
- title: Region Worker Convert Requests
type: timeseries
description: Per-stage elapsed time for region worker to decode requests.
@@ -627,6 +682,15 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{stage}}]-AVG'
- title: Cache Miss
type: timeseries
description: Local cache misses of the datanode.
queries:
- expr: sum by (instance,pod, type) (rate(greptime_mito_cache_miss{}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{type}}]'
- title: OpenDAL
panels:
- title: QPS per Instance
@@ -644,41 +708,41 @@ groups:
description: Read QPS per Instance.
unit: ops
queries:
- expr: sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="read"}[$__rate_interval]))
- expr: sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~"read|Reader::read"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Read P99 per Instance
type: timeseries
description: Read P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{operation="read"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{operation=~"read|Reader::read"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-{{scheme}}'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Write QPS per Instance
type: timeseries
description: Write QPS per Instance.
unit: ops
queries:
- expr: sum by(instance, pod, scheme) (rate(opendal_operation_duration_seconds_count{ operation="write"}[$__rate_interval]))
- expr: sum by(instance, pod, scheme, operation) (rate(opendal_operation_duration_seconds_count{ operation=~"write|Writer::write|Writer::close"}[$__rate_interval]))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-{{scheme}}'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: Write P99 per Instance
type: timeseries
description: Write P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme) (rate(opendal_operation_duration_seconds_bucket{ operation="write"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation =~ "Writer::write|Writer::close|write"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]'
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]'
- title: List QPS per Instance
type: timeseries
description: List QPS per Instance.
@@ -714,7 +778,7 @@ groups:
description: Other Request P99 per Instance.
unit: s
queries:
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read|write|list"}[$__rate_interval])))
- expr: histogram_quantile(0.99, sum by(instance, pod, le, scheme, operation) (rate(opendal_operation_duration_seconds_bucket{ operation!~"read|write|list|Writer::write|Writer::close|Reader::read"}[$__rate_interval])))
datasource:
type: prometheus
uid: ${metrics}
@@ -738,6 +802,48 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: '[{{instance}}]-[{{pod}}]-[{{scheme}}]-[{{operation}}]-[{{error}}]'
- title: Remote WAL
panels:
- title: Triggered region flush total
type: timeseries
description: Triggered region flush total
unit: none
queries:
- expr: meta_triggered_region_flush_total
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Triggered region checkpoint total
type: timeseries
description: Triggered region checkpoint total
unit: none
queries:
- expr: meta_triggered_region_checkpoint_total
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Topic estimated replay size
type: timeseries
description: Topic estimated max replay size
unit: bytes
queries:
- expr: meta_topic_estimated_replay_size
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{topic_name}}'
- title: Kafka logstore's bytes traffic
type: timeseries
description: Kafka logstore's bytes traffic
unit: bytes
queries:
- expr: rate(greptime_logstore_kafka_client_bytes_total[$__rate_interval])
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{logstore}}'
- title: Metasrv
panels:
- title: Region migration datanode
@@ -884,6 +990,26 @@ groups:
type: prometheus
uid: ${metrics}
legendFormat: AlterTable-{{step}} p90
- title: Reconciliation stats
type: timeseries
description: Reconciliation stats
unit: s
queries:
- expr: greptime_meta_reconciliation_stats
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{pod}}-{{table_type}}-{{type}}'
- title: Reconciliation steps
type: timeseries
description: Elapsed time of reconciliation steps
unit: s
queries:
- expr: histogram_quantile(0.9, greptime_meta_reconciliation_procedure_bucket)
datasource:
type: prometheus
uid: ${metrics}
legendFormat: '{{procedure_name}}-{{step}}-P90'
- title: Flownode
panels:
- title: Flow Ingest / Output Rate

View File

@@ -26,7 +26,7 @@ check_dashboards_generation() {
./grafana/scripts/gen-dashboards.sh
if [[ -n "$(git diff --name-only grafana/dashboards/metrics)" ]]; then
echo "Error: The dashboards are not generated correctly. You should execute the `make dashboards` command."
echo "Error: The dashboards are not generated correctly. You should execute the 'make dashboards' command."
exit 1
fi
}

View File

@@ -29,10 +29,14 @@ excludes = [
# enterprise
"src/common/meta/src/rpc/ddl/trigger.rs",
"src/operator/src/expr_helper/trigger.rs",
"src/sql/src/statements/alter/trigger.rs",
"src/sql/src/statements/create/trigger.rs",
"src/sql/src/statements/show/trigger.rs",
"src/sql/src/statements/drop/trigger.rs",
"src/sql/src/parsers/alter_parser/trigger.rs",
"src/sql/src/parsers/create_parser/trigger.rs",
"src/sql/src/parsers/show_parser/trigger.rs",
"src/mito2/src/extension.rs",
]
[properties]

View File

@@ -1,2 +1,2 @@
[toolchain]
channel = "nightly-2025-05-19"
channel = "nightly-2025-10-01"

View File

@@ -13,8 +13,8 @@
# limitations under the License.
import os
import re
from multiprocessing import Pool
from pathlib import Path
def find_rust_files(directory):
@@ -24,6 +24,10 @@ def find_rust_files(directory):
if "test" in root.lower():
continue
# Skip the target directory
if "target" in Path(root).parts:
continue
for file in files:
# Skip files with "test" in the filename
if "test" in file.lower():

scripts/fix-udeps.py Executable file
View File

@@ -0,0 +1,265 @@
# Copyright 2023 Greptime Team
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
import json
import os
import re
import sys
def load_udeps_report(report_path):
try:
with open(report_path, "r") as f:
return json.load(f)
except FileNotFoundError:
print(f"Error: Report file '{report_path}' not found.")
return None
except json.JSONDecodeError as e:
print(f"Error: Invalid JSON in report file: {e}")
return None
def extract_unused_dependencies(report):
"""
Extract and organize unused dependencies from the cargo-udeps JSON report.
The cargo-udeps report has this structure:
{
"unused_deps": {
"package_name v0.1.0 (/path/to/package)": {
"normal": ["dep1", "dep2"],
"development": ["dev_dep1"],
"build": ["build_dep1"],
"manifest_path": "/path/to/Cargo.toml"
}
}
}
Args:
report (dict): The parsed JSON report from cargo-udeps
Returns:
dict: Organized unused dependencies by package name:
{
"package_name": {
"dependencies": [("dep1", "normal"), ("dev_dep1", "dev")],
"manifest_path": "/path/to/Cargo.toml"
}
}
"""
if not report or "unused_deps" not in report:
return {}
unused_deps = {}
for package_full_name, deps_info in report["unused_deps"].items():
package_name = package_full_name.split(" ")[0]
all_unused = []
if deps_info.get("normal"):
all_unused.extend([(dep, "normal") for dep in deps_info["normal"]])
if deps_info.get("development"):
all_unused.extend([(dep, "dev") for dep in deps_info["development"]])
if deps_info.get("build"):
all_unused.extend([(dep, "build") for dep in deps_info["build"]])
if all_unused:
unused_deps[package_name] = {
"dependencies": all_unused,
"manifest_path": deps_info.get("manifest_path", "unknown"),
}
return unused_deps
def get_section_pattern(dep_type):
"""
Get regex patterns to identify different dependency sections in Cargo.toml.
Args:
dep_type (str): Type of dependency ("normal", "dev", or "build")
Returns:
list: List of regex patterns to match the appropriate section headers
"""
patterns = {
"normal": [r"\[dependencies\]", r"\[dependencies\..*?\]"],
"dev": [r"\[dev-dependencies\]", r"\[dev-dependencies\..*?\]"],
"build": [r"\[build-dependencies\]", r"\[build-dependencies\..*?\]"],
}
return patterns.get(dep_type, [])
def remove_dependency_line(content, dep_name, section_start, section_end):
"""
Remove a dependency line from a specific section of a Cargo.toml file.
Args:
content (str): The entire content of the Cargo.toml file
dep_name (str): Name of the dependency to remove (e.g., "serde", "tokio")
section_start (int): Starting position of the section in the content
section_end (int): Ending position of the section in the content
Returns:
tuple: (new_content, removed) where:
- new_content (str): The modified content with dependency removed
- removed (bool): True if dependency was found and removed, False otherwise
Example input content format:
content = '''
[package]
name = "my-crate"
version = "0.1.0"
[dependencies]
serde = "1.0"
tokio = { version = "1.0", features = ["full"] }
serde_json.workspace = true
[dev-dependencies]
tempfile = "3.0"
'''
# If dep_name = "serde", section_start = start of [dependencies],
# section_end = start of [dev-dependencies], this function will:
# 1. Extract the section: "serde = "1.0"\ntokio = { version = "1.0", features = ["full"] }\nserde_json.workspace = true\n"
# 2. Find and remove the line: "serde = "1.0""
# 3. Return the modified content with that line removed
"""
section_content = content[section_start:section_end]
dep_patterns = [
rf"^{re.escape(dep_name)}\s*=.*$", # e.g., "serde = "1.0""
rf"^{re.escape(dep_name)}\.workspace\s*=.*$", # e.g., "serde_json.workspace = true"
]
for pattern in dep_patterns:
match = re.search(pattern, section_content, re.MULTILINE)
if match:
line_start = section_start + match.start() # Start of the matched line
line_end = section_start + match.end() # End of the matched line
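# Extend past the trailing newline so the removal leaves no blank line behind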
if line_end < len(content) and content[line_end] == "\n":
line_end += 1
return content[:line_start] + content[line_end:], True
return content, False
def remove_dependency_from_toml(file_path, dep_name, dep_type):
"""
Remove a specific dependency from a Cargo.toml file.
Args:
file_path (str): Path to the Cargo.toml file
dep_name (str): Name of the dependency to remove
dep_type (str): Type of dependency ("normal", "dev", or "build")
Returns:
bool: True if dependency was successfully removed, False otherwise
"""
try:
with open(file_path, "r") as f:
content = f.read()
section_patterns = get_section_pattern(dep_type)
if not section_patterns:
return False
for pattern in section_patterns:
section_match = re.search(pattern, content, re.IGNORECASE)
if not section_match:
continue
section_start = section_match.end()
next_section = re.search(r"\n\s*\[", content[section_start:])
section_end = (
section_start + next_section.start() if next_section else len(content)
)
new_content, removed = remove_dependency_line(
content, dep_name, section_start, section_end
)
if removed:
with open(file_path, "w") as f:
f.write(new_content)
return True
return False
except Exception as e:
print(f"Error processing {file_path}: {e}")
return False
def process_unused_dependencies(unused_deps):
"""
Process and remove all unused dependencies from their respective Cargo.toml files.
Args:
unused_deps (dict): Dictionary of unused dependencies organized by package:
{
"package_name": {
"dependencies": [("dep1", "normal"), ("dev_dep1", "dev")],
"manifest_path": "/path/to/Cargo.toml"
}
}
"""
if not unused_deps:
print("No unused dependencies found.")
return
total_removed = 0
total_failed = 0
for package, info in unused_deps.items():
deps = info["dependencies"]
manifest_path = info["manifest_path"]
if not os.path.exists(manifest_path):
print(f"Manifest file not found: {manifest_path}")
total_failed += len(deps)
continue
for dep, dep_type in deps:
if remove_dependency_from_toml(manifest_path, dep, dep_type):
print(f"Removed {dep} from {package}")
total_removed += 1
else:
print(f"Failed to remove {dep} from {package}")
total_failed += 1
print(f"Removed {total_removed} dependencies")
if total_failed > 0:
print(f"Failed to remove {total_failed} dependencies")
def main():
if len(sys.argv) > 1:
report_path = sys.argv[1]
else:
report_path = "udeps-report.json"
report = load_udeps_report(report_path)
if report is None:
sys.exit(1)
unused_deps = extract_unused_dependencies(report)
process_unused_dependencies(unused_deps)
if __name__ == "__main__":
main()

View File

@@ -0,0 +1,71 @@
#!/bin/bash
# Generate TLS certificates for etcd testing
# This script creates certificates for TLS-enabled etcd in testing environments
set -euo pipefail
CERT_DIR="${1:-$(dirname "$0")/../tests-integration/fixtures/etcd-tls-certs}"
DAYS="${2:-365}"
echo "Generating TLS certificates for etcd in ${CERT_DIR}..."
mkdir -p "${CERT_DIR}"
cd "${CERT_DIR}"
echo "Generating CA private key..."
openssl genrsa -out ca-key.pem 2048
echo "Generating CA certificate..."
openssl req -new -x509 -key ca-key.pem -out ca.crt -days "${DAYS}" \
-subj "/C=US/ST=CA/L=SF/O=Greptime/CN=etcd-ca"
# Create server certificate config with Subject Alternative Names
echo "Creating server certificate configuration..."
cat > server.conf << 'EOF'
[req]
distinguished_name = req
[v3_req]
basicConstraints = CA:FALSE
keyUsage = keyEncipherment, dataEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = localhost
DNS.2 = etcd-tls
DNS.3 = 127.0.0.1
IP.1 = 127.0.0.1
IP.2 = ::1
EOF
echo "Generating server private key..."
openssl genrsa -out server-key.pem 2048
echo "Generating server certificate signing request..."
openssl req -new -key server-key.pem -out server.csr \
-subj "/CN=etcd-tls"
echo "Generating server certificate..."
openssl x509 -req -in server.csr -CA ca.crt \
-CAkey ca-key.pem -CAcreateserial -out server.crt \
-days "${DAYS}" -extensions v3_req -extfile server.conf
echo "Generating client private key..."
openssl genrsa -out client-key.pem 2048
echo "Generating client certificate signing request..."
openssl req -new -key client-key.pem -out client.csr \
-subj "/CN=etcd-client"
echo "Generating client certificate..."
openssl x509 -req -in client.csr -CA ca.crt \
-CAkey ca-key.pem -CAcreateserial -out client.crt \
-days "${DAYS}"
echo "Setting proper file permissions..."
chmod 644 ca.crt server.crt client.crt
chmod 600 ca-key.pem server-key.pem client-key.pem
# Clean up intermediate files
rm -f server.csr client.csr server.conf
echo "TLS certificates generated successfully in ${CERT_DIR}"

scripts/generate_certs.sh Executable file
View File

@@ -0,0 +1,41 @@
#!/usr/bin/env bash
set -euo pipefail
CERT_DIR="${1:-$(dirname "$0")/../tests-integration/fixtures/certs}"
DAYS="${2:-365}"
mkdir -p "${CERT_DIR}"
cd "${CERT_DIR}"
echo "Generating CA certificate..."
openssl req -new -x509 -days "${DAYS}" -nodes -text \
-out root.crt -keyout root.key \
-subj "/CN=GreptimeDBRootCA"
echo "Generating server certificate..."
openssl req -new -nodes -text \
-out server.csr -keyout server.key \
-subj "/CN=greptime"
openssl x509 -req -in server.csr -text -days "${DAYS}" \
-CA root.crt -CAkey root.key -CAcreateserial \
-out server.crt \
-extensions v3_req -extfile <(printf "[v3_req]\nsubjectAltName=DNS:localhost,IP:127.0.0.1")
echo "Generating client certificate..."
# Make sure the client certificate is for the greptimedb user
openssl req -new -nodes -text \
-out client.csr -keyout client.key \
-subj "/CN=greptimedb"
openssl x509 -req -in client.csr -CA root.crt -CAkey root.key -CAcreateserial \
-out client.crt -days 365 -extensions v3_req -extfile <(printf "[v3_req]\nsubjectAltName=DNS:localhost")
rm -f *.csr
echo "TLS certificates generated successfully in ${CERT_DIR}"
chmod 644 root.key
chmod 644 client.key
chmod 644 server.key

scripts/install.sh Executable file → Normal file
View File

@@ -53,6 +53,54 @@ get_arch_type() {
esac
}
# Verify SHA256 checksum
verify_sha256() {
file="$1"
expected_sha256="$2"
if command -v sha256sum >/dev/null 2>&1; then
actual_sha256=$(sha256sum "$file" | cut -d' ' -f1)
elif command -v shasum >/dev/null 2>&1; then
actual_sha256=$(shasum -a 256 "$file" | cut -d' ' -f1)
else
echo "Warning: No SHA256 verification tool found (sha256sum or shasum). Skipping checksum verification."
return 0
fi
if [ "$actual_sha256" = "$expected_sha256" ]; then
echo "SHA256 checksum verified successfully."
return 0
else
echo "Error: SHA256 checksum verification failed!"
echo "Expected: $expected_sha256"
echo "Actual: $actual_sha256"
return 1
fi
}
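For comparison, the same hash-then-compare logic as a minimal Rust sketch (an illustration only; it assumes the `sha2` and `hex` crates, while the installer itself stays POSIX shell):
use sha2::{Digest, Sha256};
// Compute the file's SHA256 and compare it to the expected lowercase hex digest.
fn verify_sha256(path: &str, expected: &str) -> std::io::Result<bool> {
    let bytes = std::fs::read(path)?;
    let actual = hex::encode(Sha256::digest(&bytes));
    Ok(actual == expected.trim().to_lowercase())
}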
# Prompt for user confirmation (compatible with different shells)
prompt_confirmation() {
message="$1"
printf "%s (y/N): " "$message"
# Try to read user input, fallback if read fails
answer=""
if read answer </dev/tty 2>/dev/null; then
case "$answer" in
[Yy]|[Yy][Ee][Ss])
return 0
;;
*)
return 1
;;
esac
else
echo ""
echo "Cannot read user input. Defaulting to No."
return 1
fi
}
download_artifact() {
if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
# Use the latest stable released version.
@@ -71,17 +119,104 @@ download_artifact() {
fi
echo "Downloading ${BIN}, OS: ${OS_TYPE}, Arch: ${ARCH_TYPE}, Version: ${VERSION}"
PACKAGE_NAME="${BIN}-${OS_TYPE}-${ARCH_TYPE}-${VERSION}.tar.gz"
PKG_NAME="${BIN}-${OS_TYPE}-${ARCH_TYPE}-${VERSION}"
PACKAGE_NAME="${PKG_NAME}.tar.gz"
SHA256_FILE="${PKG_NAME}.sha256sum"
if [ -n "${PACKAGE_NAME}" ]; then
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${PACKAGE_NAME}"
# Check if files already exist and prompt for override
if [ -f "${PACKAGE_NAME}" ]; then
echo "File ${PACKAGE_NAME} already exists."
if prompt_confirmation "Do you want to override it?"; then
echo "Overriding existing file..."
rm -f "${PACKAGE_NAME}"
else
echo "Skipping download. Using existing file."
fi
fi
if [ -f "${BIN}" ]; then
echo "Binary ${BIN} already exists."
if prompt_confirmation "Do you want to override it?"; then
echo "Will override existing binary..."
rm -f "${BIN}"
else
echo "Installation cancelled."
exit 0
fi
fi
# Download package if not exists
if [ ! -f "${PACKAGE_NAME}" ]; then
echo "Downloading ${PACKAGE_NAME}..."
# Use curl instead of wget for better compatibility
if command -v curl >/dev/null 2>&1; then
if ! curl -L -o "${PACKAGE_NAME}" "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${PACKAGE_NAME}"; then
echo "Error: Failed to download ${PACKAGE_NAME}"
exit 1
fi
elif command -v wget >/dev/null 2>&1; then
if ! wget -O "${PACKAGE_NAME}" "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${PACKAGE_NAME}"; then
echo "Error: Failed to download ${PACKAGE_NAME}"
exit 1
fi
else
echo "Error: Neither curl nor wget is available for downloading."
exit 1
fi
fi
# Download and verify SHA256 checksum
echo "Downloading SHA256 checksum..."
sha256_download_success=0
if command -v curl >/dev/null 2>&1; then
if curl -L -s -o "${SHA256_FILE}" "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${SHA256_FILE}" 2>/dev/null; then
sha256_download_success=1
fi
elif command -v wget >/dev/null 2>&1; then
if wget -q -O "${SHA256_FILE}" "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${SHA256_FILE}" 2>/dev/null; then
sha256_download_success=1
fi
fi
if [ $sha256_download_success -eq 1 ] && [ -f "${SHA256_FILE}" ]; then
expected_sha256=$(cat "${SHA256_FILE}" | cut -d' ' -f1)
if [ -n "$expected_sha256" ]; then
if ! verify_sha256 "${PACKAGE_NAME}" "${expected_sha256}"; then
echo "SHA256 verification failed. Removing downloaded file."
rm -f "${PACKAGE_NAME}" "${SHA256_FILE}"
exit 1
fi
else
echo "Warning: Could not parse SHA256 checksum from file."
fi
rm -f "${SHA256_FILE}"
else
echo "Warning: Could not download SHA256 checksum file. Skipping verification."
fi
# Extract the binary and clean the rest.
tar xvf "${PACKAGE_NAME}" && \
mv "${PACKAGE_NAME%.tar.gz}/${BIN}" "${PWD}" && \
rm -r "${PACKAGE_NAME}" && \
rm -r "${PACKAGE_NAME%.tar.gz}" && \
echo "Run './${BIN} --help' to get started"
echo "Extracting ${PACKAGE_NAME}..."
if ! tar xf "${PACKAGE_NAME}"; then
echo "Error: Failed to extract ${PACKAGE_NAME}"
exit 1
fi
# Find the binary in the extracted directory
extracted_dir="${PACKAGE_NAME%.tar.gz}"
if [ -f "${extracted_dir}/${BIN}" ]; then
mv "${extracted_dir}/${BIN}" "${PWD}/"
rm -f "${PACKAGE_NAME}"
rm -rf "${extracted_dir}"
chmod +x "${BIN}"
echo "Installation completed successfully!"
echo "Run './${BIN} --help' to get started"
else
echo "Error: Binary ${BIN} not found in extracted archive"
rm -f "${PACKAGE_NAME}"
rm -rf "${extracted_dir}"
exit 1
fi
fi
fi
}

View File

@@ -8,6 +8,7 @@ license.workspace = true
workspace = true
[dependencies]
arrow-schema.workspace = true
common-base.workspace = true
common-decimal.workspace = true
common-error.workspace = true
@@ -19,6 +20,3 @@ paste.workspace = true
prost.workspace = true
serde_json.workspace = true
snafu.workspace = true
[build-dependencies]
tonic-build = "0.11"

View File

@@ -17,9 +17,10 @@ use std::any::Any;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use common_time::timestamp::TimeUnit;
use datatypes::prelude::ConcreteDataType;
use snafu::prelude::*;
use snafu::Location;
use snafu::prelude::*;
pub type Result<T> = std::result::Result<T, Error>;
@@ -66,12 +67,28 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Invalid time unit: {time_unit}"))]
InvalidTimeUnit {
time_unit: i32,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Inconsistent time unit: {:?}", units))]
InconsistentTimeUnit {
units: Vec<TimeUnit>,
#[snafu(implicit)]
location: Location,
},
}
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::UnknownColumnDataType { .. } => StatusCode::InvalidArguments,
Error::UnknownColumnDataType { .. }
| Error::InvalidTimeUnit { .. }
| Error::InconsistentTimeUnit { .. } => StatusCode::InvalidArguments,
Error::IntoColumnDataType { .. } | Error::SerializeJson { .. } => {
StatusCode::Unexpected
}

File diff suppressed because it is too large

View File

@@ -12,8 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(let_chains)]
pub mod error;
pub mod helper;

View File

@@ -22,6 +22,7 @@ use greptime_proto::v1::region::RegionResponse as RegionResponseV1;
pub struct RegionResponse {
pub affected_rows: AffectedRows,
pub extensions: HashMap<String, Vec<u8>>,
pub metadata: Vec<u8>,
}
impl RegionResponse {
@@ -29,6 +30,7 @@ impl RegionResponse {
Self {
affected_rows: region_response.affected_rows as _,
extensions: region_response.extensions,
metadata: region_response.metadata,
}
}
@@ -37,6 +39,16 @@ impl RegionResponse {
Self {
affected_rows,
extensions: Default::default(),
metadata: Vec::new(),
}
}
/// Creates a response carrying only the given metadata.
pub fn from_metadata(metadata: Vec<u8>) -> Self {
Self {
affected_rows: 0,
extensions: Default::default(),
metadata,
}
}
}
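A minimal, self-contained sketch of how the new constructor behaves (the struct is restated here for illustration; in the source, `affected_rows` uses the `AffectedRows` alias):
use std::collections::HashMap;
// Restated for illustration; mirrors the struct in the diff above.
pub struct RegionResponse {
    pub affected_rows: usize,
    pub extensions: HashMap<String, Vec<u8>>,
    pub metadata: Vec<u8>,
}
impl RegionResponse {
    // Same shape as the `from_metadata` constructor added above.
    pub fn from_metadata(metadata: Vec<u8>) -> Self {
        Self { affected_rows: 0, extensions: HashMap::new(), metadata }
    }
}
fn main() {
    let resp = RegionResponse::from_metadata(vec![0xDE, 0xAD]);
    assert_eq!(resp.affected_rows, 0);
    assert!(resp.extensions.is_empty());
    assert_eq!(resp.metadata, vec![0xDE, 0xAD]);
}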

View File

@@ -14,6 +14,8 @@
pub mod column_def;
pub mod helper;
pub mod meta {
pub use greptime_proto::v1::meta::*;
}

View File

@@ -14,17 +14,18 @@
use std::collections::HashMap;
use arrow_schema::extension::{EXTENSION_TYPE_METADATA_KEY, EXTENSION_TYPE_NAME_KEY};
use datatypes::schema::{
ColumnDefaultConstraint, ColumnSchema, FulltextAnalyzer, FulltextBackend, FulltextOptions,
SkippingIndexOptions, SkippingIndexType, COMMENT_KEY, FULLTEXT_KEY, INVERTED_INDEX_KEY,
SKIPPING_INDEX_KEY,
COMMENT_KEY, ColumnDefaultConstraint, ColumnSchema, FULLTEXT_KEY, FulltextAnalyzer,
FulltextBackend, FulltextOptions, INVERTED_INDEX_KEY, SKIPPING_INDEX_KEY, SkippingIndexOptions,
SkippingIndexType,
};
use greptime_proto::v1::{
Analyzer, FulltextBackend as PbFulltextBackend, SkippingIndexType as PbSkippingIndexType,
};
use snafu::ResultExt;
use crate::error::{self, Result};
use crate::error::{self, ConvertColumnDefaultConstraintSnafu, Result};
use crate::helper::ColumnDataTypeWrapper;
use crate::v1::{ColumnDef, ColumnOptions, SemanticType};
@@ -37,8 +38,10 @@ const SKIPPING_INDEX_GRPC_KEY: &str = "skipping_index";
/// Tries to construct a `ColumnSchema` from the given `ColumnDef`.
pub fn try_as_column_schema(column_def: &ColumnDef) -> Result<ColumnSchema> {
let data_type =
ColumnDataTypeWrapper::try_new(column_def.data_type, column_def.datatype_extension)?;
let data_type = ColumnDataTypeWrapper::try_new(
column_def.data_type,
column_def.datatype_extension.clone(),
)?;
let constraint = if column_def.default_constraint.is_empty() {
None
@@ -66,6 +69,15 @@ pub fn try_as_column_schema(column_def: &ColumnDef) -> Result<ColumnSchema> {
if let Some(skipping_index) = options.options.get(SKIPPING_INDEX_GRPC_KEY) {
metadata.insert(SKIPPING_INDEX_KEY.to_string(), skipping_index.to_owned());
}
if let Some(extension_name) = options.options.get(EXTENSION_TYPE_NAME_KEY) {
metadata.insert(EXTENSION_TYPE_NAME_KEY.to_string(), extension_name.clone());
}
if let Some(extension_metadata) = options.options.get(EXTENSION_TYPE_METADATA_KEY) {
metadata.insert(
EXTENSION_TYPE_METADATA_KEY.to_string(),
extension_metadata.clone(),
);
}
}
ColumnSchema::new(&column_def.name, data_type.into(), column_def.is_nullable)
@@ -77,6 +89,48 @@ pub fn try_as_column_schema(column_def: &ColumnDef) -> Result<ColumnSchema> {
})
}
/// Tries to construct a `ColumnDef` from the given `ColumnSchema`.
///
/// TODO(weny): Add tests for this function.
pub fn try_as_column_def(column_schema: &ColumnSchema, is_primary_key: bool) -> Result<ColumnDef> {
let column_datatype =
ColumnDataTypeWrapper::try_from(column_schema.data_type.clone()).map(|w| w.to_parts())?;
let semantic_type = if column_schema.is_time_index() {
SemanticType::Timestamp
} else if is_primary_key {
SemanticType::Tag
} else {
SemanticType::Field
} as i32;
let comment = column_schema
.metadata()
.get(COMMENT_KEY)
.cloned()
.unwrap_or_default();
let default_constraint = match column_schema.default_constraint() {
None => vec![],
Some(v) => v
.clone()
.try_into()
.context(ConvertColumnDefaultConstraintSnafu {
column: &column_schema.name,
})?,
};
let options = options_from_column_schema(column_schema);
Ok(ColumnDef {
name: column_schema.name.clone(),
data_type: column_datatype.0 as i32,
is_nullable: column_schema.is_nullable(),
default_constraint,
semantic_type,
comment,
datatype_extension: column_datatype.1,
options,
})
}
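A sketch of the test the TODO above asks for, assuming it lives in this module (round-tripping a schema through `try_as_column_def` and back):
#[cfg(test)]
mod round_trip {
    use super::*;
    use datatypes::prelude::ConcreteDataType;
    #[test]
    fn def_and_schema_round_trip() {
        let schema = ColumnSchema::new("host", ConcreteDataType::string_datatype(), true);
        let def = try_as_column_def(&schema, true).unwrap();
        // A non-time-index primary key column becomes a Tag.
        assert_eq!(def.semantic_type, SemanticType::Tag as i32);
        let back = try_as_column_schema(&def).unwrap();
        assert_eq!(back.name, "host");
        assert_eq!(back.data_type, ConcreteDataType::string_datatype());
    }
}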
/// Constructs a `ColumnOptions` from the given `ColumnSchema`.
pub fn options_from_column_schema(column_schema: &ColumnSchema) -> Option<ColumnOptions> {
let mut options = ColumnOptions::default();
@@ -95,6 +149,17 @@ pub fn options_from_column_schema(column_schema: &ColumnSchema) -> Option<Column
.options
.insert(SKIPPING_INDEX_GRPC_KEY.to_string(), skipping_index.clone());
}
if let Some(extension_name) = column_schema.metadata().get(EXTENSION_TYPE_NAME_KEY) {
options
.options
.insert(EXTENSION_TYPE_NAME_KEY.to_string(), extension_name.clone());
}
if let Some(extension_metadata) = column_schema.metadata().get(EXTENSION_TYPE_METADATA_KEY) {
options.options.insert(
EXTENSION_TYPE_METADATA_KEY.to_string(),
extension_metadata.clone(),
);
}
(!options.options.is_empty()).then_some(options)
}
@@ -226,18 +291,20 @@ mod tests {
assert!(options.is_none());
let mut schema = ColumnSchema::new("test", ConcreteDataType::string_datatype(), true)
.with_fulltext_options(FulltextOptions {
enable: true,
analyzer: FulltextAnalyzer::English,
case_sensitive: false,
backend: FulltextBackend::Bloom,
})
.with_fulltext_options(FulltextOptions::new_unchecked(
true,
FulltextAnalyzer::English,
false,
FulltextBackend::Bloom,
10240,
0.01,
))
.unwrap();
schema.set_inverted_index(true);
let options = options_from_column_schema(&schema).unwrap();
assert_eq!(
options.options.get(FULLTEXT_GRPC_KEY).unwrap(),
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\"}"
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":10240,\"false-positive-rate-in-10000\":100}"
);
assert_eq!(
options.options.get(INVERTED_INDEX_GRPC_KEY).unwrap(),
@@ -247,16 +314,18 @@ mod tests {
#[test]
fn test_options_with_fulltext() {
let fulltext = FulltextOptions {
enable: true,
analyzer: FulltextAnalyzer::English,
case_sensitive: false,
backend: FulltextBackend::Bloom,
};
let fulltext = FulltextOptions::new_unchecked(
true,
FulltextAnalyzer::English,
false,
FulltextBackend::Bloom,
10240,
0.01,
);
let options = options_from_fulltext(&fulltext).unwrap().unwrap();
assert_eq!(
options.options.get(FULLTEXT_GRPC_KEY).unwrap(),
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\"}"
"{\"enable\":true,\"analyzer\":\"English\",\"case-sensitive\":false,\"backend\":\"bloom\",\"granularity\":10240,\"false-positive-rate-in-10000\":100}"
);
}

src/api/src/v1/helper.rs Normal file
View File

@@ -0,0 +1,65 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use greptime_proto::v1::value::ValueData;
use greptime_proto::v1::{ColumnDataType, ColumnSchema, Row, SemanticType, Value};
/// Create a time index [ColumnSchema] from the column's name and datatype.
/// Other fields are left as defaults.
/// Useful when you just want a simple [ColumnSchema] without filling in every struct field.
pub fn time_index_column_schema(name: &str, datatype: ColumnDataType) -> ColumnSchema {
ColumnSchema {
column_name: name.to_string(),
datatype: datatype as i32,
semantic_type: SemanticType::Timestamp as i32,
..Default::default()
}
}
/// Create a tag [ColumnSchema] from the column's name and datatype.
/// Other fields are left as defaults.
/// Useful when you just want a simple [ColumnSchema] without filling in every struct field.
pub fn tag_column_schema(name: &str, datatype: ColumnDataType) -> ColumnSchema {
ColumnSchema {
column_name: name.to_string(),
datatype: datatype as i32,
semantic_type: SemanticType::Tag as i32,
..Default::default()
}
}
/// Create a field [ColumnSchema] from the column's name and datatype.
/// Other fields are left as defaults.
/// Useful when you just want a simple [ColumnSchema] without filling in every struct field.
pub fn field_column_schema(name: &str, datatype: ColumnDataType) -> ColumnSchema {
ColumnSchema {
column_name: name.to_string(),
datatype: datatype as i32,
semantic_type: SemanticType::Field as i32,
..Default::default()
}
}
/// Create a [Row] from [ValueData]s.
/// Useful when you don't want to write verbose construction code.
pub fn row(values: Vec<ValueData>) -> Row {
Row {
values: values
.into_iter()
.map(|x| Value {
value_data: Some(x),
})
.collect::<Vec<_>>(),
}
}
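Taken together, a hedged sketch of how these helpers compose into a tiny row payload (variant and enum names are the proto's own; the column names and values are illustrative):
// Illustration only: build a three-column schema and one matching row.
fn example() -> (Vec<ColumnSchema>, Row) {
    let schema = vec![
        time_index_column_schema("ts", ColumnDataType::TimestampMillisecond),
        tag_column_schema("host", ColumnDataType::String),
        field_column_schema("cpu", ColumnDataType::Float64),
    ];
    let values = row(vec![
        ValueData::TimestampMillisecondValue(1_700_000_000_000),
        ValueData::StringValue("host-1".to_string()),
        ValueData::F64Value(0.42),
    ]);
    (schema, values)
}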

View File

@@ -15,11 +15,11 @@ workspace = true
api.workspace = true
async-trait.workspace = true
common-base.workspace = true
common-config.workspace = true
common-error.workspace = true
common-macro.workspace = true
common-telemetry.workspace = true
digest = "0.10"
notify.workspace = true
sha1 = "0.10"
snafu.workspace = true
sql.workspace = true

View File

@@ -17,13 +17,13 @@ use std::sync::Arc;
use common_base::secrets::SecretString;
use digest::Digest;
use sha1::Sha1;
use snafu::{ensure, OptionExt};
use snafu::{OptionExt, ensure};
use crate::error::{IllegalParamSnafu, InvalidConfigSnafu, Result, UserPasswordMismatchSnafu};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::static_user_provider::{StaticUserProvider, STATIC_USER_PROVIDER};
use crate::user_provider::static_user_provider::{STATIC_USER_PROVIDER, StaticUserProvider};
use crate::user_provider::watch_file_user_provider::{
WatchFileUserProvider, WATCH_FILE_USER_PROVIDER,
WATCH_FILE_USER_PROVIDER, WatchFileUserProvider,
};
use crate::{UserInfoRef, UserProviderRef};
@@ -35,7 +35,7 @@ pub fn userinfo_by_name(username: Option<String>) -> UserInfoRef {
DefaultUserInfo::with_name(username.unwrap_or_else(|| DEFAULT_USERNAME.to_string()))
}
pub fn user_provider_from_option(opt: &String) -> Result<UserProviderRef> {
pub fn user_provider_from_option(opt: &str) -> Result<UserProviderRef> {
let (name, content) = opt.split_once(':').with_context(|| InvalidConfigSnafu {
value: opt.to_string(),
msg: "UserProviderOption must be in format `<option>:<value>`",
@@ -57,7 +57,7 @@ pub fn user_provider_from_option(opt: &String) -> Result<UserProviderRef> {
}
}
pub fn static_user_provider_from_option(opt: &String) -> Result<StaticUserProvider> {
pub fn static_user_provider_from_option(opt: &str) -> Result<StaticUserProvider> {
let (name, content) = opt.split_once(':').with_context(|| InvalidConfigSnafu {
value: opt.to_string(),
msg: "UserProviderOption must be in format `<option>:<value>`",


@@ -75,11 +75,12 @@ pub enum Error {
username: String,
},
#[snafu(display("Failed to initialize a watcher for file {}", path))]
#[snafu(display("Failed to initialize a file watcher"))]
FileWatch {
path: String,
#[snafu(source)]
error: notify::Error,
source: common_config::error::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("User is not authorized to perform this action"))]


@@ -22,13 +22,13 @@ mod user_provider;
pub mod tests;
pub use common::{
auth_mysql, static_user_provider_from_option, user_provider_from_option, userinfo_by_name,
HashedPassword, Identity, Password,
HashedPassword, Identity, Password, auth_mysql, static_user_provider_from_option,
user_provider_from_option, userinfo_by_name,
};
pub use permission::{PermissionChecker, PermissionReq, PermissionResp};
pub use permission::{DefaultPermissionChecker, PermissionChecker, PermissionReq, PermissionResp};
pub use user_info::UserInfo;
pub use user_provider::static_user_provider::StaticUserProvider;
pub use user_provider::UserProvider;
pub use user_provider::static_user_provider::StaticUserProvider;
/// pub type alias
pub type UserInfoRef = std::sync::Arc<dyn UserInfo>;


@@ -13,12 +13,15 @@
// limitations under the License.
use std::fmt::Debug;
use std::sync::Arc;
use api::v1::greptime_request::Request;
use common_telemetry::debug;
use sql::statements::statement::Statement;
use crate::error::{PermissionDeniedSnafu, Result};
use crate::{PermissionCheckerRef, UserInfoRef};
use crate::user_info::DefaultUserInfo;
use crate::{PermissionCheckerRef, UserInfo, UserInfoRef};
#[derive(Debug, Clone)]
pub enum PermissionReq<'a> {
@@ -32,6 +35,33 @@ pub enum PermissionReq<'a> {
PromStoreRead,
Otlp,
LogWrite,
BulkInsert,
}
impl<'a> PermissionReq<'a> {
/// Returns true if the permission request is for read operations.
pub fn is_readonly(&self) -> bool {
match self {
PermissionReq::GrpcRequest(Request::Query(_))
| PermissionReq::PromQuery
| PermissionReq::LogQuery
| PermissionReq::PromStoreRead => true,
PermissionReq::SqlStatement(stmt) => stmt.is_readonly(),
PermissionReq::GrpcRequest(_)
| PermissionReq::Opentsdb
| PermissionReq::LineProtocol
| PermissionReq::PromStoreWrite
| PermissionReq::Otlp
| PermissionReq::LogWrite
| PermissionReq::BulkInsert => false,
}
}
/// Returns true if the permission request is for write operations.
pub fn is_write(&self) -> bool {
!self.is_readonly()
}
}
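A small sketch of the classification: every request is either read or write, with `is_write` defined as the complement of `is_readonly`:

assert!(PermissionReq::LogQuery.is_readonly());
assert!(!PermissionReq::LogQuery.is_write());
assert!(PermissionReq::PromStoreWrite.is_write());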
#[derive(Debug)]
@@ -64,3 +94,106 @@ impl PermissionChecker for Option<&PermissionCheckerRef> {
}
}
}
/// The default permission checker implementation.
/// It checks the permission mode of [DefaultUserInfo].
pub struct DefaultPermissionChecker;
impl DefaultPermissionChecker {
/// Returns a new [PermissionCheckerRef] instance.
pub fn arc() -> PermissionCheckerRef {
Arc::new(DefaultPermissionChecker)
}
}
impl PermissionChecker for DefaultPermissionChecker {
fn check_permission(
&self,
user_info: UserInfoRef,
req: PermissionReq,
) -> Result<PermissionResp> {
if let Some(default_user) = user_info.as_any().downcast_ref::<DefaultUserInfo>() {
let permission_mode = default_user.permission_mode();
if req.is_readonly() && !permission_mode.can_read() {
debug!(
"Permission denied: read operation not allowed, user = {}, permission = {}",
default_user.username(),
permission_mode.as_str()
);
return Ok(PermissionResp::Reject);
}
if req.is_write() && !permission_mode.can_write() {
debug!(
"Permission denied: write operation not allowed, user = {}, permission = {}",
default_user.username(),
permission_mode.as_str()
);
return Ok(PermissionResp::Reject);
}
}
// default allow all
Ok(PermissionResp::Allow)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::user_info::PermissionMode;
#[test]
fn test_default_permission_checker_allow_all_operations() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("test_user", PermissionMode::ReadWrite);
let read_req = PermissionReq::PromQuery;
let write_req = PermissionReq::PromStoreWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Allow));
assert!(matches!(write_result, PermissionResp::Allow));
}
#[test]
fn test_default_permission_checker_readonly_user() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("readonly_user", PermissionMode::ReadOnly);
let read_req = PermissionReq::PromQuery;
let write_req = PermissionReq::PromStoreWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Allow));
assert!(matches!(write_result, PermissionResp::Reject));
}
#[test]
fn test_default_permission_checker_writeonly_user() {
let checker = DefaultPermissionChecker;
let user_info =
DefaultUserInfo::with_name_and_permission("writeonly_user", PermissionMode::WriteOnly);
let read_req = PermissionReq::LogQuery;
let write_req = PermissionReq::LogWrite;
let read_result = checker
.check_permission(user_info.clone(), read_req)
.unwrap();
let write_result = checker.check_permission(user_info, write_req).unwrap();
assert!(matches!(read_result, PermissionResp::Reject));
assert!(matches!(write_result, PermissionResp::Allow));
}
}


@@ -21,7 +21,7 @@ use crate::error::{
UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, Identity, Password, UserInfoRef, UserProvider};
use crate::{Identity, Password, UserInfoRef, UserProvider, auth_mysql};
pub struct DatabaseAuthInfo<'a> {
pub catalog: &'a str,


@@ -23,17 +23,86 @@ pub trait UserInfo: Debug + Sync + Send {
fn username(&self) -> &str;
}
/// The user permission mode
#[derive(Debug, Clone, Copy, PartialEq, Eq, Default)]
pub enum PermissionMode {
#[default]
ReadWrite,
ReadOnly,
WriteOnly,
}
impl PermissionMode {
/// Parse a permission mode from a string (case-insensitive).
/// Supported values are:
/// - "rw", "readwrite", "read_write" => ReadWrite
/// - "ro", "readonly", "read_only" => ReadOnly
/// - "wo", "writeonly", "write_only" => WriteOnly
/// Any unrecognized value falls back to the default, ReadWrite.
pub fn from_str(s: &str) -> Self {
match s.to_lowercase().as_str() {
"readwrite" | "read_write" | "rw" => PermissionMode::ReadWrite,
"readonly" | "read_only" | "ro" => PermissionMode::ReadOnly,
"writeonly" | "write_only" | "wo" => PermissionMode::WriteOnly,
_ => PermissionMode::ReadWrite,
}
}
/// Convert permission mode to string.
/// - ReadWrite => "rw"
/// - ReadOnly => "ro"
/// - WriteOnly => "wo"
/// The returned string is a static string slice.
pub fn as_str(&self) -> &'static str {
match self {
PermissionMode::ReadWrite => "rw",
PermissionMode::ReadOnly => "ro",
PermissionMode::WriteOnly => "wo",
}
}
/// Returns true if the permission mode allows read operations.
pub fn can_read(&self) -> bool {
matches!(self, PermissionMode::ReadWrite | PermissionMode::ReadOnly)
}
/// Returns true if the permission mode allows write operations.
pub fn can_write(&self) -> bool {
matches!(self, PermissionMode::ReadWrite | PermissionMode::WriteOnly)
}
}
impl std::fmt::Display for PermissionMode {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(f, "{}", self.as_str())
}
}
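The capability matrix these methods encode, shown as a short sketch (the "bogus" input exercises the ReadWrite fallback):

let ro = PermissionMode::from_str("ro");
assert!(ro.can_read() && !ro.can_write());
let wo = PermissionMode::from_str("wo");
assert!(!wo.can_read() && wo.can_write());
// Unrecognized values degrade to ReadWrite, which allows both.
let fallback = PermissionMode::from_str("bogus");
assert!(fallback.can_read() && fallback.can_write());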
#[derive(Debug)]
pub(crate) struct DefaultUserInfo {
username: String,
permission_mode: PermissionMode,
}
impl DefaultUserInfo {
pub(crate) fn with_name(username: impl Into<String>) -> UserInfoRef {
Self::with_name_and_permission(username, PermissionMode::default())
}
/// Create a UserInfo with the specified permission mode.
pub(crate) fn with_name_and_permission(
username: impl Into<String>,
permission_mode: PermissionMode,
) -> UserInfoRef {
Arc::new(Self {
username: username.into(),
permission_mode,
})
}
pub(crate) fn permission_mode(&self) -> &PermissionMode {
&self.permission_mode
}
}
impl UserInfo for DefaultUserInfo {
@@ -45,3 +114,120 @@ impl UserInfo for DefaultUserInfo {
self.username.as_str()
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_permission_mode_from_str() {
// Test ReadWrite variants
assert_eq!(
PermissionMode::from_str("readwrite"),
PermissionMode::ReadWrite
);
assert_eq!(
PermissionMode::from_str("read_write"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str("rw"), PermissionMode::ReadWrite);
assert_eq!(
PermissionMode::from_str("ReadWrite"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str("RW"), PermissionMode::ReadWrite);
// Test ReadOnly variants
assert_eq!(
PermissionMode::from_str("readonly"),
PermissionMode::ReadOnly
);
assert_eq!(
PermissionMode::from_str("read_only"),
PermissionMode::ReadOnly
);
assert_eq!(PermissionMode::from_str("ro"), PermissionMode::ReadOnly);
assert_eq!(
PermissionMode::from_str("ReadOnly"),
PermissionMode::ReadOnly
);
assert_eq!(PermissionMode::from_str("RO"), PermissionMode::ReadOnly);
// Test WriteOnly variants
assert_eq!(
PermissionMode::from_str("writeonly"),
PermissionMode::WriteOnly
);
assert_eq!(
PermissionMode::from_str("write_only"),
PermissionMode::WriteOnly
);
assert_eq!(PermissionMode::from_str("wo"), PermissionMode::WriteOnly);
assert_eq!(
PermissionMode::from_str("WriteOnly"),
PermissionMode::WriteOnly
);
assert_eq!(PermissionMode::from_str("WO"), PermissionMode::WriteOnly);
// Test invalid inputs default to ReadWrite
assert_eq!(
PermissionMode::from_str("invalid"),
PermissionMode::ReadWrite
);
assert_eq!(PermissionMode::from_str(""), PermissionMode::ReadWrite);
assert_eq!(PermissionMode::from_str("xyz"), PermissionMode::ReadWrite);
}
#[test]
fn test_permission_mode_as_str() {
assert_eq!(PermissionMode::ReadWrite.as_str(), "rw");
assert_eq!(PermissionMode::ReadOnly.as_str(), "ro");
assert_eq!(PermissionMode::WriteOnly.as_str(), "wo");
}
#[test]
fn test_permission_mode_default() {
assert_eq!(PermissionMode::default(), PermissionMode::ReadWrite);
}
#[test]
fn test_permission_mode_round_trip() {
let modes = [
PermissionMode::ReadWrite,
PermissionMode::ReadOnly,
PermissionMode::WriteOnly,
];
for mode in modes {
let str_repr = mode.as_str();
let parsed = PermissionMode::from_str(str_repr);
assert_eq!(mode, parsed);
}
}
#[test]
fn test_default_user_info_with_name() {
let user_info = DefaultUserInfo::with_name("test_user");
assert_eq!(user_info.username(), "test_user");
}
#[test]
fn test_default_user_info_with_name_and_permission() {
let user_info =
DefaultUserInfo::with_name_and_permission("test_user", PermissionMode::ReadOnly);
assert_eq!(user_info.username(), "test_user");
// Cast to DefaultUserInfo to access permission_mode
let default_user = user_info
.as_any()
.downcast_ref::<DefaultUserInfo>()
.unwrap();
assert_eq!(default_user.permission_mode, PermissionMode::ReadOnly);
}
#[test]
fn test_user_info_as_any() {
let user_info = DefaultUserInfo::with_name("test_user");
let any_ref = user_info.as_any();
assert!(any_ref.downcast_ref::<DefaultUserInfo>().is_some());
}
}


@@ -22,15 +22,15 @@ use std::io::BufRead;
use std::path::Path;
use common_base::secrets::ExposeSecret;
use snafu::{ensure, OptionExt, ResultExt};
use snafu::{OptionExt, ResultExt, ensure};
use crate::common::{Identity, Password};
use crate::error::{
IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Result, UnsupportedPasswordTypeSnafu,
UserNotFoundSnafu, UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, UserInfoRef};
use crate::user_info::{DefaultUserInfo, PermissionMode};
use crate::{UserInfoRef, auth_mysql};
#[async_trait::async_trait]
pub trait UserProvider: Send + Sync {
@@ -64,11 +64,19 @@ pub trait UserProvider: Send + Sync {
}
}
fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Vec<u8>>>> {
/// Type alias for the user info map.
/// Key is the username; value is `(password, permission_mode)`.
pub type UserInfoMap = HashMap<String, (Vec<u8>, PermissionMode)>;
fn load_credential_from_file(filepath: &str) -> Result<UserInfoMap> {
// check valid path
let path = Path::new(filepath);
if !path.exists() {
return Ok(None);
return InvalidConfigSnafu {
value: filepath.to_string(),
msg: "UserProvider file must exist",
}
.fail();
}
ensure!(
@@ -83,13 +91,19 @@ fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Ve
.lines()
.map_while(std::result::Result::ok)
.filter_map(|line| {
if let Some((k, v)) = line.split_once('=') {
Some((k.to_string(), v.as_bytes().to_vec()))
} else {
None
// The line format is:
// - `username=password` - Basic user with default permissions
// - `username:permission_mode=password` - User with specific permission mode
// - Lines starting with '#' are treated as comments and ignored
// - Empty lines are ignored
let line = line.trim();
if line.is_empty() || line.starts_with('#') {
return None;
}
parse_credential_line(line)
})
.collect::<HashMap<String, Vec<u8>>>();
.collect::<HashMap<String, _>>();
ensure!(
!credential.is_empty(),
@@ -99,11 +113,31 @@ fn load_credential_from_file(filepath: &str) -> Result<Option<HashMap<String, Ve
}
);
Ok(Some(credential))
Ok(credential)
}
/// Parse a credential line in the format `username=password` or `username:permission_mode=password`.
/// Returns `None` unless the line contains exactly one `=`.
pub(crate) fn parse_credential_line(line: &str) -> Option<(String, (Vec<u8>, PermissionMode))> {
let parts = line.split('=').collect::<Vec<&str>>();
if parts.len() != 2 {
return None;
}
let (username_part, password) = (parts[0], parts[1]);
let (username, permission_mode) = if let Some((user, perm)) = username_part.split_once(':') {
(user, PermissionMode::from_str(perm))
} else {
(username_part, PermissionMode::default())
};
Some((
username.to_string(),
(password.as_bytes().to_vec(), permission_mode),
))
}
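For reference, a hypothetical users file in the line format accepted above; every username and password here is invented:

# comments and blank lines are ignored
root=123456
# read-only reporting account ("ro" and "read_only" are equivalent)
viewer:ro=s3cret
# ingest-only account
ingest:write_only=p@ssw0rd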
fn authenticate_with_credential(
users: &HashMap<String, Vec<u8>>,
users: &UserInfoMap,
input_id: Identity<'_>,
input_pwd: Password<'_>,
) -> Result<UserInfoRef> {
@@ -115,7 +149,7 @@ fn authenticate_with_credential(
msg: "blank username"
}
);
let save_pwd = users.get(username).context(UserNotFoundSnafu {
let (save_pwd, permission_mode) = users.get(username).context(UserNotFoundSnafu {
username: username.to_string(),
})?;
@@ -128,7 +162,10 @@ fn authenticate_with_credential(
}
);
if save_pwd == pwd.expose_secret().as_bytes() {
Ok(DefaultUserInfo::with_name(username))
Ok(DefaultUserInfo::with_name_and_permission(
username,
*permission_mode,
))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
@@ -137,8 +174,9 @@ fn authenticate_with_credential(
}
}
Password::MysqlNativePassword(auth_data, salt) => {
auth_mysql(auth_data, salt, username, save_pwd)
.map(|_| DefaultUserInfo::with_name(username))
auth_mysql(auth_data, salt, username, save_pwd).map(|_| {
DefaultUserInfo::with_name_and_permission(username, *permission_mode)
})
}
Password::PgMD5(_, _) => UnsupportedPasswordTypeSnafu {
password_type: "pg_md5",
@@ -148,3 +186,108 @@ fn authenticate_with_credential(
}
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_parse_credential_line() {
// Basic username=password format
let result = parse_credential_line("admin=password123");
assert_eq!(
result,
Some((
"admin".to_string(),
("password123".as_bytes().to_vec(), PermissionMode::default())
))
);
// Username with permission mode
let result = parse_credential_line("user:ReadOnly=secret");
assert_eq!(
result,
Some((
"user".to_string(),
("secret".as_bytes().to_vec(), PermissionMode::ReadOnly)
))
);
let result = parse_credential_line("user:ro=secret");
assert_eq!(
result,
Some((
"user".to_string(),
("secret".as_bytes().to_vec(), PermissionMode::ReadOnly)
))
);
// Username with WriteOnly permission mode
let result = parse_credential_line("writer:WriteOnly=mypass");
assert_eq!(
result,
Some((
"writer".to_string(),
("mypass".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Username with 'wo' as WriteOnly permission shorthand
let result = parse_credential_line("writer:wo=mypass");
assert_eq!(
result,
Some((
"writer".to_string(),
("mypass".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Username with complex password containing special characters
let result = parse_credential_line("admin:rw=p@ssw0rd!123");
assert_eq!(
result,
Some((
"admin".to_string(),
(
"p@ssw0rd!123".as_bytes().to_vec(),
PermissionMode::ReadWrite
)
))
);
// Username with spaces should be preserved
let result = parse_credential_line("user name:WriteOnly=password");
assert_eq!(
result,
Some((
"user name".to_string(),
("password".as_bytes().to_vec(), PermissionMode::WriteOnly)
))
);
// Invalid format - no equals sign
let result = parse_credential_line("invalid_line");
assert_eq!(result, None);
// Invalid format - multiple equals signs
let result = parse_credential_line("user=pass=word");
assert_eq!(result, None);
// Empty password
let result = parse_credential_line("user=");
assert_eq!(
result,
Some((
"user".to_string(),
("".as_bytes().to_vec(), PermissionMode::default())
))
);
// Empty username
let result = parse_credential_line("=password");
assert_eq!(
result,
Some((
"".to_string(),
("password".as_bytes().to_vec(), PermissionMode::default())
))
);
}
}


@@ -12,19 +12,19 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use async_trait::async_trait;
use snafu::{OptionExt, ResultExt};
use crate::error::{FromUtf8Snafu, InvalidConfigSnafu, Result};
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::user_provider::{
UserInfoMap, authenticate_with_credential, load_credential_from_file, parse_credential_line,
};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const STATIC_USER_PROVIDER: &str = "static_user_provider";
pub struct StaticUserProvider {
users: HashMap<String, Vec<u8>>,
users: UserInfoMap,
}
impl StaticUserProvider {
@@ -35,23 +35,18 @@ impl StaticUserProvider {
})?;
match mode {
"file" => {
let users = load_credential_from_file(content)?
.context(InvalidConfigSnafu {
value: content.to_string(),
msg: "StaticFileUserProvider must be a valid file path",
})?;
let users = load_credential_from_file(content)?;
Ok(StaticUserProvider { users })
}
"cmd" => content
.split(',')
.map(|kv| {
let (k, v) = kv.split_once('=').context(InvalidConfigSnafu {
parse_credential_line(kv).context(InvalidConfigSnafu {
value: kv.to_string(),
msg: "StaticUserProviderOption cmd values must be in format `user=pwd[,user=pwd]`",
})?;
Ok((k.to_string(), v.as_bytes().to_vec()))
})
})
.collect::<Result<HashMap<String, Vec<u8>>>>()
.collect::<Result<UserInfoMap>>()
.map(|users| StaticUserProvider { users }),
_ => InvalidConfigSnafu {
value: mode.to_string(),
@@ -69,7 +64,7 @@ impl StaticUserProvider {
msg: "Expect at least one pair of username and password",
})?;
let username = kv.0;
let pwd = String::from_utf8(kv.1.clone()).context(FromUtf8Snafu)?;
let pwd = String::from_utf8(kv.1.0.clone()).context(FromUtf8Snafu)?;
Ok((username.clone(), pwd))
}
}
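Since `cmd` mode now routes each comma-separated pair through parse_credential_line, the inline format should accept the same optional permission suffix as the file format; a hedged sketch with invented credentials:

let provider =
    static_user_provider_from_option("static_user_provider:cmd:root=123456,viewer:ro=s3cret")?;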
@@ -102,10 +97,10 @@ pub mod test {
use common_test_util::temp_dir::create_temp_dir;
use crate::UserProvider;
use crate::user_info::DefaultUserInfo;
use crate::user_provider::static_user_provider::StaticUserProvider;
use crate::user_provider::{Identity, Password};
use crate::UserProvider;
async fn test_authenticate(provider: &dyn UserProvider, username: &str, password: &str) {
let re = provider
@@ -143,12 +138,13 @@ pub mod test {
let file = File::create(&file_path);
let file = file.unwrap();
let mut lw = LineWriter::new(file);
assert!(lw
.write_all(
assert!(
lw.write_all(
b"root=123456
admin=654321",
)
.is_ok());
.is_ok()
);
lw.flush().unwrap();
}


@@ -12,28 +12,25 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::path::Path;
use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
use async_trait::async_trait;
use common_config::file_watcher::{FileWatcherBuilder, FileWatcherConfig};
use common_telemetry::{info, warn};
use notify::{EventKind, RecursiveMode, Watcher};
use snafu::{ensure, ResultExt};
use snafu::ResultExt;
use crate::error::{FileWatchSnafu, InvalidConfigSnafu, Result};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::{authenticate_with_credential, load_credential_from_file};
use crate::error::{FileWatchSnafu, Result};
use crate::user_provider::{UserInfoMap, authenticate_with_credential, load_credential_from_file};
use crate::{Identity, Password, UserInfoRef, UserProvider};
pub(crate) const WATCH_FILE_USER_PROVIDER: &str = "watch_file_user_provider";
type WatchedCredentialRef = Arc<Mutex<Option<HashMap<String, Vec<u8>>>>>;
type WatchedCredentialRef = Arc<Mutex<UserInfoMap>>;
/// A user provider that reads user credential from a file and watches the file for changes.
///
/// Empty file is invalid; but file not exist means every user can be authenticated.
/// Both empty file and non-existent file are invalid and will cause initialization to fail.
#[derive(Debug)]
pub(crate) struct WatchFileUserProvider {
users: WatchedCredentialRef,
}
@@ -42,61 +39,36 @@ impl WatchFileUserProvider {
pub fn new(filepath: &str) -> Result<Self> {
let credential = load_credential_from_file(filepath)?;
let users = Arc::new(Mutex::new(credential));
let this = WatchFileUserProvider {
users: users.clone(),
};
let (tx, rx) = channel::<notify::Result<notify::Event>>();
let mut debouncer =
notify::recommended_watcher(tx).context(FileWatchSnafu { path: "<none>" })?;
let mut dir = Path::new(filepath).to_path_buf();
ensure!(
dir.pop(),
InvalidConfigSnafu {
value: filepath,
msg: "UserProvider path must be a file path",
}
);
debouncer
.watch(&dir, RecursiveMode::NonRecursive)
.context(FileWatchSnafu { path: filepath })?;
let users_clone = users.clone();
let filepath_owned = filepath.to_string();
let filepath = filepath.to_string();
std::thread::spawn(move || {
let filename = Path::new(&filepath).file_name();
let _hold = debouncer;
while let Ok(res) = rx.recv() {
if let Ok(event) = res {
let is_this_file = event.paths.iter().any(|p| p.file_name() == filename);
let is_relevant_event = matches!(
event.kind,
EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)
FileWatcherBuilder::new()
.watch_path(filepath)
.context(FileWatchSnafu)?
.config(FileWatcherConfig::new())
.spawn(move || match load_credential_from_file(&filepath_owned) {
Ok(credential) => {
let mut users = users_clone.lock().expect("users credential must be valid");
#[cfg(not(test))]
info!("User provider file {} reloaded", &filepath_owned);
#[cfg(test)]
info!(
"User provider file {} reloaded: {:?}",
&filepath_owned, credential
);
if is_this_file && is_relevant_event {
info!(?event.kind, "User provider file {} changed", &filepath);
match load_credential_from_file(&filepath) {
Ok(credential) => {
let mut users =
users.lock().expect("users credential must be valid");
#[cfg(not(test))]
info!("User provider file {filepath} reloaded");
#[cfg(test)]
info!("User provider file {filepath} reloaded: {credential:?}");
*users = credential;
}
Err(err) => {
warn!(
?err,
"Fail to load credential from file {filepath}; keep the old one",
)
}
}
}
*users = credential;
}
}
});
Err(err) => {
warn!(
?err,
"Fail to load credential from file {}; keep the old one", &filepath_owned
)
}
})
.context(FileWatchSnafu)?;
Ok(this)
Ok(WatchFileUserProvider { users })
}
}
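Untangling the interleaved hunk above, the new watcher wiring reduces to the following shape; the builder signatures are inferred from this diff rather than from the common_config docs:

FileWatcherBuilder::new()
    .watch_path(filepath)              // register the credential file to watch
    .context(FileWatchSnafu)?          // setup errors map to this crate's FileWatch error
    .config(FileWatcherConfig::new())
    .spawn(move || {
        // reload callback: re-read the file and swap the shared map on success,
        // or log a warning and keep the last known good credentials on failure
    })
    .context(FileWatchSnafu)?;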
@@ -108,16 +80,7 @@ impl UserProvider for WatchFileUserProvider {
async fn authenticate(&self, id: Identity<'_>, password: Password<'_>) -> Result<UserInfoRef> {
let users = self.users.lock().expect("users credential must be valid");
if let Some(users) = users.as_ref() {
authenticate_with_credential(users, id, password)
} else {
match id {
Identity::UserId(id, _) => {
warn!(id, "User provider file not exist, allow all users");
Ok(DefaultUserInfo::with_name(id))
}
}
}
authenticate_with_credential(&users, id, password)
}
async fn authorize(&self, _: &str, _: &str, _: &UserInfoRef) -> Result<()> {
@@ -133,9 +96,9 @@ pub mod test {
use common_test_util::temp_dir::create_temp_dir;
use tokio::time::sleep;
use crate::UserProvider;
use crate::user_provider::watch_file_user_provider::WatchFileUserProvider;
use crate::user_provider::{Identity, Password};
use crate::UserProvider;
async fn test_authenticate(
provider: &dyn UserProvider,
@@ -178,6 +141,21 @@ pub mod test {
}
}
#[tokio::test]
async fn test_file_provider_initialization_with_missing_file() {
common_telemetry::init_default_ut_logging();
let dir = create_temp_dir("test_missing_file");
let file_path = format!("{}/non_existent_file", dir.path().to_str().unwrap());
// Try to create provider with non-existent file should fail
let result = WatchFileUserProvider::new(file_path.as_str());
assert!(result.is_err());
let error = result.unwrap_err();
assert!(error.to_string().contains("UserProvider file must exist"));
}
#[tokio::test]
async fn test_file_provider() {
common_telemetry::init_default_ut_logging();
@@ -202,9 +180,10 @@ pub mod test {
// remove the tmp file
assert!(std::fs::remove_file(&file_path).is_ok());
test_authenticate(&provider, "root", "123456", true, Some(timeout)).await;
// When file is deleted during runtime, keep the last known good credentials
test_authenticate(&provider, "root", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", true, Some(timeout)).await;
test_authenticate(&provider, "root", "123456", false, Some(timeout)).await;
test_authenticate(&provider, "admin", "654321", false, Some(timeout)).await;
// recreate the tmp file
assert!(std::fs::write(&file_path, "root=123456\n").is_ok());

Some files were not shown because too many files have changed in this diff.