* feat: support reporting env vars in heartbeat messages to metasrv
Add `heartbeat_env_vars` config option for datanode and frontend. When
configured, the specified environment variable values are read at startup
and sent to metasrv in every heartbeat via the `extensions` map. Metasrv
extracts and stores them in `NodeInfo` for use in routing decisions
(e.g. AZ-aware region placement).
- Add `EnvVars` helper in `common/meta/src/datanode.rs` following the
existing `GcStat` extension pattern with `into_extensions`/`from_extensions`
- Add `env_vars: HashMap<String, String>` field to `NodeInfo` in
`common/meta/src/cluster.rs` with `#[serde(default)]` for backward compat
- Add `heartbeat_env_vars: Vec<String>` config field to `DatanodeOptions`,
`FrontendOptions`, and `StandaloneOptions`
- Inject env vars into heartbeat `extensions` in both datanode and frontend
heartbeat tasks (`datanode/src/heartbeat.rs`, `frontend/src/heartbeat.rs`)
- Extract env vars from `req.extensions` in all three metasrv
`CollectXxxClusterInfoHandler`s
- Update `NodeInfo` construction sites in `meta-client`,
`discovery/lease.rs`, and `standalone/information_extension.rs`
- Update expected TOML output in `tests-integration/tests/http.rs`
- Add unit tests for `EnvVars` round-trip and `NodeInfo` backward compat
Signed-off-by: Lei, HUANG <leih@nvidia.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
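The extensions round-trip described above can be sketched as follows. This is a minimal std-only illustration of the pattern, not the actual `common/meta` code: the extension key, the `k=v` text encoding (the real helper presumably uses a serde-encoded payload like the existing `GcStat` extension), and the function names are stand-ins.

```rust
use std::collections::HashMap;
use std::env;

/// Illustrative key under which env vars travel in the heartbeat
/// `extensions` map (the real key in `common/meta` may differ).
const ENV_VARS_KEY: &str = "env_vars";

/// Collect the configured `heartbeat_env_vars` from the process
/// environment at startup; unset variables are simply skipped.
fn collect_env_vars(names: &[String]) -> HashMap<String, String> {
    names
        .iter()
        .filter_map(|n| env::var(n).ok().map(|v| (n.clone(), v)))
        .collect()
}

/// Serialize into the heartbeat extensions map. A `k=v` newline
/// encoding stands in for the real serde-based encoding.
fn into_extensions(vars: &HashMap<String, String>, ext: &mut HashMap<String, Vec<u8>>) {
    let encoded = vars
        .iter()
        .map(|(k, v)| format!("{k}={v}"))
        .collect::<Vec<_>>()
        .join("\n");
    ext.insert(ENV_VARS_KEY.to_string(), encoded.into_bytes());
}

/// Decode on the metasrv side; malformed payloads yield `None` so the
/// handler can log the error and keep processing the heartbeat.
fn from_extensions(ext: &HashMap<String, Vec<u8>>) -> Option<HashMap<String, String>> {
    let raw = ext.get(ENV_VARS_KEY)?;
    let text = String::from_utf8(raw.clone()).ok()?;
    Some(
        text.lines()
            .filter_map(|l| l.split_once('='))
            .map(|(k, v)| (k.to_string(), v.to_string()))
            .collect(),
    )
}
```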
* refactor: address heartbeat env review feedback
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: log error on deserialization failure
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor: send heartbeat env vars once
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: resend heartbeat env vars after reconnect
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* revert: keep env vars in every heartbeat
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <leih@nvidia.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(operator): allow last_row merge_mode when append_mode is enabled
- Update RegionOptions::validate to allow last_row merge_mode with append_mode.
- Update fill_table_options_for_create to automatically set merge_mode to last_row when append_mode is enabled for LastNonNull table type.
- Add unit tests in mito2 and operator to verify options validation and table creation.
- Add integration test for InfluxDB write with append mode hint.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
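The relaxed validation can be sketched like this; the enum and struct are simplified stand-ins for mito2's actual region option types, assuming the check rejects only the `last_non_null` + append combination.

```rust
/// Illustrative stand-ins for mito2's region option types.
#[derive(PartialEq, Clone, Copy, Debug)]
enum MergeMode { LastRow, LastNonNull }

struct RegionOptions {
    append_mode: bool,
    merge_mode: Option<MergeMode>,
}

impl RegionOptions {
    /// Sketch of the relaxed check: with append mode on, `last_row` is
    /// now accepted (effectively a no-op, since appended rows are never
    /// merged), while `last_non_null` still conflicts with append
    /// semantics and is rejected.
    fn validate(&self) -> Result<(), String> {
        if self.append_mode && self.merge_mode == Some(MergeMode::LastNonNull) {
            return Err("merge_mode last_non_null conflicts with append_mode".into());
        }
        Ok(())
    }
}
```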
* fix(operator): simplify append mode options
Group `LastNonNull` auto-create options in a single append-mode branch.
Files:
- `src/operator/src/insert.rs`
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: sqlness
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix(server): describe EXPLAIN statements so bind parameters work
`do_describe_inner` only planned `Insert`/`Query`/`Delete`, so
`EXPLAIN` and `EXPLAIN ANALYZE` fell through to the non-plan branch
and had no parameter-type inference. At Bind time the Postgres
handler then reported `unsupported_parameter_type` even though the
inner query would have worked on its own.
Recurse one level into `Statement::Explain` so that an EXPLAIN
wrapping a plannable statement goes through the same describe path.
Adds a tokio-postgres integration test that exercises
`EXPLAIN`/`EXPLAIN ANALYZE` over the extended query protocol.
Fixes #8029
Signed-off-by: BootstrapperSBL <yvanwww@gmail.com>
* refactor(server): extract plannable-inner check into closure
Reduce duplication between the direct match and the EXPLAIN inner match
by factoring out is_inner_plannable. Behaviour unchanged.
Signed-off-by: BootstrapperSBL <yvanwww@gmail.com>
---------
Signed-off-by: BootstrapperSBL <yvanwww@gmail.com>
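The describe-path recursion from the two commits above can be sketched as follows. The statement enum is a simplified stand-in for the real sqlparser-based type, and `is_plannable` approximates the factored-out `is_inner_plannable` check.

```rust
/// Simplified statement tree; the real type is sqlparser-based.
enum Statement {
    Query,
    Insert,
    Delete,
    Explain(Box<Statement>),
    Other,
}

/// One shared check, mirroring the `is_inner_plannable` closure from
/// the refactor: EXPLAIN recurses one level so an EXPLAIN wrapping a
/// plannable statement takes the same describe path and gets
/// parameter-type inference.
fn is_plannable(stmt: &Statement) -> bool {
    let inner_plannable =
        |s: &Statement| matches!(s, Statement::Query | Statement::Insert | Statement::Delete);
    match stmt {
        Statement::Explain(inner) => inner_plannable(inner),
        s => inner_plannable(s),
    }
}
```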
* feat: add support for decimal parameter type, remove string replacement fallback
* chore: format
* fix: add support for using unsigned bigint in postgres
* chore: format toml
* refactor: cleanup duplicated code
* fix: rescale decimal
* feat: support alter from primary_key to flat
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: alter flat to primary_key
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: change default_experimental_flat_format to true
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: compute channel size from split batch size
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: add tests for split and channel size
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: always set sst_format from manifest on region open
sanitize_region_options did not set options.sst_format when the
default (PrimaryKey) matched the manifest value, leaving it as None
after reopen. This caused the alter format change to appear lost.
Signed-off-by: evenyag <realevenyag@gmail.com>
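The bug and fix can be illustrated with a minimal sketch; the types are stand-ins for the real mito2 options, assuming the fix is to copy the manifest value unconditionally.

```rust
#[derive(Clone, Copy, PartialEq, Debug)]
enum SstFormat { PrimaryKey, Flat }

struct RegionOptions {
    sst_format: Option<SstFormat>,
}

/// Before the fix, the manifest value was only copied when it differed
/// from the default, so `PrimaryKey` regions reopened with `None` and a
/// later format alteration appeared lost. Always copying it is the fix.
fn sanitize_region_options(options: &mut RegionOptions, manifest_format: SstFormat) {
    options.sst_format = Some(manifest_format);
}
```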
* test: fix tests
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: show create table after alteration
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor!: rename default_experimental_flat_format to default_flat_format
The flat format is no longer experimental. Remove "experimental" from
the config field name, doc comments, and all references.
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: fix clippy
Signed-off-by: evenyag <realevenyag@gmail.com>
---------
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: metric batch 2s PoC
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
* chore: max_concurrent_flushes
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
* chore: work channel size
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
* feat(servers): add metrics and logs for pending rows batch flush
Add the `FLUSH_ELAPSED` histogram metric to track the duration of pending
rows batch flushes in the Prometheus store protocol handler. This provides
better observability into the performance and latency of the batcher.
Also update telemetry by:
- Recording elapsed time for both successful and failed flush operations.
- Adding an informational log upon successful flush including row count and duration.
- Including elapsed time in error logs when a flush fails.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(servers): implement columnar batching for pending rows
Refactor PendingRowsBatcher to use columnar batching for the metrics
store. Incoming RowInsertRequests are now converted to RecordBatches,
partitioned, and flushed via BulkInsert requests to datanodes.
- Enhance MultiDimPartitionRule to handle scalar boolean predicates.
- Add metrics for tracking flush failures and dropped rows.
- Update dependencies to support columnar batching in servers.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(servers): add backpressure for pending rows
Implement backpressure in PendingRowsBatcher by limiting in-flight
requests with a semaphore and making the submission wait for the flush
result. This ensures Prometheus write requests are throttled and only
return once the data has been successfully flushed to datanodes.
- Add max_inflight_requests to PromStoreOptions.
- Use oneshot channels to notify submitters of flush completion.
- Limit concurrent requests using a new inflight_semaphore.
- Update PendingRowsBatcher::submit to wait for the flush outcome.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
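The backpressure shape described above can be sketched with std primitives. The real batcher uses a tokio `Semaphore` plus `oneshot` channels; here a bounded `sync_channel` plays the role of `max_inflight_requests` and a plain mpsc channel stands in for the oneshot reply. All names are illustrative.

```rust
use std::sync::mpsc;
use std::thread;

/// A flush request: the rows to batch plus a one-shot reply channel the
/// submitter blocks on until the flush outcome is known.
struct Submission {
    rows: Vec<String>,
    reply: mpsc::Sender<Result<usize, String>>,
}

/// Spawn a worker draining a bounded queue. The queue bound throttles
/// submitters the way the inflight semaphore does: when it is full,
/// `submit` blocks instead of accepting more work.
fn spawn_batcher(max_inflight: usize) -> mpsc::SyncSender<Submission> {
    let (tx, rx) = mpsc::sync_channel::<Submission>(max_inflight);
    thread::spawn(move || {
        for sub in rx {
            // Stand-in for the real flush to datanodes.
            let flushed = sub.rows.len();
            let _ = sub.reply.send(Ok(flushed));
        }
    });
    tx
}

/// Submit rows and wait for the flush result before returning, so the
/// caller only acknowledges once the data has been handed off.
fn submit(tx: &mpsc::SyncSender<Submission>, rows: Vec<String>) -> Result<usize, String> {
    let (reply_tx, reply_rx) = mpsc::channel();
    tx.send(Submission { rows, reply: reply_tx })
        .map_err(|e| e.to_string())?;
    reply_rx.recv().map_err(|e| e.to_string())?
}
```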
* feat: add stage-level metrics for bulk ingestion
Introduce histograms to track the elapsed time of various stages in the
metric engine bulk insert path and the server's pending rows batcher.
This provides better observability into the performance bottlenecks
of the ingestion pipeline.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore(metric-engine): remove bulk insert fallback and unused import
- `src/metric-engine/src/engine/bulk_insert.rs`: Removed the fallback mechanism that converted record batches to rows when bulk inserts were unsupported, along with related helper functions and unused imports.
- `src/operator/src/insert.rs`: Removed an unused import (`common_time::TimeToLive::Instant`).
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(servers): columnar Prom remote write
Optimize the Prometheus remote write path by allowing direct conversion
from decoded Prometheus samples to Arrow RecordBatches. This bypasses
intermediate row-based representations when `PendingRowsBatcher` is
active and no pipeline is used, improving ingestion efficiency.
- Implement `as_record_batch_groups` in `TablesBuilder` and `PromWriteRequest`.
- Add `submit_prom_record_batch_groups` to `PendingRowsBatcher`.
- Introduce `DecodedPromWriteRequest` in `prom_store`.
- Implement row-to-RecordBatch conversion logic in `prom_row_builder`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* Revert "feat(servers): columnar Prom remote write"
This reverts commit efbb63c12a3e7fcec03858ea0351efd94fec8242.
* refactor(servers): improve row to RecordBatch conversion
- Use `snafu::ensure` for row validation in `rows_to_record_batch`.
- Add explicit type hint for `MutableVector` to improve clarity.
- Reorganize and clean up imports in `pending_rows_batcher.rs`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* perf(servers): use arrow builders for row conversion
This commit optimizes the conversion from `api::v1::Rows` to `RecordBatch`
by using Arrow builders directly. This avoids the overhead of
`MutableVector` and `common_recordbatch`, leading to better performance
in the `pending_rows_batcher`.
Additionally, the `#[allow(dead_code)]` attribute is removed from
`modify_batch_sparse` in the metric engine as it is now utilized.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
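The columnar access pattern this optimization relies on can be sketched without the arrow crate; plain `Vec`s stand in for Arrow's typed builders, and the value enum stands in for the `api::v1` column types, so this shows the shape of the conversion rather than the actual implementation.

```rust
/// A row value in the simplified model; the real code switches on the
/// column data type and appends to the matching Arrow builder.
enum Value { I64(i64), Str(String) }

/// Per-column builder; `Vec<Option<_>>` stands in for Arrow's
/// `Int64Builder` / `StringBuilder` with null tracking.
enum ColumnBuilder {
    I64(Vec<Option<i64>>),
    Str(Vec<Option<String>>),
}

impl ColumnBuilder {
    fn push(&mut self, v: Option<Value>) {
        match (self, v) {
            (ColumnBuilder::I64(b), Some(Value::I64(x))) => b.push(Some(x)),
            (ColumnBuilder::Str(b), Some(Value::Str(s))) => b.push(Some(s)),
            (ColumnBuilder::I64(b), None) => b.push(None),
            (ColumnBuilder::Str(b), None) => b.push(None),
            _ => panic!("value type does not match column type"),
        }
    }
}

/// Convert row-major input into per-column builders in one pass: the
/// append-only access pattern that typed Arrow builders make cheap,
/// avoiding per-value dynamic dispatch through `MutableVector`.
fn rows_to_columns(
    rows: Vec<Vec<Option<Value>>>,
    mut builders: Vec<ColumnBuilder>,
) -> Vec<ColumnBuilder> {
    for row in rows {
        assert_eq!(row.len(), builders.len(), "row arity must match schema");
        for (b, v) in builders.iter_mut().zip(row) {
            b.push(v);
        }
    }
    builders
}
```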
* perf(metric-engine): optimize batch modification
Optimize `modify_batch_sparse` by reusing buffers, using Arrow
builders, and employing fast-path encoding methods. This reduces
allocations and avoids redundant downcasting and serializer overhead.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-support-bulk:
**Add Environment Variable for Batch Sync Control**
- `pending_rows_batcher.rs`: Introduced an environment variable `PENDING_ROWS_BATCH_SYNC` to control the synchronization behavior of batch processing. If set to true, the function waits for the flush result; otherwise, it returns immediately with the total row count.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* wip
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: update and fix clippy
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: failing test
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* picking-pending-rows-batcher:
Remove Unused Code and Simplify Error Handling
- **`src/error.rs`**: Removed the `BatcherQueueFull` error variant and its associated logic, simplifying the error handling by removing unused code.
- **`src/http/prom_store.rs`**: Eliminated the `try_decompress` function, streamlining the decompression logic by directly using `snappy_decompress` in `decode_remote_read_request`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: parse PENDING_ROWS_BATCH_SYNC once
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: revert unrelated changes
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* **Refactor Prometheus Write Handling**
- **`prom_store.rs`**: Introduced `pre_write` method in `PromStoreProtocolHandler` to handle pre-write checks for Prometheus remote write requests. Updated `write` method to utilize `pre_write`.
- **`server.rs`**: Modified `PendingRowsBatcher` initialization to conditionally create a batcher based on `with_metric_engine` flag.
- **`http/prom_store.rs`**: Integrated `pre_write` checks before submitting requests to `PendingRowsBatcher`.
- **`query_handler.rs`**: Added `pre_write` method to `PromStoreProtocolHandler` trait for pre-write operations.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* picking-pending-rows-batcher:
- **Fix Label Typo**: Corrected a typo in the label value from `"flush_wn ite_region"` to `"flush_write_region"` in `pending_rows_batcher.rs`.
- **Refactor Array Building Logic**: Introduced a macro `build_array!` to streamline the construction of `ArrayRef` for different data types, reducing code duplication in `pending_rows_batcher.rs`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* format toml
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* picking-pending-rows-batcher:
### Update PromStore and PendingRowsBatcher Configuration
- **`prom_store.rs`**: Set `pending_rows_flush_interval` to `Duration::ZERO` to disable automatic flushing.
- **`pending_rows_batcher.rs`**: Enhance validation to disable the batcher when `flush_interval` is zero or configuration values like `max_batch_rows`, `max_concurrent_flushes`, `worker_channel_capacity`, or `max_inflight_requests` are zero, preventing potential panics or deadlocks.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* picking-pending-rows-batcher:
### Update `pending_rows_flush_interval` to Zero
- **Files Modified**:
- `src/frontend/src/service_config/prom_store.rs`
- `tests-integration/tests/http.rs`
- **Key Changes**:
- Updated `pending_rows_flush_interval` from `Duration::from_secs(2)` to `Duration::ZERO` in `prom_store.rs`.
- Changed `pending_rows_flush_interval` configuration from `"2s"` to `"0s"` in `http.rs`.
Setting the flush interval to zero disables automatic flushing by default.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* picking-pending-rows-batcher:
**Add Worker Management Enhancements**
- **`metrics.rs`**: Introduced `PENDING_WORKERS` gauge to track active pending rows batch workers.
- **`pending_rows_batcher.rs`**:
- Added worker idle timeout logic with `WORKER_IDLE_TIMEOUT_MULTIPLIER`.
- Implemented worker management functions: `spawn_worker`, `remove_worker_if_same_channel`, and `should_close_worker_on_idle_timeout`.
- Enhanced worker lifecycle management to handle idle workers and ensure proper cleanup.
- **Tests**: Added unit tests for worker removal and idle timeout logic.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
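The idle-timeout decision can be sketched as a pure function. The multiplier value and the zero-interval guard are illustrative assumptions; the real `should_close_worker_on_idle_timeout` also verifies the worker still owns its channel before removal (see `remove_worker_if_same_channel`).

```rust
use std::time::{Duration, Instant};

/// Illustrative value; the real constant lives in
/// `pending_rows_batcher.rs` and may differ.
const WORKER_IDLE_TIMEOUT_MULTIPLIER: u32 = 10;

/// A worker whose channel has been idle for
/// `flush_interval * WORKER_IDLE_TIMEOUT_MULTIPLIER` is shut down so
/// idle workers do not accumulate.
fn should_close_worker_on_idle_timeout(
    last_activity: Instant,
    flush_interval: Duration,
    now: Instant,
) -> bool {
    // A zero interval disables the batcher entirely, so never time out.
    if flush_interval.is_zero() {
        return false;
    }
    now.duration_since(last_activity) >= flush_interval * WORKER_IDLE_TIMEOUT_MULTIPLIER
}
```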
* fix: clippy
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: jeremyhi <fengjiachun@gmail.com>
* refactor/prom-related-code:
Refactor Byte Handling and Improve Decoding Logic
- **`prom_decode.rs`**: Removed `Bytes` usage in favor of `Vec<u8>` for handling raw data, improving memory management and simplifying the decoding process.
- **`prom_store.rs`**: Updated `try_decompress` function to return `Vec<u8>` instead of `Bytes`, aligning with the new data handling approach.
- **`prom_row_builder.rs`**: Modified `TablesBuilder` to use `Vec<u8>` for `raw_data`, enhancing data manipulation capabilities.
- **`proto.rs`**: Refactored `PromWriteRequest` decoding logic to use `Vec<u8>`, optimizing the buffer management and decoding flow.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor: mod structure
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/prom-related-code:
- **Refactor `prom_store.rs` and `prom_remote_write/mod.rs`:** Moved `decode_remote_write_request` and `try_decompress` functions from `prom_store.rs` to `prom_remote_write/mod.rs`. This change centralizes the logic related to remote write request decoding and decompression.
- **Update `PromValidationMode` in `validation.rs`:** Implemented `Default` trait using the `#[derive(Default)]` attribute for `PromValidationMode` and updated related methods to use `Result` instead of `std::result::Result`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/prom-related-code:
### Remove `proto.rs` and Update References
- **Removed**: Deleted the `proto.rs` file, which contained re-exports for Prometheus remote write decode types.
- **Updated References**: Adjusted references to `PromSeriesProcessor` and `PromWriteRequest` in `prom_decode.rs` and `prom_store.rs` to import directly from `prom_remote_write`.
- **Modified Modules**: Removed the `proto` module from `lib.rs`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: lint
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: remove assert_eq
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/prom-related-code:
### Refactor Prometheus Remote Write Module
- **Modularization of `prom_remote_write`:**
- Split `PromValidationMode` and `validate_label_name` into a new `validation` module.
- Moved `PromSeriesProcessor` and `PromWriteRequest` to a `decode` module.
- Separated `PromLabel` into a `types` module and adjusted visibility.
- **Visibility Adjustments:**
- Changed `PromTimeSeries` and `PromLabel` structs to `pub(crate)` for internal use.
- **File Updates:**
- Updated references in `prom_decode.rs`, `http.rs`, `prom_store.rs`, `decode.rs`, `mod.rs`, `row_builder.rs`, `types.rs`, `prom_store_test.rs`, and `test_util.rs` to reflect module changes.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat: use arrow-pg for encode_row
* refactor: remove bytea and datetime module
* feat: port more encodings to arrow-pg
* feat: implement intervalstyle
* chore: format
* chore: remove error that is no longer used
* chore: use released arrow-pg
* Apply suggestions from code review
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
---------
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
* feat: impl vector index scan in storage
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* feat: fallback to read remote blob when blob not found
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* chore: refactor encoding and decoding and apply suggestions
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* fix: license
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* test: add apply_with_k tests
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* chore: apply suggestions
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* fix: forgot to align nulls when the vector column is not in the batch
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* test: add test for when the vector column is not in a batch while building
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
---------
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* feat: add repartition procedure factory support to DdlManager
- Introduce RepartitionProcedureFactory trait for creating and registering
repartition procedures
- Implement DefaultRepartitionProcedureFactory for metasrv with full support
- Implement StandaloneRepartitionProcedureFactory for standalone (unsupported)
- Add procedure loader registration for RepartitionProcedure and
RepartitionGroupProcedure
- Add helper methods to TableMetadataAllocator for allocator access
- Add error types for repartition procedure operations
- Update DdlManager to accept and use RepartitionProcedureFactoryRef
Signed-off-by: WenyXu <wenymedia@gmail.com>
* feat: integrate repartition procedure into DdlManager
- Add submit_repartition_task() to handle repartition from alter table
- Route Repartition operations in submit_alter_table_task() to repartition factory
- Refactor: rename submit_procedure() to execute_procedure_and_wait()
- Make all DDL operations wait for completion by default
- Add submit_procedure() for fire-and-forget submissions
- Add CreateRepartitionProcedure error type
- Add placeholder Repartition handling in grpc-expr (unsupported)
- Update greptime-proto dependency
Signed-off-by: WenyXu <wenymedia@gmail.com>
* feat: implement ALTER TABLE REPARTITION procedure submission
Signed-off-by: WenyXu <wenymedia@gmail.com>
* refactor(repartition): handle central region in apply staging manifest
- Introduce ApplyStagingManifestInstructions struct to organize instructions
- Add special handling for central region when applying staging manifests
- Transition state from UpdateMetadata to RepartitionEnd after applying staging manifests
- Remove next_state() method in RepartitionStart and inline state transitions
- Improve logging and expression serialization in DDL statement executor
- Move repartition tests from standalone to distributed test suite
Signed-off-by: WenyXu <wenymedia@gmail.com>
* chore: apply suggestions from CR
Signed-off-by: WenyXu <wenymedia@gmail.com>
* chore: update proto
Signed-off-by: WenyXu <wenymedia@gmail.com>
---------
Signed-off-by: WenyXu <wenymedia@gmail.com>