* feat: switch partition tree to bulk
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: keep partition tree memtable for migration test
Restore PartitionTreeMemtable construction when memtable.type=partition_tree
is explicit, and move the sparse-encoding bulk override into the default
(no explicit memtable.type) arm so phase 2's memtable.type=bulk wins on
reopen. Rewrite test_reopen_time_series_sparse_memtable_with_bulk to use a
metric-engine-shaped schema and sparse-encoded rows with WriteHint::Sparse,
so the test actually exercises a PartitionTreeMemtable in phase 1 and
verifies WAL replay into the new BulkMemtable on reopen without flushing.
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: drop partition tree memtable from runtime
Re-apply the unconditional sparse-encoding override in
`MemtableBuilderProvider::builder_for_options` and route the
`MemtableOptions::PartitionTree` arm to `BulkMemtable` with a deprecation
warning. After this change, `PartitionTreeMemtableBuilder` is no longer
reachable from the engine runtime; benchmarks still reference the type.
Remove `test_reopen_time_series_sparse_memtable_with_bulk` and the
`put_sparse_rows` helper added in the previous commit — that test only
existed to validate the PartitionTree -> Bulk reopen migration and is
unnecessary now that the override is in place.
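The routing in this commit can be sketched as follows (illustrative names; `MemtableOption`, `MemtableType`, and this `builder_for_options` are stand-ins, not the actual mito2 API, and the default arm is assumed for illustration):

```rust
#[derive(Debug, PartialEq)]
enum MemtableOption {
    TimeSeries,
    PartitionTree,
    Bulk,
}

#[derive(Debug, PartialEq)]
enum MemtableType {
    TimeSeries,
    Bulk,
}

fn builder_for_options(opt: Option<MemtableOption>, sparse_encoding: bool) -> MemtableType {
    // Unconditional sparse-encoding override: sparse regions always get Bulk.
    if sparse_encoding {
        return MemtableType::Bulk;
    }
    match opt {
        Some(MemtableOption::PartitionTree) => {
            // Deprecation warning; the partition-tree builder is no longer reachable.
            eprintln!("memtable.type=partition_tree is deprecated; using bulk");
            MemtableType::Bulk
        }
        Some(MemtableOption::TimeSeries) => MemtableType::TimeSeries,
        // Bulk, or no explicit memtable.type (default assumed here for illustration).
        Some(MemtableOption::Bulk) | None => MemtableType::Bulk,
    }
}
```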
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor(mito2): move timestamp_array_to_i64_slice into read module
Relocate the timestamp_array_to_i64_slice helper from
memtable/partition_tree/data.rs to the read module so that the read
path no longer depends on the partition_tree internals. All call sites
(both inside and outside the partition_tree module) now import from
crate::read.
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor(mito2): use TimeSeriesMemtableBuilder in time_partition tests
The time_partition tests use the memtable builder purely as a generic
backend for the TimePartitions write/scan paths; nothing in them is
specific to the partition-tree memtable. Switch the seven affected
tests to TimeSeriesMemtableBuilder so the tests no longer depend on
PartitionTreeMemtableBuilder.
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore(mito2): delete PartitionTreeMemtable implementation
The runtime already falls back to BulkMemtable for the PartitionTree
variant. Drop the now-unreachable implementation, its metrics, the
partition_tree benchmarks, the metric-engine Unsupported fallback in
bulk_insert.rs, and the test helpers that only existed for the deleted
module.
MemtableOptions::PartitionTree, its parsing, the runtime fallback, the
store-api MEMTABLE_PARTITION_TREE_* constants, and the SQL fixtures
remain so existing region options keep round-tripping.
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor(mito-codec): drop skip_partition_column parameter
PartitionTreeMemtable was the only caller passing
skip_partition_column=true; every other caller passes false. Now that
the partition_tree module is gone, the parameter is uniformly false
and the guard branch is dead. Drop the parameter from the trait method
and both impls, remove the guard and the is_partition_column helper,
and update the four remaining call sites in mito2 plus the bench.
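The parameter drop can be sketched like this (`RowCodec` and `PlainCodec` are placeholder names, not the actual mito-codec trait):

```rust
trait RowCodec {
    // Before: fn encode(&self, row: &[i64], skip_partition_column: bool) -> Vec<i64>;
    // After: the parameter is gone. Every caller passed `false`, so the guard
    // branch that consulted `is_partition_column` was dead code.
    fn encode(&self, row: &[i64]) -> Vec<i64>;
}

struct PlainCodec;

impl RowCodec for PlainCodec {
    fn encode(&self, row: &[i64]) -> Vec<i64> {
        // Uniform behavior: no column is ever skipped.
        row.to_vec()
    }
}
```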
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore(mito2): remove unused MemtableConfig enum
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: fmt code
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor: remove unused variant
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: update test_config_api
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: remove unused memtable test helpers
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: address review comment
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: support bulk memtable options
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: sanitize config
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: remove partition tree options from region options
Move primary_key_encoding to the top level
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: make replaced datetime text stable in ssts test

Signed-off-by: evenyag <realevenyag@gmail.com>
* test: update sqlness result
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: make validate_enum_options consider bulk memtable
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor: pass region id when parsing region options
Replace the `TryFrom<&HashMap>` impl for `RegionOptions` with
`try_from_options(region_id, options_map)` so the legacy partition_tree
fallback can log the affected region. The fallback now also overrides
the SST format to flat in addition to clearing the memtable type.
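A sketch of the new entry point (the `RegionOptions` fields here are invented for illustration; `memtable.type` is the real option key):

```rust
use std::collections::HashMap;

#[derive(Debug, Default, PartialEq)]
struct RegionOptions {
    memtable_type: Option<String>,
    sst_format: Option<String>,
}

// Replaces `TryFrom<&HashMap<String, String>>`: taking the region id lets the
// legacy partition_tree fallback log which region is affected.
fn try_from_options(region_id: u64, options: &HashMap<String, String>) -> RegionOptions {
    let mut opts = RegionOptions::default();
    match options.get("memtable.type").map(|s| s.as_str()) {
        Some("partition_tree") => {
            // Legacy fallback: clear the memtable type AND override the SST format.
            eprintln!("region {region_id}: memtable.type=partition_tree is deprecated");
            opts.sst_format = Some("flat".to_string());
        }
        Some(other) => opts.memtable_type = Some(other.to_string()),
        None => {}
    }
    opts
}
```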
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: align sst_format with bulk memtable on parse and open
Signed-off-by: evenyag <realevenyag@gmail.com>
---------
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: support reporting env vars in heartbeat messages to metasrv
Add `heartbeat_env_vars` config option for datanode and frontend. When
configured, the specified environment variable values are read at startup
and sent to metasrv in every heartbeat via the `extensions` map. Metasrv
extracts and stores them in `NodeInfo` for use in routing decisions
(e.g. AZ-aware region placement).
- Add `EnvVars` helper in `common/meta/src/datanode.rs` following the
existing `GcStat` extension pattern with `into_extensions`/`from_extensions`
- Add `env_vars: HashMap<String, String>` field to `NodeInfo` in
`common/meta/src/cluster.rs` with `#[serde(default)]` for backward compat
- Add `heartbeat_env_vars: Vec<String>` config field to `DatanodeOptions`,
`FrontendOptions`, and `StandaloneOptions`
- Inject env vars into heartbeat `extensions` in both datanode and frontend
heartbeat tasks (`datanode/src/heartbeat.rs`, `frontend/src/heartbeat.rs`)
- Extract env vars from `req.extensions` in all three metasrv
`CollectXxxClusterInfoHandler`s
- Update `NodeInfo` construction sites in `meta-client`,
`discovery/lease.rs`, and `standalone/information_extension.rs`
- Update expected TOML output in `tests-integration/tests/http.rs`
- Add unit tests for `EnvVars` round-trip and `NodeInfo` backward compat
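The extension round-trip can be sketched with std types (the real `EnvVars` follows the serde-based `GcStat` pattern; the prefix-key scheme below is a stand-in):

```rust
use std::collections::HashMap;

const ENV_VARS_PREFIX: &str = "env_vars.";

// Pack configured env var values into the heartbeat `extensions` map.
fn into_extensions(env_vars: &HashMap<String, String>, extensions: &mut HashMap<String, String>) {
    for (k, v) in env_vars {
        extensions.insert(format!("{ENV_VARS_PREFIX}{k}"), v.clone());
    }
}

// Recover env vars on the metasrv side for storage in `NodeInfo`.
fn from_extensions(extensions: &HashMap<String, String>) -> HashMap<String, String> {
    extensions
        .iter()
        .filter_map(|(k, v)| {
            k.strip_prefix(ENV_VARS_PREFIX)
                .map(|name| (name.to_string(), v.clone()))
        })
        .collect()
}
```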
Signed-off-by: Lei, HUANG <leih@nvidia.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor: address heartbeat env review feedback
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: log error on deserialization failure
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor: send heartbeat env vars once
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: resend heartbeat env vars after reconnect
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* revert: keep env vars in every heartbeat
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <leih@nvidia.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(operator): allow last_row merge_mode when append_mode is enabled
- Update RegionOptions::validate to allow last_row merge_mode with append_mode.
- Update fill_table_options_for_create to automatically set merge_mode to last_row when append_mode is enabled for LastNonNull table type.
- Add unit tests in mito2 and operator to verify options validation and table creation.
- Add integration test for InfluxDB write with append mode hint.
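A sketch of the relaxed validation, assuming append mode previously rejected any merge mode and now permits `last_row` (enum and function names are illustrative):

```rust
#[derive(Debug, PartialEq, Clone, Copy)]
enum MergeMode {
    LastRow,
    LastNonNull,
}

fn validate(append_mode: bool, merge_mode: Option<MergeMode>) -> Result<(), String> {
    match (append_mode, merge_mode) {
        // Newly allowed: last_row merge mode together with append mode.
        (true, Some(MergeMode::LastRow)) | (true, None) | (false, _) => Ok(()),
        (true, Some(other)) => {
            Err(format!("merge_mode {other:?} is incompatible with append_mode"))
        }
    }
}
```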
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix(operator): simplify append mode options
Group `LastNonNull` auto-create options in a single append-mode branch.
Files:
- `src/operator/src/insert.rs`
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: sqlness
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix(server): describe EXPLAIN statements so bind parameters work
`do_describe_inner` only planned `Insert`/`Query`/`Delete`, so
`EXPLAIN` and `EXPLAIN ANALYZE` fell through to the non-plan branch
and had no parameter-type inference. At Bind time the Postgres
handler then reported `unsupported_parameter_type` even though the
inner query would have worked on its own.
Recurse one level into `Statement::Explain` so that an EXPLAIN
wrapping a plannable statement goes through the same describe path.
Adds a tokio-postgres integration test that exercises
`EXPLAIN`/`EXPLAIN ANALYZE` over the extended query protocol.
Fixes #8029
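The describe path can be sketched with a minimal stand-in for the statement enum (one level of recursion only; names are illustrative, not the actual sqlparser types):

```rust
enum Statement {
    Query,
    Insert,
    Delete,
    Explain(Box<Statement>),
    Other,
}

// Returns whether describe should plan the statement and thus infer
// parameter types for the extended query protocol.
fn should_plan(stmt: &Statement) -> bool {
    match stmt {
        Statement::Query | Statement::Insert | Statement::Delete => true,
        // Recurse exactly one level: EXPLAIN <plannable> follows the plan path.
        Statement::Explain(inner) => matches!(
            **inner,
            Statement::Query | Statement::Insert | Statement::Delete
        ),
        _ => false,
    }
}
```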
Signed-off-by: BootstrapperSBL <yvanwww@gmail.com>
* refactor(server): extract plannable-inner check into closure
Reduce duplication between the direct match and the EXPLAIN inner match
by factoring out is_inner_plannable. Behaviour unchanged.
Signed-off-by: BootstrapperSBL <yvanwww@gmail.com>
---------
Signed-off-by: BootstrapperSBL <yvanwww@gmail.com>
* feat: add support for decimal parameter type, remove string replacement fallback
* chore: format
* fix: add support for using unsigned bigint in postgres
* chore: format toml
* refactor: cleanup duplicated code
* fix: rescale decimal
* feat: support alter from primary_key to flat
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: alter flat to primary_key
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: change default_experimental_flat_format to true
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: compute channel size from split batch size
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: add tests for split and channel size
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: always set sst_format from manifest on region open
sanitize_region_options did not set options.sst_format when the
default (PrimaryKey) matched the manifest value, leaving it as None
after reopen. This caused the alter format change to appear lost.
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: fix tests
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: show create table after alteration
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor!: rename default_experimental_flat_format to default_flat_format
The flat format is no longer experimental. Remove "experimental" from
the config field name, doc comments, and all references.
Signed-off-by: evenyag <realevenyag@gmail.com>
* chore: fix clippy
Signed-off-by: evenyag <realevenyag@gmail.com>
---------
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: metric batch 2s PoC
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
* chore: max_concurrent_flushes
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
* chore: work channel size
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
* feat(servers): add metrics and logs for pending rows batch flush
Add the `FLUSH_ELAPSED` histogram metric to track the duration of pending
rows batch flushes in the Prometheus store protocol handler. This provides
better observability into the performance and latency of the batcher.
Also update telemetry by:
- Recording elapsed time for both successful and failed flush operations.
- Adding an informational log upon successful flush including row count and duration.
- Including elapsed time in error logs when a flush fails.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(servers): implement columnar batching for pending rows
Refactor PendingRowsBatcher to use columnar batching for the metrics
store. Incoming RowInsertRequests are now converted to RecordBatches,
partitioned, and flushed via BulkInsert requests to datanodes.
- Enhance MultiDimPartitionRule to handle scalar boolean predicates.
- Add metrics for tracking flush failures and dropped rows.
- Update dependencies to support columnar batching in servers.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(servers): add backpressure for pending rows
Implement backpressure in PendingRowsBatcher by limiting in-flight
requests with a semaphore and making the submission wait for the flush
result. This ensures Prometheus write requests are throttled and only
return once the data has been successfully flushed to datanodes.
- Add max_inflight_requests to PromStoreOptions.
- Use oneshot channels to notify submitters of flush completion.
- Limit concurrent requests using a new inflight_semaphore.
- Update PendingRowsBatcher::submit to wait for the flush outcome.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat: add stage-level metrics for bulk ingestion
Introduce histograms to track the elapsed time of various stages in the
metric engine bulk insert path and the server's pending rows batcher.
This provides better observability into the performance bottlenecks
of the ingestion pipeline.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: remove bulk insert fallback and unused import
- `src/metric-engine/src/engine/bulk_insert.rs`: remove the fallback that converted record batches to rows when bulk inserts were unsupported, along with its helper functions and unused imports.
- `src/operator/src/insert.rs`: remove an unused import (`common_time::TimeToLive::Instant`).
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(servers): columnar Prom remote write
Optimize the Prometheus remote write path by allowing direct conversion
from decoded Prometheus samples to Arrow RecordBatches. This bypasses
intermediate row-based representations when `PendingRowsBatcher` is
active and no pipeline is used, improving ingestion efficiency.
- Implement `as_record_batch_groups` in `TablesBuilder` and `PromWriteRequest`.
- Add `submit_prom_record_batch_groups` to `PendingRowsBatcher`.
- Introduce `DecodedPromWriteRequest` in `prom_store`.
- Implement row-to-RecordBatch conversion logic in `prom_row_builder`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* Revert "feat(servers): columnar Prom remote write"
This reverts commit efbb63c12a3e7fcec03858ea0351efd94fec8242.
* refactor(servers): improve row to RecordBatch conversion
- Use `snafu::ensure` for row validation in `rows_to_record_batch`.
- Add explicit type hint for `MutableVector` to improve clarity.
- Reorganize and clean up imports in `pending_rows_batcher.rs`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* perf(servers): use arrow builders for row conversion
This commit optimizes the conversion from `api::v1::Rows` to `RecordBatch`
by using Arrow builders directly. This avoids the overhead of
`MutableVector` and `common_recordbatch`, leading to better performance
in the `pending_rows_batcher`.
Additionally, the `#[allow(dead_code)]` attribute is removed from
`modify_batch_sparse` in the metric engine as it is now utilized.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* perf(metric-engine): optimize batch modification
Optimize `modify_batch_sparse` by reusing buffers, using Arrow
builders, and employing fast-path encoding methods. This reduces
allocations and avoids redundant downcasting and serializer overhead.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat: add PENDING_ROWS_BATCH_SYNC env var for batch sync control
`pending_rows_batcher.rs`: introduce the `PENDING_ROWS_BATCH_SYNC` environment
variable to control batch synchronization. When set to true, submission waits
for the flush result; otherwise it returns immediately with the total row count.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* wip
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: update and fix clippy
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: failing test
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor: remove unused code and simplify error handling
- `src/error.rs`: remove the unused `BatcherQueueFull` error variant and its associated logic.
- `src/http/prom_store.rs`: remove the `try_decompress` function; `decode_remote_read_request` now calls `snappy_decompress` directly.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: parse PENDING_ROWS_BATCH_SYNC once
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: revert unrelated changes
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor: add pre_write checks for Prometheus remote write
- `prom_store.rs`: introduce a `pre_write` method on `PromStoreProtocolHandler` for pre-write checks and call it from `write`.
- `server.rs`: create the `PendingRowsBatcher` only when the `with_metric_engine` flag is set.
- `http/prom_store.rs`: run `pre_write` checks before submitting requests to `PendingRowsBatcher`.
- `query_handler.rs`: add `pre_write` to the `PromStoreProtocolHandler` trait.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: correct flush label typo and deduplicate array building
- `pending_rows_batcher.rs`: fix the label value typo `"flush_wn ite_region"` -> `"flush_write_region"`.
- `pending_rows_batcher.rs`: add a `build_array!` macro to build `ArrayRef` values for different data types, reducing duplication.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* format toml
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: disable pending rows batcher on zero-valued configuration
- `prom_store.rs`: set `pending_rows_flush_interval` to `Duration::ZERO` to disable automatic flushing by default.
- `pending_rows_batcher.rs`: disable the batcher when `flush_interval` is zero, or when `max_batch_rows`, `max_concurrent_flushes`, `worker_channel_capacity`, or `max_inflight_requests` is zero, preventing potential panics or deadlocks.
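The zero-value guard can be sketched as follows (struct and field names mirror the config options; the function name is illustrative):

```rust
use std::time::Duration;

struct BatcherConfig {
    flush_interval: Duration,
    max_batch_rows: usize,
    max_concurrent_flushes: usize,
    worker_channel_capacity: usize,
    max_inflight_requests: usize,
}

// The batcher is disabled whenever any setting would make it panic or
// deadlock (zero-capacity channels, zero permits, zero-interval timers).
fn batcher_enabled(c: &BatcherConfig) -> bool {
    !c.flush_interval.is_zero()
        && c.max_batch_rows > 0
        && c.max_concurrent_flushes > 0
        && c.worker_channel_capacity > 0
        && c.max_inflight_requests > 0
}
```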
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: default pending_rows_flush_interval to zero
- `src/frontend/src/service_config/prom_store.rs`: change `pending_rows_flush_interval` from `Duration::from_secs(2)` to `Duration::ZERO`.
- `tests-integration/tests/http.rs`: update the expected `pending_rows_flush_interval` config from `"2s"` to `"0s"`.
With a zero interval, the batcher is disabled by default.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat: manage pending rows batch worker lifecycle
- `metrics.rs`: add a `PENDING_WORKERS` gauge tracking active pending rows batch workers.
- `pending_rows_batcher.rs`: add worker idle-timeout logic via `WORKER_IDLE_TIMEOUT_MULTIPLIER` and the worker management functions `spawn_worker`, `remove_worker_if_same_channel`, and `should_close_worker_on_idle_timeout`, so idle workers are closed and cleaned up properly.
- tests: add unit tests for worker removal and idle-timeout logic.
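The idle-timeout decision can be sketched as follows (the multiplier value here is illustrative, not the one in the source):

```rust
use std::time::Duration;

const WORKER_IDLE_TIMEOUT_MULTIPLIER: u32 = 10;

// A worker is closed once it has been idle for `multiplier * flush_interval`.
fn should_close_worker_on_idle_timeout(idle: Duration, flush_interval: Duration) -> bool {
    idle >= flush_interval * WORKER_IDLE_TIMEOUT_MULTIPLIER
}
```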
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: clippy
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: jeremyhi <fengjiachun@gmail.com>
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
Co-authored-by: jeremyhi <fengjiachun@gmail.com>
* feat: use arrow-pg for encode_row
* refactor: remove bytea and datetime module
* feat: port more encodings to arrow-pg
* feat: implement intervalstyle
* chore: format
* chore: remove error that is no longer used
* chore: use released arrow-pg
* Apply suggestions from code review
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
---------
Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
* feat: impl vector index scan in storage
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* feat: fallback to read remote blob when blob not found
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* chore: refactor encoding and decoding and apply suggestions
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* fix: license
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* test: add apply_with_k tests
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* chore: apply suggestions
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* fix: forgot to align nulls when the vector column is not in the batch
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* test: add test for when the vector column is not in a batch while building
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
---------
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* feat: impl vector index building
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* feat: supports flat format
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* ci: add vector_index feature to test
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* chore: apply suggestions
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
* chore: apply suggestions from copilot
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>
---------
Signed-off-by: Dennis Zhuang <killme2008@gmail.com>