* feat: collect per-file metrics
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: divide build_cost into build_part_cost and build_reader_cost
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: limit the number of file metrics to display
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: use sorted iter to get sorted files
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: output metrics in descending order
Signed-off-by: evenyag <realevenyag@gmail.com>
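The sort-descending-and-cap behavior from the commits above might look roughly like this sketch; `FileMetrics`, its fields, and `MAX_FILES_TO_DISPLAY` are assumed names for illustration, not the actual ones in the reader code.

```rust
use std::time::Duration;

const MAX_FILES_TO_DISPLAY: usize = 10;

struct FileMetrics {
    file_id: String,
    build_part_cost: Duration,
    build_reader_cost: Duration,
}

fn top_file_metrics(mut metrics: Vec<FileMetrics>) -> Vec<FileMetrics> {
    // Sort by total cost, descending, so the most expensive files come first.
    metrics.sort_by(|a, b| {
        let total = |m: &FileMetrics| m.build_part_cost + m.build_reader_cost;
        total(b).cmp(&total(a))
    });
    // Cap how many per-file entries are displayed.
    metrics.truncate(MAX_FILES_TO_DISPLAY);
    metrics
}
```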
---------
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat/allow-one-to-many-pipeline:
### Enhance Pipeline Processing for One-to-Many Transformations
- **Support One-to-Many Transformations**:
- Updated `processor.rs`, `etl.rs`, `vrl_processor.rs`, and `greptime.rs` to handle one-to-many transformations by allowing VRL processors to return arrays, expanding each array element into its own row.
- Introduced `transform_array_elements` and `values_to_rows` functions to facilitate this transformation (sketched below).
- **Error Handling Enhancements**:
- Added new error types in `error.rs` to handle cases where array elements are not objects and for transformation failures.
- **Testing Enhancements**:
- Added tests in `pipeline.rs` to verify one-to-many transformations, single object processing, and error handling for non-object array elements.
- **Context Management**:
- Modified `ctx_req.rs` to clone `ContextOpt` when adding rows, ensuring correct context management during transformations.
- **Server Pipeline Adjustments**:
- Updated `pipeline.rs` in `servers` to handle transformed outputs with one-to-many row expansions, ensuring correct row padding and request formation.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
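A minimal sketch of the one-to-many expansion described in this commit, using `serde_json::Value` as a stand-in for the pipeline's internal value type; the real `values_to_rows` in `greptime.rs` produces GreptimeDB rows rather than JSON values.

```rust
use serde_json::Value;

fn values_to_rows(value: Value) -> Result<Vec<Value>, String> {
    match value {
        // A single object still maps to exactly one row (backward compatible).
        obj @ Value::Object(_) => Ok(vec![obj]),
        // An array expands into one row per element; each element must be an object.
        Value::Array(elements) => elements
            .into_iter()
            .map(|elem| match elem {
                obj @ Value::Object(_) => Ok(obj),
                other => Err(format!("array element must be an object, got: {other}")),
            })
            .collect(),
        other => Err(format!("unsupported pipeline output: {other}")),
    }
}
```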
* feat/allow-one-to-many-pipeline:
Add one-to-many VRL pipeline test in `http.rs`
- Introduced `test_pipeline_one_to_many_vrl` to verify VRL processor's ability to expand a single input row into multiple output rows.
- Updated `http_tests!` macro to include the new test.
- Implemented test scenarios for single and multiple input rows, ensuring correct data transformation and row count validation.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/allow-one-to-many-pipeline:
### Add Tests for VRL Pipeline Transformations
- **File:** `src/pipeline/src/etl.rs`
- Added tests for one-to-many VRL pipeline expansion to ensure multiple output rows from a single input.
- Introduced tests to verify backward compatibility for single object output.
- Implemented tests to confirm zero rows are produced from empty arrays.
- Added validation tests to ensure array elements must be objects.
- Developed tests for one-to-many transformations with table suffix hints from VRL.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/allow-one-to-many-pipeline:
### Enhance Pipeline Transformation with Per-Row Table Suffixes
- **`src/pipeline/src/etl.rs`**: Updated `TransformedOutput` to include per-row table suffixes, allowing for more flexible routing of transformed data. Modified `PipelineExecOutput` and related methods to handle the new structure.
- **`src/pipeline/src/etl/transform/transformer/greptime.rs`**: Enhanced `values_to_rows` to support per-row table suffix extraction and application.
- **`src/pipeline/tests/common.rs`** and **`src/pipeline/tests/pipeline.rs`**: Adjusted tests to validate the new per-row table suffix functionality, ensuring backward compatibility and correct behavior in one-to-many transformations.
- **`src/servers/src/pipeline.rs`**: Modified `run_custom_pipeline` to process transformed outputs with per-row table suffixes, grouping rows by `(opt, table_name)` for insertion (sketched below).
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
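A hedged sketch of the `(opt, table_name)` grouping; `ContextOpt`, `Row`, and the suffix-to-table rule here are illustrative stand-ins for the pipeline types.

```rust
use std::collections::HashMap;

#[derive(Clone, PartialEq, Eq, Hash)]
struct ContextOpt(String); // stand-in for per-row options such as TTL

struct Row(Vec<String>);

fn group_rows(
    rows: Vec<(Row, Option<String>)>, // each row carries an optional table suffix
    base_table: &str,
    opt: ContextOpt,
) -> HashMap<(ContextOpt, String), Vec<Row>> {
    let mut grouped: HashMap<(ContextOpt, String), Vec<Row>> = HashMap::new();
    for (row, suffix) in rows {
        // A per-row suffix routes the row to a derived table name.
        let table = match &suffix {
            Some(s) => format!("{base_table}{s}"),
            None => base_table.to_string(),
        };
        grouped.entry((opt.clone(), table)).or_default().push(row);
    }
    grouped
}
```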
* feat/allow-one-to-many-pipeline:
### Update VRL Processor Type Checks
- **File:** `vrl_processor.rs`
- **Changes:** Updated type checking logic to use `contains_object()` and `contains_array()` methods instead of `is_object()` and `is_array()`. This change ensures compatibility with VRL type inference that may return multiple possible types.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/allow-one-to-many-pipeline:
- **Enhance Error Handling**: Added new error types `ArrayElementMustBeObjectSnafu` and `TransformArrayElementSnafu` to improve error handling in `etl.rs` and `greptime.rs`.
- **Refactor Error Usage**: Moved the error-related `use` declarations out of the `transform_array_elements` and `values_to_rows` function bodies to the top of the file for better organization in `etl.rs` and `greptime.rs`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/allow-one-to-many-pipeline:
### Update `greptime.rs` to Enhance Error Handling
- **Error Handling**: Modified the `values_to_rows` function to handle invalid array elements based on the `skip_error` parameter. If `skip_error` is true, invalid elements are skipped; otherwise, an error is returned (sketched below).
- **Testing**: Added unit tests in `greptime.rs` to verify the behavior of `values_to_rows` with different `skip_error` settings, ensuring correct processing of valid objects and appropriate error handling for invalid elements.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
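A sketch of the `skip_error` switch, again with `serde_json::Value` standing in for pipeline values; the actual function produces rows rather than JSON objects.

```rust
use serde_json::Value;

fn collect_objects(elements: Vec<Value>, skip_error: bool) -> Result<Vec<Value>, String> {
    let mut rows = Vec::with_capacity(elements.len());
    for elem in elements {
        match elem {
            obj @ Value::Object(_) => rows.push(obj),
            // With skip_error enabled, invalid elements are silently dropped.
            _ if skip_error => continue,
            other => return Err(format!("array element must be an object, got: {other}")),
        }
    }
    Ok(rows)
}
```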
* feat/allow-one-to-many-pipeline:
### Commit Summary
- **Enhance `TransformedOutput` Structure**: Refactored `TransformedOutput` to use a `HashMap` for grouping rows by `ContextOpt`, allowing for per-row configuration options. Updated methods in `PipelineExecOutput` to support the new structure (`src/pipeline/src/etl.rs`).
- **Add New Transformation Method**: Introduced `transform_array_elements_to_hashmap` to handle array inputs with per-row `ContextOpt` in `HashMap` format (`src/pipeline/src/etl.rs`).
- **Update Pipeline Execution**: Modified `run_custom_pipeline` to process `TransformedOutput` using the new `HashMap` structure, ensuring rows are grouped by `ContextOpt` and table name (`src/servers/src/pipeline.rs`).
- **Add Tests for New Structure**: Implemented tests to verify the functionality of the new `HashMap` structure in `TransformedOutput`, including scenarios for one-to-many mapping, single object input, and empty arrays (`src/pipeline/src/etl.rs`).
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/allow-one-to-many-pipeline:
### Refactor `values_to_rows` to Return `HashMap` Grouped by `ContextOpt`
- **`etl.rs`**:
- Updated `values_to_rows` to return a `HashMap` grouped by `ContextOpt` instead of a vector.
- Adjusted logic to handle single object and array inputs, ensuring rows are grouped by their `ContextOpt`.
- Modified functions to extract rows from default `ContextOpt` and apply table suffixes accordingly.
- **`greptime.rs`**:
- Enhanced `values_to_rows` to handle errors gracefully with `skip_error` logic.
- Added logic to group rows by `ContextOpt` for array inputs.
- **Tests**:
- Updated existing tests to validate the new `HashMap` return structure.
- Added a new test to verify correct grouping of rows by per-element `ContextOpt`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/allow-one-to-many-pipeline:
### Refactor and Enhance Error Handling in ETL Pipeline
- **Refactored Functionality**:
- Replaced `transform_array_elements` with `transform_array_elements_by_ctx` in `etl.rs` to streamline transformation logic and improve error handling.
- Updated `values_to_rows` in `greptime.rs` to use `or_default` for cleaner code.
- **Enhanced Error Handling**:
- Introduced `unwrap_or_continue_if_err` macro in `etl.rs` to allow skipping errors based on pipeline context, improving robustness in data processing (sketched below).
These changes enhance the maintainability and error resilience of the ETL pipeline.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
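A sketch of what an `unwrap_or_continue_if_err`-style macro could look like; the real macro in `etl.rs` derives the skip-error setting from the pipeline context, so the explicit `$skip_error` argument here is an assumption for illustration.

```rust
macro_rules! unwrap_or_continue_if_err {
    ($result:expr, $skip_error:expr) => {
        match $result {
            Ok(v) => v,
            // Skip this element and keep processing the rest.
            Err(_) if $skip_error => continue,
            // Otherwise fail the whole run.
            Err(e) => return Err(e),
        }
    };
}

fn process(items: Vec<Result<i32, String>>, skip_error: bool) -> Result<Vec<i32>, String> {
    let mut out = Vec::new();
    for item in items {
        let v = unwrap_or_continue_if_err!(item, skip_error);
        out.push(v);
    }
    Ok(out)
}
```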
* feat/allow-one-to-many-pipeline:
### Update `Row` Handling in ETL Pipeline
- **Refactor `Row` Type**: Introduced `RowWithTableSuffix` type alias to simplify handling of rows with optional table suffixes across the ETL pipeline (see the sketch below).
- **Modify Function Signatures**: Updated function signatures in `etl.rs` and `greptime.rs` to use `RowWithTableSuffix` for better clarity and consistency.
- **Enhance Test Coverage**: Adjusted test logic in `greptime.rs` to align with the new `RowWithTableSuffix` type, ensuring correct grouping and processing of rows by TTL.
Files affected: `etl.rs`, `greptime.rs`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
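The alias itself is small; a plausible shape, noting that the actual `Row` type is defined elsewhere in the crate and this stand-in is illustrative only.

```rust
// Illustrative stand-in for the pipeline's Row type.
struct Row(Vec<String>);

/// A row paired with its optional per-row table suffix.
type RowWithTableSuffix = (Row, Option<String>);
```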
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: regression with shortcut statements on postgres extended query
* chore: typo fix
* feat: add support for more parameter types
* chore: remove dbg print
* refactor/bulk-insert-service:
refactor: decode FlightData early in put_record_batch pipeline
- Move FlightDecoder usage from Inserter up to PutRecordBatchRequestStream,
passing decoded RecordBatch and schema bytes instead of raw FlightData.
- Eliminate redundant per-request decoding/encoding in Inserter; encode
once and reuse for all region requests.
- Streamline GrpcQueryHandler trait and implementations to accept
PutRecordBatchRequest containing pre-decoded data.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/bulk-insert-service:
feat: stream-based bulk insert with per-batch responses
- Introduce handle_put_record_batch_stream() to process Flight DoPut streams (sketched below)
- Resolve table & permissions once, yield (request_id, AffectedRows) per batch
- Replace loop-over-request with async-stream in frontend & server
- Make PutRecordBatchRequestStream public for cross-crate usage
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
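A simplified sketch of the per-batch streaming shape with `async-stream`; `Batch` and `insert_batch` are illustrative stand-ins for the frontend types, and the real code resolves the table and checks permissions once before entering the loop.

```rust
use futures::Stream;

struct Batch {
    request_id: i64,
}

async fn insert_batch(_batch: &Batch) -> Result<u64, String> {
    Ok(1) // pretend one row was affected
}

fn handle_put_record_batch_stream(
    batches: Vec<Batch>,
) -> impl Stream<Item = Result<(i64, u64), String>> {
    async_stream::try_stream! {
        for batch in batches {
            let affected = insert_batch(&batch).await?;
            // One (request_id, AffectedRows) pair per incoming batch.
            yield (batch.request_id, affected);
        }
    }
}
```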
* refactor/bulk-insert-service:
fix: propagate request_id with errors in bulk insert stream
Changes the bulk-insert stream item type from
Result<(i64, AffectedRows), E> to (i64, Result<AffectedRows, E>)
so every emitted tuple carries the request_id even on failure,
letting callers correlate errors with the originating request.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/bulk-insert-service:
refactor: unify DoPut response stream to return DoPutResponse
Replace the tuple (i64, Result<AffectedRows>) with Result<DoPutResponse>
throughout the gRPC bulk-insert path so the handler, adapter and server
all speak the same type.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/bulk-insert-service:
feat: add elapsed_secs to DoPutResponse for bulk-insert timing
- DoPutResponse now carries elapsed_secs field (see the sketch below)
- Frontend measures and attaches insert duration
- Server observes GRPC_BULK_INSERT_ELAPSED metric from response
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
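A sketch of how the duration might be measured and attached; the field layout of `DoPutResponse` here is illustrative, not the actual definition.

```rust
use std::time::Instant;

struct DoPutResponse {
    request_id: i64,
    affected_rows: u64,
    /// Time the frontend spent on this batch, in seconds.
    elapsed_secs: f64,
}

fn timed_response(request_id: i64, affected_rows: u64, start: Instant) -> DoPutResponse {
    DoPutResponse {
        request_id,
        affected_rows,
        // The server can feed this value straight into its latency metric.
        elapsed_secs: start.elapsed().as_secs_f64(),
    }
}
```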
* refactor/bulk-insert-service:
refactor: unify Bytes import in flight module
- Replace `bytes::Bytes` with `Bytes` alias for consistency
- Remove redundant `ProstBytes` alias
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/bulk-insert-service:
fix: terminate gRPC stream on error and optimize FlightData handling
- Stop retrying on stream errors in gRPC handler
- Replace Vec1 indexing with into_iter().next() for FlightData
- Remove redundant clones in bulk_insert and flight modules
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/bulk-insert-service:
Improve permission check placement in `grpc.rs`
- Moved the permission check for `BulkInsert` to occur before resolving the table reference in `GrpcQueryHandler` implementation.
- Ensures permission validation is performed earlier in the process, potentially avoiding unnecessary operations if permission is denied.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/bulk-insert-service:
**Refactor Bulk Insert Handling in gRPC**
- **`grpc.rs`**:
- Switched from `async_stream::stream` to `async_stream::try_stream` for error handling.
- Removed `body_size` parameter and added `flight_data` to `handle_bulk_insert`.
- Simplified error handling and permission checks in `GrpcQueryHandler`.
- **`bulk_insert.rs`**:
- Added `raw_flight_data` parameter to `handle_bulk_insert`.
- Calculated `body_size` from `raw_flight_data` and removed redundant encoding logic.
- **`flight.rs`**:
- Replaced `body_size` with `flight_data` in `PutRecordBatchRequest`.
- Updated memory usage calculation to include `flight_data` components.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/bulk-insert-service:
perf(bulk_insert): encode record batch once per datanode
Move FlightData encoding outside the per-region loop so the same
encoded bytes are reused when mask.select_all(), eliminating redundant
serialisation work.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
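The hoisting pattern from the commit above, reduced to a hedged sketch with stand-in types and names.

```rust
struct RecordBatchStub(Vec<u8>); // stand-in for the actual record batch

// Stand-in for FlightData encoding of a record batch.
fn encode(batch: &RecordBatchStub) -> Vec<u8> {
    batch.0.clone()
}

// Encode once, outside the per-region loop, and reuse the bytes for every
// region; previously each iteration re-encoded the same batch.
fn broadcast(batch: &RecordBatchStub, regions: &[u32]) -> Vec<(u32, Vec<u8>)> {
    let encoded = encode(batch);
    regions.iter().map(|r| (*r, encoded.clone())).collect()
}
```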
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* refactor/allow-custom-flight-service:
### Add Custom Flight Handler Support
- **`server.rs`**:
- Introduced a new field `flight_handler` in the `Services` struct to allow optional custom flight handler configuration.
- Added a method `with_flight_handler` to set the custom flight handler (sketched below).
- Modified `build_grpc_server` to use the custom flight handler if provided, defaulting to `GreptimeRequestHandler` otherwise.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
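A reduced sketch of the optional-handler pattern; `Services` here carries only the field under discussion and `FlightHandler` is a stand-in trait.

```rust
trait FlightHandler: Send + Sync {}

#[derive(Default)]
struct Services {
    // `None` means "fall back to the default GreptimeRequestHandler".
    flight_handler: Option<Box<dyn FlightHandler>>,
}

impl Services {
    fn with_flight_handler(mut self, handler: Box<dyn FlightHandler>) -> Self {
        self.flight_handler = Some(handler);
        self
    }
}
```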
* refactor/allow-custom-flight-service:
### Make structs and enums public in `flight.rs`
- Changed visibility of `PutRecordBatchRequest` and `PutRecordBatchRequestStream` structs to public.
- Made `PutRecordBatchRequestStreamState` enum public.
- Updated fields within `PutRecordBatchRequest` and `PutRecordBatchRequestStream` to be public.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/change-tsid-gen:
perf(metric-engine): replace mur3 with fxhash for faster TSID generation
- Switches from mur3::Hasher128 to fxhash::FxHasher for TSID hashing
- Pre-computes label-name hash when no nulls are present, avoiding redundant work (see the sketch below)
- Adds fast-path for rows without nulls; falls back to slow path otherwise
- Updates Cargo.toml and lockfile to reflect dependency change
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
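A hedged sketch of the fast path: the label-name hash is computed once and reused as a seed, so only values are hashed per row. The exact combining scheme here is an assumption.

```rust
use std::hash::Hasher;

use fxhash::FxHasher;

fn tsid_fast_path(name_hash_seed: u64, label_values: &[&str]) -> u64 {
    let mut hasher = FxHasher::default();
    // Reuse the pre-computed hash of the label names.
    hasher.write_u64(name_hash_seed);
    // Only the label values need hashing per row.
    for value in label_values {
        hasher.write(value.as_bytes());
    }
    hasher.finish()
}
```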
* feat/change-tsid-gen:
fix: only check primary-key labels for null when re-using cached hash
- Rename has_null() → has_null_labels() and restrict the check to the
primary-key columns so that non-label NULLs do not force a full
TSID re-computation.
- Update expected hashes in tests to match the new logic.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/change-tsid-gen:
test: add comprehensive TSID generation tests for label ordering and null handling
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/change-tsid-gen:
bench: add criterion benchmark for TSID generator
- Compare original mur3 vs current fxhash fast/slow paths (a minimal skeleton follows this commit)
- Test 2, 5, 10 label sets plus null-value slow path
- Add mur3 & criterion dev-deps; register bench target
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
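A minimal criterion skeleton in the same spirit; `fxhash_tsid` is a placeholder for the engine's actual TSID generator, not the benchmarked code itself.

```rust
use std::hash::Hasher;

use criterion::{criterion_group, criterion_main, Criterion};

// Placeholder hashing function standing in for the TSID generator.
fn fxhash_tsid(labels: &[(&str, &str)]) -> u64 {
    let mut h = fxhash::FxHasher::default();
    for (k, v) in labels {
        h.write(k.as_bytes());
        h.write(v.as_bytes());
    }
    h.finish()
}

fn bench_tsid(c: &mut Criterion) {
    let labels = [("host", "h1"), ("region", "us-west"), ("pod", "p-42")];
    c.bench_function("fxhash_tsid_3_labels", |b| b.iter(|| fxhash_tsid(&labels)));
}

criterion_group!(benches, bench_tsid);
criterion_main!(benches);
```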
* feat/change-tsid-gen:
test: stabilize metric-engine tests by fixing non-deterministic row order
- Add ORDER BY to SELECTs in TTL tests to ensure consistent output
- Update expected __tsid values after hash function change
- Swap expected OTLP metric rows to match new ordering
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/change-tsid-gen:
refactor: simplify Default impls and remove redundant code
- Replace manual Default for TsidGenerator with derive
- Remove unnecessary into_iter() call
- Simplify Option::unwrap_or_else to unwrap_or
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>