Allow flush edits with equal entry IDs when the flushed sequence advances, so that a close-time flush after truncation still succeeds for skip-WAL regions while stale pre-truncate flushes are rejected. Add a regression test for the create -> truncate -> write -> close timing.
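The acceptance rule described above can be sketched as a pure function; a minimal std-only sketch, where the function and parameter names are illustrative rather than the actual mito2 API:

```rust
/// Illustrative acceptance rule: apply a flush edit when its entry id
/// advances, or when the entry id is equal but the flushed sequence still
/// moves forward (the close-time flush after truncate case for skip-WAL
/// regions); anything else is a stale pre-truncate flush.
fn should_apply_flush_edit(
    last_entry_id: u64,
    last_flushed_sequence: u64,
    edit_entry_id: u64,
    edit_flushed_sequence: u64,
) -> bool {
    if edit_entry_id > last_entry_id {
        return true;
    }
    edit_entry_id == last_entry_id && edit_flushed_sequence > last_flushed_sequence
}

fn main() {
    // Close-time flush after truncate: same entry id, advanced sequence.
    assert!(should_apply_flush_edit(10, 100, 10, 150));
    // Stale pre-truncate flush: same entry id, sequence did not advance.
    assert!(!should_apply_flush_edit(10, 100, 10, 100));
    // Normal case: entry id advances.
    assert!(should_apply_flush_edit(10, 100, 11, 50));
    println!("ok");
}
```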
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/flat-for-time-series:
### Commit Message
Enhance `TimeSeriesMemtable` with Record Batch Support
- **`time_series.rs`**:
- Introduced `BatchToRecordBatchContext` to facilitate conversion of batch iterators to record batch iterators.
- Added `build_record_batch` method in `TimeSeriesIterBuilder` to support record batch creation.
- Implemented multiple test cases to validate record batch creation, covering projections, deduplication, sequence filtering, and data correctness.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/flat-for-time-series:
Refactor `TimeSeriesMemtable` and `TimeSeriesIterBuilder`
- Renamed `adapter_context` to `batch_to_record_batch` in `TimeSeriesMemtable` for clarity.
- Simplified `MemtableRangeContext` initialization by removing the `batch_to_record_batch` parameter.
- Added `is_record_batch` method to `TimeSeriesIterBuilder` to indicate record batch status.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/flat-for-time-series:
### Add Time Range Filtering and Predicate Group Enhancements
- **`memtable.rs`**: Updated `IterBuilder` to include `time_range` parameter in `build_record_batch` method, enhancing record batch iteration with time range filtering.
- **`time_series.rs`**: Modified `TimeSeriesIterBuilder` to use `PredicateGroup` instead of `Predicate`, and integrated `PruneTimeIterator` for time-based filtering.
- **`memtable_util.rs`**: Removed unused `Predicate` import, reflecting changes in predicate handling.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
chore: remove GrpcQueryHandler::put_record_batch; use GrpcQueryHandler::handle_put_record_batch_stream instead
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: resolve optimization issue for extended query
* fix: type cast from subquery
* chore: update error information in sqlness
* chore: switch to released pgwire
* refactor: remove optimize function completely
* chore: add more tests
* test: attempt to fix the fuzz issue
* fix: try to resolve the test issue
* perf: support group accumulators for state wrapper
* new tests and avoid clone
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
---------
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
* feat(metric-engine): support bulk inserts
Implement `RegionRequest::BulkInserts` to support efficient columnar data
ingestion in the metric engine.
Key changes:
- Implement `bulk_insert_region` to handle logical-to-physical region mapping
and dispatch writes.
- Add `batch_modifier` for `RecordBatch` transformations, specifically for
`__tsid` generation and sparse primary key encoding.
- Integrate `BulkInserts` into the `MetricEngine` request handling logic.
- Provide a row-based fallback mechanism if the underlying storage doesn't
support bulk writes.
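The `__tsid` generation mentioned above amounts to deriving a stable per-row identifier from the row's tag columns. A std-only illustrative sketch follows; the real metric engine uses its own hashing scheme, so the function and hasher here only show the shape of the idea:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

/// Illustrative `__tsid` derivation: hash the ordered (tag name, tag value)
/// pairs of a row into a u64 that is stable for a given tag set.
fn compute_tsid(tags: &[(&str, &str)]) -> u64 {
    let mut hasher = DefaultHasher::new();
    for (name, value) in tags {
        name.hash(&mut hasher);
        value.hash(&mut hasher);
    }
    hasher.finish()
}

fn main() {
    let a = compute_tsid(&[("host", "h1"), ("region", "us-west")]);
    let b = compute_tsid(&[("host", "h1"), ("region", "us-west")]);
    let c = compute_tsid(&[("host", "h2"), ("region", "us-west")]);
    assert_eq!(a, b); // same tag set -> same tsid
    assert_ne!(a, c); // different tags -> different tsid (with overwhelming probability)
    println!("ok");
}
```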
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
### Update `bulk_insert.rs` to Support Partition Expression Version
- **Enhancements**:
- Added support for `partition_expr_version` in `RegionBulkInsertsRequest` and `RegionPutRequest`.
- Changed `partition_expr_version` handling so the value is set dynamically from the `request` object.
Files affected:
- `src/metric-engine/src/engine/bulk_insert.rs`
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* fix: cargo lock revert
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* add doc for conversions
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* chore: simplify test
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
### Refactor `bulk_insert.rs` in `metric-engine`
- **Refactor Functionality**:
- Replaced `resolve_tag_columns` with `resolve_tag_columns_from_metadata` to streamline tag column resolution.
- Moved logic for resolving tag columns directly into `resolve_tag_columns_from_metadata`, removing the need for an external function call.
- **Enhancements**:
- Improved error handling and context provision for missing physical regions and columns.
- Optimized tag column sorting and index management within the batch processing logic.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
### Refactor `record_batch_to_rows` Function in `bulk_insert.rs`
- Simplified the `record_batch_to_rows` function by removing the `logical_metadata` parameter and directly validating column types within the function.
- Enhanced error handling for timestamp, value, and tag columns by checking their data types and providing detailed error messages.
- Replaced the use of `Helper::try_into_vector` with direct downcasting to `TimestampMillisecondArray`, `Float64Array`, and `StringArray` for improved type safety and clarity.
- Updated the construction of `api::v1::Rows` to directly handle null values and construct `api::v1::Value` objects accordingly.
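The null-handling rule above can be sketched with stand-in types; the `Value` enum here is a hypothetical stand-in for the protobuf-generated `api::v1::Value`, not the real type:

```rust
/// Minimal stand-in for the api::v1 value type used in the fallback path;
/// the real protobuf-generated type is richer.
#[derive(Debug, PartialEq)]
enum Value {
    Null,
    F64(f64),
    Str(String),
}

/// A None slot in a column becomes an explicit Null value instead of being
/// skipped, so each row stays aligned with the schema's column order.
fn to_values(field: Option<f64>, tag: Option<&str>) -> Vec<Value> {
    vec![
        field.map(Value::F64).unwrap_or(Value::Null),
        tag.map(|s| Value::Str(s.to_string())).unwrap_or(Value::Null),
    ]
}

fn main() {
    assert_eq!(to_values(Some(1.5), None), vec![Value::F64(1.5), Value::Null]);
    assert_eq!(
        to_values(None, Some("h1")),
        vec![Value::Null, Value::Str("h1".to_string())]
    );
    println!("ok");
}
```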
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
### Commit Message
Refactor `bulk_insert.rs` to optimize state access
- Moved the state read operation inside a new block to limit its scope and improve code clarity.
- Adjusted logic for processing `tag_columns` and `non_tag_indices` to work within the new block structure.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
### Refactor `compute_tsid_array` Function
- **Refactored `compute_tsid_array` function**: Modified the function signature to accept `tag_arrays` as a parameter instead of building the arrays internally. This change affects the following files:
- `src/metric-engine/src/batch_modifier.rs`
- **Updated test cases**: Adjusted test cases to accommodate the new `compute_tsid_array` function signature by passing `tag_arrays` explicitly.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* docs: add doc for bulk_insert_region
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
### Commit Message
Refactor `bulk_insert.rs` in `metric-engine`:
- Removed error handling for unsupported status codes in `write_data` method.
- Eliminated `record_batch_to_rows` function, simplifying the data insertion process.
- Streamlined the `write_data` method by removing fallback logic for unsupported operations.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
- **Optimize Primary Key Construction**: Refactored `modify_batch_sparse` in `batch_modifier.rs` to use `BinaryBuilder` for more efficient primary key construction.
- **Add Fallback for Unsupported Bulk Inserts**: Updated `bulk_insert.rs` to handle unsupported bulk inserts by converting record batches to rows and using `RegionPutRequest`.
- **Implement Record Batch to Rows Conversion**: Added `record_batch_to_rows` function in `bulk_insert.rs` to convert `RecordBatch` to `api::v1::Rows` for fallback operations.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
Add test for handling null values in `record_batch_to_rows`
- Added a new test `test_record_batch_to_rows_with_null_values` in `bulk_insert.rs` to verify the handling of null values in the `record_batch_to_rows` function.
- The test checks the conversion of a `RecordBatch` with null values in various fields to ensure correct row creation and schema handling.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat/metric-engine-bulk-insert:
Add fallback path for unsupported status and improve error context handling
- **`bulk_insert.rs`**:
- Added a fallback path for `PartitionTreeMemtable` in case of an unsupported status code.
- Enhanced error handling by using `with_context` for better error messages when timestamp and value columns are not found in `RecordBatch`.
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
---------
Signed-off-by: Lei, HUANG <mrsatangel@gmail.com>
* feat(http): improve error logging with client IP
- Add logging to ErrorResponse::from_error_message()
- Add middleware to log HTTP errors with client IP
Closes #7328
Signed-off-by: maximk777 <maximkirienkov777@gmail.com>
* fix(http): address review comments for error logging
Restore rich Debug logging in from_error(), add URI/method/matched path
to client IP middleware, and only log when client address is available.
Signed-off-by: evenyag <realevenyag@gmail.com>
---------
Signed-off-by: maximk777 <maximkirienkov777@gmail.com>
Signed-off-by: evenyag <realevenyag@gmail.com>
Co-authored-by: evenyag <realevenyag@gmail.com>
* feat: support write flat as primary key format
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat: migrate flush to always use FlatSource
Add FormatType propagation in SstWriteRequest and use it to choose
Flat vs PrimaryKey write paths (write_all_flat vs
write_all_flat_as_primary_key) in AccessLayer and WriteCache. Make
compactor and flush derive the sst_write_format from region options or
engine config. Simplify flush logic and remove the old memtable_source
helper. Update tests to set default sst_write_format.
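The dispatch described above amounts to routing on a request-level format enum; a minimal sketch, where the enum and function are illustrative stand-ins (though `write_all_flat` and `write_all_flat_as_primary_key` are the method names mentioned above):

```rust
/// Illustrative stand-in for the format type carried by SstWriteRequest.
#[derive(Debug)]
enum FormatType {
    Flat,
    PrimaryKey,
}

/// Route a write to the appropriate path based on the requested SST format.
/// Returns the name of the method that would be invoked, for illustration.
fn choose_write_path(format: FormatType) -> &'static str {
    match format {
        FormatType::Flat => "write_all_flat",
        FormatType::PrimaryKey => "write_all_flat_as_primary_key",
    }
}

fn main() {
    assert_eq!(choose_write_path(FormatType::Flat), "write_all_flat");
    assert_eq!(
        choose_write_path(FormatType::PrimaryKey),
        "write_all_flat_as_primary_key"
    );
    println!("ok");
}
```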
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor: compaction use flat source
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor: read parquet sequentially as flat batches
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor: remove new_batch_with_binary in favor of new_record_batch_with_binary
Replace PrimaryKeyWriteFormat with FlatWriteFormat in test_read_large_binary
test and use new_record_batch_with_binary directly, removing the now-unused
new_batch_with_binary function and its BinaryArray import.
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: add tests for PrimaryKeyWriteFormat::convert_flat_batch
Signed-off-by: evenyag <realevenyag@gmail.com>
* refactor: remove Either from SstWriteRequest
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: handle index build mode
Signed-off-by: evenyag <realevenyag@gmail.com>
* fix: consider sparse encoding and last non null in flush
Signed-off-by: evenyag <realevenyag@gmail.com>
* test: add unit tests for field_column_start edge cases
Signed-off-by: evenyag <realevenyag@gmail.com>
---------
Signed-off-by: evenyag <realevenyag@gmail.com>
* feat(procedure): detect potential deadlock when parent/child share lock keys
Add a deadlock detection mechanism in submit_subprocedure() to warn
when a child procedure's lock_key overlaps with its parent's lock_key.
When this happens, the parent holds the lock while waiting for the child
to complete (at child_notify.notified().await), but the child blocks
forever trying to acquire the same lock. This is a classic Hold-and-Wait
deadlock.
The detection:
- Emits a warn! log in all builds (visible in production)
- Triggers debug_assert!(false) in debug/test builds for early CI detection
This partially addresses the TODO at lines 121-122 and is a follow-up
to the discussion in https://github.com/GreptimeTeam/greptimedb/issues/7692
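The hold-and-wait check described above reduces to a set intersection over lock keys; a std-only sketch, where the function and key names are illustrative rather than the actual common-procedure API:

```rust
use std::collections::HashSet;

/// Report the lock keys a child procedure shares with its parent. Any
/// overlap means the parent would hold a lock while waiting on a child
/// that blocks trying to acquire the same lock.
fn overlapping_lock_keys<'a>(
    parent_keys: &'a [String],
    child_keys: &'a [String],
) -> Vec<&'a String> {
    let parent: HashSet<&String> = parent_keys.iter().collect();
    child_keys.iter().filter(|k| parent.contains(k)).collect()
}

fn main() {
    let parent = vec!["catalog.schema.table_a".to_string()];
    let child = vec![
        "catalog.schema.table_a".to_string(), // overlaps: potential deadlock
        "catalog.schema.table_b".to_string(),
    ];
    let overlap = overlapping_lock_keys(&parent, &child);
    assert_eq!(overlap.len(), 1);
    if !overlap.is_empty() {
        // In the real code this is a warn! log, plus debug_assert! in
        // debug/test builds for early CI detection.
        eprintln!("potential hold-and-wait deadlock on {:?}", overlap);
    }
    println!("ok");
}
```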
Signed-off-by: YZL0v3ZZ <2055877225@qq.com>
* style: fix trailing whitespace
Signed-off-by: YZL0v3ZZ <2055877225@qq.com>
* refactor(procedure): extract deadlock detection into a testable pure function
Signed-off-by: YZL0v3ZZ <2055877225@qq.com>
* fix(procedure): preserve lock mode when detecting parent/child deadlock
Signed-off-by: YZL0v3ZZ <2055877225@qq.com>
* re-run ci check
Signed-off-by: YZL0v3ZZ <2055877225@qq.com>
---------
Signed-off-by: YZL0v3ZZ <2055877225@qq.com>