* feat: switch partition tree to bulk

* chore: keep partition tree memtable for migration test

  Restore PartitionTreeMemtable construction when memtable.type=partition_tree is explicit, and move the sparse-encoding bulk override into the default (no explicit memtable.type) arm so phase 2's memtable.type=bulk wins on reopen. Rewrite test_reopen_time_series_sparse_memtable_with_bulk to use a metric-engine-shaped schema and sparse-encoded rows with WriteHint::Sparse, so the test actually exercises a PartitionTreeMemtable in phase 1 and verifies WAL replay into the new BulkMemtable on reopen without flushing.

* chore: drop partition tree memtable from runtime

  Re-apply the unconditional sparse-encoding override in `MemtableBuilderProvider::builder_for_options` and route the `MemtableOptions::PartitionTree` arm to `BulkMemtable` with a deprecation warning. After this change, `PartitionTreeMemtableBuilder` is no longer reachable from the engine runtime; benchmarks still reference the type. Remove `test_reopen_time_series_sparse_memtable_with_bulk` and the `put_sparse_rows` helper added in the previous commit; that test only existed to validate the PartitionTree -> Bulk reopen migration and is unnecessary now that the override is in place.

* refactor(mito2): move timestamp_array_to_i64_slice into read module

  Relocate the timestamp_array_to_i64_slice helper from memtable/partition_tree/data.rs to the read module so that the read path no longer depends on the partition_tree internals. All call sites (both inside and outside the partition_tree module) now import from crate::read.

* refactor(mito2): use TimeSeriesMemtableBuilder in time_partition tests

  The time_partition tests use the memtable builder purely as a generic backend for the TimePartitions write/scan paths; nothing in them is specific to the partition-tree memtable. Switch the seven affected tests to TimeSeriesMemtableBuilder so the tests no longer depend on PartitionTreeMemtableBuilder.

* chore(mito2): delete PartitionTreeMemtable implementation

  The runtime already falls back to BulkMemtable for the PartitionTree variant. Drop the now-unreachable implementation, its metrics, the partition_tree benchmarks, the metric-engine Unsupported fallback in bulk_insert.rs, and the test helpers that only existed for the deleted module. MemtableOptions::PartitionTree, its parsing, the runtime fallback, the store-api MEMTABLE_PARTITION_TREE_* constants, and the SQL fixtures remain so existing region options keep round-tripping.

* refactor(mito-codec): drop skip_partition_column parameter

  PartitionTreeMemtable was the only caller passing skip_partition_column=true; every other caller passes false. Now that the partition_tree module is gone, the parameter is uniformly false and the guard branch is dead. Drop the parameter from the trait method and both impls, remove the guard and the is_partition_column helper, and update the four remaining call sites in mito2 plus the bench.

* chore(mito2): remove unused MemtableConfig enum

* chore: fmt code

* refactor: remove unused variant

* test: update test_config_api

* fix: remove unused memtable test helpers

* chore: address review comment

* fix: support bulk memtable options

* fix: sanitize config

* feat: remove partition tree options from region options

  Move primary_key_encoding to the top level.

* test: make ssts test datetime replaced text stable

* test: update sqlness result

* chore: validate_enum_options consider bulk memtable

* refactor: pass region id when parsing region options

  Replace the `TryFrom<&HashMap>` impl for `RegionOptions` with `try_from_options(region_id, options_map)` so the legacy partition_tree fallback can log the affected region. The fallback now also overrides the SST format to flat in addition to clearing the memtable type.

* fix: align sst_format with bulk memtable on parse and open

Signed-off-by: evenyag <realevenyag@gmail.com>
Sqlness Test
Sqlness manual
Case file
Sqlness has two types of files:
- .sql: test input, SQL only
- .result: expected test output, SQL and its results
.result is the output (execution result) file. If you see a .result file has changed, it means this test
got a different result, which indicates a failure. You should check the change logs to solve the problem.
You only need to write the test SQL in the .sql file, then run the test.
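Because every .sql case must have a .result counterpart, a small shell helper can spot cases whose expected output is missing. This is a hypothetical sketch, not part of sqlness; the function name and paths are made up for illustration:

```shell
# Hypothetical helper: list .sql cases that have no matching .result file.
# Takes the cases directory as its only argument.
find_missing_results() {
  find "$1" -name '*.sql' | while read -r sql; do
    # Replace the .sql suffix with .result to get the expected-output path.
    result="${sql%.sql}.result"
    [ -f "$result" ] || echo "missing result for: $sql"
  done
}
```

Running it over a cases directory prints one line per orphaned .sql file and nothing when every case is paired.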
Case organization
The root dir of input cases is tests/cases. It contains several subdirectories that stand for different test
modes. E.g., standalone/ contains all the tests to run under the greptimedb standalone start mode.
Under the first level of subdirectories (e.g., cases/standalone), you can organize your cases as you like.
Sqlness walks through every file recursively and runs them.
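For example, a layout like the following would be picked up by the recursive walk (the basic/ subdirectory and the case name are hypothetical; only tests/cases and standalone/ come from this manual):

```
tests/cases/
└── standalone/
    └── basic/
        ├── select.sql
        └── select.result
```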
Kafka WAL
Sqlness supports Kafka WAL. You can either provide a Kafka cluster or let sqlness start one for you.
To run the tests with Kafka, you need to pass the option -w kafka. If no other options are provided, sqlness will use conf/kafka-cluster.yml to start a Kafka cluster. This requires the docker and docker-compose commands in your environment.
Otherwise, you can additionally pass your existing Kafka cluster to sqlness with the -k option. E.g.:
cargo sqlness bare -w kafka -k localhost:9092
In this case, sqlness will not start its own Kafka cluster and will use the one you provided instead.
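Putting the two modes side by side (both invocations are taken from this manual; localhost:9092 stands in for your broker address):

```shell
# Let sqlness start its own Kafka cluster from conf/kafka-cluster.yml
# (requires docker and docker-compose):
cargo sqlness bare -w kafka

# Reuse an existing Kafka cluster instead of starting one:
cargo sqlness bare -w kafka -k localhost:9092
```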
Run the test
Unlike other tests, this harness is in a binary target form. You can run it with:
cargo sqlness bare
It automatically finishes the following procedures: compile GreptimeDB, start it, grab the tests and feed them to
the server, then collect and compare the results. You only need to check whether the .result files have changed.
If not, congratulations, the test has passed 🥳!
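The compare step at the end can be pictured as a plain file diff: a case passes exactly when the fresh output matches the checked-in .result file. A minimal sketch of that idea (the function name is made up; this is not sqlness's actual code):

```shell
# Hypothetical sketch of the compare step: a case passes when the freshly
# produced output is byte-for-byte identical to its checked-in .result file.
case_passes() {
  expected="$1"   # checked-in path/to/case.result
  actual="$2"     # freshly produced output for the same case
  diff -q "$expected" "$actual" >/dev/null
}
```

This also explains why a changed .result file in your working tree signals a failing test: the harness rewrote the expected output because the server returned something different.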