Sqlness Test
Sqlness manual
Case file
Sqlness has two types of files:
- .sql: test input, SQL only
- .result: expected test output, SQL and its results
.result is the output (execution result) file. If you see a .result file changed,
it means this test produced a different result, which indicates a failure. You should
check the change log to solve the problem.
You only need to write the test SQL in the .sql file, and run the test.
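For example, a minimal case could look like the following (the file name and query are illustrative, not taken from the real test suite). The input file contains only SQL:

```sql
-- select/dummy.sql: test input, SQL statements only
SELECT 1;
```

The harness then produces the corresponding .result file, which interleaves each statement with its output (the exact table formatting below is a sketch of the general shape, produced by the harness itself):

```
SELECT 1;

+----------+
| Int64(1) |
+----------+
| 1        |
+----------+
```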
Case organization
The root directory of input cases is tests/cases. It contains several subdirectories standing for different test
modes. E.g., standalone/ contains all the tests to run under greptimedb standalone start mode.
Under the first-level subdirectories (e.g., cases/standalone), you can organize your cases as you like.
Sqlness walks through every file recursively and runs them.
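An illustrative layout (the directory and file names below are examples, not a listing of the actual test suite):

```
tests/cases/
├── standalone/
│   └── select/
│       ├── dummy.sql
│       └── dummy.result
└── distributed/
    └── select/
        ├── dummy.sql
        └── dummy.result
```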
Kafka WAL
Sqlness supports Kafka WAL. You can either provide a Kafka cluster or let sqlness start one for you.
To run tests with Kafka, you need to pass the option -w kafka. If no other options are provided, sqlness will use conf/kafka-cluster.yml to start a Kafka cluster. This requires the docker and docker-compose commands in your environment.
Otherwise, you can pass your existing Kafka endpoint to sqlness with the -k option. E.g.:
cargo sqlness bare -w kafka -k localhost:9092
In this case, sqlness will not start its own Kafka cluster and will use the one you provided instead.
Run the test
Unlike other tests, this harness is in a binary target form. You can run it with:
cargo sqlness bare
It automatically performs the following procedure: compile GreptimeDB, start it, grab the tests and feed them to
the server, then collect and compare the results. You only need to check whether the .result files are changed.
If not, congratulations, the test is passed 🥳!
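Conceptually, the comparison step boils down to diffing the harness output against the checked-in .result file. The sketch below illustrates that idea only (file names and contents are made up; this is not the real sqlness implementation):

```shell
# Illustrative sketch: compare expected vs. actual output for one case.
expected="dummy.result"
actual="dummy.output"

# In a real run, the expected file is checked in and the actual file
# comes from executing dummy.sql against the server.
printf 'SELECT 1;\n\n+---+\n| 1 |\n+---+\n' > "$expected"
printf 'SELECT 1;\n\n+---+\n| 1 |\n+---+\n' > "$actual"

# If the outputs match, the case passes; a diff means the .result
# file would change, i.e. the test fails.
if diff -u "$expected" "$actual" > /dev/null; then
  echo "PASS"
else
  echo "FAIL: .result changed"
fi
```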