## Setup tests for multiple storage backends
To run the integration tests, copy `.env.example` to `.env` in the project root folder and change the values as needed.
Take S3 for example. You need to set your S3 bucket, region, access key ID, and secret access key:
```
# Settings for s3 test
GT_S3_BUCKET=S3 bucket
GT_S3_REGION=S3 region
GT_S3_ACCESS_KEY_ID=S3 access key id
GT_S3_ACCESS_KEY=S3 secret access key
```
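A filled-in `.env` is just plain `KEY=value` lines that the test harness reads. As a minimal sketch, here is a hypothetical file with entirely made-up bucket and credential values, plus one common way to export it into the current shell (the file path and all values below are placeholders, not real settings):

```shell
# Write a hypothetical .env (values are placeholders, not real credentials)
cat > /tmp/example.env <<'EOF'
GT_S3_BUCKET=my-greptime-test-bucket
GT_S3_REGION=us-east-1
GT_S3_ACCESS_KEY_ID=AKIAXXXXXXXXXXXXXXXX
GT_S3_ACCESS_KEY=xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx
EOF

# `set -a` marks every variable assigned while sourcing for export,
# so the GT_* variables become visible to child processes such as cargo
set -a
. /tmp/example.env
set +a

echo "$GT_S3_BUCKET"
```

Note that `KEY=value` lines must not have spaces around `=`, or sourcing the file fails.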
## Run
Execute the following command in the project root folder:
```shell
cargo test integration
```
Test s3 storage:

```shell
cargo test s3
```

Test oss storage:

```shell
cargo test oss
```

Test azblob storage:

```shell
cargo test azblob
```
## Setup tests with Kafka WAL
To run the integration tests, copy `.env.example` to `.env` in the project root folder and change the values as needed. Set the Kafka endpoints:
```
GT_KAFKA_ENDPOINTS=localhost:9092
```
### Setup Kafka standalone
```shell
cd tests-integration/fixtures
docker compose -f docker-compose.yml up kafka
```
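`docker compose up` returns before the broker is necessarily ready to accept connections, so tests launched immediately afterwards can fail spuriously. One way to guard against that is a small polling helper; the function below is our own sketch (not part of the repo) using bash's `/dev/tcp`:

```shell
# Hypothetical helper (not part of the repo): poll a TCP port until it
# accepts a connection, or give up after the given number of attempts.
wait_for_port() {
  local host="$1" port="$2" tries="${3:-30}" i
  for (( i = 0; i < tries; i++ )); do
    # /dev/tcp is a bash-ism; `timeout` bounds a hanging connect attempt
    if timeout 1 bash -c "exec 3<>/dev/tcp/${host}/${port}" 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# Example: wait up to ~60s for the Kafka listener before running tests
# wait_for_port localhost 9092 60 && cargo test kafka
```

The helper only checks TCP reachability, not broker health, but that is usually enough to avoid connection-refused flakes right after container startup.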
## Setup tests with etcd TLS
This guide explains how to set up and test TLS-enabled etcd connections in GreptimeDB integration tests.
### Quick Start
TLS certificates are already at `tests-integration/fixtures/etcd-tls-certs/`.

- Start TLS-enabled etcd:

  ```shell
  cd tests-integration/fixtures
  docker compose up etcd-tls -d
  ```

- Start all services (including etcd-tls):

  ```shell
  cd tests-integration/fixtures
  docker compose up -d --wait
  ```
### Certificate Details
The checked-in certificates include:
- `ca.crt` - Certificate Authority certificate
- `server.crt`/`server-key.pem` - Server certificate for the etcd-tls service
- `client.crt`/`client-key.pem` - Client certificate for connecting to etcd-tls
The server certificate includes SANs for `localhost`, `etcd-tls`, `127.0.0.1`, and `::1`.
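If you want to confirm which SANs a certificate actually carries, `openssl x509` can print the extension directly. A self-contained sketch, building a throwaway self-signed certificate first so there is something to inspect (the file paths and SAN list here are illustrative, not the checked-in certs; requires OpenSSL 1.1.1+ for `-addext`/`-ext`):

```shell
# Generate a throwaway self-signed cert with explicit SANs (illustrative only)
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout /tmp/demo-key.pem -out /tmp/demo.crt \
  -days 1 -subj "/CN=etcd-tls-demo" \
  -addext "subjectAltName=DNS:localhost,DNS:etcd-tls,IP:127.0.0.1,IP:0:0:0:0:0:0:0:1"

# Print only the Subject Alternative Name extension;
# the same command works on the checked-in server.crt
openssl x509 -in /tmp/demo.crt -noout -ext subjectAltName
```

A hostname or IP missing from this list is the usual cause of TLS verification failures when connecting to the etcd-tls service.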
### Regenerating Certificates (Optional)
If you need to regenerate the etcd certificates:
```shell
# Regenerate certificates (overwrites existing ones)
./scripts/generate-etcd-tls-certs.sh

# Or generate in a custom location
./scripts/generate-etcd-tls-certs.sh /path/to/cert/directory
```
If you need to regenerate the MySQL and PostgreSQL certificates:
```shell
# Regenerate certificates (overwrites existing ones)
./scripts/generate_certs.sh

# Or generate in a custom location
./scripts/generate_certs.sh /path/to/cert/directory
```
**Note**: The checked-in certificates are for testing purposes only and should never be used in production.