## Setup tests for multiple storage backends
To run the integration tests, please copy `.env.example` to `.env` in the project root folder and change the values as needed.
Take S3 as an example. You need to set your S3 bucket, region, access key ID, and secret access key:
```
# Settings for s3 test
GT_S3_BUCKET=S3 bucket
GT_S3_REGION=S3 region
GT_S3_ACCESS_KEY_ID=S3 access key id
GT_S3_ACCESS_KEY=S3 secret access key
```
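For instance, a filled-in `.env` might look like the following. The values below are placeholders taken from AWS's documentation examples, not real credentials; substitute your own:

```
# Example values only -- replace with your own bucket and credentials
GT_S3_BUCKET=my-greptimedb-test-bucket
GT_S3_REGION=us-west-2
GT_S3_ACCESS_KEY_ID=AKIAIOSFODNN7EXAMPLE
GT_S3_ACCESS_KEY=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY
```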
### Run
Execute the following command in the project root folder:

```
cargo test integration
```
Test S3 storage:

```
cargo test s3
```

Test OSS storage:

```
cargo test oss
```

Test Azblob storage:

```
cargo test azblob
```
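If a backend test fails and you want to see its log output, `cargo test` accepts the standard libtest `--nocapture` flag after the `--` separator (shown here with the S3 filter as an example):

```
# Show stdout/stderr from the s3 tests instead of capturing it
cargo test s3 -- --nocapture
```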
## Setup tests with Kafka WAL
To run the integration tests, please copy `.env.example` to `.env` in the project root folder and change the values as needed:

```
GT_KAFKA_ENDPOINTS = localhost:9092
```
### Setup Kafka standalone

```
cd tests-integration/fixtures
docker compose -f docker-compose.yml up kafka
```
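Before running the tests, you may want to confirm the broker actually came up. A quick sanity check, assuming the compose service is named `kafka` as above:

```
# Check the container state and inspect its logs
docker compose ps kafka
docker compose logs kafka
```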
## Setup tests with etcd TLS
This guide explains how to set up and test TLS-enabled etcd connections in GreptimeDB integration tests.
### Quick Start
TLS certificates are already checked in at `tests-integration/fixtures/etcd-tls-certs/`.

- Start TLS-enabled etcd:

  ```
  cd tests-integration/fixtures
  docker compose up etcd-tls -d
  ```

- Start all services (including etcd-tls):

  ```
  cd tests-integration/fixtures
  docker compose up -d --wait
  ```
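Once etcd-tls is up, you can optionally verify the TLS endpoint with `etcdctl`. This is a sketch that assumes `etcdctl` is installed locally and that the service is reachable at `localhost:2379`; adjust the endpoint to whatever port the compose file actually maps:

```
# Health-check the TLS endpoint using the checked-in client certificate
etcdctl --endpoints=https://localhost:2379 \
  --cacert=tests-integration/fixtures/etcd-tls-certs/ca.crt \
  --cert=tests-integration/fixtures/etcd-tls-certs/client.crt \
  --key=tests-integration/fixtures/etcd-tls-certs/client-key.pem \
  endpoint health
```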
### Certificate Details
The checked-in certificates include:
- `ca.crt` - Certificate Authority certificate
- `server.crt` / `server-key.pem` - Server certificate for the etcd-tls service
- `client.crt` / `client-key.pem` - Client certificate for connecting to etcd-tls
The server certificate includes SANs for localhost, etcd-tls, 127.0.0.1, and ::1.
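If you want to double-check those SANs yourself, one way is to dump the certificate with `openssl` (optional; assumes `openssl` is available on your machine):

```
# Print the Subject Alternative Name extension of the server certificate
openssl x509 -in tests-integration/fixtures/etcd-tls-certs/server.crt \
  -noout -text | grep -A1 "Subject Alternative Name"
```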
### Regenerating Certificates (Optional)
If you need to regenerate the etcd certificates:

```
# Regenerate certificates (overwrites existing ones)
./scripts/generate-etcd-tls-certs.sh

# Or generate in a custom location
./scripts/generate-etcd-tls-certs.sh /path/to/cert/directory
```
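After regenerating, a quick way to confirm the new server certificate chains to the new CA is an `openssl verify` (an optional check, assuming the default output location used above and that the server certificate is signed directly by the CA):

```
# Verify the regenerated server certificate against the CA
openssl verify -CAfile tests-integration/fixtures/etcd-tls-certs/ca.crt \
  tests-integration/fixtures/etcd-tls-certs/server.crt
```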
If you need to regenerate the MySQL and PostgreSQL certificates:

```
# Regenerate certificates (overwrites existing ones)
./scripts/generate_certs.sh

# Or generate in a custom location
./scripts/generate_certs.sh /path/to/cert/directory
```
Note: The checked-in certificates are for testing purposes only and should never be used in production.