greptimedb/tests-integration

Setup tests for multiple storage backends

To run the integration tests, copy .env.example to .env in the project root folder and adjust the values as needed.

Take S3 for example. You need to set your S3 bucket, region, access key ID and secret access key:

# Settings for s3 test
GT_S3_BUCKET=S3 bucket
GT_S3_REGION=S3 region
GT_S3_ACCESS_KEY_ID=S3 access key id
GT_S3_ACCESS_KEY=S3 secret access key
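For illustration, a filled-in s3 section might look like the sketch below (the bucket name and credentials are placeholders, not real values). Exporting the file into your shell is a quick way to confirm the values before running tests:

```shell
# Sketch: write a minimal .env with placeholder s3 settings
# (bucket, region and keys below are hypothetical; substitute your own)
cat > .env <<'EOF'
GT_S3_BUCKET=my-greptimedb-test-bucket
GT_S3_REGION=us-west-2
GT_S3_ACCESS_KEY_ID=AKIAEXAMPLEEXAMPLE
GT_S3_ACCESS_KEY=examplesecretkeyexamplesecretkey
EOF

# Export every variable defined in .env into the current shell
set -a; . ./.env; set +a
echo "$GT_S3_BUCKET"
```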

Run

Execute the following command in the project root folder:

cargo test integration

Test s3 storage:

cargo test s3

Test oss storage:

cargo test oss

Test azblob storage:

cargo test azblob

Setup tests with Kafka WAL

To run the integration tests, copy .env.example to .env in the project root folder and adjust the values as needed.

GT_KAFKA_ENDPOINTS=localhost:9092

Setup Kafka standalone

cd tests-integration/fixtures

docker compose -f docker-compose.yml up kafka
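The broker can take a few seconds to become reachable after `docker compose up`, so it can help to wait for the port before starting the tests. A small bash helper (the name `wait_for_port` is ours, not something from the repository) using bash's built-in `/dev/tcp` redirection:

```shell
# Wait until host:port accepts TCP connections.
# Returns 0 once a connection succeeds, 1 after `tries` one-second attempts.
wait_for_port() {
  local host=$1 port=$2 tries=${3:-30}
  local i
  for ((i = 0; i < tries; i++)); do
    # Opening fd 3 in a subshell succeeds only if something is listening
    if (exec 3<>"/dev/tcp/$host/$port") 2>/dev/null; then
      return 0
    fi
    sleep 1
  done
  return 1
}

# e.g. after `docker compose -f docker-compose.yml up kafka`:
# wait_for_port localhost 9092 && cargo test kafka
```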

Setup tests with etcd TLS

This guide explains how to set up and test TLS-enabled etcd connections in GreptimeDB integration tests.

Quick Start

TLS certificates are already at tests-integration/fixtures/etcd-tls-certs/.

  1. Start TLS-enabled etcd:

    cd tests-integration/fixtures
    docker compose up etcd-tls -d
    
  2. Start all services (including etcd-tls):

    cd tests-integration/fixtures
    docker compose up -d --wait
    

Certificate Details

The checked-in certificates include:

  • ca.crt - Certificate Authority certificate
  • server.crt / server-key.pem - Server certificate for etcd-tls service
  • client.crt / client-key.pem - Client certificate for connecting to etcd-tls

The server certificate includes SANs for localhost, etcd-tls, 127.0.0.1, and ::1.
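To see a SAN list yourself, `openssl x509 -ext subjectAltName` prints it. The sketch below generates a throwaway self-signed certificate carrying the same DNS/IP names so it is self-contained; point `-in` at tests-integration/fixtures/etcd-tls-certs/server.crt instead to inspect the real checked-in certificate:

```shell
# Generate a short-lived, self-signed demo cert with SANs like those above
# (the checked-in cert additionally includes ::1; omitted here for brevity)
tmp=$(mktemp -d)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout "$tmp/server-key.pem" -out "$tmp/server.crt" \
  -subj "/CN=etcd-tls" \
  -addext "subjectAltName=DNS:localhost,DNS:etcd-tls,IP:127.0.0.1"

# Print only the subjectAltName extension of the certificate
openssl x509 -noout -ext subjectAltName -in "$tmp/server.crt"
```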

Regenerating Certificates (Optional)

If you need to regenerate the etcd certificates:

# Regenerate certificates (overwrites existing ones)
./scripts/generate-etcd-tls-certs.sh

# Or generate in custom location
./scripts/generate-etcd-tls-certs.sh /path/to/cert/directory

If you need to regenerate the mysql and postgres certificates:

# Regenerate certificates (overwrites existing ones)
./scripts/generate_certs.sh

# Or generate in custom location
./scripts/generate_certs.sh /path/to/cert/directory

Note: The checked-in certificates are for testing purposes only and should never be used in production.