- User Guide |
+ User Guide |
API Docs |
Roadmap 2025
@@ -66,17 +66,24 @@
## Introduction
-**GreptimeDB** is an open-source, cloud-native database purpose-built for the unified collection and analysis of observability data (metrics, logs, and traces). Whether you’re operating on the edge, in the cloud, or across hybrid environments, GreptimeDB empowers real-time insights at massive scale — all in one system.
+**GreptimeDB** is an open-source, cloud-native database that unifies metrics, logs, and traces, enabling real-time observability at any scale — across edge, cloud, and hybrid environments.
## Features
| Feature | Description |
| --------- | ----------- |
-| [Unified Observability Data](https://docs.greptime.com/user-guide/concepts/why-greptimedb) | Store metrics, logs, and traces as timestamped, contextual wide events. Query via [SQL](https://docs.greptime.com/user-guide/query-data/sql), [PromQL](https://docs.greptime.com/user-guide/query-data/promql), and [streaming](https://docs.greptime.com/user-guide/flow-computation/overview). |
-| [High Performance & Cost Effective](https://docs.greptime.com/user-guide/manage-data/data-index) | Written in Rust, with a distributed query engine, [rich indexing](https://docs.greptime.com/user-guide/manage-data/data-index), and optimized columnar storage, delivering sub-second responses at PB scale. |
-| [Cloud-Native Architecture](https://docs.greptime.com/user-guide/concepts/architecture) | Designed for [Kubernetes](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/greptimedb-operator-management), with compute/storage separation, native object storage (AWS S3, Azure Blob, etc.) and seamless cross-cloud access. |
-| [Developer-Friendly](https://docs.greptime.com/user-guide/protocols/overview) | Access via SQL/PromQL interfaces, REST API, MySQL/PostgreSQL protocols, and popular ingestion [protocols](https://docs.greptime.com/user-guide/protocols/overview). |
-| [Flexible Deployment](https://docs.greptime.com/user-guide/deployments-administration/overview) | Deploy anywhere: edge (including ARM/[Android](https://docs.greptime.com/user-guide/deployments-administration/run-on-android)) or cloud, with unified APIs and efficient data sync. |
+| [All-in-One Observability](https://docs.greptime.com/user-guide/concepts/why-greptimedb) | OpenTelemetry-native platform unifying metrics, logs, and traces. Query via [SQL](https://docs.greptime.com/user-guide/query-data/sql), [PromQL](https://docs.greptime.com/user-guide/query-data/promql), and [Flow](https://docs.greptime.com/user-guide/flow-computation/overview). |
+| [High Performance](https://docs.greptime.com/user-guide/manage-data/data-index) | Written in Rust with [rich indexing](https://docs.greptime.com/user-guide/manage-data/data-index) (inverted, fulltext, skipping, vector), delivering sub-second responses at PB scale. |
+| [Cost Efficiency](https://docs.greptime.com/user-guide/concepts/architecture) | 50x lower operational and storage costs with compute-storage separation and native object storage (S3, Azure Blob, etc.). |
+| [Cloud-Native & Scalable](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/greptimedb-operator-management) | Purpose-built for [Kubernetes](https://docs.greptime.com/user-guide/deployments-administration/deploy-on-kubernetes/greptimedb-operator-management) with unlimited cross-cloud scaling, handling hundreds of thousands of concurrent requests. |
+| [Developer-Friendly](https://docs.greptime.com/user-guide/protocols/overview) | SQL/PromQL interfaces, built-in web dashboard, REST API, MySQL/PostgreSQL protocol compatibility, and native [OpenTelemetry](https://docs.greptime.com/user-guide/ingest-data/for-observability/opentelemetry/) support. |
+| [Flexible Deployment](https://docs.greptime.com/user-guide/deployments-administration/overview) | Deploy anywhere from ARM-based edge devices (including [Android](https://docs.greptime.com/user-guide/deployments-administration/run-on-android)) to cloud, with unified APIs and efficient data sync. |
+
+ ✅ **Perfect for:**
+ - Unified observability stack replacing Prometheus + Loki + Tempo
+ - Large-scale metrics with high cardinality (millions to billions of time series)
+ - Large-scale observability platform requiring cost efficiency and scalability
+ - IoT and edge computing with resource and bandwidth constraints
Learn more in [Why GreptimeDB](https://docs.greptime.com/user-guide/concepts/why-greptimedb) and [Observability 2.0 and the Database for It](https://greptime.com/blogs/2025-04-25-greptimedb-observability2-new-database).
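
As a quick, hedged illustration of the protocol compatibility listed above: the snippet below assumes the default port layout used by the Docker command later in this README (4002 for the MySQL protocol, 4003 for PostgreSQL) and a standard `mysql` client; the table schema is illustrative only.

```bash
# Connect over the MySQL wire protocol (assumed default port 4002).
mysql -h 127.0.0.1 -P 4002

# Inside the session, plain SQL works, for example:
#   CREATE TABLE monitor (host STRING, ts TIMESTAMP TIME INDEX, cpu DOUBLE, PRIMARY KEY (host));
#   INSERT INTO monitor VALUES ('host1', now(), 0.5);
#   SELECT * FROM monitor;
```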
@@ -85,10 +92,10 @@ Learn more in [Why GreptimeDB](https://docs.greptime.com/user-guide/concepts/why
| Feature | GreptimeDB | Traditional TSDB | Log Stores |
|----------------------------------|-----------------------|--------------------|-----------------|
| Data Types | Metrics, Logs, Traces | Metrics only | Logs only |
-| Query Language | SQL, PromQL, Streaming| Custom/PromQL | Custom/DSL |
+| Query Language | SQL, PromQL | Custom/PromQL | Custom/DSL |
| Deployment | Edge + Cloud | Cloud/On-prem | Mostly central |
| Indexing & Performance | PB-Scale, Sub-second | Varies | Varies |
-| Integration | REST, SQL, Common protocols | Varies | Varies |
+| Integration | REST API, SQL, Common protocols | Varies | Varies |
**Performance:**
* [GreptimeDB tops JSONBench's billion-record cold run test!](https://greptime.com/blogs/2025-03-18-jsonbench-greptimedb-performance)
@@ -98,8 +105,14 @@ Read [more benchmark reports](https://docs.greptime.com/user-guide/concepts/feat
## Architecture
-* Read the [architecture](https://docs.greptime.com/contributor-guide/overview/#architecture) document.
-* [DeepWiki](https://deepwiki.com/GreptimeTeam/greptimedb/1-overview) provides an in-depth look at GreptimeDB:
+GreptimeDB can run in two modes:
+* **Standalone Mode** - Single binary for development and small deployments
+* **Distributed Mode** - Separate components for production scale:
+ - Frontend: Query processing and protocol handling
+ - Datanode: Data storage and retrieval
+ - Metasrv: Metadata management and coordination
+
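A rough sketch of how these modes map onto the `greptime` CLI, assuming the example configuration files shipped under `config/` in this repository; exact subcommand flags may differ between versions, so treat the deployment docs as authoritative.

```bash
# Distributed mode: start each component separately (sketch only).
greptime metasrv start -c config/metasrv.example.toml
greptime datanode start -c config/datanode.example.toml
greptime frontend start -c config/frontend.example.toml

# Standalone mode bundles everything into a single process instead.
greptime standalone start -c config/standalone.example.toml
```
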
+Read the [architecture](https://docs.greptime.com/contributor-guide/overview/#architecture) document. [DeepWiki](https://deepwiki.com/GreptimeTeam/greptimedb/1-overview) provides an in-depth look at GreptimeDB:
## Try GreptimeDB
@@ -119,7 +132,8 @@ docker run -p 127.0.0.1:4000-4003:4000-4003 \
--postgres-addr 0.0.0.0:4003
```
Dashboard: [http://localhost:4000/dashboard](http://localhost:4000/dashboard)
-[Full Install Guide](https://docs.greptime.com/getting-started/installation/overview)
+
+Read more in the [full Install Guide](https://docs.greptime.com/getting-started/installation/overview).
**Troubleshooting:**
* Cannot connect to the database? Ensure that ports `4000`, `4001`, `4002`, and `4003` are not blocked by a firewall or used by other services.
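
A quick way to check, as a non-authoritative sketch (the `/health` endpoint and the port layout are assumed from the defaults above):

```bash
# Any existing listeners on the GreptimeDB ports?
lsof -i :4000 -i :4001 -i :4002 -i :4003

# Once the container is up, the HTTP endpoint should answer on port 4000.
curl http://127.0.0.1:4000/health
```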
@@ -148,21 +162,26 @@ cargo run -- standalone start
## Tools & Extensions
-- **Kubernetes:** [GreptimeDB Operator](https://github.com/GrepTimeTeam/greptimedb-operator)
-- **Helm Charts:** [Greptime Helm Charts](https://github.com/GreptimeTeam/helm-charts)
-- **Dashboard:** [Web UI](https://github.com/GreptimeTeam/dashboard)
-- **SDKs/Ingester:** [Go](https://github.com/GreptimeTeam/greptimedb-ingester-go), [Java](https://github.com/GreptimeTeam/greptimedb-ingester-java), [C++](https://github.com/GreptimeTeam/greptimedb-ingester-cpp), [Erlang](https://github.com/GreptimeTeam/greptimedb-ingester-erl), [Rust](https://github.com/GreptimeTeam/greptimedb-ingester-rust), [JS](https://github.com/GreptimeTeam/greptimedb-ingester-js)
-- **Grafana**: [Official Dashboard](https://github.com/GreptimeTeam/greptimedb/blob/main/grafana/README.md)
+- **Kubernetes**: [GreptimeDB Operator](https://github.com/GreptimeTeam/greptimedb-operator)
+- **Helm Charts**: [Greptime Helm Charts](https://github.com/GreptimeTeam/helm-charts)
+- **Dashboard**: [Web UI](https://github.com/GreptimeTeam/dashboard)
+- **gRPC Ingester**: [Go](https://github.com/GreptimeTeam/greptimedb-ingester-go), [Java](https://github.com/GreptimeTeam/greptimedb-ingester-java), [C++](https://github.com/GreptimeTeam/greptimedb-ingester-cpp), [Erlang](https://github.com/GreptimeTeam/greptimedb-ingester-erl), [Rust](https://github.com/GreptimeTeam/greptimedb-ingester-rust)
+- **Grafana Data Source**: [GreptimeDB Grafana data source plugin](https://github.com/GreptimeTeam/greptimedb-grafana-datasource)
+- **Grafana Dashboard**: [Official Dashboard for monitoring](https://github.com/GreptimeTeam/greptimedb/blob/main/grafana/README.md)
## Project Status
-> **Status:** Beta.
-> **GA (v1.0):** Targeted for mid 2025.
+> **Status:** Beta — marching toward v1.0 GA!
+> **GA (v1.0):** January 10, 2026
-- Being used in production by early adopters
+- Deployed in production by open-source projects and commercial users
- Stable, actively maintained, with regular releases ([version info](https://docs.greptime.com/nightly/reference/about-greptimedb-version))
- Suitable for evaluation and pilot deployments
+GreptimeDB v1.0 represents a major milestone toward maturity — marking stable APIs, production readiness, and proven performance.
+
+**Roadmap:** Beta1 (Nov 10) → Beta2 (Nov 24) → RC1 (Dec 8) → GA (Jan 10, 2026). See the [v1.0 highlights and release plan](https://greptime.com/blogs/2025-11-05-greptimedb-v1-highlights) for details.
+
For production use, we recommend using the latest stable release.
[](https://www.star-history.com/#GreptimeTeam/GreptimeDB&Date)
@@ -203,5 +222,5 @@ Special thanks to all contributors! See [AUTHORS.md](https://github.com/Greptime
- Uses [Apache Arrow™](https://arrow.apache.org/) (memory model)
- [Apache Parquet™](https://parquet.apache.org/) (file storage)
-- [Apache Arrow DataFusion™](https://arrow.apache.org/datafusion/) (query engine)
+- [Apache DataFusion™](https://arrow.apache.org/datafusion/) (query engine)
- [Apache OpenDAL™](https://opendal.apache.org/) (data access abstraction)
diff --git a/config/config.md b/config/config.md
index 58c491b4ad..e61c48c43f 100644
--- a/config/config.md
+++ b/config/config.md
@@ -108,9 +108,6 @@
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `./greptimedb_data` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data. - `File`: the data is stored in the local file system. - `S3`: the data is stored in the S3 object storage. - `Gcs`: the data is stored in the Google Cloud Storage. - `Azblob`: the data is stored in the Azure Blob Storage. - `Oss`: the data is stored in the Aliyun OSS. |
-| `storage.enable_read_cache` | Bool | `true` | Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage. |
-| `storage.cache_path` | String | Unset | Read cache configuration for object storage such as 'S3' etc, it's configured by default when using object storage. It is recommended to configure it when using object storage for better performance. A local file directory, defaults to `{data_home}`. An empty string means disabling. |
-| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger. |
| `storage.bucket` | String | Unset | The S3 bucket name. **It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
| `storage.root` | String | Unset | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`. **It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
| `storage.access_key_id` | String | Unset | The access key id of the aws account. It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key. **It's only used when the storage type is `S3` and `Oss`**. |
@@ -141,6 +138,8 @@
| `region_engine.mito.max_background_flushes` | Integer | Auto | Max number of running background flush jobs (default: 1/2 of cpu cores). |
| `region_engine.mito.max_background_compactions` | Integer | Auto | Max number of running background compaction jobs (default: 1/4 of cpu cores). |
| `region_engine.mito.max_background_purges` | Integer | Auto | Max number of running background purge jobs (default: number of cpu cores). |
+| `region_engine.mito.experimental_compaction_memory_limit` | String | 0 | Memory budget for compaction tasks. Setting it to 0 or "unlimited" disables the limit. |
+| `region_engine.mito.experimental_compaction_on_exhausted` | String | wait | Behavior when compaction cannot acquire memory from the budget. Options: "wait" (default, 10s), "wait()", "fail" |
| `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
| `region_engine.mito.global_write_buffer_size` | String | Auto | Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB. |
| `region_engine.mito.global_write_buffer_reject_size` | String | Auto | Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size`. |
@@ -154,6 +153,8 @@
| `region_engine.mito.write_cache_ttl` | String | Unset | TTL for write cache. |
| `region_engine.mito.preload_index_cache` | Bool | `true` | Preload index (puffin) files into cache on region open (default: true). When enabled, index files are loaded into the write cache during region initialization, which can improve query performance at the cost of longer startup times. |
| `region_engine.mito.index_cache_percent` | Integer | `20` | Percentage of write cache capacity allocated for index (puffin) files (default: 20). The remaining capacity is used for data (parquet) files. Must be between 0 and 100 (exclusive). For example, with a 5GiB write cache and 20% allocation, 1GiB is reserved for index files and 4GiB for data files. |
+| `region_engine.mito.enable_refill_cache_on_read` | Bool | `true` | Enable refilling cache on read operations (default: true). When disabled, cache refilling on read won't happen. |
+| `region_engine.mito.manifest_cache_size` | String | `256MB` | Capacity for manifest cache (default: 256MB). |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
| `region_engine.mito.max_concurrent_scan_files` | Integer | `384` | Maximum number of SST files to scan concurrently. |
@@ -210,14 +211,6 @@
| `slow_query.record_type` | String | Unset | The record type of slow queries. It can be `system_table` or `log`. |
| `slow_query.threshold` | String | Unset | The threshold of slow query. |
| `slow_query.sample_ratio` | Float | Unset | The sampling ratio of slow query log. The value should be in the range of (0, 1]. |
-| `export_metrics` | -- | -- | The standalone can export its metrics and send to Prometheus compatible service (e.g. `greptimedb`) from remote-write API. This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
-| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
-| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
-| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended to collect metrics generated by itself You must create the database before enabling it. |
-| `export_metrics.self_import.db` | String | Unset | -- |
-| `export_metrics.remote_write` | -- | -- | -- |
-| `export_metrics.remote_write.url` | String | `""` | The prometheus remote write endpoint that the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
-| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
@@ -302,7 +295,6 @@
| `meta_client` | -- | -- | The metasrv client options. |
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
| `meta_client.timeout` | String | `3s` | Operation timeout. |
-| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
@@ -335,12 +327,6 @@
| `slow_query.threshold` | String | `30s` | The threshold of slow query. It can be human readable time string, for example: `10s`, `100ms`, `1s`. |
| `slow_query.sample_ratio` | Float | `1.0` | The sampling ratio of slow query log. The value should be in the range of (0, 1]. For example, `0.1` means 10% of the slow queries will be logged and `1.0` means all slow queries will be logged. |
| `slow_query.ttl` | String | `90d` | The TTL of the `slow_queries` system table. Default is `90d` when `record_type` is `system_table`. |
-| `export_metrics` | -- | -- | The frontend can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API. This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
-| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
-| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
-| `export_metrics.remote_write` | -- | -- | -- |
-| `export_metrics.remote_write.url` | String | `""` | The prometheus remote write endpoint that the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
-| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
@@ -354,7 +340,7 @@
| Key | Type | Default | Descriptions |
| --- | -----| ------- | ----------- |
| `data_home` | String | `./greptimedb_data` | The working home directory. |
-| `store_addrs` | Array | -- | Store server address default to etcd store. For postgres store, the format is: "password=password dbname=postgres user=postgres host=localhost port=5432" For etcd store, the format is: "127.0.0.1:2379" |
+| `store_addrs` | Array | -- | Store server address(es). The format depends on the selected backend. For etcd: a list of "host:port" endpoints, e.g. ["192.168.1.1:2379", "192.168.1.2:2379"]. For PostgreSQL: a connection string in libpq format or URI, e.g. "host=localhost port=5432 user=postgres password= dbname=postgres" or "postgresql://user:password@localhost:5432/mydb?connect_timeout=10"; for details, see https://docs.rs/tokio-postgres/latest/tokio_postgres/config/struct.Config.html. For MySQL: a connection URL, e.g. "mysql://user:password@localhost:3306/greptime_meta?ssl-mode=VERIFY_CA&ssl-ca=/path/to/ca.pem" |
| `store_key_prefix` | String | `""` | If it's not empty, the metasrv will store all data with this key prefix. |
| `backend` | String | `etcd_store` | The datastore for meta server. Available values: - `etcd_store` (default value) - `memory_store` - `postgres_store` - `mysql_store` |
| `meta_table_name` | String | `greptime_metakv` | Table name in RDS to store metadata. Effect when using a RDS kvbackend. **Only used when backend is `postgres_store`.** |
@@ -370,12 +356,11 @@
| `runtime` | -- | -- | The runtime options. |
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for global write operations. |
-| `backend_tls` | -- | -- | TLS configuration for kv store backend (applicable for etcd, PostgreSQL, and MySQL backends) When using etcd, PostgreSQL, or MySQL as metadata store, you can configure TLS here |
+| `backend_tls` | -- | -- | TLS configuration for the kv store backend (applicable for etcd, PostgreSQL, and MySQL backends). When using etcd, PostgreSQL, or MySQL as the metadata store, you can configure TLS here. Note: if TLS is configured in both this section and the `store_addrs` connection string, the settings here will override the TLS settings in `store_addrs`. |
| `backend_tls.mode` | String | `prefer` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html - "disable" - No TLS - "prefer" (default) - Try TLS, fallback to plain - "require" - Require TLS - "verify_ca" - Require TLS and verify CA - "verify_full" - Require TLS and verify hostname |
| `backend_tls.cert_path` | String | `""` | Path to client certificate file (for client authentication) Like "/path/to/client.crt" |
| `backend_tls.key_path` | String | `""` | Path to client private key file (for client authentication) Like "/path/to/client.key" |
| `backend_tls.ca_cert_path` | String | `""` | Path to CA certificate file (for server certificate verification) Required when using custom CAs or self-signed certificates Leave empty to use system root certificates only Like "/path/to/ca.crt" |
-| `backend_tls.watch` | Bool | `false` | Watch for certificate file changes and auto reload |
| `grpc` | -- | -- | The gRPC server options. |
| `grpc.bind_addr` | String | `127.0.0.1:3002` | The address to bind the gRPC server. |
| `grpc.server_addr` | String | `127.0.0.1:3002` | The communication server address for the frontend and datanode to connect to metasrv. If left empty or unset, the server will automatically use the IP address of the first network interface on the host, with the same port number as the one specified in `bind_addr`. |
@@ -430,12 +415,6 @@
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP http |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing will be sampled and exported. Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1. ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
-| `export_metrics` | -- | -- | The metasrv can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API. This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
-| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
-| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
-| `export_metrics.remote_write` | -- | -- | -- |
-| `export_metrics.remote_write.url` | String | `""` | The prometheus remote write endpoint that the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
-| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
@@ -478,7 +457,6 @@
| `meta_client` | -- | -- | The metasrv client options. |
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
| `meta_client.timeout` | String | `3s` | Operation timeout. |
-| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
@@ -509,9 +487,6 @@
| `storage` | -- | -- | The data storage options. |
| `storage.data_home` | String | `./greptimedb_data` | The working home directory. |
| `storage.type` | String | `File` | The storage type used to store the data. - `File`: the data is stored in the local file system. - `S3`: the data is stored in the S3 object storage. - `Gcs`: the data is stored in the Google Cloud Storage. - `Azblob`: the data is stored in the Azure Blob Storage. - `Oss`: the data is stored in the Aliyun OSS. |
-| `storage.cache_path` | String | Unset | Read cache configuration for object storage such as 'S3' etc, it's configured by default when using object storage. It is recommended to configure it when using object storage for better performance. A local file directory, defaults to `{data_home}`. An empty string means disabling. |
-| `storage.enable_read_cache` | Bool | `true` | Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage. |
-| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger. |
| `storage.bucket` | String | Unset | The S3 bucket name. **It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
| `storage.root` | String | Unset | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`. **It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
| `storage.access_key_id` | String | Unset | The access key id of the aws account. It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key. **It's only used when the storage type is `S3` and `Oss`**. |
@@ -544,6 +519,8 @@
| `region_engine.mito.max_background_flushes` | Integer | Auto | Max number of running background flush jobs (default: 1/2 of cpu cores). |
| `region_engine.mito.max_background_compactions` | Integer | Auto | Max number of running background compaction jobs (default: 1/4 of cpu cores). |
| `region_engine.mito.max_background_purges` | Integer | Auto | Max number of running background purge jobs (default: number of cpu cores). |
+| `region_engine.mito.experimental_compaction_memory_limit` | String | 0 | Memory budget for compaction tasks. Setting it to 0 or "unlimited" disables the limit. |
+| `region_engine.mito.experimental_compaction_on_exhausted` | String | wait | Behavior when compaction cannot acquire memory from the budget. Options: "wait" (default, 10s), "wait()", "fail" |
| `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
| `region_engine.mito.global_write_buffer_size` | String | Auto | Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB. |
| `region_engine.mito.global_write_buffer_reject_size` | String | Auto | Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size` |
@@ -557,6 +534,8 @@
| `region_engine.mito.write_cache_ttl` | String | Unset | TTL for write cache. |
| `region_engine.mito.preload_index_cache` | Bool | `true` | Preload index (puffin) files into cache on region open (default: true). When enabled, index files are loaded into the write cache during region initialization, which can improve query performance at the cost of longer startup times. |
| `region_engine.mito.index_cache_percent` | Integer | `20` | Percentage of write cache capacity allocated for index (puffin) files (default: 20). The remaining capacity is used for data (parquet) files. Must be between 0 and 100 (exclusive). For example, with a 5GiB write cache and 20% allocation, 1GiB is reserved for index files and 4GiB for data files. |
+| `region_engine.mito.enable_refill_cache_on_read` | Bool | `true` | Enable refilling cache on read operations (default: true). When disabled, cache refilling on read won't happen. |
+| `region_engine.mito.manifest_cache_size` | String | `256MB` | Capacity for manifest cache (default: 256MB). |
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
| `region_engine.mito.max_concurrent_scan_files` | Integer | `384` | Maximum number of SST files to scan concurrently. |
@@ -608,12 +587,6 @@
| `logging.otlp_headers` | -- | -- | Additional OTLP headers, only valid when using OTLP http |
| `logging.tracing_sample_ratio` | -- | Unset | The percentage of tracing will be sampled and exported. Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1. ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
-| `export_metrics` | -- | -- | The datanode can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API. This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
-| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
-| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
-| `export_metrics.remote_write` | -- | -- | -- |
-| `export_metrics.remote_write.url` | String | `""` | The prometheus remote write endpoint that the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
-| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
| `memory` | -- | -- | The memory options. |
@@ -656,7 +629,6 @@
| `meta_client` | -- | -- | The metasrv client options. |
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
| `meta_client.timeout` | String | `3s` | Operation timeout. |
-| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
diff --git a/config/datanode.example.toml b/config/datanode.example.toml
index dda926e1cb..47b6cc8cec 100644
--- a/config/datanode.example.toml
+++ b/config/datanode.example.toml
@@ -99,9 +99,6 @@ metasrv_addrs = ["127.0.0.1:3002"]
## Operation timeout.
timeout = "3s"
-## Heartbeat timeout.
-heartbeat_timeout = "500ms"
-
## DDL timeout.
ddl_timeout = "10s"
@@ -284,18 +281,6 @@ data_home = "./greptimedb_data"
## - `Oss`: the data is stored in the Aliyun OSS.
type = "File"
-## Read cache configuration for object storage such as 'S3' etc, it's configured by default when using object storage. It is recommended to configure it when using object storage for better performance.
-## A local file directory, defaults to `{data_home}`. An empty string means disabling.
-## @toml2docs:none-default
-#+ cache_path = ""
-
-## Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage.
-#+ enable_read_cache = true
-
-## The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger.
-## @toml2docs:none-default
-cache_capacity = "5GiB"
-
## The S3 bucket name.
## **It's only used when the storage type is `S3`, `Oss` and `Gcs`**.
## @toml2docs:none-default
@@ -455,6 +440,15 @@ compress_manifest = false
## @toml2docs:none-default="Auto"
#+ max_background_purges = 8
+## Memory budget for compaction tasks. Setting it to 0 or "unlimited" disables the limit.
+## @toml2docs:none-default="0"
+#+ experimental_compaction_memory_limit = "0"
+
+## Behavior when compaction cannot acquire memory from the budget.
+## Options: "wait" (default, 10s), "wait()", "fail"
+## @toml2docs:none-default="wait"
+#+ experimental_compaction_on_exhausted = "wait"
+
## Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
@@ -510,6 +504,13 @@ preload_index_cache = true
## 1GiB is reserved for index files and 4GiB for data files.
index_cache_percent = 20
+## Enable refilling cache on read operations (default: true).
+## When disabled, cache refilling on read won't happen.
+enable_refill_cache_on_read = true
+
+## Capacity for manifest cache (default: 256MB).
+manifest_cache_size = "256MB"
+
## Buffer size for SST writing.
sst_write_buffer_size = "8MB"
@@ -712,21 +713,6 @@ otlp_export_protocol = "http"
[logging.tracing_sample_ratio]
default_ratio = 1.0
-## The datanode can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API.
-## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
-[export_metrics]
-## whether enable export metrics.
-enable = false
-## The interval of export metrics.
-write_interval = "30s"
-
-[export_metrics.remote_write]
-## The prometheus remote write endpoint that the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
-url = ""
-
-## HTTP headers of Prometheus remote-write carry.
-headers = { }
-
## The tracing options. Only effect when compiled with `tokio-console` feature.
#+ [tracing]
## The tokio console address.
diff --git a/config/flownode.example.toml b/config/flownode.example.toml
index 4e44c1ecbb..b13acfc447 100644
--- a/config/flownode.example.toml
+++ b/config/flownode.example.toml
@@ -78,9 +78,6 @@ metasrv_addrs = ["127.0.0.1:3002"]
## Operation timeout.
timeout = "3s"
-## Heartbeat timeout.
-heartbeat_timeout = "500ms"
-
## DDL timeout.
ddl_timeout = "10s"
diff --git a/config/frontend.example.toml b/config/frontend.example.toml
index 04d763c18f..701cb0b087 100644
--- a/config/frontend.example.toml
+++ b/config/frontend.example.toml
@@ -226,9 +226,6 @@ metasrv_addrs = ["127.0.0.1:3002"]
## Operation timeout.
timeout = "3s"
-## Heartbeat timeout.
-heartbeat_timeout = "500ms"
-
## DDL timeout.
ddl_timeout = "10s"
@@ -329,21 +326,6 @@ sample_ratio = 1.0
## The TTL of the `slow_queries` system table. Default is `90d` when `record_type` is `system_table`.
ttl = "90d"
-## The frontend can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API.
-## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
-[export_metrics]
-## whether enable export metrics.
-enable = false
-## The interval of export metrics.
-write_interval = "30s"
-
-[export_metrics.remote_write]
-## The prometheus remote write endpoint that the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
-url = ""
-
-## HTTP headers of Prometheus remote-write carry.
-headers = { }
-
## The tracing options. Only effect when compiled with `tokio-console` feature.
#+ [tracing]
## The tokio console address.
diff --git a/config/metasrv.example.toml b/config/metasrv.example.toml
index d7d5ace99c..7997383a52 100644
--- a/config/metasrv.example.toml
+++ b/config/metasrv.example.toml
@@ -1,11 +1,19 @@
## The working home directory.
data_home = "./greptimedb_data"
-## Store server address default to etcd store.
-## For postgres store, the format is:
-## "password=password dbname=postgres user=postgres host=localhost port=5432"
-## For etcd store, the format is:
-## "127.0.0.1:2379"
+## Store server address(es). The format depends on the selected backend.
+##
+## For etcd: a list of "host:port" endpoints.
+## e.g. ["192.168.1.1:2379", "192.168.1.2:2379"]
+##
+## For PostgreSQL: a connection string in libpq format or URI.
+## e.g.
+## - "host=localhost port=5432 user=postgres password= dbname=postgres"
+## - "postgresql://user:password@localhost:5432/mydb?connect_timeout=10"
+## For details, see: https://docs.rs/tokio-postgres/latest/tokio_postgres/config/struct.Config.html
+##
+## For MySQL: the format is a connection URL.
+## e.g. "mysql://user:password@localhost:3306/greptime_meta?ssl-mode=VERIFY_CA&ssl-ca=/path/to/ca.pem"
store_addrs = ["127.0.0.1:2379"]
## If it's not empty, the metasrv will store all data with this key prefix.
@@ -75,6 +83,9 @@ node_max_idle_time = "24hours"
## TLS configuration for kv store backend (applicable for etcd, PostgreSQL, and MySQL backends)
## When using etcd, PostgreSQL, or MySQL as metadata store, you can configure TLS here
+##
+## Note: if TLS is configured in both this section and the `store_addrs` connection string, the
+## settings here will override the TLS settings in `store_addrs`.
[backend_tls]
## TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
## - "disable" - No TLS
@@ -98,9 +109,6 @@ key_path = ""
## Like "/path/to/ca.crt"
ca_cert_path = ""
-## Watch for certificate file changes and auto reload
-watch = false
-
## The gRPC server options.
[grpc]
## The address to bind the gRPC server.
@@ -323,21 +331,6 @@ otlp_export_protocol = "http"
[logging.tracing_sample_ratio]
default_ratio = 1.0
-## The metasrv can export its metrics and send to Prometheus compatible service (e.g. `greptimedb` itself) from remote-write API.
-## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
-[export_metrics]
-## whether enable export metrics.
-enable = false
-## The interval of export metrics.
-write_interval = "30s"
-
-[export_metrics.remote_write]
-## The prometheus remote write endpoint that the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
-url = ""
-
-## HTTP headers of Prometheus remote-write carry.
-headers = { }
-
## The tracing options. Only effect when compiled with `tokio-console` feature.
#+ [tracing]
## The tokio console address.
diff --git a/config/standalone.example.toml b/config/standalone.example.toml
index 70e6e0888f..47be8f6334 100644
--- a/config/standalone.example.toml
+++ b/config/standalone.example.toml
@@ -388,18 +388,6 @@ data_home = "./greptimedb_data"
## - `Oss`: the data is stored in the Aliyun OSS.
type = "File"
-## Whether to enable read cache. If not set, the read cache will be enabled by default when using object storage.
-#+ enable_read_cache = true
-
-## Read cache configuration for object storage such as 'S3' etc, it's configured by default when using object storage. It is recommended to configure it when using object storage for better performance.
-## A local file directory, defaults to `{data_home}`. An empty string means disabling.
-## @toml2docs:none-default
-#+ cache_path = ""
-
-## The local file cache capacity in bytes. If your disk space is sufficient, it is recommended to set it larger.
-## @toml2docs:none-default
-cache_capacity = "5GiB"
-
## The S3 bucket name.
## **It's only used when the storage type is `S3`, `Oss` and `Gcs`**.
## @toml2docs:none-default
@@ -546,6 +534,15 @@ compress_manifest = false
## @toml2docs:none-default="Auto"
#+ max_background_purges = 8
+## Memory budget for compaction tasks. Setting it to 0 or "unlimited" disables the limit.
+## @toml2docs:none-default="0"
+#+ experimental_compaction_memory_limit = "0"
+
+## Behavior when compaction cannot acquire memory from the budget.
+## Options: "wait" (default, 10s), "wait()", "fail"
+## @toml2docs:none-default="wait"
+#+ experimental_compaction_on_exhausted = "wait"
+
## Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
@@ -601,6 +598,13 @@ preload_index_cache = true
## 1GiB is reserved for index files and 4GiB for data files.
index_cache_percent = 20
+## Enable refilling cache on read operations (default: true).
+## When disabled, cache refilling on read won't happen.
+enable_refill_cache_on_read = true
+
+## Capacity for manifest cache (default: 256MB).
+manifest_cache_size = "256MB"
+
## Buffer size for SST writing.
sst_write_buffer_size = "8MB"
@@ -820,27 +824,6 @@ default_ratio = 1.0
## @toml2docs:none-default
#+ sample_ratio = 1.0
-## The standalone can export its metrics and send to Prometheus compatible service (e.g. `greptimedb`) from remote-write API.
-## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
-[export_metrics]
-## whether enable export metrics.
-enable = false
-## The interval of export metrics.
-write_interval = "30s"
-
-## For `standalone` mode, `self_import` is recommended to collect metrics generated by itself
-## You must create the database before enabling it.
-[export_metrics.self_import]
-## @toml2docs:none-default
-db = "greptime_metrics"
-
-[export_metrics.remote_write]
-## The prometheus remote write endpoint that the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
-url = ""
-
-## HTTP headers of Prometheus remote-write carry.
-headers = { }
-
## The tracing options. Only effect when compiled with `tokio-console` feature.
#+ [tracing]
## The tokio console address.
diff --git a/docker/buildx/centos/Dockerfile b/docker/buildx/centos/Dockerfile
index b7e822fac6..f5bbd15ad6 100644
--- a/docker/buildx/centos/Dockerfile
+++ b/docker/buildx/centos/Dockerfile
@@ -1,10 +1,10 @@
-FROM centos:7 as builder
+FROM centos:7 AS builder
ARG CARGO_PROFILE
ARG FEATURES
ARG OUTPUT_DIR
-ENV LANG en_US.utf8
+ENV LANG=en_US.utf8
WORKDIR /greptimedb
# Install dependencies
@@ -22,7 +22,7 @@ RUN unzip protoc-3.15.8-linux-x86_64.zip -d /usr/local/
# Install Rust
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
-ENV PATH /usr/local/bin:/root/.cargo/bin/:$PATH
+ENV PATH=/usr/local/bin:/root/.cargo/bin/:$PATH
# Build the project in release mode.
RUN --mount=target=.,rw \
@@ -33,7 +33,7 @@ RUN --mount=target=.,rw \
TARGET_DIR=/out/target
# Export the binary to the clean image.
-FROM centos:7 as base
+FROM centos:7 AS base
ARG OUTPUT_DIR
@@ -45,7 +45,7 @@ RUN yum install -y epel-release \
WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/
-ENV PATH /greptime/bin/:$PATH
+ENV PATH=/greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
diff --git a/docker/buildx/distroless/Dockerfile b/docker/buildx/distroless/Dockerfile
new file mode 100644
index 0000000000..b0f3af33e8
--- /dev/null
+++ b/docker/buildx/distroless/Dockerfile
@@ -0,0 +1,65 @@
+FROM ubuntu:22.04 AS builder
+
+ARG CARGO_PROFILE
+ARG FEATURES
+ARG OUTPUT_DIR
+
+ENV LANG=en_US.utf8
+WORKDIR /greptimedb
+
+RUN apt-get update && \
+ DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common
+
+# Install dependencies.
+RUN --mount=type=cache,target=/var/cache/apt \
+ apt-get update && apt-get install -y \
+ libssl-dev \
+ protobuf-compiler \
+ curl \
+ git \
+ build-essential \
+ pkg-config
+
+# Install Rust.
+SHELL ["/bin/bash", "-c"]
+RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
+ENV PATH=/root/.cargo/bin/:$PATH
+
+# Build the project in release mode.
+RUN --mount=target=. \
+ --mount=type=cache,target=/root/.cargo/registry \
+ make build \
+ CARGO_PROFILE=${CARGO_PROFILE} \
+ FEATURES=${FEATURES} \
+ TARGET_DIR=/out/target
+
+FROM ubuntu:22.04 AS libs
+
+ARG TARGETARCH
+
+# Copy required library dependencies based on architecture
+RUN if [ "$TARGETARCH" = "amd64" ]; then \
+ cp /lib/x86_64-linux-gnu/libz.so.1.2.11 /lib/x86_64-linux-gnu/libz.so.1; \
+ elif [ "$TARGETARCH" = "arm64" ]; then \
+ cp /lib/aarch64-linux-gnu/libz.so.1.2.11 /lib/aarch64-linux-gnu/libz.so.1; \
+ else \
+ echo "Unsupported architecture: $TARGETARCH" && exit 1; \
+ fi
+
+# Export the binary to the clean distroless image.
+FROM gcr.io/distroless/cc-debian12:latest AS base
+
+ARG OUTPUT_DIR
+ARG TARGETARCH
+
+# Copy required library dependencies
+COPY --from=libs /lib /lib
+COPY --from=busybox:stable /bin/busybox /bin/busybox
+
+WORKDIR /greptime
+COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/greptime
+ENV PATH=/greptime/bin/:$PATH
+
+ENV MALLOC_CONF="prof:true,prof_active:false"
+
+ENTRYPOINT ["greptime"]
diff --git a/docker/buildx/ubuntu/Dockerfile b/docker/buildx/ubuntu/Dockerfile
index 6306e04688..b6dc386da4 100644
--- a/docker/buildx/ubuntu/Dockerfile
+++ b/docker/buildx/ubuntu/Dockerfile
@@ -1,10 +1,10 @@
-FROM ubuntu:22.04 as builder
+FROM ubuntu:22.04 AS builder
ARG CARGO_PROFILE
ARG FEATURES
ARG OUTPUT_DIR
-ENV LANG en_US.utf8
+ENV LANG=en_US.utf8
WORKDIR /greptimedb
RUN apt-get update && \
@@ -23,7 +23,7 @@ RUN --mount=type=cache,target=/var/cache/apt \
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
-ENV PATH /root/.cargo/bin/:$PATH
+ENV PATH=/root/.cargo/bin/:$PATH
# Build the project in release mode.
RUN --mount=target=. \
@@ -35,7 +35,7 @@ RUN --mount=target=. \
# Export the binary to the clean image.
# TODO(zyy17): Maybe should use the more secure container image.
-FROM ubuntu:22.04 as base
+FROM ubuntu:22.04 AS base
ARG OUTPUT_DIR
@@ -45,7 +45,7 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get \
WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/
-ENV PATH /greptime/bin/:$PATH
+ENV PATH=/greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
diff --git a/docker/ci/centos/Dockerfile b/docker/ci/centos/Dockerfile
index 480f2196b2..67efadd7dc 100644
--- a/docker/ci/centos/Dockerfile
+++ b/docker/ci/centos/Dockerfile
@@ -13,7 +13,7 @@ ARG TARGETARCH
ADD $TARGETARCH/greptime /greptime/bin/
-ENV PATH /greptime/bin/:$PATH
+ENV PATH=/greptime/bin/:$PATH
ENV MALLOC_CONF="prof:true,prof_active:false"
diff --git a/docker/ci/distroless/Dockerfile b/docker/ci/distroless/Dockerfile
new file mode 100644
index 0000000000..f5e7ebd88e
--- /dev/null
+++ b/docker/ci/distroless/Dockerfile
@@ -0,0 +1,40 @@
+FROM ubuntu:22.04 AS libs
+
+ARG TARGETARCH
+
+# Copy required library dependencies based on architecture
+# TARGETARCH values: amd64, arm64
+# Ubuntu library paths: x86_64-linux-gnu, aarch64-linux-gnu
+RUN if [ "$TARGETARCH" = "amd64" ]; then \
+ mkdir -p /output/x86_64-linux-gnu && \
+ cp /lib/x86_64-linux-gnu/libz.so.1.2.11 /output/x86_64-linux-gnu/libz.so.1; \
+ elif [ "$TARGETARCH" = "arm64" ]; then \
+ mkdir -p /output/aarch64-linux-gnu && \
+ cp /lib/aarch64-linux-gnu/libz.so.1.2.11 /output/aarch64-linux-gnu/libz.so.1; \
+ else \
+ echo "Unsupported architecture: $TARGETARCH" && exit 1; \
+ fi
+
+FROM gcr.io/distroless/cc-debian12:latest
+
+# The root path under which contains all the dependencies to build this Dockerfile.
+ARG DOCKER_BUILD_ROOT=.
+# The binary name of GreptimeDB executable.
+# Defaults to "greptime", but sometimes in other projects it might be different.
+ARG TARGET_BIN=greptime
+
+ARG TARGETARCH
+
+# Copy required library dependencies
+COPY --from=libs /output /lib
+COPY --from=busybox:stable /bin/busybox /bin/busybox
+
+ADD $TARGETARCH/$TARGET_BIN /greptime/bin/
+
+ENV PATH=/greptime/bin/:$PATH
+
+ENV TARGET_BIN=$TARGET_BIN
+
+ENV MALLOC_CONF="prof:true,prof_active:false"
+
+ENTRYPOINT ["greptime"]
diff --git a/docker/ci/ubuntu/Dockerfile b/docker/ci/ubuntu/Dockerfile
index 046fd62972..c1a88e02c8 100644
--- a/docker/ci/ubuntu/Dockerfile
+++ b/docker/ci/ubuntu/Dockerfile
@@ -14,7 +14,7 @@ ARG TARGETARCH
ADD $TARGETARCH/$TARGET_BIN /greptime/bin/
-ENV PATH /greptime/bin/:$PATH
+ENV PATH=/greptime/bin/:$PATH
ENV TARGET_BIN=$TARGET_BIN
diff --git a/docs/how-to/how-to-change-log-level-on-the-fly.md b/docs/how-to/how-to-change-log-level-on-the-fly.md
index 16a72bf6ae..c3bf2602a2 100644
--- a/docs/how-to/how-to-change-log-level-on-the-fly.md
+++ b/docs/how-to/how-to-change-log-level-on-the-fly.md
@@ -13,4 +13,19 @@ Log Level changed from Some("info") to "trace,flow=debug"%
The data is a string in the format of `global_level,module1=level1,module2=level2,...` that follows the same rule of `RUST_LOG`.
-The module is the module name of the log, and the level is the log level. The log level can be one of the following: `trace`, `debug`, `info`, `warn`, `error`, `off`(case insensitive).
\ No newline at end of file
+The module is the module name of the log, and the level is the log level. The log level can be one of the following: `trace`, `debug`, `info`, `warn`, `error`, `off` (case insensitive).
+
+# Enable/Disable Trace on the Fly
+
+## HTTP API
+
+Example:
+```bash
+curl --data "true" 127.0.0.1:4000/debug/enable_trace
+```
+And the database will reply with something like:
+```
+trace enabled%
+```
+
+Possible values are "true" or "false".
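
For completeness, a sketch of the reverse operation, assuming the same endpoint and the default HTTP port:

```bash
# Turn tracing back off by posting "false" to the same endpoint.
curl --data "false" 127.0.0.1:4000/debug/enable_trace
```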
diff --git a/docs/rfcs/2025-07-23-global-gc-worker.md b/docs/rfcs/2025-07-23-global-gc-worker.md
index 69d1e3ac34..331ed01f38 100644
--- a/docs/rfcs/2025-07-23-global-gc-worker.md
+++ b/docs/rfcs/2025-07-23-global-gc-worker.md
@@ -106,6 +106,37 @@ This mechanism may be too complex to implement at once. We can consider a two-ph
Also the read replica shouldn't be later in manifest version for more than the lingering time of obsolete files, otherwise it might ref to files that are already deleted by the GC worker.
- need to upload tmp manifest to object storage, which may introduce additional complexity and potential performance overhead. But since long-running queries are typically not frequent, the performance impact is expected to be minimal.
+One potential race condition with region migration is illustrated below:
+
+```mermaid
+sequenceDiagram
+ participant gc_worker as GC Worker(same dn as region 1)
+ participant region1 as Region 1 (Leader → Follower)
+ participant region2 as Region 2 (Follower → Leader)
+ participant region_dir as Region Directory
+
+ gc_worker->>region1: Start GC, get region manifest
+ activate region1
+ region1-->>gc_worker: Region 1 manifest
+ deactivate region1
+ gc_worker->>region_dir: Scan region directory
+
+ Note over region1,region2: Region Migration Occurs
+ region1-->>region2: Downgrade to Follower
+ region2-->>region1: Becomes Leader
+
+ region2->>region_dir: Add new file
+
+ gc_worker->>region_dir: Continue scanning
+ gc_worker-->>region_dir: Discovers new file
+ Note over gc_worker: New file not in Region 1's manifest
+ gc_worker->>gc_worker: Mark file as orphan(incorrectly)
+```
+This race could cause the GC worker to incorrectly mark the new file as an orphan and delete it, if the configured lingering time for orphan files (files not referenced anywhere, whether in use or not) is not long enough.
+
+A good-enough solution could be to take a lock that prevents the GC worker from running on a region while region migration is in progress on that region, and vice versa.
+
+The race condition between the GC worker and repartition also needs to be considered carefully. For now, acquiring locks for both region migration and repartition during the GC worker process could be a simple solution.
## Conclusion and Rationale
diff --git a/flake.lock b/flake.lock
index e410fe9785..0bc3e6f283 100644
--- a/flake.lock
+++ b/flake.lock
@@ -8,11 +8,11 @@
"rust-analyzer-src": "rust-analyzer-src"
},
"locked": {
- "lastModified": 1760078406,
- "narHash": "sha256-JeJK0ZA845PtkCHkfo4KjeI1mYrsr2s3cxBYKhF4BoE=",
+ "lastModified": 1765252472,
+ "narHash": "sha256-byMt/uMi7DJ8tRniFopDFZMO3leSjGp6GS4zWOFT+uQ=",
"owner": "nix-community",
"repo": "fenix",
- "rev": "351277c60d104944122ee389cdf581c5ce2c6732",
+ "rev": "8456b985f6652e3eef0632ee9992b439735c5544",
"type": "github"
},
"original": {
@@ -41,16 +41,16 @@
},
"nixpkgs": {
"locked": {
- "lastModified": 1759994382,
- "narHash": "sha256-wSK+3UkalDZRVHGCRikZ//CyZUJWDJkBDTQX1+G77Ow=",
+ "lastModified": 1764983851,
+ "narHash": "sha256-y7RPKl/jJ/KAP/VKLMghMgXTlvNIJMHKskl8/Uuar7o=",
"owner": "NixOS",
"repo": "nixpkgs",
- "rev": "5da4a26309e796daa7ffca72df93dbe53b8164c7",
+ "rev": "d9bc5c7dceb30d8d6fafa10aeb6aa8a48c218454",
"type": "github"
},
"original": {
"owner": "NixOS",
- "ref": "nixos-25.05",
+ "ref": "nixos-25.11",
"repo": "nixpkgs",
"type": "github"
}
@@ -65,11 +65,11 @@
"rust-analyzer-src": {
"flake": false,
"locked": {
- "lastModified": 1760014945,
- "narHash": "sha256-ySdl7F9+oeWNHVrg3QL/brazqmJvYFEdpGnF3pyoDH8=",
+ "lastModified": 1765120009,
+ "narHash": "sha256-nG76b87rkaDzibWbnB5bYDm6a52b78A+fpm+03pqYIw=",
"owner": "rust-lang",
"repo": "rust-analyzer",
- "rev": "90d2e1ce4dfe7dc49250a8b88a0f08ffdb9cb23f",
+ "rev": "5e3e9c4e61bba8a5e72134b9ffefbef8f531d008",
"type": "github"
},
"original": {
diff --git a/flake.nix b/flake.nix
index 555e4c714e..58c10465a0 100644
--- a/flake.nix
+++ b/flake.nix
@@ -2,7 +2,7 @@
description = "Development environment flake";
inputs = {
- nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.05";
+ nixpkgs.url = "github:NixOS/nixpkgs/nixos-25.11";
fenix = {
url = "github:nix-community/fenix";
inputs.nixpkgs.follows = "nixpkgs";
@@ -48,7 +48,7 @@
gnuplot ## for cargo bench
];
- LD_LIBRARY_PATH = pkgs.lib.makeLibraryPath buildInputs;
+ buildInputs = buildInputs;
NIX_HARDENING_ENABLE = "";
};
});
diff --git a/src/api/src/helper.rs b/src/api/src/helper.rs
index da5fdcfeda..b64e6d0265 100644
--- a/src/api/src/helper.rs
+++ b/src/api/src/helper.rs
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use std::collections::HashSet;
+use std::collections::{BTreeMap, HashSet};
use std::sync::Arc;
use common_decimal::Decimal128;
@@ -20,13 +20,12 @@ use common_decimal::decimal128::{DECIMAL128_DEFAULT_SCALE, DECIMAL128_MAX_PRECIS
use common_time::time::Time;
use common_time::timestamp::TimeUnit;
use common_time::{Date, IntervalDayTime, IntervalMonthDayNano, IntervalYearMonth, Timestamp};
+use datatypes::json::value::{JsonNumber, JsonValue, JsonValueRef, JsonVariant};
use datatypes::prelude::{ConcreteDataType, ValueRef};
use datatypes::types::{
- IntervalType, JsonFormat, StructField, StructType, TimeType, TimestampType,
-};
-use datatypes::value::{
- ListValue, ListValueRef, OrderedF32, OrderedF64, StructValue, StructValueRef, Value,
+ IntervalType, JsonFormat, JsonType, StructField, StructType, TimeType, TimestampType,
};
+use datatypes::value::{ListValueRef, OrderedF32, OrderedF64, StructValueRef, Value};
use datatypes::vectors::VectorRef;
use greptime_proto::v1::column_data_type_extension::TypeExt;
use greptime_proto::v1::ddl_request::Expr;
@@ -34,9 +33,9 @@ use greptime_proto::v1::greptime_request::Request;
use greptime_proto::v1::query_request::Query;
use greptime_proto::v1::value::ValueData;
use greptime_proto::v1::{
- self, ColumnDataTypeExtension, DdlRequest, DecimalTypeExtension, JsonNativeTypeExtension,
- JsonTypeExtension, ListTypeExtension, QueryRequest, Row, SemanticType, StructTypeExtension,
- VectorTypeExtension,
+ self, ColumnDataTypeExtension, DdlRequest, DecimalTypeExtension, DictionaryTypeExtension,
+ JsonList, JsonNativeTypeExtension, JsonObject, JsonTypeExtension, ListTypeExtension,
+ QueryRequest, Row, SemanticType, StructTypeExtension, VectorTypeExtension, json_value,
};
use paste::paste;
use snafu::prelude::*;
@@ -81,6 +80,10 @@ impl ColumnDataTypeWrapper {
pub fn to_parts(&self) -> (ColumnDataType, Option<ColumnDataTypeExtension>) {
(self.datatype, self.datatype_ext.clone())
}
+
+ pub fn into_parts(self) -> (ColumnDataType, Option<ColumnDataTypeExtension>) {
+ (self.datatype, self.datatype_ext)
+ }
}
impl From<ColumnDataTypeWrapper> for ConcreteDataType {
@@ -126,6 +129,7 @@ impl From for ConcreteDataType {
};
ConcreteDataType::json_native_datatype(inner_type.into())
}
+ None => ConcreteDataType::Json(JsonType::null()),
_ => {
// invalid state, type extension is missing or invalid
ConcreteDataType::null_datatype()
@@ -215,6 +219,26 @@ impl From for ConcreteDataType {
ConcreteDataType::null_datatype()
}
}
+ ColumnDataType::Dictionary => {
+ if let Some(TypeExt::DictionaryType(d)) = datatype_wrapper
+ .datatype_ext
+ .as_ref()
+ .and_then(|datatype_ext| datatype_ext.type_ext.as_ref())
+ {
+ let key_type = ColumnDataTypeWrapper {
+ datatype: d.key_datatype(),
+ datatype_ext: d.key_datatype_extension.clone().map(|ext| *ext),
+ };
+ let value_type = ColumnDataTypeWrapper {
+ datatype: d.value_datatype(),
+ datatype_ext: d.value_datatype_extension.clone().map(|ext| *ext),
+ };
+ ConcreteDataType::dictionary_datatype(key_type.into(), value_type.into())
+ } else {
+ // invalid state: type extension not found
+ ConcreteDataType::null_datatype()
+ }
+ }
}
}
}
@@ -338,13 +362,30 @@ impl ColumnDataTypeWrapper {
}),
}
}
+
+ pub fn dictionary_datatype(
+ key_type: ColumnDataTypeWrapper,
+ value_type: ColumnDataTypeWrapper,
+ ) -> Self {
+ ColumnDataTypeWrapper {
+ datatype: ColumnDataType::Dictionary,
+ datatype_ext: Some(ColumnDataTypeExtension {
+ type_ext: Some(TypeExt::DictionaryType(Box::new(DictionaryTypeExtension {
+ key_datatype: key_type.datatype().into(),
+ key_datatype_extension: key_type.datatype_ext.map(Box::new),
+ value_datatype: value_type.datatype().into(),
+ value_datatype_extension: value_type.datatype_ext.map(Box::new),
+ }))),
+ }),
+ }
+ }
}
impl TryFrom<ConcreteDataType> for ColumnDataTypeWrapper {
type Error = error::Error;
fn try_from(datatype: ConcreteDataType) -> Result<Self> {
- let column_datatype = match datatype {
+ let column_datatype = match &datatype {
ConcreteDataType::Boolean(_) => ColumnDataType::Boolean,
ConcreteDataType::Int8(_) => ColumnDataType::Int8,
ConcreteDataType::Int16(_) => ColumnDataType::Int16,
@@ -381,9 +422,8 @@ impl TryFrom<ConcreteDataType> for ColumnDataTypeWrapper {
ConcreteDataType::Vector(_) => ColumnDataType::Vector,
ConcreteDataType::List(_) => ColumnDataType::List,
ConcreteDataType::Struct(_) => ColumnDataType::Struct,
- ConcreteDataType::Null(_)
- | ConcreteDataType::Dictionary(_)
- | ConcreteDataType::Duration(_) => {
+ ConcreteDataType::Dictionary(_) => ColumnDataType::Dictionary,
+ ConcreteDataType::Null(_) | ConcreteDataType::Duration(_) => {
return error::IntoColumnDataTypeSnafu { from: datatype }.fail();
}
};
@@ -404,16 +444,22 @@ impl TryFrom<ConcreteDataType> for ColumnDataTypeWrapper {
JsonFormat::Jsonb => Some(ColumnDataTypeExtension {
type_ext: Some(TypeExt::JsonType(JsonTypeExtension::JsonBinary.into())),
}),
- JsonFormat::Native(inner) => {
- let inner_type = ColumnDataTypeWrapper::try_from(*inner.clone())?;
- Some(ColumnDataTypeExtension {
- type_ext: Some(TypeExt::JsonNativeType(Box::new(
- JsonNativeTypeExtension {
- datatype: inner_type.datatype.into(),
- datatype_extension: inner_type.datatype_ext.map(Box::new),
- },
- ))),
- })
+ JsonFormat::Native(native_type) => {
+ if native_type.is_null() {
+ None
+ } else {
+ let native_type = ConcreteDataType::from(native_type.as_ref());
+ let (datatype, datatype_extension) =
+ ColumnDataTypeWrapper::try_from(native_type)?.into_parts();
+ Some(ColumnDataTypeExtension {
+ type_ext: Some(TypeExt::JsonNativeType(Box::new(
+ JsonNativeTypeExtension {
+ datatype: datatype as i32,
+ datatype_extension: datatype_extension.map(Box::new),
+ },
+ ))),
+ })
+ }
}
}
} else {
@@ -463,6 +509,25 @@ impl TryFrom<ConcreteDataType> for ColumnDataTypeWrapper {
None
}
}
+ ColumnDataType::Dictionary => {
+ if let ConcreteDataType::Dictionary(dict_type) = &datatype {
+ let key_type = ColumnDataTypeWrapper::try_from(dict_type.key_type().clone())?;
+ let value_type =
+ ColumnDataTypeWrapper::try_from(dict_type.value_type().clone())?;
+ Some(ColumnDataTypeExtension {
+ type_ext: Some(TypeExt::DictionaryType(Box::new(
+ DictionaryTypeExtension {
+ key_datatype: key_type.datatype.into(),
+ key_datatype_extension: key_type.datatype_ext.map(Box::new),
+ value_datatype: value_type.datatype.into(),
+ value_datatype_extension: value_type.datatype_ext.map(Box::new),
+ },
+ ))),
+ })
+ } else {
+ None
+ }
+ }
_ => None,
};
Ok(Self {
@@ -601,6 +666,9 @@ pub fn values_with_capacity(datatype: ColumnDataType, capacity: usize) -> Values
struct_values: Vec::with_capacity(capacity),
..Default::default()
},
+ ColumnDataType::Dictionary => Values {
+ ..Default::default()
+ },
}
}
@@ -640,6 +708,7 @@ fn ddl_request_type(request: &DdlRequest) -> &'static str {
Some(Expr::CreateView(_)) => "ddl.create_view",
Some(Expr::DropView(_)) => "ddl.drop_view",
Some(Expr::AlterDatabase(_)) => "ddl.alter_database",
+ Some(Expr::CommentOn(_)) => "ddl.comment_on",
None => "ddl.empty",
}
}
@@ -801,21 +870,8 @@ pub fn pb_value_to_value_ref<'a>(
}
ValueData::JsonValue(inner_value) => {
- let json_datatype_ext = datatype_ext
- .as_ref()
- .and_then(|ext| {
- if let Some(TypeExt::JsonNativeType(l)) = &ext.type_ext {
- Some(l)
- } else {
- None
- }
- })
- .expect("json value must contain datatype ext");
-
- ValueRef::Json(Box::new(pb_value_to_value_ref(
- inner_value,
- json_datatype_ext.datatype_extension.as_deref(),
- )))
+ let value = decode_json_value(inner_value);
+ ValueRef::Json(Box::new(value))
}
}
}
@@ -839,125 +895,64 @@ pub fn is_column_type_value_eq(
.unwrap_or(false)
}
-/// Convert value into proto's value.
-pub fn to_proto_value(value: Value) -> v1::Value {
- match value {
- Value::Null => v1::Value { value_data: None },
- Value::Boolean(v) => v1::Value {
- value_data: Some(ValueData::BoolValue(v)),
- },
- Value::UInt8(v) => v1::Value {
- value_data: Some(ValueData::U8Value(v.into())),
- },
- Value::UInt16(v) => v1::Value {
- value_data: Some(ValueData::U16Value(v.into())),
- },
- Value::UInt32(v) => v1::Value {
- value_data: Some(ValueData::U32Value(v)),
- },
- Value::UInt64(v) => v1::Value {
- value_data: Some(ValueData::U64Value(v)),
- },
- Value::Int8(v) => v1::Value {
- value_data: Some(ValueData::I8Value(v.into())),
- },
- Value::Int16(v) => v1::Value {
- value_data: Some(ValueData::I16Value(v.into())),
- },
- Value::Int32(v) => v1::Value {
- value_data: Some(ValueData::I32Value(v)),
- },
- Value::Int64(v) => v1::Value {
- value_data: Some(ValueData::I64Value(v)),
- },
- Value::Float32(v) => v1::Value {
- value_data: Some(ValueData::F32Value(*v)),
- },
- Value::Float64(v) => v1::Value {
- value_data: Some(ValueData::F64Value(*v)),
- },
- Value::String(v) => v1::Value {
- value_data: Some(ValueData::StringValue(v.as_utf8().to_string())),
- },
- Value::Binary(v) => v1::Value {
- value_data: Some(ValueData::BinaryValue(v.to_vec())),
- },
- Value::Date(v) => v1::Value {
- value_data: Some(ValueData::DateValue(v.val())),
- },
- Value::Timestamp(v) => match v.unit() {
- TimeUnit::Second => v1::Value {
- value_data: Some(ValueData::TimestampSecondValue(v.value())),
- },
- TimeUnit::Millisecond => v1::Value {
- value_data: Some(ValueData::TimestampMillisecondValue(v.value())),
- },
- TimeUnit::Microsecond => v1::Value {
- value_data: Some(ValueData::TimestampMicrosecondValue(v.value())),
- },
- TimeUnit::Nanosecond => v1::Value {
- value_data: Some(ValueData::TimestampNanosecondValue(v.value())),
- },
- },
- Value::Time(v) => match v.unit() {
- TimeUnit::Second => v1::Value {
- value_data: Some(ValueData::TimeSecondValue(v.value())),
- },
- TimeUnit::Millisecond => v1::Value {
- value_data: Some(ValueData::TimeMillisecondValue(v.value())),
- },
- TimeUnit::Microsecond => v1::Value {
- value_data: Some(ValueData::TimeMicrosecondValue(v.value())),
- },
- TimeUnit::Nanosecond => v1::Value {
- value_data: Some(ValueData::TimeNanosecondValue(v.value())),
- },
- },
- Value::IntervalYearMonth(v) => v1::Value {
- value_data: Some(ValueData::IntervalYearMonthValue(v.to_i32())),
- },
- Value::IntervalDayTime(v) => v1::Value {
- value_data: Some(ValueData::IntervalDayTimeValue(v.to_i64())),
- },
- Value::IntervalMonthDayNano(v) => v1::Value {
- value_data: Some(ValueData::IntervalMonthDayNanoValue(
- convert_month_day_nano_to_pb(v),
- )),
- },
- Value::Decimal128(v) => v1::Value {
- value_data: Some(ValueData::Decimal128Value(convert_to_pb_decimal128(v))),
- },
- Value::List(list_value) => v1::Value {
- value_data: Some(ValueData::ListValue(v1::ListValue {
- items: convert_list_to_pb_values(list_value),
+fn encode_json_value(value: JsonValue) -> v1::JsonValue {
+ fn helper(json: JsonVariant) -> v1::JsonValue {
+ let value = match json {
+ JsonVariant::Null => None,
+ JsonVariant::Bool(x) => Some(json_value::Value::Boolean(x)),
+ JsonVariant::Number(x) => Some(match x {
+ JsonNumber::PosInt(i) => json_value::Value::Uint(i),
+ JsonNumber::NegInt(i) => json_value::Value::Int(i),
+ JsonNumber::Float(f) => json_value::Value::Float(f.0),
+ }),
+ JsonVariant::String(x) => Some(json_value::Value::Str(x)),
+ JsonVariant::Array(x) => Some(json_value::Value::Array(JsonList {
+ items: x.into_iter().map(helper).collect::<Vec<_>>(),
})),
- },
- Value::Struct(struct_value) => v1::Value {
- value_data: Some(ValueData::StructValue(v1::StructValue {
- items: convert_struct_to_pb_values(struct_value),
- })),
- },
- Value::Json(v) => v1::Value {
- value_data: Some(ValueData::JsonValue(Box::new(to_proto_value(*v)))),
- },
- Value::Duration(_) => v1::Value { value_data: None },
+ JsonVariant::Object(x) => {
+ let entries = x
+ .into_iter()
+ .map(|(key, v)| v1::json_object::Entry {
+ key,
+ value: Some(helper(v)),
+ })
+ .collect::<Vec<_>>();
+ Some(json_value::Value::Object(JsonObject { entries }))
+ }
+ };
+ v1::JsonValue { value }
}
+ helper(value.into_variant())
}
-fn convert_list_to_pb_values(list_value: ListValue) -> Vec<v1::Value> {
- list_value
- .take_items()
- .into_iter()
- .map(to_proto_value)
- .collect()
-}
-
-fn convert_struct_to_pb_values(struct_value: StructValue) -> Vec<v1::Value> {
- struct_value
- .take_items()
- .into_iter()
- .map(to_proto_value)
- .collect()
+fn decode_json_value(value: &v1::JsonValue) -> JsonValueRef<'_> {
+ let Some(value) = &value.value else {
+ return JsonValueRef::null();
+ };
+ match value {
+ json_value::Value::Boolean(x) => (*x).into(),
+ json_value::Value::Int(x) => (*x).into(),
+ json_value::Value::Uint(x) => (*x).into(),
+ json_value::Value::Float(x) => (*x).into(),
+ json_value::Value::Str(x) => (x.as_str()).into(),
+ json_value::Value::Array(array) => array
+ .items
+ .iter()
+ .map(|x| decode_json_value(x).into_variant())
+ .collect::<Vec<_>>()
+ .into(),
+ json_value::Value::Object(x) => x
+ .entries
+ .iter()
+ .filter_map(|entry| {
+ entry
+ .value
+ .as_ref()
+ .map(|v| (entry.key.as_str(), decode_json_value(v).into_variant()))
+ })
+ .collect::<BTreeMap<_, _>>()
+ .into(),
+ }
}
/// Returns the [ColumnDataTypeWrapper] of the value.
@@ -1006,14 +1001,14 @@ pub fn vectors_to_rows<'a>(
let mut rows = vec![Row { values: vec![] }; row_count];
for column in columns {
for (row_index, row) in rows.iter_mut().enumerate() {
- row.values.push(value_to_grpc_value(column.get(row_index)))
+ row.values.push(to_grpc_value(column.get(row_index)))
}
}
rows
}
-pub fn value_to_grpc_value(value: Value) -> GrpcValue {
+pub fn to_grpc_value(value: Value) -> GrpcValue {
GrpcValue {
value_data: match value {
Value::Null => None,
@@ -1053,7 +1048,7 @@ pub fn value_to_grpc_value(value: Value) -> GrpcValue {
let items = list_value
.take_items()
.into_iter()
- .map(value_to_grpc_value)
+ .map(to_grpc_value)
.collect();
Some(ValueData::ListValue(v1::ListValue { items }))
}
@@ -1061,13 +1056,11 @@ pub fn value_to_grpc_value(value: Value) -> GrpcValue {
let items = struct_value
.take_items()
.into_iter()
- .map(value_to_grpc_value)
+ .map(to_grpc_value)
.collect();
Some(ValueData::StructValue(v1::StructValue { items }))
}
- Value::Json(inner_value) => Some(ValueData::JsonValue(Box::new(value_to_grpc_value(
- *inner_value,
- )))),
+ Value::Json(v) => Some(ValueData::JsonValue(encode_json_value(*v))),
Value::Duration(_) => unreachable!(),
},
}
@@ -1163,6 +1156,7 @@ mod tests {
use common_time::interval::IntervalUnit;
use datatypes::scalars::ScalarVector;
use datatypes::types::{Int8Type, Int32Type, UInt8Type, UInt32Type};
+ use datatypes::value::{ListValue, StructValue};
use datatypes::vectors::{
BooleanVector, DateVector, Float32Vector, PrimitiveVector, StringVector,
};
@@ -1259,6 +1253,9 @@ mod tests {
let values = values_with_capacity(ColumnDataType::Json, 2);
assert_eq!(2, values.json_values.capacity());
assert_eq!(2, values.string_values.capacity());
+
+ let values = values_with_capacity(ColumnDataType::Dictionary, 2);
+ assert!(values.bool_values.is_empty());
}
#[test]
@@ -1355,6 +1352,17 @@ mod tests {
ConcreteDataType::list_datatype(Arc::new(ConcreteDataType::string_datatype())),
ColumnDataTypeWrapper::list_datatype(ColumnDataTypeWrapper::string_datatype()).into()
);
+ assert_eq!(
+ ConcreteDataType::dictionary_datatype(
+ ConcreteDataType::int32_datatype(),
+ ConcreteDataType::string_datatype()
+ ),
+ ColumnDataTypeWrapper::dictionary_datatype(
+ ColumnDataTypeWrapper::int32_datatype(),
+ ColumnDataTypeWrapper::string_datatype()
+ )
+ .into()
+ );
let struct_type = StructType::new(Arc::new(vec![
StructField::new("id".to_string(), ConcreteDataType::int64_datatype(), true),
StructField::new(
@@ -1525,6 +1533,18 @@ mod tests {
ColumnDataTypeWrapper::vector_datatype(3),
ConcreteDataType::vector_datatype(3).try_into().unwrap()
);
+ assert_eq!(
+ ColumnDataTypeWrapper::dictionary_datatype(
+ ColumnDataTypeWrapper::int32_datatype(),
+ ColumnDataTypeWrapper::string_datatype()
+ ),
+ ConcreteDataType::dictionary_datatype(
+ ConcreteDataType::int32_datatype(),
+ ConcreteDataType::string_datatype()
+ )
+ .try_into()
+ .unwrap()
+ );
let result: Result<ColumnDataTypeWrapper> = ConcreteDataType::null_datatype().try_into();
assert!(result.is_err());
@@ -1580,6 +1600,20 @@ mod tests {
datatype_extension: Some(Box::new(ColumnDataTypeExtension {
type_ext: Some(TypeExt::StructType(StructTypeExtension {
fields: vec![
+ v1::StructField {
+ name: "address".to_string(),
+ datatype: ColumnDataTypeWrapper::string_datatype()
+ .datatype()
+ .into(),
+ datatype_extension: None
+ },
+ v1::StructField {
+ name: "age".to_string(),
+ datatype: ColumnDataTypeWrapper::int64_datatype()
+ .datatype()
+ .into(),
+ datatype_extension: None
+ },
v1::StructField {
name: "id".to_string(),
datatype: ColumnDataTypeWrapper::int64_datatype()
@@ -1594,20 +1628,6 @@ mod tests {
.into(),
datatype_extension: None
},
- v1::StructField {
- name: "age".to_string(),
- datatype: ColumnDataTypeWrapper::int32_datatype()
- .datatype()
- .into(),
- datatype_extension: None
- },
- v1::StructField {
- name: "address".to_string(),
- datatype: ColumnDataTypeWrapper::string_datatype()
- .datatype()
- .into(),
- datatype_extension: None
- }
]
}))
}))
@@ -1740,7 +1760,7 @@ mod tests {
Arc::new(ConcreteDataType::boolean_datatype()),
));
- let pb_value = to_proto_value(value);
+ let pb_value = to_grpc_value(value);
match pb_value.value_data.unwrap() {
ValueData::ListValue(pb_list_value) => {
@@ -1769,7 +1789,7 @@ mod tests {
.unwrap(),
);
- let pb_value = to_proto_value(value);
+ let pb_value = to_grpc_value(value);
match pb_value.value_data.unwrap() {
ValueData::StructValue(pb_struct_value) => {
@@ -1778,4 +1798,199 @@ mod tests {
_ => panic!("Unexpected value type"),
}
}
+
+ #[test]
+ fn test_encode_decode_json_value() {
+ let json = JsonValue::null();
+ let proto = encode_json_value(json.clone());
+ assert!(proto.value.is_none());
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = true.into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(proto.value, Some(json_value::Value::Boolean(true)));
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = (-1i64).into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(proto.value, Some(json_value::Value::Int(-1)));
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = 1u64.into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(proto.value, Some(json_value::Value::Uint(1)));
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = 1.0f64.into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(proto.value, Some(json_value::Value::Float(1.0)));
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = "s".into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(proto.value, Some(json_value::Value::Str("s".to_string())));
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = [1i64, 2, 3].into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(
+ proto.value,
+ Some(json_value::Value::Array(JsonList {
+ items: vec![
+ v1::JsonValue {
+ value: Some(json_value::Value::Int(1))
+ },
+ v1::JsonValue {
+ value: Some(json_value::Value::Int(2))
+ },
+ v1::JsonValue {
+ value: Some(json_value::Value::Int(3))
+ }
+ ]
+ }))
+ );
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = [(); 0].into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(
+ proto.value,
+ Some(json_value::Value::Array(JsonList { items: vec![] }))
+ );
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = [("k3", 3i64), ("k2", 2i64), ("k1", 1i64)].into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(
+ proto.value,
+ Some(json_value::Value::Object(JsonObject {
+ entries: vec![
+ v1::json_object::Entry {
+ key: "k1".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Int(1))
+ }),
+ },
+ v1::json_object::Entry {
+ key: "k2".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Int(2))
+ }),
+ },
+ v1::json_object::Entry {
+ key: "k3".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Int(3))
+ }),
+ },
+ ]
+ }))
+ );
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = [("null", ()); 0].into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(
+ proto.value,
+ Some(json_value::Value::Object(JsonObject { entries: vec![] }))
+ );
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+
+ let json: JsonValue = [
+ ("null", JsonVariant::from(())),
+ ("bool", false.into()),
+ ("list", ["hello", "world"].into()),
+ (
+ "object",
+ [
+ ("positive_i", JsonVariant::from(42u64)),
+ ("negative_i", (-42i64).into()),
+ ("nested", [("what", "blah")].into()),
+ ]
+ .into(),
+ ),
+ ]
+ .into();
+ let proto = encode_json_value(json.clone());
+ assert_eq!(
+ proto.value,
+ Some(json_value::Value::Object(JsonObject {
+ entries: vec![
+ v1::json_object::Entry {
+ key: "bool".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Boolean(false))
+ }),
+ },
+ v1::json_object::Entry {
+ key: "list".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Array(JsonList {
+ items: vec![
+ v1::JsonValue {
+ value: Some(json_value::Value::Str("hello".to_string()))
+ },
+ v1::JsonValue {
+ value: Some(json_value::Value::Str("world".to_string()))
+ },
+ ]
+ }))
+ }),
+ },
+ v1::json_object::Entry {
+ key: "null".to_string(),
+ value: Some(v1::JsonValue { value: None }),
+ },
+ v1::json_object::Entry {
+ key: "object".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Object(JsonObject {
+ entries: vec![
+ v1::json_object::Entry {
+ key: "negative_i".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Int(-42))
+ }),
+ },
+ v1::json_object::Entry {
+ key: "nested".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Object(JsonObject {
+ entries: vec![v1::json_object::Entry {
+ key: "what".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Str(
+ "blah".to_string()
+ ))
+ }),
+ },]
+ }))
+ }),
+ },
+ v1::json_object::Entry {
+ key: "positive_i".to_string(),
+ value: Some(v1::JsonValue {
+ value: Some(json_value::Value::Uint(42))
+ }),
+ },
+ ]
+ }))
+ }),
+ },
+ ]
+ }))
+ );
+ let value = decode_json_value(&proto);
+ assert_eq!(json.as_ref(), value);
+ }
}
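
For orientation, here is a minimal round-trip sketch of the new proto JSON helpers introduced above. `encode_json_value` and `decode_json_value` are private to `src/api/src/helper.rs`, so this mirrors the unit tests rather than a public API; the key names and values are arbitrary examples.

```rust
use datatypes::json::value::JsonValue;

// Sketch: encode a JSON value into the new v1::JsonValue proto form and decode it back.
// As the unit tests above show, object entries round-trip in key-sorted order.
fn json_roundtrip_sketch() {
    let json: JsonValue = [("b", 2i64), ("a", 1i64)].into();
    let proto = encode_json_value(json.clone()); // -> v1::JsonValue
    let decoded = decode_json_value(&proto);     // JsonValueRef borrowing from `proto`
    assert_eq!(json.as_ref(), decoded);
}
```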
diff --git a/src/auth/Cargo.toml b/src/auth/Cargo.toml
index 905bd72373..9c91023da5 100644
--- a/src/auth/Cargo.toml
+++ b/src/auth/Cargo.toml
@@ -15,11 +15,11 @@ workspace = true
api.workspace = true
async-trait.workspace = true
common-base.workspace = true
+common-config.workspace = true
common-error.workspace = true
common-macro.workspace = true
common-telemetry.workspace = true
digest = "0.10"
-notify.workspace = true
sha1 = "0.10"
snafu.workspace = true
sql.workspace = true
diff --git a/src/auth/src/error.rs b/src/auth/src/error.rs
index d28a85f828..a8dfe7f629 100644
--- a/src/auth/src/error.rs
+++ b/src/auth/src/error.rs
@@ -75,11 +75,12 @@ pub enum Error {
username: String,
},
- #[snafu(display("Failed to initialize a watcher for file {}", path))]
+ #[snafu(display("Failed to initialize a file watcher"))]
FileWatch {
- path: String,
#[snafu(source)]
- error: notify::Error,
+ source: common_config::error::Error,
+ #[snafu(implicit)]
+ location: Location,
},
#[snafu(display("User is not authorized to perform this action"))]
diff --git a/src/auth/src/user_provider/watch_file_user_provider.rs b/src/auth/src/user_provider/watch_file_user_provider.rs
index 4df17502b7..451efd5cc4 100644
--- a/src/auth/src/user_provider/watch_file_user_provider.rs
+++ b/src/auth/src/user_provider/watch_file_user_provider.rs
@@ -12,16 +12,14 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use std::path::Path;
-use std::sync::mpsc::channel;
use std::sync::{Arc, Mutex};
use async_trait::async_trait;
+use common_config::file_watcher::{FileWatcherBuilder, FileWatcherConfig};
use common_telemetry::{info, warn};
-use notify::{EventKind, RecursiveMode, Watcher};
-use snafu::{ResultExt, ensure};
+use snafu::ResultExt;
-use crate::error::{FileWatchSnafu, InvalidConfigSnafu, Result};
+use crate::error::{FileWatchSnafu, Result};
use crate::user_provider::{UserInfoMap, authenticate_with_credential, load_credential_from_file};
use crate::{Identity, Password, UserInfoRef, UserProvider};
@@ -41,61 +39,36 @@ impl WatchFileUserProvider {
pub fn new(filepath: &str) -> Result {
let credential = load_credential_from_file(filepath)?;
let users = Arc::new(Mutex::new(credential));
- let this = WatchFileUserProvider {
- users: users.clone(),
- };
- let (tx, rx) = channel::<notify::Result<notify::Event>>();
- let mut debouncer =
- notify::recommended_watcher(tx).context(FileWatchSnafu { path: "" })?;
- let mut dir = Path::new(filepath).to_path_buf();
- ensure!(
- dir.pop(),
- InvalidConfigSnafu {
- value: filepath,
- msg: "UserProvider path must be a file path",
- }
- );
- debouncer
- .watch(&dir, RecursiveMode::NonRecursive)
- .context(FileWatchSnafu { path: filepath })?;
+ let users_clone = users.clone();
+ let filepath_owned = filepath.to_string();
- let filepath = filepath.to_string();
- std::thread::spawn(move || {
- let filename = Path::new(&filepath).file_name();
- let _hold = debouncer;
- while let Ok(res) = rx.recv() {
- if let Ok(event) = res {
- let is_this_file = event.paths.iter().any(|p| p.file_name() == filename);
- let is_relevant_event = matches!(
- event.kind,
- EventKind::Modify(_) | EventKind::Create(_) | EventKind::Remove(_)
+ FileWatcherBuilder::new()
+ .watch_path(filepath)
+ .context(FileWatchSnafu)?
+ .config(FileWatcherConfig::new())
+ .spawn(move || match load_credential_from_file(&filepath_owned) {
+ Ok(credential) => {
+ let mut users = users_clone.lock().expect("users credential must be valid");
+ #[cfg(not(test))]
+ info!("User provider file {} reloaded", &filepath_owned);
+ #[cfg(test)]
+ info!(
+ "User provider file {} reloaded: {:?}",
+ &filepath_owned, credential
);
- if is_this_file && is_relevant_event {
- info!(?event.kind, "User provider file {} changed", &filepath);
- match load_credential_from_file(&filepath) {
- Ok(credential) => {
- let mut users =
- users.lock().expect("users credential must be valid");
- #[cfg(not(test))]
- info!("User provider file {filepath} reloaded");
- #[cfg(test)]
- info!("User provider file {filepath} reloaded: {credential:?}");
- *users = credential;
- }
- Err(err) => {
- warn!(
- ?err,
- "Fail to load credential from file {filepath}; keep the old one",
- )
- }
- }
- }
+ *users = credential;
}
- }
- });
+ Err(err) => {
+ warn!(
+ ?err,
+ "Fail to load credential from file {}; keep the old one", &filepath_owned
+ )
+ }
+ })
+ .context(FileWatchSnafu)?;
- Ok(this)
+ Ok(WatchFileUserProvider { users })
}
}
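
For reference, the file-watching pattern the provider now delegates to, distilled from the hunk above into a standalone sketch. The `FileWatcherBuilder`/`FileWatcherConfig` calls are taken as they appear in the diff; the path and the callback body are placeholders.

```rust
use common_config::file_watcher::{FileWatcherBuilder, FileWatcherConfig};

// Sketch: watch a single file and run a callback whenever it changes.
fn watch_file_sketch(path: &str) -> Result<(), common_config::error::Error> {
    let path_owned = path.to_string();
    FileWatcherBuilder::new()
        .watch_path(path)?
        .config(FileWatcherConfig::new())
        .spawn(move || {
            // Reload whatever state depends on the watched file here.
            println!("{path_owned} changed");
        })?;
    Ok(())
}
```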
diff --git a/src/catalog/Cargo.toml b/src/catalog/Cargo.toml
index 62674e2572..c41548082d 100644
--- a/src/catalog/Cargo.toml
+++ b/src/catalog/Cargo.toml
@@ -5,7 +5,6 @@ edition.workspace = true
license.workspace = true
[features]
-enterprise = []
testing = []
[lints]
diff --git a/src/catalog/src/kvbackend.rs b/src/catalog/src/kvbackend.rs
index d7f32fc66d..334acc999c 100644
--- a/src/catalog/src/kvbackend.rs
+++ b/src/catalog/src/kvbackend.rs
@@ -12,13 +12,14 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-pub use client::{CachedKvBackend, CachedKvBackendBuilder, MetaKvBackend};
-
mod builder;
mod client;
mod manager;
mod table_cache;
-pub use builder::KvBackendCatalogManagerBuilder;
+pub use builder::{
+ CatalogManagerConfigurator, CatalogManagerConfiguratorRef, KvBackendCatalogManagerBuilder,
+};
+pub use client::{CachedKvBackend, CachedKvBackendBuilder, MetaKvBackend};
pub use manager::KvBackendCatalogManager;
pub use table_cache::{TableCache, TableCacheRef, new_table_cache};
diff --git a/src/catalog/src/kvbackend/builder.rs b/src/catalog/src/kvbackend/builder.rs
index 247a111124..de56f81c0f 100644
--- a/src/catalog/src/kvbackend/builder.rs
+++ b/src/catalog/src/kvbackend/builder.rs
@@ -12,9 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+use std::collections::HashMap;
use std::sync::Arc;
use common_catalog::consts::DEFAULT_CATALOG_NAME;
+use common_error::ext::BoxedError;
use common_meta::cache::LayeredCacheRegistryRef;
use common_meta::key::TableMetadataManager;
use common_meta::key::flow::FlowMetadataManager;
@@ -23,24 +25,34 @@ use common_procedure::ProcedureManagerRef;
use moka::sync::Cache;
use partition::manager::PartitionRuleManager;
-#[cfg(feature = "enterprise")]
-use crate::information_schema::InformationSchemaTableFactoryRef;
-use crate::information_schema::{InformationExtensionRef, InformationSchemaProvider};
+use crate::information_schema::{
+ InformationExtensionRef, InformationSchemaProvider, InformationSchemaTableFactoryRef,
+};
use crate::kvbackend::KvBackendCatalogManager;
use crate::kvbackend::manager::{CATALOG_CACHE_MAX_CAPACITY, SystemCatalog};
use crate::process_manager::ProcessManagerRef;
use crate::system_schema::numbers_table_provider::NumbersTableProvider;
use crate::system_schema::pg_catalog::PGCatalogProvider;
+/// The configurator that customizes or enhances the [`KvBackendCatalogManagerBuilder`].
+#[async_trait::async_trait]
+pub trait CatalogManagerConfigurator<C>: Send + Sync {
+ async fn configure(
+ &self,
+ builder: KvBackendCatalogManagerBuilder,
+ ctx: C,
+ ) -> std::result::Result<KvBackendCatalogManagerBuilder, BoxedError>;
+}
+
+pub type CatalogManagerConfiguratorRef<C> = Arc<dyn CatalogManagerConfigurator<C>>;
+
pub struct KvBackendCatalogManagerBuilder {
information_extension: InformationExtensionRef,
backend: KvBackendRef,
cache_registry: LayeredCacheRegistryRef,
procedure_manager: Option<ProcedureManagerRef>,
process_manager: Option<ProcessManagerRef>,
- #[cfg(feature = "enterprise")]
- extra_information_table_factories:
- std::collections::HashMap<String, InformationSchemaTableFactoryRef>,
+ extra_information_table_factories: HashMap<String, InformationSchemaTableFactoryRef>,
}
impl KvBackendCatalogManagerBuilder {
@@ -55,8 +67,7 @@ impl KvBackendCatalogManagerBuilder {
cache_registry,
procedure_manager: None,
process_manager: None,
- #[cfg(feature = "enterprise")]
- extra_information_table_factories: std::collections::HashMap::new(),
+ extra_information_table_factories: HashMap::new(),
}
}
@@ -71,10 +82,9 @@ impl KvBackendCatalogManagerBuilder {
}
/// Sets the extra information tables.
- #[cfg(feature = "enterprise")]
pub fn with_extra_information_table_factories(
mut self,
- factories: std::collections::HashMap<String, InformationSchemaTableFactoryRef>,
+ factories: HashMap<String, InformationSchemaTableFactoryRef>,
) -> Self {
self.extra_information_table_factories = factories;
self
@@ -87,7 +97,6 @@ impl KvBackendCatalogManagerBuilder {
cache_registry,
procedure_manager,
process_manager,
- #[cfg(feature = "enterprise")]
extra_information_table_factories,
} = self;
Arc::new_cyclic(|me| KvBackendCatalogManager {
@@ -111,7 +120,6 @@ impl KvBackendCatalogManagerBuilder {
process_manager.clone(),
backend.clone(),
);
- #[cfg(feature = "enterprise")]
let provider = provider
.with_extra_table_factories(extra_information_table_factories.clone());
Arc::new(provider)
@@ -123,7 +131,6 @@ impl KvBackendCatalogManagerBuilder {
numbers_table_provider: NumbersTableProvider,
backend,
process_manager,
- #[cfg(feature = "enterprise")]
extra_information_table_factories,
},
cache_registry,
diff --git a/src/catalog/src/kvbackend/manager.rs b/src/catalog/src/kvbackend/manager.rs
index 29e0cc4ce8..7852142c6a 100644
--- a/src/catalog/src/kvbackend/manager.rs
+++ b/src/catalog/src/kvbackend/manager.rs
@@ -53,9 +53,9 @@ use crate::error::{
CacheNotFoundSnafu, GetTableCacheSnafu, InvalidTableInfoInCatalogSnafu, ListCatalogsSnafu,
ListSchemasSnafu, ListTablesSnafu, Result, TableMetadataManagerSnafu,
};
-#[cfg(feature = "enterprise")]
-use crate::information_schema::InformationSchemaTableFactoryRef;
-use crate::information_schema::{InformationExtensionRef, InformationSchemaProvider};
+use crate::information_schema::{
+ InformationExtensionRef, InformationSchemaProvider, InformationSchemaTableFactoryRef,
+};
use crate::kvbackend::TableCacheRef;
use crate::process_manager::ProcessManagerRef;
use crate::system_schema::SystemSchemaProvider;
@@ -557,7 +557,6 @@ pub(super) struct SystemCatalog {
pub(super) numbers_table_provider: NumbersTableProvider,
pub(super) backend: KvBackendRef,
pub(super) process_manager: Option,
- #[cfg(feature = "enterprise")]
pub(super) extra_information_table_factories:
std::collections::HashMap<String, InformationSchemaTableFactoryRef>,
}
@@ -628,7 +627,6 @@ impl SystemCatalog {
self.process_manager.clone(),
self.backend.clone(),
);
- #[cfg(feature = "enterprise")]
let provider = provider
.with_extra_table_factories(self.extra_information_table_factories.clone());
Arc::new(provider)
diff --git a/src/catalog/src/system_schema/information_schema.rs b/src/catalog/src/system_schema/information_schema.rs
index 44609ade34..9715aa9402 100644
--- a/src/catalog/src/system_schema/information_schema.rs
+++ b/src/catalog/src/system_schema/information_schema.rs
@@ -22,7 +22,6 @@ mod procedure_info;
pub mod process_list;
pub mod region_peers;
mod region_statistics;
-mod runtime_metrics;
pub mod schemata;
mod ssts;
mod table_constraints;
@@ -65,7 +64,6 @@ use crate::system_schema::information_schema::information_memory_table::get_sche
use crate::system_schema::information_schema::key_column_usage::InformationSchemaKeyColumnUsage;
use crate::system_schema::information_schema::partitions::InformationSchemaPartitions;
use crate::system_schema::information_schema::region_peers::InformationSchemaRegionPeers;
-use crate::system_schema::information_schema::runtime_metrics::InformationSchemaMetrics;
use crate::system_schema::information_schema::schemata::InformationSchemaSchemata;
use crate::system_schema::information_schema::ssts::{
InformationSchemaSstsIndexMeta, InformationSchemaSstsManifest, InformationSchemaSstsStorage,
@@ -119,7 +117,6 @@ macro_rules! setup_memory_table {
};
}
-#[cfg(feature = "enterprise")]
pub struct MakeInformationTableRequest {
pub catalog_name: String,
pub catalog_manager: Weak,
@@ -130,12 +127,10 @@ pub struct MakeInformationTableRequest {
///
/// This trait allows for extensibility of the information schema by providing
/// a way to dynamically create custom information schema tables.
-#[cfg(feature = "enterprise")]
pub trait InformationSchemaTableFactory {
fn make_information_table(&self, req: MakeInformationTableRequest) -> SystemTableRef;
}
-#[cfg(feature = "enterprise")]
pub type InformationSchemaTableFactoryRef = Arc<dyn InformationSchemaTableFactory>;
/// The `information_schema` tables info provider.
@@ -145,9 +140,7 @@ pub struct InformationSchemaProvider {
process_manager: Option<ProcessManagerRef>,
flow_metadata_manager: Arc<FlowMetadataManager>,
tables: HashMap<String, TableRef>,
- #[allow(dead_code)]
kv_backend: KvBackendRef,
- #[cfg(feature = "enterprise")]
extra_table_factories: HashMap<String, InformationSchemaTableFactoryRef>,
}
@@ -168,7 +161,6 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
}
fn system_table(&self, name: &str) -> Option<TableRef> {
- #[cfg(feature = "enterprise")]
if let Some(factory) = self.extra_table_factories.get(name) {
let req = MakeInformationTableRequest {
catalog_name: self.catalog_name.clone(),
@@ -216,7 +208,6 @@ impl SystemSchemaProviderInner for InformationSchemaProvider {
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
- RUNTIME_METRICS => Some(Arc::new(InformationSchemaMetrics::new())),
PARTITIONS => Some(Arc::new(InformationSchemaPartitions::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
@@ -284,7 +275,6 @@ impl InformationSchemaProvider {
process_manager,
tables: HashMap::new(),
kv_backend,
- #[cfg(feature = "enterprise")]
extra_table_factories: HashMap::new(),
};
@@ -293,7 +283,6 @@ impl InformationSchemaProvider {
provider
}
- #[cfg(feature = "enterprise")]
pub(crate) fn with_extra_table_factories(
mut self,
factories: HashMap,
@@ -311,10 +300,6 @@ impl InformationSchemaProvider {
// authentication details, and other critical information.
// Only put these tables under `greptime` catalog to prevent info leak.
if self.catalog_name == DEFAULT_CATALOG_NAME {
- tables.insert(
- RUNTIME_METRICS.to_string(),
- self.build_table(RUNTIME_METRICS).unwrap(),
- );
tables.insert(
BUILD_INFO.to_string(),
self.build_table(BUILD_INFO).unwrap(),
@@ -365,7 +350,6 @@ impl InformationSchemaProvider {
if let Some(process_list) = self.build_table(PROCESS_LIST) {
tables.insert(PROCESS_LIST.to_string(), process_list);
}
- #[cfg(feature = "enterprise")]
for name in self.extra_table_factories.keys() {
tables.insert(name.clone(), self.build_table(name).expect(name));
}
@@ -444,7 +428,7 @@ pub trait InformationExtension {
}
/// The request to inspect the datanode.
-#[derive(Debug, Clone, PartialEq, Eq)]
+#[derive(Debug, Clone, PartialEq)]
pub struct DatanodeInspectRequest {
/// Kind to fetch from datanode.
pub kind: DatanodeInspectKind,
diff --git a/src/catalog/src/system_schema/information_schema/partitions.rs b/src/catalog/src/system_schema/information_schema/partitions.rs
index 68f4f83051..b9396fe554 100644
--- a/src/catalog/src/system_schema/information_schema/partitions.rs
+++ b/src/catalog/src/system_schema/information_schema/partitions.rs
@@ -211,6 +211,7 @@ struct InformationSchemaPartitionsBuilder {
partition_names: StringVectorBuilder,
partition_ordinal_positions: Int64VectorBuilder,
partition_expressions: StringVectorBuilder,
+ partition_descriptions: StringVectorBuilder,
create_times: TimestampSecondVectorBuilder,
partition_ids: UInt64VectorBuilder,
}
@@ -231,6 +232,7 @@ impl InformationSchemaPartitionsBuilder {
partition_names: StringVectorBuilder::with_capacity(INIT_CAPACITY),
partition_ordinal_positions: Int64VectorBuilder::with_capacity(INIT_CAPACITY),
partition_expressions: StringVectorBuilder::with_capacity(INIT_CAPACITY),
+ partition_descriptions: StringVectorBuilder::with_capacity(INIT_CAPACITY),
create_times: TimestampSecondVectorBuilder::with_capacity(INIT_CAPACITY),
partition_ids: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
}
@@ -319,6 +321,21 @@ impl InformationSchemaPartitionsBuilder {
return;
}
+ // Get partition column names (shared by all partitions)
+ // In MySQL, PARTITION_EXPRESSION is the partitioning function expression (e.g., column name)
+ let partition_columns: String = table_info
+ .meta
+ .partition_column_names()
+ .cloned()
+ .collect::<Vec<_>>()
+ .join(", ");
+
+ let partition_expr_str = if partition_columns.is_empty() {
+ None
+ } else {
+ Some(partition_columns)
+ };
+
for (index, partition) in partitions.iter().enumerate() {
let partition_name = format!("p{index}");
@@ -328,8 +345,12 @@ impl InformationSchemaPartitionsBuilder {
self.partition_names.push(Some(&partition_name));
self.partition_ordinal_positions
.push(Some((index + 1) as i64));
- let expression = partition.partition_expr.as_ref().map(|e| e.to_string());
- self.partition_expressions.push(expression.as_deref());
+ // PARTITION_EXPRESSION: partition column names (same for all partitions)
+ self.partition_expressions
+ .push(partition_expr_str.as_deref());
+ // PARTITION_DESCRIPTION: partition boundary expression (different for each partition)
+ let description = partition.partition_expr.as_ref().map(|e| e.to_string());
+ self.partition_descriptions.push(description.as_deref());
self.create_times.push(Some(TimestampSecond::from(
table_info.meta.created_on.timestamp(),
)));
@@ -369,7 +390,7 @@ impl InformationSchemaPartitionsBuilder {
null_string_vector.clone(),
Arc::new(self.partition_expressions.finish()),
null_string_vector.clone(),
- null_string_vector.clone(),
+ Arc::new(self.partition_descriptions.finish()),
// TODO(dennis): rows and index statistics info
null_i64_vector.clone(),
null_i64_vector.clone(),
diff --git a/src/catalog/src/system_schema/information_schema/runtime_metrics.rs b/src/catalog/src/system_schema/information_schema/runtime_metrics.rs
deleted file mode 100644
index 5ccb871321..0000000000
--- a/src/catalog/src/system_schema/information_schema/runtime_metrics.rs
+++ /dev/null
@@ -1,265 +0,0 @@
-// Copyright 2023 Greptime Team
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at
-//
-// http://www.apache.org/licenses/LICENSE-2.0
-//
-// Unless required by applicable law or agreed to in writing, software
-// distributed under the License is distributed on an "AS IS" BASIS,
-// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
-// See the License for the specific language governing permissions and
-// limitations under the License.
-
-use std::sync::Arc;
-
-use arrow_schema::SchemaRef as ArrowSchemaRef;
-use common_catalog::consts::INFORMATION_SCHEMA_RUNTIME_METRICS_TABLE_ID;
-use common_error::ext::BoxedError;
-use common_recordbatch::adapter::RecordBatchStreamAdapter;
-use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
-use common_time::util::current_time_millis;
-use datafusion::execution::TaskContext;
-use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
-use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
-use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
-use datatypes::prelude::{ConcreteDataType, MutableVector};
-use datatypes::scalars::ScalarVectorBuilder;
-use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
-use datatypes::vectors::{
- ConstantVector, Float64VectorBuilder, StringVectorBuilder, TimestampMillisecondVector,
- VectorRef,
-};
-use itertools::Itertools;
-use snafu::ResultExt;
-use store_api::storage::{ScanRequest, TableId};
-
-use crate::error::{CreateRecordBatchSnafu, InternalSnafu, Result};
-use crate::system_schema::information_schema::{InformationTable, RUNTIME_METRICS};
-
-#[derive(Debug)]
-pub(super) struct InformationSchemaMetrics {
- schema: SchemaRef,
-}
-
-const METRIC_NAME: &str = "metric_name";
-const METRIC_VALUE: &str = "value";
-const METRIC_LABELS: &str = "labels";
-const PEER_ADDR: &str = "peer_addr";
-const PEER_TYPE: &str = "peer_type";
-const TIMESTAMP: &str = "timestamp";
-
-/// The `information_schema.runtime_metrics` virtual table.
-/// It provides the GreptimeDB runtime metrics for the users by SQL.
-impl InformationSchemaMetrics {
- pub(super) fn new() -> Self {
- Self {
- schema: Self::schema(),
- }
- }
-
- fn schema() -> SchemaRef {
- Arc::new(Schema::new(vec![
- ColumnSchema::new(METRIC_NAME, ConcreteDataType::string_datatype(), false),
- ColumnSchema::new(METRIC_VALUE, ConcreteDataType::float64_datatype(), false),
- ColumnSchema::new(METRIC_LABELS, ConcreteDataType::string_datatype(), true),
- ColumnSchema::new(PEER_ADDR, ConcreteDataType::string_datatype(), true),
- ColumnSchema::new(PEER_TYPE, ConcreteDataType::string_datatype(), false),
- ColumnSchema::new(
- TIMESTAMP,
- ConcreteDataType::timestamp_millisecond_datatype(),
- false,
- ),
- ]))
- }
-
- fn builder(&self) -> InformationSchemaMetricsBuilder {
- InformationSchemaMetricsBuilder::new(self.schema.clone())
- }
-}
-
-impl InformationTable for InformationSchemaMetrics {
- fn table_id(&self) -> TableId {
- INFORMATION_SCHEMA_RUNTIME_METRICS_TABLE_ID
- }
-
- fn table_name(&self) -> &'static str {
- RUNTIME_METRICS
- }
-
- fn schema(&self) -> SchemaRef {
- self.schema.clone()
- }
-
- fn to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream> {
- let schema = self.schema.arrow_schema().clone();
- let mut builder = self.builder();
- let stream = Box::pin(DfRecordBatchStreamAdapter::new(
- schema,
- futures::stream::once(async move {
- builder
- .make_metrics(Some(request))
- .await
- .map(|x| x.into_df_record_batch())
- .map_err(Into::into)
- }),
- ));
-
- Ok(Box::pin(
- RecordBatchStreamAdapter::try_new(stream)
- .map_err(BoxedError::new)
- .context(InternalSnafu)?,
- ))
- }
-}
-
-struct InformationSchemaMetricsBuilder {
- schema: SchemaRef,
-
- metric_names: StringVectorBuilder,
- metric_values: Float64VectorBuilder,
- metric_labels: StringVectorBuilder,
- peer_addrs: StringVectorBuilder,
- peer_types: StringVectorBuilder,
-}
-
-impl InformationSchemaMetricsBuilder {
- fn new(schema: SchemaRef) -> Self {
- Self {
- schema,
- metric_names: StringVectorBuilder::with_capacity(42),
- metric_values: Float64VectorBuilder::with_capacity(42),
- metric_labels: StringVectorBuilder::with_capacity(42),
- peer_addrs: StringVectorBuilder::with_capacity(42),
- peer_types: StringVectorBuilder::with_capacity(42),
- }
- }
-
- fn add_metric(
- &mut self,
- metric_name: &str,
- labels: String,
- metric_value: f64,
- peer: Option<&str>,
- peer_type: &str,
- ) {
- self.metric_names.push(Some(metric_name));
- self.metric_values.push(Some(metric_value));
- self.metric_labels.push(Some(&labels));
- self.peer_addrs.push(peer);
- self.peer_types.push(Some(peer_type));
- }
-
- async fn make_metrics(&mut self, _request: Option<ScanRequest>) -> Result<RecordBatch> {
- let metric_families = prometheus::gather();
-
- let write_request =
- common_telemetry::metric::convert_metric_to_write_request(metric_families, None, 0);
-
- for ts in write_request.timeseries {
- //Safety: always has `__name__` label
- let metric_name = ts
- .labels
- .iter()
- .find_map(|label| {
- if label.name == "__name__" {
- Some(label.value.clone())
- } else {
- None
- }
- })
- .unwrap();
-
- self.add_metric(
- &metric_name,
- ts.labels
- .into_iter()
- .filter_map(|label| {
- if label.name == "__name__" {
- None
- } else {
- Some(format!("{}={}", label.name, label.value))
- }
- })
- .join(", "),
- // Safety: always has a sample
- ts.samples[0].value,
- // The peer column is always `None` for standalone
- None,
- "STANDALONE",
- );
- }
-
- // FIXME(dennis): fetching other peers metrics
- self.finish()
- }
-
- fn finish(&mut self) -> Result<RecordBatch> {
- let rows_num = self.metric_names.len();
-
- let timestamps = Arc::new(ConstantVector::new(
- Arc::new(TimestampMillisecondVector::from_slice([
- current_time_millis(),
- ])),
- rows_num,
- ));
-
- let columns: Vec<VectorRef> = vec![
- Arc::new(self.metric_names.finish()),
- Arc::new(self.metric_values.finish()),
- Arc::new(self.metric_labels.finish()),
- Arc::new(self.peer_addrs.finish()),
- Arc::new(self.peer_types.finish()),
- timestamps,
- ];
-
- RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
- }
-}
-
-impl DfPartitionStream for InformationSchemaMetrics {
- fn schema(&self) -> &ArrowSchemaRef {
- self.schema.arrow_schema()
- }
-
- fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
- let schema = self.schema.arrow_schema().clone();
- let mut builder = self.builder();
- Box::pin(DfRecordBatchStreamAdapter::new(
- schema,
- futures::stream::once(async move {
- builder
- .make_metrics(None)
- .await
- .map(|x| x.into_df_record_batch())
- .map_err(Into::into)
- }),
- ))
- }
-}
-
-#[cfg(test)]
-mod tests {
- use common_recordbatch::RecordBatches;
-
- use super::*;
-
- #[tokio::test]
- async fn test_make_metrics() {
- let metrics = InformationSchemaMetrics::new();
-
- let stream = metrics.to_stream(ScanRequest::default()).unwrap();
-
- let batches = RecordBatches::try_collect(stream).await.unwrap();
-
- let result_literal = batches.pretty_print().unwrap();
-
- assert!(result_literal.contains(METRIC_NAME));
- assert!(result_literal.contains(METRIC_VALUE));
- assert!(result_literal.contains(METRIC_LABELS));
- assert!(result_literal.contains(PEER_ADDR));
- assert!(result_literal.contains(PEER_TYPE));
- assert!(result_literal.contains(TIMESTAMP));
- }
-}
diff --git a/src/catalog/src/system_schema/information_schema/table_names.rs b/src/catalog/src/system_schema/information_schema/table_names.rs
index 23791425dc..2a3329fece 100644
--- a/src/catalog/src/system_schema/information_schema/table_names.rs
+++ b/src/catalog/src/system_schema/information_schema/table_names.rs
@@ -38,7 +38,6 @@ pub const TABLE_PRIVILEGES: &str = "table_privileges";
pub const TRIGGERS: &str = "triggers";
pub const GLOBAL_STATUS: &str = "global_status";
pub const SESSION_STATUS: &str = "session_status";
-pub const RUNTIME_METRICS: &str = "runtime_metrics";
pub const PARTITIONS: &str = "partitions";
pub const REGION_PEERS: &str = "region_peers";
pub const TABLE_CONSTRAINTS: &str = "table_constraints";
diff --git a/src/catalog/src/system_schema/information_schema/tables.rs b/src/catalog/src/system_schema/information_schema/tables.rs
index 507dedc547..38a0cb1d61 100644
--- a/src/catalog/src/system_schema/information_schema/tables.rs
+++ b/src/catalog/src/system_schema/information_schema/tables.rs
@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use std::collections::HashSet;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
@@ -255,14 +254,17 @@ impl InformationSchemaTablesBuilder {
// TODO(dennis): `region_stats` API is not stable in distributed cluster because of network issue etc.
// But we don't want the statements such as `show tables` fail,
// so using `unwrap_or_else` here instead of `?` operator.
- let region_stats = information_extension
- .region_stats()
- .await
- .map_err(|e| {
- error!(e; "Failed to call region_stats");
- e
- })
- .unwrap_or_else(|_| vec![]);
+ let region_stats = {
+ let mut x = information_extension
+ .region_stats()
+ .await
+ .unwrap_or_else(|e| {
+ error!(e; "Failed to find region stats in information_schema, fallback to all empty");
+ vec![]
+ });
+ x.sort_unstable_by_key(|x| x.id);
+ x
+ };
for schema_name in catalog_manager.schema_names(&catalog_name, None).await? {
let mut stream = catalog_manager.tables(&catalog_name, &schema_name, None);
@@ -273,16 +275,16 @@ impl InformationSchemaTablesBuilder {
// TODO(dennis): make it working for metric engine
let table_region_stats =
if table_info.meta.engine == MITO_ENGINE || table_info.is_physical_table() {
- let region_ids = table_info
+ table_info
.meta
.region_numbers
.iter()
.map(|n| RegionId::new(table_info.ident.table_id, *n))
- .collect::<HashSet<_>>();
-
- region_stats
- .iter()
- .filter(|stat| region_ids.contains(&stat.id))
+ .flat_map(|region_id| {
+ region_stats
+ .binary_search_by_key(&region_id, |x| x.id)
+ .map(|i| ®ion_stats[i])
+ })
.collect::>()
} else {
vec![]
diff --git a/src/cli/src/data/export.rs b/src/cli/src/data/export.rs
index 5ddc2a39bc..007f8aa67c 100644
--- a/src/cli/src/data/export.rs
+++ b/src/cli/src/data/export.rs
@@ -67,9 +67,17 @@ pub struct ExportCommand {
#[clap(long, default_value_t = default_database())]
database: String,
- /// Parallelism of the export.
- #[clap(long, short = 'j', default_value = "1")]
- export_jobs: usize,
+ /// The number of databases exported in parallel.
+ /// For example, if there are 20 databases and `db_parallelism` is 4,
+ /// 4 databases will be exported concurrently.
+ #[clap(long, short = 'j', default_value = "1", alias = "export-jobs")]
+ db_parallelism: usize,
+
+ /// The number of tables exported in parallel within a single database.
+ /// For example, if a database has 30 tables and `parallelism` is 8,
+ /// 8 tables will be exported concurrently.
+ #[clap(long, default_value = "4")]
+ table_parallelism: usize,
/// Max retry times for each job.
#[clap(long, default_value = "3")]
@@ -210,10 +218,11 @@ impl ExportCommand {
schema,
database_client,
output_dir: self.output_dir.clone(),
- parallelism: self.export_jobs,
+ export_jobs: self.db_parallelism,
target: self.target.clone(),
start_time: self.start_time.clone(),
end_time: self.end_time.clone(),
+ parallelism: self.table_parallelism,
s3: self.s3,
ddl_local_dir: self.ddl_local_dir.clone(),
s3_bucket: self.s3_bucket.clone(),
@@ -251,10 +260,11 @@ pub struct Export {
schema: Option<String>,
database_client: DatabaseClient,
output_dir: Option<String>,
- parallelism: usize,
+ export_jobs: usize,
target: ExportTarget,
start_time: Option<String>,
end_time: Option<String>,
+ parallelism: usize,
s3: bool,
ddl_local_dir: Option,
s3_bucket: Option,
@@ -464,7 +474,7 @@ impl Export {
async fn export_create_table(&self) -> Result<()> {
let timer = Instant::now();
- let semaphore = Arc::new(Semaphore::new(self.parallelism));
+ let semaphore = Arc::new(Semaphore::new(self.export_jobs));
let db_names = self.get_db_names().await?;
let db_count = db_names.len();
let operator = Arc::new(self.build_prefer_fs_operator().await?);
@@ -625,13 +635,13 @@ impl Export {
async fn export_database_data(&self) -> Result<()> {
let timer = Instant::now();
- let semaphore = Arc::new(Semaphore::new(self.parallelism));
+ let semaphore = Arc::new(Semaphore::new(self.export_jobs));
let db_names = self.get_db_names().await?;
let db_count = db_names.len();
let mut tasks = Vec::with_capacity(db_count);
let operator = Arc::new(self.build_operator().await?);
let fs_first_operator = Arc::new(self.build_prefer_fs_operator().await?);
- let with_options = build_with_options(&self.start_time, &self.end_time);
+ let with_options = build_with_options(&self.start_time, &self.end_time, self.parallelism);
for schema in db_names {
let semaphore_moved = semaphore.clone();
@@ -888,7 +898,11 @@ impl Tool for Export {
}
/// Builds the WITH options string for SQL commands, assuming consistent syntax across S3 and local exports.
-fn build_with_options(start_time: &Option<String>, end_time: &Option<String>) -> String {
+fn build_with_options(
+ start_time: &Option<String>,
+ end_time: &Option<String>,
+ parallelism: usize,
+) -> String {
let mut options = vec!["format = 'parquet'".to_string()];
if let Some(start) = start_time {
options.push(format!("start_time = '{}'", start));
@@ -896,5 +910,6 @@ fn build_with_options(start_time: &Option<String>, end_time: &Option<String>) ->
if let Some(end) = end_time {
options.push(format!("end_time = '{}'", end));
}
+ options.push(format!("parallelism = {}", parallelism));
options.join(", ")
}
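
As a quick illustration of how the new table-level `parallelism` flows into the COPY options, a hypothetical check against the patched `build_with_options` (crate-private in `src/cli/src/data/export.rs`; the timestamp here is an arbitrary example value):

```rust
// Sketch: expected WITH-options string for a start time, no end time, and parallelism = 8.
fn with_options_example() {
    let opts = build_with_options(&Some("2024-01-01 00:00:00".to_string()), &None, 8);
    assert_eq!(
        opts,
        "format = 'parquet', start_time = '2024-01-01 00:00:00', parallelism = 8"
    );
}
```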
diff --git a/src/cli/src/data/import.rs b/src/cli/src/data/import.rs
index 908f3d4c9f..ffe8b62c7e 100644
--- a/src/cli/src/data/import.rs
+++ b/src/cli/src/data/import.rs
@@ -56,9 +56,11 @@ pub struct ImportCommand {
#[clap(long, default_value_t = default_database())]
database: String,
- /// Parallelism of the import.
- #[clap(long, short = 'j', default_value = "1")]
- import_jobs: usize,
+ /// The number of databases imported in parallel.
+ /// For example, if there are 20 databases and `db_parallelism` is 4,
+ /// 4 databases will be imported concurrently.
+ #[clap(long, short = 'j', default_value = "1", alias = "import-jobs")]
+ db_parallelism: usize,
/// Max retry times for each job.
#[clap(long, default_value = "3")]
@@ -109,7 +111,7 @@ impl ImportCommand {
schema,
database_client,
input_dir: self.input_dir.clone(),
- parallelism: self.import_jobs,
+ parallelism: self.db_parallelism,
target: self.target.clone(),
}))
}
diff --git a/src/client/src/client.rs b/src/client/src/client.rs
index 611cce954d..39cb5c30aa 100644
--- a/src/client/src/client.rs
+++ b/src/client/src/client.rs
@@ -21,7 +21,7 @@ use api::v1::prometheus_gateway_client::PrometheusGatewayClient;
use api::v1::region::region_client::RegionClient as PbRegionClient;
use arrow_flight::flight_service_client::FlightServiceClient;
use common_grpc::channel_manager::{
- ChannelConfig, ChannelManager, ClientTlsOption, load_tls_config,
+ ChannelConfig, ChannelManager, ClientTlsOption, load_client_tls_config,
};
use parking_lot::RwLock;
use snafu::{OptionExt, ResultExt};
@@ -95,9 +95,9 @@ impl Client {
U: AsRef<str>,
A: AsRef<[U]>,
{
- let channel_config = ChannelConfig::default().client_tls_config(client_tls);
- let tls_config = load_tls_config(channel_config.client_tls.as_ref())
- .context(error::CreateTlsChannelSnafu)?;
+ let channel_config = ChannelConfig::default().client_tls_config(client_tls.clone());
+ let tls_config =
+ load_client_tls_config(Some(client_tls)).context(error::CreateTlsChannelSnafu)?;
let channel_manager = ChannelManager::with_config(channel_config, tls_config);
Ok(Self::with_manager_and_urls(channel_manager, urls))
}
diff --git a/src/client/src/database.rs b/src/client/src/database.rs
index 0646c3e2a3..239f3fe3f9 100644
--- a/src/client/src/database.rs
+++ b/src/client/src/database.rs
@@ -435,10 +435,10 @@ impl Database {
.context(ExternalSnafu)?;
match flight_message {
FlightMessage::RecordBatch(arrow_batch) => {
- yield RecordBatch::try_from_df_record_batch(
+ yield Ok(RecordBatch::from_df_record_batch(
schema_cloned.clone(),
arrow_batch,
- )
+ ))
}
FlightMessage::Metrics(_) => {}
FlightMessage::AffectedRows(_) | FlightMessage::Schema(_) => {
diff --git a/src/client/src/region.rs b/src/client/src/region.rs
index 6e5a286083..3e80b83cec 100644
--- a/src/client/src/region.rs
+++ b/src/client/src/region.rs
@@ -182,10 +182,8 @@ impl RegionRequester {
match flight_message {
FlightMessage::RecordBatch(record_batch) => {
- let result_to_yield = RecordBatch::try_from_df_record_batch(
- schema_cloned.clone(),
- record_batch,
- );
+ let result_to_yield =
+ RecordBatch::from_df_record_batch(schema_cloned.clone(), record_batch);
// get the next message from the stream. normally it should be a metrics message.
if let Some(next_flight_message_result) = flight_message_stream.next().await
@@ -219,7 +217,7 @@ impl RegionRequester {
stream_ended = true;
}
- yield result_to_yield;
+ yield Ok(result_to_yield);
}
FlightMessage::Metrics(s) => {
// just a branch in case of some metrics message comes after other things.
diff --git a/src/cmd/Cargo.toml b/src/cmd/Cargo.toml
index 7a957b509b..d279ddb7f0 100644
--- a/src/cmd/Cargo.toml
+++ b/src/cmd/Cargo.toml
@@ -16,7 +16,7 @@ default = [
"meta-srv/pg_kvbackend",
"meta-srv/mysql_kvbackend",
]
-enterprise = ["common-meta/enterprise", "frontend/enterprise", "meta-srv/enterprise", "catalog/enterprise"]
+enterprise = ["common-meta/enterprise", "frontend/enterprise", "meta-srv/enterprise"]
tokio-console = ["common-telemetry/tokio-console"]
[lints]
diff --git a/src/cmd/src/datanode/objbench.rs b/src/cmd/src/datanode/objbench.rs
index dffb971072..0a3f27b77e 100644
--- a/src/cmd/src/datanode/objbench.rs
+++ b/src/cmd/src/datanode/objbench.rs
@@ -145,6 +145,17 @@ impl ObjbenchCommand {
let region_meta = extract_region_metadata(&self.source, &parquet_meta)?;
let num_rows = parquet_meta.file_metadata().num_rows() as u64;
let num_row_groups = parquet_meta.num_row_groups() as u64;
+ let max_row_group_uncompressed_size: u64 = parquet_meta
+ .row_groups()
+ .iter()
+ .map(|rg| {
+ rg.columns()
+ .iter()
+ .map(|c| c.uncompressed_size() as u64)
+ .sum::<u64>()
+ })
+ .max()
+ .unwrap_or(0);
println!(
"{} Metadata loaded - rows: {}, size: {} bytes",
@@ -160,9 +171,11 @@ impl ObjbenchCommand {
time_range: Default::default(),
level: 0,
file_size,
+ max_row_group_uncompressed_size,
available_indexes: Default::default(),
+ indexes: Default::default(),
index_file_size: 0,
- index_file_id: None,
+ index_version: 0,
num_rows,
num_row_groups,
sequence: None,
@@ -563,7 +576,7 @@ fn new_noop_file_purger() -> FilePurgerRef {
#[derive(Debug)]
struct Noop;
impl FilePurger for Noop {
- fn remove_file(&self, _file_meta: FileMeta, _is_delete: bool) {}
+ fn remove_file(&self, _file_meta: FileMeta, _is_delete: bool, _index_outdated: bool) {}
}
Arc::new(Noop)
}
diff --git a/src/cmd/src/error.rs b/src/cmd/src/error.rs
index 0b77dec341..fbff2d42e0 100644
--- a/src/cmd/src/error.rs
+++ b/src/cmd/src/error.rs
@@ -99,13 +99,6 @@ pub enum Error {
source: flow::Error,
},
- #[snafu(display("Servers error"))]
- Servers {
- #[snafu(implicit)]
- location: Location,
- source: servers::error::Error,
- },
-
#[snafu(display("Failed to start frontend"))]
StartFrontend {
#[snafu(implicit)]
@@ -336,7 +329,6 @@ impl ErrorExt for Error {
Error::ShutdownFrontend { source, .. } => source.status_code(),
Error::StartMetaServer { source, .. } => source.status_code(),
Error::ShutdownMetaServer { source, .. } => source.status_code(),
- Error::Servers { source, .. } => source.status_code(),
Error::BuildMetaServer { source, .. } => source.status_code(),
Error::UnsupportedSelectorType { source, .. } => source.status_code(),
Error::BuildCli { source, .. } => source.status_code(),
diff --git a/src/cmd/src/flownode.rs b/src/cmd/src/flownode.rs
index 07f3279724..6cefdb0f79 100644
--- a/src/cmd/src/flownode.rs
+++ b/src/cmd/src/flownode.rs
@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+use std::fmt::Debug;
use std::path::Path;
use std::sync::Arc;
use std::time::Duration;
@@ -39,12 +40,14 @@ use flow::{
get_flow_auth_options,
};
use meta_client::{MetaClientOptions, MetaClientType};
+use plugins::flownode::context::GrpcConfigureContext;
+use servers::configurator::GrpcBuilderConfiguratorRef;
use snafu::{OptionExt, ResultExt, ensure};
use tracing_appender::non_blocking::WorkerGuard;
use crate::error::{
BuildCacheRegistrySnafu, InitMetadataSnafu, LoadLayeredConfigSnafu, MetaClientInitSnafu,
- MissingConfigSnafu, Result, ShutdownFlownodeSnafu, StartFlownodeSnafu,
+ MissingConfigSnafu, OtherSnafu, Result, ShutdownFlownodeSnafu, StartFlownodeSnafu,
};
use crate::options::{GlobalOptions, GreptimeOptions};
use crate::{App, create_resource_limit_metrics, log_versions, maybe_activate_heap_profile};
@@ -55,33 +58,14 @@ type FlownodeOptions = GreptimeOptions<flow::FlownodeOptions>;
pub struct Instance {
flownode: FlownodeInstance,
-
- // The components of flownode, which make it easier to expand based
- // on the components.
- #[cfg(feature = "enterprise")]
- components: Components,
-
// Keep the logging guard to prevent the worker from being dropped.
_guard: Vec<WorkerGuard>,
}
-#[cfg(feature = "enterprise")]
-pub struct Components {
- pub catalog_manager: catalog::CatalogManagerRef,
- pub fe_client: Arc<FrontendClient>,
- pub kv_backend: common_meta::kv_backend::KvBackendRef,
-}
-
impl Instance {
- pub fn new(
- flownode: FlownodeInstance,
- #[cfg(feature = "enterprise")] components: Components,
- guard: Vec<WorkerGuard>,
- ) -> Self {
+ pub fn new(flownode: FlownodeInstance, guard: Vec<WorkerGuard>) -> Self {
Self {
flownode,
- #[cfg(feature = "enterprise")]
- components,
_guard: guard,
}
}
@@ -94,11 +78,6 @@ impl Instance {
pub fn flownode_mut(&mut self) -> &mut FlownodeInstance {
&mut self.flownode
}
-
- #[cfg(feature = "enterprise")]
- pub fn components(&self) -> &Components {
- &self.components
- }
}
#[async_trait::async_trait]
@@ -396,7 +375,7 @@ impl StartCommand {
let frontend_client = Arc::new(frontend_client);
let flownode_builder = FlownodeBuilder::new(
opts.clone(),
- plugins,
+ plugins.clone(),
table_metadata_manager,
catalog_manager.clone(),
flow_metadata_manager,
@@ -405,8 +384,29 @@ impl StartCommand {
.with_heartbeat_task(heartbeat_task);
let mut flownode = flownode_builder.build().await.context(StartFlownodeSnafu)?;
+
+ let builder =
+ FlownodeServiceBuilder::grpc_server_builder(&opts, flownode.flownode_server());
+ let builder = if let Some(configurator) =
+ plugins.get::<GrpcBuilderConfiguratorRef<GrpcConfigureContext>>()
+ {
+ let context = GrpcConfigureContext {
+ kv_backend: cached_meta_backend.clone(),
+ fe_client: frontend_client.clone(),
+ flownode_id: member_id,
+ catalog_manager: catalog_manager.clone(),
+ };
+ configurator
+ .configure(builder, context)
+ .await
+ .context(OtherSnafu)?
+ } else {
+ builder
+ };
+ let grpc_server = builder.build();
+
let services = FlownodeServiceBuilder::new(&opts)
- .with_default_grpc_server(flownode.flownode_server())
+ .with_grpc_server(grpc_server)
.enable_http_service()
.build()
.context(StartFlownodeSnafu)?;
@@ -430,16 +430,6 @@ impl StartCommand {
.set_frontend_invoker(invoker)
.await;
- #[cfg(feature = "enterprise")]
- let components = Components {
- catalog_manager: catalog_manager.clone(),
- fe_client: frontend_client,
- kv_backend: cached_meta_backend,
- };
-
- #[cfg(not(feature = "enterprise"))]
- return Ok(Instance::new(flownode, guard));
- #[cfg(feature = "enterprise")]
- Ok(Instance::new(flownode, components, guard))
+ Ok(Instance::new(flownode, guard))
}
}
diff --git a/src/cmd/src/frontend.rs b/src/cmd/src/frontend.rs
index 89992eba37..fa36a99ed4 100644
--- a/src/cmd/src/frontend.rs
+++ b/src/cmd/src/frontend.rs
@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+use std::fmt::Debug;
use std::path::Path;
use std::sync::Arc;
use std::time::Duration;
@@ -19,7 +20,10 @@ use std::time::Duration;
use async_trait::async_trait;
use cache::{build_fundamental_cache_registry, with_default_composite_cache_registry};
use catalog::information_extension::DistributedInformationExtension;
-use catalog::kvbackend::{CachedKvBackendBuilder, KvBackendCatalogManagerBuilder, MetaKvBackend};
+use catalog::kvbackend::{
+ CachedKvBackendBuilder, CatalogManagerConfiguratorRef, KvBackendCatalogManagerBuilder,
+ MetaKvBackend,
+};
use catalog::process_manager::ProcessManager;
use clap::Parser;
use client::client_manager::NodeClients;
@@ -31,6 +35,7 @@ use common_meta::cache::{CacheRegistryBuilder, LayeredCacheRegistryBuilder};
use common_meta::heartbeat::handler::HandlerGroupExecutor;
use common_meta::heartbeat::handler::invalidate_table_cache::InvalidateCacheHandler;
use common_meta::heartbeat::handler::parse_mailbox_message::ParseMailboxMessageHandler;
+use common_meta::heartbeat::handler::suspend::SuspendHandler;
use common_query::prelude::set_default_prefix;
use common_stat::ResourceStatImpl;
use common_telemetry::info;
@@ -41,15 +46,17 @@ use frontend::frontend::Frontend;
use frontend::heartbeat::HeartbeatTask;
use frontend::instance::builder::FrontendBuilder;
use frontend::server::Services;
-use meta_client::{MetaClientOptions, MetaClientType};
+use meta_client::{MetaClientOptions, MetaClientRef, MetaClientType};
+use plugins::frontend::context::{
+ CatalogManagerConfigureContext, DistributedCatalogManagerConfigureContext,
+};
use servers::addrs;
-use servers::export_metrics::ExportMetricsTask;
use servers::grpc::GrpcOptions;
-use servers::tls::{TlsMode, TlsOption};
+use servers::tls::{TlsMode, TlsOption, merge_tls_option};
use snafu::{OptionExt, ResultExt};
use tracing_appender::non_blocking::WorkerGuard;
-use crate::error::{self, Result};
+use crate::error::{self, OtherSnafu, Result};
use crate::options::{GlobalOptions, GreptimeOptions};
use crate::{App, create_resource_limit_metrics, log_versions, maybe_activate_heap_profile};
@@ -177,6 +184,8 @@ pub struct StartCommand {
#[clap(long)]
tls_key_path: Option<String>,
#[clap(long)]
+ tls_watch: bool,
+ #[clap(long)]
user_provider: Option<String>,
#[clap(long)]
disable_dashboard: Option<bool>,
@@ -230,6 +239,7 @@ impl StartCommand {
self.tls_mode.clone(),
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
+ self.tls_watch,
);
if let Some(addr) = &self.http_addr {
@@ -246,7 +256,7 @@ impl StartCommand {
if let Some(addr) = &self.rpc_bind_addr {
opts.grpc.bind_addr.clone_from(addr);
- opts.grpc.tls = tls_opts.clone();
+ opts.grpc.tls = merge_tls_option(&opts.grpc.tls, tls_opts.clone());
}
if let Some(addr) = &self.rpc_server_addr {
@@ -281,13 +291,13 @@ impl StartCommand {
if let Some(addr) = &self.mysql_addr {
opts.mysql.enable = true;
opts.mysql.addr.clone_from(addr);
- opts.mysql.tls = tls_opts.clone();
+ opts.mysql.tls = merge_tls_option(&opts.mysql.tls, tls_opts.clone());
}
if let Some(addr) = &self.postgres_addr {
opts.postgres.enable = true;
opts.postgres.addr.clone_from(addr);
- opts.postgres.tls = tls_opts;
+ opts.postgres.tls = merge_tls_option(&opts.postgres.tls, tls_opts.clone());
}
if let Some(enable) = self.influxdb_enable {
@@ -414,38 +424,30 @@ impl StartCommand {
layered_cache_registry.clone(),
)
.with_process_manager(process_manager.clone());
- #[cfg(feature = "enterprise")]
- let builder = if let Some(factories) = plugins.get() {
- builder.with_extra_information_table_factories(factories)
+ let builder = if let Some(configurator) =
+ plugins.get::<CatalogManagerConfiguratorRef<CatalogManagerConfigureContext>>()
+ {
+ let ctx = DistributedCatalogManagerConfigureContext {
+ meta_client: meta_client.clone(),
+ };
+ let ctx = CatalogManagerConfigureContext::Distributed(ctx);
+
+ configurator
+ .configure(builder, ctx)
+ .await
+ .context(OtherSnafu)?
} else {
builder
};
let catalog_manager = builder.build();
- let executor = HandlerGroupExecutor::new(vec![
- Arc::new(ParseMailboxMessageHandler),
- Arc::new(InvalidateCacheHandler::new(layered_cache_registry.clone())),
- ]);
-
- let mut resource_stat = ResourceStatImpl::default();
- resource_stat.start_collect_cpu_usage();
-
- let heartbeat_task = HeartbeatTask::new(
- &opts,
- meta_client.clone(),
- opts.heartbeat.clone(),
- Arc::new(executor),
- Arc::new(resource_stat),
- );
- let heartbeat_task = Some(heartbeat_task);
-
let instance = FrontendBuilder::new(
opts.clone(),
cached_meta_backend.clone(),
layered_cache_registry.clone(),
catalog_manager,
client,
- meta_client,
+ meta_client.clone(),
process_manager,
)
.with_plugin(plugins.clone())
@@ -453,10 +455,10 @@ impl StartCommand {
.try_build()
.await
.context(error::StartFrontendSnafu)?;
- let instance = Arc::new(instance);
- let export_metrics_task = ExportMetricsTask::try_new(&opts.export_metrics, Some(&plugins))
- .context(error::ServersSnafu)?;
+ let heartbeat_task = Some(create_heartbeat_task(&opts, meta_client, &instance));
+
+ let instance = Arc::new(instance);
let servers = Services::new(opts, instance.clone(), plugins)
.build()
@@ -466,13 +468,34 @@ impl StartCommand {
instance,
servers,
heartbeat_task,
- export_metrics_task,
};
Ok(Instance::new(frontend, guard))
}
}
+pub fn create_heartbeat_task(
+ options: &frontend::frontend::FrontendOptions,
+ meta_client: MetaClientRef,
+ instance: &frontend::instance::Instance,
+) -> HeartbeatTask {
+ let executor = Arc::new(HandlerGroupExecutor::new(vec![
+ Arc::new(ParseMailboxMessageHandler),
+ Arc::new(SuspendHandler::new(instance.suspend_state())),
+ Arc::new(InvalidateCacheHandler::new(
+ instance.cache_invalidator().clone(),
+ )),
+ ]));
+
+ let stat = {
+ let mut stat = ResourceStatImpl::default();
+ stat.start_collect_cpu_usage();
+ Arc::new(stat)
+ };
+
+ HeartbeatTask::new(options, meta_client, executor, stat)
+}
+
#[cfg(test)]
mod tests {
use std::io::Write;
diff --git a/src/cmd/src/metasrv.rs b/src/cmd/src/metasrv.rs
index 4f71775e74..ee67267de3 100644
--- a/src/cmd/src/metasrv.rs
+++ b/src/cmd/src/metasrv.rs
@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use std::fmt;
+use std::fmt::{self, Debug};
use std::path::Path;
use std::time::Duration;
@@ -23,7 +23,7 @@ use common_config::Configurable;
use common_telemetry::info;
use common_telemetry::logging::{DEFAULT_LOGGING_DIR, TracingOptions};
use common_version::{short_version, verbose_version};
-use meta_srv::bootstrap::MetasrvInstance;
+use meta_srv::bootstrap::{MetasrvInstance, metasrv_builder};
use meta_srv::metasrv::BackendImpl;
use snafu::ResultExt;
use tracing_appender::non_blocking::WorkerGuard;
@@ -177,7 +177,7 @@ pub struct StartCommand {
backend: Option,
}
-impl fmt::Debug for StartCommand {
+impl Debug for StartCommand {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("StartCommand")
.field("rpc_bind_addr", &self.rpc_bind_addr)
@@ -341,7 +341,7 @@ impl StartCommand {
.await
.context(StartMetaServerSnafu)?;
- let builder = meta_srv::bootstrap::metasrv_builder(&opts, plugins, None)
+ let builder = metasrv_builder(&opts, plugins, None)
.await
.context(error::BuildMetaServerSnafu)?;
let metasrv = builder.build().await.context(error::BuildMetaServerSnafu)?;
diff --git a/src/cmd/src/standalone.rs b/src/cmd/src/standalone.rs
index bf5aff7825..012680ac08 100644
--- a/src/cmd/src/standalone.rs
+++ b/src/cmd/src/standalone.rs
@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
+use std::fmt::Debug;
use std::net::SocketAddr;
use std::path::Path;
use std::sync::Arc;
@@ -20,7 +21,7 @@ use std::{fs, path};
use async_trait::async_trait;
use cache::{build_fundamental_cache_registry, with_default_composite_cache_registry};
use catalog::information_schema::InformationExtensionRef;
-use catalog::kvbackend::KvBackendCatalogManagerBuilder;
+use catalog::kvbackend::{CatalogManagerConfiguratorRef, KvBackendCatalogManagerBuilder};
use catalog::process_manager::ProcessManager;
use clap::Parser;
use common_base::Plugins;
@@ -31,7 +32,7 @@ use common_meta::cache::LayeredCacheRegistryBuilder;
use common_meta::ddl::flow_meta::FlowMetadataAllocator;
use common_meta::ddl::table_meta::TableMetadataAllocator;
use common_meta::ddl::{DdlContext, NoopRegionFailureDetectorControl};
-use common_meta::ddl_manager::DdlManager;
+use common_meta::ddl_manager::{DdlManager, DdlManagerConfiguratorRef};
use common_meta::key::flow::FlowMetadataManager;
use common_meta::key::{TableMetadataManager, TableMetadataManagerRef};
use common_meta::kv_backend::KvBackendRef;
@@ -57,14 +58,17 @@ use frontend::instance::StandaloneDatanodeManager;
use frontend::instance::builder::FrontendBuilder;
use frontend::server::Services;
use meta_srv::metasrv::{FLOW_ID_SEQ, TABLE_ID_SEQ};
-use servers::export_metrics::ExportMetricsTask;
-use servers::tls::{TlsMode, TlsOption};
+use plugins::frontend::context::{
+ CatalogManagerConfigureContext, StandaloneCatalogManagerConfigureContext,
+};
+use plugins::standalone::context::DdlManagerConfigureContext;
+use servers::tls::{TlsMode, TlsOption, merge_tls_option};
use snafu::ResultExt;
use standalone::StandaloneInformationExtension;
use standalone::options::StandaloneOptions;
use tracing_appender::non_blocking::WorkerGuard;
-use crate::error::{Result, StartFlownodeSnafu};
+use crate::error::{OtherSnafu, Result, StartFlownodeSnafu};
use crate::options::{GlobalOptions, GreptimeOptions};
use crate::{App, create_resource_limit_metrics, error, log_versions, maybe_activate_heap_profile};
@@ -117,34 +121,15 @@ pub struct Instance {
flownode: FlownodeInstance,
procedure_manager: ProcedureManagerRef,
wal_options_allocator: WalOptionsAllocatorRef,
-
- // The components of standalone, which make it easier to expand based
- // on the components.
- #[cfg(feature = "enterprise")]
- components: Components,
-
// Keep the logging guard to prevent the worker from being dropped.
_guard: Vec<WorkerGuard>,
}
-#[cfg(feature = "enterprise")]
-pub struct Components {
- pub plugins: Plugins,
- pub kv_backend: KvBackendRef,
- pub frontend_client: Arc<FrontendClient>,
- pub catalog_manager: catalog::CatalogManagerRef,
-}
-
impl Instance {
/// Find the socket addr of a server by its `name`.
pub fn server_addr(&self, name: &str) -> Option<SocketAddr> {
self.frontend.server_handlers().addr(name)
}
-
- #[cfg(feature = "enterprise")]
- pub fn components(&self) -> &Components {
- &self.components
- }
}
#[async_trait]
@@ -228,6 +213,8 @@ pub struct StartCommand {
#[clap(long)]
tls_key_path: Option<String>,
#[clap(long)]
+ tls_watch: bool,
+ #[clap(long)]
user_provider: Option<String>,
#[clap(long, default_value = "GREPTIMEDB_STANDALONE")]
pub env_prefix: String,
@@ -277,6 +264,7 @@ impl StartCommand {
self.tls_mode.clone(),
self.tls_cert_path.clone(),
self.tls_key_path.clone(),
+ self.tls_watch,
);
if let Some(addr) = &self.http_addr {
@@ -305,19 +293,20 @@ impl StartCommand {
),
}.fail();
}
- opts.grpc.bind_addr.clone_from(addr)
+ opts.grpc.bind_addr.clone_from(addr);
+ opts.grpc.tls = merge_tls_option(&opts.grpc.tls, tls_opts.clone());
}
if let Some(addr) = &self.mysql_addr {
opts.mysql.enable = true;
opts.mysql.addr.clone_from(addr);
- opts.mysql.tls = tls_opts.clone();
+ opts.mysql.tls = merge_tls_option(&opts.mysql.tls, tls_opts.clone());
}
if let Some(addr) = &self.postgres_addr {
opts.postgres.enable = true;
opts.postgres.addr.clone_from(addr);
- opts.postgres.tls = tls_opts;
+ opts.postgres.tls = merge_tls_option(&opts.postgres.tls, tls_opts.clone());
}
if self.influxdb_enable {
@@ -413,6 +402,13 @@ impl StartCommand {
plugins.insert::<InformationExtensionRef>(information_extension.clone());
let process_manager = Arc::new(ProcessManager::new(opts.grpc.server_addr.clone(), None));
+
+ // Standalone does not go through gRPC here; obtain a handle to the frontend gRPC client
+ // without actually making a connection.
+ let (frontend_client, frontend_instance_handler) =
+ FrontendClient::from_empty_grpc_handler(opts.query.clone());
+ let frontend_client = Arc::new(frontend_client);
+
let builder = KvBackendCatalogManagerBuilder::new(
information_extension.clone(),
kv_backend.clone(),
@@ -420,9 +416,17 @@ impl StartCommand {
)
.with_procedure_manager(procedure_manager.clone())
.with_process_manager(process_manager.clone());
- #[cfg(feature = "enterprise")]
- let builder = if let Some(factories) = plugins.get() {
- builder.with_extra_information_table_factories(factories)
+ let builder = if let Some(configurator) =
+ plugins.get::<CatalogManagerConfiguratorRef<CatalogManagerConfigureContext>>()
+ {
+ let ctx = StandaloneCatalogManagerConfigureContext {
+ fe_client: frontend_client.clone(),
+ };
+ let ctx = CatalogManagerConfigureContext::Standalone(ctx);
+ configurator
+ .configure(builder, ctx)
+ .await
+ .context(OtherSnafu)?
} else {
builder
};
@@ -437,11 +441,6 @@ impl StartCommand {
..Default::default()
};
- // for standalone not use grpc, but get a handler to frontend grpc client without
- // actually make a connection
- let (frontend_client, frontend_instance_handler) =
- FrontendClient::from_empty_grpc_handler(opts.query.clone());
- let frontend_client = Arc::new(frontend_client);
let flow_builder = FlownodeBuilder::new(
flownode_options,
plugins.clone(),
@@ -512,11 +511,21 @@ impl StartCommand {
let ddl_manager = DdlManager::try_new(ddl_context, procedure_manager.clone(), true)
.context(error::InitDdlManagerSnafu)?;
- #[cfg(feature = "enterprise")]
- let ddl_manager = {
- let trigger_ddl_manager: Option<TriggerDdlManagerRef> =
- plugins.get();
- ddl_manager.with_trigger_ddl_manager(trigger_ddl_manager)
+
+ let ddl_manager = if let Some(configurator) =
+ plugins.get::<DdlManagerConfiguratorRef<DdlManagerConfigureContext>>()
+ {
+ let ctx = DdlManagerConfigureContext {
+ kv_backend: kv_backend.clone(),
+ fe_client: frontend_client.clone(),
+ catalog_manager: catalog_manager.clone(),
+ };
+ configurator
+ .configure(ddl_manager, ctx)
+ .await
+ .context(OtherSnafu)?
+ } else {
+ ddl_manager
};
let procedure_executor = Arc::new(LocalProcedureExecutor::new(
@@ -562,9 +571,6 @@ impl StartCommand {
.context(StartFlownodeSnafu)?;
flow_streaming_engine.set_frontend_invoker(invoker).await;
- let export_metrics_task = ExportMetricsTask::try_new(&opts.export_metrics, Some(&plugins))
- .context(error::ServersSnafu)?;
-
let servers = Services::new(opts, fe_instance.clone(), plugins.clone())
.build()
.context(error::StartFrontendSnafu)?;
@@ -573,15 +579,6 @@ impl StartCommand {
instance: fe_instance,
servers,
heartbeat_task: None,
- export_metrics_task,
- };
-
- #[cfg(feature = "enterprise")]
- let components = Components {
- plugins,
- kv_backend,
- frontend_client,
- catalog_manager,
};
Ok(Instance {
@@ -590,8 +587,6 @@ impl StartCommand {
flownode,
procedure_manager,
wal_options_allocator,
- #[cfg(feature = "enterprise")]
- components,
_guard: guard,
})
}
@@ -769,6 +764,8 @@ mod tests {
fn test_load_log_options_from_cli() {
let cmd = StartCommand {
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
+ mysql_addr: Some("127.0.0.1:4002".to_string()),
+ postgres_addr: Some("127.0.0.1:4003".to_string()),
..Default::default()
};
diff --git a/src/cmd/tests/load_config_test.rs b/src/cmd/tests/load_config_test.rs
index 222012bfd8..79b42dbfc1 100644
--- a/src/cmd/tests/load_config_test.rs
+++ b/src/cmd/tests/load_config_test.rs
@@ -31,7 +31,6 @@ use meta_srv::selector::SelectorType;
use metric_engine::config::EngineConfig as MetricEngineConfig;
use mito2::config::MitoConfig;
use query::options::QueryOptions;
-use servers::export_metrics::ExportMetricsOption;
use servers::grpc::GrpcOptions;
use servers::http::HttpOptions;
use servers::tls::{TlsMode, TlsOption};
@@ -53,7 +52,6 @@ fn test_load_datanode_example_config() {
meta_client: Some(MetaClientOptions {
metasrv_addrs: vec!["127.0.0.1:3002".to_string()],
timeout: Duration::from_secs(3),
- heartbeat_timeout: Duration::from_millis(500),
ddl_timeout: Duration::from_secs(10),
connect_timeout: Duration::from_secs(1),
tcp_nodelay: true,
@@ -95,11 +93,6 @@ fn test_load_datanode_example_config() {
tracing_sample_ratio: Some(Default::default()),
..Default::default()
},
- export_metrics: ExportMetricsOption {
- self_import: None,
- remote_write: Some(Default::default()),
- ..Default::default()
- },
grpc: GrpcOptions::default()
.with_bind_addr("127.0.0.1:3001")
.with_server_addr("127.0.0.1:3001"),
@@ -124,7 +117,6 @@ fn test_load_frontend_example_config() {
meta_client: Some(MetaClientOptions {
metasrv_addrs: vec!["127.0.0.1:3002".to_string()],
timeout: Duration::from_secs(3),
- heartbeat_timeout: Duration::from_millis(500),
ddl_timeout: Duration::from_secs(10),
connect_timeout: Duration::from_secs(1),
tcp_nodelay: true,
@@ -146,11 +138,6 @@ fn test_load_frontend_example_config() {
..Default::default()
},
},
- export_metrics: ExportMetricsOption {
- self_import: None,
- remote_write: Some(Default::default()),
- ..Default::default()
- },
grpc: GrpcOptions {
bind_addr: "127.0.0.1:4001".to_string(),
server_addr: "127.0.0.1:4001".to_string(),
@@ -201,11 +188,6 @@ fn test_load_metasrv_example_config() {
tcp_nodelay: true,
},
},
- export_metrics: ExportMetricsOption {
- self_import: None,
- remote_write: Some(Default::default()),
- ..Default::default()
- },
backend_tls: Some(TlsOption {
mode: TlsMode::Prefer,
cert_path: String::new(),
@@ -257,7 +239,6 @@ fn test_load_flownode_example_config() {
meta_client: Some(MetaClientOptions {
metasrv_addrs: vec!["127.0.0.1:3002".to_string()],
timeout: Duration::from_secs(3),
- heartbeat_timeout: Duration::from_millis(500),
ddl_timeout: Duration::from_secs(10),
connect_timeout: Duration::from_secs(1),
tcp_nodelay: true,
@@ -317,11 +298,6 @@ fn test_load_standalone_example_config() {
tracing_sample_ratio: Some(Default::default()),
..Default::default()
},
- export_metrics: ExportMetricsOption {
- self_import: Some(Default::default()),
- remote_write: Some(Default::default()),
- ..Default::default()
- },
http: HttpOptions {
cors_allowed_origins: vec!["https://example.com".to_string()],
..Default::default()
diff --git a/src/common/base/src/plugins.rs b/src/common/base/src/plugins.rs
index bbab003c69..aa1a9d1287 100644
--- a/src/common/base/src/plugins.rs
+++ b/src/common/base/src/plugins.rs
@@ -32,7 +32,12 @@ impl Plugins {
pub fn insert<T: 'static + Send + Sync>(&self, value: T) {
let last = self.write().insert(value);
- assert!(last.is_none(), "each type of plugins must be one and only");
+ if last.is_some() {
+ panic!(
+ "Plugin of type {} already exists",
+ std::any::type_name::<T>()
+ );
+ }
}
pub fn get<T: 'static + Send + Sync + Clone>(&self) -> Option<T> {
@@ -140,7 +145,7 @@ mod tests {
}
#[test]
- #[should_panic(expected = "each type of plugins must be one and only")]
+ #[should_panic(expected = "Plugin of type i32 already exists")]
fn test_plugin_uniqueness() {
let plugins = Plugins::new();
plugins.insert(1i32);
diff --git a/src/common/catalog/src/consts.rs b/src/common/catalog/src/consts.rs
index 8a59a15cc6..1cd5db8a0c 100644
--- a/src/common/catalog/src/consts.rs
+++ b/src/common/catalog/src/consts.rs
@@ -86,8 +86,6 @@ pub const INFORMATION_SCHEMA_TRIGGERS_TABLE_ID: u32 = 24;
pub const INFORMATION_SCHEMA_GLOBAL_STATUS_TABLE_ID: u32 = 25;
/// id for information_schema.SESSION_STATUS
pub const INFORMATION_SCHEMA_SESSION_STATUS_TABLE_ID: u32 = 26;
-/// id for information_schema.RUNTIME_METRICS
-pub const INFORMATION_SCHEMA_RUNTIME_METRICS_TABLE_ID: u32 = 27;
/// id for information_schema.PARTITIONS
pub const INFORMATION_SCHEMA_PARTITIONS_TABLE_ID: u32 = 28;
/// id for information_schema.REGION_PEERS
@@ -112,6 +110,8 @@ pub const INFORMATION_SCHEMA_SSTS_MANIFEST_TABLE_ID: u32 = 37;
pub const INFORMATION_SCHEMA_SSTS_STORAGE_TABLE_ID: u32 = 38;
/// id for information_schema.ssts_index_meta
pub const INFORMATION_SCHEMA_SSTS_INDEX_META_TABLE_ID: u32 = 39;
+/// id for information_schema.alerts
+pub const INFORMATION_SCHEMA_ALERTS_TABLE_ID: u32 = 40;
// ----- End of information_schema tables -----
diff --git a/src/common/config/Cargo.toml b/src/common/config/Cargo.toml
index b45c03a6c3..2737f82a58 100644
--- a/src/common/config/Cargo.toml
+++ b/src/common/config/Cargo.toml
@@ -11,8 +11,10 @@ workspace = true
common-base.workspace = true
common-error.workspace = true
common-macro.workspace = true
+common-telemetry.workspace = true
config.workspace = true
humantime-serde.workspace = true
+notify.workspace = true
object-store.workspace = true
serde.workspace = true
serde_json.workspace = true
diff --git a/src/common/config/src/error.rs b/src/common/config/src/error.rs
index fbce83fd00..82abd8a9b8 100644
--- a/src/common/config/src/error.rs
+++ b/src/common/config/src/error.rs
@@ -49,14 +49,41 @@ pub enum Error {
#[snafu(implicit)]
location: Location,
},
+
+ #[snafu(display("Failed to watch file: {}", path))]
+ FileWatch {
+ path: String,
+ #[snafu(source)]
+ error: notify::Error,
+ #[snafu(implicit)]
+ location: Location,
+ },
+
+ #[snafu(display("Failed to canonicalize path: {}", path))]
+ CanonicalizePath {
+ path: String,
+ #[snafu(source)]
+ error: std::io::Error,
+ #[snafu(implicit)]
+ location: Location,
+ },
+
+ #[snafu(display("Invalid path '{}': expected a file, not a directory", path))]
+ InvalidPath {
+ path: String,
+ #[snafu(implicit)]
+ location: Location,
+ },
}
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
- Error::TomlFormat { .. } | Error::LoadLayeredConfig { .. } => {
- StatusCode::InvalidArguments
- }
+ Error::TomlFormat { .. }
+ | Error::LoadLayeredConfig { .. }
+ | Error::FileWatch { .. }
+ | Error::InvalidPath { .. }
+ | Error::CanonicalizePath { .. } => StatusCode::InvalidArguments,
Error::SerdeJson { .. } => StatusCode::Unexpected,
}
}
diff --git a/src/common/config/src/file_watcher.rs b/src/common/config/src/file_watcher.rs
new file mode 100644
index 0000000000..2507af024a
--- /dev/null
+++ b/src/common/config/src/file_watcher.rs
@@ -0,0 +1,355 @@
+// Copyright 2023 Greptime Team
+//
+// Licensed under the Apache License, Version 2.0 (the "License");
+// you may not use this file except in compliance with the License.
+// You may obtain a copy of the License at
+//
+// http://www.apache.org/licenses/LICENSE-2.0
+//
+// Unless required by applicable law or agreed to in writing, software
+// distributed under the License is distributed on an "AS IS" BASIS,
+// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+// See the License for the specific language governing permissions and
+// limitations under the License.
+
+//! Common file watching utilities for configuration hot-reloading.
+//!
+//! This module provides a generic file watcher that can be used to watch
+//! files for changes and trigger callbacks when changes occur.
+//!
+//! The watcher monitors the parent directory of each file rather than the
+//! file itself. This ensures that file deletions and recreations are properly
+//! tracked, which is common with editors that use atomic saves or when
+//! configuration files are replaced.
+
+use std::collections::HashSet;
+use std::path::{Path, PathBuf};
+use std::sync::mpsc::channel;
+
+use common_telemetry::{error, info, warn};
+use notify::{EventKind, RecursiveMode, Watcher};
+use snafu::ResultExt;
+
+use crate::error::{CanonicalizePathSnafu, FileWatchSnafu, InvalidPathSnafu, Result};
+
+/// Configuration for the file watcher behavior.
+#[derive(Debug, Clone, Default)]
+pub struct FileWatcherConfig {
+ /// Whether to include Remove events in addition to Modify and Create.
+ pub include_remove_events: bool,
+}
+
+impl FileWatcherConfig {
+ pub fn new() -> Self {
+ Self::default()
+ }
+
+ pub fn with_modify_and_create(mut self) -> Self {
+ self.include_remove_events = false;
+ self
+ }
+
+ pub fn with_remove_events(mut self) -> Self {
+ self.include_remove_events = true;
+ self
+ }
+}
+
+/// A builder for creating file watchers with flexible configuration.
+///
+/// The watcher monitors the parent directory of each file to handle file
+/// deletion and recreation properly. Events are filtered to only trigger
+/// callbacks for the specific files being watched.
+pub struct FileWatcherBuilder {
+ config: FileWatcherConfig,
+ /// Canonicalized paths of files to watch.
+ file_paths: Vec<PathBuf>,
+}
+
+impl FileWatcherBuilder {
+ /// Create a new builder with default configuration.
+ pub fn new() -> Self {
+ Self {
+ config: FileWatcherConfig::default(),
+ file_paths: Vec::new(),
+ }
+ }
+
+ /// Set the watcher configuration.
+ pub fn config(mut self, config: FileWatcherConfig) -> Self {
+ self.config = config;
+ self
+ }
+
+ /// Add a file path to watch.
+ ///
+ /// Returns an error if the path is a directory.
+ /// The path is canonicalized for reliable comparison with events.
+ pub fn watch_path<P: AsRef<Path>>(mut self, path: P) -> Result<Self> {
+ let path = path.as_ref();
+ snafu::ensure!(
+ path.is_file(),
+ InvalidPathSnafu {
+ path: path.display().to_string(),
+ }
+ );
+ // Canonicalize the path for reliable comparison with event paths
+ let canonical = path.canonicalize().context(CanonicalizePathSnafu {
+ path: path.display().to_string(),
+ })?;
+ self.file_paths.push(canonical);
+ Ok(self)
+ }
+
+ /// Add multiple file paths to watch.
+ ///
+ /// Returns an error if any path is a directory.
+ pub fn watch_paths<P: AsRef<Path>, I: IntoIterator<Item = P>>(
+ mut self,
+ paths: I,
+ ) -> Result<Self> {
+ for path in paths {
+ self = self.watch_path(path)?;
+ }
+ Ok(self)
+ }
+
+ /// Build and spawn the file watcher with the given callback.
+ ///
+ /// The callback is invoked when relevant file events are detected for
+ /// the watched files. The watcher monitors the parent directories to
+ /// handle file deletion and recreation properly.
+ ///
+ /// The spawned watcher thread runs for the lifetime of the process.
+ pub fn spawn<F>(self, callback: F) -> Result<()>
+ where
+ F: Fn() + Send + 'static,
+ {
+ let (tx, rx) = channel::<notify::Result<notify::Event>>();
+ let mut watcher =
+ notify::recommended_watcher(tx).context(FileWatchSnafu { path: "" })?;
+
+ // Collect unique parent directories to watch
+ let mut watched_dirs: HashSet<PathBuf> = HashSet::new();
+ for file_path in &self.file_paths {
+ if let Some(parent) = file_path.parent()
+ && watched_dirs.insert(parent.to_path_buf())
+ {
+ watcher
+ .watch(parent, RecursiveMode::NonRecursive)
+ .context(FileWatchSnafu {
+ path: parent.display().to_string(),
+ })?;
+ }
+ }
+
+ let config = self.config;
+ let watched_files: HashSet<PathBuf> = self.file_paths.iter().cloned().collect();
+
+ info!(
+ "Spawning file watcher for paths: {:?} (watching parent directories)",
+ self.file_paths
+ .iter()
+ .map(|p| p.display().to_string())
+ .collect::<Vec<_>>()
+ );
+
+ std::thread::spawn(move || {
+ // Keep watcher alive in the thread
+ let _watcher = watcher;
+
+ while let Ok(res) = rx.recv() {
+ match res {
+ Ok(event) => {
+ if !is_relevant_event(&event.kind, &config) {
+ continue;
+ }
+
+ // Check if any of the event paths match our watched files
+ let is_watched_file = event.paths.iter().any(|event_path| {
+ // Try to canonicalize the event path for comparison
+ // If the file was deleted, canonicalize will fail, so we also
+ // compare the raw path
+ if let Ok(canonical) = event_path.canonicalize()
+ && watched_files.contains(&canonical)
+ {
+ return true;
+ }
+ // For deleted files, compare using the raw path
+ watched_files.contains(event_path)
+ });
+
+ if !is_watched_file {
+ continue;
+ }
+
+ info!(?event.kind, ?event.paths, "Detected file change");
+ callback();
+ }
+ Err(err) => {
+ warn!("File watcher error: {}", err);
+ }
+ }
+ }
+
+ error!("File watcher channel closed unexpectedly");
+ });
+
+ Ok(())
+ }
+}
+
+impl Default for FileWatcherBuilder {
+ fn default() -> Self {
+ Self::new()
+ }
+}
+
+/// Check if an event kind is relevant based on the configuration.
+fn is_relevant_event(kind: &EventKind, config: &FileWatcherConfig) -> bool {
+ match kind {
+ EventKind::Modify(_) | EventKind::Create(_) => true,
+ EventKind::Remove(_) => config.include_remove_events,
+ _ => false,
+ }
+}
+
+#[cfg(test)]
+mod tests {
+ use std::sync::Arc;
+ use std::sync::atomic::{AtomicUsize, Ordering};
+ use std::time::Duration;
+
+ use common_test_util::temp_dir::create_temp_dir;
+
+ use super::*;
+
+ #[test]
+ fn test_file_watcher_detects_changes() {
+ common_telemetry::init_default_ut_logging();
+
+ let dir = create_temp_dir("test_file_watcher");
+ let file_path = dir.path().join("test_file.txt");
+
+ // Create initial file
+ std::fs::write(&file_path, "initial content").unwrap();
+
+ let counter = Arc::new(AtomicUsize::new(0));
+ let counter_clone = counter.clone();
+
+ FileWatcherBuilder::new()
+ .watch_path(&file_path)
+ .unwrap()
+ .config(FileWatcherConfig::new())
+ .spawn(move || {
+ counter_clone.fetch_add(1, Ordering::SeqCst);
+ })
+ .unwrap();
+
+ // Give watcher time to start
+ std::thread::sleep(Duration::from_millis(100));
+
+ // Modify the file
+ std::fs::write(&file_path, "modified content").unwrap();
+
+ // Wait for the event to be processed
+ std::thread::sleep(Duration::from_millis(500));
+
+ assert!(
+ counter.load(Ordering::SeqCst) >= 1,
+ "Watcher should have detected at least one change"
+ );
+ }
+
+ #[test]
+ fn test_file_watcher_detects_delete_and_recreate() {
+ common_telemetry::init_default_ut_logging();
+
+ let dir = create_temp_dir("test_file_watcher_recreate");
+ let file_path = dir.path().join("test_file.txt");
+
+ // Create initial file
+ std::fs::write(&file_path, "initial content").unwrap();
+
+ let counter = Arc::new(AtomicUsize::new(0));
+ let counter_clone = counter.clone();
+
+ FileWatcherBuilder::new()
+ .watch_path(&file_path)
+ .unwrap()
+ .config(FileWatcherConfig::new())
+ .spawn(move || {
+ counter_clone.fetch_add(1, Ordering::SeqCst);
+ })
+ .unwrap();
+
+ // Give watcher time to start
+ std::thread::sleep(Duration::from_millis(100));
+
+ // Delete the file
+ std::fs::remove_file(&file_path).unwrap();
+ std::thread::sleep(Duration::from_millis(100));
+
+ // Recreate the file - this should still be detected because we watch the directory
+ std::fs::write(&file_path, "recreated content").unwrap();
+
+ // Wait for the event to be processed
+ std::thread::sleep(Duration::from_millis(500));
+
+ assert!(
+ counter.load(Ordering::SeqCst) >= 1,
+ "Watcher should have detected file recreation"
+ );
+ }
+
+ #[test]
+ fn test_file_watcher_ignores_other_files() {
+ common_telemetry::init_default_ut_logging();
+
+ let dir = create_temp_dir("test_file_watcher_other");
+ let watched_file = dir.path().join("watched.txt");
+ let other_file = dir.path().join("other.txt");
+
+ // Create both files
+ std::fs::write(&watched_file, "watched content").unwrap();
+ std::fs::write(&other_file, "other content").unwrap();
+
+ let counter = Arc::new(AtomicUsize::new(0));
+ let counter_clone = counter.clone();
+
+ FileWatcherBuilder::new()
+ .watch_path(&watched_file)
+ .unwrap()
+ .config(FileWatcherConfig::new())
+ .spawn(move || {
+ counter_clone.fetch_add(1, Ordering::SeqCst);
+ })
+ .unwrap();
+
+ // Give watcher time to start
+ std::thread::sleep(Duration::from_millis(100));
+
+ // Modify the other file - should NOT trigger callback
+ std::fs::write(&other_file, "modified other content").unwrap();
+
+ // Wait for potential event
+ std::thread::sleep(Duration::from_millis(500));
+
+ assert_eq!(
+ counter.load(Ordering::SeqCst),
+ 0,
+ "Watcher should not have detected changes to other files"
+ );
+
+ // Now modify the watched file - SHOULD trigger callback
+ std::fs::write(&watched_file, "modified watched content").unwrap();
+
+ // Wait for the event to be processed
+ std::thread::sleep(Duration::from_millis(500));
+
+ assert!(
+ counter.load(Ordering::SeqCst) >= 1,
+ "Watcher should have detected change to watched file"
+ );
+ }
+}
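// Example: a minimal sketch of how the new FileWatcherBuilder API above might be used,
// assuming the crate is consumed as `common_config` and re-exports `error::Result` as shown
// in the diff. The certificate path argument and the reload closure body are hypothetical
// placeholders (e.g. for the new `tls_watch` hot-reload flag); logging setup is elided.
use common_config::error::Result;
use common_config::file_watcher::{FileWatcherBuilder, FileWatcherConfig};

fn watch_tls_cert(cert_path: &str) -> Result<()> {
    FileWatcherBuilder::new()
        // Watch the certificate file; its parent directory is monitored, so
        // delete-and-recreate (atomic saves) is still detected.
        .watch_path(cert_path)?
        // Also react to Remove events, not only Modify/Create.
        .config(FileWatcherConfig::new().with_remove_events())
        // The callback runs on the watcher's background thread for every relevant event.
        .spawn(|| {
            println!("TLS certificate changed; reloading");
        })
}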
diff --git a/src/common/config/src/lib.rs b/src/common/config/src/lib.rs
index cc25ebce16..eea3b1351d 100644
--- a/src/common/config/src/lib.rs
+++ b/src/common/config/src/lib.rs
@@ -14,6 +14,7 @@
pub mod config;
pub mod error;
+pub mod file_watcher;
use std::time::Duration;
diff --git a/src/common/datasource/src/buffered_writer.rs b/src/common/datasource/src/buffered_writer.rs
index e1571b0187..953715b223 100644
--- a/src/common/datasource/src/buffered_writer.rs
+++ b/src/common/datasource/src/buffered_writer.rs
@@ -12,28 +12,11 @@
// See the License for the specific language governing permissions and
// limitations under the License.
-use std::future::Future;
-
use arrow::record_batch::RecordBatch;
use async_trait::async_trait;
use datafusion::parquet::format::FileMetaData;
-use snafu::{OptionExt, ResultExt};
-use tokio::io::{AsyncWrite, AsyncWriteExt};
-use crate::error::{self, Result};
-use crate::share_buffer::SharedBuffer;
-
-pub struct LazyBufferedWriter<T, U, F> {
- path: String,
- writer_factory: F,
- writer: Option<T>,
- /// None stands for [`LazyBufferedWriter`] closed.
- encoder: Option<U>,
- buffer: SharedBuffer,
- rows_written: usize,
- bytes_written: u64,
- threshold: usize,
-}
+use crate::error::Result;
pub trait DfRecordBatchEncoder {
fn write(&mut self, batch: &RecordBatch) -> Result<()>;
@@ -43,126 +26,3 @@ pub trait DfRecordBatchEncoder {
pub trait ArrowWriterCloser {
async fn close(mut self) -> Result<FileMetaData>;
}
-
-impl<
- T: AsyncWrite + Send + Unpin,
- U: DfRecordBatchEncoder + ArrowWriterCloser,
- F: Fn(String) -> Fut,
- Fut: Future