# Configurations
## Standalone Mode
| Key | Type | Default | Description |
|---|---|---|---|
| mode | String | standalone | The running mode of the server. It can be standalone or distributed. |
| enable_telemetry | Bool | true | Enable telemetry to collect anonymous usage data. |
| default_timezone | String | None | The default timezone of the server. |
| http | -- | -- | The HTTP server options. |
| http.addr | String | 127.0.0.1:4000 | The address to bind the HTTP server. |
| http.timeout | String | 30s | HTTP request timeout. |
| http.body_limit | String | 64MB | HTTP request body limit. The following units are supported: B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, PiB. |
| grpc | -- | -- | The gRPC server options. |
| grpc.addr | String | 127.0.0.1:4001 | The address to bind the gRPC server. |
| grpc.runtime_size | Integer | 8 | The number of server worker threads. |
| mysql | -- | -- | MySQL server options. |
| mysql.enable | Bool | true | Whether to enable the MySQL server. |
| mysql.addr | String | 127.0.0.1:4002 | The address to bind the MySQL server. |
| mysql.runtime_size | Integer | 2 | The number of server worker threads. |
| mysql.tls | -- | -- | MySQL server TLS options. |
| mysql.tls.mode | String | disable | TLS mode; refer to https://www.postgresql.org/docs/current/libpq-ssl.html. Options: disable (default), prefer, require, verify-ca, verify-full. |
| mysql.tls.cert_path | String | None | Certificate file path. |
| mysql.tls.key_path | String | None | Private key file path. |
| mysql.tls.watch | Bool | false | Watch for certificate and key file changes and reload them automatically. |
| postgres | -- | -- | PostgreSQL server options. |
| postgres.enable | Bool | true | Whether to enable the PostgreSQL server. |
| postgres.addr | String | 127.0.0.1:4003 | The address to bind the PostgreSQL server. |
| postgres.runtime_size | Integer | 2 | The number of server worker threads. |
| postgres.tls | -- | -- | PostgreSQL server TLS options; see the mysql.tls section. |
| postgres.tls.mode | String | disable | TLS mode. |
| postgres.tls.cert_path | String | None | Certificate file path. |
| postgres.tls.key_path | String | None | Private key file path. |
| postgres.tls.watch | Bool | false | Watch for certificate and key file changes and reload them automatically. |
| opentsdb | -- | -- | OpenTSDB protocol options. |
| opentsdb.enable | Bool | true | Whether to enable the OpenTSDB protocol. |
| opentsdb.addr | String | 127.0.0.1:4242 | OpenTSDB telnet API server address. |
| opentsdb.runtime_size | Integer | 2 | The number of server worker threads. |
| influxdb | -- | -- | InfluxDB protocol options. |
| influxdb.enable | Bool | true | Whether to enable the InfluxDB protocol in the HTTP API. |
| prom_store | -- | -- | Prometheus remote storage options. |
| prom_store.enable | Bool | true | Whether to enable Prometheus remote write and read in the HTTP API. |
| prom_store.with_metric_engine | Bool | true | Whether to store data from Prometheus remote write in the metric engine. |
| wal | -- | -- | The WAL options. |
| wal.provider | String | raft_engine | The provider of the WAL. raft_engine: the WAL is stored in the local file system by raft-engine; kafka: remote WAL whose data is stored in Kafka. |
| wal.dir | String | None | The directory to store the WAL files. Only used when the provider is raft_engine. |
| wal.file_size | String | 256MB | The size of the WAL segment file. Only used when the provider is raft_engine. |
| wal.purge_threshold | String | 4GB | The threshold of the WAL size to trigger a flush. Only used when the provider is raft_engine. |
| wal.purge_interval | String | 10m | The interval to trigger a flush. Only used when the provider is raft_engine. |
| wal.read_batch_size | Integer | 128 | The read batch size. Only used when the provider is raft_engine. |
| wal.sync_write | Bool | false | Whether to use sync write. Only used when the provider is raft_engine. |
| wal.enable_log_recycle | Bool | true | Whether to reuse logically truncated log files. Only used when the provider is raft_engine. |
| wal.prefill_log_files | Bool | false | Whether to pre-create log files on startup. Only used when the provider is raft_engine. |
| wal.sync_period | String | 10s | Duration for fsyncing log files. Only used when the provider is raft_engine. |
| wal.broker_endpoints | Array | -- | The Kafka broker endpoints. Only used when the provider is kafka. |
| wal.max_batch_size | String | 1MB | The max size of a single producer batch. Warning: Kafka has a default limit of 1MB per message in a topic. Only used when the provider is kafka. |
| wal.linger | String | 200ms | The linger duration of a Kafka batch producer. Only used when the provider is kafka. |
| wal.consumer_wait_timeout | String | 100ms | The consumer wait timeout. Only used when the provider is kafka. |
| wal.backoff_init | String | 500ms | The initial backoff delay. Only used when the provider is kafka. |
| wal.backoff_max | String | 10s | The maximum backoff delay. Only used when the provider is kafka. |
| wal.backoff_base | Integer | 2 | The exponential backoff rate, i.e. next backoff = base * current backoff. Only used when the provider is kafka. |
| wal.backoff_deadline | String | 5mins | The deadline of retries. Only used when the provider is kafka. |
| metadata_store | -- | -- | Metadata storage options. |
| metadata_store.file_size | String | 256MB | Kv file size in bytes. |
| metadata_store.purge_threshold | String | 4GB | Kv purge threshold. |
| procedure | -- | -- | Procedure storage options. |
| procedure.max_retry_times | Integer | 3 | Procedure max retry times. |
| procedure.retry_delay | String | 500ms | Initial retry delay of procedures; increases exponentially. |
| storage | -- | -- | The data storage options. |
| storage.data_home | String | /tmp/greptimedb/ | The working home directory. |
| storage.type | String | File | The storage type used to store the data. File: local file system; S3: S3 object storage; Gcs: Google Cloud Storage; Azblob: Azure Blob Storage; Oss: Aliyun OSS. |
| storage.cache_path | String | None | The local file cache directory for object storage such as S3. |
| storage.cache_capacity | String | None | The local file cache capacity in bytes. |
| storage.bucket | String | None | The S3 bucket name. Only used when the storage type is S3, Oss, or Gcs. |
| storage.root | String | None | The prefix under which data is stored, for example s3://${bucket}/${root}. Only used when the storage type is S3, Oss, or Azblob. |
| storage.access_key_id | String | None | The access key id of the AWS account. It's highly recommended to use AWS IAM roles instead of hardcoding the access key id and secret key. Only used when the storage type is S3 or Oss. |
| storage.secret_access_key | String | None | The secret access key of the AWS account. It's highly recommended to use AWS IAM roles instead of hardcoding the access key id and secret key. Only used when the storage type is S3. |
| storage.access_key_secret | String | None | The secret access key of the Aliyun account. Only used when the storage type is Oss. |
| storage.account_name | String | None | The account name of the Azure account. Only used when the storage type is Azblob. |
| storage.account_key | String | None | The account key of the Azure account. Only used when the storage type is Azblob. |
| storage.scope | String | None | The scope of Google Cloud Storage. Only used when the storage type is Gcs. |
| storage.credential_path | String | None | The credential path of Google Cloud Storage. Only used when the storage type is Gcs. |
| storage.container | String | None | The container of the Azure account. Only used when the storage type is Azblob. |
| storage.sas_token | String | None | The SAS token of the Azure account. Only used when the storage type is Azblob. |
| storage.endpoint | String | None | The endpoint of the object storage service. Only used when the storage type is S3, Oss, Gcs, or Azblob. |
| storage.region | String | None | The region of the object storage service. Only used when the storage type is S3, Oss, Gcs, or Azblob. |
| [[region_engine]] | -- | -- | The region engine options. You can configure multiple region engines. |
| region_engine.mito | -- | -- | The Mito engine options. |
| region_engine.mito.num_workers | Integer | 8 | Number of region workers. |
| region_engine.mito.worker_channel_size | Integer | 128 | Request channel size of each worker. |
| region_engine.mito.worker_request_batch_size | Integer | 64 | Max batch size for a worker to handle requests. |
| region_engine.mito.manifest_checkpoint_distance | Integer | 10 | Number of meta action updates that trigger a new checkpoint for the manifest. |
| region_engine.mito.compress_manifest | Bool | false | Whether to compress manifest and checkpoint files with gzip. |
| region_engine.mito.max_background_jobs | Integer | 4 | Max number of running background jobs. |
| region_engine.mito.auto_flush_interval | String | 1h | Interval to auto flush a region if it has not flushed yet. |
| region_engine.mito.global_write_buffer_size | String | 1GB | Global write buffer size for all regions. If not set, it defaults to 1/8 of OS memory, capped at 1GB. |
| region_engine.mito.global_write_buffer_reject_size | String | 2GB | Global write buffer size threshold to reject write requests. If not set, it defaults to twice global_write_buffer_size. |
| region_engine.mito.sst_meta_cache_size | String | 128MB | Cache size for SST metadata. Set it to 0 to disable the cache. If not set, it defaults to 1/32 of OS memory, capped at 128MB. |
| region_engine.mito.vector_cache_size | String | 512MB | Cache size for vectors and arrow arrays. Set it to 0 to disable the cache. If not set, it defaults to 1/16 of OS memory, capped at 512MB. |
| region_engine.mito.page_cache_size | String | 512MB | Cache size for pages of SST row groups. Set it to 0 to disable the cache. If not set, it defaults to 1/16 of OS memory, capped at 512MB. |
| region_engine.mito.sst_write_buffer_size | String | 8MB | Buffer size for SST writing. |
| region_engine.mito.scan_parallelism | Integer | 0 | Parallelism to scan a region. 0: use the default value (1/4 of CPU cores); 1: scan in the current thread; n: scan with parallelism n. |
| region_engine.mito.parallel_scan_channel_size | Integer | 32 | Capacity of the channel that sends data from parallel scan tasks to the main task. |
| region_engine.mito.allow_stale_entries | Bool | false | Whether to allow stale WAL entries to be read during replay. |
| region_engine.mito.inverted_index | -- | -- | The options for the inverted index in the Mito engine. |
| region_engine.mito.inverted_index.create_on_flush | String | auto | Whether to create the index on flush. auto: automatically; disable: never. |
| region_engine.mito.inverted_index.create_on_compaction | String | auto | Whether to create the index on compaction. auto: automatically; disable: never. |
| region_engine.mito.inverted_index.apply_on_query | String | auto | Whether to apply the index on query. auto: automatically; disable: never. |
| region_engine.mito.inverted_index.mem_threshold_on_create | String | 64M | Memory threshold for performing an external sort during index creation. Setting it to empty disables external sorting, forcing all sorting operations to happen in memory. |
| region_engine.mito.inverted_index.intermediate_path | String | "" | File system path to store intermediate files for external sorting (default {data_home}/index_intermediate). |
| region_engine.mito.memtable | -- | -- | Memtable options. |
| region_engine.mito.memtable.type | String | time_series | Memtable type. time_series: time-series memtable; partition_tree: partition tree memtable (experimental). |
| region_engine.mito.memtable.index_max_keys_per_shard | Integer | 8192 | The max number of keys in one shard. Only available for the partition_tree memtable. |
| region_engine.mito.memtable.data_freeze_threshold | Integer | 32768 | The max rows of data inside the actively writing buffer in one shard. Only available for the partition_tree memtable. |
| region_engine.mito.memtable.fork_dictionary_bytes | String | 1GiB | Max dictionary bytes. Only available for the partition_tree memtable. |
| logging | -- | -- | The logging options. |
| logging.dir | String | /tmp/greptimedb/logs | The directory to store the log files. |
| logging.level | String | None | The log level. Can be info/debug/warn/error. |
| logging.enable_otlp_tracing | Bool | false | Enable OTLP tracing. |
| logging.otlp_endpoint | String | None | The OTLP tracing endpoint. |
| logging.append_stdout | Bool | true | Whether to append logs to stdout. |
| logging.tracing_sample_ratio | -- | -- | The fraction of traces to sample and export. Valid range is [0, 1]: 1 samples all traces, 0 samples none; the default is 1. Ratios > 1 are treated as 1 and ratios < 0 are treated as 0. |
| logging.tracing_sample_ratio.default_ratio | Float | 1.0 | -- |
| export_metrics | -- | -- | The server can export its own metrics to a Prometheus-compatible service (e.g. GreptimeDB itself) via the remote-write API. This is only used for GreptimeDB to export its own metrics internally; it is different from a Prometheus scrape. |
| export_metrics.enable | Bool | false | Whether to enable exporting metrics. |
| export_metrics.write_interval | String | 30s | The interval of exporting metrics. |
| export_metrics.self_import | -- | -- | For standalone mode, self_import is recommended for collecting the metrics the server generates itself. |
| export_metrics.self_import.db | String | None | -- |
| export_metrics.remote_write | -- | -- | -- |
| export_metrics.remote_write.url | String | "" | The URL to send metrics to, for example http://127.0.0.1:4000/v1/prometheus/write?db=information_schema. |
| export_metrics.remote_write.headers | InlineTable | -- | HTTP headers carried by Prometheus remote-write requests. |
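For orientation, here is a minimal sketch of how a few of the options above fit together in a standalone TOML configuration file. The values either repeat the defaults from the table or are placeholder examples (the timezone and the WAL directory); an actual deployment only needs the keys it overrides.

```toml
# Standalone configuration sketch; values marked as examples are placeholders.
mode = "standalone"
default_timezone = "UTC"          # example value; unset by default

[http]
addr = "127.0.0.1:4000"
timeout = "30s"
body_limit = "64MB"

[wal]
provider = "raft_engine"
dir = "/tmp/greptimedb/wal"       # example path; unset by default
file_size = "256MB"
purge_threshold = "4GB"

[storage]
type = "File"
data_home = "/tmp/greptimedb/"

[[region_engine]]
[region_engine.mito]
num_workers = 8
auto_flush_interval = "1h"

[export_metrics]
enable = false
write_interval = "30s"
```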
## Cluster Mode
### Frontend
| Key | Type | Default | Description |
|---|---|---|---|
| mode | String | standalone | The running mode of the server. It can be standalone or distributed. |
| default_timezone | String | None | The default timezone of the server. |
| heartbeat | -- | -- | The heartbeat options. |
| heartbeat.interval | String | 18s | Interval for sending heartbeat messages to the metasrv. |
| heartbeat.retry_interval | String | 3s | Interval for retrying to send heartbeat messages to the metasrv. |
| http | -- | -- | The HTTP server options. |
| http.addr | String | 127.0.0.1:4000 | The address to bind the HTTP server. |
| http.timeout | String | 30s | HTTP request timeout. |
| http.body_limit | String | 64MB | HTTP request body limit. The following units are supported: B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, PiB. |
| grpc | -- | -- | The gRPC server options. |
| grpc.addr | String | 127.0.0.1:4001 | The address to bind the gRPC server. |
| grpc.runtime_size | Integer | 8 | The number of server worker threads. |
| mysql | -- | -- | MySQL server options. |
| mysql.enable | Bool | true | Whether to enable the MySQL server. |
| mysql.addr | String | 127.0.0.1:4002 | The address to bind the MySQL server. |
| mysql.runtime_size | Integer | 2 | The number of server worker threads. |
| mysql.tls | -- | -- | MySQL server TLS options. |
| mysql.tls.mode | String | disable | TLS mode; refer to https://www.postgresql.org/docs/current/libpq-ssl.html. Options: disable (default), prefer, require, verify-ca, verify-full. |
| mysql.tls.cert_path | String | None | Certificate file path. |
| mysql.tls.key_path | String | None | Private key file path. |
| mysql.tls.watch | Bool | false | Watch for certificate and key file changes and reload them automatically. |
| postgres | -- | -- | PostgreSQL server options. |
| postgres.enable | Bool | true | Whether to enable the PostgreSQL server. |
| postgres.addr | String | 127.0.0.1:4003 | The address to bind the PostgreSQL server. |
| postgres.runtime_size | Integer | 2 | The number of server worker threads. |
| postgres.tls | -- | -- | PostgreSQL server TLS options; see the mysql.tls section. |
| postgres.tls.mode | String | disable | TLS mode. |
| postgres.tls.cert_path | String | None | Certificate file path. |
| postgres.tls.key_path | String | None | Private key file path. |
| postgres.tls.watch | Bool | false | Watch for certificate and key file changes and reload them automatically. |
| opentsdb | -- | -- | OpenTSDB protocol options. |
| opentsdb.enable | Bool | true | Whether to enable the OpenTSDB protocol. |
| opentsdb.addr | String | 127.0.0.1:4242 | OpenTSDB telnet API server address. |
| opentsdb.runtime_size | Integer | 2 | The number of server worker threads. |
| influxdb | -- | -- | InfluxDB protocol options. |
| influxdb.enable | Bool | true | Whether to enable the InfluxDB protocol in the HTTP API. |
| prom_store | -- | -- | Prometheus remote storage options. |
| prom_store.enable | Bool | true | Whether to enable Prometheus remote write and read in the HTTP API. |
| prom_store.with_metric_engine | Bool | true | Whether to store data from Prometheus remote write in the metric engine. |
| meta_client | -- | -- | The metasrv client options. |
| meta_client.metasrv_addrs | Array | -- | The addresses of the metasrv. |
| meta_client.timeout | String | 3s | Operation timeout. |
| meta_client.heartbeat_timeout | String | 500ms | Heartbeat timeout. |
| meta_client.ddl_timeout | String | 10s | DDL timeout. |
| meta_client.connect_timeout | String | 1s | Connect server timeout. |
| meta_client.tcp_nodelay | Bool | true | TCP_NODELAY option for accepted connections. |
| meta_client.metadata_cache_max_capacity | Integer | 100000 | The maximum capacity of the metadata cache. |
| meta_client.metadata_cache_ttl | String | 10m | TTL of the metadata cache. |
| meta_client.metadata_cache_tti | String | 5m | -- |
| datanode | -- | -- | Datanode options. |
| datanode.client | -- | -- | Datanode client options. |
| datanode.client.timeout | String | 10s | -- |
| datanode.client.connect_timeout | String | 10s | -- |
| datanode.client.tcp_nodelay | Bool | true | -- |
| logging | -- | -- | The logging options. |
| logging.dir | String | /tmp/greptimedb/logs | The directory to store the log files. |
| logging.level | String | None | The log level. Can be info/debug/warn/error. |
| logging.enable_otlp_tracing | Bool | false | Enable OTLP tracing. |
| logging.otlp_endpoint | String | None | The OTLP tracing endpoint. |
| logging.append_stdout | Bool | true | Whether to append logs to stdout. |
| logging.tracing_sample_ratio | -- | -- | The fraction of traces to sample and export. Valid range is [0, 1]: 1 samples all traces, 0 samples none; the default is 1. Ratios > 1 are treated as 1 and ratios < 0 are treated as 0. |
| logging.tracing_sample_ratio.default_ratio | Float | 1.0 | -- |
| export_metrics | -- | -- | The server can export its own metrics to a Prometheus-compatible service (e.g. GreptimeDB itself) via the remote-write API. This is only used for GreptimeDB to export its own metrics internally; it is different from a Prometheus scrape. |
| export_metrics.enable | Bool | false | Whether to enable exporting metrics. |
| export_metrics.write_interval | String | 30s | The interval of exporting metrics. |
| export_metrics.self_import | -- | -- | For standalone mode, self_import is recommended for collecting the metrics the server generates itself. |
| export_metrics.self_import.db | String | None | -- |
| export_metrics.remote_write | -- | -- | -- |
| export_metrics.remote_write.url | String | "" | The URL to send metrics to, for example http://127.0.0.1:4000/v1/prometheus/write?db=information_schema. |
| export_metrics.remote_write.headers | InlineTable | -- | HTTP headers carried by Prometheus remote-write requests. |
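As a hedged sketch, a frontend in cluster mode mainly needs the metasrv client section from the table above. The metasrv endpoint below is a placeholder, and mode is set to distributed (rather than the table default of standalone) because the frontend joins a cluster.

```toml
# Frontend configuration sketch; the metasrv address is a placeholder.
mode = "distributed"                 # cluster deployment; the table default is "standalone"
default_timezone = "UTC"             # example value; unset by default

[heartbeat]
interval = "18s"
retry_interval = "3s"

[meta_client]
metasrv_addrs = ["127.0.0.1:3002"]   # placeholder metasrv endpoint
timeout = "3s"
ddl_timeout = "10s"
connect_timeout = "1s"
tcp_nodelay = true

[mysql]
enable = true
addr = "127.0.0.1:4002"
```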
### Metasrv
| Key | Type | Default | Description |
|---|---|---|---|
| data_home | String | /tmp/metasrv/ | The working home directory. |
| bind_addr | String | 127.0.0.1:3002 | The bind address of metasrv. |
| server_addr | String | 127.0.0.1:3002 | The communication server address for the frontend and datanode to connect to metasrv; "127.0.0.1:3002" by default for localhost. |
| store_addr | String | 127.0.0.1:2379 | Etcd server address. |
| selector | String | lease_based | Datanode selector type: lease_based (default) or load_based. For details, see https://docs.greptime.com/developer-guide/metasrv/selector. |
| use_memory_store | Bool | false | Store data in memory. |
| enable_telemetry | Bool | true | Whether to enable GreptimeDB telemetry. |
| store_key_prefix | String | "" | If it's not empty, the metasrv will store all data with this key prefix. |
| procedure | -- | -- | Procedure storage options. |
| procedure.max_retry_times | Integer | 12 | Procedure max retry times. |
| procedure.retry_delay | String | 500ms | Initial retry delay of procedures; increases exponentially. |
| procedure.max_metadata_value_size | String | 1500KiB | Auto-split threshold for large metadata values. GreptimeDB procedures use etcd as the default metadata storage backend, and etcd limits the maximum size of any request to 1.5 MiB; 1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of the key). Comment out max_metadata_value_size to never split large values (no limit). |
| failure_detector | -- | -- | -- |
| failure_detector.threshold | Float | 8.0 | -- |
| failure_detector.min_std_deviation | String | 100ms | -- |
| failure_detector.acceptable_heartbeat_pause | String | 3000ms | -- |
| failure_detector.first_heartbeat_estimate | String | 1000ms | -- |
| datanode | -- | -- | Datanode options. |
| datanode.client | -- | -- | Datanode client options. |
| datanode.client.timeout | String | 10s | -- |
| datanode.client.connect_timeout | String | 10s | -- |
| datanode.client.tcp_nodelay | Bool | true | -- |
| wal | -- | -- | -- |
| wal.provider | String | raft_engine | -- |
| wal.broker_endpoints | Array | -- | The broker endpoints of the Kafka cluster. |
| wal.num_topics | Integer | 64 | Number of topics to be created upon start. |
| wal.selector_type | String | round_robin | Topic selector type. Available selector types: round_robin (default). |
| wal.topic_name_prefix | String | greptimedb_wal_topic | A Kafka topic is constructed by concatenating topic_name_prefix and topic_id. |
| wal.replication_factor | Integer | 1 | Expected number of replicas of each partition. |
| wal.create_topic_timeout | String | 30s | The timeout above which a topic creation operation is cancelled. |
| wal.backoff_init | String | 500ms | The initial backoff for Kafka clients. |
| wal.backoff_max | String | 10s | The maximum backoff for Kafka clients. |
| wal.backoff_base | Integer | 2 | Exponential backoff rate, i.e. next backoff = base * current backoff. |
| wal.backoff_deadline | String | 5mins | Stop reconnecting if the total wait time reaches the deadline. If this config is missing, reconnecting won't terminate. |
| logging | -- | -- | The logging options. |
| logging.dir | String | /tmp/greptimedb/logs | The directory to store the log files. |
| logging.level | String | None | The log level. Can be info/debug/warn/error. |
| logging.enable_otlp_tracing | Bool | false | Enable OTLP tracing. |
| logging.otlp_endpoint | String | None | The OTLP tracing endpoint. |
| logging.append_stdout | Bool | true | Whether to append logs to stdout. |
| logging.tracing_sample_ratio | -- | -- | The fraction of traces to sample and export. Valid range is [0, 1]: 1 samples all traces, 0 samples none; the default is 1. Ratios > 1 are treated as 1 and ratios < 0 are treated as 0. |
| logging.tracing_sample_ratio.default_ratio | Float | 1.0 | -- |
| export_metrics | -- | -- | The server can export its own metrics to a Prometheus-compatible service (e.g. GreptimeDB itself) via the remote-write API. This is only used for GreptimeDB to export its own metrics internally; it is different from a Prometheus scrape. |
| export_metrics.enable | Bool | false | Whether to enable exporting metrics. |
| export_metrics.write_interval | String | 30s | The interval of exporting metrics. |
| export_metrics.self_import | -- | -- | For standalone mode, self_import is recommended for collecting the metrics the server generates itself. |
| export_metrics.self_import.db | String | None | -- |
| export_metrics.remote_write | -- | -- | -- |
| export_metrics.remote_write.url | String | "" | The URL to send metrics to, for example http://127.0.0.1:4000/v1/prometheus/write?db=information_schema. |
| export_metrics.remote_write.headers | InlineTable | -- | HTTP headers carried by Prometheus remote-write requests. |
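The sketch below shows a metasrv TOML file that switches the WAL provider to kafka so the Kafka-related keys from the table apply; the etcd and Kafka endpoints are placeholders, and the remaining values repeat the documented defaults.

```toml
# Metasrv configuration sketch; etcd and Kafka endpoints are placeholders.
data_home = "/tmp/metasrv/"
bind_addr = "127.0.0.1:3002"
server_addr = "127.0.0.1:3002"
store_addr = "127.0.0.1:2379"           # placeholder etcd endpoint
selector = "lease_based"
use_memory_store = false

[wal]
provider = "kafka"                      # the table default is "raft_engine"
broker_endpoints = ["127.0.0.1:9092"]   # placeholder Kafka broker
num_topics = 64
selector_type = "round_robin"
topic_name_prefix = "greptimedb_wal_topic"
replication_factor = 1
create_topic_timeout = "30s"
```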
### Datanode
| Key | Type | Default | Description |
|---|---|---|---|
| mode | String | standalone | The running mode of the datanode. It can be standalone or distributed. |
| node_id | Integer | None | The datanode identifier; it should be unique in the cluster. |
| require_lease_before_startup | Bool | false | Start services after regions have obtained leases. It will block the datanode start if it can't receive leases in the heartbeat from metasrv. |
| init_regions_in_background | Bool | false | Initialize all regions in the background during startup. By default, it provides services after all regions have been initialized. |
| rpc_addr | String | 127.0.0.1:3001 | The gRPC address of the datanode. |
| rpc_hostname | String | None | The hostname of the datanode. |
| rpc_runtime_size | Integer | 8 | The number of gRPC server worker threads. |
| rpc_max_recv_message_size | String | 512MB | The maximum receive message size for the gRPC server. |
| rpc_max_send_message_size | String | 512MB | The maximum send message size for the gRPC server. |
| enable_telemetry | Bool | true | Enable telemetry to collect anonymous usage data. |
| heartbeat | -- | -- | The heartbeat options. |
| heartbeat.interval | String | 3s | Interval for sending heartbeat messages to the metasrv. |
| heartbeat.retry_interval | String | 3s | Interval for retrying to send heartbeat messages to the metasrv. |
| meta_client | -- | -- | The metasrv client options. |
| meta_client.metasrv_addrs | Array | -- | The addresses of the metasrv. |
| meta_client.timeout | String | 3s | Operation timeout. |
| meta_client.heartbeat_timeout | String | 500ms | Heartbeat timeout. |
| meta_client.ddl_timeout | String | 10s | DDL timeout. |
| meta_client.connect_timeout | String | 1s | Connect server timeout. |
| meta_client.tcp_nodelay | Bool | true | TCP_NODELAY option for accepted connections. |
| meta_client.metadata_cache_max_capacity | Integer | 100000 | The maximum capacity of the metadata cache. |
| meta_client.metadata_cache_ttl | String | 10m | TTL of the metadata cache. |
| meta_client.metadata_cache_tti | String | 5m | -- |
| wal | -- | -- | The WAL options. |
| wal.provider | String | raft_engine | The provider of the WAL. raft_engine: the WAL is stored in the local file system by raft-engine; kafka: remote WAL whose data is stored in Kafka. |
| wal.dir | String | None | The directory to store the WAL files. Only used when the provider is raft_engine. |
| wal.file_size | String | 256MB | The size of the WAL segment file. Only used when the provider is raft_engine. |
| wal.purge_threshold | String | 4GB | The threshold of the WAL size to trigger a flush. Only used when the provider is raft_engine. |
| wal.purge_interval | String | 10m | The interval to trigger a flush. Only used when the provider is raft_engine. |
| wal.read_batch_size | Integer | 128 | The read batch size. Only used when the provider is raft_engine. |
| wal.sync_write | Bool | false | Whether to use sync write. Only used when the provider is raft_engine. |
| wal.enable_log_recycle | Bool | true | Whether to reuse logically truncated log files. Only used when the provider is raft_engine. |
| wal.prefill_log_files | Bool | false | Whether to pre-create log files on startup. Only used when the provider is raft_engine. |
| wal.sync_period | String | 10s | Duration for fsyncing log files. Only used when the provider is raft_engine. |
| wal.broker_endpoints | Array | -- | The Kafka broker endpoints. Only used when the provider is kafka. |
| wal.max_batch_size | String | 1MB | The max size of a single producer batch. Warning: Kafka has a default limit of 1MB per message in a topic. Only used when the provider is kafka. |
| wal.linger | String | 200ms | The linger duration of a Kafka batch producer. Only used when the provider is kafka. |
| wal.consumer_wait_timeout | String | 100ms | The consumer wait timeout. Only used when the provider is kafka. |
| wal.backoff_init | String | 500ms | The initial backoff delay. Only used when the provider is kafka. |
| wal.backoff_max | String | 10s | The maximum backoff delay. Only used when the provider is kafka. |
| wal.backoff_base | Integer | 2 | The exponential backoff rate, i.e. next backoff = base * current backoff. Only used when the provider is kafka. |
| wal.backoff_deadline | String | 5mins | The deadline of retries. Only used when the provider is kafka. |
| storage | -- | -- | The data storage options. |
| storage.data_home | String | /tmp/greptimedb/ | The working home directory. |
| storage.type | String | File | The storage type used to store the data. File: local file system; S3: S3 object storage; Gcs: Google Cloud Storage; Azblob: Azure Blob Storage; Oss: Aliyun OSS. |
| storage.cache_path | String | None | The local file cache directory for object storage such as S3. |
| storage.cache_capacity | String | None | The local file cache capacity in bytes. |
| storage.bucket | String | None | The S3 bucket name. Only used when the storage type is S3, Oss, or Gcs. |
| storage.root | String | None | The prefix under which data is stored, for example s3://${bucket}/${root}. Only used when the storage type is S3, Oss, or Azblob. |
| storage.access_key_id | String | None | The access key id of the AWS account. It's highly recommended to use AWS IAM roles instead of hardcoding the access key id and secret key. Only used when the storage type is S3 or Oss. |
| storage.secret_access_key | String | None | The secret access key of the AWS account. It's highly recommended to use AWS IAM roles instead of hardcoding the access key id and secret key. Only used when the storage type is S3. |
| storage.access_key_secret | String | None | The secret access key of the Aliyun account. Only used when the storage type is Oss. |
| storage.account_name | String | None | The account name of the Azure account. Only used when the storage type is Azblob. |
| storage.account_key | String | None | The account key of the Azure account. Only used when the storage type is Azblob. |
| storage.scope | String | None | The scope of Google Cloud Storage. Only used when the storage type is Gcs. |
| storage.credential_path | String | None | The credential path of Google Cloud Storage. Only used when the storage type is Gcs. |
| storage.container | String | None | The container of the Azure account. Only used when the storage type is Azblob. |
| storage.sas_token | String | None | The SAS token of the Azure account. Only used when the storage type is Azblob. |
| storage.endpoint | String | None | The endpoint of the object storage service. Only used when the storage type is S3, Oss, Gcs, or Azblob. |
| storage.region | String | None | The region of the object storage service. Only used when the storage type is S3, Oss, Gcs, or Azblob. |
| [[region_engine]] | -- | -- | The region engine options. You can configure multiple region engines. |
| region_engine.mito | -- | -- | The Mito engine options. |
| region_engine.mito.num_workers | Integer | 8 | Number of region workers. |
| region_engine.mito.worker_channel_size | Integer | 128 | Request channel size of each worker. |
| region_engine.mito.worker_request_batch_size | Integer | 64 | Max batch size for a worker to handle requests. |
| region_engine.mito.manifest_checkpoint_distance | Integer | 10 | Number of meta action updates that trigger a new checkpoint for the manifest. |
| region_engine.mito.compress_manifest | Bool | false | Whether to compress manifest and checkpoint files with gzip. |
| region_engine.mito.max_background_jobs | Integer | 4 | Max number of running background jobs. |
| region_engine.mito.auto_flush_interval | String | 1h | Interval to auto flush a region if it has not flushed yet. |
| region_engine.mito.global_write_buffer_size | String | 1GB | Global write buffer size for all regions. If not set, it defaults to 1/8 of OS memory, capped at 1GB. |
| region_engine.mito.global_write_buffer_reject_size | String | 2GB | Global write buffer size threshold to reject write requests. If not set, it defaults to twice global_write_buffer_size. |
| region_engine.mito.sst_meta_cache_size | String | 128MB | Cache size for SST metadata. Set it to 0 to disable the cache. If not set, it defaults to 1/32 of OS memory, capped at 128MB. |
| region_engine.mito.vector_cache_size | String | 512MB | Cache size for vectors and arrow arrays. Set it to 0 to disable the cache. If not set, it defaults to 1/16 of OS memory, capped at 512MB. |
| region_engine.mito.page_cache_size | String | 512MB | Cache size for pages of SST row groups. Set it to 0 to disable the cache. If not set, it defaults to 1/16 of OS memory, capped at 512MB. |
| region_engine.mito.sst_write_buffer_size | String | 8MB | Buffer size for SST writing. |
| region_engine.mito.scan_parallelism | Integer | 0 | Parallelism to scan a region. 0: use the default value (1/4 of CPU cores); 1: scan in the current thread; n: scan with parallelism n. |
| region_engine.mito.parallel_scan_channel_size | Integer | 32 | Capacity of the channel that sends data from parallel scan tasks to the main task. |
| region_engine.mito.allow_stale_entries | Bool | false | Whether to allow stale WAL entries to be read during replay. |
| region_engine.mito.inverted_index | -- | -- | The options for the inverted index in the Mito engine. |
| region_engine.mito.inverted_index.create_on_flush | String | auto | Whether to create the index on flush. auto: automatically; disable: never. |
| region_engine.mito.inverted_index.create_on_compaction | String | auto | Whether to create the index on compaction. auto: automatically; disable: never. |
| region_engine.mito.inverted_index.apply_on_query | String | auto | Whether to apply the index on query. auto: automatically; disable: never. |
| region_engine.mito.inverted_index.mem_threshold_on_create | String | 64M | Memory threshold for performing an external sort during index creation. Setting it to empty disables external sorting, forcing all sorting operations to happen in memory. |
| region_engine.mito.inverted_index.intermediate_path | String | "" | File system path to store intermediate files for external sorting (default {data_home}/index_intermediate). |
| region_engine.mito.memtable | -- | -- | Memtable options. |
| region_engine.mito.memtable.type | String | time_series | Memtable type. time_series: time-series memtable; partition_tree: partition tree memtable (experimental). |
| region_engine.mito.memtable.index_max_keys_per_shard | Integer | 8192 | The max number of keys in one shard. Only available for the partition_tree memtable. |
| region_engine.mito.memtable.data_freeze_threshold | Integer | 32768 | The max rows of data inside the actively writing buffer in one shard. Only available for the partition_tree memtable. |
| region_engine.mito.memtable.fork_dictionary_bytes | String | 1GiB | Max dictionary bytes. Only available for the partition_tree memtable. |
| logging | -- | -- | The logging options. |
| logging.dir | String | /tmp/greptimedb/logs | The directory to store the log files. |
| logging.level | String | None | The log level. Can be info/debug/warn/error. |
| logging.enable_otlp_tracing | Bool | false | Enable OTLP tracing. |
| logging.otlp_endpoint | String | None | The OTLP tracing endpoint. |
| logging.append_stdout | Bool | true | Whether to append logs to stdout. |
| logging.tracing_sample_ratio | -- | -- | The fraction of traces to sample and export. Valid range is [0, 1]: 1 samples all traces, 0 samples none; the default is 1. Ratios > 1 are treated as 1 and ratios < 0 are treated as 0. |
| logging.tracing_sample_ratio.default_ratio | Float | 1.0 | -- |
| export_metrics | -- | -- | The datanode can export its own metrics to a Prometheus-compatible service (e.g. GreptimeDB itself) via the remote-write API. This is only used for GreptimeDB to export its own metrics internally; it is different from a Prometheus scrape. |
| export_metrics.enable | Bool | false | Whether to enable exporting metrics. |
| export_metrics.write_interval | String | 30s | The interval of exporting metrics. |
| export_metrics.self_import | -- | -- | For standalone mode, self_import is recommended for collecting the metrics the server generates itself. |
| export_metrics.self_import.db | String | None | -- |
| export_metrics.remote_write | -- | -- | -- |
| export_metrics.remote_write.url | String | "" | The URL to send metrics to, for example http://127.0.0.1:4000/v1/prometheus/write?db=information_schema. |
| export_metrics.remote_write.headers | InlineTable | -- | HTTP headers carried by Prometheus remote-write requests. |
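To close the section, here is a hedged sketch of a datanode TOML file using S3 storage. The node id, metasrv address, bucket, prefix, and region are placeholders, and credentials are deliberately omitted in favor of IAM roles, as the table recommends.

```toml
# Datanode configuration sketch; the S3 and metasrv values are placeholders.
mode = "distributed"
node_id = 42                            # placeholder; must be unique in the cluster
rpc_addr = "127.0.0.1:3001"
rpc_runtime_size = 8

[heartbeat]
interval = "3s"

[meta_client]
metasrv_addrs = ["127.0.0.1:3002"]      # placeholder metasrv endpoint
timeout = "3s"

[storage]
type = "S3"
bucket = "example-greptimedb-bucket"    # placeholder bucket
root = "data"                           # objects are stored under s3://${bucket}/${root}
region = "us-west-2"                    # placeholder region
# Prefer AWS IAM roles over access_key_id / secret_access_key, as noted above.

[[region_engine]]
[region_engine.mito]
num_workers = 8
scan_parallelism = 0                    # 0 = default (1/4 of CPU cores)
```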