Fix pageserver CLI parameter names and document them

Author: Heikki Linnakangas
Date: 2021-11-25 11:15:33 +02:00
Committed by: Kirill Bulatov
Parent: 0650e51f0b
Commit: d47f610606
2 changed files with 160 additions and 32 deletions

docs/settings.md (new file, +128)

@@ -0,0 +1,128 @@
## Pageserver
### listen_pg_addr
Network interface and port number to listen on for connections from
the compute nodes and safekeepers. The default is `127.0.0.1:64000`.
### listen_http_addr
Network interface and port number to listen on for admin connections.
The default is `127.0.0.1:9898`.
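
For example, a minimal sketch of overriding both listen addresses on the command line (assuming the binary is invoked as `pageserver`; the flag names match the CLI arguments defined in the code change below):

```
# Serve page requests on all interfaces, keep the admin API local-only.
pageserver --listen_pg_addr=0.0.0.0:64000 --listen_http_addr=127.0.0.1:9898
```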
### checkpoint_distance
`checkpoint_distance` is the amount of incoming WAL that is held in
the open layer, before it's flushed to local disk. It puts an upper
bound on how much WAL needs to be re-processed after a pageserver
crash. It is a soft limit: the pageserver can momentarily exceed it,
but doing so triggers a checkpoint operation to bring it back below
the limit.
`checkpoint_distance` also determines how much WAL needs to be kept
durable in the safekeeper. The safekeeper must have capacity to hold
this much WAL, with some headroom, otherwise you can get stuck in a
situation where the safekeeper is full and stops accepting new WAL,
but the pageserver is not flushing out and releasing the space in the
safekeeper because it hasn't reached `checkpoint_distance` yet.
`checkpoint_distance` also controls how often the WAL is uploaded to
S3.
The unit is # of bytes.
### checkpoint_period
The pageserver checks whether `checkpoint_distance` has been reached
every `checkpoint_period` seconds. Default is 1 s, which should be
fine.
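
As a sketch, raising the checkpoint distance to 256 MB while checking it every 10 seconds might look like this. `checkpoint_distance` and `checkpoint_period` are the argument names read in the code change below, but the values here are hypothetical and their exact formats are assumptions:

```
# 256 MB = 256 * 1024 * 1024 = 268435456 bytes of in-flight WAL
pageserver --checkpoint_distance=268435456 --checkpoint_period=10
```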
### gc_horizon
`gc_horizon` determines how much history is retained, to allow
branching and read replicas at an older point in time. The unit is #
of bytes of WAL. Page versions older than this are garbage collected
away.
### gc_period
Interval at which garbage collection is triggered. Default is 100 s.
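
For instance, a sketch that retains roughly 1 GB of history for branching and runs GC every 100 seconds (hypothetical values, same caveats about value formats as above):

```
# 1 GB = 1073741824 bytes of WAL history kept for branches and replicas
pageserver --gc_horizon=1073741824 --gc_period=100
```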
### superuser
Name of the initial superuser role, passed to initdb when a new tenant
is initialized. It doesn't affect anything after initialization. The
default is 'zenith_admin'. Note: the console depends on that name, so
if you change it, bad things will happen.
### page_cache_size
Size of the page cache, to hold materialized page versions. Unit is
number of 8 kB blocks. The default is 8192, which means 64 MB.
### max_file_descriptors
Max number of file descriptors to hold open concurrently for accessing
layer files. This should be kept well below the process/container/OS
limit (see `ulimit -n`), as the pageserver also needs file descriptors
for other files and for sockets for incoming connections.
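
A hedged sketch of sizing this against the OS limit; the `--max_file_descriptors` flag spelling is an assumption, since this commit only documents the setting and does not show the argument definition:

```
ulimit -n                               # inspect the per-process limit first
pageserver --max_file_descriptors=512   # keep well below `ulimit -n`
```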
### postgres-distrib
A directory with a Postgres installation to use during pageserver activities.
Inside that dir, a `bin/postgres` binary should be present.
The default distrib dir is `./tmp_install/`.
### workdir (-D)
A directory in the file system, where pageserver will store its files.
The default is `./.zenith/`.
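
Putting the two paths together, a sketch of an invocation; the `-D` short flag comes from the heading above, while `--postgres-distrib` as the long-flag spelling is an assumption:

```
# Hypothetical paths for the workdir and the Postgres distrib dir.
pageserver -D /data/.zenith --postgres-distrib=/usr/local/zenith/tmp_install
```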
### Remote storage
The pageserver can automatically back up and restore some of the data in its working directory to remote storage.
The backup system is disabled by default and can be enabled for one of the currently available storage backends:
#### Local FS storage
##### remote-storage-local-path
Pageserver can back up and restore some of its workdir contents to another directory.
For that, only a path to that directory needs to be specified as a parameter.
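
A minimal sketch enabling the local-FS backend (hypothetical path; the flag name comes from the code change below):

```
# Mirror parts of the workdir into another directory.
pageserver --remote-storage-local-path=/mnt/pageserver-backup
```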
#### S3 storage
Pageserver can back up and restore some of its workdir contents to S3.
Full set of S3 credentials is needed for that as parameters:
##### remote-storage-s3-bucket
Name of the bucket to connect to, example: "some-sample-bucket".
##### remote-storage-region
Name of the region where the bucket is located, example: "eu-north-1".
##### remote-storage-access-key
Access key to connect to the bucket ("login" part of the credentials), example: "AKIAIOSFODNN7EXAMPLE"
##### remote-storage-secret-access-key
Secret access key to connect to the bucket ("password" part of the credentials), example: "wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY"
#### General remote storage configuration
The pageserver allows only one remote storage to be configured at a time and errors out if parameters from multiple different remote storage configurations are mixed.
No default values are used for the remote storage configuration parameters.
##### remote-storage-max-concurrent-sync
Max number of concurrent connections to open for uploading to or
downloading from S3.
The default value is 100.
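
Tying the S3 parameters together, a hedged sketch of a full S3-backed invocation. The flag names come from the code change below, the credential values are the AWS documentation examples quoted above, and, since only one backend may be configured, these flags conflict with `--remote-storage-local-path`:

```
# All S3 parameters plus a lowered sync concurrency (default is 100).
pageserver --remote-storage-s3-bucket=some-sample-bucket \
           --remote-storage-region=eu-north-1 \
           --remote-storage-access-key=AKIAIOSFODNN7EXAMPLE \
           --remote-storage-secret-access-key=wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY \
           --remote-storage-max-concurrent-sync=50
```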
## Safekeeper
TODO


@@ -85,24 +85,24 @@ impl CfgFileParams {
             arg_matches.value_of(arg_name).map(str::to_owned)
         };

-        let remote_storage = if let Some(local_path) = get_arg("relish-storage-local-path") {
+        let remote_storage = if let Some(local_path) = get_arg("remote-storage-local-path") {
             Some(RemoteStorage::Local { local_path })
         } else if let Some((bucket_name, bucket_region)) =
-            get_arg("relish-storage-s3-bucket").zip(get_arg("relish-storage-region"))
+            get_arg("remote-storage-s3-bucket").zip(get_arg("remote-storage-region"))
         {
             Some(RemoteStorage::AwsS3 {
                 bucket_name,
                 bucket_region,
-                access_key_id: get_arg("relish-storage-access-key"),
-                secret_access_key: get_arg("relish-storage-secret-access-key"),
+                access_key_id: get_arg("remote-storage-access-key"),
+                secret_access_key: get_arg("remote-storage-secret-access-key"),
             })
         } else {
             None
         };

         Self {
-            listen_pg_addr: get_arg("listen-pg"),
-            listen_http_addr: get_arg("listen-http"),
+            listen_pg_addr: get_arg("listen_pg_addr"),
+            listen_http_addr: get_arg("listen_http_addr"),
             checkpoint_distance: get_arg("checkpoint_distance"),
             checkpoint_period: get_arg("checkpoint_period"),
             gc_horizon: get_arg("gc_horizon"),
@@ -114,7 +114,7 @@ impl CfgFileParams {
             auth_validation_public_key_path: get_arg("auth-validation-public-key-path"),
             auth_type: get_arg("auth-type"),
             remote_storage,
-            remote_storage_max_concurrent_sync: get_arg("relish-storage-max-concurrent-sync"),
+            remote_storage_max_concurrent_sync: get_arg("remote-storage-max-concurrent-sync"),
         }
     }
@@ -282,17 +282,17 @@ fn main() -> Result<()> {
         .about("Materializes WAL stream to pages and serves them to the postgres")
         .version(GIT_VERSION)
         .arg(
-            Arg::with_name("listen-pg")
+            Arg::with_name("listen_pg_addr")
                 .short("l")
-                .long("listen-pg")
-                .alias("listen") // keep some compatibility
+                .long("listen_pg_addr")
+                .aliases(&["listen", "listen-pg"]) // keep some compatibility
                 .takes_value(true)
                 .help(formatcp!("listen for incoming page requests on ip:port (default: {DEFAULT_PG_LISTEN_ADDR})")),
         )
         .arg(
-            Arg::with_name("listen-http")
-                .long("listen-http")
-                .alias("http_endpoint") // keep some compatibility
+            Arg::with_name("listen_http_addr")
+                .long("listen_http_addr")
+                .aliases(&["http_endpoint", "listen-http"]) // keep some compatibility
                 .takes_value(true)
                 .help(formatcp!("http endpoint address for metrics and management API calls on ip:port (default: {DEFAULT_HTTP_LISTEN_ADDR})")),
         )
@@ -385,45 +385,45 @@ fn main() -> Result<()> {
.help("Authentication scheme type. One of: Trust, MD5, ZenithJWT"),
)
.arg(
Arg::with_name("relish-storage-local-path")
.long("relish-storage-local-path")
Arg::with_name("remote-storage-local-path")
.long("remote-storage-local-path")
.takes_value(true)
.help("Path to the local directory, to be used as an external relish storage")
.help("Path to the local directory, to be used as an external remote storage")
.conflicts_with_all(&[
"relish-storage-s3-bucket",
"relish-storage-region",
"relish-storage-access-key",
"relish-storage-secret-access-key",
"remote-storage-s3-bucket",
"remote-storage-region",
"remote-storage-access-key",
"remote-storage-secret-access-key",
]),
)
.arg(
Arg::with_name("relish-storage-s3-bucket")
.long("relish-storage-s3-bucket")
Arg::with_name("remote-storage-s3-bucket")
.long("remote-storage-s3-bucket")
.takes_value(true)
.help("Name of the AWS S3 bucket to use an external relish storage")
.requires("relish-storage-region"),
.help("Name of the AWS S3 bucket to use an external remote storage")
.requires("remote-storage-region"),
)
.arg(
Arg::with_name("relish-storage-region")
.long("relish-storage-region")
Arg::with_name("remote-storage-region")
.long("remote-storage-region")
.takes_value(true)
.help("Region of the AWS S3 bucket"),
)
.arg(
Arg::with_name("relish-storage-access-key")
.long("relish-storage-access-key")
Arg::with_name("remote-storage-access-key")
.long("remote-storage-access-key")
.takes_value(true)
.help("Credentials to access the AWS S3 bucket"),
)
.arg(
Arg::with_name("relish-storage-secret-access-key")
.long("relish-storage-secret-access-key")
Arg::with_name("remote-storage-secret-access-key")
.long("remote-storage-secret-access-key")
.takes_value(true)
.help("Credentials to access the AWS S3 bucket"),
)
.arg(
Arg::with_name("relish-storage-max-concurrent-sync")
.long("relish-storage-max-concurrent-sync")
Arg::with_name("remote-storage-max-concurrent-sync")
.long("remote-storage-max-concurrent-sync")
.takes_value(true)
.help("Maximum allowed concurrent synchronisations with storage"),
)