Mirror of https://github.com/neondatabase/neon.git, synced 2025-12-22 21:59:59 +00:00

Fix typos

committed by Heikki Linnakangas
parent 500e8772f0
commit f1c51a1267
@@ -6,7 +6,7 @@ RELEASE=${RELEASE:-false}
 # look at docker hub for latest tag for neon docker image
 if [ "${RELEASE}" = "true" ]; then
-echo "search latest relase tag"
+echo "search latest release tag"
 VERSION=$(curl -s https://registry.hub.docker.com/v1/repositories/neondatabase/neon/tags |jq -r -S '.[].name' | grep release | sed 's/release-//g' | grep -E '^[0-9]+$' | sort -n | tail -1)
 if [ -z "${VERSION}" ]; then
 echo "no any docker tags found, exiting..."

@@ -31,7 +31,7 @@ echo "found ${VERSION}"
 rm -rf neon_install postgres_install.tar.gz neon_install.tar.gz .neon_current_version
 mkdir neon_install

-# retrive binaries from docker image
+# retrieve binaries from docker image
 echo "getting binaries from docker image"
 docker pull --quiet neondatabase/neon:${TAG}
 ID=$(docker create neondatabase/neon:${TAG})
.github/workflows/benchmarking.yml (vendored)
@@ -19,7 +19,7 @@ jobs:
 bench:
 # this workflow runs on self hosteed runner
 # it's environment is quite different from usual guthub runner
-# probably the most important difference is that it doesnt start from clean workspace each time
+# probably the most important difference is that it doesn't start from clean workspace each time
 # e g if you install system packages they are not cleaned up since you install them directly in host machine
 # not a container or something
 # See documentation for more info: https://docs.github.com/en/actions/hosting-your-own-runners/about-self-hosted-runners
@@ -25,7 +25,7 @@ COPY --from=pg-build /pg/tmp_install/include/postgresql/server tmp_install/inclu
 COPY . .

 # Show build caching stats to check if it was used in the end.
-# Has to be the part of the same RUN since cachepot daemon is killed in the end of this RUN, loosing the compilation stats.
+# Has to be the part of the same RUN since cachepot daemon is killed in the end of this RUN, losing the compilation stats.
 RUN set -e \
 && sudo -E "PATH=$PATH" mold -run cargo build --release \
 && cachepot -s

@@ -4,7 +4,7 @@
 # We may also reuse it in CI to unify installation process and as a general binaries building
 # tool for production servers.
 #
-# Dynamic linking is used for librocksdb and libstdc++ bacause librocksdb-sys calls
+# Dynamic linking is used for librocksdb and libstdc++ because librocksdb-sys calls
 # bindgen with "dynamic" feature flag. This also prevents usage of dockerhub alpine-rust
 # images which are statically linked and have guards against any dlopen. I would rather
 # prefer all static binaries so we may change the way librocksdb-sys builds or wait until
@@ -115,7 +115,7 @@ Neon safekeeper LSNs. For more check [safekeeper/README_PROTO.md](/safekeeper/RE
 * `CommitLSN`: position in WAL confirmed by quorum safekeepers.
 * `RestartLSN`: position in WAL confirmed by all safekeepers.
 * `FlushLSN`: part of WAL persisted to the disk by safekeeper.
-* `VCL`: the largerst LSN for which we can guarantee availablity of all prior records.
+* `VCL`: the largerst LSN for which we can guarantee availability of all prior records.

 Neon pageserver LSNs:
 * `last_record_lsn` - the end of last processed WAL record.

@@ -6,7 +6,7 @@ Zenith supports multitenancy. One pageserver can serve multiple tenants at once.

 ### Tenants in other commands

-By default during `zenith init` new tenant is created on the pageserver. Newly created tenant's id is saved to cli config, so other commands can use it automatically if no direct arugment `--tenantid=<tenantid>` is provided. So generally tenantid more frequently appears in internal pageserver interface. Its commands take tenantid argument to distinguish to which tenant operation should be applied. CLI support creation of new tenants.
+By default during `zenith init` new tenant is created on the pageserver. Newly created tenant's id is saved to cli config, so other commands can use it automatically if no direct argument `--tenantid=<tenantid>` is provided. So generally tenantid more frequently appears in internal pageserver interface. Its commands take tenantid argument to distinguish to which tenant operation should be applied. CLI support creation of new tenants.

 Examples for cli:
@@ -111,13 +111,13 @@ Since we are storing page diffs of variable sizes there is no structural depende

 ### **Chunk metadata**

-Chunk metadata is a file lies in chunk directory that stores info about current snapshots and PITR regions. Chunck should always consult this data when merging SSTables and applying delete markers.
+Chunk metadata is a file lies in chunk directory that stores info about current snapshots and PITR regions. Chunk should always consult this data when merging SSTables and applying delete markers.

 ### **Chunk splitting**

 *(NB: following paragraph is about how to avoid page splitting)*

-When chunks hits some soft storage limit (let's say 100Gb) it should be split in half and global matadata about chunk boundaries should be updated. Here i assume that chunk split is a local operation happening on single node. Process of chink splitting should look like following:
+When chunks hits some soft storage limit (let's say 100Gb) it should be split in half and global metadata about chunk boundaries should be updated. Here i assume that chunk split is a local operation happening on single node. Process of chink splitting should look like following:

 1. Find separation key and spawn two new chunks with [lo, mid) [mid, hi) boundaries.

@@ -166,7 +166,7 @@ Multi-tenant storage makes sense even on a laptop, when you work with different

 Few databases are stored in one chunk, replicated three times

-- When database can't fit into one storage node it can occupy lots of chunks that were split while database was growing. Chunk placement on nodes is controlled by us with some automatization, but we alway may manually move chunks around the cluster.
+- When database can't fit into one storage node it can occupy lots of chunks that were split while database was growing. Chunk placement on nodes is controlled by us with some automatization, but we always may manually move chunks around the cluster.

 <img width="940" alt="Screenshot_2021-02-22_at_16 49 10" src="https://user-images.githubusercontent.com/284219/108729815-fb071e00-753b-11eb-86e0-be6703e47d82.png">

@@ -123,7 +123,7 @@ Show currently attached storages. For example:
 > zenith storage list
 NAME USED TYPE OPTIONS PATH
 local 5.1G zenith-local /opt/zenith/store/local
-local.compr 20.4G zenith-local comression=on /opt/zenith/store/local.compr
+local.compr 20.4G zenith-local compression=on /opt/zenith/store/local.compr
 zcloud 60G zenith-remote zenith.tech/stas/mystore
 s3tank 80G S3
 ```
@@ -31,7 +31,7 @@ Ideally, just one binary that incorporates all elements we need.

 #### Components:

-- **zenith-CLI** - interface for end-users. Turns commands to REST requests and handles responces to show them in a user-friendly way.
+- **zenith-CLI** - interface for end-users. Turns commands to REST requests and handles responses to show them in a user-friendly way.
 CLI proposal is here https://github.com/libzenith/rfcs/blob/003-laptop-cli.md/003-laptop-cli.md
 WIP code is here: https://github.com/libzenith/postgres/tree/main/pageserver/src/bin/cli

@@ -25,9 +25,9 @@ To make changes in the catalog you need to run compute nodes
 zenith start /home/pipedpiper/northwind:main -- starts a compute instance
 zenith start zenith://zenith.tech/northwind:main -- starts a compute instance in the cloud
 -- you can start a compute node against any hash or branch
-zenith start /home/pipedpiper/northwind:experimental --port 8008 -- start anothe compute instance (on different port)
+zenith start /home/pipedpiper/northwind:experimental --port 8008 -- start another compute instance (on different port)
 -- you can start a compute node against any hash or branch
-zenith start /home/pipedpiper/northwind:<hash> --port 8009 -- start anothe compute instance (on different port)
+zenith start /home/pipedpiper/northwind:<hash> --port 8009 -- start another compute instance (on different port)

 -- After running some DML you can run
 -- zenith status and see how there are two WAL streams one on top of
@@ -4,7 +4,7 @@ We may think about backups as snapshots in a different format (i.e plain pgdata

 Even if zenith aims to maintains durability using it's own snapshots, backups will be useful for uploading data from postges to zenith.

-So here is an attemt to design consistent CLI for diferent usage scenarios:
+So here is an attempt to design consistent CLI for different usage scenarios:

 #### 1. Start empty pageserver.
 That is what we have now.

@@ -3,7 +3,7 @@
 GetPage@LSN can be called with older LSNs, and the page server needs
 to be able to reconstruct older page versions. That's needed for
 having read-only replicas that lag behind the primary, or that are
-"anchored" at an older LSN, and internally in the page server whne you
+"anchored" at an older LSN, and internally in the page server when you
 branch at an older point in time. How do you do that?

 For now, I'm not considering incremental snapshots at all. I don't
@@ -123,7 +123,7 @@ As far as I understand Bookfile/Aversion addresses versioning and serialization
 As for exact data that should go to snapshots I think it is the following for each snapshot:

 * format version number
-* set of key/values to interpret content (e.g. is page compression enabled, is that a full or incremental snapshot, previous snapshot id, is there WAL at the end on file, etc) -- it is up to a reader to decide what to do if some keys are missing or some unknow key are present. If we add something backward compatible to the file we can keep the version number.
+* set of key/values to interpret content (e.g. is page compression enabled, is that a full or incremental snapshot, previous snapshot id, is there WAL at the end on file, etc) -- it is up to a reader to decide what to do if some keys are missing or some unknown key are present. If we add something backward compatible to the file we can keep the version number.
 * array of [BuffTag, corresponding offset in file] for pages -- IIUC that is analogous to ToC in Bookfile
 * array of [(BuffTag, LSN), corresponding offset in file] for the WAL records
 * pages, one by one
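The list in this hunk maps naturally onto a container layout. A rough Rust sketch with made-up names and a stub `BuffTag`, not taken from the actual implementation:

```rust
/// Stub standing in for the real buffer tag (relation plus block number).
struct BuffTag {
    rel: u32,
    blkno: u32,
}

/// Illustrative snapshot file layout following the list above.
struct SnapshotFile {
    /// format version number
    format_version: u32,
    /// key/values that tell the reader how to interpret the content
    /// (compression, full vs incremental, previous snapshot id, ...)
    params: Vec<(String, String)>,
    /// ToC-like array: where each page lives in the file
    page_offsets: Vec<(BuffTag, u64)>,
    /// same for WAL records, keyed by (BuffTag, LSN)
    wal_offsets: Vec<((BuffTag, u64), u64)>,
    /// the pages themselves, one by one
    pages: Vec<Vec<u8>>,
}
```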
@@ -13,7 +13,7 @@ https://github.com/zenithdb/rfcs/pull/3/files

 This makes our biggest our difference from Raft. In Raft, every log record is
-stamped with term in which it was generated; while we essentialy store in
+stamped with term in which it was generated; while we essentially store in
 `epoch` only the term of the highest record on this safekeeper -- when we know
 it -- because during recovery generally we don't, and `epoch` is bumped directly
 to the term of the proposer who performs the recovery when it is finished. It is

@@ -124,7 +124,7 @@ Each storage node can subscribe to the relevant sets of keys and maintain a loca

 ### Safekeeper address discovery

-During the startup safekeeper should publish the address he is listening on as the part of `{"sk_#{sk_id}" => ip_address}`. Then the pageserver can resolve `sk_#{sk_id}` to the actual address. This way it would work both locally and in the cloud setup. Safekeeper should have `--advertised-address` CLI option so that we can listen on e.g. 0.0.0.0 but advertize something more useful.
+During the startup safekeeper should publish the address he is listening on as the part of `{"sk_#{sk_id}" => ip_address}`. Then the pageserver can resolve `sk_#{sk_id}` to the actual address. This way it would work both locally and in the cloud setup. Safekeeper should have `--advertised-address` CLI option so that we can listen on e.g. 0.0.0.0 but advertise something more useful.

 ### Safekeeper behavior

@@ -195,7 +195,7 @@ sequenceDiagram
 PS1->>SK1: start replication
 ```

-#### Behavour of services during typical operations
+#### Behaviour of services during typical operations

 ```mermaid
 sequenceDiagram

@@ -250,7 +250,7 @@ sequenceDiagram
 PS2->>M: Register downloaded timeline
 PS2->>M: Get safekeepers for timeline, subscribe to changes
 PS2->>SK1: Start replication to catch up
-note over O: PS2 catched up, time to switch compute
+note over O: PS2 caught up, time to switch compute
 O->>C: Restart compute with new pageserver url in config
 note over C: Wal push is restarted
 loop request pages
@@ -49,7 +49,7 @@ topics.

 RFC lifecycle:

-- Should be submitted in a pull request with and full RFC text in a commited markdown file and copy of the Summary and Motivation sections also included in the PR body.
+- Should be submitted in a pull request with and full RFC text in a committed markdown file and copy of the Summary and Motivation sections also included in the PR body.
 - RFC should be published for review before most of the actual code is written. This isn’t a strict rule, don’t hesitate to experiment and build a POC in parallel with writing an RFC.
 - Add labels to the PR in the same manner as you do Issues. Example TBD
 - Request the review from your peers. Reviewing the RFCs from your peers is a priority, same as reviewing the actual code.

@@ -105,7 +105,7 @@ Interval at which garbage collection is triggered. Default is 100 s.

 #### image_creation_threshold

-L0 delta layer threshold for L1 iamge layer creation. Default is 3.
+L0 delta layer threshold for L1 image layer creation. Default is 3.

 #### pitr_interval

@@ -10,7 +10,7 @@ Intended to be used in integration tests and in CLI tools for local installation

 `/docs`:

-Documentaion of the Zenith features and concepts.
+Documentation of the Zenith features and concepts.
 Now it is mostly dev documentation.

 `/monitoring`:

@@ -92,7 +92,7 @@ A single virtual environment with all dependencies is described in the single `P

 ### Prerequisites
 - Install Python 3.9 (the minimal supported version) or greater.
-- Our setup with poetry should work with newer python versions too. So feel free to open an issue with a `c/test-runner` label if something doesnt work as expected.
+- Our setup with poetry should work with newer python versions too. So feel free to open an issue with a `c/test-runner` label if something doesn't work as expected.
 - If you have some trouble with other version you can resolve it by installing Python 3.9 separately, via [pyenv](https://github.com/pyenv/pyenv) or via system package manager e.g.:
 ```bash
 # In Ubuntu
@@ -73,7 +73,7 @@ impl WalStreamDecoder {
 /// Returns one of the following:
 /// Ok((Lsn, Bytes)): a tuple containing the LSN of next record, and the record itself
 /// Ok(None): there is not enough data in the input buffer. Feed more by calling the `feed_bytes` function
-/// Err(WalDecodeError): an error occured while decoding, meaning the input was invalid.
+/// Err(WalDecodeError): an error occurred while decoding, meaning the input was invalid.
 ///
 pub fn poll_decode(&mut self) -> Result<Option<(Lsn, Bytes)>, WalDecodeError> {
 let recordbuf;
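A typical caller loop for the contract documented in this hunk could look like the sketch below. It assumes the `WalStreamDecoder`, `Lsn`, `Bytes` and `WalDecodeError` types from the surrounding crate and a `feed_bytes` method as named in the doc comment, so it is an illustration rather than standalone runnable code:

```rust
// Sketch only: drain all complete records after feeding one chunk of WAL.
// Assumed signature: feed_bytes(&mut self, bytes: &[u8]).
fn drain_chunk(
    decoder: &mut WalStreamDecoder,
    chunk: &[u8],
) -> Result<Vec<(Lsn, Bytes)>, WalDecodeError> {
    decoder.feed_bytes(chunk);
    let mut records = Vec::new();
    // Ok(Some(..)) yields a decoded record, Ok(None) means "feed more bytes".
    while let Some((lsn, rec)) = decoder.poll_decode()? {
        records.push((lsn, rec));
    }
    Ok(records)
}
```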
@@ -531,7 +531,7 @@ impl CheckPoint {
 ///
 /// Returns 'true' if the XID was updated.
 pub fn update_next_xid(&mut self, xid: u32) -> bool {
-// nextXid should nw greate than any XID in WAL, so increment provided XID and check for wraparround.
+// nextXid should nw greater than any XID in WAL, so increment provided XID and check for wraparround.
 let mut new_xid = std::cmp::max(xid + 1, pg_constants::FIRST_NORMAL_TRANSACTION_ID);
 // To reduce number of metadata checkpoints, we forward align XID on XID_CHECKPOINT_INTERVAL.
 // XID_CHECKPOINT_INTERVAL should not be larger than BLCKSZ*CLOG_XACTS_PER_BYTE
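The forward alignment mentioned in the last two comments can be illustrated with a small, self-contained sketch; the constants and the exact rounding below are illustrative, not the values or code used by the crate:

```rust
// Illustrative constants, not the real ones.
const FIRST_NORMAL_TRANSACTION_ID: u32 = 3;
const XID_CHECKPOINT_INTERVAL: u32 = 1024;

/// Advance past the given XID, skip special XIDs, and round up to the next
/// XID_CHECKPOINT_INTERVAL boundary so fewer metadata checkpoints are needed.
fn aligned_next_xid(xid: u32) -> u32 {
    let new_xid = std::cmp::max(xid.wrapping_add(1), FIRST_NORMAL_TRANSACTION_ID);
    new_xid.next_multiple_of(XID_CHECKPOINT_INTERVAL)
}

fn main() {
    assert_eq!(aligned_next_xid(10), 1024);
    assert_eq!(aligned_next_xid(1024), 2048);
}
```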
@@ -71,7 +71,7 @@ impl From<bincode::Error> for SerializeError {
 /// - Fixed integer encoding (i.e. 1u32 is 00000001 not 01)
 ///
 /// Does not allow trailing bytes in deserialization. If this is desired, you
-/// may set [`Options::allow_trailing_bytes`] to explicitly accomodate this.
+/// may set [`Options::allow_trailing_bytes`] to explicitly accommodate this.
 pub fn be_coder() -> impl Options {
 bincode::DefaultOptions::new()
 .with_big_endian()

@@ -85,7 +85,7 @@ pub fn be_coder() -> impl Options {
 /// - Fixed integer encoding (i.e. 1u32 is 00000001 not 01)
 ///
 /// Does not allow trailing bytes in deserialization. If this is desired, you
-/// may set [`Options::allow_trailing_bytes`] to explicitly accomodate this.
+/// may set [`Options::allow_trailing_bytes`] to explicitly accommodate this.
 pub fn le_coder() -> impl Options {
 bincode::DefaultOptions::new()
 .with_little_endian()
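For reference, a self-contained sketch of how such a coder is built and used via bincode's `Options` trait. The builder calls after `.with_big_endian()` are an assumption based on the doc comments above (fixed-width integers, no trailing bytes), not a copy of the real function:

```rust
use bincode::Options;

// Assumed continuation of the builder shown in the hunk: big-endian,
// fixed-width integers, trailing bytes rejected on deserialization.
fn be_coder() -> impl Options {
    bincode::DefaultOptions::new()
        .with_big_endian()
        .with_fixint_encoding()
        .reject_trailing_bytes()
}

fn main() -> Result<(), bincode::Error> {
    let bytes = be_coder().serialize(&1u32)?;
    assert_eq!(bytes, [0, 0, 0, 1]); // fixed-int: 1u32 is 00 00 00 01, not 01
    let n: u32 = be_coder().deserialize(&bytes)?;
    assert_eq!(n, 1);
    Ok(())
}
```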
@@ -64,7 +64,7 @@ pub mod signals;
 /// One thing to note is that .git is not available in docker (and it is bad to include it there).
 /// So everything becides docker build is covered by git_version crate, and docker uses a `GIT_VERSION` argument to get the value required.
 /// It takes variable from build process env and puts it to the rustc env. And then we can retrieve it here by using env! macro.
-/// Git version received from environment variable used as a fallback in git_version invokation.
+/// Git version received from environment variable used as a fallback in git_version invocation.
 /// And to avoid running buildscript every recompilation, we use rerun-if-env-changed option.
 /// So the build script will be run only when GIT_VERSION envvar has changed.
 ///

@@ -475,7 +475,7 @@ impl PostgresBackend {
 self.write_message(&BeMessage::ErrorResponse(&e.to_string()))?;
 }
 // NOTE there is no ReadyForQuery message. This handler is used
-// for basebackup and it uses CopyOut which doesnt require
+// for basebackup and it uses CopyOut which doesn't require
 // ReadyForQuery message and backend just switches back to
 // processing mode after sending CopyDone or ErrorResponse.
 }

@@ -464,7 +464,7 @@ impl BeParameterStatusMessage<'static> {
 }
 }

-// One row desciption in RowDescription packet.
+// One row description in RowDescription packet.
 #[derive(Debug)]
 pub struct RowDescriptor<'a> {
 pub name: &'a [u8],

@@ -613,7 +613,7 @@ fn cstr_to_str(b: &Bytes) -> Result<&str> {
 impl<'a> BeMessage<'a> {
 /// Write message to the given buf.
 // Unlike the reading side, we use BytesMut
-// here as msg len preceeds its body and it is handy to write it down first
+// here as msg len precedes its body and it is handy to write it down first
 // and then fill the length. With Write we would have to either calc it
 // manually or have one more buffer.
 pub fn write(buf: &mut BytesMut, message: &BeMessage) -> io::Result<()> {

@@ -1047,7 +1047,7 @@ mod tests {
 #[test]
 fn test_zenithfeedback_serialization() {
 let mut zf = ZenithFeedback::empty();
-// Fill zf wih some values
+// Fill zf with some values
 zf.current_timeline_size = 12345678;
 // Set rounded time to be able to compare it with deserialized value,
 // because it is rounded up to microseconds during serialization.

@@ -1062,7 +1062,7 @@ mod tests {
 #[test]
 fn test_zenithfeedback_unknown_key() {
 let mut zf = ZenithFeedback::empty();
-// Fill zf wih some values
+// Fill zf with some values
 zf.current_timeline_size = 12345678;
 // Set rounded time to be able to compare it with deserialized value,
 // because it is rounded up to microseconds during serialization.
@@ -114,7 +114,7 @@ pub struct PageServerConf {
 pub default_tenant_conf: TenantConf,

 /// A prefix to add in etcd brokers before every key.
-/// Can be used for isolating different pageserver groups withing the same etcd cluster.
+/// Can be used for isolating different pageserver groups within the same etcd cluster.
 pub broker_etcd_prefix: String,

 /// Etcd broker endpoints to connect to.
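The isolation this field provides amounts to namespacing broker keys. A minimal, hypothetical illustration (the key layout is invented for the example, not the real broker schema):

```rust
// Hypothetical: prepend the configured prefix to every key this pageserver
// group publishes, so two groups sharing one etcd cluster never collide.
fn prefixed_key(broker_etcd_prefix: &str, key: &str) -> String {
    format!("{broker_etcd_prefix}/{key}")
}

fn main() {
    assert_eq!(prefixed_key("group-a", "sk_1"), "group-a/sk_1");
}
```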
@@ -15,7 +15,7 @@ pub struct KeySpace {
 impl KeySpace {
 ///
 /// Partition a key space into roughly chunks of roughly 'target_size' bytes
-/// in each patition.
+/// in each partition.
 ///
 pub fn partition(&self, target_size: u64) -> KeyPartitioning {
 // Assume that each value is 8k in size.
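A self-contained sketch of the partitioning idea, using a plain sorted `Vec<u64>` in place of the crate's `KeySpace`/`KeyPartitioning` types and the same 8 KiB per-value assumption the comment mentions:

```rust
/// Split a sorted list of keys into partitions of roughly `target_size` bytes,
/// assuming each value is 8 KiB (as the comment in the hunk above assumes).
fn partition(keys: &[u64], target_size: u64) -> Vec<Vec<u64>> {
    const ASSUMED_VALUE_SIZE: u64 = 8192;
    let keys_per_partition = (target_size / ASSUMED_VALUE_SIZE).max(1) as usize;
    keys.chunks(keys_per_partition)
        .map(|chunk| chunk.to_vec())
        .collect()
}

fn main() {
    let keys: Vec<u64> = (0..10).collect();
    // 32 KiB target and 8 KiB per value -> 4 keys per partition, 3 partitions.
    assert_eq!(partition(&keys, 32 * 1024).len(), 3);
}
```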
@@ -823,7 +823,7 @@ impl LayeredRepository {
 for (timeline_id, timeline_entry) in timelines.iter() {
 timeline_ids.push(*timeline_id);

-// This is unresolved question for now, how to do gc in presense of remote timelines
+// This is unresolved question for now, how to do gc in presence of remote timelines
 // especially when this is combined with branching.
 // Somewhat related: https://github.com/zenithdb/zenith/issues/999
 if let Some(ancestor_timeline_id) = &timeline_entry.ancestor_timeline_id() {

@@ -1831,7 +1831,7 @@ impl LayeredTimeline {
 // collect any page versions that are no longer needed because
 // of the new image layers we created in step 2.
 //
-// TODO: This hight level strategy hasn't been implemented yet.
+// TODO: This high level strategy hasn't been implemented yet.
 // Below are functions compact_level0() and create_image_layers()
 // but they are a bit ad hoc and don't quite work like it's explained
 // above. Rewrite it.

@@ -2268,7 +2268,7 @@ impl LayeredTimeline {
 }

 // 3. Is it needed by a child branch?
-// NOTE With that wee would keep data that
+// NOTE With that we would keep data that
 // might be referenced by child branches forever.
 // We can track this in child timeline GC and delete parent layers when
 // they are no longer needed. This might be complicated with long inheritance chains.

@@ -7,7 +7,7 @@
 //! - Fixed-width keys
 //! - Fixed-width values (VALUE_SZ)
 //! - The tree is created in a bulk operation. Insert/deletion after creation
-//! is not suppported
+//! is not supported
 //! - page-oriented
 //!
 //! TODO:

@@ -634,7 +634,7 @@ impl PageServerHandler {
 return Ok(());
 }
 // auth is some, just checked above, when auth is some
-// then claims are always present because of checks during connetion init
+// then claims are always present because of checks during connection init
 // so this expect won't trigger
 let claims = self
 .claims

@@ -521,7 +521,7 @@ pub struct DatadirModification<'a, R: Repository> {

 lsn: Lsn,

-// The modifications are not applied directly to the underyling key-value store.
+// The modifications are not applied directly to the underlying key-value store.
 // The put-functions add the modifications here, and they are flushed to the
 // underlying key-value store by the 'finish' function.
 pending_updates: HashMap<Key, Value>,
@@ -1,4 +1,4 @@
-//! Timeline synchrnonization logic to delete a bulk of timeline's remote files from the remote storage.
+//! Timeline synchronization logic to delete a bulk of timeline's remote files from the remote storage.

 use anyhow::Context;
 use futures::stream::{FuturesUnordered, StreamExt};

@@ -19,7 +19,7 @@ use utils::{
 #[derive(Debug, Clone, Copy, Hash, PartialEq, Eq, Ord, PartialOrd, Serialize, Deserialize)]
 /// Key used in the Repository kv-store.
 ///
-/// The Repository treates this as an opaque struct, but see the code in pgdatadir_mapping.rs
+/// The Repository treats this as an opaque struct, but see the code in pgdatadir_mapping.rs
 /// for what we actually store in these fields.
 pub struct Key {
 pub field1: u8,

@@ -210,7 +210,7 @@ pub trait Repository: Send + Sync {
 ) -> Result<()>;

 /// Get Timeline handle for given zenith timeline ID.
-/// This function is idempotent. It doesnt change internal state in any way.
+/// This function is idempotent. It doesn't change internal state in any way.
 fn get_timeline(&self, timelineid: ZTimelineId) -> Option<RepositoryTimeline<Self::Timeline>>;

 /// Get Timeline handle for locally available timeline. Load it into memory if it is not loaded.

@@ -345,11 +345,11 @@ pub trait Timeline: Send + Sync {

 /// Look up given page version.
 ///
-/// NOTE: It is considerd an error to 'get' a key that doesn't exist. The abstraction
+/// NOTE: It is considered an error to 'get' a key that doesn't exist. The abstraction
 /// above this needs to store suitable metadata to track what data exists with
 /// what keys, in separate metadata entries. If a non-existent key is requested,
-/// the Repository implementation may incorrectly return a value from an ancestore
-/// branch, for exampel, or waste a lot of cycles chasing the non-existing key.
+/// the Repository implementation may incorrectly return a value from an ancestor
+/// branch, for example, or waste a lot of cycles chasing the non-existing key.
 ///
 fn get(&self, key: Key, lsn: Lsn) -> Result<Bytes>;
@@ -69,7 +69,7 @@
 //! Yet instead of keeping the `metadata` file remotely, we wrap it with more data in [`IndexPart`], containing the list of remote files.
 //! This file gets read to populate the cache, if the remote timeline data is missing from it and gets updated after every successful download.
 //! This way, we optimize S3 storage access by not running the `S3 list` command that could be expencive and slow: knowing both [`ZTenantId`] and [`ZTimelineId`],
-//! we can always reconstruct the path to the timeline, use this to get the same path on the remote storage and retrive its shard contents, if needed, same as any layer files.
+//! we can always reconstruct the path to the timeline, use this to get the same path on the remote storage and retrieve its shard contents, if needed, same as any layer files.
 //!
 //! By default, pageserver reads the remote storage index data only for timelines located locally, to synchronize those, if needed.
 //! Bulk index data download happens only initially, on pageserver startup. The rest of the remote storage stays unknown to pageserver and loaded on demand only,
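The "reconstruct the path" step boils down to deriving a deterministic location from the two IDs. A hypothetical sketch; the directory layout shown is illustrative, not necessarily the pageserver's real one:

```rust
// Hypothetical: the same (tenant, timeline) pair always maps to the same
// relative path, both in the local workdir and on the remote storage.
fn timeline_path(root: &str, tenant_id: &str, timeline_id: &str) -> String {
    format!("{root}/tenants/{tenant_id}/timelines/{timeline_id}")
}

fn main() {
    let local = timeline_path("/data/pageserver", "some_tenant_id", "some_timeline_id");
    let remote = timeline_path("s3://bucket/prefix", "some_tenant_id", "some_timeline_id");
    println!("{local}\n{remote}");
}
```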
@@ -96,7 +96,7 @@
 //! timeline uploads and downloads can happen concurrently, in no particular order due to incremental nature of the timeline layers.
 //! Deletion happens only after a successful upload only, otherwise the compaction output might make the timeline inconsistent until both tasks are fully processed without errors.
 //! Upload and download update the remote data (inmemory index and S3 json index part file) only after every layer is successfully synchronized, while the deletion task
-//! does otherwise: it requires to have the remote data updated first succesfully: blob files will be invisible to pageserver this way.
+//! does otherwise: it requires to have the remote data updated first successfully: blob files will be invisible to pageserver this way.
 //!
 //! During the loop startup, an initial [`RemoteTimelineIndex`] state is constructed via downloading and merging the index data for all timelines,
 //! present locally.

@@ -440,7 +440,7 @@ fn collect_timeline_files(
 // initial collect will fail because there is no metadata.
 // We either need to start download if we see empty dir after restart or attach caller should
 // be aware of that and retry attach if awaits_download for timeline switched from true to false
-// but timelinne didnt appear locally.
+// but timelinne didn't appear locally.
 // Check what happens with remote index in that case.
 let timeline_metadata_path = match timeline_metadata_path {
 Some(path) => path,
@@ -1007,7 +1007,7 @@ where
 // in local (implicitly, via Lsn values and related memory state) or remote (explicitly via remote layer file paths) metadata.
 // When operating in a system without tasks failing over the error threshold,
 // current batching and task processing systems aim to update the layer set and metadata files (remote and local),
-// without "loosing" such layer files.
+// without "losing" such layer files.
 let (upload_result, status_update) = tokio::join!(
 async {
 if let Some(upload_data) = upload_data {

@@ -1162,7 +1162,7 @@ where
 return Some(TimelineSyncStatusUpdate::Downloaded);
 }
 Err(e) => {
-error!("Timeline {sync_id} was expected to be in the remote index after a sucessful download, but it's absent: {e:?}");
+error!("Timeline {sync_id} was expected to be in the remote index after a successful download, but it's absent: {e:?}");
 }
 },
 Err(e) => {
@@ -1549,10 +1549,10 @@ fn compare_local_and_remote_timeline(
 let remote_files = remote_entry.stored_files();

 // TODO probably here we need more sophisticated logic,
-// if more data is available remotely can we just download whats there?
+// if more data is available remotely can we just download what's there?
 // without trying to upload something. It may be tricky, needs further investigation.
 // For now looks strange that we can request upload
-// and dowload for the same timeline simultaneously.
+// and download for the same timeline simultaneously.
 // (upload needs to be only for previously unsynced files, not whole timeline dir).
 // If one of the tasks fails they will be reordered in the queue which can lead
 // to timeline being stuck in evicted state

@@ -1565,7 +1565,7 @@ fn compare_local_and_remote_timeline(
 }),
 ));
 (LocalTimelineInitStatus::NeedsSync, true)
-// we do not need to manupulate with remote consistent lsn here
+// we do not need to manipulate with remote consistent lsn here
 // because it will be updated when sync will be completed
 } else {
 (LocalTimelineInitStatus::LocallyComplete, false)
@@ -1,4 +1,4 @@
-//! Timeline synchrnonization logic to delete a bulk of timeline's remote files from the remote storage.
+//! Timeline synchronization logic to delete a bulk of timeline's remote files from the remote storage.

 use anyhow::Context;
 use futures::stream::{FuturesUnordered, StreamExt};

@@ -1,4 +1,4 @@
-//! Timeline synchrnonization logic to fetch the layer files from remote storage into pageserver's local directory.
+//! Timeline synchronization logic to fetch the layer files from remote storage into pageserver's local directory.

 use std::{collections::HashSet, fmt::Debug, path::Path};
@@ -273,7 +273,7 @@ mod tests {
 };

 let index_part = IndexPart::from_remote_timeline(&timeline_path, remote_timeline.clone())
-.expect("Correct remote timeline should be convertable to index part");
+.expect("Correct remote timeline should be convertible to index part");

 assert_eq!(
 index_part.timeline_layers.iter().collect::<BTreeSet<_>>(),

@@ -305,7 +305,7 @@ mod tests {
 );

 let restored_timeline = RemoteTimeline::from_index_part(&timeline_path, index_part)
-.expect("Correct index part should be convertable to remote timeline");
+.expect("Correct index part should be convertible to remote timeline");

 let original_metadata = &remote_timeline.metadata;
 let restored_metadata = &restored_timeline.metadata;

@@ -391,7 +391,7 @@ mod tests {
 assert_eq!(
 upload.metadata,
 Some(metadata),
-"Successful upload should not chage its metadata"
+"Successful upload should not change its metadata"
 );

 let storage_files = storage.list().await?;
@@ -336,7 +336,7 @@ impl VirtualFile {
 // library RwLock doesn't allow downgrading without releasing the lock,
 // and that doesn't seem worth the trouble.
 //
-// XXX: `parking_lot::RwLock` can enable such downgrades, yet its implemenation is fair and
+// XXX: `parking_lot::RwLock` can enable such downgrades, yet its implementation is fair and
 // may deadlock on subsequent read calls.
 // Simply replacing all `RwLock` in project causes deadlocks, so use it sparingly.
 let result = STORAGE_IO_TIME
@@ -12,7 +12,7 @@
 //! The zenith Repository can store page versions in two formats: as
 //! page images, or a WAL records. WalIngest::ingest_record() extracts
 //! page images out of some WAL records, but most it stores as WAL
-//! records. If a WAL record modifies multple pages, WalIngest
+//! records. If a WAL record modifies multiple pages, WalIngest
 //! will call Repository::put_wal_record or put_page_image functions
 //! separately for each modified page.
 //!
@@ -122,7 +122,7 @@ lazy_static! {

 ///
 /// This is the real implementation that uses a Postgres process to
-/// perform WAL replay. Only one thread can use the processs at a time,
+/// perform WAL replay. Only one thread can use the process at a time,
 /// that is controlled by the Mutex. In the future, we might want to
 /// launch a pool of processes to allow concurrent replay of multiple
 /// records.

@@ -134,7 +134,7 @@ pub struct PostgresRedoManager {
 process: Mutex<Option<PostgresRedoProcess>>,
 }

-/// Can this request be served by zenith redo funcitons
+/// Can this request be served by zenith redo functions
 /// or we need to pass it to wal-redo postgres process?
 fn can_apply_in_zenith(rec: &ZenithWalRecord) -> bool {
 // Currently, we don't have bespoken Rust code to replay any
@@ -95,7 +95,7 @@ async fn handle_client(

 /// Establish a (most probably, secure) connection with the client.
 /// For better testing experience, `stream` can be any object satisfying the traits.
-/// It's easier to work with owned `stream` here as we need to updgrade it to TLS;
+/// It's easier to work with owned `stream` here as we need to upgrade it to TLS;
 /// we also take an extra care of propagating only the select handshake errors to client.
 async fn handshake<S: AsyncRead + AsyncWrite + Unpin>(
 stream: S,
@@ -75,7 +75,7 @@ safekeepers. The Paxos and crash recovery algorithm ensures that only
 one primary node can be actively streaming WAL to the quorum of
 safekeepers.

-See README_PROTO.md for a more detailed desription of the consensus
+See README_PROTO.md for a more detailed description of the consensus
 protocol. spec/ contains TLA+ specification of it.

 # Q&A

@@ -143,7 +143,7 @@ Restart of PostgreSQL initiates new round of voting and switching new epoch.
 ## Limitations
 Right now message queue is maintained in main memory and is not spilled to the disk.
 It can cause memory overflow in case of presence of lagging safekeepers.
-It is assumed that in case of loosing local data by some safekeepers, it should be recovered using some external mechanism.
+It is assumed that in case of losing local data by some safekeepers, it should be recovered using some external mechanism.

 ## Glossary

@@ -153,7 +153,7 @@ It is assumed that in case of loosing local data by some safekeepers, it should
 * `NodeID`: pair (term,UUID)
 * `Pager`: Neon component restoring pages from WAL stream
 * `Replica`: read-only computatio node
-* `VCL`: the largerst LSN for which we can guarantee availablity of all prior records.
+* `VCL`: the largerst LSN for which we can guarantee availability of all prior records.

 ## Algorithm
@@ -88,7 +88,7 @@ TypeOk ==
 \* in campaign proposer sends RequestVote and waits for acks;
 \* in leader he is elected
 /\ prop_state[p].state \in {"campaign", "leader"}
-\* 0..max_term should be actually Nat in the unbouned model, but TLC won't
+\* 0..max_term should be actually Nat in the unbounded model, but TLC won't
 \* swallow it
 /\ prop_state[p].term \in 0..max_term
 \* votes received
@@ -100,7 +100,7 @@ fn main() -> anyhow::Result<()> {
 Arg::new("dump-control-file")
 .long("dump-control-file")
 .takes_value(true)
-.help("Dump control file at path specifed by this argument and exit"),
+.help("Dump control file at path specified by this argument and exit"),
 )
 .arg(
 Arg::new("id").long("id").takes_value(true).help("safekeeper node id: integer")
@@ -39,7 +39,7 @@ async fn request_callback(
 }
 });

-// use Config parsing because SockAddr parsing doesnt allow to use host names instead of ip addresses
+// use Config parsing because SockAddr parsing doesn't allow to use host names instead of ip addresses
 let me_connstr = format!("postgresql://no_user@{}/no_db", listen_pg_addr_str);
 let me_conf: postgres::config::Config = me_connstr.parse().unwrap();
 let (host, port) = connection_host_port(&me_conf);
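The point of parsing a connection string with `postgres::config::Config` is that it accepts host names where plain socket-address parsing would not. A small sketch of reading host and port back out of a parsed config; the real `connection_host_port` helper presumably does something similar, but this version is illustrative only:

```rust
use postgres::config::{Config, Host};

// Parse a connection string (host names allowed) and return its first
// host/port pair, if any.
fn host_port(connstr: &str) -> Option<(String, u16)> {
    let conf: Config = connstr.parse().ok()?;
    let host = match conf.get_hosts().first()? {
        Host::Tcp(host) => host.clone(),
        // Unix-socket hosts are not interesting for this example.
        _ => return None,
    };
    Some((host, *conf.get_ports().first()?))
}

fn main() {
    let hp = host_port("postgresql://no_user@safekeeper-1:5454/no_db");
    assert_eq!(hp, Some(("safekeeper-1".to_string(), 5454)));
}
```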
@@ -27,7 +27,7 @@ struct SafeKeeperStateV1 {
 acceptor_state: AcceptorStateV1,
 /// information about server
 server: ServerInfoV2,
-/// Unique id of the last *elected* proposer we dealed with. Not needed
+/// Unique id of the last *elected* proposer we dealt with. Not needed
 /// for correctness, exists for monitoring purposes.
 proposer_uuid: PgUuid,
 /// part of WAL acknowledged by quorum and available locally

@@ -57,7 +57,7 @@ pub struct SafeKeeperStateV2 {
 pub acceptor_state: AcceptorState,
 /// information about server
 pub server: ServerInfoV2,
-/// Unique id of the last *elected* proposer we dealed with. Not needed
+/// Unique id of the last *elected* proposer we dealt with. Not needed
 /// for correctness, exists for monitoring purposes.
 pub proposer_uuid: PgUuid,
 /// part of WAL acknowledged by quorum and available locally

@@ -89,7 +89,7 @@ pub struct SafeKeeperStateV3 {
 pub acceptor_state: AcceptorState,
 /// information about server
 pub server: ServerInfoV3,
-/// Unique id of the last *elected* proposer we dealed with. Not needed
+/// Unique id of the last *elected* proposer we dealt with. Not needed
 /// for correctness, exists for monitoring purposes.
 #[serde(with = "hex")]
 pub proposer_uuid: PgUuid,

@@ -114,7 +114,7 @@ pub struct SafeKeeperStateV4 {
 pub acceptor_state: AcceptorState,
 /// information about server
 pub server: ServerInfo,
-/// Unique id of the last *elected* proposer we dealed with. Not needed
+/// Unique id of the last *elected* proposer we dealt with. Not needed
 /// for correctness, exists for monitoring purposes.
 #[serde(with = "hex")]
 pub proposer_uuid: PgUuid,

@@ -180,7 +180,7 @@ pub struct SafeKeeperState {
 pub acceptor_state: AcceptorState,
 /// information about server
 pub server: ServerInfo,
-/// Unique id of the last *elected* proposer we dealed with. Not needed
+/// Unique id of the last *elected* proposer we dealt with. Not needed
 /// for correctness, exists for monitoring purposes.
 #[serde(with = "hex")]
 pub proposer_uuid: PgUuid,
@@ -759,7 +759,7 @@ where
 self.inmem.commit_lsn = max(self.inmem.commit_lsn, state.timeline_start_lsn);
 self.metrics.commit_lsn.set(self.inmem.commit_lsn.0 as f64);

-// Initalizing backup_lsn is useful to avoid making backup think it should upload 0 segment.
+// Initializing backup_lsn is useful to avoid making backup think it should upload 0 segment.
 self.inmem.backup_lsn = max(self.inmem.backup_lsn, state.timeline_start_lsn);

 state.acceptor_state.term_history = msg.term_history.clone();
@@ -257,7 +257,7 @@ impl WalBackupTask {
 // Optimization idea for later:
 // Avoid checking election leader every time by returning current lease grant expiration time
 // Re-check leadership only after expiration time,
-// such approach woud reduce overhead on write-intensive workloads
+// such approach would reduce overhead on write-intensive workloads

 match l
 .check_am_i(

@@ -389,7 +389,7 @@ async fn backup_object(source_file: &Path, size: usize) -> Result<()> {

 let file = File::open(&source_file).await?;

-// Storage is initialized by launcher at ths point.
+// Storage is initialized by launcher at this point.
 match storage.as_ref().unwrap() {
 GenericRemoteStorage::Local(local_storage) => {
 let destination = local_storage.remote_object_id(source_file)?;
@@ -126,7 +126,7 @@ pub struct PhysicalStorage {
 conf: SafeKeeperConf,

 // fields below are filled upon initialization
-/// None if unitialized, Some(usize) if storage is initialized.
+/// None if uninitialized, Some(usize) if storage is initialized.
 wal_seg_size: Option<usize>,

 /// Written to disk, but possibly still in the cache and not fully persisted.

@@ -456,7 +456,7 @@ impl Storage for PhysicalStorage {
 segno += 1;
 let (wal_file_path, wal_file_partial_path) =
 wal_file_paths(&self.timeline_dir, segno, wal_seg_size)?;
-// TODO: better use fs::try_exists which is currenty avaialble only in nightly build
+// TODO: better use fs::try_exists which is currently available only in nightly build
 if wal_file_path.exists() {
 fs::remove_file(&wal_file_path)?;
 } else if wal_file_partial_path.exists() {
@@ -14,7 +14,7 @@ def test_clog_truncate(zenith_simple_env: ZenithEnv):
 env = zenith_simple_env
 env.zenith_cli.create_branch('test_clog_truncate', 'empty')

-# set agressive autovacuum to make sure that truncation will happen
+# set aggressive autovacuum to make sure that truncation will happen
 config = [
 'autovacuum_max_workers=10',
 'autovacuum_vacuum_threshold=0',

@@ -55,7 +55,7 @@ def test_pitr_gc(zenith_env_builder: ZenithEnvBuilder):
 with closing(env.pageserver.connect()) as psconn:
 with psconn.cursor(cursor_factory=psycopg2.extras.DictCursor) as pscur:
 pscur.execute(f"compact {env.initial_tenant.hex} {timeline}")
-# perform agressive GC. Data still should be kept because of the PITR setting.
+# perform aggressive GC. Data still should be kept because of the PITR setting.
 pscur.execute(f"do_gc {env.initial_tenant.hex} {timeline} 0")
 row = pscur.fetchone()
 print_gc_result(row)
@@ -116,7 +116,7 @@ def test_remote_storage_backup_and_restore(zenith_env_builder: ZenithEnvBuilder,
 detail = client.timeline_detail(UUID(tenant_id), UUID(timeline_id))
 assert detail['local'] is not None
 log.info("Timeline detail after attach completed: %s", detail)
-assert lsn_from_hex(detail['local']['last_record_lsn']) >= current_lsn, 'current db Lsn should shoud not be less than the one stored on remote storage'
+assert lsn_from_hex(detail['local']['last_record_lsn']) >= current_lsn, 'current db Lsn should should not be less than the one stored on remote storage'
 assert not detail['remote']['awaits_download']

 pg = env.postgres.create_start('main')
@@ -92,7 +92,7 @@ def load(pg: Postgres, stop_event: threading.Event, load_ok_event: threading.Eve
 # if we recovered after failure verify that we have correct number of rows
 log.info("recovering at %s", inserted_ctr)
 cur.execute("SELECT count(*) FROM load")
-# it seems that sometimes transaction gets commited before we can acknowledge
+# it seems that sometimes transaction gets committed before we can acknowledge
 # the result, so sometimes selected value is larger by one than we expect
 assert cur.fetchone()[0] - inserted_ctr <= 1
 log.info("successfully recovered %s", inserted_ctr)

@@ -28,7 +28,7 @@ def test_vm_bit_clear(zenith_simple_env: ZenithEnv):
 cur.execute('INSERT INTO vmtest_update SELECT g FROM generate_series(1, 1000) g')
 cur.execute('VACUUM FREEZE vmtest_update')

-# DELETE and UDPATE the rows.
+# DELETE and UPDATE the rows.
 cur.execute('DELETE FROM vmtest_delete WHERE id = 1')
 cur.execute('UPDATE vmtest_update SET id = 5000 WHERE id = 1')
@@ -905,8 +905,8 @@ def test_delete_force(zenith_env_builder: ZenithEnvBuilder):

 # Create two tenants: one will be deleted, other should be preserved.
 tenant_id = env.initial_tenant.hex
-timeline_id_1 = env.zenith_cli.create_branch('br1').hex # Acive, delete explicitly
-timeline_id_2 = env.zenith_cli.create_branch('br2').hex # Inactive, delete explictly
+timeline_id_1 = env.zenith_cli.create_branch('br1').hex # Active, delete explicitly
+timeline_id_2 = env.zenith_cli.create_branch('br2').hex # Inactive, delete explicitly
 timeline_id_3 = env.zenith_cli.create_branch('br3').hex # Active, delete with the tenant
 timeline_id_4 = env.zenith_cli.create_branch('br4').hex # Inactive, delete with the tenant
@@ -206,7 +206,7 @@ class ZenithBenchmarker:
 f"{prefix}.number_of_transactions_actually_processed",
 pg_bench_result.number_of_transactions_actually_processed,
 '',
-# thats because this is predefined by test matrix and doesnt change across runs
+# that's because this is predefined by test matrix and doesn't change across runs
 report=MetricReport.TEST_PARAM,
 )
 self.record(f"{prefix}.latency_average",

@@ -302,7 +302,7 @@ def pytest_addoption(parser):
 parser.addoption(
 "--out-dir",
 dest="out_dir",
-help="Directory to ouput performance tests results to.",
+help="Directory to output performance tests results to.",
 )
@@ -75,7 +75,7 @@ def pytest_addoption(parser):
 "--skip-interfering-proc-check",
 dest="skip_interfering_proc_check",
 action="store_true",
-help="skip check for interferring processes",
+help="skip check for interfering processes",
 )

@@ -88,7 +88,7 @@ top_output_dir = ""

 def check_interferring_processes(config):
 if config.getoption("skip_interfering_proc_check"):
-warnings.warn("interferring process check is skipped")
+warnings.warn("interfering process check is skipped")
 return

 # does not use -c as it is not supported on macOS
@@ -107,7 +107,7 @@ def check_interferring_processes(config):
 def pytest_configure(config):
 """
 Ensure that no unwanted daemons are running before we start testing.
-Check that we do not owerflow available ports range.
+Check that we do not overflow available ports range.
 """
 check_interferring_processes(config)

@@ -1417,7 +1417,7 @@ class RemotePostgres(PgProtocol):
 raise Exception('cannot stop a remote Postgres instance')

 def get_subdir_size(self, subdir) -> int:
-# TODO: Could use the server's Generic File Acccess functions if superuser.
+# TODO: Could use the server's Generic File Access functions if superuser.
 # See https://www.postgresql.org/docs/14/functions-admin.html#FUNCTIONS-ADMIN-GENFILE
 raise Exception('cannot get size of a Postgres instance')