      {/* Should we list the WAL safekeeper nodes here? Or are they part of
          the Storage? Or not visible to users at all? */}
    </>
  );
}
function BucketSummary(props) {
  const bucketSummary = props.bucketSummary;
  const startOperation = props.startOperation;

  // Ask the server to slice the sequential WAL into per-relation WAL.
  function slicedice() {
    startOperation('Slicing sequential WAL to per-relation WAL...',
                   fetch("/api/slicedice", { method: 'POST' }));
  }

  // The summary hasn't been fetched yet.
  if (!bucketSummary.nonrelimages) {
    return <>loading...</>;
  }
  return (
    <>
      <h4>Base images at following WAL positions:</h4>
      <ul>
        {bucketSummary.nonrelimages.map((img) => (
          <li key={img}>{img}</li>
        ))}
      </ul>
      <p>Sliced WAL is available up to {bucketSummary.maxwal}</p>
      <p>Raw WAL is available up to {bucketSummary.maxseqwal}</p>
      <button onClick={slicedice}>Slice WAL</button>
      <p>
        Currently, the slicing or "sharding" of the WAL needs to be triggered
        manually, by clicking the above button.
      </p>
      {/* TODO: make this a continuous process that runs in the WAL
          safekeepers, in the Page Servers, or as a standalone service. */}
    </>
  );
}
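
// A minimal sketch of how a parent might fetch the summary that BucketSummary
// renders. The '/api/bucket_summary' endpoint name is an assumption (not in
// the original code); the response is expected to carry the
// { nonrelimages, maxwal, maxseqwal } fields used above.
function BucketSummaryContainer(props) {
  const [bucketSummary, setBucketSummary] = React.useState({});
  React.useEffect(() => {
    // ASSUMPTION: endpoint name and response shape are illustrative only.
    fetch("/api/bucket_summary")
      .then((response) => response.json())
      .then((summary) => setBucketSummary(summary));
  }, []);
  return <BucketSummary bucketSummary={bucketSummary}
                        startOperation={props.startOperation} />;
}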
function ActionButtons(props) {
  const startOperation = props.startOperation;
  const bucketSummary = props.bucketSummary;

  function reset_demo() {
    startOperation('Resetting everything...',
                   fetch("/api/reset_demo", { method: 'POST' }));
  }

  function init_primary() {
    startOperation('Initializing new primary...',
                   fetch("/api/init_primary", { method: 'POST' }));
  }

  function zenith_push() {
    startOperation('Pushing new base image...',
                   fetch("/api/zenith_push", { method: 'POST' }));
  }
  return (
    <>
      <button onClick={reset_demo}>RESET DEMO</button>
      <p>
        RESET DEMO deletes everything in the storage bucket, and stops and
        destroys all servers. This resets the whole demo environment to its
        initial state.
      </p>
      <button onClick={init_primary}>Init Primary</button>
      <p>
        Init Primary runs initdb to create a new primary server. Click this
        after resetting the demo.
      </p>
      <button onClick={zenith_push}>Push Base Image</button>
      <p>
        Push Base Image stops the primary, copies the current state of the
        primary to the storage bucket as a new base backup, and restarts the
        primary.
      </p>
      {/* TODO: This should be handled by a continuous background process,
          probably running in the storage nodes. And without having to shut
          down the cluster, of course. */}
    </>
  );
}

  if (page === 'snapshots') {
    return (
      <>
        <h3>Snapshots</h3>
        <p>
          In Zenith, snapshots are just specific points (LSNs) in the WAL
          history, with a label. A snapshot prevents garbage collecting old
          data that's still needed to reconstruct the database at that LSN.
        </p>
        TODO:
        <ul>
          <li>List existing snapshots</li>
          <li>Create new snapshot manually, from current state or from a
              given LSN</li>
          <li>Drill into the WAL stream to see what has happened. Provide
              tools for e.g. finding the point where a table was dropped</li>
          <li>Create snapshots automatically based on events in the WAL,
              e.g. if pg_create_restore_point() is called in the primary</li>
          <li>Launch a new reader instance at a snapshot</li>
          <li>Export a snapshot</li>
          <li>Roll back the cluster to a snapshot</li>
        </ul>
      </>
    );
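
    /* None of the snapshot TODOs above are implemented yet. As an
       illustration of the first item ("List existing snapshots"), a list
       component could follow the same shape as the base-image list in
       BucketSummary. The '/api/snapshots' endpoint and the { label, lsn }
       response fields are assumptions, not an existing API:

         function SnapshotList(props) {
           if (!props.snapshots) {
             return <>loading...</>;
           }
           return (
             <ul>
               {props.snapshots.map((snap) => (
                 <li key={snap.label}>{snap.label} @ {snap.lsn}</li>
               ))}
             </ul>
           );
         }
    */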
  } else if (page === 'demo') {
    return (
      <>
        <h3>Misc actions</h3>
        <ActionButtons startOperation={startOperation}
                       bucketSummary={bucketSummary} />
      </>
    );
  } else if (page === 'import') {
    return (
      <>
        <h3>Import & Export tools</h3>
        TODO:
        <ul>
          <li>Initialize database from an existing backup (pg_basebackup,
              WAL-G, pgbackrest)</li>
          <li>Initialize from a pg_dump or other SQL script</li>
          <li>Launch batch job to import data files from S3</li>
          <li>Launch batch job to export database with pg_dump to S3</li>
        </ul>
        <p>
          These jobs can be run against reader processing nodes. We can even
          spawn a new reader node dedicated to a job, and destroy it when the
          job is done.
        </p>
      </>
    );
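
    /* Launching one of these batch jobs could follow the same
       startOperation-plus-fetch pattern used by the buttons above. The
       '/api/export_pgdump' endpoint and its request body are assumptions,
       not an existing API:

         function export_pgdump() {
           startOperation('Exporting database with pg_dump to S3...',
                          fetch("/api/export_pgdump", {
                            method: 'POST',
                            headers: { 'Content-Type': 'application/json' },
                            // Spawn a reader node dedicated to this job and
                            // destroy it when the job is done.
                            body: JSON.stringify({ dedicated_reader: true }),
                          }));
         }
    */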
  } else if (page === 'jobs') {
    return (
      <>
        <h3>Batch jobs</h3>
        TODO:
        <ul>
          <li>List running jobs launched from the Import & Export tools</li>