Mirror of https://github.com/neondatabase/neon.git
storcon: rework scheduler optimisation, prioritize AZ (#9916)
## Problem

We want to do a more robust job of scheduling tenants into their home AZ: https://github.com/neondatabase/neon/issues/8264.

Closes: https://github.com/neondatabase/neon/issues/8969

## Summary of changes

### Scope

This PR combines prioritizing the AZ with a larger rework of how we do optimisation. The rationale is that bumping AZ in the ordering of Score attributes is by itself a tiny change: the interesting part is lining up all the optimisation logic to respect it, which means rewriting that logic to use the same scores as the scheduler rather than the fragile hand-crafted logic we had before. Separating these changes out is possible, but would involve doing two rounds of test updates instead of one.

### Scheduling optimisation

`TenantShard`'s `optimize_attachment` and `optimize_secondary` methods now both use the scheduler to pick a new "favourite" location, followed by refined logic for whether and how to migrate to it:
- To decide whether a new location is sufficiently "better", we generate scores using projected ScheduleContexts that exclude the shard under consideration, so that we avoid migrating from a node with AffinityScore(2) to a node with AffinityScore(1), only to migrate back later.
- Score types get a `for_optimization` method so that when we compare scores, we only do an optimisation if the scores differ by their highest-ranking attributes, not just because one pageserver is lower in utilization (see the sketch below). Eventually we _will_ want a mode that migrates on utilization alone, but doing it here would make scheduling logic unstable and harder to test, and doing it correctly requires knowing the size of the tenant being migrated.
- When we find a new attached location that we would like to move to, we create a new secondary location there, even if we already had one on some other node. This handles the case where a shard's home AZ is A and we want to migrate the attachment between pageservers in that AZ while retaining a secondary location in some other AZ as well.
- A unit test is added for https://github.com/neondatabase/neon/issues/8969, which is implicitly fixed by reworking optimisation to use the same scores as scheduling.
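To make the comparison concrete, here is a minimal, self-contained sketch of the `for_optimization` pattern. The field names and the idea of zeroing the utilization-dependent components mirror the `NodeAttachmentSchedulingScore` changes in the diff below; the `AttachmentScore` struct, the `should_migrate` helper, and the example values are simplified stand-ins for illustration, not the actual storage controller types.

```rust
// Simplified sketch of how `for_optimization` makes optimisation decisions
// depend only on the highest-ranking score attributes.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct NodeId(u64);

#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct AffinityScore(usize);

/// Ordering is given by member declaration order (top to bottom): lower is better.
#[derive(Debug, Clone, Copy, PartialEq, Eq, PartialOrd, Ord)]
struct AttachmentScore {
    affinity_score: AffinityScore,
    az_mismatch: bool, // false (node is in the preferred AZ) sorts before true
    utilization_score: u64,
    total_attached_shard_count: usize,
    node_id: NodeId,
}

impl AttachmentScore {
    /// Drop the components that merely reflect node utilization, so that an
    /// optimisation is only triggered by affinity / AZ differences.
    fn for_optimization(&self) -> Self {
        Self {
            utilization_score: 0,
            total_attached_shard_count: 0,
            node_id: NodeId(0),
            ..*self
        }
    }
}

/// Illustrative helper: only migrate if the candidate wins on the structural
/// parts of the score, never on utilization alone.
fn should_migrate(current: AttachmentScore, candidate: AttachmentScore) -> bool {
    candidate.for_optimization() < current.for_optimization()
}

fn main() {
    let current = AttachmentScore {
        affinity_score: AffinityScore(1),
        az_mismatch: true, // currently attached outside the preferred AZ
        utilization_score: 10,
        total_attached_shard_count: 3,
        node_id: NodeId(1),
    };
    let candidate = AttachmentScore {
        affinity_score: AffinityScore(1),
        az_mismatch: false, // candidate node is in the preferred AZ
        utilization_score: 80,
        total_attached_shard_count: 7,
        node_id: NodeId(2),
    };
    // Migrates because the AZ attribute differs, despite higher utilization.
    assert!(should_migrate(current, candidate));

    let less_loaded_twin = AttachmentScore {
        utilization_score: 5,
        ..current
    };
    // A node that is identical except for lower utilization does not trigger a move.
    assert!(!should_migrate(current, less_loaded_twin));
}
```

The secondary-location score follows the same pattern; per the diff, its `for_optimization` additionally zeroes `total_non_home_shard_count`.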
@@ -112,7 +112,7 @@ impl TenantShardDrain {
|
||||
}
|
||||
}
|
||||
|
||||
match scheduler.node_preferred(tenant_shard.intent.get_secondary()) {
|
||||
match tenant_shard.preferred_secondary(scheduler) {
|
||||
Some(node) => Some(node),
|
||||
None => {
|
||||
tracing::warn!(
|
||||
|
||||
@@ -826,7 +826,21 @@ impl Reconciler {
|
||||
if self.cancel.is_cancelled() {
|
||||
return Err(ReconcileError::Cancel);
|
||||
}
|
||||
self.location_config(&node, conf, None, false).await?;
|
||||
// We only try to configure secondary locations if the node is available. This does
|
||||
// not stop us succeeding with the reconcile, because our core goal is to make the
|
||||
// shard _available_ (the attached location), and configuring secondary locations
|
||||
// can be done lazily when the node becomes available (via background reconciliation).
|
||||
if node.is_available() {
|
||||
self.location_config(&node, conf, None, false).await?;
|
||||
} else {
|
||||
// If the node is unavailable, we skip and consider the reconciliation successful: this
|
||||
// is a common case where a pageserver is marked unavailable: we demote a location on
|
||||
// that unavailable pageserver to secondary.
|
||||
tracing::info!("Skipping configuring secondary location {node}, it is unavailable");
|
||||
self.observed
|
||||
.locations
|
||||
.insert(node.get_id(), ObservedStateLocation { conf: None });
|
||||
}
|
||||
}
|
||||
|
||||
// The condition below identifies a detach. We must have no attached intent and
|
||||
|
||||
@@ -32,6 +32,9 @@ pub(crate) struct SchedulerNode {
|
||||
shard_count: usize,
|
||||
/// How many shards are currently attached on this node, via their [`crate::tenant_shard::IntentState`].
|
||||
attached_shard_count: usize,
|
||||
/// How many shards have a location on this node (via [`crate::tenant_shard::IntentState`]) _and_ this node
|
||||
/// is in their preferred AZ (i.e. this is their 'home' location)
|
||||
home_shard_count: usize,
|
||||
/// Availability zone id in which the node resides
|
||||
az: AvailabilityZone,
|
||||
|
||||
@@ -47,6 +50,12 @@ pub(crate) trait NodeSchedulingScore: Debug + Ord + Copy + Sized {
|
||||
preferred_az: &Option<AvailabilityZone>,
|
||||
context: &ScheduleContext,
|
||||
) -> Option<Self>;
|
||||
|
||||
/// Return a score that drops any components based on node utilization: this is useful
|
||||
/// for finding scores for scheduling optimisation, when we want to avoid rescheduling
|
||||
/// shards due to e.g. disk usage, to avoid flapping.
|
||||
fn for_optimization(&self) -> Self;
|
||||
|
||||
fn is_overloaded(&self) -> bool;
|
||||
fn node_id(&self) -> NodeId;
|
||||
}
|
||||
@@ -136,17 +145,13 @@ impl PartialOrd for SecondaryAzMatch {
|
||||
/// Ordering is given by member declaration order (top to bottom).
|
||||
#[derive(Debug, PartialEq, Eq, PartialOrd, Ord, Clone, Copy)]
|
||||
pub(crate) struct NodeAttachmentSchedulingScore {
|
||||
/// The number of shards belonging to the tenant currently being
|
||||
/// scheduled that are attached to this node.
|
||||
affinity_score: AffinityScore,
|
||||
/// Flag indicating whether this node matches the preferred AZ
|
||||
/// of the shard. For equal affinity scores, nodes in the matching AZ
|
||||
/// are considered first.
|
||||
az_match: AttachmentAzMatch,
|
||||
/// Size of [`ScheduleContext::attached_nodes`] for the current node.
|
||||
/// This normally tracks the number of attached shards belonging to the
|
||||
/// tenant being scheduled that are already on this node.
|
||||
attached_shards_in_context: usize,
|
||||
/// The number of shards belonging to the tenant currently being
|
||||
/// scheduled that are attached to this node.
|
||||
affinity_score: AffinityScore,
|
||||
/// Utilisation score that combines shard count and disk utilisation
|
||||
utilization_score: u64,
|
||||
/// Total number of shards attached to this node. When nodes have identical utilisation, this
|
||||
@@ -177,13 +182,25 @@ impl NodeSchedulingScore for NodeAttachmentSchedulingScore {
|
||||
.copied()
|
||||
.unwrap_or(AffinityScore::FREE),
|
||||
az_match: AttachmentAzMatch(AzMatch::new(&node.az, preferred_az.as_ref())),
|
||||
attached_shards_in_context: context.attached_nodes.get(node_id).copied().unwrap_or(0),
|
||||
utilization_score: utilization.cached_score(),
|
||||
total_attached_shard_count: node.attached_shard_count,
|
||||
node_id: *node_id,
|
||||
})
|
||||
}
|
||||
|
||||
/// For use in scheduling optimisation, where we only want to consider the aspects
|
||||
/// of the score that can only be resolved by moving things (such as inter-shard affinity
|
||||
/// and AZ affinity), and ignore aspects that reflect the total utilization of a node (which
|
||||
/// can fluctuate for other reasons)
|
||||
fn for_optimization(&self) -> Self {
|
||||
Self {
|
||||
utilization_score: 0,
|
||||
total_attached_shard_count: 0,
|
||||
node_id: NodeId(0),
|
||||
..*self
|
||||
}
|
||||
}
|
||||
|
||||
fn is_overloaded(&self) -> bool {
|
||||
PageserverUtilization::is_overloaded(self.utilization_score)
|
||||
}
|
||||
@@ -208,9 +225,9 @@ pub(crate) struct NodeSecondarySchedulingScore {
|
||||
affinity_score: AffinityScore,
|
||||
/// Utilisation score that combines shard count and disk utilisation
|
||||
utilization_score: u64,
|
||||
/// Total number of shards attached to this node. When nodes have identical utilisation, this
|
||||
/// acts as an anti-affinity between attached shards.
|
||||
total_attached_shard_count: usize,
|
||||
/// Anti-affinity with other non-home locations: this gives the behavior that secondaries
|
||||
/// will spread out across the nodes in an AZ.
|
||||
total_non_home_shard_count: usize,
|
||||
/// Convenience to make selection deterministic in tests and empty systems
|
||||
node_id: NodeId,
|
||||
}
|
||||
@@ -237,11 +254,20 @@ impl NodeSchedulingScore for NodeSecondarySchedulingScore {
|
||||
.copied()
|
||||
.unwrap_or(AffinityScore::FREE),
|
||||
utilization_score: utilization.cached_score(),
|
||||
total_attached_shard_count: node.attached_shard_count,
|
||||
total_non_home_shard_count: (node.shard_count - node.home_shard_count),
|
||||
node_id: *node_id,
|
||||
})
|
||||
}
|
||||
|
||||
fn for_optimization(&self) -> Self {
|
||||
Self {
|
||||
utilization_score: 0,
|
||||
total_non_home_shard_count: 0,
|
||||
node_id: NodeId(0),
|
||||
..*self
|
||||
}
|
||||
}
|
||||
|
||||
fn is_overloaded(&self) -> bool {
|
||||
PageserverUtilization::is_overloaded(self.utilization_score)
|
||||
}
|
||||
@@ -293,6 +319,10 @@ impl AffinityScore {
|
||||
pub(crate) fn inc(&mut self) {
|
||||
self.0 += 1;
|
||||
}
|
||||
|
||||
pub(crate) fn dec(&mut self) {
|
||||
self.0 -= 1;
|
||||
}
|
||||
}
|
||||
|
||||
impl std::ops::Add for AffinityScore {
|
||||
@@ -324,9 +354,6 @@ pub(crate) struct ScheduleContext {
|
||||
/// Sparse map of nodes: omitting a node implicitly makes its affinity [`AffinityScore::FREE`]
|
||||
pub(crate) nodes: HashMap<NodeId, AffinityScore>,
|
||||
|
||||
/// Specifically how many _attached_ locations are on each node
|
||||
pub(crate) attached_nodes: HashMap<NodeId, usize>,
|
||||
|
||||
pub(crate) mode: ScheduleMode,
|
||||
}
|
||||
|
||||
@@ -334,7 +361,6 @@ impl ScheduleContext {
|
||||
pub(crate) fn new(mode: ScheduleMode) -> Self {
|
||||
Self {
|
||||
nodes: HashMap::new(),
|
||||
attached_nodes: HashMap::new(),
|
||||
mode,
|
||||
}
|
||||
}
|
||||
@@ -348,25 +374,31 @@ impl ScheduleContext {
|
||||
}
|
||||
}
|
||||
|
||||
pub(crate) fn push_attached(&mut self, node_id: NodeId) {
|
||||
let entry = self.attached_nodes.entry(node_id).or_default();
|
||||
*entry += 1;
|
||||
}
|
||||
|
||||
pub(crate) fn get_node_affinity(&self, node_id: NodeId) -> AffinityScore {
|
||||
self.nodes
|
||||
.get(&node_id)
|
||||
.copied()
|
||||
.unwrap_or(AffinityScore::FREE)
|
||||
}
|
||||
|
||||
pub(crate) fn get_node_attachments(&self, node_id: NodeId) -> usize {
|
||||
self.attached_nodes.get(&node_id).copied().unwrap_or(0)
|
||||
/// Remove `shard`'s contributions to this context. This is useful when considering scheduling
|
||||
/// this shard afresh, where we don't want it to e.g. experience anti-affinity to its current location.
|
||||
pub(crate) fn project_detach(&self, shard: &TenantShard) -> Self {
|
||||
let mut new_context = self.clone();
|
||||
|
||||
if let Some(attached) = shard.intent.get_attached() {
|
||||
if let Some(score) = new_context.nodes.get_mut(attached) {
|
||||
score.dec();
|
||||
}
|
||||
}
|
||||
|
||||
for secondary in shard.intent.get_secondary() {
|
||||
if let Some(score) = new_context.nodes.get_mut(secondary) {
|
||||
score.dec();
|
||||
}
|
||||
}
|
||||
|
||||
new_context
|
||||
}
|
||||
|
||||
/// For test, track the sum of AffinityScore values, which is effectively how many
|
||||
/// attached or secondary locations have been registered with this context.
|
||||
#[cfg(test)]
|
||||
pub(crate) fn attach_count(&self) -> usize {
|
||||
self.attached_nodes.values().sum()
|
||||
pub(crate) fn location_count(&self) -> usize {
|
||||
self.nodes.values().map(|i| i.0).sum()
|
||||
}
|
||||
}
|
||||
|
||||
@@ -388,6 +420,7 @@ impl Scheduler {
|
||||
SchedulerNode {
|
||||
shard_count: 0,
|
||||
attached_shard_count: 0,
|
||||
home_shard_count: 0,
|
||||
may_schedule: node.may_schedule(),
|
||||
az: node.get_availability_zone_id().clone(),
|
||||
},
|
||||
@@ -415,6 +448,7 @@ impl Scheduler {
|
||||
SchedulerNode {
|
||||
shard_count: 0,
|
||||
attached_shard_count: 0,
|
||||
home_shard_count: 0,
|
||||
may_schedule: node.may_schedule(),
|
||||
az: node.get_availability_zone_id().clone(),
|
||||
},
|
||||
@@ -427,6 +461,9 @@ impl Scheduler {
|
||||
Some(node) => {
|
||||
node.shard_count += 1;
|
||||
node.attached_shard_count += 1;
|
||||
if Some(&node.az) == shard.preferred_az() {
|
||||
node.home_shard_count += 1;
|
||||
}
|
||||
}
|
||||
None => anyhow::bail!(
|
||||
"Tenant {} references nonexistent node {}",
|
||||
@@ -438,7 +475,12 @@ impl Scheduler {
|
||||
|
||||
for node_id in shard.intent.get_secondary() {
|
||||
match expect_nodes.get_mut(node_id) {
|
||||
Some(node) => node.shard_count += 1,
|
||||
Some(node) => {
|
||||
node.shard_count += 1;
|
||||
if Some(&node.az) == shard.preferred_az() {
|
||||
node.home_shard_count += 1;
|
||||
}
|
||||
}
|
||||
None => anyhow::bail!(
|
||||
"Tenant {} references nonexistent node {}",
|
||||
shard.tenant_shard_id,
|
||||
@@ -482,13 +524,20 @@ impl Scheduler {
|
||||
///
|
||||
/// It is an error to call this for a node that is not known to the scheduler (i.e. passed into
|
||||
/// [`Self::new`] or [`Self::node_upsert`])
|
||||
pub(crate) fn update_node_ref_counts(&mut self, node_id: NodeId, update: RefCountUpdate) {
|
||||
pub(crate) fn update_node_ref_counts(
|
||||
&mut self,
|
||||
node_id: NodeId,
|
||||
preferred_az: Option<&AvailabilityZone>,
|
||||
update: RefCountUpdate,
|
||||
) {
|
||||
let Some(node) = self.nodes.get_mut(&node_id) else {
|
||||
debug_assert!(false);
|
||||
tracing::error!("Scheduler missing node {node_id}");
|
||||
return;
|
||||
};
|
||||
|
||||
let is_home_az = Some(&node.az) == preferred_az;
|
||||
|
||||
match update {
|
||||
RefCountUpdate::PromoteSecondary => {
|
||||
node.attached_shard_count += 1;
|
||||
@@ -496,19 +545,31 @@ impl Scheduler {
|
||||
RefCountUpdate::Attach => {
|
||||
node.shard_count += 1;
|
||||
node.attached_shard_count += 1;
|
||||
if is_home_az {
|
||||
node.home_shard_count += 1;
|
||||
}
|
||||
}
|
||||
RefCountUpdate::Detach => {
|
||||
node.shard_count -= 1;
|
||||
node.attached_shard_count -= 1;
|
||||
if is_home_az {
|
||||
node.home_shard_count -= 1;
|
||||
}
|
||||
}
|
||||
RefCountUpdate::DemoteAttached => {
|
||||
node.attached_shard_count -= 1;
|
||||
}
|
||||
RefCountUpdate::AddSecondary => {
|
||||
node.shard_count += 1;
|
||||
if is_home_az {
|
||||
node.home_shard_count += 1;
|
||||
}
|
||||
}
|
||||
RefCountUpdate::RemoveSecondary => {
|
||||
node.shard_count -= 1;
|
||||
if is_home_az {
|
||||
node.home_shard_count -= 1;
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -594,6 +655,7 @@ impl Scheduler {
|
||||
entry.insert(SchedulerNode {
|
||||
shard_count: 0,
|
||||
attached_shard_count: 0,
|
||||
home_shard_count: 0,
|
||||
may_schedule: node.may_schedule(),
|
||||
az: node.get_availability_zone_id().clone(),
|
||||
});
|
||||
@@ -607,33 +669,20 @@ impl Scheduler {
|
||||
}
|
||||
}
|
||||
|
||||
/// Where we have several nodes to choose from, for example when picking a secondary location
|
||||
/// to promote to an attached location, this method may be used to pick the best choice based
|
||||
/// on the scheduler's knowledge of utilization and availability.
|
||||
///
|
||||
/// If the input is empty, or all the nodes are not elegible for scheduling, return None: the
|
||||
/// caller can pick a node some other way.
|
||||
pub(crate) fn node_preferred(&self, nodes: &[NodeId]) -> Option<NodeId> {
|
||||
if nodes.is_empty() {
|
||||
return None;
|
||||
}
|
||||
|
||||
// TODO: When the utilization score returned by the pageserver becomes meaningful,
|
||||
// schedule based on that instead of the shard count.
|
||||
let node = nodes
|
||||
.iter()
|
||||
.map(|node_id| {
|
||||
let may_schedule = self
|
||||
.nodes
|
||||
.get(node_id)
|
||||
.map(|n| !matches!(n.may_schedule, MaySchedule::No))
|
||||
.unwrap_or(false);
|
||||
(*node_id, may_schedule)
|
||||
})
|
||||
.max_by_key(|(_n, may_schedule)| *may_schedule);
|
||||
|
||||
// If even the preferred node has may_schedule==false, return None
|
||||
node.and_then(|(node_id, may_schedule)| if may_schedule { Some(node_id) } else { None })
|
||||
/// Calculate a single node's score, used in optimizer logic to compare specific
|
||||
/// nodes' scores.
|
||||
pub(crate) fn compute_node_score<Score>(
|
||||
&mut self,
|
||||
node_id: NodeId,
|
||||
preferred_az: &Option<AvailabilityZone>,
|
||||
context: &ScheduleContext,
|
||||
) -> Option<Score>
|
||||
where
|
||||
Score: NodeSchedulingScore,
|
||||
{
|
||||
self.nodes
|
||||
.get_mut(&node_id)
|
||||
.and_then(|node| Score::generate(&node_id, node, preferred_az, context))
|
||||
}
|
||||
|
||||
/// Compute a schedulling score for each node that the scheduler knows of
|
||||
@@ -727,7 +776,7 @@ impl Scheduler {
|
||||
tracing::info!(
|
||||
"scheduler selected node {node_id} (elegible nodes {:?}, hard exclude: {hard_exclude:?}, soft exclude: {context:?})",
|
||||
scores.iter().map(|i| i.node_id().0).collect::<Vec<_>>()
|
||||
);
|
||||
);
|
||||
}
|
||||
|
||||
// Note that we do not update shard count here to reflect the scheduling: that
|
||||
@@ -743,47 +792,74 @@ impl Scheduler {
|
||||
}
|
||||
|
||||
/// For choosing which AZ to schedule a new shard into, use this. It will return the
|
||||
/// AZ with the lowest median utilization.
|
||||
/// AZ with the lowest number of shards currently scheduled in this AZ as their home
|
||||
/// location.
|
||||
///
|
||||
/// We use an AZ-wide measure rather than simply selecting the AZ of the least-loaded
|
||||
/// node, because while tenants start out single sharded, when they grow and undergo
|
||||
/// shard-split, they will occupy space on many nodes within an AZ.
|
||||
/// shard-split, they will occupy space on many nodes within an AZ. It is important
|
||||
/// that we pick the AZ in a way that balances this _future_ load.
|
||||
///
|
||||
/// We use median rather than total free space or mean utilization, because
|
||||
/// we wish to avoid preferring AZs that have low-load nodes resulting from
|
||||
/// recent replacements.
|
||||
///
|
||||
/// The practical result is that we will pick an AZ based on its median node, and
|
||||
/// then actually _schedule_ the new shard onto the lowest-loaded node in that AZ.
|
||||
/// Once we've picked an AZ, subsequent scheduling within that AZ will be driven by
|
||||
/// nodes' utilization scores.
|
||||
pub(crate) fn get_az_for_new_tenant(&self) -> Option<AvailabilityZone> {
|
||||
if self.nodes.is_empty() {
|
||||
return None;
|
||||
}
|
||||
|
||||
let mut scores_by_az = HashMap::new();
|
||||
for (node_id, node) in &self.nodes {
|
||||
let az_scores = scores_by_az.entry(&node.az).or_insert_with(Vec::new);
|
||||
let score = match &node.may_schedule {
|
||||
MaySchedule::Yes(utilization) => utilization.score(),
|
||||
MaySchedule::No => PageserverUtilization::full().score(),
|
||||
};
|
||||
az_scores.push((node_id, node, score));
|
||||
#[derive(Default)]
|
||||
struct AzScore {
|
||||
home_shard_count: usize,
|
||||
scheduleable: bool,
|
||||
}
|
||||
|
||||
// Sort by utilization. Also include the node ID to break ties.
|
||||
for scores in scores_by_az.values_mut() {
|
||||
scores.sort_by_key(|i| (i.2, i.0));
|
||||
let mut azs: HashMap<&AvailabilityZone, AzScore> = HashMap::new();
|
||||
for node in self.nodes.values() {
|
||||
let az = azs.entry(&node.az).or_default();
|
||||
az.home_shard_count += node.home_shard_count;
|
||||
az.scheduleable |= matches!(node.may_schedule, MaySchedule::Yes(_));
|
||||
}
|
||||
|
||||
let mut median_by_az = scores_by_az
|
||||
// If any AZs are schedulable, then filter out the non-schedulable ones (i.e. AZs where
|
||||
// all nodes are overloaded or otherwise unschedulable).
|
||||
if azs.values().any(|i| i.scheduleable) {
|
||||
azs.retain(|_, i| i.scheduleable);
|
||||
}
|
||||
|
||||
// Find the AZ with the lowest number of shards currently allocated
|
||||
Some(
|
||||
azs.into_iter()
|
||||
.min_by_key(|i| (i.1.home_shard_count, i.0))
|
||||
.unwrap()
|
||||
.0
|
||||
.clone(),
|
||||
)
|
||||
}
|
||||
|
||||
pub(crate) fn get_node_az(&self, node_id: &NodeId) -> Option<AvailabilityZone> {
|
||||
self.nodes.get(node_id).map(|n| n.az.clone())
|
||||
}
|
||||
|
||||
/// For use when choosing a preferred secondary location: filter out nodes that are not
|
||||
/// available, and gather their AZs.
|
||||
pub(crate) fn filter_usable_nodes(
|
||||
&self,
|
||||
nodes: &[NodeId],
|
||||
) -> Vec<(NodeId, Option<AvailabilityZone>)> {
|
||||
nodes
|
||||
.iter()
|
||||
.map(|(az, nodes)| (*az, nodes.get(nodes.len() / 2).unwrap().2))
|
||||
.collect::<Vec<_>>();
|
||||
// Sort by utilization. Also include the AZ to break ties.
|
||||
median_by_az.sort_by_key(|i| (i.1, i.0));
|
||||
|
||||
// Return the AZ with the lowest median utilization
|
||||
Some(median_by_az.first().unwrap().0.clone())
|
||||
.filter_map(|node_id| {
|
||||
let node = self
|
||||
.nodes
|
||||
.get(node_id)
|
||||
.expect("Referenced nodes always exist");
|
||||
if matches!(node.may_schedule, MaySchedule::Yes(_)) {
|
||||
Some((*node_id, Some(node.az.clone())))
|
||||
} else {
|
||||
None
|
||||
}
|
||||
})
|
||||
.collect()
|
||||
}
|
||||
|
||||
/// Unit test access to internal state
|
||||
@@ -843,7 +919,14 @@ pub(crate) mod test_utils {
|
||||
|
||||
#[cfg(test)]
|
||||
mod tests {
|
||||
use pageserver_api::{controller_api::NodeAvailability, models::utilization::test_utilization};
|
||||
use pageserver_api::{
|
||||
controller_api::NodeAvailability, models::utilization::test_utilization,
|
||||
shard::ShardIdentity,
|
||||
};
|
||||
use utils::{
|
||||
id::TenantId,
|
||||
shard::{ShardCount, ShardNumber, TenantShardId},
|
||||
};
|
||||
|
||||
use super::*;
|
||||
|
||||
@@ -853,8 +936,8 @@ mod tests {
|
||||
let nodes = test_utils::make_test_nodes(2, &[]);
|
||||
|
||||
let mut scheduler = Scheduler::new(nodes.values());
|
||||
let mut t1_intent = IntentState::new();
|
||||
let mut t2_intent = IntentState::new();
|
||||
let mut t1_intent = IntentState::new(None);
|
||||
let mut t2_intent = IntentState::new(None);
|
||||
|
||||
let context = ScheduleContext::default();
|
||||
|
||||
@@ -930,7 +1013,7 @@ mod tests {
|
||||
let scheduled = scheduler
|
||||
.schedule_shard::<AttachedShardTag>(&[], &None, context)
|
||||
.unwrap();
|
||||
let mut intent = IntentState::new();
|
||||
let mut intent = IntentState::new(None);
|
||||
intent.set_attached(scheduler, Some(scheduled));
|
||||
scheduled_intents.push(intent);
|
||||
assert_eq!(scheduled, expect_node);
|
||||
@@ -1063,7 +1146,7 @@ mod tests {
|
||||
let scheduled = scheduler
|
||||
.schedule_shard::<Tag>(&[], &preferred_az, context)
|
||||
.unwrap();
|
||||
let mut intent = IntentState::new();
|
||||
let mut intent = IntentState::new(preferred_az.clone());
|
||||
intent.set_attached(scheduler, Some(scheduled));
|
||||
scheduled_intents.push(intent);
|
||||
assert_eq!(scheduled, expect_node);
|
||||
@@ -1089,9 +1172,9 @@ mod tests {
|
||||
&mut context,
|
||||
);
|
||||
|
||||
// Node 2 is not in "az-a", but it has the lowest affinity so we prefer that.
|
||||
// Node 1 and 3 (az-a) have same affinity score, so prefer the lowest node id.
|
||||
assert_scheduler_chooses::<AttachedShardTag>(
|
||||
NodeId(2),
|
||||
NodeId(1),
|
||||
Some(az_a_tag.clone()),
|
||||
&mut scheduled_intents,
|
||||
&mut scheduler,
|
||||
@@ -1107,26 +1190,6 @@ mod tests {
|
||||
&mut context,
|
||||
);
|
||||
|
||||
// Avoid nodes in "az-b" for the secondary location.
|
||||
// Nodes 1 and 3 are identically loaded, so prefer the lowest node id.
|
||||
assert_scheduler_chooses::<SecondaryShardTag>(
|
||||
NodeId(1),
|
||||
Some(az_b_tag.clone()),
|
||||
&mut scheduled_intents,
|
||||
&mut scheduler,
|
||||
&mut context,
|
||||
);
|
||||
|
||||
// Avoid nodes in "az-b" for the secondary location.
|
||||
// Node 3 has lower affinity score than 1, so prefer that.
|
||||
assert_scheduler_chooses::<SecondaryShardTag>(
|
||||
NodeId(3),
|
||||
Some(az_b_tag.clone()),
|
||||
&mut scheduled_intents,
|
||||
&mut scheduler,
|
||||
&mut context,
|
||||
);
|
||||
|
||||
for mut intent in scheduled_intents {
|
||||
intent.clear(&mut scheduler);
|
||||
}
|
||||
@@ -1150,34 +1213,292 @@ mod tests {
|
||||
|
||||
let mut scheduler = Scheduler::new(nodes.values());
|
||||
|
||||
/// Force the utilization of a node in Scheduler's state to a particular
|
||||
/// number of bytes used.
|
||||
fn set_utilization(scheduler: &mut Scheduler, node_id: NodeId, shard_count: u32) {
|
||||
let mut node = Node::new(
|
||||
node_id,
|
||||
"".to_string(),
|
||||
0,
|
||||
"".to_string(),
|
||||
0,
|
||||
scheduler.nodes.get(&node_id).unwrap().az.clone(),
|
||||
);
|
||||
node.set_availability(NodeAvailability::Active(test_utilization::simple(
|
||||
shard_count,
|
||||
0,
|
||||
)));
|
||||
scheduler.node_upsert(&node);
|
||||
/// Force the `home_shard_count` of a node directly: this is the metric used
|
||||
/// by the scheduler when picking AZs.
|
||||
fn set_shard_count(scheduler: &mut Scheduler, node_id: NodeId, shard_count: usize) {
|
||||
let node = scheduler.nodes.get_mut(&node_id).unwrap();
|
||||
node.home_shard_count = shard_count;
|
||||
}
|
||||
|
||||
// Initial empty state. Scores are tied, scheduler prefers lower AZ ID.
|
||||
assert_eq!(scheduler.get_az_for_new_tenant(), Some(az_a_tag.clone()));
|
||||
|
||||
// Put some utilization on one node in AZ A: this should change nothing, as the median hasn't changed
|
||||
set_utilization(&mut scheduler, NodeId(1), 1000000);
|
||||
assert_eq!(scheduler.get_az_for_new_tenant(), Some(az_a_tag.clone()));
|
||||
|
||||
// Put some utilization on a second node in AZ A: now the median has changed, so the scheduler
|
||||
// should prefer the other AZ.
|
||||
set_utilization(&mut scheduler, NodeId(2), 1000000);
|
||||
// Home shard count is higher in AZ A, so AZ B will be preferred
|
||||
set_shard_count(&mut scheduler, NodeId(1), 10);
|
||||
assert_eq!(scheduler.get_az_for_new_tenant(), Some(az_b_tag.clone()));
|
||||
|
||||
// Total home shard count is higher in AZ B, so we revert to preferring AZ A
|
||||
set_shard_count(&mut scheduler, NodeId(4), 6);
|
||||
set_shard_count(&mut scheduler, NodeId(5), 6);
|
||||
assert_eq!(scheduler.get_az_for_new_tenant(), Some(az_a_tag.clone()));
|
||||
}
|
||||
|
||||
/// Test that when selecting AZs for many new tenants, we get the expected balance across nodes
|
||||
#[test]
|
||||
fn az_selection_many() {
|
||||
let az_a_tag = AvailabilityZone("az-a".to_string());
|
||||
let az_b_tag = AvailabilityZone("az-b".to_string());
|
||||
let az_c_tag = AvailabilityZone("az-c".to_string());
|
||||
let nodes = test_utils::make_test_nodes(
|
||||
6,
|
||||
&[
|
||||
az_a_tag.clone(),
|
||||
az_b_tag.clone(),
|
||||
az_c_tag.clone(),
|
||||
az_a_tag.clone(),
|
||||
az_b_tag.clone(),
|
||||
az_c_tag.clone(),
|
||||
],
|
||||
);
|
||||
|
||||
let mut scheduler = Scheduler::new(nodes.values());
|
||||
|
||||
// We should get 1/6th of these on each node, give or take a few...
|
||||
let total_tenants = 300;
|
||||
|
||||
// ...where the 'few' is the number of AZs, because the scheduling will sometimes overshoot
|
||||
// on one AZ before correcting itself. This is because we select the 'home' AZ based on
|
||||
// an AZ-wide metric, but we select the location for secondaries on a purely node-based
|
||||
// metric (while excluding the home AZ).
|
||||
let grace = 3;
|
||||
|
||||
let mut scheduled_shards = Vec::new();
|
||||
for _i in 0..total_tenants {
|
||||
let preferred_az = scheduler.get_az_for_new_tenant().unwrap();
|
||||
|
||||
let mut node_home_counts = scheduler
|
||||
.nodes
|
||||
.iter()
|
||||
.map(|(node_id, node)| (node_id, node.home_shard_count))
|
||||
.collect::<Vec<_>>();
|
||||
node_home_counts.sort_by_key(|i| i.0);
|
||||
eprintln!("Selected {}, vs nodes {:?}", preferred_az, node_home_counts);
|
||||
|
||||
let tenant_shard_id = TenantShardId {
|
||||
tenant_id: TenantId::generate(),
|
||||
shard_number: ShardNumber(0),
|
||||
shard_count: ShardCount(1),
|
||||
};
|
||||
|
||||
let shard_identity = ShardIdentity::new(
|
||||
tenant_shard_id.shard_number,
|
||||
tenant_shard_id.shard_count,
|
||||
pageserver_api::shard::ShardStripeSize(1),
|
||||
)
|
||||
.unwrap();
|
||||
let mut shard = TenantShard::new(
|
||||
tenant_shard_id,
|
||||
shard_identity,
|
||||
pageserver_api::controller_api::PlacementPolicy::Attached(1),
|
||||
Some(preferred_az),
|
||||
);
|
||||
|
||||
let mut context = ScheduleContext::default();
|
||||
shard.schedule(&mut scheduler, &mut context).unwrap();
|
||||
eprintln!("Scheduled shard at {:?}", shard.intent);
|
||||
|
||||
scheduled_shards.push(shard);
|
||||
}
|
||||
|
||||
for (node_id, node) in &scheduler.nodes {
|
||||
eprintln!(
|
||||
"Node {}: {} {} {}",
|
||||
node_id, node.shard_count, node.attached_shard_count, node.home_shard_count
|
||||
);
|
||||
}
|
||||
|
||||
for node in scheduler.nodes.values() {
|
||||
assert!((node.home_shard_count as i64 - total_tenants as i64 / 6).abs() < grace);
|
||||
}
|
||||
|
||||
for mut shard in scheduled_shards {
|
||||
shard.intent.clear(&mut scheduler);
|
||||
}
|
||||
}
|
||||
|
||||
#[test]
|
||||
/// Make sure that when we have an odd number of nodes and an even number of shards, we still
|
||||
/// get scheduling stability.
|
||||
fn odd_nodes_stability() {
|
||||
let az_a = AvailabilityZone("az-a".to_string());
|
||||
let az_b = AvailabilityZone("az-b".to_string());
|
||||
|
||||
let nodes = test_utils::make_test_nodes(
|
||||
10,
|
||||
&[
|
||||
az_a.clone(),
|
||||
az_a.clone(),
|
||||
az_a.clone(),
|
||||
az_a.clone(),
|
||||
az_a.clone(),
|
||||
az_b.clone(),
|
||||
az_b.clone(),
|
||||
az_b.clone(),
|
||||
az_b.clone(),
|
||||
az_b.clone(),
|
||||
],
|
||||
);
|
||||
let mut scheduler = Scheduler::new(nodes.values());
|
||||
|
||||
// Need to keep these alive because they contribute to shard counts via RAII
|
||||
let mut scheduled_shards = Vec::new();
|
||||
|
||||
let mut context = ScheduleContext::default();
|
||||
|
||||
fn schedule_shard(
|
||||
tenant_shard_id: TenantShardId,
|
||||
expect_attached: NodeId,
|
||||
expect_secondary: NodeId,
|
||||
scheduled_shards: &mut Vec<TenantShard>,
|
||||
scheduler: &mut Scheduler,
|
||||
preferred_az: Option<AvailabilityZone>,
|
||||
context: &mut ScheduleContext,
|
||||
) {
|
||||
let shard_identity = ShardIdentity::new(
|
||||
tenant_shard_id.shard_number,
|
||||
tenant_shard_id.shard_count,
|
||||
pageserver_api::shard::ShardStripeSize(1),
|
||||
)
|
||||
.unwrap();
|
||||
let mut shard = TenantShard::new(
|
||||
tenant_shard_id,
|
||||
shard_identity,
|
||||
pageserver_api::controller_api::PlacementPolicy::Attached(1),
|
||||
preferred_az,
|
||||
);
|
||||
|
||||
shard.schedule(scheduler, context).unwrap();
|
||||
|
||||
assert_eq!(shard.intent.get_attached().unwrap(), expect_attached);
|
||||
assert_eq!(
|
||||
shard.intent.get_secondary().first().unwrap(),
|
||||
&expect_secondary
|
||||
);
|
||||
|
||||
scheduled_shards.push(shard);
|
||||
}
|
||||
|
||||
let tenant_id = TenantId::generate();
|
||||
|
||||
schedule_shard(
|
||||
TenantShardId {
|
||||
tenant_id,
|
||||
shard_number: ShardNumber(0),
|
||||
shard_count: ShardCount(8),
|
||||
},
|
||||
NodeId(1),
|
||||
NodeId(6),
|
||||
&mut scheduled_shards,
|
||||
&mut scheduler,
|
||||
Some(az_a.clone()),
|
||||
&mut context,
|
||||
);
|
||||
|
||||
schedule_shard(
|
||||
TenantShardId {
|
||||
tenant_id,
|
||||
shard_number: ShardNumber(1),
|
||||
shard_count: ShardCount(8),
|
||||
},
|
||||
NodeId(2),
|
||||
NodeId(7),
|
||||
&mut scheduled_shards,
|
||||
&mut scheduler,
|
||||
Some(az_a.clone()),
|
||||
&mut context,
|
||||
);
|
||||
|
||||
schedule_shard(
|
||||
TenantShardId {
|
||||
tenant_id,
|
||||
shard_number: ShardNumber(2),
|
||||
shard_count: ShardCount(8),
|
||||
},
|
||||
NodeId(3),
|
||||
NodeId(8),
|
||||
&mut scheduled_shards,
|
||||
&mut scheduler,
|
||||
Some(az_a.clone()),
|
||||
&mut context,
|
||||
);
|
||||
|
||||
schedule_shard(
|
||||
TenantShardId {
|
||||
tenant_id,
|
||||
shard_number: ShardNumber(3),
|
||||
shard_count: ShardCount(8),
|
||||
},
|
||||
NodeId(4),
|
||||
NodeId(9),
|
||||
&mut scheduled_shards,
|
||||
&mut scheduler,
|
||||
Some(az_a.clone()),
|
||||
&mut context,
|
||||
);
|
||||
|
||||
schedule_shard(
|
||||
TenantShardId {
|
||||
tenant_id,
|
||||
shard_number: ShardNumber(4),
|
||||
shard_count: ShardCount(8),
|
||||
},
|
||||
NodeId(5),
|
||||
NodeId(10),
|
||||
&mut scheduled_shards,
|
||||
&mut scheduler,
|
||||
Some(az_a.clone()),
|
||||
&mut context,
|
||||
);
|
||||
|
||||
schedule_shard(
|
||||
TenantShardId {
|
||||
tenant_id,
|
||||
shard_number: ShardNumber(5),
|
||||
shard_count: ShardCount(8),
|
||||
},
|
||||
NodeId(1),
|
||||
NodeId(6),
|
||||
&mut scheduled_shards,
|
||||
&mut scheduler,
|
||||
Some(az_a.clone()),
|
||||
&mut context,
|
||||
);
|
||||
|
||||
schedule_shard(
|
||||
TenantShardId {
|
||||
tenant_id,
|
||||
shard_number: ShardNumber(6),
|
||||
shard_count: ShardCount(8),
|
||||
},
|
||||
NodeId(2),
|
||||
NodeId(7),
|
||||
&mut scheduled_shards,
|
||||
&mut scheduler,
|
||||
Some(az_a.clone()),
|
||||
&mut context,
|
||||
);
|
||||
|
||||
schedule_shard(
|
||||
TenantShardId {
|
||||
tenant_id,
|
||||
shard_number: ShardNumber(7),
|
||||
shard_count: ShardCount(8),
|
||||
},
|
||||
NodeId(3),
|
||||
NodeId(8),
|
||||
&mut scheduled_shards,
|
||||
&mut scheduler,
|
||||
Some(az_a.clone()),
|
||||
&mut context,
|
||||
);
|
||||
|
||||
// Assert that the optimizer suggests no changes, i.e. our initial scheduling was stable.
|
||||
for shard in &scheduled_shards {
|
||||
assert_eq!(shard.optimize_attachment(&mut scheduler, &context), None);
|
||||
}
|
||||
|
||||
for mut shard in scheduled_shards {
|
||||
shard.intent.clear(&mut scheduler);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
@@ -1404,7 +1404,11 @@ impl Service {
|
||||
|
||||
// We will populate intent properly later in [`Self::startup_reconcile`], initially populate
|
||||
// it with what we can infer: the node for which a generation was most recently issued.
|
||||
let mut intent = IntentState::new();
|
||||
let mut intent = IntentState::new(
|
||||
tsp.preferred_az_id
|
||||
.as_ref()
|
||||
.map(|az| AvailabilityZone(az.clone())),
|
||||
);
|
||||
if let Some(generation_pageserver) = tsp.generation_pageserver.map(|n| NodeId(n as u64))
|
||||
{
|
||||
if nodes.contains_key(&generation_pageserver) {
|
||||
@@ -2474,18 +2478,29 @@ impl Service {
|
||||
tenant_id: TenantId,
|
||||
_guard: &TracingExclusiveGuard<TenantOperations>,
|
||||
) -> Result<(), ApiError> {
|
||||
let present_in_memory = {
|
||||
// Check if the tenant is present in memory, and select an AZ to use when loading
|
||||
// if we will load it.
|
||||
let load_in_az = {
|
||||
let locked = self.inner.read().unwrap();
|
||||
locked
|
||||
let existing = locked
|
||||
.tenants
|
||||
.range(TenantShardId::tenant_range(tenant_id))
|
||||
.next()
|
||||
.is_some()
|
||||
};
|
||||
.next();
|
||||
|
||||
if present_in_memory {
|
||||
return Ok(());
|
||||
}
|
||||
// If the tenant is not present in memory, we expect to load it from database,
|
||||
// so let's figure out what AZ to load it into while we have self.inner locked.
|
||||
if existing.is_none() {
|
||||
locked
|
||||
.scheduler
|
||||
.get_az_for_new_tenant()
|
||||
.ok_or(ApiError::BadRequest(anyhow::anyhow!(
|
||||
"No AZ with nodes found to load tenant"
|
||||
)))?
|
||||
} else {
|
||||
// We already have this tenant in memory
|
||||
return Ok(());
|
||||
}
|
||||
};
|
||||
|
||||
let tenant_shards = self.persistence.load_tenant(tenant_id).await?;
|
||||
if tenant_shards.is_empty() {
|
||||
@@ -2494,8 +2509,20 @@ impl Service {
|
||||
));
|
||||
}
|
||||
|
||||
// TODO: choose a fresh AZ to use for this tenant when un-detaching: there definitely isn't a running
|
||||
// compute, so no benefit to making AZ sticky across detaches.
|
||||
// Update the persistent shards with the AZ that we are about to apply to in-memory state
|
||||
self.persistence
|
||||
.set_tenant_shard_preferred_azs(
|
||||
tenant_shards
|
||||
.iter()
|
||||
.map(|t| {
|
||||
(
|
||||
t.get_tenant_shard_id().expect("Corrupt shard in database"),
|
||||
load_in_az.clone(),
|
||||
)
|
||||
})
|
||||
.collect(),
|
||||
)
|
||||
.await?;
|
||||
|
||||
let mut locked = self.inner.write().unwrap();
|
||||
tracing::info!(
|
||||
@@ -2505,7 +2532,7 @@ impl Service {
|
||||
);
|
||||
|
||||
locked.tenants.extend(tenant_shards.into_iter().map(|p| {
|
||||
let intent = IntentState::new();
|
||||
let intent = IntentState::new(Some(load_in_az.clone()));
|
||||
let shard =
|
||||
TenantShard::from_persistent(p, intent).expect("Corrupt shard row in database");
|
||||
|
||||
@@ -4236,6 +4263,22 @@ impl Service {
|
||||
}
|
||||
|
||||
tracing::info!("Restoring parent shard {tenant_shard_id}");
|
||||
|
||||
// Drop any intents that refer to unavailable nodes, to enable this abort to proceed even
|
||||
// if the original attachment location is offline.
|
||||
if let Some(node_id) = shard.intent.get_attached() {
|
||||
if !nodes.get(node_id).unwrap().is_available() {
|
||||
tracing::info!("Demoting attached intent for {tenant_shard_id} on unavailable node {node_id}");
|
||||
shard.intent.demote_attached(scheduler, *node_id);
|
||||
}
|
||||
}
|
||||
for node_id in shard.intent.get_secondary().clone() {
|
||||
if !nodes.get(&node_id).unwrap().is_available() {
|
||||
tracing::info!("Dropping secondary intent for {tenant_shard_id} on unavailable node {node_id}");
|
||||
shard.intent.remove_secondary(scheduler, node_id);
|
||||
}
|
||||
}
|
||||
|
||||
shard.splitting = SplitState::Idle;
|
||||
if let Err(e) = shard.schedule(scheduler, &mut ScheduleContext::default()) {
|
||||
// If this shard can't be scheduled now (perhaps due to offline nodes or
|
||||
@@ -4389,15 +4432,13 @@ impl Service {
|
||||
|
||||
let mut child_state =
|
||||
TenantShard::new(child, child_shard, policy.clone(), preferred_az.clone());
|
||||
child_state.intent = IntentState::single(scheduler, Some(pageserver));
|
||||
child_state.intent =
|
||||
IntentState::single(scheduler, Some(pageserver), preferred_az.clone());
|
||||
child_state.observed = ObservedState {
|
||||
locations: child_observed,
|
||||
};
|
||||
child_state.generation = Some(generation);
|
||||
child_state.config = config.clone();
|
||||
if let Some(preferred_az) = &preferred_az {
|
||||
child_state.set_preferred_az(preferred_az.clone());
|
||||
}
|
||||
|
||||
// The child's TenantShard::splitting is intentionally left at the default value of Idle,
|
||||
// as at this point in the split process we have succeeded and this part is infallible:
|
||||
@@ -5014,6 +5055,8 @@ impl Service {
|
||||
// If our new attached node was a secondary, it no longer should be.
|
||||
shard.intent.remove_secondary(scheduler, migrate_req.node_id);
|
||||
|
||||
shard.intent.set_attached(scheduler, Some(migrate_req.node_id));
|
||||
|
||||
// If we were already attached to something, demote that to a secondary
|
||||
if let Some(old_attached) = old_attached {
|
||||
if n > 0 {
|
||||
@@ -5025,8 +5068,6 @@ impl Service {
|
||||
shard.intent.push_secondary(scheduler, old_attached);
|
||||
}
|
||||
}
|
||||
|
||||
shard.intent.set_attached(scheduler, Some(migrate_req.node_id));
|
||||
}
|
||||
PlacementPolicy::Secondary => {
|
||||
shard.intent.clear(scheduler);
|
||||
@@ -5712,7 +5753,7 @@ impl Service {
|
||||
register_req.listen_http_port,
|
||||
register_req.listen_pg_addr,
|
||||
register_req.listen_pg_port,
|
||||
register_req.availability_zone_id,
|
||||
register_req.availability_zone_id.clone(),
|
||||
);
|
||||
|
||||
// TODO: idempotency if the node already exists in the database
|
||||
@@ -5732,8 +5773,9 @@ impl Service {
|
||||
.set(locked.nodes.len() as i64);
|
||||
|
||||
tracing::info!(
|
||||
"Registered pageserver {}, now have {} pageservers",
|
||||
"Registered pageserver {} ({}), now have {} pageservers",
|
||||
register_req.node_id,
|
||||
register_req.availability_zone_id,
|
||||
locked.nodes.len()
|
||||
);
|
||||
Ok(())
|
||||
@@ -6467,6 +6509,7 @@ impl Service {
|
||||
// Shard was dropped between planning and execution;
|
||||
continue;
|
||||
};
|
||||
tracing::info!("Applying optimization: {optimization:?}");
|
||||
if shard.apply_optimization(scheduler, optimization) {
|
||||
optimizations_applied += 1;
|
||||
if self.maybe_reconcile_shard(shard, nodes).is_some() {
|
||||
@@ -6497,7 +6540,13 @@ impl Service {
|
||||
|
||||
let mut work = Vec::new();
|
||||
let mut locked = self.inner.write().unwrap();
|
||||
let (nodes, tenants, scheduler) = locked.parts_mut();
|
||||
let (_nodes, tenants, scheduler) = locked.parts_mut();
|
||||
|
||||
// We are going to plan a bunch of optimisations before applying any of them, so the
|
||||
// utilisation stats on nodes will be effectively stale for the >1st optimisation we
|
||||
// generate. To avoid this causing unstable migrations/flapping, it's important that the
|
||||
// code in TenantShard for finding optimisations uses [`NodeAttachmentSchedulingScore::disregard_utilization`]
|
||||
// to ignore the utilisation component of the score.
|
||||
|
||||
for (_tenant_id, schedule_context, shards) in
|
||||
TenantShardContextIterator::new(tenants, ScheduleMode::Speculative)
|
||||
@@ -6528,13 +6577,28 @@ impl Service {
|
||||
continue;
|
||||
}
|
||||
|
||||
// TODO: optimization calculations are relatively expensive: create some fast-path for
|
||||
// the common idle case (avoiding the search on tenants that we have recently checked)
|
||||
// Fast path: we may quickly identify shards that don't have any possible optimisations
|
||||
if !shard.maybe_optimizable(scheduler, &schedule_context) {
|
||||
if cfg!(feature = "testing") {
|
||||
// Check that maybe_optimizable doesn't disagree with the actual optimization functions.
|
||||
// Only do this in testing builds because it is not a correctness-critical check, so we shouldn't
|
||||
// panic in prod if we hit this, or spend cycles on it in prod.
|
||||
assert!(shard
|
||||
.optimize_attachment(scheduler, &schedule_context)
|
||||
.is_none());
|
||||
assert!(shard
|
||||
.optimize_secondary(scheduler, &schedule_context)
|
||||
.is_none());
|
||||
}
|
||||
continue;
|
||||
}
|
||||
|
||||
if let Some(optimization) =
|
||||
// If idle, maybe ptimize attachments: if a shard has a secondary location that is preferable to
|
||||
// If idle, maybe optimize attachments: if a shard has a secondary location that is preferable to
|
||||
// its primary location based on soft constraints, cut it over.
|
||||
shard.optimize_attachment(nodes, &schedule_context)
|
||||
shard.optimize_attachment(scheduler, &schedule_context)
|
||||
{
|
||||
tracing::info!(tenant_shard_id=%shard.tenant_shard_id, "Identified optimization for attachment: {optimization:?}");
|
||||
work.push((shard.tenant_shard_id, optimization));
|
||||
break;
|
||||
} else if let Some(optimization) =
|
||||
@@ -6544,6 +6608,7 @@ impl Service {
|
||||
// in the same tenant with secondary locations on the node where they originally split.
|
||||
shard.optimize_secondary(scheduler, &schedule_context)
|
||||
{
|
||||
tracing::info!(tenant_shard_id=%shard.tenant_shard_id, "Identified optimization for secondary: {optimization:?}");
|
||||
work.push((shard.tenant_shard_id, optimization));
|
||||
break;
|
||||
}
|
||||
@@ -6592,8 +6657,10 @@ impl Service {
|
||||
}
|
||||
}
|
||||
}
|
||||
ScheduleOptimizationAction::ReplaceSecondary(_) => {
|
||||
// No extra checks needed to replace a secondary: this does not interrupt client access
|
||||
ScheduleOptimizationAction::ReplaceSecondary(_)
|
||||
| ScheduleOptimizationAction::CreateSecondary(_)
|
||||
| ScheduleOptimizationAction::RemoveSecondary(_) => {
|
||||
// No extra checks needed to manage secondaries: this does not interrupt client access
|
||||
validated_work.push((tenant_shard_id, optimization))
|
||||
}
|
||||
};
|
||||
@@ -6665,26 +6732,35 @@ impl Service {
|
||||
/// we have this helper to move things along faster.
|
||||
#[cfg(feature = "testing")]
|
||||
async fn kick_secondary_download(&self, tenant_shard_id: TenantShardId) {
|
||||
let (attached_node, secondary_node) = {
|
||||
let (attached_node, secondaries) = {
|
||||
let locked = self.inner.read().unwrap();
|
||||
let Some(shard) = locked.tenants.get(&tenant_shard_id) else {
|
||||
tracing::warn!(
|
||||
"Skipping kick of secondary download for {tenant_shard_id}: not found"
|
||||
);
|
||||
return;
|
||||
};
|
||||
let (Some(attached), Some(secondary)) = (
|
||||
shard.intent.get_attached(),
|
||||
shard.intent.get_secondary().first(),
|
||||
) else {
|
||||
|
||||
let Some(attached) = shard.intent.get_attached() else {
|
||||
tracing::warn!(
|
||||
"Skipping kick of secondary download for {tenant_shard_id}: no attached"
|
||||
);
|
||||
return;
|
||||
};
|
||||
(
|
||||
locked.nodes.get(attached).unwrap().clone(),
|
||||
locked.nodes.get(secondary).unwrap().clone(),
|
||||
)
|
||||
|
||||
let secondaries = shard
|
||||
.intent
|
||||
.get_secondary()
|
||||
.iter()
|
||||
.map(|n| locked.nodes.get(n).unwrap().clone())
|
||||
.collect::<Vec<_>>();
|
||||
|
||||
(locked.nodes.get(attached).unwrap().clone(), secondaries)
|
||||
};
|
||||
|
||||
// Make remote API calls to upload + download heatmaps: we ignore errors because this is just
|
||||
// a 'kick' to let scheduling optimisation run more promptly.
|
||||
attached_node
|
||||
match attached_node
|
||||
.with_client_retries(
|
||||
|client| async move { client.tenant_heatmap_upload(tenant_shard_id).await },
|
||||
&self.config.jwt_token,
|
||||
@@ -6693,22 +6769,57 @@ impl Service {
|
||||
SHORT_RECONCILE_TIMEOUT,
|
||||
&self.cancel,
|
||||
)
|
||||
.await;
|
||||
.await
|
||||
{
|
||||
Some(Err(e)) => {
|
||||
tracing::info!(
|
||||
"Failed to upload heatmap from {attached_node} for {tenant_shard_id}: {e}"
|
||||
);
|
||||
}
|
||||
None => {
|
||||
tracing::info!(
|
||||
"Cancelled while uploading heatmap from {attached_node} for {tenant_shard_id}"
|
||||
);
|
||||
}
|
||||
Some(Ok(_)) => {
|
||||
tracing::info!(
|
||||
"Successfully uploaded heatmap from {attached_node} for {tenant_shard_id}"
|
||||
);
|
||||
}
|
||||
}
|
||||
|
||||
secondary_node
|
||||
.with_client_retries(
|
||||
|client| async move {
|
||||
client
|
||||
.tenant_secondary_download(tenant_shard_id, Some(Duration::from_secs(1)))
|
||||
.await
|
||||
},
|
||||
&self.config.jwt_token,
|
||||
3,
|
||||
10,
|
||||
SHORT_RECONCILE_TIMEOUT,
|
||||
&self.cancel,
|
||||
)
|
||||
.await;
|
||||
for secondary_node in secondaries {
|
||||
match secondary_node
|
||||
.with_client_retries(
|
||||
|client| async move {
|
||||
client
|
||||
.tenant_secondary_download(
|
||||
tenant_shard_id,
|
||||
Some(Duration::from_secs(1)),
|
||||
)
|
||||
.await
|
||||
},
|
||||
&self.config.jwt_token,
|
||||
3,
|
||||
10,
|
||||
SHORT_RECONCILE_TIMEOUT,
|
||||
&self.cancel,
|
||||
)
|
||||
.await
|
||||
{
|
||||
Some(Err(e)) => {
|
||||
tracing::info!(
|
||||
"Failed to download heatmap from {secondary_node} for {tenant_shard_id}: {e}"
|
||||
);
|
||||
}
|
||||
None => {
|
||||
tracing::info!("Cancelled while downloading heatmap from {secondary_node} for {tenant_shard_id}");
|
||||
}
|
||||
Some(Ok(progress)) => {
|
||||
tracing::info!("Successfully downloaded heatmap from {secondary_node} for {tenant_shard_id}: {progress:?}");
|
||||
}
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
/// Look for shards which are oversized and in need of splitting
|
||||
@@ -7144,9 +7255,15 @@ impl Service {
|
||||
fn fill_node_plan(&self, node_id: NodeId) -> Vec<TenantShardId> {
|
||||
let mut locked = self.inner.write().unwrap();
|
||||
let fill_requirement = locked.scheduler.compute_fill_requirement(node_id);
|
||||
let (nodes, tenants, _scheduler) = locked.parts_mut();
|
||||
|
||||
let mut tids_by_node = locked
|
||||
.tenants
|
||||
let node_az = nodes
|
||||
.get(&node_id)
|
||||
.expect("Node must exist")
|
||||
.get_availability_zone_id()
|
||||
.clone();
|
||||
|
||||
let mut tids_by_node = tenants
|
||||
.iter_mut()
|
||||
.filter_map(|(tid, tenant_shard)| {
|
||||
if !matches!(
|
||||
@@ -7159,6 +7276,25 @@ impl Service {
|
||||
return None;
|
||||
}
|
||||
|
||||
// AZ check: when filling nodes after a restart, our intent is to move _back_ the
|
||||
// shards which belong on this node, not to promote shards whose scheduling preference
|
||||
// would be on their currently attached node. So will avoid promoting shards whose
|
||||
// home AZ doesn't match the AZ of the node we're filling.
|
||||
match tenant_shard.preferred_az() {
|
||||
None => {
|
||||
// Shard doesn't have an AZ preference: it is eligible to be moved.
|
||||
}
|
||||
Some(az) if az == &node_az => {
|
||||
// This shard's home AZ is equal to the node we're filling: it is
|
||||
// elegible to be moved: fall through;
|
||||
}
|
||||
Some(_) => {
|
||||
// This shard's home AZ is somewhere other than the node we're filling:
|
||||
// do not include it in the fill plan.
|
||||
return None;
|
||||
}
|
||||
}
|
||||
|
||||
if tenant_shard.intent.get_secondary().contains(&node_id) {
|
||||
if let Some(primary) = tenant_shard.intent.get_attached() {
|
||||
return Some((*primary, *tid));
|
||||
|
||||
@@ -43,9 +43,6 @@ impl<'a> Iterator for TenantShardContextIterator<'a> {
|
||||
|
||||
// Accumulate the schedule context for all the shards in a tenant
|
||||
schedule_context.avoid(&shard.intent.all_pageservers());
|
||||
if let Some(attached) = shard.intent.get_attached() {
|
||||
schedule_context.push_attached(*attached);
|
||||
}
|
||||
tenant_shards.push(shard);
|
||||
|
||||
if tenant_shard_id.shard_number.0 == tenant_shard_id.shard_count.count() - 1 {
|
||||
@@ -115,7 +112,7 @@ mod tests {
|
||||
assert_eq!(tenant_id, t1_id);
|
||||
assert_eq!(shards[0].tenant_shard_id.shard_number, ShardNumber(0));
|
||||
assert_eq!(shards.len(), 1);
|
||||
assert_eq!(context.attach_count(), 1);
|
||||
assert_eq!(context.location_count(), 2);
|
||||
|
||||
let (tenant_id, context, shards) = iter.next().unwrap();
|
||||
assert_eq!(tenant_id, t2_id);
|
||||
@@ -124,13 +121,13 @@ mod tests {
|
||||
assert_eq!(shards[2].tenant_shard_id.shard_number, ShardNumber(2));
|
||||
assert_eq!(shards[3].tenant_shard_id.shard_number, ShardNumber(3));
|
||||
assert_eq!(shards.len(), 4);
|
||||
assert_eq!(context.attach_count(), 4);
|
||||
assert_eq!(context.location_count(), 8);
|
||||
|
||||
let (tenant_id, context, shards) = iter.next().unwrap();
|
||||
assert_eq!(tenant_id, t3_id);
|
||||
assert_eq!(shards[0].tenant_shard_id.shard_number, ShardNumber(0));
|
||||
assert_eq!(shards.len(), 1);
|
||||
assert_eq!(context.attach_count(), 1);
|
||||
assert_eq!(context.location_count(), 2);
|
||||
|
||||
for shard in tenants.values_mut() {
|
||||
shard.intent.clear(&mut scheduler);
|
||||
|
||||
File diff suppressed because it is too large