## Problem

A user can open many database connections at the same instant; these will all miss the cache and materialise as simultaneous requests to the control plane. #5705

## Summary of changes

I am using a `DashMap` (a sharded `RwLock<HashMap>`) mapping endpoints to semaphores to apply a limiter. If the limiter is enabled (permits > 0), the semaphore for the endpoint is retrieved and a permit is awaited before continuing with the call to the wake_compute endpoint.

### Important details

This `DashMap` would grow uncontrollably without maintenance. It is not a cache, so I don't think LRU-based reclamation makes sense. Instead, I've used DashMap's sharding functionality to lock a single shard and periodically clear out unused semaphores.

I ran a test in release mode using 128 tokio tasks across 12 threads, each pushing 1000 entries into the map per second, and clearing one shard every 2 seconds (a 64-second epoch with 32 shards). The endpoint names were sampled from a gamma distribution so that some overlap would occur, and each permit was held for 1 ms. The histogram of time to clear each shard settled between 256-512 µs without any variance in my testing. Holding a lock on one of the shards for under a millisecond does not concern me as blocking
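The PR's actual implementation lives in the `ApiLocks` type re-exported from `provider` below; the following is only a minimal sketch of the approach described above, assuming the `dashmap` and `tokio` crates. The names `WakeComputeLocks`, `acquire`, and `garbage_collect` are hypothetical, and the shard-by-shard clearing is approximated here with DashMap's public `retain` rather than the raw shard API used in the PR.

```rust
use std::sync::Arc;

use dashmap::DashMap;
use tokio::sync::{OwnedSemaphorePermit, Semaphore};

/// Hypothetical per-endpoint limiter for wake_compute calls.
/// The real type (`provider::ApiLocks`) may differ.
pub struct WakeComputeLocks {
    /// endpoint name -> semaphore bounding concurrent wake_compute requests
    entries: DashMap<String, Arc<Semaphore>>,
    /// permits per endpoint; 0 disables the limiter entirely
    permits: usize,
}

impl WakeComputeLocks {
    pub fn new(permits: usize) -> Self {
        Self {
            entries: DashMap::new(),
            permits,
        }
    }

    /// Acquire a permit for this endpoint before calling wake_compute.
    /// Returns `None` when the limiter is disabled (permits == 0).
    pub async fn acquire(&self, endpoint: &str) -> Option<OwnedSemaphorePermit> {
        if self.permits == 0 {
            return None;
        }
        let semaphore = Arc::clone(
            self.entries
                .entry(endpoint.to_owned())
                .or_insert_with(|| Arc::new(Semaphore::new(self.permits)))
                .value(),
        );
        // The semaphore is never closed, so acquisition cannot fail.
        Some(semaphore.acquire_owned().await.expect("semaphore closed"))
    }

    /// Drop semaphores that no task currently holds: a strong count of 1
    /// means the map owns the only reference, i.e. no permit is in flight.
    /// (The PR clears one DashMap shard at a time on a timer; this sketch
    /// uses the coarser public `retain` for brevity.)
    pub fn garbage_collect(&self) {
        self.entries.retain(|_, sem| Arc::strong_count(sem) > 1);
    }
}
```

In use, the proxy would await `acquire(endpoint)` and hold the returned permit for the duration of the wake_compute request, while a background task calls `garbage_collect` on an interval.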
```rust
//! Various stuff for dealing with the Neon Console.
//! Later we might move some API wrappers here.

/// Payloads used in the console's APIs.
pub mod messages;

/// Wrappers for console APIs and their mocks.
pub mod provider;

pub use provider::{errors, Api, AuthInfo, CachedNodeInfo, ConsoleReqExtra, NodeInfo};

/// Various cache-related types.
pub mod caches {
    pub use super::provider::{ApiCaches, NodeInfoCache};
}

/// Various concurrency lock-related types.
pub mod locks {
    pub use super::provider::ApiLocks;
}

/// Console's management API.
pub mod mgmt;
```