* refactor: rename WalOptionsAllocator to WalProvider

  The name "WalOptionsAllocator" was misleading because:
  - For the RaftEngine variant, it doesn't actually allocate anything
  - The actual allocation logic lives in KafkaTopicPool

  "WalProvider" better describes its role as providing WAL options based on the configured WAL backend (RaftEngine or Kafka).

  Changes:
  - Rename `WalOptionsAllocator` to `WalProvider`
  - Rename `WalOptionsAllocatorRef` to `WalProviderRef`
  - Rename `build_wal_options_allocator` to `build_wal_provider`
  - Rename module `wal_options_allocator` to `wal_provider`
  - Rename error types: `BuildWalOptionsAllocator` -> `BuildWalProvider`, `StartWalOptionsAllocator` -> `StartWalProvider`

  Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor(meta): extract allocator traits from TableMetadataAllocator

  Refactor TableMetadataAllocator to use trait-based dependency injection for better testability and separation of concerns.

  Changes:
  - Add `ResourceIdAllocator` trait to abstract ID allocation
  - Add `WalOptionsAllocator` trait to abstract WAL options allocation
  - Implement traits for `Sequence` and `WalProvider`
  - Remove duplicate `allocate_region_wal_options` function
  - Rename `table_id_sequence` to `table_id_allocator` for consistency
  - Rename `TableIdSequenceHandler` to `TableIdAllocatorHandler`

  Signed-off-by: WenyXu <wenymedia@gmail.com>

* feat(meta): add max_region_number tracking to PhysicalTableRouteValue

  Add `max_region_number` field to track the highest region number ever allocated for a table. This value only increases when regions are added and never decreases when regions are dropped, ensuring unique region numbers across the table's lifetime.

  Changes:
  - Add `max_region_number` field to `PhysicalTableRouteValue`
  - Implement custom `Deserialize` for backward compatibility
  - Update `update_region_routes` to maintain max_region_number
  - Calculate max_region_number from region_routes in `new()`

  Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: extract TableRouteAllocator trait from TableMetadataAllocator

  - Add TableRouteAllocator trait for abstracting region route allocation
  - Implement blanket impl for all PeerAllocator types
  - Add PeerAllocator impl for Arc<T> to support trait object delegation
  - Update TableMetadataAllocator to use TableRouteAllocatorRef

  Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: rename TableRouteAllocator to RegionRoutesAllocator

  - Rename table_route.rs to region_routes.rs
  - Rename TableRouteAllocator trait to RegionRoutesAllocator
  - Rename wal_option.rs to wal_options.rs for consistency
  - Update TableMetadataAllocator to use new naming

  Signed-off-by: WenyXu <wenymedia@gmail.com>
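As a rough sketch of the trait-based dependency injection described in the commits above, assuming heavily simplified types: the traits here are synchronous and infallible, and `SequentialIds` and `RaftEngineProvider` are illustrative stand-ins, not actual GreptimeDB types (the real traits operate on the concrete metadata structures such as `Sequence` and `WalProvider`).

```rust
use std::collections::HashMap;
use std::sync::Arc;

// Simplified, hypothetical stand-ins for the real metadata types.
type TableId = u32;
type RegionNumber = u32;
type WalOptions = String;

/// Abstracts table-id allocation so tests can inject a deterministic source.
trait ResourceIdAllocator: Send + Sync {
    fn next_id(&mut self) -> TableId;
}

/// Abstracts per-region WAL option allocation; a RaftEngine-backed provider
/// can return defaults while a Kafka-backed one would assign topics.
trait WalOptionsAllocator: Send + Sync {
    fn allocate(&self, regions: &[RegionNumber]) -> HashMap<RegionNumber, WalOptions>;
}

/// A trivial in-memory ID source, handy for unit tests.
struct SequentialIds {
    next: TableId,
}

impl ResourceIdAllocator for SequentialIds {
    fn next_id(&mut self) -> TableId {
        let id = self.next;
        self.next += 1;
        id
    }
}

/// RaftEngine-style provider: every region gets empty (default) WAL options.
struct RaftEngineProvider;

impl WalOptionsAllocator for RaftEngineProvider {
    fn allocate(&self, regions: &[RegionNumber]) -> HashMap<RegionNumber, WalOptions> {
        regions.iter().map(|r| (*r, WalOptions::new())).collect()
    }
}

fn main() {
    // The metadata allocator receives its collaborators as trait objects,
    // so tests can swap in deterministic fakes.
    let mut table_ids = SequentialIds { next: 1024 };
    let wal: Arc<dyn WalOptionsAllocator> = Arc::new(RaftEngineProvider);

    let table_id = table_ids.next_id();
    let wal_options = wal.allocate(&[0, 1, 2]);
    println!("table {table_id}: wal options {wal_options:?}");
}
```

Injecting the collaborators behind traits is what the commits cite as the testability win: unit tests can supply deterministic fakes without a real sequence store or Kafka cluster behind them.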
* feat(meta-srv): implement region allocation for repartition procedure

  This commit implements the region allocation phase of the repartition procedure, which handles allocating new regions when a table needs to be split into more partitions.

  Key changes:
  - Refactor `RegionRoutesAllocator::allocate` to accept `(region_number, partition_expr)` tuples for more flexible region number assignment
  - Simplify `AllocationPlanEntry` by removing `regions_to_allocate` and `regions_to_deallocate` fields (now derived from source/target counts)
  - Add `convert_allocation_plan_to_repartition_plan` function to handle allocation, equal, and deallocation cases
  - Fix `RepartitionPlanEntry::allocate_regions()` to return target regions (was incorrectly returning source regions)
  - Implement complete `AllocateRegion` state with:
    - Region route allocation via `RegionRoutesAllocator`
    - WAL options allocation via `WalOptionsAllocator`
    - Operating region registration for concurrency control
    - Region creation on datanodes via `CreateTableExecutor`
    - Table route metadata update
  - Add `TableRouteValue::max_region_number()` helper method
  - Add comprehensive unit tests for plan conversion and allocation logic

  Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

  Signed-off-by: WenyXu <wenymedia@gmail.com>

* chore: apply suggestions from CR

  Signed-off-by: WenyXu <wenymedia@gmail.com>

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
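A minimal, hypothetical illustration of the `(region_number, partition_expr)` allocation requests and the `max_region_number` invariant described above; the function name and signature are assumptions for the sketch, not the actual GreptimeDB API.

```rust
/// Hypothetical sketch: region numbering for a repartition continues from the
/// table's `max_region_number`, so numbers freed by dropped regions are never
/// reused.
type RegionNumber = u32;

/// Build `(region_number, partition_expr)` requests for newly added partitions.
fn new_region_requests(
    max_region_number: RegionNumber,
    partition_exprs: &[&str],
) -> Vec<(RegionNumber, String)> {
    partition_exprs
        .iter()
        .enumerate()
        .map(|(i, expr)| (max_region_number + 1 + i as RegionNumber, expr.to_string()))
        .collect()
}

fn main() {
    // Suppose the table has used region numbers 0..=3 at some point, even if
    // some of them were later dropped; two new partitions are being added.
    let requests = new_region_requests(3, &["host < 'h5'", "host >= 'h5'"]);
    assert_eq!(
        requests,
        vec![(4, "host < 'h5'".to_string()), (5, "host >= 'h5'".to_string())]
    );
    println!("{requests:?}");
}
```

Because numbering always continues past `max_region_number`, region numbers stay unique over the table's lifetime even after earlier regions are dropped, which is exactly the guarantee the `PhysicalTableRouteValue` change is meant to provide.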
Real-Time & Cloud-Native Observability Database
for metrics, logs, and traces
Delivers sub-second querying at PB scale and exceptional cost efficiency from edge to cloud.
- Introduction
- ⭐ Key Features
- Quick Comparison
- Architecture
- Try GreptimeDB
- Getting Started
- Build From Source
- Tools & Extensions
- Project Status
- Community
- License
- Commercial Support
- Contributing
- Acknowledgement
Introduction
GreptimeDB is an open-source, cloud-native database that unifies metrics, logs, and traces, enabling real-time observability at any scale — across edge, cloud, and hybrid environments.
Features
| Feature | Description |
|---|---|
| All-in-One Observability | OpenTelemetry-native platform unifying metrics, logs, and traces. Query via SQL, PromQL, and Flow. |
| High Performance | Written in Rust with rich indexing (inverted, fulltext, skipping, vector), delivering sub-second responses at PB scale. |
| Cost Efficiency | 50x lower operational and storage costs with compute-storage separation and native object storage (S3, Azure Blob, etc.). |
| Cloud-Native & Scalable | Purpose-built for Kubernetes with unlimited cross-cloud scaling, handling hundreds of thousands of concurrent requests. |
| Developer-Friendly | SQL/PromQL interfaces, built-in web dashboard, REST API, MySQL/PostgreSQL protocol compatibility, and native OpenTelemetry support. |
| Flexible Deployment | Deploy anywhere from ARM-based edge devices (including Android) to cloud, with unified APIs and efficient data sync. |
✅ Perfect for:
- Unified observability stack replacing Prometheus + Loki + Tempo
- Large-scale metrics with high cardinality (millions to billions of time series)
- Large-scale observability platform requiring cost efficiency and scalability
- IoT and edge computing with resource and bandwidth constraints
Learn more in Why GreptimeDB and Observability 2.0 and the Database for It.
Quick Comparison
| Feature | GreptimeDB | Traditional TSDB | Log Stores |
|---|---|---|---|
| Data Types | Metrics, Logs, Traces | Metrics only | Logs only |
| Query Language | SQL, PromQL | Custom/PromQL | Custom/DSL |
| Deployment | Edge + Cloud | Cloud/On-prem | Mostly central |
| Indexing & Performance | PB-Scale, Sub-second | Varies | Varies |
| Integration | REST API, SQL, Common protocols | Varies | Varies |
Performance: read more benchmark reports.
Architecture
GreptimeDB can run in two modes:
- Standalone Mode - Single binary for development and small deployments
- Distributed Mode - Separate components for production scale:
  - Frontend: Query processing and protocol handling
  - Datanode: Data storage and retrieval
  - Metasrv: Metadata management and coordination
Read the architecture document, or explore DeepWiki for an in-depth look at GreptimeDB.

Try GreptimeDB
```shell
docker pull greptime/greptimedb

docker run -p 127.0.0.1:4000-4003:4000-4003 \
  -v "$(pwd)/greptimedb_data:/greptimedb_data" \
  --name greptime --rm \
  greptime/greptimedb:latest standalone start \
  --http-addr 0.0.0.0:4000 \
  --rpc-bind-addr 0.0.0.0:4001 \
  --mysql-addr 0.0.0.0:4002 \
  --postgres-addr 0.0.0.0:4003
```
Dashboard: http://localhost:4000/dashboard
Read more in the full Install Guide.
Troubleshooting:
- Cannot connect to the database? Ensure that ports `4000`, `4001`, `4002`, and `4003` are not blocked by a firewall or used by other services.
- Failed to start? Check the container logs with `docker logs greptime` for further details.
Getting Started
Build From Source
Prerequisites:
- Rust toolchain (nightly)
- Protobuf compiler (>= 3.15)
- C/C++ build essentials, including `gcc`/`g++`/`autoconf` and the glibc library (e.g. `libc6-dev` on Ubuntu and `glibc-devel` on Fedora)
- Python toolchain (optional): required only by some test scripts
Build and Run:
```shell
make
cargo run -- standalone start
```
Tools & Extensions
- Kubernetes: GreptimeDB Operator
- Helm Charts: Greptime Helm Charts
- Dashboard: Web UI
- gRPC Ingester: Go, Java, C++, Erlang, Rust
- Grafana Data Source: GreptimeDB Grafana data source plugin
- Grafana Dashboard: Official Dashboard for monitoring
Project Status
Status: Beta, marching toward v1.0 GA on January 10, 2026!
- Deployed in production by open-source projects and commercial users
- Stable, actively maintained, with regular releases (version info)
- Suitable for evaluation and pilot deployments
GreptimeDB v1.0 represents a major milestone toward maturity — marking stable APIs, production readiness, and proven performance.
Roadmap: Beta1 (Nov 10) → Beta2 (Nov 24) → RC1 (Dec 8) → GA (Jan 10, 2026). Please read the v1.0 highlights and release plan for details.
For production use, we recommend using the latest stable release.
If you find this project useful, a ⭐ would mean a lot to us!

Community
We invite you to engage and contribute!
License
GreptimeDB is licensed under the Apache License 2.0.
Commercial Support
Running GreptimeDB in your organization? We offer enterprise add-ons, services, training, and consulting. Contact us for details.
Contributing
- Read our Contribution Guidelines.
- Explore Internal Concepts and DeepWiki.
- Pick up a good first issue and join the #contributors Slack channel.
Acknowledgement
Special thanks to all contributors! See AUTHORS.md.
GreptimeDB uses:
- Apache Arrow™ (memory model)
- Apache Parquet™ (file storage)
- Apache DataFusion™ (query engine)
- Apache OpenDAL™ (data access abstraction)
