Real-Time & Cloud-Native Observability Database
for metrics, logs, and traces
Delivers sub-second querying at PB scale and exceptional cost efficiency from edge to cloud.
- Introduction
- ⭐ Key Features
- Quick Comparison
- Architecture
- Try GreptimeDB
- Getting Started
- Build From Source
- Tools & Extensions
- Project Status
- Community
- License
- Commercial Support
- Contributing
- Acknowledgement
Introduction
GreptimeDB is an open-source, cloud-native database purpose-built for the unified collection and analysis of observability data (metrics, logs, and traces). Whether you’re operating on the edge, in the cloud, or across hybrid environments, GreptimeDB empowers real-time insights at massive scale — all in one system.
Key Features
| Feature | Description |
|---|---|
| Unified Observability Data | Store metrics, logs, and traces as timestamped, contextual wide events. Query via SQL, PromQL, and streaming. |
| High Performance & Cost Effective | Written in Rust, with a distributed query engine, rich indexing, and optimized columnar storage, delivering sub-second responses at PB scale. |
| Cloud-Native Architecture | Designed for Kubernetes, with compute/storage separation, native object storage (AWS S3, Azure Blob, etc.) and seamless cross-cloud access. |
| Developer-Friendly | Access via SQL/PromQL interfaces, REST API, MySQL/PostgreSQL protocols, and popular ingestion protocols. |
| Flexible Deployment | Deploy anywhere: edge (including ARM/Android) or cloud, with unified APIs and efficient data sync. |
Learn more in Why GreptimeDB and Observability 2.0 and the Database for It.
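As a concrete example of the SQL interface mentioned above, here is a minimal sketch of issuing a statement over the HTTP API of a local standalone instance. It assumes the default HTTP port 4000 and that the /v1/sql endpoint with an sql form field matches your server version; check the HTTP API reference before relying on it.

# Minimal sketch: run a SQL statement against a local standalone instance over HTTP.
# Assumes the default HTTP port 4000 and the /v1/sql endpoint with an `sql` form field;
# verify both against the HTTP API reference for your server version.
curl -X POST http://localhost:4000/v1/sql \
  --data-urlencode "sql=SELECT 1"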
Quick Comparison
| Feature | GreptimeDB | Traditional TSDB | Log Stores |
|---|---|---|---|
| Data Types | Metrics, Logs, Traces | Metrics only | Logs only |
| Query Language | SQL, PromQL, Streaming | Custom/PromQL | Custom/DSL |
| Deployment | Edge + Cloud | Cloud/On-prem | Mostly central |
| Indexing & Performance | PB-Scale, Sub-second | Varies | Varies |
| Integration | REST, SQL, Common protocols | Varies | Varies |
Performance:
Read more benchmark reports.
Architecture
- Read the architecture document.
- DeepWiki provides an in-depth look at GreptimeDB:

Try GreptimeDB
docker pull greptime/greptimedb
docker run -p 127.0.0.1:4000-4003:4000-4003 \
-v "$(pwd)/greptimedb_data:/greptimedb_data" \
--name greptime --rm \
greptime/greptimedb:latest standalone start \
--http-addr 0.0.0.0:4000 \
--rpc-bind-addr 0.0.0.0:4001 \
--mysql-addr 0.0.0.0:4002 \
--postgres-addr 0.0.0.0:4003
Dashboard: http://localhost:4000/dashboard
See the Full Install Guide for other installation methods.
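Once the container is running, the MySQL port mapped above (4002) can be exercised with a standard MySQL client. The following is a minimal sketch rather than an official quick start: the monitor table and its columns are illustrative, and the exact DDL accepted (for example the TIME INDEX clause) should be checked against the docs for your version.

# Connect over the MySQL protocol exposed by the container above (port 4002).
# The `monitor` table and its columns are illustrative only; adjust to your own schema.
mysql -h 127.0.0.1 -P 4002 -e "
  CREATE TABLE monitor (host STRING, ts TIMESTAMP TIME INDEX, cpu DOUBLE, PRIMARY KEY(host));
  INSERT INTO monitor VALUES ('host1', '2024-01-01 00:00:00', 66.6);
  SELECT * FROM monitor;
"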
Troubleshooting:
- Cannot connect to the database? Ensure that ports 4000, 4001, 4002, and 4003 are not blocked by a firewall or used by other services.
- Failed to start? Check the container logs with docker logs greptime for further details.
Getting Started
Build From Source
Prerequisites:
- Rust toolchain (nightly)
- Protobuf compiler (>= 3.15)
- C/C++ build essentials, including gcc/g++/autoconf and the glibc library (e.g. libc6-dev on Ubuntu and glibc-devel on Fedora); an example Ubuntu setup is sketched below
- Python toolchain (optional): required only if running some test scripts
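For reference, one possible Ubuntu setup covering the prerequisites above is sketched here; the package names follow the list above, but the distro-packaged protobuf compiler may predate 3.15 on older releases, in which case install protoc separately.

# Example prerequisite setup on Ubuntu; verify package versions against the requirements above.
sudo apt-get update
sudo apt-get install -y build-essential autoconf libc6-dev protobuf-compiler
# The packaged protobuf-compiler may be older than 3.15 on some releases; if so, install
# protoc from the official protobuf releases instead.
# Rust nightly toolchain (assumes rustup is already installed).
rustup toolchain install nightly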
Build and Run:
make
cargo run -- standalone start
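To sanity-check a from-source build, you can probe the standalone server's HTTP port once it is running. This is a small sketch that assumes the default HTTP address 127.0.0.1:4000 and a /health endpoint; confirm both against the docs for your version.

# With `cargo run -- standalone start` running in another terminal, probe the HTTP port.
# Assumes the default HTTP address 127.0.0.1:4000 and a /health endpoint.
curl -i http://127.0.0.1:4000/health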
Tools & Extensions
- Kubernetes: GreptimeDB Operator
- Helm Charts: Greptime Helm Charts
- Dashboard: Web UI
- SDKs/Ingester: Go, Java, C++, Erlang, Rust, JS
- Grafana: Official Dashboard
Project Status
Status: Beta. GA (v1.0) is targeted for mid-2025.
- Being used in production by early adopters
- Stable, actively maintained, with regular releases (version info)
- Suitable for evaluation and pilot deployments
For production use, we recommend using the latest stable release.
If you find this project useful, a ⭐ would mean a lot to us!

Community
We invite you to engage and contribute!
License
GreptimeDB is licensed under the Apache License 2.0.
Commercial Support
Running GreptimeDB in your organization? We offer enterprise add-ons, services, training, and consulting. Contact us for details.
Contributing
- Read our Contribution Guidelines.
- Explore Internal Concepts and DeepWiki.
- Pick up a good first issue and join the #contributors Slack channel.
Acknowledgement
Special thanks to all contributors! See AUTHORS.md.
- Uses Apache Arrow™ (memory model)
- Apache Parquet™ (file storage)
- Apache Arrow DataFusion™ (query engine)
- Apache OpenDAL™ (data access abstraction)
