Real-Time & Cloud-Native Observability Database for metrics, logs, and traces
Delivers sub-second querying at PB scale and exceptional cost efficiency from edge to cloud.
- Introduction
- ⭐ Key Features
- Quick Comparison
- Architecture
- Try GreptimeDB
- Getting Started
- Build From Source
- Tools & Extensions
- Project Status
- Community
- License
- Commercial Support
- Contributing
- Acknowledgement
Introduction
GreptimeDB is an open-source, cloud-native database that unifies metrics, logs, and traces, enabling real-time observability at any scale — across edge, cloud, and hybrid environments.
⭐ Key Features
| Feature | Description |
|---|---|
| All-in-One Observability | OpenTelemetry-native platform unifying metrics, logs, and traces. Query via SQL, PromQL, and Flow. |
| High Performance | Written in Rust with rich indexing (inverted, fulltext, skipping, vector), delivering sub-second responses at PB scale. |
| Cost Efficiency | 50x lower operational and storage costs with compute-storage separation and native object storage (S3, Azure Blob, etc.). |
| Cloud-Native & Scalable | Purpose-built for Kubernetes with unlimited cross-cloud scaling, handling hundreds of thousands of concurrent requests. |
| Developer-Friendly | SQL/PromQL interfaces, built-in web dashboard, REST API, MySQL/PostgreSQL protocol compatibility, and native OpenTelemetry support. |
| Flexible Deployment | Deploy anywhere from ARM-based edge devices (including Android) to cloud, with unified APIs and efficient data sync. |
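As a quick illustration of the interfaces listed above, the sketch below issues the same trivial query over the HTTP REST API, the MySQL protocol, and the PostgreSQL protocol. It is a sketch only: it assumes a local instance is already running on the default ports used in the Try GreptimeDB section below, that the `mysql` and `psql` clients are installed, and that the default `public` database is in use; the endpoint path and database name are assumptions based on GreptimeDB's defaults.

```bash
# Sketch only: assumes a local GreptimeDB on the default ports (4000/4002/4003)
# and the default `public` database; see "Try GreptimeDB" below to start one.
curl -X POST "http://localhost:4000/v1/sql?db=public" -d "sql=SELECT 1"   # HTTP REST API
mysql -h 127.0.0.1 -P 4002 -e "SELECT 1"                                  # MySQL wire protocol
psql -h 127.0.0.1 -p 4003 -d public -c "SELECT 1"                         # PostgreSQL wire protocol
```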
✅ Perfect for:
- Unified observability stack replacing Prometheus + Loki + Tempo
- Large-scale metrics with high cardinality (millions to billions of time series)
- Large-scale observability platform requiring cost efficiency and scalability
- IoT and edge computing with resource and bandwidth constraints
Learn more in Why GreptimeDB and Observability 2.0 and the Database for It.
Quick Comparison
| Feature | GreptimeDB | Traditional TSDB | Log Stores |
|---|---|---|---|
| Data Types | Metrics, Logs, Traces | Metrics only | Logs only |
| Query Language | SQL, PromQL | Custom/PromQL | Custom/DSL |
| Deployment | Edge + Cloud | Cloud/On-prem | Mostly central |
| Indexing & Performance | PB-Scale, Sub-second | Varies | Varies |
| Integration | REST API, SQL, Common protocols | Varies | Varies |
Performance: see the benchmark reports for detailed results.
Architecture
GreptimeDB can run in two modes:
- Standalone Mode - Single binary for development and small deployments
- Distributed Mode - Separate components for production scale (a minimal launch sketch follows this list):
  - Frontend: Query processing and protocol handling
  - Datanode: Data storage and retrieval
  - Metasrv: Metadata management and coordination
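For orientation, here is a minimal sketch of how the three components map onto subcommands of the single `greptime` binary. Required flags (bind addresses, node IDs, the metadata store) are deliberately omitted because they vary between versions; treat this as the shape of a deployment rather than a working recipe, and follow the deployment docs for a complete setup.

```bash
# Illustrative only: each distributed component runs as a separate process
# started from the same binary. Flags for addresses, node IDs, and the
# metadata backend are omitted; consult the deployment docs for real values.
greptime metasrv start    # metadata management and coordination
greptime datanode start   # data storage and retrieval
greptime frontend start   # query processing and protocol handling
```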
Read the architecture document. DeepWiki provides an in-depth look at GreptimeDB:

Try GreptimeDB
```bash
docker pull greptime/greptimedb

docker run -p 127.0.0.1:4000-4003:4000-4003 \
  -v "$(pwd)/greptimedb_data:/greptimedb_data" \
  --name greptime --rm \
  greptime/greptimedb:latest standalone start \
  --http-addr 0.0.0.0:4000 \
  --rpc-bind-addr 0.0.0.0:4001 \
  --mysql-addr 0.0.0.0:4002 \
  --postgres-addr 0.0.0.0:4003
```
Dashboard: http://localhost:4000/dashboard
Read more in the full Install Guide.
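Once the container is up, a quick smoke test over the MySQL protocol looks like the sketch below. It assumes the `mysql` CLI is installed locally; the `monitor` table and its schema are purely illustrative.

```bash
# Smoke test over the MySQL protocol on port 4002 (exposed by the docker run above).
# Assumes the `mysql` CLI is installed; the `monitor` table is illustrative only.
mysql -h 127.0.0.1 -P 4002 -e "
  CREATE TABLE IF NOT EXISTS monitor (
    host STRING,
    ts TIMESTAMP TIME INDEX,
    cpu DOUBLE,
    PRIMARY KEY(host)
  );
  INSERT INTO monitor (host, ts, cpu) VALUES ('host1', '2024-01-01 00:00:00', 0.5);
  SELECT * FROM monitor;"
```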
Troubleshooting:
- Cannot connect to the database? Ensure that ports 4000, 4001, 4002, and 4003 are not blocked by a firewall or used by other services.
- Failed to start? Check the container logs with `docker logs greptime` for further details.
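If the ports look free but the instance still seems unreachable, checking the HTTP health endpoint is a quick way to confirm the server itself is up; the `/health` path and port 4000 below are assumed defaults from the docker run above.

```bash
# Liveness check against the HTTP server started above; an HTTP 200 response
# means the instance is running. The /health path and port 4000 are assumptions
# based on the default quick-start configuration.
curl -i http://127.0.0.1:4000/health
```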
Getting Started
Build From Source
Prerequisites:
- Rust toolchain (nightly)
- Protobuf compiler (>= 3.15)
- C/C++ build essentials, including `gcc`/`g++`/`autoconf` and the glibc library (e.g. `libc6-dev` on Ubuntu, `glibc-devel` on Fedora)
- Python toolchain (optional): required only for some test scripts
Build and Run:
```bash
make
cargo run -- standalone start
```
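For an optimized binary instead of a debug run, a release build looks like the sketch below; the output path follows the standard Cargo layout and assumes the project's binary is named `greptime`.

```bash
# Optional: build an optimized release binary (this can take a while).
cargo build --release
# Start the standalone server from the release artifact (standard Cargo output path).
./target/release/greptime standalone start
```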
Tools & Extensions
- Kubernetes: GreptimeDB Operator
- Helm Charts: Greptime Helm Charts
- Dashboard: Web UI
- gRPC Ingester: Go, Java, C++, Erlang, Rust
- Grafana Data Source: GreptimeDB Grafana data source plugin
- Grafana Dashboard: Official Dashboard for monitoring
Project Status
Status: Beta, marching toward v1.0 GA (planned for January 10, 2026)!
- Deployed in production by open-source projects and commercial users
- Stable, actively maintained, with regular releases (version info)
- Suitable for evaluation and pilot deployments
GreptimeDB v1.0 represents a major milestone toward maturity — marking stable APIs, production readiness, and proven performance.
Roadmap: Beta1 (Nov 10) → Beta2 (Nov 24) → RC1 (Dec 8) → GA (Jan 10, 2026); see the v1.0 highlights and release plan for details.
For production use, we recommend using the latest stable release.
If you find this project useful, a ⭐ would mean a lot to us!

Community
We invite you to engage and contribute!
License
GreptimeDB is licensed under the Apache License 2.0.
Commercial Support
Running GreptimeDB in your organization? We offer enterprise add-ons, services, training, and consulting. Contact us for details.
Contributing
- Read our Contribution Guidelines.
- Explore Internal Concepts and DeepWiki.
- Pick up a good first issue and join the #contributors Slack channel.
Acknowledgement
Special thanks to all contributors! See AUTHORS.md.
GreptimeDB builds on these Apache projects:
- Apache Arrow™ (memory model)
- Apache Parquet™ (file storage)
- Apache DataFusion™ (query engine)
- Apache OpenDAL™ (data access abstraction)
