Real-Time & Cloud-Native Observability Database
for metrics, logs, and traces
Delivers sub-second querying at PB scale and exceptional cost efficiency from edge to cloud.
- Introduction
- ⭐ Key Features
- Quick Comparison
- Architecture
- Try GreptimeDB
- Getting Started
- Build From Source
- Tools & Extensions
- Project Status
- Community
- License
- Commercial Support
- Contributing
- Acknowledgement
Introduction
GreptimeDB is an open-source, cloud-native database purpose-built for the unified collection and analysis of observability data (metrics, logs, and traces). Whether you’re operating on the edge, in the cloud, or across hybrid environments, GreptimeDB delivers real-time insights at massive scale — all in one system.
Key Features
| Feature | Description |
|---|---|
| Unified Observability Data | Store metrics, logs, and traces as timestamped, contextual wide events. Query via SQL, PromQL, and streaming. |
| High Performance & Cost Effective | Written in Rust, with a distributed query engine, rich indexing, and optimized columnar storage, delivering sub-second responses at PB scale. |
| Cloud-Native Architecture | Designed for Kubernetes, with compute/storage separation, native object storage (AWS S3, Azure Blob, etc.) and seamless cross-cloud access. |
| Developer-Friendly | Access via SQL/PromQL interfaces, REST API, MySQL/PostgreSQL protocols, and popular ingestion protocols. |
| Flexible Deployment | Deploy anywhere: edge (including ARM/Android) or cloud, with unified APIs and efficient data sync. |
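To make the developer-facing interfaces concrete, here is a minimal sketch of querying a local standalone instance over the MySQL wire protocol (default port 4002). The `monitor` table and its columns are illustrative placeholders, not part of any default schema.

```shell
# Connect via any MySQL-compatible client and run plain SQL.
# Hypothetical table: monitor(host STRING, cpu DOUBLE, ts TIMESTAMP).
mysql -h 127.0.0.1 -P 4002 -e "
  SELECT host, avg(cpu) AS avg_cpu
  FROM monitor
  WHERE ts >= now() - INTERVAL '5 minutes'
  GROUP BY host;"
```

The same data remains queryable through PromQL and the HTTP API, so one ingestion path can serve dashboards, alerts, and ad-hoc analysis.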
Learn more in Why GreptimeDB and Observability 2.0 and the Database for It.
Quick Comparison
| Feature | GreptimeDB | Traditional TSDB | Log Stores |
|---|---|---|---|
| Data Types | Metrics, Logs, Traces | Metrics only | Logs only |
| Query Language | SQL, PromQL, Streaming | Custom/PromQL | Custom/DSL |
| Deployment | Edge + Cloud | Cloud/On-prem | Mostly central |
| Indexing & Performance | PB-Scale, Sub-second | Varies | Varies |
| Integration | REST, SQL, Common protocols | Varies | Varies |
Performance: read more in the benchmark reports.
Architecture
- Read the architecture document.
- DeepWiki provides an in-depth look at GreptimeDB.

Try GreptimeDB
1. Live Demo
Experience GreptimeDB directly in your browser.
2. GreptimeCloud
Start instantly with a free cluster.
3. Docker (Local Quickstart)
```shell
docker pull greptime/greptimedb

docker run -p 127.0.0.1:4000-4003:4000-4003 \
  -v "$(pwd)/greptimedb_data:/greptimedb_data" \
  --name greptime --rm \
  greptime/greptimedb:latest standalone start \
  --http-addr 0.0.0.0:4000 \
  --rpc-bind-addr 0.0.0.0:4001 \
  --mysql-addr 0.0.0.0:4002 \
  --postgres-addr 0.0.0.0:4003
```
Dashboard: http://localhost:4000/dashboard

For more deployment options, see the Full Install Guide.
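Once the container is running, you can create a table and write data right away. The following is a minimal sketch against the HTTP SQL API (`/v1/sql`) on port 4000; the `monitor` table is a placeholder, and exact syntax may vary by version, so consult the docs if a statement is rejected.

```shell
# Create a table: TIME INDEX marks the timestamp column,
# PRIMARY KEY declares the tag column(s).
curl -X POST "http://localhost:4000/v1/sql?db=public" \
  --data-urlencode "sql=CREATE TABLE monitor (host STRING, cpu DOUBLE, ts TIMESTAMP TIME INDEX, PRIMARY KEY(host))"

# Insert a sample row.
curl -X POST "http://localhost:4000/v1/sql?db=public" \
  --data-urlencode "sql=INSERT INTO monitor (host, cpu, ts) VALUES ('host1', 66.6, now())"

# Read it back.
curl -X POST "http://localhost:4000/v1/sql?db=public" \
  --data-urlencode "sql=SELECT * FROM monitor"
```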
Troubleshooting:
- Cannot connect to the database? Ensure that ports `4000`, `4001`, `4002`, and `4003` are not blocked by a firewall or used by other services.
- Failed to start? Check the container logs with `docker logs greptime` for further details.
Getting Started
Build From Source
Prerequisites:
- Rust toolchain (nightly)
- Protobuf compiler (>= 3.15)
- C/C++ build essentials, including `gcc`/`g++`/`autoconf` and the glibc library (e.g. `libc6-dev` on Ubuntu, `glibc-devel` on Fedora)
- Python toolchain (optional): required only by some test scripts
Build and Run:
```shell
make
cargo run -- standalone start
```
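A quick way to confirm the freshly built server is serving requests (assuming the default HTTP port 4000 and the standard health-check route):

```shell
# A 200 response indicates the server is up and healthy.
curl http://localhost:4000/health
```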
Tools & Extensions
- Kubernetes: GreptimeDB Operator
- Helm Charts: Greptime Helm Charts
- Dashboard: Web UI
- SDKs/Ingester: Go, Java, C++, Erlang, Rust, JS
- Grafana: Official Dashboard
Project Status
Status: Beta. GA (v1.0) is targeted for mid-2025.
- Being used in production by early adopters
- Stable, actively maintained, with regular releases (version info)
- Suitable for evaluation and pilot deployments
For production use, we recommend using the latest stable release.
If you find this project useful, a ⭐ would mean a lot to us!

Community
We invite you to engage and contribute!
License
GreptimeDB is licensed under the Apache License 2.0.
Commercial Support
Running GreptimeDB in your organization? We offer enterprise add-ons, services, training, and consulting. Contact us for details.
Contributing
- Read our Contribution Guidelines.
- Explore Internal Concepts and DeepWiki.
- Pick up a good first issue and join the #contributors Slack channel.
Acknowledgement
Special thanks to all contributors! See AUTHORS.md.
GreptimeDB builds on several excellent open-source projects:
- Apache Arrow™ (memory model)
- Apache Parquet™ (file storage)
- Apache Arrow DataFusion™ (query engine)
- Apache OpenDAL™ (data access abstraction)
