GreptimeDB
GreptimeDB: the next-generation hybrid timeseries/analytics processing database in the cloud.
Getting Started
Prerequisites
To compile GreptimeDB from source, you'll need the following:
- Rust
- Protobuf
Rust
The easiest way to install Rust is to use rustup, which will check our rust-toolchain file and install the correct Rust version for you.
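For instance, on macOS or Linux you can install rustup with its official installer script (shown as a sketch; see https://rustup.rs for the current instructions):
# Install rustup, which manages Rust toolchains.
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh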
Protobuf
protoc is required for compiling .proto files. protobuf is available from
major package managers on macOS and Linux distributions. You can find
installation instructions here.
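For example, on macOS with Homebrew or on Debian/Ubuntu you can typically install it like this (package names may vary by distribution):
# macOS
brew install protobuf
# Debian/Ubuntu
sudo apt install protobuf-compiler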
Build the Docker Image
docker build --network host -f docker/Dockerfile -t greptimedb .
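If the build succeeds, the image should show up in your local image list (a quick sanity check):
docker image ls greptimedb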
Usage
Start Datanode
// Start datanode with default options.
cargo run -- datanode start
OR
// Start datanode with `http-addr` option.
cargo run -- datanode start --http-addr=0.0.0.0:9999
OR
// Start datanode with `log-dir` and `log-level` options.
cargo run -- --log-dir=logs --log-level=debug datanode start
Start datanode with a config file:
cargo run -- --log-dir=logs --log-level=debug datanode start -c ./config/datanode.example.toml
Start datanode by running a docker container:
# Maps the HTTP (3000), gRPC (3001) and MySQL (3306) ports to the host.
docker run -p 3000:3000 \
  -p 3001:3001 \
  -p 3306:3306 \
  greptimedb
Start Frontend
Frontend needs to connect to Datanode, so Datanode must be started first!
// Connects to local Datanode at its default GRPC port: 3001
// Start Frontend with default options.
cargo run -- frontend start
OR
// Start Frontend with `mysql-addr` option.
cargo run -- frontend start --mysql-addr=0.0.0.0:9999
OR
// Start Frontend with `log-dir` and `log-level` options.
cargo run -- --log-dir=logs --log-level=debug frontend start
Start Frontend with a config file:
cargo run -- --log-dir=logs --log-level=debug frontend start -c ./config/frontend.example.toml
SQL Operations
- Connect to the DB by mysql client:

  # The datanode listens on port 3306 by default.
  mysql -h 127.0.0.1 -P 3306
- Create table:

  CREATE TABLE monitor (
    host STRING,
    ts TIMESTAMP,
    cpu DOUBLE DEFAULT 0,
    memory DOUBLE,
    TIME INDEX (ts),
    PRIMARY KEY(host)
  ) ENGINE=mito WITH(regions=1);
- Insert data:

  INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host1', 66.6, 1024, 1660897955);
  INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host2', 77.7, 2048, 1660897956);
  INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host3', 88.8, 4096, 1660897957);
- Query data:

  mysql> SELECT * FROM monitor;
  +-------+------------+------+--------+
  | host  | ts         | cpu  | memory |
  +-------+------------+------+--------+
  | host1 | 1660897955 | 66.6 |   1024 |
  | host2 | 1660897956 | 77.7 |   2048 |
  | host3 | 1660897957 | 88.8 |   4096 |
  +-------+------------+------+--------+
  3 rows in set (0.01 sec)

You can delete your data by removing /tmp/greptimedb.
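Ordinary SQL predicates work as you would expect; a couple of illustrative follow-up queries (the filter values are assumptions matching the sample data above):
SELECT * FROM monitor WHERE host = 'host1';
SELECT * FROM monitor WHERE ts >= 1660897956;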
Contributing
Please refer to contribution guidelines for more information.