GreptimeDB
GreptimeDB: the next-generation hybrid timeseries/analytics processing database in the cloud.
Getting Started
Prerequisites
To compile GreptimeDB from source, you'll need the following:
- Rust
- Protobuf
Rust
The easiest way to install Rust is to use rustup, which will check our rust-toolchain file and install the correct Rust version for you.
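If you don't have rustup yet, it can be installed with the standard one-liner from rustup.rs:

curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh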
Protobuf
protoc is required for compiling .proto files. protobuf is available from
major package managers on macOS and Linux distributions. You can find
installation instructions here.
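For example (package names may vary slightly by platform or distribution version):

# macOS (Homebrew)
brew install protobuf
# Debian/Ubuntu
sudo apt-get install -y protobuf-compiler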
Build the Docker Image
docker build --network host -f docker/Dockerfile -t greptimedb .
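Once the build finishes, the image should be listed locally:

docker images greptimedb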
Usage
Start Datanode
// Start datanode with default options.
cargo run -- datanode start
OR
// Start datanode with `http-addr` option.
cargo run -- datanode start --http-addr=0.0.0.0:9999
OR
// Start datanode with `log-dir` and `log-level` options.
cargo run -- --log-dir=logs --log-level=debug datanode start
Start datanode with config file:
cargo run -- --log-dir=logs --log-level=debug datanode start -c ./config/datanode.example.toml
Start datanode by running a docker container:
docker run -p 3000:3000 \
-p 3001:3001 \
-p 3306:3306 \
greptimedb
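To keep data across container restarts, you can additionally mount the data directory. This is a sketch that assumes the container stores its data under /tmp/greptimedb (the default path mentioned under SQL Operations); adjust the paths to your setup:

# -v mounts the (assumed) data directory so it survives restarts.
docker run -p 3000:3000 \
    -p 3001:3001 \
    -p 3306:3306 \
    -v /tmp/greptimedb:/tmp/greptimedb \
    greptimedb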
Start Frontend
Frontend connects to Datanode, so Datanode must be started first!
// Connects to local Datanode at its default GRPC port: 3001
// Start Frontend with default options.
cargo run -- frontend start
OR
// Start Frontend with `mysql-addr` option.
cargo run -- frontend start --mysql-addr=0.0.0.0:9999
OR
// Start frontend with `log-dir` and `log-level` options.
cargo run -- --log-dir=logs --log-level=debug frontend start
Start frontend with config file:
cargo run -- --log-dir=logs --log-level=debug frontend start -c ./config/frontend.example.toml
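Putting it together, a minimal local setup uses two terminals (default ports as noted above):

# Terminal 1: start the datanode (gRPC on 3001, MySQL on 3306 by default).
cargo run -- datanode start
# Terminal 2: start the frontend, which connects to the local datanode.
cargo run -- frontend start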
SQL Operations
- Connect to the DB with a MySQL client:
  # The datanode listens on port 3306 by default.
  mysql -h 127.0.0.1 -P 3306
- Create a table:
  CREATE TABLE monitor (
    host STRING,
    ts TIMESTAMP,
    cpu DOUBLE DEFAULT 0,
    memory DOUBLE,
    TIME INDEX (ts),
    PRIMARY KEY(host)
  ) ENGINE=mito WITH(regions=1);
- Insert data:
  INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host1', 66.6, 1024, 1660897955);
  INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host2', 77.7, 2048, 1660897956);
  INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host3', 88.8, 4096, 1660897957);
- Query data:
  mysql> SELECT * FROM monitor;
  +-------+------------+------+--------+
  | host  | ts         | cpu  | memory |
  +-------+------------+------+--------+
  | host1 | 1660897955 | 66.6 |   1024 |
  | host2 | 1660897956 | 77.7 |   2048 |
  | host3 | 1660897957 | 88.8 |   4096 |
  +-------+------------+------+--------+
  3 rows in set (0.01 sec)
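Any other SQL statement goes over the same connection. For instance, a filtered query can be run non-interactively with the client's -e flag (a sketch; WHERE-clause support is assumed here):

# Run a one-off query without opening an interactive session.
mysql -h 127.0.0.1 -P 3306 -e "SELECT * FROM monitor WHERE host = 'host1';"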
You can delete your data by removing /tmp/greptimedb.
Contributing
Please refer to contribution guidelines for more information.