Compare commits

870 Commits

Author SHA1 Message Date
liyang
01fdbf3626 chore: upgrade 0.4.2 (#2644) 2023-10-24 12:21:58 +08:00
Lei, HUANG
97897aaf9b fix: predicate shall use real schema to create physical exprs (#2642)
* fix: prune predicate shall use real schema to create physical exprs

* refactor: remove redundant results

* fix: unit tests

* test: add more sqlness cases

* test: add more sqlness cases

* fix: sqlness orderby

* chore: update log

* fix: cache physical expr in memtable iter

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-10-24 03:41:25 +00:00
Wei
1fc42a681f refactor: create_or_open always set writable (#2641)
feat: set opened region writable
2023-10-23 10:32:51 +00:00
Wei
fbc8f56eaa feat: lookup manifest file size (#2590)
* feat: get manifest file size

* feat: manifest size statistics

* refactor: manifest map key

* chore: comment and unit test

* chore: remove no-use function

* chore: change style

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr comment

* chore: cr comment

* chore: cr comment

* chore: cr comment

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-10-23 08:59:00 +00:00
yuanbohan
44280f7c9d feat(otlp): initial OTLP trace support (#2627)
* feat: otlp tracing framework via http

* feat: otlp trace transformer plugin

* feat: successfully write traces into db

* chore: plugin to parse request

* test: helper functions

* feat: parse_request_to_spans function

* chore: remove implicit parse call in TraceParser

* chore: fix clippy

* chore: add TODO marker for span fields

* refactor TraceParser trait

* refactor TraceParser trait

* table_name method in OTLP TraceParser trait

* fix: approximate row, column count

* chore: function signature without row

* chore: avoid cloning by moving span.kind up

* docs for parse and to_grpc_insert_requests

---------

Co-authored-by: fys <fengys1996@gmail.com>
Co-authored-by: fys <40801205+fengys1996@users.noreply.github.com>
2023-10-23 06:37:43 +00:00
Ning Sun
0fbde48655 feat: hide internal error and unknown error message from end user (#2544)
* feat: use fixed error message for unknown error

* feat: return fixed message for internal error as well

* chore: include status code in error message

* test: update tests for asserts of error message

* feat: change status code of some datafusion error

* fix: make CollectRecordbatch an query error

* test: update sqlness results
2023-10-23 03:07:35 +00:00
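
As a hedged illustration of the behavior described above (hypothetical StatusCode type, not GreptimeDB's actual error plumbing): internal and unknown errors map to a fixed user-facing message that still surfaces the status code, while other errors keep their detail.

```rust
// Illustrative sketch only, with an assumed StatusCode enum.
#[derive(Debug, Clone, Copy)]
enum StatusCode {
    Internal,
    Unknown,
    InvalidArguments,
}

/// Message shown to end users: fixed text (plus the status code) for
/// internal/unknown errors, the real detail for everything else.
fn user_facing_message(code: StatusCode, detail: &str) -> String {
    match code {
        StatusCode::Internal | StatusCode::Unknown => {
            format!("Internal error ({code:?}), please contact the administrator")
        }
        _ => detail.to_string(),
    }
}

fn main() {
    assert_eq!(
        user_facing_message(StatusCode::InvalidArguments, "bad column name"),
        "bad column name"
    );
    println!("{}", user_facing_message(StatusCode::Internal, "secret detail"));
}
```
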
Niwaka
9dcfd28f61 feat: impl ObjectStoreManager for custom_storage (#2621)
* feat: impl ObjectStoreManager for custom_storage

* fix: rename object_store_manager to manager

* fix: rename global to default

* chore: add document for ObjectStoreManager

* refactor: simplify default_object_store

* fix: address review
2023-10-23 03:00:29 +00:00
Yingwen
82dbc3e1ae feat(mito): Ports InMemoryRowGroup from parquet crate (#2633)
* feat: ports InMemoryRowGroup from parquet

* chore: pub InMemoryRowGroup

* style: allow some clippy lints
2023-10-23 02:22:19 +00:00
Ruihang Xia
4d478658b5 fix: pass datanode config file in distributed mode sqlness (#2631)
* fix: pass datanode config file in distributed mode sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-20 10:57:23 +00:00
localhost
89ebe47cd9 feat: RepeatedTask adds execute-first-wait-later behavior. (#2625)
* feat: RepeatedTask adds execute-first-wait-later behavior.

* feat: add interval generator for repeated task component

* feat: impl debug for dyn IntervalGenerator trait

* chore: change some words

* chore: instead of a complicated approach, add an initial_delay to control the task interval

* chore: some improvements per PR comments
2023-10-20 09:43:45 +00:00
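
A minimal runnable sketch of the execute-first-wait-later pattern with an initial delay, assuming tokio; the names are illustrative, not the actual RepeatedTask API.

```rust
use std::time::Duration;

/// Execute-first-wait-later loop with an optional initial delay; the task
/// returns false to stop repeating.
async fn run_repeated<F: FnMut() -> bool>(
    initial_delay: Option<Duration>,
    interval: Duration,
    mut task: F,
) {
    if let Some(delay) = initial_delay {
        tokio::time::sleep(delay).await;
    }
    // Execute first, then wait out the interval.
    while task() {
        tokio::time::sleep(interval).await;
    }
}

#[tokio::main]
async fn main() {
    let mut n = 0;
    run_repeated(Some(Duration::from_millis(10)), Duration::from_millis(50), || {
        n += 1;
        println!("tick {n}");
        n < 3 // stop after three executions
    })
    .await;
}
```
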
Ruihang Xia
212ea2c25c feat: implement HistogramFold plan for prometheus histogram type (#2626)
* basic impl of fold plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add schema test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fill plan attributes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix styles

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* unify variable names

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-20 07:42:10 +00:00
Ruihang Xia
1658d088ab ci: add size labeler (#2628)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-20 06:39:13 +00:00
Baasit
346b57cf10 feat: row protocol support for opentsdb (#2623)
* feat: opentsdb row protocol

* fix: added comments for number of rows and failure if output is not affected rows

* fix: added extra 1 to number of columns

* fix: avoided cloning datapoints, took ownership instead

* fix: avoided cloning datapoints, took ownership instead

* fix: changed vector slice to vector

* fix: remove clone

* fix: combined datapoints and requests with zip instead of enumerating

---------

Co-authored-by: Ubuntu <ubuntu@ip-172-31-43-183.us-east-2.compute.internal>
2023-10-20 06:25:59 +00:00
Weny Xu
e1dcf83326 fix: correct the range behavior in MemoryKvBackend & RaftEngineBackend (#2615)
* fix: correct the range behavior in MemoryKvBackend & RaftEngineBackend

* refactor: migrate tests from MemoryKvBackend

* chore: apply suggestions from CR

* fix: fix license header

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* fix: fix range bugs
2023-10-20 02:30:47 +00:00
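
For reference, a standalone sketch of the start-inclusive/end-exclusive range semantics such a KvBackend conventionally follows (the etcd-style contract is an assumption here, not taken from the patch), built on std's BTreeMap.

```rust
use std::collections::BTreeMap;
use std::ops::Bound;

/// Scan keys in [key, range_end); an empty range_end degenerates to a
/// single-key get (assumed, etcd-style semantics).
fn range<'a>(
    kv: &'a BTreeMap<Vec<u8>, Vec<u8>>,
    key: &[u8],
    range_end: &[u8],
) -> Vec<(&'a [u8], &'a [u8])> {
    if range_end.is_empty() {
        return kv
            .get_key_value(key)
            .map(|(k, v)| vec![(k.as_slice(), v.as_slice())])
            .unwrap_or_default();
    }
    kv.range::<[u8], _>((Bound::Included(key), Bound::Excluded(range_end)))
        .map(|(k, v)| (k.as_slice(), v.as_slice()))
        .collect()
}

fn main() {
    let mut kv = BTreeMap::new();
    kv.insert(b"a".to_vec(), b"1".to_vec());
    kv.insert(b"b".to_vec(), b"2".to_vec());
    kv.insert(b"c".to_vec(), b"3".to_vec());
    // The end key is exclusive: "c" must not be returned.
    assert_eq!(range(&kv, b"a", b"c").len(), 2);
}
```
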
Ning Sun
b5d9d635eb ci: add slack notification for nightly ci failure (#2617) 2023-10-19 15:47:15 +00:00
zyy17
88dd78a69c ci: remove the old version python (#2624)
ci: remove old version python
2023-10-19 15:46:15 +00:00
zyy17
6439b929b3 ci: the 'publish-github-release' and 'release-cn-artifacts' have to wait until all the artifacts are built (#2622) 2023-10-19 21:05:44 +08:00
Wei
ba15c14103 feat: get internal value size of ValueRef (#2613)
* feat: impl byte_size

* chore: clippy

* chore: cr comment
2023-10-19 11:59:37 +08:00
Weny Xu
d57b144b2f chore: change test_remove_outdated_meta_task sleep time to 40ms (#2620)
chore: change test_remove_outdated_meta_task sleep time to 300ms
2023-10-18 11:33:35 +00:00
WU Jingdi
46e106bcc3 feat: allow nest range expr in Range Query (#2557)
* feat: enable range expr nesting

* fix: change range expr rewrite format

* chore: organize range query tests

* chore: change range expr name(e.g. MAX(v) RANGE 5s FILL 6)

* chore: add range query test

* chore: fix code advice

* chore: fix ca
2023-10-18 07:03:26 +00:00
localhost
a7507a2b12 chore: change telemetry report url to resolve connectivity issues (#2608)
chore: change otel report url to resolve connectivity issues
2023-10-18 06:58:54 +00:00
Wei
5b8e5066a0 refactor: make ReadableSize more readable. (#2614)
* refactor: ReadableSize is readable.

* docs: Update src/common/base/src/readable_size.rs

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-10-18 06:32:50 +00:00
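
A rough stand-in showing what a human-readable size Display can look like; this is not the repository's ReadableSize implementation, just the general idea of printing byte counts as KiB/MiB/GiB when evenly divisible.

```rust
use std::fmt;

/// Illustrative stand-in for a ReadableSize-style newtype over bytes.
struct ReadableSize(u64);

impl fmt::Display for ReadableSize {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        const KIB: u64 = 1024;
        const MIB: u64 = 1024 * KIB;
        const GIB: u64 = 1024 * MIB;
        let b = self.0;
        // Prefer the largest unit that divides the value exactly.
        if b >= GIB && b % GIB == 0 {
            write!(f, "{}GiB", b / GIB)
        } else if b >= MIB && b % MIB == 0 {
            write!(f, "{}MiB", b / MIB)
        } else if b >= KIB && b % KIB == 0 {
            write!(f, "{}KiB", b / KIB)
        } else {
            write!(f, "{}B", b)
        }
    }
}

fn main() {
    assert_eq!(ReadableSize(256 * 1024 * 1024).to_string(), "256MiB");
}
```
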
Weny Xu
dcd481e6a4 feat: stop the procedure manager if a new leader is elected (#2576)
* feat: stop the procedure manager if a new leader is elected

* chore: apply suggestions from CR

* chore: apply suggestions

* chore: apply suggestions from CR

* feat: add should_report to GreptimeDBTelemetry

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: refactor subscribing leader change loop

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2023-10-18 06:12:28 +00:00
zyy17
3217b56cc1 ci: release new version '0.4.0' -> '0.4.1' (#2611) 2023-10-17 07:33:41 +00:00
shuiyisong
eccad647d0 chore: add export data to migrate tool (#2610)
* chore: add export data to migrate tool

* chore: export copy from sql too
2023-10-17 06:33:58 +00:00
Yun Chen
829db8c5c1 fix!: align frontend cmd name to rpc_* (#2609)
fix: align frontend cmd name to rpc_*
2023-10-17 06:18:18 +00:00
Ruihang Xia
9056c3a6aa feat: implement greptime cli export (#2535)
* feat: implement greptime cli export

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* read information schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* parse database name from cli params

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-17 01:56:52 +00:00
ZhangJian He
d9e7b898a3 feat: add walconfig dir back (#2606)
Signed-off-by: ZhangJian He <shoothzj@gmail.com>
2023-10-16 11:26:06 +00:00
zyy17
59d4081f7a ci: correct image name of dev build (#2603) 2023-10-16 03:54:44 +00:00
zyy17
6e87ac0a0e ci: refine release-cn-artifacts action (#2600)
* ci: add copy-image.sh and upload-artifacts-to-s3.sh

* ci: remove unused options in dev build

* ci: use 'upload-artifacts-to-s3.sh' and 'copy-image.sh' in release-cn-artifacts action

* refactor: refine copy-image.sh
2023-10-13 17:04:06 +08:00
shuiyisong
d89cfd0d4d fix: auth in standalone mode (#2591)
chore: user_provider in standalone mode
2023-10-13 08:37:58 +00:00
Yingwen
8a0054aa89 fix: make nyc-taxi bench work again (#2599)
* fix: invalid requests created by nyc-taxi

* feat: add timestamp to table name

* style: fix clippy

* chore: re-export deps for client

* fix: wait result

* chore: no need to define a prefix constant
2023-10-13 08:16:26 +00:00
Yun Chen
f859932745 fix: convert to ReadableSize & Durations (#2594)
* fix: convert to ReadableSize & Durations

* fix: change more grpc sender/recv message size to ReadableSize

fix: format

fix: cargo fmt

fix: change cmd test to use durations

fix: revert metaclient change

fix: convert default fields in meta client options

fix: human serde meta client durations

* fix: remove millisecond postfix in heartbeat option

* fix: humantime serde on heartbeat

* fix: update config example

* fix: update integration test config

* fix: address pr comments

* fix: fix pr comment on default annotation
2023-10-13 03:28:29 +00:00
Ruihang Xia
9a8fc08e6a docs(benchmark): update 0.4.0 tsbs result (#2597)
* docs(benchmark): update 0.4.0 tsbs result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-13 03:08:14 +00:00
Ruihang Xia
825e4beead build(ci): pin linux runner to ubuntu-20.04 (#2586)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-12 18:08:05 +08:00
zyy17
0a23b40321 ci: downgrade builder version: ubuntu 22.04 -> ubuntu 20.04 for compatibility with older glibc versions (>=2.31) (#2592) 2023-10-12 16:46:25 +08:00
Ruihang Xia
cf6ef0a30d chore(cli): deregister cli attach command (#2589)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-12 08:11:17 +00:00
dennis zhuang
65a659d136 fix: ensure data_home directory created (#2588)
fix: ensure data_home directory created before creating metadata store, #2587
2023-10-12 07:32:55 +00:00
Ruihang Xia
62bcb45787 feat!: change config name from kv_store to metadata_store (#2585)
feat: change config name from kv_store to metadata_store

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-12 06:55:09 +00:00
zyy17
94f3542a4f ci: fix skopeo running errors (#2581)
ci: fix skopeo auth error
2023-10-12 06:13:56 +00:00
LFC
fc3bc5327d ci: release Windows artifacts (#2574)
* ci: release Windows artifacts

* ci: release Windows artifacts
2023-10-12 14:10:59 +08:00
Ning Sun
9e33ddceea ci: run windows tests every night instead of every commit (#2577)
ci: move windows ci to nightly-ci
2023-10-12 02:53:42 +00:00
zyy17
c9bdf4ff9f ci: refine the process of releasing dev-builder images (#2580)
* fix: fix error of releasing android builder image

* fix: run skopeo error

* ci: add 'release-dev-builder-images-cn' job

* ci: add 'disable_building_images'

* fix: add vars

* ci: use skopeo container

* ci: update opts default values
2023-10-12 02:41:54 +00:00
dennis zhuang
0a9972aa9a fix: cache capacity unit in sample config (#2575) 2023-10-11 11:02:39 +00:00
zyy17
76d5b710c8 ci: add more options for releasing dev-builder images (#2573) 2023-10-11 16:24:50 +08:00
zyy17
fe02366ce6 fix: remove unused options and add 'build-android-artifacts' (#2572) 2023-10-11 15:32:58 +08:00
zyy17
d7aeb369a6 refactor: add new action 'release-cn-artifacts' (#2554)
* refactor: add new action 'release-cn-artifacts'

* refactor: refine naming: 'release-artifacts' -> 'publish-github-release'

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-10-11 03:42:04 +00:00
zyy17
9284bb7a2b ci: separate the job of building dev-builder images (#2569) 2023-10-11 11:09:53 +08:00
liyang
e23dd5a44f fix: fix readme document link (#2566) 2023-10-11 02:45:43 +00:00
zyy17
c60b59adc8 chore: add the steps of building android binary (#2567) 2023-10-11 02:31:11 +00:00
Lei, HUANG
c9c2b3c91f fix: revert memtable pk rb cache to rwlock (#2565)
* fix: revert memtable pk rb cache to rwlock

* feat: refine
2023-10-10 20:51:05 +08:00
Yingwen
7f75190fce chore: update Cargo.lock (#2564) 2023-10-10 16:28:50 +08:00
Yingwen
0a394c73a2 chore: bump version to 0.4.0 (#2563) 2023-10-10 16:16:15 +08:00
JeremyHi
ae95f23e05 feat: add metrics for region server (#2552)
* feat: add metrics for region server

* fix: add comment and remove unused code
2023-10-10 07:40:16 +00:00
Lei, HUANG
6b39f5923d feat: add compaction metrics (#2560)
* feat: add compaction metrics

* feat: add compaction request total count

* fix: CR comments
2023-10-10 07:38:39 +00:00
JeremyHi
ed725d030f fix: support multi addrs while using etcd (#2562)
fix: support multi addrs while using etcd
2023-10-10 07:30:48 +00:00
Wei
4fe7e162af fix: human_time mismatch (#2558)
* fix: human_time mismatch.

* fix: add comment
2023-10-10 07:22:12 +00:00
Yingwen
8a5ef826b9 fix(mito): Do not write to memtables if writing wal is failed (#2561)
* feat: add writes total metrics

* fix: don't write memtable if write ctx is failed

* feat: write rows metrics
2023-10-10 06:55:57 +00:00
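
The ordering invariant behind this fix, as a tiny sketch with hypothetical types: append to the WAL first and touch the memtable only on success, so a failed WAL write never leaves unlogged data in memory.

```rust
struct Wal;
struct Memtable;
struct WriteBatch;

impl Wal {
    fn append(&mut self, _batch: &WriteBatch) -> Result<(), String> {
        Ok(()) // pretend durable append
    }
}

impl Memtable {
    fn apply(&mut self, _batch: &WriteBatch) {}
}

fn write(wal: &mut Wal, memtable: &mut Memtable, batch: WriteBatch) -> Result<(), String> {
    // If the WAL append fails, return the error without touching the memtable.
    wal.append(&batch)?;
    memtable.apply(&batch);
    Ok(())
}

fn main() {
    let (mut wal, mut mem) = (Wal, Memtable);
    write(&mut wal, &mut mem, WriteBatch).unwrap();
}
```
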
Ruihang Xia
07be50403e feat: add basic metrics to query (#2559)
* add metrics to merge scan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* count series in promql

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak label name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak label name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* document metric label

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-10 06:55:25 +00:00
Lei, HUANG
8bdef9a348 feat: memtable filter push down (#2539)
* feat: memtable support filter pushdown to prune primary keys

* fix: switch to next time series when pk not selected

* fix: allow predicate evaluation failure

* fix: some clippy warnings

* fix: panic when no primary key in schema

* feat: cache decoded record batch for primary key

* refactor: use arcswap instead of rwlock

* fix: format toml
2023-10-10 04:03:10 +00:00
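
A minimal sketch of the rwlock-to-arcswap swap mentioned above, assuming the arc-swap crate: readers load a snapshot lock-free while a writer replaces the whole Arc atomically.

```rust
use std::sync::Arc;

use arc_swap::ArcSwap;

// Stand-in for a decoded record batch cached per primary key.
struct DecodedBatch(Vec<u64>);

fn main() {
    let cache = ArcSwap::from_pointee(DecodedBatch(vec![1, 2, 3]));

    // Readers take a cheap, lock-free snapshot.
    let snapshot = cache.load();
    assert_eq!(snapshot.0.len(), 3);

    // A writer publishes a new batch atomically; readers never block.
    cache.store(Arc::new(DecodedBatch(vec![4, 5])));
    assert_eq!(cache.load().0.len(), 2);
}
```
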
Yingwen
d4577e7372 feat(mito): add metrics to mito engine (#2556)
* feat: allow discarding a timer

* feat: flush metrics

* feat: flush bytes and region count metrics

* refactor: add as_str to get static string

* feat: add handle request elapsed metrics

* feat: add some write related metrics

* style: fix clippy
2023-10-10 03:53:17 +00:00
dennis zhuang
88f26673f0 fix: adds back http_timeout for frontend subcommand (#2555) 2023-10-10 03:05:16 +00:00
Baasit
19f300fc5a feat: renaming kv directory to metadata (#2549)
* fix: renamed kv directory to metadata directory

* fix: changed function name

* fix: changed function name
2023-10-09 11:43:17 +00:00
Weny Xu
cc83764331 fix: check table exists before allocating table id (#2546)
* fix: check table exists before allocating table_id

* chore: apply suggestions from CR
2023-10-09 11:40:10 +00:00
Yingwen
81aa7a4caf chore(mito): change default batch size/row group size (#2550) 2023-10-09 11:10:12 +00:00
Yingwen
d68dd1f3eb fix: schema validation is skipped once we need to fill a column (#2548)
* test: test different order

* test: add tests for missing and invalid columns

* fix: do not skip schema validation while missing columns

* chore: use field_columns()

* test: add tests for different column order
2023-10-09 09:20:51 +00:00
Lei, HUANG
9b3470b049 feat: android image builder dockerfile (#2541)
* feat: android image builder dockerfile

* feat: add building android dev-builder to ci config file

* fix: add build arg

* feat: use makefile to build image and add strip command
2023-10-09 09:10:14 +00:00
Weny Xu
8cc862ff8a refactor: refactor cache invalidator (#2540) 2023-10-09 08:19:18 +00:00
Weny Xu
81ccb58fb4 refactor!: compare with origin bytes during the transactions (#2538)
* refactor: compare with origin bytes during the transaction

* refactor: use serialize_str instead

* Update src/common/meta/src/key.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* chore: apply suggestions from CR

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-10-09 08:17:19 +00:00
Weny Xu
ce3c10a86e refactor: de/encode protobuf-encoded byte array with base64 (#2545) 2023-10-09 05:31:44 +00:00
shuiyisong
007f7ba03c refactor: extract plugins crate (#2487)
* chore: move frontend plugins fn

* chore: move datanode plugins to fn

* chore: add opt plugins

* chore: add plugins to meta-srv

* chore: setup meta plugins, wait for router extension

* chore: try use configurator for grpc too

* chore: minor fix fmt

* chore: minor fix fmt

* chore: add start meta_srv for hook

* chore: merge develop

* chore: minor fix

* chore: replace Arc<Plugins> with PluginsRef

* chore: fix header

* chore: remove empty file

* chore: modify comments

* chore: remove PluginsRef type alias

* chore: remove `OptPlugins`
2023-10-09 04:54:27 +00:00
Weny Xu
dfe68a7e0b refactor: check push result out of loop (#2511)
* refactor: check push result out of loop

* chore: apply suggestions from CR
2023-10-09 02:49:48 +00:00
Ruihang Xia
d5e4fcaaff feat: dist plan optimize part 2 (#2543)
* allow udf and scalar fn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* put CountWildcardRule before dist planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* bump datafusion to fix first_value/last_value

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use retain instead

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-09 02:18:36 +00:00
Yingwen
17b385a985 fix: compiler errors under pprof and mem-prof features (#2537)
* fix: compiler errors under pprof feature

* fix: compiler errors under mem-prof feature
2023-10-08 08:28:45 +00:00
shuiyisong
067917845f fix: carry dbname from frontend to datanode (#2520)
* chore: add dbname in region request header for tracking purpose

* chore: fix handle read

* chore: add write meter

* chore: add meter-core to dep

* chore: add converter between RegionRequestHeader and QueryContext & update proto version
2023-10-08 06:30:23 +00:00
Weny Xu
a680133acc feat: enable no delay for mysql, opentsdb, http (#2530)
* refactor: enable no delay for mysql, opentsdb, http

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-10-08 06:19:52 +00:00
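
In concrete terms, "no delay" means setting TCP_NODELAY on each accepted connection so small protocol packets are not held back by Nagle's algorithm. A short tokio sketch (the bind address is illustrative):

```rust
use tokio::net::TcpListener;

#[tokio::main]
async fn main() -> std::io::Result<()> {
    let listener = TcpListener::bind("127.0.0.1:4000").await?;
    loop {
        let (stream, _addr) = listener.accept().await?;
        // Disable Nagle's algorithm so small writes are sent immediately
        // instead of being batched.
        stream.set_nodelay(true)?;
        drop(stream); // placeholder: hand the stream to the protocol handler here
    }
}
```
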
Yingwen
0593c3bde3 fix(mito): pruning for mito2 (#2525)
* fix: pruning for mito2

* chore: refactor projection parameters; add some tests; customize row group size for each flush task.

* chore: pass whole RegionFlushRequest

---------

Co-authored-by: Lei, HUANG <mrsatangel@gmail.com>
2023-10-08 03:45:15 +00:00
Lei, HUANG
0292445476 fix: timestamp range filter (#2533)
* fix: timestamp range filter

* fix: rebase develop

* fix: some style issues
2023-10-08 03:29:02 +00:00
dennis zhuang
ff15bc41d6 feat: improve object storage cache (#2522)
* feat: refactor object storage cache with moka

* chore: minor fixes

* fix: concurrent issues and invalidate cache after write/delete

* chore: minor changes

* fix: cargo lock

* refactor: rename

* chore: change DEFAULT_OBJECT_STORE_CACHE_SIZE to 256MiB

* fix: typo

* chore: style

* fix: toml format

* chore: toml

* fix: toml format

* Update src/object-store/src/layers/lru_cache/read_cache.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: update Cargo.toml

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: update src/object-store/Cargo.toml

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: refactor and apply suggestions

* fix: typo

* feat: adds back allow list for caching

* chore: cr suggestion

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr suggestion

Co-authored-by: Yingwen <realevenyag@gmail.com>

* refactor: wrap inner Accessor with Arc

* chore: remove run_pending_task in read and write path

* chore: the arc is unnecessary

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-10-08 03:27:49 +00:00
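
A rough sketch of a size-bounded read cache in the spirit of this change, assuming the moka crate; the path key and byte budget are illustrative, not the actual object-store layer.

```rust
use moka::sync::Cache;

fn main() {
    // Entries are charged by their byte length against a fixed budget.
    let cache: Cache<String, Vec<u8>> = Cache::builder()
        .weigher(|_path: &String, bytes: &Vec<u8>| bytes.len() as u32)
        .max_capacity(256 * 1024 * 1024) // e.g. a 256MiB budget
        .build();

    cache.insert("data/file.parquet".to_string(), vec![0u8; 1024]);
    assert!(cache.get("data/file.parquet").is_some());

    // Writes/deletes must invalidate stale entries, as the commit notes.
    cache.invalidate("data/file.parquet");
}
```
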
Yingwen
657542c0b8 feat(mito): Cache repeated vector for tags (#2523)
* feat: add vector_cache to CacheManager

* feat: cache repeated vectors

* feat: skip decoding pk if output doesn't contain tags

* test: add TestRegionMetadataBuilder

* test: test ProjectionMapper

* test: test vector cache

* test: test projection mapper convert

* style: fix clippy

* feat: do not cache vector if it is too large

* docs: update comment
2023-10-07 11:36:00 +00:00
Ning Sun
0ad3fb6040 fix: mysql timezone settings (#2534)
* fix: restore time zone settings for mysql

* test: add integration test for time zone

* test: fix unit test for check
2023-10-07 10:21:32 +00:00
Bamboo1
b44e39f897 feat: the schema of RegionMetadata is not output during debug (#2498)
* feat: the schema of RegionMetadata is not output during debug because column_metadatas contains duplicate information

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: the id_to_index of RegionMetadata is not output during debug

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: add debug trait

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: use default debug in ConcreteDataType

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: add std::fmt

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* test: add debug trait test

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: typo

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: resolve conversation

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: test bug

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

---------

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>
2023-10-07 08:01:54 +00:00
Weny Xu
f50f2a84a9 fix: open region missing options (#2473)
* fix: open region missing options

* refactor: remove redundant clone

* chore: apply suggestions from CR

* chore: apply suggestions

* chore: apply suggestions

* test: add test for initialize_region_server

* feat: introduce RegionInfo
2023-10-07 07:17:16 +00:00
Yingwen
fe783c7c1f perf(mito): Use a heap to merge batches for the same key (#2521)
* feat: merge by heap

* fix: fix heap order

* feat: avoid pop/push next and refactor some functions

* feat: replace merge_batches and fix tests

* test: add test that a key is deleted

* fix: skip empty batch

* style: clippy

* chore: fix typos
2023-10-07 02:56:08 +00:00
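
The commit names the classic k-way merge with a heap; below is a simplified standalone version over sorted integer runs rather than mito's batches (BinaryHeap is a max-heap, so Reverse turns it into a min-heap).

```rust
use std::cmp::Reverse;
use std::collections::BinaryHeap;

fn merge_sorted(runs: Vec<Vec<i64>>) -> Vec<i64> {
    // Heap entries: (next value, run index, position within the run).
    let mut heap = BinaryHeap::new();
    for (i, run) in runs.iter().enumerate() {
        if let Some(&v) = run.first() {
            heap.push(Reverse((v, i, 0usize)));
        }
    }
    let mut out = Vec::new();
    while let Some(Reverse((v, i, pos))) = heap.pop() {
        out.push(v);
        // Advance the run we just consumed from.
        if let Some(&next) = runs[i].get(pos + 1) {
            heap.push(Reverse((next, i, pos + 1)));
        }
    }
    out
}

fn main() {
    let merged = merge_sorted(vec![vec![1, 4, 7], vec![2, 5], vec![3, 6]]);
    assert_eq!(merged, vec![1, 2, 3, 4, 5, 6, 7]);
}
```
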
Weny Xu
00fe7d104e feat: enable tcp no_delay by default for internal services (#2527) 2023-10-07 02:35:28 +00:00
Zhenchi
201acd152d fix: missing file engine with default options (#2519)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-28 10:25:12 +00:00
Niwaka
04dbd835a1 feat: support greatest function (#2490)
* feat: support greatest function

* feat: make greatest take date_type as input

* fix: move sqlness test into common/function/time.sql

* fix: avoid using unwrap

* fix: use downcast

* refactor: simplify arrow cast
2023-09-28 10:25:09 +00:00
Wenjie0329
e3d333258b docs: add event banner (#2518) 2023-09-28 08:08:43 +00:00
Ruihang Xia
10ecc30817 feat: pushdown aggr, limit and sort plan (#2495)
* check partition for aggr plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle empty partition rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove CheckPartition option

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update some valid sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* opt-out promql plan and update sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix limit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix insert select subquery

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update unit test result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/dist_plan/analyzer.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-09-28 06:35:45 +00:00
JeremyHi
52ac093110 fix: drop table 0 rows affected (#2515) 2023-09-28 06:21:18 +00:00
Zhenchi
1f1d72bdb8 feat: defensively specify limit parameter for file stream (#2517)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-28 06:14:27 +00:00
Zhenchi
7edafc3407 feat: push down filters to region engine (#2513)
feat: pushdown filters to region engine

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-27 13:50:44 +00:00
LFC
ccd6de8d6b fix: allow .(dot) literal in table name (#2483)
* fix: allow `.`(dot) literal in table name

* fix: resolve PR comments
2023-09-27 11:50:07 +00:00
shuiyisong
ee8d472aae chore: tune return msg (#2506)
* chore: test return msg

* fix: test_child_error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: fix test

* chore: minor fix grpc return value

* chore: format return msg

* chore: use root error as return value

* chore: fix empty err display

* chore: iter through external error

* chore: remove err msg

* chore: remove unused field

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-27 10:40:25 +00:00
Weny Xu
9282e59a3b fix: re-create heartbeat stream ASAP (#2499)
* chore: set default connect_timeout_millis to 1000

* fix: re-create heartbeat stream ASAP

* chore: apply suggestions
2023-09-27 04:00:16 +00:00
Ruihang Xia
fbe2f2df46 refactor: simplify warn! and error! macros (#2503)
* refactor: simplify the error! and warn! macros

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* support display format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* err.msg to err

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-27 03:07:03 +00:00
Yingwen
db6ceda5f0 fix(mito): fix region drop task runs multiple times but never cleans the dir (#2504)
fix: fix region drop task runs multiple times but never cleans the directory
2023-09-27 02:58:17 +00:00
Ruihang Xia
e352fb4495 fix: check for table scan before expanding (#2491)
* fix: check for table scan before expanding

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change assert_ok to unwrap

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy warning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* don't skip dml

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* uncomment ignored tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-26 12:12:08 +00:00
Yingwen
a6116bb866 feat(mito): Add cache manager (#2488)
* feat: add cache manager

* feat: add cache to reader builder

* feat: add AsyncFileReaderCache

* feat: Impl AsyncFileReaderCache

* chore: move moka dep to workspace

* feat: add moka cache to the manager

* feat: implement parquet meta cache

* test: test cache manager

* feat: consider vec size

* style: fix clippy

* test: fix config api test

* feat: divide cache

* test: test disabling meta cache

* test: fix config api test

* feat: remove meta cache if file is purged
2023-09-26 11:46:19 +00:00
Ruihang Xia
515ce825bd feat: stack trace style debug print for error (#2489)
* impl macro stack_trace_debug

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* manually mark external error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use debug print

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify the error and warn macro

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix ut

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add docs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* replace snafu backtrace with location

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-26 11:23:21 +00:00
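
The underlying idea, sketched here without the actual stack_trace_debug macro: walk the std::error::Error source chain and print one numbered frame per cause, yielding a stack-trace-like debug output.

```rust
use std::error::Error;
use std::fmt;

#[derive(Debug)]
struct Wrapped {
    msg: &'static str,
    source: Option<Box<dyn Error + 'static>>,
}

impl fmt::Display for Wrapped {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}", self.msg)
    }
}

impl Error for Wrapped {
    fn source(&self) -> Option<&(dyn Error + 'static)> {
        self.source.as_deref()
    }
}

/// Print each cause in the chain as a numbered frame.
fn print_stack_trace(err: &dyn Error) {
    let mut frame: Option<&dyn Error> = Some(err);
    let mut depth = 0;
    while let Some(e) = frame {
        println!("{depth}: {e}");
        frame = e.source();
        depth += 1;
    }
}

fn main() {
    let root = Wrapped { msg: "file not found", source: None };
    let top = Wrapped { msg: "failed to open region", source: Some(Box::new(root)) };
    print_stack_trace(&top); // 0: failed to open region / 1: file not found
}
```
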
Vanish
7fc9604735 feat: distribute truncate table in region server (#2414)
* feat: distribute truncate table

* chore: add metrics for truncate table

* test: add sqlness test

* chore: cr

* test: add multi truncate

* chore: add trace id to the header
2023-09-26 11:14:14 +00:00
Zhenchi
a4282415f7 fix: convert datetime to chrono datetime (#2497)
* fix: convert datetime to chrono datetime

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: typo

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix the bad fix

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-26 09:04:12 +00:00
Zhenchi
0bf26642a4 feat: re-support query engine execute dml (#2484)
* feat: re-support query engine execute dml

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: remove region_number in InsertRequest

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: add doc comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-26 08:37:04 +00:00
Weny Xu
230a3026ad fix: dn doesn't have chance to send a heartbeat to the new leader (#2471)
* refactor: set meta leader lease secs to 3s

* fix: correct default heartbeat interval

* refactor: ask meta leader in parallel

* feat: configure heartbeat client timeout to 500ms

* fix: trigger to send heartbeat immediately after fail

* fix: fix clippy
2023-09-26 05:05:38 +00:00
Wei
54e506a494 refactor: datetime time unit (#2469)
* refactor: datetime time unit

* Update src/common/time/src/datetime.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr.

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-09-25 10:00:56 +00:00
Yingwen
7ecfaa240f refactor(mito): remove #[allow(dead_code)] (#2479) 2023-09-25 09:20:00 +00:00
LFC
c0f080df26 fix: print root cause error message to user facing interface (#2486) 2023-09-25 08:44:49 +00:00
Niwaka
f9351e4fb5 chore: add integration test for issue2437 (#2481) 2023-09-25 06:23:16 +00:00
zyy17
00272d53cc chore: fix typo (#2477) 2023-09-24 06:47:14 +00:00
JeremyHi
7310ec0bb3 chore: refactor options (#2476) 2023-09-24 02:12:33 +00:00
Yingwen
73842f10e7 fix(mito): normalize region dir in RegionOpener (#2475)
fix: normalize region dir in RegionOpener
2023-09-23 10:06:00 +00:00
Yingwen
32d1d68441 fix(mito): reset is_sorted to true after the merger finishes one series (#2474)
fix: reset is_sorted flag to true after the merger finishes one series
2023-09-23 10:05:34 +00:00
Ning Sun
ffa729cdf5 feat: implement storage for OTLP histogram (#2282)
* feat: implement new histogram data model

* feat: use prometheus table format for histogram

* refactor: remove duplicated code

* fix: histogram tag column

* fix: use accumulated count in buckets

* refactor: using row based protocol for otlp WIP

* refactor: use row based writer for otlp.

Also updated row writer for owned keys

* refactor: use row writers for otlp

* test: add integration tests for histogram

* refactor: change le column name
2023-09-23 07:59:14 +00:00
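
For context, Prometheus-style histogram buckets are cumulative: each le bucket stores the count of observations at or below its bound. A small standalone sketch of that accumulation (types and labels are illustrative):

```rust
/// Convert per-bucket counts into cumulative (le, count) rows; the final
/// slot of `per_bucket_counts` holds the +Inf overflow.
fn to_cumulative(bounds: &[f64], per_bucket_counts: &[u64]) -> Vec<(String, u64)> {
    let mut acc = 0u64;
    let mut rows = Vec::new();
    for (bound, count) in bounds.iter().zip(per_bucket_counts) {
        acc += count;
        rows.push((format!("le={bound}"), acc));
    }
    let overflow = per_bucket_counts.get(bounds.len()).copied().unwrap_or(0);
    rows.push(("le=+Inf".to_string(), acc + overflow));
    rows
}

fn main() {
    // Buckets (<=0.1, <=1, <=10) saw 2, 3, 5 observations; 1 overflowed.
    let rows = to_cumulative(&[0.1, 1.0, 10.0], &[2, 3, 5, 1]);
    assert_eq!(rows.last().unwrap().1, 11); // +Inf bucket counts everything
}
```
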
JeremyHi
9d0de25bff chore: typo (#2470) 2023-09-22 09:47:34 +00:00
Wei
aef9e7bfc3 refactor: not allowed int64 type as time index (#2460)
* refactor: remove is_timestamp_compatible.

* chore: fmt

* refactor: remove int64 to timestamp match

* chore

* chore: apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: fmt

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-09-22 06:28:02 +00:00
Yingwen
c6e95ffe63 fix(mito): compaction scheduler schedules more tasks than expected (#2466)
* test: test on_compaction_finished

* fix: avoid submitting the same region to compact

* feat: persist and recover compaction time window

* test: fix test

* test: sort like result
2023-09-22 06:13:12 +00:00
Yingwen
c9f8b9c7c3 feat: update proto and remove create_if_not_exists (#2467) 2023-09-22 03:24:49 +00:00
Baasit
688e64632d feat: support for show full tables (#2410)
* feat: added show tables command

* fix(tests): fixed parser and statement unit tests

* chore: implemented Display trait for table type

* fix: handled missing table type and error for unsupported command in show database

* chore: removed FULL as a show kind, treating it as a show option instead

* chore(tests): fixed failing test and added more tests for show full

* chore: refactored table types to use filters

* fix: changed table_type to tables
2023-09-22 02:34:57 +00:00
JeremyHi
8e5eaf5472 chore: remove unused region_stats method from table (#2458)
chore: remove unused region_status method from table
2023-09-22 02:27:29 +00:00
LinFeng
621c6f371b feat: limit grpc message size (#2459)
* feat: add two grpc config options

Those options are for:
* Limit receiving (decoding) message size
* Limit sending (encoding) message size

* test: add integration tests for message size limit
2023-09-22 02:07:46 +00:00
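
tonic's generated clients and servers expose knobs along the lines of max_decoding_message_size and max_encoding_message_size for this; the standalone check below only shows the enforcement principle, not the actual wiring.

```rust
/// Reject a frame whose encoded size exceeds the configured limit.
fn check_message_size(encoded_len: usize, max_size: usize) -> Result<(), String> {
    if encoded_len > max_size {
        return Err(format!(
            "message of {encoded_len} bytes exceeds the {max_size}-byte limit"
        ));
    }
    Ok(())
}

fn main() {
    assert!(check_message_size(512, 4 * 1024 * 1024).is_ok());
    assert!(check_message_size(8 * 1024 * 1024, 4 * 1024 * 1024).is_err());
}
```
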
Ruihang Xia
4c7ad44605 refactor: remove SqlStatementExecutor (#2464)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-22 01:57:48 +00:00
Weny Xu
6306aeabf0 chore: bump opendal to 0.40 (#2465) 2023-09-21 14:25:23 +00:00
JeremyHi
40781ec754 fix: test on windows (#2462)
* fix: test on windows

* fix: fix windows root

* fix: use relative path instead of root

* fix: remove incorrect replace

* fix: fix tests

---------

Co-authored-by: WenyXu <wenymedia@gmail.com>
2023-09-21 10:57:56 +00:00
zyy17
c7b490e1a0 ci: expand upload retry timeout (#2461) 2023-09-21 10:02:15 +00:00
Ruihang Xia
e3f53a8060 fix: add slash after generated region_dir (#2463)
* fix: add slash after generated region_dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update ut

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-21 07:46:05 +00:00
Weny Xu
580d11b1e1 refactor: unify table metadata cache invalidator (#2449)
* refactor: unify table metadata cache invalidator

* chore: apply suggestions
2023-09-21 03:45:49 +00:00
shuiyisong
20f4f7971a refactor: remove source and location in snafu display (#2428)
* refactor: remove source pt 1

* refactor: remove source pt 2

* refactor: remove source pt 3

* refactor: remove location pt 1

* refactor: remove location pt 2

* chore: remove rustc files

* chore: fix error case

* chore: fix test

* chore: fix test

* chore: fix cr issue

Co-authored-by: fys <40801205+fengys1996@users.noreply.github.com>

---------

Co-authored-by: fys <40801205+fengys1996@users.noreply.github.com>
2023-09-21 02:55:24 +00:00
dennis zhuang
9863e501f1 test: revert ignored tests (#2455) 2023-09-21 02:33:18 +00:00
zyy17
df0877111e ci: make upload-to-s3 configurable(for now, it's false) (#2456) 2023-09-20 14:12:54 +00:00
dennis zhuang
23cc7d82e5 feat: supports binary data type (#2454) 2023-09-20 12:53:19 +00:00
Ruihang Xia
34d6288945 feat: bring back sqlness and integration tests (#2448)
* enable integration test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable sqlness region failover

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* enable sqlness in CI

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sort unstable result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* set require_lease_before_startup to true

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: fix inconsistent cache

* replace windows path chars

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore some integration cases in windows

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Revert "ignore some integration cases in windows"

This reverts commit 122478b7c1.

* disable windows for now

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: fix close region bug in RegionHeartbeatResponseHandler

* disable failover tests

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: WenyXu <wenymedia@gmail.com>
2023-09-20 09:17:30 +00:00
JeremyHi
567fbad647 chore: type alias typo (#2452)
chore: typo
2023-09-20 07:53:35 +00:00
Ruihang Xia
a5c499572c feat: open region in parallel (#2451)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-20 07:40:17 +00:00
JeremyHi
ca50ba5dc4 fix: remark region as inactive on leader changed (#2446)
* fix: remark region as inactive on leader changed

* chore: by comment
2023-09-20 06:37:27 +00:00
Yingwen
17e560c909 feat(mito): Allow to retry create request and alter request (#2447)
* feat: RegionMetadataBuilder allow adding/dropping columns multiple times

* test: test add if not exists/drop if exists

* feat: change validator and add need_alter

* test: fix tests and test need_alter

* test: test alter retry

* feat: open before create

* style: fix clippy
2023-09-20 06:36:46 +00:00
Weny Xu
339e12c64a fix: fix alter table verification (#2437)
* fix: fix verify alter

* refactor: move AlterTable UpdateMetadata to last step

* refactor: send region request in parallel

* Update src/table/src/metadata.rs

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>

* Update src/table/src/metadata.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

---------

Co-authored-by: LFC <990479+MichaelScofield@users.noreply.github.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-09-19 13:40:48 +00:00
Ruihang Xia
0f79ccab31 refactor: remove the old mito engine (#2443)
Co-authored-by: Even Yag <realevenyag@gmail.com>
2023-09-19 09:30:13 +00:00
Yingwen
7b606ed289 feat(mito): make use of options in RegionCreate/OpenRequest (#2436)
* refactor: move RegionOptions to options mod

* refactor: define compaction strategy in region/options.rs

* feat: use duration for time window

* refactor: rename CompactionStrategy to CompactionOptions

* feat: use serde to parse options

* feat: parse options

* feat: set options on creation/opening

* test: test create/open with options

* chore: remove todo

* feat: get compaction ttl and options from RegionOptions

* style: fix clippy

* chore: Remove unused engine_options

* style: fix clippy

* chore: remove todo
2023-09-19 09:06:09 +00:00
Weny Xu
1fb2d95c5f fix: fix open region missing path (#2441)
* fix: fix open region missing path

* fix: correct log

* chore: apply suggestions from CR

* fix: fix tests
2023-09-19 08:50:59 +00:00
Wei
8ee62a7d90 fix: parse i64 to different kinds of timestamp (#2440)
* feat: support converting i64 to multiple timestamp types.

* chore: fmt
2023-09-19 08:26:25 +00:00
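
The fix's core idea is interpreting a raw i64 under the column's time unit rather than assuming one unit; a sketch with illustrative types (not the actual Timestamp API):

```rust
#[derive(Debug, Clone, Copy)]
enum TimeUnit {
    Second,
    Millisecond,
    Microsecond,
    Nanosecond,
}

/// Convert an i64 in the given unit to nanoseconds since the epoch.
fn to_nanos(value: i64, unit: TimeUnit) -> i64 {
    match unit {
        TimeUnit::Second => value * 1_000_000_000,
        TimeUnit::Millisecond => value * 1_000_000,
        TimeUnit::Microsecond => value * 1_000,
        TimeUnit::Nanosecond => value,
    }
}

fn main() {
    assert_eq!(to_nanos(1, TimeUnit::Second), 1_000_000_000);
    assert_eq!(
        to_nanos(1_695_000_000_000, TimeUnit::Millisecond),
        1_695_000_000_000_000_000
    );
}
```
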
Ruihang Xia
802229de87 fix: type cast bugs found by sqlness (#2438)
* update valid results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* accomplish datatype

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* cast null

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix unit tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-19 08:20:41 +00:00
Zhenchi
deac284973 refactor: RegionRequestHandler -> RegionQueryHandler (#2439)
* refactor: RegionRequestHandler -> RegionQueryHandler

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: rename FrontendRegionQueryHandler

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: private RegionInvoker

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-19 08:19:58 +00:00
Wei
5805e8d4b6 feat: type conversion between Values (#2394)
* feat: add cast() in datatype trait.

* feat: add cast for primitive type

* feat: add unit test cases

* test: add datetime/time cases.

* refactor: time_type cast function.

* chore: typos.

* refactor code.

* feat: add can_cast_type func.

* chore: rename cast to try_cast

* feat: impl cast_with_opt

* chore: pub use cast_with_opt

* chore: add timezone for test

* Update src/common/time/src/date.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: duration type

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-09-18 14:25:38 +00:00
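
A toy sketch of the try_cast shape such an API takes, returning None for lossy or unsupported conversions; the Value type here is illustrative, not the repository's.

```rust
#[derive(Debug, PartialEq)]
enum Value {
    Int64(i64),
    Float64(f64),
    String(String),
}

fn try_cast(value: Value, target: &str) -> Option<Value> {
    match (value, target) {
        (Value::Int64(v), "float64") => Some(Value::Float64(v as f64)),
        // Reject lossy float-to-int casts.
        (Value::Float64(v), "int64") if v.fract() == 0.0 => Some(Value::Int64(v as i64)),
        (Value::Int64(v), "string") => Some(Value::String(v.to_string())),
        (Value::String(s), "int64") => s.parse().ok().map(Value::Int64),
        (v, _) if target_matches(&v, target) => Some(v), // identity cast
        _ => None,
    }
}

fn target_matches(v: &Value, target: &str) -> bool {
    matches!(
        (v, target),
        (Value::Int64(_), "int64") | (Value::Float64(_), "float64") | (Value::String(_), "string")
    )
}

fn main() {
    assert_eq!(try_cast(Value::Int64(3), "float64"), Some(Value::Float64(3.0)));
    assert_eq!(try_cast(Value::Float64(2.5), "int64"), None); // lossy: rejected
}
```
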
dennis zhuang
342cc0a4c4 fix: compile error after updating protos (#2435) 2023-09-18 12:12:39 +00:00
Weny Xu
df6c79a378 fix: check version before alter region (#2433)
* fix: check version before alter region

* chore: apply suggestions from CR

* Update src/mito2/src/worker/handle_alter.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-09-18 11:49:26 +00:00
dennis zhuang
5566f34bd1 feat: make scripts table work again (#2420)
* feat: make scripts table work again

* chore: typo

* fix: license header

* Update src/table/src/metadata.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: cr comments

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-18 11:43:21 +00:00
Wei
14e6998d41 feat: impl duration datatype and vectors (#2180)
* feat: impl datatype, vector traits for duration.

* feat: duration and grpc.

* test: add unit test cases.

* chore: style and test case.

* fix: update greptime-proto version and helper.rs

* chore: fix type name.

* Update src/datatypes/src/data_type.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr.

* chore: fix greptime-proto

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-09-18 11:28:06 +00:00
Weny Xu
43476e1ff9 refactor: rename coordination to require_lease_before_startup (#2431) 2023-09-18 11:07:42 +00:00
Weny Xu
c42cce57ca fix: fix incorrect matches (#2430)
* fix: fix incorrect matches

* fix: fix incorrect status code
2023-09-18 10:53:32 +00:00
Zhenchi
d6d46378a1 test: fix some integration tests (#2432)
* test: fix some integration tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* test: add timezone setting

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-18 10:52:14 +00:00
Ruihang Xia
fbbf3978d9 fix: render comment in SHOW CREATE TABLE (#2427)
* feat: add comment field to ColumnDef

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-18 10:51:10 +00:00
dennis zhuang
b0c56a3e23 feat: type alias (#2331)
* fix: remove location from error msg

* feat: adds transformer for sqlparser statements

* feat: supports type alias

* fix: typo

* fix: license header

* test: adds timestamp_types test

* refactor: transform

* fix: rebase develop and fix tests

* fix: compile error

* chore: delete src/datanode/src/sql/create_external.rs
2023-09-18 09:43:02 +00:00
Zhenchi
73af1368bd test: more integration test cases for external table (#2429)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-18 09:21:51 +00:00
Weny Xu
2c3ff90dbc feat: start services after first heartbeat response processed (#2424)
* feat: start services after first heartbeat response processed

* refactor: watch changes in RegionAliveKeeper

* feat: add coordination to DatanodeOptions

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* chore: enable coordination in sqlness
2023-09-18 08:49:26 +00:00
Zhenchi
3a39215f11 feat: migrate file engine from table to region (#2365)
* feat: migrate file engine from table to region

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* Update src/file-engine/src/engine.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* feat: specify ts index for file engine

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: handle time index for external table

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: some integration tests

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add file schema and table schema compatibility

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: compatible file schema to region schema

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add error msg

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: simplify close

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: implement set_writable

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: tests-integration compilation

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: zhongzc <zhongzc@zhongzcs-MacBook-Pro.local>
2023-09-18 08:02:43 +00:00
JeremyHi
e7e254cd11 feat: all distributed time together (#2423) 2023-09-17 15:18:52 +00:00
Yingwen
4a82926d72 docs: fix cargo doc errors and warnings (#2421)
* docs: fix cargo doc warnings and errors

* docs: fix warnings

* docs: fix warnings

* chore: rm src/common/function-macro/src/lib.rs
2023-09-17 11:45:15 +00:00
Ruihang Xia
92824d1c66 fix: update several sqlness results (#2422)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-17 11:33:40 +00:00
Yingwen
55ae5e5b66 feat(mito): Implements compaction scheduler (#2413)
* feat: allow multiple waiters in compaction request

* feat: compaction status wip

* feat: track region status in compaction scheduler

* feat: impl compaction scheduler

* feat: call compaction scheduler

* feat: remove status if nothing to compact

* feat: schedule compaction after flush

* feat: set compacting to false after compaction finished

* refactor: flush status only needs region id and version control

* refactor: schedule_compaction don't need region as argument

* test: test flush/scheduler for empty requests

* test: trigger compaction in test

* feat: notify scheduler on truncated

* chore: Apply suggestions from code review

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-09-17 09:15:11 +00:00
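
One detail such a scheduler must get right is not scheduling the same region twice; a hypothetical minimal sketch of that dedup bookkeeping (not the real scheduler's types):

```rust
use std::collections::HashSet;

struct CompactionScheduler {
    pending: HashSet<u64>, // region ids with an in-flight compaction
}

impl CompactionScheduler {
    /// Returns true if a new compaction was scheduled.
    fn schedule(&mut self, region_id: u64) -> bool {
        // insert() returns false if the region is already pending.
        self.pending.insert(region_id)
    }

    fn on_finished(&mut self, region_id: u64) {
        self.pending.remove(&region_id);
        // ...could re-schedule here if new requests arrived meanwhile...
    }
}

fn main() {
    let mut s = CompactionScheduler { pending: HashSet::new() };
    assert!(s.schedule(1));
    assert!(!s.schedule(1)); // duplicate request: not scheduled again
    s.on_finished(1);
    assert!(s.schedule(1));
}
```
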
Ruihang Xia
693e8de83a feat: scope spawned task with trace id (#2419)
* feat: scope spawned task with trace id

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-17 09:05:28 +00:00
JeremyHi
542e863ecc fix: missing datanode id on keep lease (#2415) 2023-09-17 07:57:17 +00:00
Ruihang Xia
49310acea1 refactor: rename common-function-macro subcrate (#2418)
* rename common-function-macro to common-macro

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* put impl into their own file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-17 07:56:41 +00:00
Weny Xu
5b08e03944 feat: sync regions between RegionServer and RegionAliveKeeper (#2417)
* feat: sync regions between RegionServer and RegionAliveKeeper

* Apply suggestions from code review

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* refactor: rename event name

* chore: apply suggestions

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-09-17 07:55:44 +00:00
JeremyHi
98a40bae95 feat!: unify naming with options (#2416) 2023-09-17 07:24:57 +00:00
JeremyHi
342a6d071f feat: heartbeat request with header (#2412)
* feat: heartbeat request with header

* chore: frontend send heartbeat with a longer interval
2023-09-16 09:56:41 +00:00
Baasit
0a692aafb0 feat: clap wrapper around sqlness (#2400)
* feat: wrapped sqlness with clap to provide a nice interface

* fix: added spaces and changed -f flag to bool
2023-09-16 08:53:08 +00:00
dennis zhuang
627c5b7419 feat: move table operations from frontend to operator crate (#2411)
* feat: move table operations from frontend to operator crate

* chore: blank line

* fix: toml format

* chore: move constants
2023-09-16 07:58:45 +00:00
Ruihang Xia
5e35087b67 fix: generate region path with given prefix (#2409)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-16 03:23:36 +00:00
Ruihang Xia
c149c123c3 feat: reopen corresponding regions on starting datanode (#2399)
* separate config and datanode impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* partial implement of fetching region id list

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reopen all regions on starting region server

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness & assign default datanode id

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* set writable on lease

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply cr suggs.

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/datanode/src/datanode.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-09-15 13:30:20 +00:00
Vanish
0bd6b9bb39 feat: implement truncate region for mito2 (#2335)
* feat: implement truncate region for mito2.

* chore: add license header and fix typos

* Update src/mito2/src/worker/handle_truncate.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* cr

* chore: consider the flush task being executed before truncating the region.

* test

* feat: check flush and compaction tasks

* chore: remove useless changes

* Update src/mito2/src/manifest/action.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/mito2/src/worker/handle_flush.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: CR, consider sequence number

* test: use EventListener to test the flush task during truncate

* fix: fix listener error

* Update src/mito2/src/engine/truncate_test.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr

* fix: remove set None

* Update src/mito2/src/region/version.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* Update src/mito2/src/worker/handle_flush.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* Update src/mito2/src/worker/handle_truncate.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* doc: add some doc for FlushTruncateListener and RegionTruncate

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-09-15 13:20:01 +00:00
Bamboo1
6aec30a1a8 feat: reserve internal column (#2396)
* feat: reserve internal column

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* test: add function test

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: spell typos

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: resolve conversation

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

---------

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>
2023-09-15 11:19:40 +00:00
LFC
a688760563 fix: validate partition columns (#2393)
fix: partition column must belong to primary keys or equal the time index
2023-09-15 10:07:32 +00:00
LFC
4b13c88752 fix: resolve more integration tests (#2406)
* fix: resolve more integration tests

* Update tests-integration/tests/http.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2023-09-15 09:43:16 +00:00
JeremyHi
9572b1edbb feat: region storage path (#2404)
* feat: region storage path

* Update src/common/meta/src/key/datanode_table.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* chore: by cr

* feat: upgrade proto

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2023-09-15 09:07:54 +00:00
dennis zhuang
43e3c94fd1 refactor: catalog managers (#2405)
* feat: rename catalog::local to catalog::memory

* refactor: catalog managers

* chore: license header
2023-09-15 08:48:14 +00:00
Ruihang Xia
364b99a14c fix: enable ignored promql unit tests (#2403)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-15 07:37:30 +00:00
shuiyisong
a8ae386a57 chore: add #[serde(default)] to new added engine field (#2402)
chore: add serde default to new field
2023-09-15 07:11:57 +00:00
LFC
fe5679e77e refactor: remove table ident (#2368)
* refactor:
1. remove TableIdent, use TableId directly
2. use the latest greptime-proto
3. independently invalidate table id cache and table name cache

* rebase

* fix: resolve PR comments

* fix: resolve PR comments
2023-09-15 05:14:40 +00:00
JeremyHi
8e70b9e982 feat: remove deprecated metadata keys (#2398)
* feat: remove deprecated metadata keys

* feat: this time, weny indeed said [removes it]
2023-09-15 02:11:21 +00:00
JeremyHi
d1adb915bf feat: set readonly first when deregister region (#2391)
* feat: set readonly first when deregister region

* revert distxxx
2023-09-14 12:12:38 +00:00
Yingwen
a84a8ad04f fix: alter table procedure panics while renaming table (#2397)
* fix: procedure panic on renaming table

* test: fix test_insert_and_select invalid arguments

* test: fix test_standalone_insert_and_query using wrong semantic type

* test: fix test_distributed_insert_delete_and_query semantic type
2023-09-14 11:50:00 +00:00
JeremyHi
7bb8a5999c feat!: add engine name to DatanodeTableValue (#2395)
* feat: add engine name to DatanodeTableValue

* fix: by cr
2023-09-14 09:50:35 +00:00
Yingwen
26992d58cd chore: decrease mutable write buffer limit (#2390)
* chore: set mutable limit to half of the global write buffer size

* refactor: put handle_flush_finished after handle_flush_request

* refactor: rename tests.rs to basic_test.rs

* style: fmt code
2023-09-14 08:24:14 +00:00
Ruihang Xia
47bf300869 fix: update sqlness result for order_by (#2389)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-14 07:28:40 +00:00
Yingwen
a7df5a7c9a fix(mito): incorrect field index in ProjectionMapper (#2388)
* chore: update todo comments

* test: add test for projection

* fix: panics when projecting fields

* chore: remove todos
2023-09-14 04:15:15 +00:00
Yingwen
d4ae8a6fed feat(mito): Add writable flag to region (#2349)
* feat: add writable flag to region.

* refactor: rename MitoEngine to MitoEngine::scanner

* feat: add set_writable() to RegionEngine

* feat: check whether region is writable

* feat: make set_writable sync

* test: test set_writable

* docs: update comments

* feat: send result on compaction failure

* refactor: wrap output sender in new type

* feat: on failure

* refactor: use get_region_or/writable_region_or

* refactor: remove send_result

* feat: notify waiters on flush scheduler drop

* test: fix tests

* fix: only alter writable region
2023-09-14 02:45:30 +00:00
Yingwen
da54a0c139 fix: alter table procedure forgets to update next column id (#2385)
* feat: add more info to error messages

* feat: store next column id in procedure

* fix: update next column id for table info

* test: fix add col test

* chore: remove location from invalid request error

* test: update test

* test: fix test
2023-09-14 02:06:57 +00:00
Ruihang Xia
cc7eb3d317 fix: querying temporary table (#2387)
* fix information schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove log

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-13 14:30:56 +00:00
Weny Xu
93f3048f4f refactor: migrate OpenDal to 0.39 (#2383)
* chore: bump opendal to 7d552

* refactor: migrate OpenDal to 0.39

* chore: apply suggestions from CR
2023-09-13 12:43:53 +00:00
LFC
d08b05c963 fix: make test-integration able to compile (#2384)
* fix: make test-integration able to compile

* chore: fmt toml

---------

Co-authored-by: WenyXu <wenymedia@gmail.com>
2023-09-13 12:42:46 +00:00
JeremyHi
f76aa278fd feat: atomic metadata (#2366)
* feat: atomic metadata creation

* chore: exist exists

* chore: license header

* chore: weny never said that

* feat: add put_conditionally to kv_backend
2023-09-13 10:51:05 +00:00
JeremyHi
6f4779b474 feat: engine name in heartbeat (#2377) 2023-09-13 09:10:10 +00:00
Ruihang Xia
de723d9c1c fix: update sqlness result in distributed mode (#2381)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-13 09:07:55 +00:00
dennis zhuang
7448e975c2 chore: change error messages (#2379)
* chore: change error messages

* chore: remove location in table not found error msg
2023-09-13 08:21:03 +00:00
dennis zhuang
3f97a0d285 fix: gRPC max message size limitation (#2375)
* fix: gRPC max message size limitation

* chore: don't set max_encoding_message_size
2023-09-13 08:13:49 +00:00
Ruihang Xia
60bdf9685f feat: use the latest command line options for sqlness runner (#2371)
feat: use the latest command line options

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-13 03:38:43 +00:00
Ruihang Xia
9c76d2cf54 feat: convert sql number to values with target type (#2370)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-13 11:14:42 +08:00
Weny Xu
1a7268186b chore: bump raft-engine to 22dfb4 (#2360) 2023-09-12 07:57:15 -05:00
Ruihang Xia
eeecce4623 refactor: remove table procedure (#2359)
remove table procedure

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
Ruihang Xia
1ad5f6e5d5 refactor: system tables in FrontendCatalogManager (#2358)
* rename method names

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove system table, table engine, register/deregister

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add system catalog

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* run nextest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* some documents

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: fix clippy

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: WenyXu <wenymedia@gmail.com>
2023-09-12 07:57:15 -05:00
Yingwen
46eca5026e fix(mito): Stores and recovers flushed sequence (#2355)
* test: add test for reopen

* feat: last entry id starts from flushed entry id

* fix: store flushed sequence and recover it from manifest

* test: check sequence in alter test

* test: more tests for alter
2023-09-12 07:57:15 -05:00
Weny Xu
912341e4fa fix: fix start issues under standalone mode (#2352)
* fix: fix standalone starts

* chore: bump raft-engine to 571462e

* refactor: remove MetadataService
2023-09-12 07:57:15 -05:00
JeremyHi
80c5d52015 feat: stop region server (#2356)
* feat: stop region server

* fix: close region first
2023-09-12 07:57:15 -05:00
Zhenchi
4af126eb1b feat: consolidate Insert request related partitioning and distributed processing operations into Inserter (#2346)
* refactor: RegionRequest as param of RegionRequestHandler.handle

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: partition insert & delete reqs for both standalone and distributed mode

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: nit change

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: wrong function name

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: do request in inserter & deleter

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: remove RegionRequestHandler.handle

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: rename table_creator

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: nit change

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: nit change

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-12 07:57:15 -05:00
LFC
fe954b78a2 refactor: system tables in new region server (#2344)
refactor: inverse the dependency between system tables and catalog manager
2023-09-12 07:57:15 -05:00
JeremyHi
3cab6de391 feat: filter out empty heartbeat req (#2345)
* feat: filter out empty heartbeat request

* fix: big mistake
2023-09-12 07:57:15 -05:00
Yingwen
606ee43f1d feat(mito): Implement skeleton for alteration (#2343)
* feat: impl handle_alter wip

* refactor: move send_result to worker.rs

* feat: skeleton for handle_alter_request

* feat: write requests should wait for alteration

* feat: define alter request

* chore: no warnings

* fix: remove memtables after flush

* chore: update comments and impl add_write_request_to_pending

* feat: add schema version to RegionMetadata

* feat: impl alter_schema/can_alter_directly

* chore: use send_result

* test: pull next_batch again

* feat: convert pb AlterRequest to RegionAlterRequest

* feat: validate alter request

* feat: validate request and alter metadata

* feat: allow none location

* test: test alter

* fix: recover files and flushed entry id from manifest

* test: test alter

* chore: change comments and variables

* chore: fix compiler errors

* feat: add is_empty() to MemtableVersion

* test: fix metadata alter test

* fix: Compaction picker doesn't notify waiters if it returns None

* chore: address CR comments

* test: add tests for alter request

* refactor: use send_result
2023-09-12 07:57:15 -05:00
Lei, HUANG
3331e3158c feat(mito2): compaction (#2317)
* feat: compaction component

* feat: mito2 compaction

* Avoid building time range predicates when merging SST files, since TWCS doesn't enforce a strict time window.

* fix: some CR comments

* minor: change CompactionRequest::senders to an option

* chore: handle compaction finish error

* feat: integrate compaction into region worker

* chore: rebase upstream

* fix: Some CR comments

* chore: Apply suggestions from code review

* style: fix clippy

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-09-12 07:57:15 -05:00
Weny Xu
a4604afde5 refactor: rename NEXT_TABLE_ROUTE_PREFIX to TABLE_ROUTE_PREFIX (#2348)
* refactor: rename NEXT_TABLE_ROUTE_PREFIX to TABLE_ROUTE_PREFIX

* chore: apply suggestions from CR
2023-09-12 07:57:15 -05:00
Weny Xu
f386329e29 refactor: introduce DdlTaskExecutor and refactor statement executor (#2341)
* feat: add kv store option

* refactor: refactor statement executor

* refactor: refactor standalone table creator

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor: move ShowCreateTable and CreateDatabase to StatementExecutor

* fix: fix RegionDistribution

* feat: build standalone

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2023-09-12 07:57:15 -05:00
Yingwen
3f6d557b8d feat: Implements a reader to make schema compatible (#2326)
* docs: update comment

* feat: Add compat reader to SeqScan

* feat: add struct to compat pk and fields

* refactor: remove unused fields from ParquetReader

* feat: compat framework

* feat: Implement CompatPrimaryKey and CompatFields

* feat: implement compat reader

* feat: Test compat reader

* test: test compat reader

* feat: add more checks to concat

* style: fix clippy

* test: more tests for compat reader

* test: test reader with projection
2023-09-12 07:57:15 -05:00
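
The compat-reader idea in #2326 above, reduced to its essence: rows written under an older schema are widened to the current schema by filling the columns the old file lacks with defaults. A toy sketch over `Option<i64>` columns (all names and types here are assumed, not the mito2 API):

```rust
/// A batch as a list of named columns; real code would use typed vectors.
struct Batch {
    columns: Vec<(String, Vec<Option<i64>>)>,
    num_rows: usize,
}

/// Widens `batch` to `target_schema`, filling missing columns with NULLs.
fn compat_batch(batch: &Batch, target_schema: &[String]) -> Batch {
    let columns = target_schema
        .iter()
        .map(|name| {
            let values = batch
                .columns
                .iter()
                .find(|(n, _)| n == name)
                .map(|(_, v)| v.clone())
                // Column absent in the old file: pad with defaults.
                .unwrap_or_else(|| vec![None; batch.num_rows]);
            (name.clone(), values)
        })
        .collect();
    Batch { columns, num_rows: batch.num_rows }
}

fn main() {
    let old = Batch {
        columns: vec![("a".to_string(), vec![Some(1), Some(2)])],
        num_rows: 2,
    };
    let new = compat_batch(&old, &["a".to_string(), "b".to_string()]);
    assert_eq!(new.columns[1].1, vec![None, None]);
}
```
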
Ruihang Xia
6215f124f7 refactor: remove datanode instance (#2342)
* pass nextest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deadcode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename region_alive_keepers

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
LFC
1d83c942a9 refactor: script table creation (#2340)
* refactor:
1. remove method `register_system_table` from CatalogManager
2. the creation of ScriptTable (as a system table) is moved out of CatalogManager; instead, the ScriptTable is created while the Frontend instance is starting, by calling the Frontend instance's gRPC handler.

* rebase
2023-09-12 07:57:15 -05:00
Ruihang Xia
f287a5db9f feat: adapt region keep aliver for region server (#2333)
* basic impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor, collapse one layer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove old heartbeat handler impls

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove old region alive keeper

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove remote catalog manager

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* global replace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test countdown task

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
Zhenchi
dac6b2e80a feat(frontend): migrate delete to region server (#2329)
* feat(frontend): migrate delete to region server

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add more check and do trim columns

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: RegionRequestHandler.handle returns AffectedRows

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-12 07:57:15 -05:00
Yingwen
1e44e86d81 feat(mito): Stall write requests and add more flush tests (#2322)
* feat: impl reject write

* feat: sanitize reject size

* feat: add should_stall to WriteBufferManager

* feat: stall requests

* test: mock WriteBufferManager

* feat: add new_with_manager for test and remove object_store from inner

* feat: add an event listener for tests

* feat: Use listener to test flush

* refactor: add flush_test.rs

* style: fix clippy

* feat: test write stall

* test: test flush empty
2023-09-12 07:57:15 -05:00
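
A back-of-the-envelope version of the stalling logic in #2322 above: the worker consults the buffer manager before applying a write, and parks the request when memory is over the stall threshold. Names and thresholds are assumed, not the mito2 API:

```rust
use std::collections::VecDeque;

struct WriteBufferManager {
    usage: usize,
    global_limit: usize,
}

impl WriteBufferManager {
    /// Stall once total usage passes the global write buffer limit.
    fn should_stall(&self) -> bool {
        self.usage >= self.global_limit
    }
}

struct Worker {
    manager: WriteBufferManager,
    stalled: VecDeque<String>, // stand-in for pending write requests
}

impl Worker {
    fn handle_write(&mut self, req: String) {
        if self.manager.should_stall() {
            // Keep the request around; it is retried after a flush frees memory.
            self.stalled.push_back(req);
        } else {
            // apply the write ...
        }
    }
}

fn main() {
    let mut worker = Worker {
        manager: WriteBufferManager { usage: 2048, global_limit: 1024 },
        stalled: VecDeque::new(),
    };
    worker.handle_write("put k=v".to_string());
    assert_eq!(worker.stalled.len(), 1);
}
```
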
JeremyHi
56691ff03b refactor: mailbox timeout (#2330)
refactor: Optimize the timeout mechanism of the mailbox
2023-09-12 07:57:15 -05:00
Weny Xu
e4de63625f refactor: refactor raft engine backend and state store (#2336)
* refactor: remove redundant code

* refactor: refactor RaftEngineBackend Error to common_meta::error::Error

* refactor: refactor state store

* chore: apply suggestions from CR
2023-09-12 07:57:15 -05:00
Ruihang Xia
4b2b59c31b refactor: clean unnecessary disabled lints (#2338)
* clean manifest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean engine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean region

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean asscess_layer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean manifest manager

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean row_converter

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean scheduler

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean worker

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
Weny Xu
2ee2d29085 refactor: move Sequence to common meta (#2337) 2023-09-12 07:57:15 -05:00
Yingwen
c3f6529178 fix: improve error message in validate_proto_value (#2328)
* fix: correct error message in validate_proto_value()

* fix: print location in InvalidRequest error

* style: format
2023-09-12 07:57:15 -05:00
Ruihang Xia
eb7116ab56 feat: read/write works in distributed mode 🎉 (#2327)
* add do_get method to RegionRequestHandler

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move RegionRequestHandler to client crate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use RegionRequestHandler in MergeScan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* minor fix

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
Zhenchi
5f7d48f107 feat(frontend): reorg insert converters and introduce stmt_to_region (#2324)
* feat(frontend): reorg insert converters and introduce stmt_to_region

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: shorten import path

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: add check for column count

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: clippy

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-12 07:57:15 -05:00
LFC
711e27d9fa feat: distributed alter table in region server (#2311)
* feat: distributed alter table in region server

* rebase
2023-09-12 07:57:15 -05:00
Weny Xu
922e342b63 refactor: refactor ddl manager (#2306)
* refactor: refactor ddl manager

* chore: apply suggestions from CR
2023-09-12 07:57:15 -05:00
Zhenchi
7dde9ce3ce feat(frontend): migrate insert to region server (#2318)
* feat(frontend): migrate insert to region server

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: move converter to Inserter

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: rename convert function

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: address comments

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: add span id

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: compilation

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* retrigger action

* retrigger action

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-12 07:57:15 -05:00
Yingwen
3eccb36047 feat: avoid using vector to get default value (#2323) 2023-09-12 07:57:15 -05:00
Ruihang Xia
f71aa373c1 feat: start datanode with config (#2312)
* remove memory-catalog and procedure

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* derive serde for MitoConfig

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* start datanode with configs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove dir in WalConfig

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add rename field attr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add stupid duplicated mito config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove wrong import

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* weird compile error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
Ruihang Xia
50fca2400e feat: adapt methods from RegionEngine for MitoEngine (#2315)
* feat: adapt methods from RegionEngine for MitoEngine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* minor fixes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
JeremyHi
920763d7dd feat: add metric and manage tool for InactiveRegionKey (#2313)
* feat: add metric and manage tool for InactiveRegionKey

* chore: by review comment
2023-09-12 07:57:15 -05:00
dennis zhuang
a3d5931fca feat: unify all protocol options (#2316)
* feat: unify all protocol options

* feat: adds enable to example configs

* chore: style

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-09-12 07:57:15 -05:00
dennis zhuang
b1599ad3a5 fix: can't adding new columns as primary key (#2310) 2023-09-12 07:57:15 -05:00
dennis zhuang
38697e0c4d feat: build http client for cloud object storage (#2314)
* feat: build http client for s3/oss/azblob storages

* chore: style

* fix: test

* fix: cargo toml fmt
2023-09-12 07:57:15 -05:00
Yingwen
50220f8f04 feat: Impl write buffer manager for mito2 (#2309)
* feat: add write buffer manager to builder

* feat: impl WriteBufferManager

* feat: impl MemtableVersion::mutable_usage

* chore: Address CR comments

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* refactor: rename mutable_limitation to mutable_limit

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-09-12 07:57:15 -05:00
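
A sketch of the flush trigger described in #2309, combined with the "mutable limit is half of the global write buffer size" heuristic from #2390 earlier in this log. All numbers and names are assumed:

```rust
/// Flush-triggering side of a write buffer manager (names assumed).
struct WriteBufferManager {
    global_limit: usize,
    mutable_limit: usize,
}

impl WriteBufferManager {
    /// The mutable limit defaults to half of the global limit, so flushes
    /// start well before writes have to be stalled.
    fn new(global_limit: usize) -> Self {
        WriteBufferManager { global_limit, mutable_limit: global_limit / 2 }
    }

    fn should_flush_engine(&self, mutable_usage: usize, total_usage: usize) -> bool {
        mutable_usage > self.mutable_limit || total_usage > self.global_limit
    }
}

fn main() {
    let manager = WriteBufferManager::new(1024 * 1024);
    assert!(!manager.should_flush_engine(100 * 1024, 200 * 1024));
    assert!(manager.should_flush_engine(600 * 1024, 700 * 1024));
}
```
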
Niwaka
3504d8254e fix: unused table options (#2267)
* fix: unused table options keys

* refactor: simplify validate table options

* chore: Add newlines

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-09-12 07:57:15 -05:00
dennis zhuang
fad58835bf fix: don't raise an error when manifest directory is not created (#2308)
* fix: don't raise an error when manifest directory is not created

* chore: apply suggestion

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-09-12 07:57:15 -05:00
Lei, HUANG
43fdff3639 feat: remove memtable request (#2307)
* refactor: remove scan request from memtable API

* docs: Update comment

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-09-12 07:57:15 -05:00
Weny Xu
271f80daad fix: LoadBase Selector cannot follow the region distribution rules (#2259)
* fix: LoadBase Selector cannot follow the region distribution rules

* chore: apply suggestions from CR
2023-09-12 07:57:15 -05:00
Lei, HUANG
36231a5d50 feat(mito2): add alloc_tracker for memtable (#2266)
* feat: add alloc_tracker for memtable

* chore: integrate WriteBufferManager
2023-09-12 07:57:15 -05:00
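
One plausible shape for the memtable `alloc_tracker` in #2266 above: each memtable reports the bytes it allocates to a shared counter, so a write buffer manager can observe total usage, and the share is released when the memtable is dropped. All names here are assumed:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};
use std::sync::Arc;

/// Shared counter a write buffer manager could poll.
#[derive(Default)]
struct MemoryStats {
    bytes_allocated: AtomicUsize,
}

/// Per-memtable tracker that forwards allocations to the shared stats.
struct AllocTracker {
    stats: Arc<MemoryStats>,
    bytes: AtomicUsize,
}

impl AllocTracker {
    fn new(stats: Arc<MemoryStats>) -> Self {
        AllocTracker { stats, bytes: AtomicUsize::new(0) }
    }

    fn on_allocate(&self, size: usize) {
        self.bytes.fetch_add(size, Ordering::Relaxed);
        self.stats.bytes_allocated.fetch_add(size, Ordering::Relaxed);
    }
}

impl Drop for AllocTracker {
    /// Releases this memtable's share when it is dropped (e.g. after flush).
    fn drop(&mut self) {
        let bytes = self.bytes.load(Ordering::Relaxed);
        self.stats.bytes_allocated.fetch_sub(bytes, Ordering::Relaxed);
    }
}

fn main() {
    let stats = Arc::new(MemoryStats::default());
    let tracker = AllocTracker::new(stats.clone());
    tracker.on_allocate(1024);
    assert_eq!(stats.bytes_allocated.load(Ordering::Relaxed), 1024);
    drop(tracker);
    assert_eq!(stats.bytes_allocated.load(Ordering::Relaxed), 0);
}
```
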
JeremyHi
a7fa40e16d fix: filter out outdated heartbeat (#2303)
* fix: filter out outdated heartbeat, #1707

* feat: reorder handlers

* refactor: disableXXX to enableXXX

* feat: make full use of region leases to facilitate failover

* chore: minor refactor

* chore: by comment

* feat: logging on inactive/active
2023-09-12 07:57:15 -05:00
Yingwen
648b2ae293 feat(mito): Flush region (#2291)
* chore: call handle_flush_request

* feat: alias SchedulerRef and clean scheduler on drop

* feat: add scheduler to workers

* feat: remove RegionMemtableStats

* feat: pick regions to flush

* feat: add more fields to region flush task

* feat: smallvec workspace dep

* feat: Use list to hold immutable memtables

* feat: flush job wip

* feat: use access layer to read write sst

* feat: flush memtables to l0

* feat: write manifest

* feat: schedule next flush on success

* feat: schedule flush on success and failure

* feat: add purger to region

* feat: apply edit after flush

* feat: collect stats for SSTs

* feat: manual flush

* test: test flush and fix manifest test

* feat: remove flush scheduler job limit

* fix: typo

* style: clippy

* feat: clean flushed files on failure

* chore: address CR comment

* refactor: Use put_rows

* feat: Clean flush scheduler on drop

* feat: remove region flush status on drop and close

* chore: address CR comment
2023-09-12 07:57:15 -05:00
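
One way to read "pick regions to flush" from #2291 above: when memory is tight, flush the region holding the largest mutable memtable first, since that frees the most memory per flush. A sketch under that assumption; the real mito2 policy may differ:

```rust
/// Minimal region stats for the picker (fields assumed).
struct RegionStats {
    region_id: u64,
    mutable_bytes: usize,
}

/// Picks the region whose mutable memtable frees the most memory.
fn pick_region_to_flush(regions: &[RegionStats]) -> Option<u64> {
    regions
        .iter()
        .filter(|r| r.mutable_bytes > 0)
        .max_by_key(|r| r.mutable_bytes)
        .map(|r| r.region_id)
}

fn main() {
    let regions = vec![
        RegionStats { region_id: 1, mutable_bytes: 512 },
        RegionStats { region_id: 2, mutable_bytes: 4096 },
    ];
    assert_eq!(pick_region_to_flush(&regions), Some(2));
}
```
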
Weny Xu
fa5e3b94d3 refactor: refactor ddl procedure (#2304) 2023-09-12 07:57:15 -05:00
Weny Xu
4818887e38 refactor: refactor DistInstance (#2305) 2023-09-12 07:57:15 -05:00
Ruihang Xia
eddff17523 feat: drop region in mito2 (#2286)
* basic impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* check in opening region

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/mito2/src/worker/handle_drop.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* remove file in order

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix remove logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use scan to list files

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-09-12 07:57:15 -05:00
Weny Xu
c839ed271c refactor: ddl context (#2301)
* refactor: ddl context

* refactor: remove unused code

* chore: apply suggestions from CR
2023-09-12 07:57:15 -05:00
JeremyHi
4d2cae4174 refactor: inactive node manager (#2300)
refactor: use region_id instead of table&region_num in InactiveNodeManager
2023-09-12 07:57:15 -05:00
Yingwen
b234733c61 feat(mito): Support deleting rows in mito2 (#2275)
* feat: check delete request

* test: test delete and overwrite
2023-09-12 07:57:15 -05:00
Lei, HUANG
9691d19601 feat: impl kv backend for raft engine (#2280)
* feat: kv backend on raft-engine

* feat: raft-engine kvbackend

* fix: toml

* fix: some review comments

* chore: optimize delete

* fix: lift lock in batch_delete
2023-09-12 07:57:15 -05:00
LFC
ff3881f0e1 feat: drop distributed Mito2 table (#2260)
* feat: drop distributed Mito2 table

* rebase develop

* fix: resolve PR comments

* fix: resolve PR comments
2023-09-12 07:57:15 -05:00
JeremyHi
fa542f6e93 feat: new heartbeat (#2299) 2023-09-12 07:57:15 -05:00
Zhenchi
d6c82867d5 refactor: remove the most Table impls (#2274)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
Lei, HUANG
86d56f71ef fix: flume bug (#2298)
fix: flume
2023-09-12 07:57:15 -05:00
Zhenchi
b42d343ae6 feat(frontend): unify column inserter and row inserter (#2293)
* refactor: unify inserter

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat(frontend): unify column inserter and row inserter

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: remove redundant clone

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: move empty check ahead

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* feat: add more logs

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: leading license

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* adjust indent

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-09-12 07:57:15 -05:00
Yingwen
365e557e7a feat(mito): Integrate access layer and file purger to region (#2296)
* feat: alias SchedulerRef and clean scheduler on drop

* feat: add scheduler to workers

* feat: use access layer to read write sst

* feat: add purger to region

* refactor: allow getting region_dir from AccessLayer

* feat: add scheduler to FlushScheduler

* feat: getter for object store

* chore: fix typo

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
Ruihang Xia
46d171d341 chore: bump greptime-proto to replace region_dir (#2290)
chore: bump greptime-proto to replace region_dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
Ruihang Xia
718246ea1a feat: implement heartbeat for region server (#2279)
* retrieve region stats from region server

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement heartbeat handler

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* start datanode with region server

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove comment

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable non-unit test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement heartbeat task

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-12 07:57:15 -05:00
JeremyHi
58d07e0e62 feat: v04 rm unused exprs (#2285)
* feat: rm compact and flush exprs

* refactor: continue to rm compact and flush
2023-09-12 07:57:15 -05:00
dennis zhuang
db89235474 feat: only allow timestamp type as time index (#2281)
* feat: only allow timestamp data type as time index

* test: update sqltest cases, todo: need some fixes

* fix: sqlness tests

* fix: forgot to add back the cte test

* chore: style
2023-09-12 07:57:15 -05:00
shuiyisong
6e593401f7 refactor: collecting memory usage during scan (#2353)
* chore: try custom metrics

* chore: fix header

* chore: minor change
2023-09-12 15:52:57 +08:00
Yingwen
466fbaca5d fix: panic in try_into_vector() (#2351) 2023-09-11 19:06:31 +08:00
ZonaHe
de966af83b feat: update dashboard to v0.3.3 (#2339)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-09-06 19:19:56 +08:00
Zou Wei
b8c50d00aa feat: sqlness test for interval type (#2265)
* feat: add integration-test for interval type.

* chore: add two cases.

* chore: cr

* chore: Field to Column
2023-09-04 14:30:48 +08:00
Ruihang Xia
a12ee5cab8 fix: qualify inputs on handling join in promql (#2297)
* add qualifier to join inputs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add one more case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-09-01 11:51:34 +08:00
ZonaHe
a0d15b489a feat: update dashboard to v0.3.2 (#2295)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-08-31 22:05:00 +08:00
shuiyisong
baa372520d fix: json compatibility to null (#2287)
* fix: handle existing null value for schema name value

* chore: fix null check

* fix: change catalognamevalue and schemanamevalue to option

* fix: fix null case
2023-08-31 14:21:10 +08:00
shuiyisong
5df4d44761 feat: schema level opts (#2283)
* chore: update proto

* chore: add try from for schema name value

* chore: merge schema opts to table opts while creating table

* chore: use table ttl opts first

* chore: add unit test

* chore: update proto version
2023-08-30 08:11:08 +00:00
Weny Xu
8e9f2ffce4 fix: skip procedure if target route is not found (#2277)
* fix: skip procedure if target route is not found

* chore: apply suggestions from CR
2023-08-30 06:59:50 +00:00
Weny Xu
1101e7bb18 fix: deregister table after keeper closes table (#2278)
* fix: deregister table after keeper closes table

* chore: apply suggestions from CR
2023-08-30 03:43:04 +00:00
zyy17
5fbc941023 ci: upload the latest artifacts to 'latest/' directory of S3 bucket in scheduled and formal release (#2276)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-08-29 09:00:45 +00:00
Bamboo1
68600a2cf9 feat(mito2): add file purger and cooperate with scheduler to purge sst files (#2251)
* feat: add file purger and use scheduler

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: print some information about handling error message

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: resolve conversation

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: resolve conversation

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: resolve conflicting files

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

---------

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>
2023-08-29 07:55:03 +00:00
Yingwen
805f254d15 feat(mito): Flush framework for mito2 (#2262)
* feat: write buffer manager

* feat: skeleton

* feat: add flush logic to write path

* feat: add methods to memtable trait

* feat: freeze memtable

* feat: define flush task

* feat: schedule_flush wip

* feat: adding pending requests/tasks

* feat: separate ddl request and background request

* feat: Remove RegionTask and RequestBody

* feat: handle flush related requests

* feat: make tests pass

* style: fix clippy

* docs: update comment

* refactor: rename background requests

* feat: replace Option<RegionWriteCtx> with an enum MaybeStalling
2023-08-29 07:13:15 +00:00
Zhenchi
2a6c830ca7 refactor(table): remove Table impl for system (#2270)
* refactor(table): remove Table impl for system

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: format & import

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-29 03:43:43 +00:00
Weny Xu
22dea02485 fix: use RegionId region number instead (#2273) 2023-08-29 02:52:24 +00:00
LFC
ef75e8f7c3 feat: create distributed Mito2 table (#2246)
* feat: create distributed Mito2 table

* rebase develop
2023-08-28 12:07:52 +00:00
Weny Xu
71fc3c42d9 fix: open region does not register catalog/schema (#2271)
* fix: open region does not register catalog/schema

* fix: fix ci
2023-08-28 12:06:10 +00:00
JeremyHi
c02ac36ce8 feat: avoid confusion in desc table (#2272)
feat: Field to Column to avoid confusion in DESC TABLE
2023-08-28 11:50:33 +00:00
Lei, HUANG
c112b9a763 feat(mito2): WAL replay (#2264)
* feat: replay memtable when opening table

* test: region replay

* refactor: save logstore in TestEnv

* fix: some cr comments

* chore: rebase develop

* chore: update last entry id during replay
2023-08-28 11:45:23 +00:00
Weny Xu
96fd17aa0a fix: fix typos (#2268) 2023-08-28 09:26:00 +00:00
Ruihang Xia
6b8cf0bbf0 feat: impl region engine for mito (#2269)
* update proto

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* convert request

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update proto

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* import result convertor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename symbols

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-28 09:24:12 +00:00
Yingwen
e2522dff21 feat(mito): Skeleton for scanning a region (#2230)
* feat: define stream builder

* feat: scan region wip

* feat: create SeqScan in ScanRegion

* feat: scanner

* feat: engine handles scan request

* feat: map projection index to column id

* feat: Impl record batch stream

* refactor: change BatchConverter to ProjectionMapper

* feat: add column_ids to mapper

* feat: implement SeqScan::build()

* chore: fix typo

* docs: add mermaid for ScanRegion

* style: fix clippy

* test: fix record batch test

* fix: update sequence and entry id

* test: test query

* feat: address CR comment

* chore: address CR comments

* chore: Update src/mito2/src/read/scan_region.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-08-28 06:59:31 +00:00
LFC
d8f851bef2 fix: keep region failover state not changed upon failure (#2261) 2023-08-28 04:40:47 +00:00
JeremyHi
63b22b2403 feat: prometheus row inserter (#2263)
* feat: prometheus row inserter

* chore: add unit test

* refactor: to row_insert_requests

* chore: typo

* chore: alloc row by TableData

* chore: by review comment
2023-08-28 03:22:23 +00:00
Weny Xu
c56f5e39cd refactor: set default metasrv procedure retry times to 12 (#2242) 2023-08-26 07:41:15 +00:00
Weny Xu
7ff200c0fa fix: align region numbers to real regions (#2257) 2023-08-25 11:48:58 +00:00
dennis zhuang
5160838d04 chore: change version to 0.4.0-nightly (#2258)
* chore: change version to 0.4.0-nightly

* fix: test
2023-08-25 09:44:39 +00:00
shuiyisong
f16f58266e refactor: query_ctx from http middleware (#2253)
* chore: change userinfo to query_ctx in http handler

* chore: minor change

* chore: move prometheus http to http mod

* chore: fix unit test

* chore: add back schema check

* chore: minor change

* chore: remove clone
2023-08-25 09:36:33 +00:00
Ruihang Xia
8d446ed741 fix: quote ident on rendered SQL (#2248)
* fix: quote ident on rendered SQL

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* read quote style from query context

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-25 07:25:21 +00:00
JeremyHi
de1daec680 feat: upgrade desc table output (#2256) 2023-08-25 06:52:22 +00:00
Zhenchi
9d87c8b6de refactor(table): cleanup dist table (#2255)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-25 06:37:39 +00:00
Lei, HUANG
6bf260a05c chore: write to mito2 (#2250)
* chore: write to mito2

* fix: clippy

* feat: bridge memtable

* chore: rebase develop
2023-08-25 06:18:42 +00:00
WU Jingdi
15912afd96 fix: the inconsistent order of input/output in range select (#2229)
* fix: the inconsistent order of input/output in range select

* chore: apply CR
2023-08-25 04:12:59 +00:00
Lei, HUANG
dbe0e95f2f feat(mito2): concat and projection (#2243)
* refactor: use arrow::compute::concat instead of push values to vector builders

* feat: support projection

* refactor: remove sequence

* refactor: concatenate

* fix: series must not be empty

* refactor: projection
2023-08-25 03:25:27 +00:00
Ruihang Xia
20b7f907b2 fix: promql planner should clear its states on each selector (#2247)
* reset planner status on selector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add empty line

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sort result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* mask fields to keep ordering

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-25 03:07:44 +00:00
Weny Xu
b13d932e4e fix: fix RegionAliveKeeper does not find the table after restarting (#2249) 2023-08-25 03:05:17 +00:00
Bamboo1
48348aa364 fix: fix test_scheduler_continuous_stop in scheduler (#2252)
* fix: fix test_scheduler_continuous_stop in scheduler

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: add document annotation

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

---------

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>
2023-08-25 02:59:48 +00:00
Zhenchi
9ce73e7ca1 refactor(frontend): TableScan instead of scan_to_stream for COPY TO (#2244)
* refactor(frontend): TableScan instead of `scan_to_stream` for `COPY TO`

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: format

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-24 12:46:54 +00:00
Ruihang Xia
b633a16667 feat: apply rewriter to subquery exprs (#2245)
* apply rewriter to subquery exprs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* workaround for datafusion's check

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change time index type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-24 11:48:04 +00:00
Zhenchi
0a6ab2a287 refactor(script): not to call scan_to_stream on table (#2241)
* refactor(script): not to call `scan_to_stream` on table

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: build plan via LogicalPlanBuilder

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-24 08:10:07 +00:00
JeremyHi
7746e5b172 feat: dist row inserter (#2231)
* feat: frontend row inserter

* feat: row splitter

chore: row splitter's unit test

* feat: RowDistInserter

* feat: make influxdb line protocol using row-based protocol

* Update src/partition/src/row_splitter.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/frontend/src/instance/distributed/row_inserter.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: by review comment

* Update src/frontend/src/instance/distributed/row_inserter.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: by comment

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-08-24 06:58:05 +00:00
Weny Xu
a7e0e2330e fix: invalidate cache after altering (#2239) 2023-08-24 03:56:17 +00:00
Lei, HUANG
19d2d77b41 fix: parse large timestamp (#2185)
* feat: support parsing large timestamp values

* chore: update sqlness tests

* fix: tests

* fix: allow larger window
2023-08-24 03:52:15 +00:00
Yingwen
4ee1034012 feat(mito): merge reader for mito2 (#2210)
* feat: Implement slice and first/last timestamp for Batch

* feat(mito): implements sort/concat for Batch

* chore: fix typo

* chore: remove comments

* feat: sort and dedup

* test: test batch operations

* chore: cast enum to test op type

* test: test filter related api

* style: fix clippy

* feat: implement Node and CompareFirst

* feat: merge reader wip

* feat: merge wip

* feat: use batch's operation to sort and dedup

* feat: implement BatchReader for MergeReader

* feat: simplify codes

* test: test merge reader

* refactor: use test util to create batch

* refactor: remove unused imports

* feat: update comment

* chore: remove metadata() from Source

* chore: update comment

* feat: source supports batch iterator

* chore: update comment
2023-08-24 03:37:51 +00:00
Ruihang Xia
e5ba3d1708 feat: rewrite the dist analyzer (#2238)
* it works!

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add documents

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unstable timestamp from sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename rewriter struct

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-24 03:29:08 +00:00
dennis zhuang
8b1f4eb958 feat: types sqlness tests (#2073)
* feat: timestamp types sqlness tests

* feat: adds timestamp tests

* test: add string tests

* test: comment a case in timestamp

* test: add float type tests

* chore: adds TODO

* feat: set TZ=UTC for sqlness test
2023-08-24 03:26:19 +00:00
discord9
eca7e87129 chore: try from value (#2236)
* chore: try from value

* chore: add TryFromValueError variant
2023-08-24 02:44:13 +00:00
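
The `TryFrom`-based conversion mentioned above, sketched with a toy `Value` enum and a dedicated error variant; the real greptime `Value` has many more variants, and the names here are only modeled on the commit message:

```rust
use std::convert::TryFrom;

#[derive(Debug)]
enum Value {
    Int64(i64),
    String(String),
}

#[derive(Debug)]
enum Error {
    /// Modeled on the `TryFromValueError` variant added in the commit above.
    TryFromValue { reason: String },
}

impl TryFrom<Value> for i64 {
    type Error = Error;

    /// Fallible conversion: succeeds only for the matching variant.
    fn try_from(value: Value) -> Result<Self, Self::Error> {
        match value {
            Value::Int64(v) => Ok(v),
            other => Err(Error::TryFromValue {
                reason: format!("expected Int64, got {:?}", other),
            }),
        }
    }
}

fn main() {
    assert_eq!(i64::try_from(Value::Int64(42)).unwrap(), 42);
    assert!(i64::try_from(Value::String("x".into())).is_err());
}
```
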
Weny Xu
beb92ba1d2 refactor: use table id instead of table ident (#2233) 2023-08-23 13:28:08 +00:00
Lei, HUANG
fdb5ad23bf refactor: use Batch::sort_and_dedup instead of Values::sort_in_place (#2235) 2023-08-23 08:56:49 +00:00
Ruihang Xia
d581688fd2 fix: dist planner has wrong behavior in table with multiple partitions (#2237)
* fix: dist planner has wrong behavior in table with multiple partitions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update tests/cases/distributed/explain/multi_partitions.sql

Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-23 08:32:20 +00:00
Bamboo1
4dbc32f532 refactor: remove associate type in scheduler to simplify it #2153 (#2194)
* feature: add a simple scheduler using flume

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: only use a sender rather clone many senders

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: use select to avoid loop

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: add parameters in new function to build the flume capacity and number of receivers

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* test: add countdownlatch test concurrency

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* test: replace countdownlatch with a barrier to test concurrency, and wait for all tasks to finish in stop

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: add some document annotation

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: add license header

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: add Cargo.lock

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: Cargo.toml format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: delete println in test

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: add error handle

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: fix error handle and add test scheduler stop

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: spelling mistake

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: wait all tasks finished

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: add todo which need wrap Future returned by send_async

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* test: remove unnecessary sleep in test

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: resolve some conflicts

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: resolve conversation

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: code format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: modify the function of schedule to synchronize and drop sender after stopping scheduler

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

---------

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>
2023-08-23 06:28:00 +00:00
Zhenchi
af95e46512 refactor(table): eliminate calls to DistTable.delete (#2225)
* refactor(table): eliminate calls to DistTable.delete

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: format

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: clippy

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-23 02:33:48 +00:00
Weny Xu
d81ddd8879 chore: fix clippy (#2232) 2023-08-23 02:24:29 +00:00
Ning Sun
88247e4284 fix!: resolve residual issues with removing prometheus port (#2227)
* fix: resolve residual issues when removing prometheus port

* fix: remove prometheus from sample config as well
2023-08-23 01:49:11 +00:00
Ruihang Xia
18250c4803 feat: implement Flight and gRPC services for RegionServer (#2226)
* extract FlightCraft trait

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* split service handler in GrpcServer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* left grpc server implement

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* start region server if configured

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-22 13:30:09 +00:00
dennis zhuang
18fa0e01ed feat: remove checkpoint_on_startup (#2228)
feat: update flushed manifest version when it is larger
2023-08-22 13:09:34 +00:00
Yingwen
cc3e198975 feat(mito): Implement operations like concat and sort for Batch (#2203)
* feat: Implement slice and first/last timestamp for Batch

* feat(mito): implements sort/concat for Batch

* chore: fix typo

* chore: remove comments

* feat: sort and dedup

* test: test batch operations

* chore: cast enum to test op type

* test: test filter related api

* style: fix clippy

* docs: comment for slice

* chore: address CR comment

Don't return Option in get_timestamp()/get_sequence()
2023-08-22 12:03:02 +00:00
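
Sort and dedup on a `Batch` (#2203 above) boils down to: order rows by (timestamp ascending, sequence descending) and keep the first row per timestamp, so the newest write wins. A row-oriented sketch; mito2 operates on columns rather than row tuples:

```rust
/// (timestamp, sequence, value) rows; real batches store these as columns.
fn sort_and_dedup(mut rows: Vec<(i64, u64, i64)>) -> Vec<(i64, u64, i64)> {
    // Newest sequence first within a timestamp, so dedup keeps the newest row.
    rows.sort_by(|a, b| a.0.cmp(&b.0).then(b.1.cmp(&a.1)));
    rows.dedup_by_key(|row| row.0);
    rows
}

fn main() {
    let rows = vec![(10, 1, 100), (10, 2, 200), (20, 1, 300)];
    // Timestamp 10 appears twice; the write with sequence 2 wins.
    assert_eq!(sort_and_dedup(rows), vec![(10, 2, 200), (20, 1, 300)]);
}
```
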
Yingwen
cd3755c615 feat(mito): Support handling RegionWriteRequest (#2218)
* feat: convert region request to worker write request

* chore: remove unused codes

* test: fix tests compiler errors

* chore: remove create/close/open request from worker requests

* chore: add comment

* chore: fix typo
2023-08-22 11:16:00 +00:00
Lei, HUANG
be1e13c713 feat(mito2): time series memtable (#2208)
* feat: time series memtable

* feat: add some test

* fix: some clippy warnings

* chore: some rustdoc

* refactor: test

* fix: remove useless functions

* feat: add config for TimeSeriesMemtable

* chore: some optimize

* refactor: remove bucketing

* refactor: avoid cloning RegionMetadataRef across all Series; make initial_builder_capacity a const; sort batch only by timestamp and sequence
2023-08-22 08:40:46 +00:00
Zhenchi
cb3561f3b3 refactor(table): eliminate calls to DistTable.insert (#2219)
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-22 06:15:02 +00:00
Niwaka
b3b43fe1c3 fix: table options can't be found in distributed mode (#2209)
* fix: table options can't be found in distributed mode

* refactor: use iterator for regions_numbers

* chore: remove TODO
2023-08-22 03:53:56 +00:00
WU Jingdi
b411769de6 feat: Implement a basical range select query (#2138)
* feat: Implement a basical range select query

* chore: support any timestamp type & CR fix
2023-08-22 03:07:14 +00:00
niebayes
e5f4ca2dab feat: streaming do_get (#2171)
* feat: rewrite do_get for streaming get flight data

* feat: rewrite do_get call stack but leave the async stream adapter unmodified for now

* feat: rewrite the async stream adapter to accept greptime record batch stream

* fix: resolve some PR comments

* feat: rewrite tests to adapt to the streaming do_get

* feat: add unit tests for streaming do_get

* feat: rewrite timer metric of merge scan

* remove unhelpful unit tests for streaming do_get

* add a new metric timer for merge scan and fix some test errors

* rewrite mysql writer to write query results in a streaming manner

* fix: fix fmt errors

* fix: rewrite sqlness runner to take into account the streaming do_get

* fix: fix toml format errors

* fix: resolve some PR comments

* fix: resolve some PR comments

* fix: refactor do_get to increase readability

* fix: refactor mysql try_write_one to increase readability
2023-08-22 02:54:05 +00:00
Weny Xu
5b7b2cf77d fix: fix ddl client can not update leader addr (#2205)
* fix: fix ddl client can not update leader addr

* chore: apply suggestions from CR

* feat: add message to context

* fix: only retry if unavailable or deadline exceeded

* chore: apply suggestions from CR
2023-08-21 13:57:29 +00:00
shuiyisong
9352649f22 chore: add table region key to delete in upgrade tool (#2214) 2023-08-21 08:16:10 +00:00
shuiyisong
c5f507c20e fix: add user_info extension to prom_store handler (#2212)
chore: add user_info extension to prom_store auth
2023-08-21 04:55:34 +00:00
JeremyHi
033b650d0d feat: row write protocol (#2189)
* feat: datanode's row inserter

* refactor: ExprFactory

* feat: row inserter in standalone mode

* chore: minor refactor

* feat: influxdb line protocol's row protocol

* chore: minor refactor

* improve: avoid to use too many string

* no longer async

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: do not check empty data

* chore: by review comment

* chore: by comment

* chore: by review comment

* chore: by review comment

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-19 13:08:44 +00:00
dennis zhuang
272f649b22 fix: some TODO in sqlness cases and refactor meta-client error (#2207)
* fix: some TODO in sqlness cases and refactor meta-client error

* fix: delete tests/cases/standalone/alter/drop_col_not_null_next.output
2023-08-18 10:09:11 +00:00
Ruihang Xia
3150f4b22e fix: specify input ordering and distribution for prom plan (#2204)
* fix: specify input ordering and distribution for prom plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-18 09:45:46 +00:00
Weny Xu
e1ce1d86a1 refactor: unite key serialization method (#2195) 2023-08-18 09:42:19 +00:00
ZonaHe
b8595e1960 feat: update dashboard to v0.3.1 (#2192)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-08-18 09:42:18 +00:00
shuiyisong
61e6656fea fix: auth in prometheus gateway service (#2206)
* fix: auth in prometheus gateway service

* chore: remove unused code

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-18 09:41:38 +00:00
Ruihang Xia
1bbec75f5b fix: skip partition clause in show create table (#2200)
* fix: skip partition clause in show create table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-18 09:10:31 +00:00
Zhenchi
8d6a2d0b59 refactor: apply numbers to ThinTable (#2202)
* refactor: apply numbers to `ThinTable`

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: tiny polish

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: unused import

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-18 03:11:37 +00:00
Weny Xu
177036475a fix: support to copy from parquet with typecast (#2201) 2023-08-18 03:09:54 +00:00
Zhenchi
87a730658a refactor: add ThinTable to proxy tables from infoschema (#2193)
* refactor: add thin table to proxy tables in info_schema

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix(catalog): fix typo in DataSourceAdapter struct name

* fix: remove redundant Send + Sync

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor(catalog): rename DataSourceAdapter to InformationTableDataSource

* feat(catalog): add ThinTableAdapter for adapting ThinTable to Table interface

* rebase develop

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: default impl for table_type of InformationTable

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* refactor: filter_pushdown as table field

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* fix: remove explicit type declaration

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-17 15:19:14 +00:00
JeremyHi
b67e5bbf70 fix: invalid err msg (#2196) 2023-08-17 11:12:35 +00:00
Ruihang Xia
4aaf6aa51b feat: implement query API for RegionServer (#2197)
* some initial change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl dummy structs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* decode and send logical plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement table scan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add some comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-17 11:02:31 +00:00
Weny Xu
6e6ff5a606 refactor: update table metadata in single txn (#2172)
* refactor: table-metadata-manager

* feat: remove comparing when deleting metadata

* fix: fix comment typos

* chore: apply suggestions from CR

* test: add tests for updating DatanodeTable

* fix: fix clippy

* chore: apply suggestions from CR

* refactor: improve update table route tests

* refactor: return Txn instead of TxnRequest

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor: update table metadata in single txn

* feat: check table exists before drop table executing

* test: add tests for table metadata manager

* refactor: remove table region manager

* chore: apply suggestions from CR

* feat: add bench program

* chore: apply suggestions from CR
2023-08-17 06:29:19 +00:00
Yingwen
4ba12155fe feat(mito): Implement SST format for mito2 (#2178)
* chore: update comment

* feat: stream writer takes arrow's types

* feat: Define Batch struct

* feat: arrow_schema_to_store

* refactor: rename

* feat: write parquet in new format with tsids

* feat: reader support projection

* feat: Impl read compat

* refactor: rename SchemaCompat to CompatRecordBatch

* feat: changing sst format

* feat: make it compile

* feat: remove tsid and some structs

* feat: from_sst_record_batch wip

* chore: push array

* chore: wip

* feat: decode batches from RecordBatch

* feat: reader converts record batches

* feat: remove compat mod

* chore: remove some codes

* feat: sort fields by column id

* test: test to_sst_arrow_schema

* feat: do not sort fields

* test: more test helpers

* feat: simplify projection

* fix: projection indices is incorrect

* refactor: define write/read format

* test: test write format

* test: test projection

* test: test convert record batch

* feat: remove unused errors

* refactor: wrap get_field_batch_columns

* chore: clippy

* chore: fix clippy

* feat: build arrow schema from region meta in ReadFormat

* feat: initialize the parquet reader at `build()`

* chore: fix typo
2023-08-17 06:25:50 +00:00
Weny Xu
832e5dcfd7 chore: remove allow-unused (#2184) 2023-08-17 03:15:12 +00:00
shuiyisong
d45ee8b42a chore: fix collect region stat on non-base table (#2190) 2023-08-17 02:13:49 +00:00
JeremyHi
6cd7319d67 refactor: grpc insert (#2188)
* feat: interval type for row protocol

* feat: minor refactor grpc insert

* Update src/common/grpc-expr/src/util.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* fix: by comment

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-16 11:25:25 +00:00
Yingwen
bb062003ef ci: fallback to run_id to avoid cancelling other jobs (#2186)
ci: fallback to run id to avoid cancelling other jobs
2023-08-16 09:24:17 +00:00
Weny Xu
8ea1763033 refactor: refactor table metadata manager (#2159)
* refactor: table-metadata-manager

* feat: remove comparing when deleting metadata

* fix: fix comment typos

* chore: apply suggestions from CR

* test: add tests for updating DatanodeTable

* fix: fix clippy

* chore: apply suggestions from CR

* refactor: improve update table route tests

* refactor: return Txn instead of TxnRequest

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2023-08-16 06:43:03 +00:00
Zhenchi
1afe96e397 refactor: prevent dist table from invoking scan (#2179)
* refactor: prevent dist table from invoking `scan`

* refactor: reorg code

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

* chore: add comment

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2023-08-16 04:43:33 +00:00
Ruihang Xia
814c599029 ci: cancel in-progress actions on new commit (#2182)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-16 04:21:14 +00:00
Ruihang Xia
4c3169431b feat: move region metadata to store-api (#2181)
* add metadata & handle_read

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move metadata to store-api

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* dep aquamarine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deadcode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove temporary code

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/store-api/Cargo.toml

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* remove old mod

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-08-16 04:18:26 +00:00
sh2
202540823f refactor!: move prometheus routes to default http server (#2005)
* move prometheus routes to default http server

Signed-off-by: sh2 <shawnhxh@outlook.com>

* fix ci test and remove the server logic of prometheus

* remove unused import and prometheus relevant code

* fix ci: rustfmt and test

* fix ci: silly fmt

* fix ci: silly silly fmt

* change `/prom_store` back to `/prometheus`

* remove unused variable

---------

Signed-off-by: sh2 <shawnhxh@outlook.com>
2023-08-16 03:21:14 +00:00
dennis zhuang
0967678a51 feat: don't enable telemetry for debug building (#2177) 2023-08-16 01:53:11 +00:00
shuiyisong
c8cde704cf chore: minor auth crate change (#2176)
* chore: pub auth_mysql

* chore: pub all error

* chore: remove back to error

* chore: wrap failed permission check result to err

* chore: minor change
2023-08-15 10:49:22 +00:00
JeremyHi
24dc827ff9 feat: grpc handler result (#2107)
* feat: grpc handler inner result

* feat: ext header, x-greptime-err-code, x-greptime-err-msg

* fix: sqlness case

* chore: by comment

* fix: convert status to Error
2023-08-15 09:34:00 +00:00
Weny Xu
f5e44ba4cf docs: rfc of update metadata in single txn (#2165)
* docs: rfc of update metadata in single txn

* chore: apply suggestion from CR
2023-08-15 17:44:07 +08:00
zyy17
32c3ac4fcf refactor: improve the image building performance (#2175)
* refactor: use '--output type=local' in 'build-greptime-by-buildx' target to reduce unnecessary 'docker cp'

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: improve the image building performance

* ci: release centos dev builder

* ci: use 'make build-by-dev-builder' to improve docker build performance

* refactor: add 'which' command in centos

* fix: add 'OUTPUT_DIR' to fix 'make docker-image-buildx' error

* fix: fix incorrect dockerfile path

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: remove configure-aws-credentials action and use env variables

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* ci: update slack notification prompt

* refactor: clean up the target directory before building artifacts of centos7

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-08-15 09:28:09 +00:00
Niwaka
a8f2e4468d feat: handle multiple grpc deletes (#2150)
* feat: handle multiple grpc deletes

* fix: make DistDeleter::grpc_delete return usize

* fix: remove backtrace from MissingTimeIndexColumn

* fix: avoid using unwrap in PartitionRuleManager::split_delete_request

* fix: simplify MissingTimeIndexColumn
2023-08-15 08:22:46 +00:00
Yingwen
d4565c0a94 feat(mito): Defines the read Batch struct for mito2 (#2174)
* feat: define batch

* feat: define Batch struct

* feat: stream writer takes arrow's types

* feat: make it compile

* feat: use uint64vector and uint8vector

* feat: add timestamps and primary key
2023-08-15 06:39:21 +00:00
Ruihang Xia
2168970814 feat: define region server and related requests (#2160)
* define region server and related requests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fill request body

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change mito2's request type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: bump greptime-proto to d9167cab (row insert/delete)

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove name_to_index

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* address cr comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* finalise

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-15 06:27:27 +00:00
Weny Xu
69a2036cee feat!: add deserializer for Partition (#2169)
* feat!: add deserializer for Partition

* fix: fix tests
2023-08-15 03:36:58 +00:00
Lei, HUANG
e924b44e83 refactor: KeyValues return ValueRef (#2170)
* refactor: KeyValues return ValueRef

* 1. Change KeyValues returned value from pb value to ValueRef
2. Replace OpType/SemanticType with pb's OpType and SemanticType to avoid duplicated conversions.

* feat: define min value of OpType as a const

* fix: toml format
2023-08-14 14:51:13 +00:00
Yingwen
768239eb49 fix: panic on truncate table in distributed mode (#2173) 2023-08-14 14:20:20 +00:00
Ning Sun
f3157df190 fix: normalize otlp string keys (#2168) 2023-08-14 09:39:54 +00:00
dennis zhuang
b353bd20db fix: print_anonymous_usage_data_disclaimer at wrong place (#2167) 2023-08-14 08:01:10 +00:00
Lei, HUANG
55b5df9c51 feat: row wise converter (#2162)
* feat: impl mem-comparable encoding for timestamp

* fix: test cases

* impl time series encode/decoder

* fix: merge unsupported match arms

* fix: clippy

* chore: big number delimiter

* feat: encode timestamps as i64

* fix: remove useless error variant
2023-08-14 07:13:39 +00:00
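The core trick behind a mem-comparable i64 encoding, sketched here as the general technique rather than the code from #2162: flip the sign bit and emit big-endian bytes, so byte-wise order equals numeric order.

```rust
// Flipping the sign bit maps i64::MIN..=i64::MAX onto u64 monotonically;
// big-endian bytes then compare the same way the numbers do.
fn encode_i64(v: i64) -> [u8; 8] {
    ((v as u64) ^ (1 << 63)).to_be_bytes()
}

fn main() {
    assert!(encode_i64(-5) < encode_i64(0));
    assert!(encode_i64(0) < encode_i64(7));
}
```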
Ruihang Xia
393047a541 feat: implement metric for MergeScanExec (#2166)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-14 07:10:45 +00:00
LFC
606b489d53 feat: redact secrets in sql when logging (#2141) 2023-08-14 06:40:00 +00:00
Weny Xu
d0b3607633 feat: add table route manager and upgrade tool (#2145)
* feat: add table route manager and upgrade tool

* test: add table route manager tests

* feat: add new TableRouteValue struct

* chore: apply suggestions from CR

* refactor: change HashMap to BTreeMap

* feat: add version to TableRouteValue
2023-08-14 04:19:44 +00:00
Weny Xu
5b012a1f67 feat!: switch to new catalog/schema key (#2140)
* feat!: switch to new catalog/schema key

* chore: apply suggestions from CR
2023-08-14 03:08:43 +00:00
Ruihang Xia
f6b53984da fix(metasrv)!: do not overwrite boolean options unconditionally (#2161)
* fix: do not overwrite boolean options unconditionally

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix sqlness start command

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-14 03:04:54 +00:00
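The usual shape of this kind of fix, assumed here with a made-up option name: model CLI flags as `Option<bool>` so an unset flag merges with the config file instead of forcing a default.

```rust
// Sketch: `None` means "flag not passed on the command line", so the
// config-file value survives the merge instead of being overwritten.
struct CliArgs {
    enable_region_failover: Option<bool>, // hypothetical option name
}

fn merge(file_value: bool, cli: &CliArgs) -> bool {
    cli.enable_region_failover.unwrap_or(file_value)
}
```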
shuiyisong
7f51141ed0 refactor: auth crate (#2148)
* chore: move user_info to auth crate

* chore: temp commit before resolving tests compile error

* chore: fix compile issue

* chore: minor fix

* chore: tmp save

* chore: change user_info to trait

* chore: minor change & use auth result user info in pg session setup

* chore: add as_any to user_info

* chore: rename user_info

* chore: remove ice file

* chore: add permission checker

* chore: add grpc permission check

* chore: add session spawn user_info to query_ctx

* chore: minor update

* chore: add permission checker to sql handler & temp save

* chore: add permission checker to prometheus handler

* chore: add permission checker to opentsdb handler

* chore: add permission checker to other handlers

* chore: add test

* chore: add user_info setting on http entrance

* chore: fix toml

* chore: remove box in permission req

* chore: cr issue

* chore: cr issue
2023-08-14 02:51:26 +00:00
Ruihang Xia
6d64e1c296 feat(mito): checkpoint for mito2 (#2142)
* basic impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* adjust dir structure

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix styles

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* sort result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* downgrade log level

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* apply CR sugg.

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add region id to log

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-13 09:26:01 +00:00
Yingwen
e6090a8d5b feat(mito): Write wal and memtable (#2135)
* feat: hold wal entry in RegionWriteCtx

* feat: entry id and committed sequence

* feat: write to wal

* feat: write memtable

* feat: fill missing columns

* feat: validate write request

* feat: more validation to write request

* chore: fix typos

* feat: remove init and validate rows in new()

* style: fix clippy
2023-08-12 07:44:44 +00:00
谢政
b62e643e92 build: update protobuf-build to support apple silicon (#2143)
* build: update protobuf-build to support apple silicon

* build: Update src/log-store/Cargo.toml

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* build: update the Cargo.lock too

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-12 03:31:51 +00:00
dennis zhuang
6f40128058 feat!: enable telemetry by default (#2137)
* feat: remove greptimedb-telemetry feature

* feat: adds enable_telemetry option to metasrv and datanode

* refactor: move data_home from file config to storage config

* feat: store the installation uuid into datanode and metasrv working home

* fix: cargo toml fmt

* test: ignore region failover test when using local file storage

* test: ignore telemetry reporter in test mode

* feat: print warning log when enabling telemetry

* chore: the telemetry doc link

* chore: remove enable_telemetry from datanode example config file

* refactor: rename GREPTIMEDB_TELEMETRY_CLIENT_REQUEST_TIMEOUT

* chore: rename print_warn_log to print_anonymous_usage_data_disclaimer
2023-08-11 14:50:40 +00:00
LFC
0b05c22be1 fix: make "explain" executable in repl (#2157) 2023-08-11 20:21:40 +08:00
Ruihang Xia
4fd1057764 fix: several clippy error/warnings after upgrading toolchain (#2156)
* fix pyscripts mod

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy::needless-pass-by-ref-mut

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add pyo3 feature gate in Makefile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-11 20:20:21 +08:00
Zou Wei
6877d082f6 feat: compatible with postgres interval type (#2146)
* feat: impl ToSql/FromSql/ToSqlText for PgInterval.

* chore: remove useless code.

* feat: compatible with postgres interval type.

* chore: cr comment.
2023-08-11 20:19:57 +08:00
LFC
2dcc67769e fix: runs sqlness test on windows-latest-8-cores (#2158) 2023-08-11 17:34:58 +08:00
Ruihang Xia
b9bac2b195 fix: let information_schema know itself (#2149)
* rename show create table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* register information_schema on registering catalog

* fix tests in standalone

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix frontend catalog manager

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy & typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename constructor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo (again)

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove redundant clones

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-11 15:37:27 +08:00
Zou Wei
584acca09d feat: impl duration type (#2117)
* feat: impl duration type in common time.

* feat: convert from/to std::time::Duration.

* refactor: convert function
2023-08-11 07:04:42 +00:00
LFC
ad2021a8d8 feat: print build output if it's failed in sqlness (#2152)
* feat: print build output if it's failed in sqlness

* feat: print build output if it's failed in sqlness
2023-08-11 03:34:15 +00:00
zyy17
c970c206d1 ci: add retry for uploading artifacts to s3 (#2147) 2023-08-10 12:59:04 +00:00
LFC
5c19913a91 build: on windows (#2054)
* build on windows

* rebase develop

* fix: resolve PR comments
2023-08-10 08:08:37 +00:00
zyy17
587a24e7fb ci: add working dir and some minor changes of create-version.sh (#2133)
* ci: add context argument in build-greptime-binary action

* refactor: add 'working-dir' in upload-artifacts action and rename 'context' to 'working-dir'

* refactor: use timestamp as part of image tag when trigger manually
2023-08-10 04:46:43 +00:00
Ning Sun
0270708d6d fix: correct grpc metric labels (#2136) 2023-08-10 03:59:41 +00:00
WU Jingdi
b7319fe2b1 feat: Support RangeSelect LogicalPlan rewrite (#2058)
* feat: Support RangeSelect LogicalPlan rewrite

* chore: fix code advice

* fix: change format of range_fn

* chore: optimize project plan rewrite

* chore: fix code advice
2023-08-10 02:53:20 +00:00
LFC
ea3708b33d fix: deserialize TableInfoValue with missing field (#2134) 2023-08-10 02:43:24 +00:00
Zhenchi
7abe71f399 fix(table): return correct table types (#2131)
* fix(table): return correct table types

Signed-off-by: zhongzc <zhongzc@zhongzcs-MacBook-Pro.local>

* fix: NumbersTable to be Temporary table

Signed-off-by: zhongzc <zhongzc@zhongzcs-MacBook-Pro.local>

* fix(test): fix affected cases

Signed-off-by: zhongzc <zhongzc@zhongzcs-MacBook-Pro.local>

* fix(test): fix affected cases

Signed-off-by: zhongzc <zhongzc@zhongzcs-MacBook-Pro.local>

* fix: fmt

Signed-off-by: zhongzc <zhongzc@zhongzcs-MacBook-Pro.local>

* fix(tests): fix instance_test expected result

* retrigger action

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>

---------

Signed-off-by: zhongzc <zhongzc@zhongzcs-MacBook-Pro.local>
Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
Co-authored-by: zhongzc <zhongzc@zhongzcs-MacBook-Pro.local>
2023-08-09 11:07:00 +00:00
Ruihang Xia
b156225b80 fix: correct the schema used by TypeConversionRule (#2132)
* fix: correct the schema used by TypeConversionRule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* specify time zone in UT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-09 08:18:17 +00:00
zyy17
2ac51c6348 fix: set the correct working dir before building the artifacts (#2129) 2023-08-09 14:34:29 +08:00
Ning Sun
7f5f8749da test: add conditional compilation flag for datanode mock module (#2130) 2023-08-09 06:10:54 +00:00
Yingwen
d4e863882f feat: Add write method to memtable trait (#2123)
* feat: validate semantic type

* feat: define KeyValues

* test: test semantic type check

* feat: impl KeyValues

* test: test KeyValues

* feat: Add write to Memtable

* style: fix clippy

* docs: more comment
2023-08-09 04:07:50 +00:00
Ning Sun
d18eb18b32 feat: use server-inferred types on statement describe (#2032)
* feat: use server-inferred types on statement describe

* feat: add support for server-inferred type

* feat: allow parameter type inference

* chore: update comments

* fix: lint issue

* style: comfort rustfmt

* Update src/servers/src/postgres/types.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-08-09 02:57:56 +00:00
liyang
aa6452c86c chore: rename dockerhub registry password (#2127) 2023-08-09 02:28:56 +00:00
zyy17
d44cd9c6f5 fix: add 'image-name' argument to correct the invalid image namespace(mix with image-name) (#2126) 2023-08-09 10:04:11 +08:00
gongzhengyang
ce0f909cac perf: change current schema and catalog to borrow, clone only when necessary (#2116)
perf: change current schema and catalog to borrow, clone only when necessary

Co-authored-by: gongzhengyang <gongzhengyang@bolean.com.cn>
2023-08-08 12:48:24 +00:00
Ruihang Xia
4c693799d8 fix: bugs related to merge scan (#2118)
* fix: prevent optimize merge scan, mark distinct as unsupported

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix some other problems

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix unit tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deadcode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add some comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/optimizer/type_conversion.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-08-08 11:42:57 +00:00
Vanish
57836e762b feat: truncate table in standalone mode (#2090)
* feat: impl table procedure in standalone mode

* chore: remove useless changes

* test: add some tests

* Update src/table-procedure/src/truncate.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* CR

* Update src/datanode/src/sql/truncate_table.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: fmt

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-08-08 11:23:36 +00:00
zyy17
d927ab1ce5 ci: add 'upload-to-s3' option and disable it in dev build (#2124) 2023-08-08 11:22:24 +00:00
Ning Sun
c39de9072f refactor: use workspace dependencies for internal modules (#2119)
* refactor: use workspace dependencies for internal modules

* fix: resolve issue with mock module in datanode

* refactor: update test modules
2023-08-08 11:02:34 +00:00
zyy17
815a6d2d61 fix: var compare error(yet another stupid mistake) (#2122) 2023-08-08 17:39:53 +08:00
zyy17
f1f8a1d3a9 ci: fix incorrect variable name (#2121) 2023-08-08 17:20:11 +08:00
zyy17
e7abd00fc0 ci: fix error import path (#2120) 2023-08-08 17:12:54 +08:00
zyy17
5e2fdec1b6 ci: add dev-build (#2114) 2023-08-08 07:58:59 +00:00
Lei, HUANG
2d9ea595cb chore!: change logstore namespace prefix (#1998)
* chore: change logstore namespace prefix

* chore: change delimiter
2023-08-08 07:36:46 +00:00
LFC
46fa3eb629 chore: upgrade rust toolchain to latest nightly (#2049)
* chore: upgrade rust toolchain to latest nightly

* rebase develop

* update rust toolchain in ci
2023-08-08 07:17:51 +00:00
Weny Xu
7d0d8dc6e3 feat: return metasrv leader addr (#2110) 2023-08-07 10:01:42 +00:00
Zhenchi
f8d152231d feat(information_schema): implement table_factory method (#2108)
* feat(information_schema): implement table_factory method

* refactor(catalog): simplify table_factory method

* Update src/table/src/data_source.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-07 08:07:25 +00:00
Weny Xu
c8cb1ef5bc feat: add schema and catalog key migration tool (#2048)
* feat: add schema and catalog key migration tool

* chore: apply suggestions from CR
2023-08-07 06:22:05 +00:00
Zou Wei
d5cadeeec3 feat: conversion between interval and gRPC (#2064)
* feat: support grpc for interval type

* chore: add unit test cases.

* chore: cargo clippy

* chore: modify greptime-proto version

* chore: cr comment.

* chore: cargo fmt

* refactor: convert function.
2023-08-07 06:22:04 +00:00
Ruihang Xia
7210b35d86 docs: rfc of refactoring table trait (#2106)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-07 02:55:19 +00:00
Vanish
cf7e8c9142 feat: truncate region (#2097)
* feat: impl truncate region

* test: test truncate region

* chore: typo

* refactor: table truncate

* chore: remove useless changes

* chore: reset version

* fix: wait for flush task to complete

* fix: clippy

* chore: remove useless changes

* CR

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/storage/src/engine.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/storage/src/engine.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/storage/src/region.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/storage/src/region/tests/truncate.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/storage/src/region/tests/truncate.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/storage/src/region/writer.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* CR

* Update src/storage/src/engine.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/storage/src/manifest/region.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-08-04 12:26:25 +00:00
Yingwen
cb4dd89754 feat(mito): Implement mito2 Wal (#2103)
* feat: define wal struct

* feat: Implement Wal read/write

* feat: obsolete wal

* test: test wal

* refactor: use try_stream and remove async from scan
2023-08-04 11:04:25 +00:00
zyy17
9139962070 fix: fix version output empty error: '$GITHUB_ENV' -> '$GITHUB_OUTPUT' (#2104) 2023-08-04 17:48:11 +08:00
Ruihang Xia
9718aa17c9 feat: define region group and sequence (#2100)
* define region group

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* define region sequence

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* check partition number

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* test region seq and group

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-04 09:08:07 +00:00
Ruihang Xia
18896739d8 fix: disable region failover in sqlness test (#2102)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-04 08:38:40 +00:00
zyy17
8bcad936d3 fix: wrong action url prompt (#2099)
fix: wrong action url
2023-08-04 07:39:02 +00:00
shuiyisong
7efff2d704 fix: introduce taplo.toml and sort Cargo.toml (#2096)
* fix: add taplo.toml

* fix: introduce taplo.toml & sort cargo.toml

* chore: remove option in ci too
2023-08-04 06:44:45 +00:00
Ning Sun
93cd4ab89d ci: require cargo.lock up to date (#2094) 2023-08-04 02:59:01 +00:00
Yingwen
e5663a075f feat(mito): preparation to implementing write (#2085)
* refactor: move request mod

* feat: add mutation

* feat: add handle_write mod

* feat: one mutation at a time

* feat: handle write requests

* feat: validate schema

* refactor: move schema check to write request

* feat: add convert value

* feat: fill default values

* chore: remove comments

* feat: remove code

* feat: remove code

* feat: buf requests

* style: fix clippy

* refactor: rename check functions

* chore: fix compile error

* chore: Revert "feat: remove code"

This reverts commit 6516597540.

* chore: Revert "feat: remove code"

This reverts commit 5f2b790a01.

* chore: upgrade greptime-proto

* chore: Update comment

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-08-04 02:53:02 +00:00
zyy17
ac81d3c74f fix: add the missing 'NIGHTLY_RELEASE_PREFIX' and fail fast in 'allocate-runners' job (#2093) 2023-08-04 02:51:47 +00:00
JeremyHi
7987e08ca2 chore: typo (#2092) 2023-08-04 01:38:17 +00:00
Eugene Tolbakov
1492700acc fix(timestamp): add trim for the input date string (#2078)
* fix(timestamp): add trim for the input date string

* fix(timestamp): add analyzer rule to trim strings before conversion

* fix: adjust according to CR
2023-08-03 23:33:47 +00:00
shuiyisong
6f1094db0a fix: arc() usage in non-test code (#2091)
* chore: try fix arc issue

* chore: move `parse_catalog_and_schema_from_client_database_name` to catalog crate

* fix: arc issue

* fix: arc issue

* fix: arc issue

* fix: arc issue

* fix: minor change
2023-08-03 10:16:02 +00:00
zyy17
21655cb56f ci: add nightly build workflow (#2089) 2023-08-03 09:11:39 +00:00
Ruihang Xia
5f0403c245 feat: improve /label and /labels APIs in prometheus server (#2087)
* support __name__ for /label

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* make match[] in labels optional

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-08-03 07:51:08 +00:00
fys
d7002caca7 chore: add meter-core dependency (#2088) 2023-08-03 07:24:34 +00:00
fys
dda922507f feat: impl pubsub in metasrv (#2045)
* feat: impl pubsub

* add test_subscriber_disconnect unit test

* chore: cr

* cr

* cr
2023-08-03 03:56:43 +00:00
Yingwen
fdd4929c8f refactor(mito): mv mito2 request (#2086)
* refactor: mv request mod to crate level

* refactor: mv SkippedFields
2023-08-03 03:38:46 +00:00
zyy17
90b2200cc8 chore!: modify install.sh to adapt the new release package format (#2077)
chore: modify install.sh to adapt the new release package format
2023-08-03 02:09:31 +00:00
Vanish
e3a079a142 fix: session features (#2084) 2023-08-02 13:39:17 +00:00
discord9
c55841988e feat: necessary Hash derive for types (#2075)
* feat: necessary derive for types

* impl (Partial)Ord for ConcreteDataType
2023-08-02 13:08:43 +00:00
zyy17
279df2e558 fix: incorrect argument name: 'disable_run_tests' -> 'disable-run-tests' (#2079)
fix: 'disable_run_tests' -> 'disable-run-tests'
2023-08-02 11:16:56 +00:00
Ning Sun
7a27ef8d11 fix: remove openssl from reqwest and use rustls instead (#2081)
* fix: remove openssl from reqwest and use rustls instead

* fix: correct server url

* style: fix toml format
2023-08-02 10:23:21 +00:00
zyy17
be8f243c64 chore: update Cargo.lock (#2068) 2023-08-02 15:23:16 +08:00
zyy17
e1edb87017 fix: add the missing 'TARGET' in Makefile (#2066) 2023-08-02 06:42:43 +00:00
Ruihang Xia
bbbeaa709b fix(deps): update greptime-proto rev to the one after merge (#2063)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-02 06:33:10 +00:00
Weny Xu
4626c2efe5 feat: add Catalog and Schema Manager (#2037)
* feat: add Range Stream

* feat: add catalog and schema manager

* feat: enhance KeyValueDecoderFn

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2023-08-02 03:56:29 +00:00
Ruihang Xia
346c52eb72 docs: update SDK list (#2062)
* docs: update SDK list

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct py url

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-02 02:31:43 +00:00
zyy17
47a796c0ba fix: incorrect github token secret name (#2061)
Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-08-02 02:20:49 +00:00
shuiyisong
5eb2c609a3 fix: auth in grpc (#2056)
* fix: auth in grpc

* fix: change to return err

* fix: add grpc test

* fix: add http test

* fix: add mysql and pg test
2023-08-01 15:18:31 +00:00
zyy17
7d76131469 chore: modify the directory of release bucket and remove unused files (#2059) 2023-08-01 13:07:13 +00:00
Ruihang Xia
a3fa455f31 docs: rfc of metric engine (#1925)
* docs: rfc of metric engine

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add drawback section

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sections about physical impl and meta routing

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add chart about region id group

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-01 11:26:48 +00:00
JeremyHi
fd7eb87a52 refactor: common semantic-type (#2057) 2023-08-01 11:18:05 +00:00
Sunray Ley
090b7e61ca feat: make the gRPC channel between Frontend and Datanode configurable (#2044)
* feat: expose frontend datanode_client_options

* chore: add configuration options to the configuration file

* refactor(frontend): extract DatanodeOptions to service_config

* refactor(frontend): extract DatanodeOptions to service_config

* style: remove unnecessary suffix in variable name

Co-authored-by: Yingwen <realevenyag@gmail.com>

* feat: use humantime_serde for readable duration

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-08-01 10:49:41 +00:00
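The `humantime_serde` pattern named in the last bullet of #2044 looks roughly like this; the struct and field names are placeholders, not the real config keys:

```rust
use serde::Deserialize;
use std::time::Duration;

// With this attribute, a config file can say `timeout = "3s"` or `"500ms"`
// instead of a bare number of milliseconds.
#[derive(Deserialize)]
struct DatanodeClientOptions {
    #[serde(with = "humantime_serde")]
    timeout: Duration,
}
```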
Yingwen
c529c8a41b feat(mito): Implement open and close for mito2 regions (#2052)
* feat: add close request

* feat: handle close and open request

* feat: Implement open

* test: add TestEnv::new

* feat: close region/engine and test

* style: fix clippy

* style: import log macros

* docs: update docs

* docs: add mermaid for manifest manager
2023-08-01 10:49:07 +00:00
gongzhengyang
0eac56a442 chore: remove unused dependencies (#2055)
Co-authored-by: gongzhengyang <gongzhengyang@bolean.com.cn>
2023-08-01 07:43:03 +00:00
Ruihang Xia
44f3ed2f74 chore(deps): bump datafusion to the latest commit (#1967)
* bump deps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile except pyo3 backend

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix promql-parser metric name matcher

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix pyarrow convert

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix pyo3 compiling

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deadcode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update stream adapter display format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix physical optimizer rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-08-01 02:10:49 +00:00
Ruihang Xia
5bd80a74ab feat: prepare for implementing considering partition key in the distributed planner (#2000)
* basic impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix frontend logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* check substrait compatibility before pushdown

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* going to revert some rules

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix test and clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove println

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-07-31 12:36:23 +00:00
Ruihang Xia
bddaf265a9 chore(ci): run clippy, coverage and sqlness in parallel (#2050)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-31 10:37:30 +00:00
Yingwen
4d5ecb54c5 feat(mito): Implement open for RegionManifestManager (#2036)
* feat: file purger trait

* feat: Implement open for RegionManifestManager

* feat: remove RegionVersion

* feat: Use RwLock

* chore: remove AtomicManifestVersion

* feat: Remove unused error

* feat: store meta action

* chore: update comment
2023-07-31 10:04:22 +00:00
shuiyisong
922d826347 chore: make tables() return kv instead of key only (#2047)
* chore: make tables return kv

* chore: remove comment code
2023-07-31 07:30:47 +00:00
localhost
7681864eb4 chore: add version reporter (#2007)
* chore: add version reporter

* chore: add uuid for version report

* chore: add file license

* chore: format code

* chore: fix by pr comment

* chore: change version report api url

* chore: change greptimedb opentelemetry crate name

* chore: minor code beautification

* chore: add keys-only option when ranging etcd

* chore: fix by pr comment

* chore: fix by pr comment

* chore: change uuid file location

* chore: only run telemetry in meta leader

* chore: add more test and some minor fix

* chore: make clippy happy

* chore: fix by pr comment

* chore: fix by pr comment

* chore: add debug log for greptimedb telemetry
2023-07-31 06:58:00 +00:00
zyy17
45832475d0 feat: rewrite the release pipeline to make it clean (#2038)
* refactor: modify cache path of Dockerfile

* feat: rewrite the release pipeline to make it clean
2023-07-31 04:57:04 +00:00
Zou Wei
7727508485 feat: impl interval type (#1952)
* feat: impl interval type in common time

* feat: impl datatype, vectors, value for interval

* comments update

* add license header

* cargo clippy

* refactor interval type

* add unit test and case to dummy.sql

* cargo clippy

* chore: add doc comments

* chore: cargo fmt

* feat: add formats, refactor comparison

* add docs comments

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr comment

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-07-31 03:54:39 +00:00
zyy17
216f220007 fix: restore 'aarch64/compile-python.sh' to fix the failed release temporarily (#2046)
fix: add 'aarch64/compile-python.sh' back to fix the failed release temporarily

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-07-31 03:38:27 +00:00
Niwaka
695398652c feat: accept influxdb request without timestamp even if table doesn't exist (#2041)
* feat: accept influxdb request without timestamp even if table doesn't exist

* refactor: InsertRequests::try_from

* feat: check row number
2023-07-31 02:55:09 +00:00
parkma99
fc6ebf58b4 refactor: create_current_timestamp_vector by using VectorOp::cast (#2042)
* refactor using VectorOp cast

* add test case
2023-07-31 02:51:06 +00:00
Zou Wei
f22b787fd9 chore: return error in arrow array convert function (#2043)
fix: return error instead of unreachable!()
2023-07-31 02:47:40 +00:00
Lei, HUANG
81ea61ba43 fix: window inferer (#2033)
* fix: window inferer

* chore: rename
2023-07-26 02:18:19 +00:00
zyy17
662879ff4b refactor: don't set the build jobs when nproc is not found (#2034)
refactor: don't set the build jobs when nproc is not found
2023-07-25 13:40:44 +00:00
LFC
48996b0646 fix: etcd range pagination in table metadata migration tool (#2035) 2023-07-25 10:02:26 +00:00
fys
0b4ac987cd refactor: arrange lease kvs randomly in lease_based selector (#2028)
* refactor: arrange lease kvs randomly in lease_based selector

* fix: cr
2023-07-25 07:32:10 +00:00
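Conceptually, #2028 is a shuffle before selection; a simplified sketch (types reduced to tuples):

```rust
use rand::seq::SliceRandom;

// Shuffling the lease key-values spreads region placement across datanodes
// instead of always favoring the first entry returned by the store.
fn arrange(mut lease_kvs: Vec<(u64, i64)>) -> Vec<(u64, i64)> {
    lease_kvs.shuffle(&mut rand::thread_rng());
    lease_kvs
}
```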
shuiyisong
9c1f0234de refactor: query context (#2022)
* chore: refactor query_context

* chore: remove use statement

* chore: make query_context build return arc

* fix: sqlness test

* fix: cr issue

* fix: use unwrap or else
2023-07-25 06:11:34 +00:00
Ruihang Xia
f55bff51ac feat: set and retrieve trace id in log macro (#2016)
* trace id passed by task local store

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* modify log macro

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove tokio::spawn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use real trace id

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-25 03:50:27 +00:00
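The first bullet of #2016 names the mechanism: a tokio task-local slot. A minimal sketch of that technique, not the crate's actual macros or types:

```rust
tokio::task_local! {
    // One trace id per task; log macros can read it implicitly.
    static TRACE_ID: u64;
}

async fn handle_query() {
    TRACE_ID
        .scope(42, async {
            // Anything running in this scope can fetch the id without it
            // being threaded through every function signature.
            println!("trace_id={}", TRACE_ID.get());
        })
        .await;
}
```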
Weny Xu
0fc0f74cd7 fix: fix parking_lot unresolved (#2025) 2023-07-25 03:20:12 +00:00
Yingwen
5f65e3ff44 feat(mito): Port parquet writer and reader to mito2 (#2018)
* feat(mito): Port Batch and BufferedWriter

* feat: encode metadata to parquet

* feat: define BatchReader trait

* chore: ParquetWriter write_all takes `&mut self`

* feat(mito): port ParquetReader

* chore: fix typo

* chore: address CR comment
2023-07-24 09:35:21 +00:00
dennis zhuang
1f371f5e6e fix: checkpoint metadata file dirty caching (#2020)
fix: dirty last checkpoint metadata file when enabling object store caching, #2013
2023-07-24 08:18:19 +00:00
shuiyisong
632cb26430 feat: trace_id in query context (#2014)
* chore: unify once_cell version

* chore: update cargo lock

* chore: add gen_trace_id

* chore: add trace_id to query_ctx

* chore: add debug log

* Revert "chore: add debug log"

This reverts commit f52ab3bb300f1d73117cd6ebbd8e0162829b1aba.

* chore: add frontend node id option

* chore: add query ctx to query engine ctx

* chore: set trace_id to logical_plan api

* chore: add trace_id in grpc entrance

* chore: generate trace_id while creating query_ctx

* chore: fix typo

* chore: extract trace_id from grpc header

* chore: extract trace_id from grpc header

* chore: fix clippy

* chore: add `QueryContextBuilder`

* chore: change node_id in fe to string
2023-07-24 07:35:06 +00:00
liyang
39e74dc87e chore: rename tag github env (#2019) 2023-07-24 07:29:24 +00:00
JeremyHi
41139ec11d feat: region lease improve (#2004)
* feat: add exists api into KvBackend

* refactor: region lease

* feat: filter out inactive node in keep-lease

* feat: register&deregister inactive node

* chore: doc

* chore: ut

* chore: minor refactor

* feat: use memory_kv to store inactive node

* fix: use real error in

* chore: make inactive_node_manager's func compact

* chore: more efficiently

* feat: clear inactive status on candidate node
2023-07-24 03:49:14 +00:00
zyy17
657fcaf9d0 refactor: unify the greptime artifacts building (#2015)
* refactor: unify the make targets of building images

* refactor: make Dockerfile more clean

1. Add dev-builder image to build greptime binary easily;
2. Add 'docker/ci/Dockerfile-centos' to release centos image;
3. Delete Dockerfile of aarch64 and just need to use one Dockerfile;

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-07-24 03:06:16 +00:00
liyang
f1cd28ffa1 feat: (upload binary s3) add nightly build tag (#2011)
feat: add nightly build tag
2023-07-21 06:49:57 +00:00
Sunray Ley
86378ad93a docs: fix incorrect document URL (#2012) 2023-07-21 14:55:23 +08:00
Yingwen
792d8dfe33 feat(mito): create region in mito2 engine (#1999)
* chore: check table existence

* refactor: rename LevelMetaVec

* feat: create request to metadata

* refactor: Share MitoConfig between workers

* feat: impl handle_create_request

* refactor: move tests mod

* feat: validate time index nullable

* feat: test create region

* feat: test create if not exists

* feat: remove option

* style: fix clippy

* chore: address CR comments
2023-07-21 06:41:34 +00:00
gobraves
e3ac3298b1 feat: add orc stream (#1981)
* add orc stream #1820

* update orc stream

* fix: create orcstreamadapter with opt projection

* fix: license header

* docs: delete comment
2023-07-21 05:54:02 +00:00
LFC
953b8a0132 feat: benchmark table metadata managers (#2008)
* feat: benchmark table metadata managers

* feat: benchmark table metadata managers
2023-07-21 05:41:06 +00:00
Ning Sun
e0aecc9209 refactor: improve semantics of session and query context (#2009) 2023-07-21 03:50:32 +00:00
Ning Sun
a7557b70f1 feat: Add more tags for OTLP metrics protocol (#2003)
* test: add integration tests for otlp

* feat: add resource and scope attributes as tag
2023-07-21 02:02:43 +00:00
Vanish
51fe074666 feat: truncate table execute (#2002)
* feat: implement truncate execute in standalone mode

* feat: implement truncate execute in distribute mode

* chore: update greptime-proto

* fix: license header

* chore: CR

* chore: update greptime-proto
2023-07-20 11:04:37 +00:00
Ruihang Xia
6235441577 fix: avoid large vector allocation on large query span (#2006)
* avoid collect all timestamp at the begining

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* simplify branch logic

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-20 09:26:53 +00:00
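The fix described in #2006 amounts to producing timestamps lazily instead of collecting the whole span up front; schematically (an assumption about the shape, not the actual code):

```rust
// Lazily yield the aligned timestamps of a query span; nothing is
// materialized until the evaluator pulls a value. Assumes step > 0.
fn span_timestamps(start: i64, end: i64, step: i64) -> impl Iterator<Item = i64> {
    (0..).map(move |i| start + i * step).take_while(move |ts| *ts <= end)
}
```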
LFC
172febb1af refactor!: trying to replace TableGlobalValue, part 2 (#1985)
* refactor!: using the new table metadata values

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments
2023-07-19 12:01:43 +00:00
JeremyHi
2ef0d06cdb feat: status_code in response header (#1982)
* feat: status_code in response header

* chore: parse grpc response

* fix: sqlness failed

* chore: fix sqlness
2023-07-19 11:27:49 +00:00
Weny Xu
2e2a82689c fix: alter procedure table not found issue (#1993)
* fix: alter procedure table not found issue

* chore: apply suggestions

* chore: apply suggestions from CR
2023-07-19 08:26:13 +00:00
Yingwen
bb8468437e feat(mito): Define Version and metadata builders for mito2 (#1989)
* feat: define structs for version

* feat: Build region from metadata and memtable builder

* feat: impl validate for metadata

* feat: add more fields to RegionMetadata

* test: more tests

* test: more check and test

* feat: allow overwriting version

* style: fix clippy
2023-07-19 07:50:20 +00:00
Ben Baoyi
3241de0b85 refactor: Separating statement parse func (#1975)
* refactor: Separating statement parse func

* refactor: refactor describe, explain and drop

* Update src/sql/src/parser.rs

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-07-19 05:22:36 +00:00
Ning Sun
b227a7637c feat: add timers for promql query (#1994)
feat: add timer for promql query
2023-07-19 03:54:49 +00:00
Yingwen
43bde82e28 test(storage): fix schedule_duplicate_tasks test (#1990)
test: fix schedule_duplicate_tasks test
2023-07-19 03:05:30 +00:00
Ning Sun
62a41d2280 feat: initial implementation for OpenTelemetry otlp/http (#1974)
* feat: initial implementation for otlp

* feat: implement more opentelemetry data types

* feat: add metrics

* feat: add support for parsing db name from headers

* feat: allow dbname authentication via header

* chore: disable histogram for now

* refactor: correct error name

* test: add tests for otlp encoders

* Update src/servers/src/error.rs

Co-authored-by: Eugene Tolbakov <ev.tolbakov@gmail.com>

* refactor: address review issues

---------

Co-authored-by: Eugene Tolbakov <ev.tolbakov@gmail.com>
2023-07-19 03:03:52 +00:00
Niwaka
3741751c8d feat: support where in show databases (#1962) 2023-07-19 00:01:05 +00:00
Ruihang Xia
8bea853954 refactor(mito2): implement RegionManifestManager (#1984)
* finalise manager and related API

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl manifest initialize and update

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* more test and utils

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-18 14:03:35 +00:00
Weny Xu
37dad206f4 fix: fix wait procedure watcher bug (#1987) 2023-07-18 09:07:31 +00:00
Weny Xu
1783e4c5cb refactor: move DatanodeAlterTable after InvalidateTableCache (#1978)
* refactor: move AlterDatanode after InvalidateTableCache

* fix: acquire table key in region failover procedure
2023-07-18 07:03:20 +00:00
dennis zhuang
b81570b99a feat: impl time type (#1961)
* chore: remove useless Option type in plugins (#1544)

Co-authored-by: paomian <qtang@greptime.com>

* chore: remove useless Option type in plugins (#1544)

Co-authored-by: paomian <qtang@greptime.com>

* chore: remove useless Option type in plugins (#1544)

Co-authored-by: paomian <qtang@greptime.com>

* chore: remove useless Option type in plugins (#1544)

Co-authored-by: paomian <qtang@greptime.com>

* feat: first commit for time type

* feat: impl time type

* fix: arrow vectors type conversion

* test: add time test

* test: adds more tests for time type

* chore: style

* fix: sqlness result

* Update src/common/time/src/time.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* chore: CR comments

---------

Co-authored-by: localhost <xpaomian@gmail.com>
Co-authored-by: paomian <qtang@greptime.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-07-18 02:55:28 +00:00
Eugene Tolbakov
6811acb314 fix(status_endpoint): add default value for hostname (#1972)
* fix(status_endpoint): add default value for hostname

* fix: adjust according to clippy suggestions

* fix: adjust according to CR suggestions
2023-07-17 11:55:11 +00:00
LFC
3e846e27f8 fix: compile error after #1971 is merged (#1979) 2023-07-17 10:01:47 +00:00
Ruihang Xia
f152568701 fix(ci): add merge queue trigger (#1980)
* fix(ci): add merge queue trigger

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update docs.yml also

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-17 18:01:05 +08:00
LFC
dd62f4c407 feat: tool to migrate table metadata values (#1971)
feat: tool to migrate table metadata values when upgrading to version 0.4
2023-07-17 15:06:32 +08:00
Weny Xu
4fd37d9d4e test: add ddl idempotent tests of datanode (#1966) 2023-07-17 11:58:06 +08:00
LFC
7cf6c2bd5c refactor: trying to replace TableGlobalValue, part 1 (#1956)
* refactor: trying to replace TableGlobalValue, part 1

* fix: resolve PR comments
2023-07-17 11:32:46 +08:00
Kree0
8f71ac2172 refactor: move heartbeat configuration into an independent section (#1976)
refactor: move heartbeat configuration into an independent section in config file

* feat: add HeartbeatOptions struct

* test: modify corresponding test case

* chore: modify corresponding example file
2023-07-17 11:29:02 +08:00
Ning Sun
076d44055f docs: fix playground section (#1973)
The image link keeps changing after the website build, so we remove the broken image and use a plain link temporarily
2023-07-17 10:48:06 +08:00
Weny Xu
d9751268aa feat: expose metasrv datanode_client_options (#1965)
* feat: expose meta datanode_client_options

* chore: apply suggestions from CR
2023-07-15 14:26:24 +08:00
ZonaHe
8f1241912c feat: update dashboard to v0.3.0 (#1968)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-07-15 14:16:41 +08:00
Lei, HUANG
97cfa3d6c9 feat: support append entries from multiple regions at a time (#1959)
* feat: support append entries from multiple regions at a time

* chore: add some tests

* fix: false positive mutable_key warning

* fix: append_batch api

* fix: remove unused clippy allows
2023-07-14 09:57:17 +00:00
Yingwen
ef7c5dd311 feat(mito): Implement WorkerGroup to handle requests (#1950)
* feat: engine worker framework

* feat: worker comments

* feat: divide worker requests by type

* feat: handlers for worker thread

* refactor: rename requests to ddl and dml requests

* feat: methods to stop and submit requests

* refactor: rename request queue to request buffer

* refactor: remove ddl and dml request

* feat: send request to worker

* test: test stop

* docs(mito): worker group docs

* style: fix clippy

* docs: update WorkerGroup comment

* chore: address CR comments

* chore: fix comment issues

* feat: use mpsc::channel

* feat: check is_running flag

* chore: Add stop request to notify a worker

* refactor: add join_dir to join paths

* feat: redefine region requests

* docs: more comments

* refactor: rename worker thread to worker loop

* chore: address CR comments
2023-07-14 08:06:44 +00:00
Ruihang Xia
ce43896a0b refactor(mito2): implement serialize/deserialize for RegionMetadata (#1964)
* feat: implement serialize/deserialize for RegionMetadata

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove Raw*

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* render mermaid

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* derive Serialize

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename symbols

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-14 14:44:12 +08:00
JeremyHi
c9cce0225d feat: ask leader (#1957)
* feat: ask leader

* fix: license header

* chore: by comment
2023-07-14 11:32:47 +08:00
Ruihang Xia
5bfd0d9857 refactor(mito): define region metadata (#1960)
* refactor(mito): define region metadata

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove unneeded message

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/mito2/src/metadata.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* add primary keys vector

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update mermaid

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-07-13 13:40:41 +00:00
JeremyHi
e4fd5d0fd3 refactor: let metasrv returns ref always (#1954) 2023-07-13 17:06:51 +08:00
Weny Xu
132668bcd1 feat: invalidate table cache after altering (#1951) 2023-07-13 14:19:26 +08:00
JeremyHi
8b4145b634 feat: simplify the usage of channel_manage (#1949)
* feat: simplify the usage of channel_manager by avoiding the external call of start_channel_recycle

* chore: fix unit test

* Update src/common/grpc/Cargo.toml

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>

---------

Co-authored-by: fys <40801205+Fengys123@users.noreply.github.com>
2023-07-13 06:11:11 +00:00
Weny Xu
735c6390ca feat: implement alter table procedure (#1878)
* feat: implement alter table procedure

* fix: fix uncaught error

* refactor: move fetch_table/s to table_routes.rs

* refactor: refactor error handling

* chore: apply suggestions from CR

* feat: switch to using alter table procedure

* feat: add table_version

* chore: apply suggestions from CR

* feat: introduce ddl_channel_manager

* chore: update greptime-proto
2023-07-13 10:41:46 +08:00
Ben Baoyi
9ff7670adf refactor: remove common_error::prelude (#1946)
* feat: Remove common_error::prelude

* fix merge error

* cr comment

* fix error
2023-07-13 10:36:36 +08:00
Ruihang Xia
16be56a743 refactor(mito): port manifest storage to mito2 (#1948)
* refactor(mito): port manifest storage to mito2

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deadcode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-12 11:21:11 +00:00
Ben Baoyi
2bfe25157f feat: add check port before starting sqlness test (#1895)
* feat: add check port before starting sqlness test

* cr comment

* feat: remove redundant check_port

* cr comment

* cr comment

* cr comment
2023-07-12 17:44:50 +08:00
LFC
4fdb6d2f21 refactor: remote catalog uses memory (#1926)
* refactor: remote catalog uses memory

* rebase develop

* fix: resolve PR comments
2023-07-12 09:33:33 +00:00
Vanish
39091421a4 feat: implement truncate table parser (#1932)
* feat: truncate parser

* chore: keyword TABLE as optional
2023-07-12 14:59:24 +08:00
Eugene Tolbakov
674bfd85c7 chore(prom)!: rename prometheus(remote storage) to prom-store and promql(HTTP server) to prometheus (#1931)
* chore(prom): rename prometheus(remote storage) to prom-store and promql(HTTP server) to prometheus

* chore: apply clippy suggestions

* chore: adjust format according to rustfmt
2023-07-12 14:47:09 +08:00
Ruihang Xia
4fa8340572 feat: support desc [table] <table_name> (#1944)
* feat: support desc [table]

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refine style

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-12 06:41:31 +00:00
shuiyisong
5422224530 chore: upgrade toml version (#1945) 2023-07-12 14:22:02 +08:00
Ruihang Xia
077785cf1e refactor(mito): define manifest related API (#1942)
* refactor: port some manifest struct to mito2

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy and nextest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert lock file and resolve clippy warnings

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-12 03:42:55 +00:00
Weny Xu
a751aa5ba0 feat: switch to using drop table procedure (#1901)
* feat: switch to using drop table procedure

* chore: remove unused attributes

* feat: register the drop table procedure loader

* fix: fix typo
2023-07-12 10:35:23 +08:00
Weny Xu
264c5ea720 feat: meta procedure options (#1937)
* feat: meta procedure options

* chore: tune meta procedure options in tests

* Update src/common/procedure/Cargo.toml

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-07-12 02:22:08 +00:00
Weny Xu
fa12392d2c fix: fix frontend meta client option issue (#1939) 2023-07-12 10:13:07 +08:00
Ruihang Xia
421103c336 refactor: remove misdirectional alias "Request as GreptimeRequest" (#1940)
remove Request as GreptimeRequest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-12 10:06:05 +08:00
Ning Sun
41e856eb9e refactor: change logging level for mysql error log (#1938)
* refactor: change logging level for mysql error log

* Update src/common/telemetry/Cargo.toml

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-07-11 20:49:05 +08:00
JeremyHi
e1ca454992 chore: grpc-timeout = 10s (#1934)
* chore: grpc-timeout = 10s

* chore: fix ut
2023-07-11 15:07:18 +08:00
Weny Xu
2d30f4c373 fix: fix broken CI (#1933) 2023-07-11 14:48:41 +08:00
Lei, HUANG
a7ea3bbc16 feat: manual compact api (#1912)
* merge develop

* chore: merge develop

* fix: some cr comments

* fix: cr comments
2023-07-11 04:00:39 +00:00
Eugene Tolbakov
fc850c9988 feat(config-endpoint): add initial implementation (#1896)
* feat(config-endpoint): add initial implementation

* feat: add initial handler implementation

* fix: apply clippy suggestions, use axum response instead of string

* feat: address CR suggestions

* fix: minor adjustments in formatting

* fix: add a test

* feat: add to_toml_string method to options

* fix: adjust the assertion for the integration test

* fix: adjust expected indents

* fix: adjust assertion for the integration test

* fix: improve according to clippy
2023-07-11 11:08:32 +08:00
Ning Sun
f293126315 feat: add logical plan based prepare statement for postgresql (#1813)
* feat: add logical plan based prepare statement for postgresql

* refactor: correct more types

* Update src/servers/src/postgres/types.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: address review issues

* test: add datetime in integration tests

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-07-11 11:07:18 +08:00
Weny Xu
c615fb2a93 fix: fix uncaught error 🥲 (#1929)
fix: fix uncaught error
2023-07-10 23:41:20 +08:00
Yingwen
65f5349767 feat(mito2): Define basic structs for MitoEngine (#1928)
* chore: metadata wip

* docs(mito2): Add struct relationships

* feat(mito2): define basic structs

* feat: add version and refactor other metadata

* docs: remove generics param from MitoEngine

* chore: Update src/mito2/Cargo.toml

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* chore: Apply suggestions from code review

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-10 12:25:33 +00:00
Weny Xu
ed756288b3 fix: fix uncaught error (#1924) 2023-07-10 17:46:11 +08:00
shuiyisong
04ddeffd2a chore: add rate limit status code (#1923) 2023-07-10 17:41:59 +08:00
Weny Xu
c8ed1bbfae fix: cast orc data against output schema (#1922)
fix: cast data against output schema
2023-07-10 08:53:38 +00:00
Lei, HUANG
207d3d23a1 chore: bump latest greptime-proto version (#1920) 2023-07-10 16:28:22 +08:00
shuiyisong
63173f63a1 chore: add interceptor for prometheus query (#1919)
* chore: add prom query interceptor

* chore: add test

* chore: add test

* chore: fix cr issue
2023-07-10 16:28:07 +08:00
Yingwen
4ea8a78817 feat: dedup rows while flushing memtables (#1916)
* test: enlarge meta client timeout

* feat: dedup on flush

* test: enlarge datanode clients timeout

* chore: fix typo
2023-07-10 15:07:10 +08:00
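A sketch of the general dedup-on-flush technique from #1916, with illustrative types rather than the engine's real batch layout: with rows sorted by key ascending and sequence descending, keeping the first row per key keeps the newest version.

```rust
// rows: (primary key, sequence, value), pre-sorted by (key asc, seq desc).
fn dedup_sorted(rows: Vec<(String, u64, i64)>) -> Vec<(String, u64, i64)> {
    let mut out: Vec<(String, u64, i64)> = Vec::new();
    for row in rows {
        let is_dup = out.last().map_or(false, |last| last.0 == row.0);
        if !is_dup {
            out.push(row); // first occurrence per key is the newest version
        }
    }
    out
}
```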
Lei, HUANG
553530cff4 fix: immediately reschedule a compaction after compaction (#1882)
* fix: immediately reschedule a compaction after compaction

* refactor: add WriterCompactRequest

* feat: reschedule compaction

* fix: only reschedule compaction when it's triggered by flush

* fix: remove max_files_in_l0

---------

Co-authored-by: evenyag <realevenyag@gmail.com>
2023-07-10 15:05:31 +08:00
Lei, HUANG
c3db99513a fix: remove useless mirror substitution and RUN command (#1918)
fix: remove useless mirror substitution and RUN command in builder docker file
2023-07-10 14:54:43 +08:00
Ruihang Xia
8e256b317d test: add unit test for distributed limit pushdown (#1917)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-10 14:40:18 +08:00
Weny Xu
b31fad5d52 feat: switch to using create table procedure (#1861)
* feat: switch to using create table procedure

* fix: add missing table_id and fix uncaught error

* refactor: remove unused code and metrics

* chore: apply suggestions from CR

* chore: remove unused attributes

* feat: add info log and metrics

* fix: fix conflicts
2023-07-10 10:08:09 +08:00
Weny Xu
00181885cc refactor: remove unused code (#1913) 2023-07-10 10:06:22 +08:00
Niwaka
195dfdc5d3 feat: add deregister_schema to CatalogManager (#1911)
* feat: add deregister_schema to CatalogManager

* refactor: MemoryCatalogManager::deregister_schema

* fix: typo

* fix: typo
2023-07-10 09:59:14 +08:00
zyy17
f20b5695b8 ci: use enterprise ACR (#1908) 2023-07-07 23:14:34 +08:00
Yingwen
f731193ddc refactor: Define RegionId as a new type (#1903)
* refactor: Define RegionId as a new type

* chore: use into

* feat: custom debug print for region id

fix: test_show_create_table
2023-07-07 21:26:03 +08:00
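A minimal sketch of the newtype pattern behind #1903; the bit layout (table id in the high 32 bits) and the exact Debug format are assumptions for the example:

```rust
#[derive(Clone, Copy, PartialEq, Eq, Hash)]
pub struct RegionId(u64);

impl RegionId {
    pub fn new(table_id: u32, region_number: u32) -> Self {
        Self(((table_id as u64) << 32) | region_number as u64)
    }
    pub fn table_id(self) -> u32 {
        (self.0 >> 32) as u32
    }
}

// The commit mentions a custom debug print; something like "id(table, region)".
impl std::fmt::Debug for RegionId {
    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
        write!(f, "{}({}, {})", self.0, self.table_id(), self.0 as u32)
    }
}
```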
zyy17
963e468286 refactor: add curl binary in docker image (#1898) 2023-07-07 12:59:57 +00:00
LFC
f19498f73e refactor: unify KvBackend and KvStore (#1890)
* refactor: unify KvBackend and KvStore
2023-07-07 19:53:49 +08:00
Lei, HUANG
4cc42e2ba6 fix: before/after order (#1907) 2023-07-07 19:41:21 +08:00
Yingwen
cd5afc8cb7 ci: fix typo and check typo in docs ci (#1905) 2023-07-07 17:07:17 +08:00
Weny Xu
6dd24f4dc4 feat!: rename WITH parameter ENDPOINT_URL to ENDPOINT (#1904)
* feat!: rename WITH parameter ENDPOINT_URL to ENDPOINT

* fix: typo
2023-07-07 17:04:24 +08:00
Yingwen
55500b7711 docs(rfcs): Add table engine refactor RFC (#1899)
* docs(rfcs): Add table engine refactor RFC

* docs(rfcs): add pics

* refactor: replace svg files with mermaid diagrams

---------

Co-authored-by: Lei, HUANG <mrsatangel@gmail.com>
2023-07-07 16:27:10 +08:00
Weny Xu
64acfd3802 feat: implement drop table procedure (#1872)
* feat: implement drop table procedure

* fix: fix uncaught error

* refactor: refactor error handling

* chore: apply suggestions from CR

* refactor: move fetch_table/s to table_routes.rs

* chore: fix clippy

* chore: apply suggestions from CR

* chore: rebase onto develop

* feat: compare the table_route value before deleting

* feat: handle if table already exists on datanode

* Update src/meta-srv/src/procedure/drop_table.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-07-07 16:03:40 +08:00
Yingwen
ad165c1c64 ci: fix sqlness action in docs.yml doesn't have same name as develop.yml (#1902) 2023-07-07 14:33:47 +08:00
Niwaka
8dcb12e317 feat: support where in show (#1829)
* feat: support where in show

* fix: lift schema out of match

* fix: rename

* fix: improve error handling
2023-07-07 13:45:54 +08:00
LFC
03e30652c8 refactor: TableNameKey and DatanodeTableKey (#1868)
* refactor: TableNameKey and DatanodeTableKey
2023-07-07 13:27:43 +08:00
Yingwen
61c793796c ci: skip sqlness test on docs update (#1900) 2023-07-07 11:45:44 +08:00
Weny Xu
dc085442d7 chore: bump orc-rust to 0.2.4 (#1894)
chore: bump orc-rust to 0.2.4
2023-07-06 08:18:24 +00:00
Ruihang Xia
9153191819 fix: resolve catalog and schema in dist planner (#1891)
* try resolve catalog and schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* upload sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix information schema case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix unnamed table name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-06 08:08:44 +00:00
Lei, HUANG
979400ac58 refactor: support special characters in table keys (#1893)
* refactor: support special characters in table keys

* remove '"()

* Allow `:` as initial character of table names.
2023-07-06 15:17:08 +08:00
Weny Xu
28748edb0d chore: bump proto to 917ead6 (#1892)
* feat: add table_id for ddl exprs

* chore: bump proto to 917ead6
2023-07-06 13:29:36 +08:00
Niwaka
66e5ed5483 feat: support gcs storage (#1781) 2023-07-05 23:03:51 +08:00
Yingwen
af2fb2acbd docs: add tsbs benchmark result of v0.3.2 (#1888)
* docs: add tsbs benchmark result of v0.3.2

* docs: table header
2023-07-05 20:55:36 +08:00
Yingwen
eb2654b89a ci: allow update release (except release note) if it already exists (#1887) 2023-07-05 03:55:28 +00:00
liyang
3d0d082c56 refactor: release push binary (#1883) 2023-07-05 11:02:12 +08:00
Weny Xu
4073fceea5 fix: fix broken CI 😢 (#1884) 2023-07-05 10:43:53 +08:00
Weny Xu
8a00424468 refactor: implement Display for TableRouteKey (#1879) 2023-07-05 09:42:16 +08:00
liyang
4b580f4037 feat: release binary to aws s3 (#1881) 2023-07-04 22:33:35 +08:00
Weny Xu
ee16262b45 feat: add create table procedure (#1845)
* feat: add create table procedure

* feat: change table_info type from vec u8 to RawTableInfo

* feat: return create table status

* fix: fix uncaught error

* refactor: use a notifier to respond to callers

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* chore: add comment

* chore: apply suggestions from CR

* refactor: make CreateMetadata step after DatanodeCreateTable step
2023-07-04 22:24:43 +08:00
Yingwen
f37b394f1a fix: check table existence in create table procedure (#1880)
* fix: check table existence in table procedures

* fix: use correct error variant

* chore: address view comments

* chore: address comments

* test: change error code
2023-07-04 22:01:27 +08:00
Eugene Tolbakov
ccee60f37d feat(http_body_limit): add initial support for DefaultBodyLimit (#1860)
* feat(http_body_limit): add initial support for DefaultBodyLimit

* fix: address CR suggestions

* fix: adjust the const for default http body limit

* fix: adjust the toml_str for the test

* fix: address CR suggestions

* fix: body_limit units in example config toml files

* fix: address clippy suggestions
2023-07-04 20:56:56 +08:00
Ruihang Xia
bee8323bae chore: bump sqlness to 0.5.0 (#1877)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-04 19:49:12 +08:00
Weny Xu
000df8cf1e feat: add ddl client (#1856)
* feat: add ddl client

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2023-07-04 19:32:02 +08:00
Yingwen
884731a2c8 chore: initialize mito2 crate (#1875) 2023-07-04 17:55:00 +08:00
shuiyisong
2922c25a16 chore: stop caching None in CachedMetaKvBackend (#1871)
* chore: don't cache None

* fix: test case

* chore: add comment

* chore: minor rewrite
2023-07-04 17:17:48 +08:00
Lei, HUANG
4dec06ec86 chore: bump version 0.3.2 (#1876)
bump version 0.3.2
2023-07-04 17:04:27 +08:00
Lei, HUANG
3b6f70cde3 feat: initial twcs impl (#1851)
* feat: initial twcs impl

* chore: rename SimplePicker to LeveledPicker

* rename some structs

* Remove Compaction strategy

* make compaction picker a trait object

* make compaction picker configurable for every region

* chore: add some test for ttl

* add some tests

* fix: some style issues in cr

* feat: enable twcs when creating tables

* feat: allow config time window when creating tables

* fix: some cr comments
2023-07-04 16:42:27 +08:00
Yingwen
b8e92292d2 feat: Implement a new scan mode using a chain reader (#1857)
* feat: add log

* feat: print more info

* feat: use chain reader

* fix: panic on getting first range

* fix: prev not updated

* fix: reverse readers and iter backward

* chore: don't print windows in log

* feat: consider memtable range

Also fixes the issue of using an incorrect comparison method to sort time
ranges.

* fix: merge memtable window with sst's

* feat: add use_chain_reader option

* feat: skip empty memtables

* chore: change log level

* fix: memtable range not ordered

* style: fix clippy

* chore: address review comments

* chore: print region id in log
2023-07-04 16:01:34 +08:00
Ruihang Xia
746fe8b4fe fix: use mark-deletion for system catalog (#1874)
* fix: use mark-deletion for system catalog

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix the default value

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean tables

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-07-04 16:00:39 +08:00
JeremyHi
20f2fc4a2a feat: add leader kv store cache for metadata (#1853)
* feat: add leader kv store cache for metadata

* refactor: create cache internal

* fix: race condition

* fix: race condition on read
2023-07-04 15:49:42 +08:00
Yingwen
2ef84f64f1 feat(servers): enlarge default body limit to 64M (#1873) 2023-07-04 07:13:14 +00:00
fys
451cc02d8d chore: add feature for metrics-process, enabled by default (#1870)
chore: add feature for metrics-process, enabled by default
2023-07-04 13:28:33 +08:00
Lei, HUANG
b466ef6cb6 fix: libz dependency (#1867) 2023-07-03 10:08:53 +00:00
LFC
5b42e15105 refactor: add TableInfoKey and TableRegionKey (#1865)
* refactor: add TableInfoKey and TableRegionKey

* refactor: move KvBackend to common-meta

* fix: resolve PR comments
2023-07-03 18:01:20 +08:00
shuiyisong
e1bb7acfe5 fix: return err msg when using a wrong database in MySQL (#1866) 2023-07-03 17:31:09 +08:00
Lei, HUANG
2c0c4672b4 feat: support building binary for centos7 (#1863)
feat: support building binary for centos7
2023-07-03 14:13:55 +08:00
Cao Zhengjia
e54415e723 feat: Make heartbeat intervals configurable in Frontend and Datanode (#1864)
* update frontend options and config

* fix format
2023-07-03 12:08:47 +08:00
Ruihang Xia
783a794060 fix: break CI again 🥲 (#1859)
* fix information schema case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable -Wunused_result lint

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-30 20:01:14 +08:00
Vanish
563f6e05e2 feat: remove all the manifests in drop_region. (#1834)
* feat: drop_region delete manifest file

* chore: remove redundant code

* chore: fmt

* chore: clippy

* chore: clippy

* feat: support delete_all in manifest.

* chore: CR

* test: test_drop_basic, test_drop_reopen

* chore: cr

* fix: typo

* chore: cr
2023-06-30 17:42:11 +08:00
Ruihang Xia
25cb667470 fix: sort unstable sqlness result (#1858)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-30 09:25:24 +00:00
Ruihang Xia
c77b94650c refactor: remove Table::scan method (#1855)
* remove scan method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-30 12:13:14 +08:00
Ruihang Xia
605776f49c feat: support bool operator with other computation (#1844)
* add some cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl atan2 and power

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix instant manipulator

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-29 19:23:54 +08:00
Ruihang Xia
d45e7b7480 refactor: build parquet file stream from ParquetExec (#1852)
* refactor: build parquet file stream from ParquetExec

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* rename sqlness case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-29 19:19:31 +08:00
JeremyHi
2b3ca1309a feat: table_routes util (#1849) 2023-06-29 16:47:56 +08:00
Weny Xu
acfa229641 chore: bump orc-rust to 0319acd (#1847) 2023-06-29 10:45:05 +08:00
JeremyHi
7e23dd7714 feat: http api for node-lease (#1843)
* feat: add node-lease http api

* revert: show_create.result
2023-06-29 09:34:54 +08:00
Lei, HUANG
559d1f73a2 feat: push all possible filters down to parquet exec (#1839)
* feat: push all possible filters down to parquet exec

* fix: project

* test: add ut for DatafusionArrowPredicate

* fix: according to CR comments
2023-06-28 20:14:37 +08:00
JeremyHi
bc33fdc8ef feat: save node lease into memory (#1841)
* feat: lease secs = 5

* feat: set lease data into memory of leader

* fix: ignore stale heartbeat

* Update src/meta-srv/src/election.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-06-28 11:54:06 +08:00
Lei, HUANG
f287d3115b chore: replace result assertions (#1840)
* s/assert!\((.*)\.is_ok\(\)\);/\1.unwrap\(\);/g

* s/assert!\((.*)\.is_some\(\)\);/\1.unwrap\(\);/g
2023-06-27 19:14:48 +08:00
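The two substitutions above rewrite `assert!(x.is_ok());` / `assert!(x.is_some());` into `x.unwrap();`. A hedged sketch of the effect — `parse_port` is a hypothetical function used only for illustration:

```rust
// Hypothetical helper, used only to demonstrate the assertion rewrite.
fn parse_port(s: &str) -> Result<u16, std::num::ParseIntError> {
    s.trim().parse()
}

#[test]
fn test_parse_port() {
    // Before: on failure this reports only "assertion failed",
    // hiding the underlying error entirely.
    assert!(parse_port("8080").is_ok());

    // After the substitution: unwrap() panics with the underlying error
    // value, so a failing test shows *why* the call failed.
    parse_port("8080").unwrap();
}
```

The motivation is test ergonomics: `unwrap()` surfaces the actual error (or missing value) in the panic message instead of a bare assertion failure.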
Ruihang Xia
b737a240de fix: add sqlness tests for some promql function (#1838)
* correct range manipulate exec fmt text

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix partition requirement

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix udf signature

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* finalise

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore unstable ordered result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add nan value test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-27 19:05:26 +08:00
fys
99f0479bd2 feat: improve influxdb v2 api compatibility (#1831)
* feat: support influxdb v2 api

* cr
2023-06-27 18:21:51 +08:00
fys
313121f2ae fix: blocking during stream insert (#1835)
* fix: stream insert blocking

* fix: example link

* chore: Increase the default channel size "1024" -> "65536"
2023-06-27 16:57:03 +08:00
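The capacity bump in the last item above hints at the failure mode: with a bounded channel, a slow consumer makes `send().await` park once the buffer fills. A minimal sketch with tokio — the capacities mirror the commit message, everything else is illustrative:

```rust
use std::time::Duration;
use tokio::sync::mpsc;

#[tokio::main]
async fn main() {
    // Was 1024; a larger buffer absorbs bursts so stream inserts block less often.
    let (tx, mut rx) = mpsc::channel::<Vec<u8>>(65536);

    tokio::spawn(async move {
        while let Some(batch) = rx.recv().await {
            // Simulate a consumer that is slower than the producer.
            tokio::time::sleep(Duration::from_millis(1)).await;
            drop(batch);
        }
    });

    for i in 0..10_000u32 {
        // With capacity 1024 this await would park far more often under load.
        tx.send(i.to_le_bytes().to_vec()).await.unwrap();
    }
}
```

A larger buffer trades memory for throughput; it does not remove backpressure, it only raises the point at which senders start waiting.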
LFC
fcff66e039 chore: deny unused results (#1825)
* chore: deny unused results

* rebase
2023-06-27 15:33:53 +08:00
shuiyisong
03057cab6c feat: physical plan wrapper (#1837)
* test: add physical plan wrapper trait

* test: add plugins to datanode initialization

* test: add plugins to datanode initialization

* chore: add metrics method

* chore: update meter-core version

* chore: remove unused code

* chore: impl metrics method on df execution plan adapter

* chore: minor comment fix

* chore: add retry in create table

* chore: shrink keep lease handler buffer

* chore: add etcd batch size warn

* chore: try shrink

* Revert "chore: try shrink"

This reverts commit 0361b51670.

* chore: add create table backup time

* add metrics in some interfaces

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* calc elapsed time and rows

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: remove timer in scan batch

* chore: add back stream metrics wrapper

* chore: add timer to ready poll

* chore: minor update

* chore: try using df_plan.metrics()

* chore: remove table scan timer

* chore: remove scan timer

* chore: add debug log

* Revert "chore: add debug log"

This reverts commit 672a0138fd.

* chore: use batch size as row count

* chore: use batch size as row count

* chore: tune code for pr

* chore: rename to physical plan wrapper

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-27 14:04:04 +08:00
Weny Xu
dcfce49cff refactor(datanode): move Instance heartbeat task to Datanode struct (#1832)
* refactor(datanode): move Instance heartbeat to Datanode struct

* chore: apply suggestions from CR

* fix: start heartbeat task after instance starts
2023-06-27 12:32:20 +08:00
JeremyHi
78b07996b1 feat: txn for meta (#1828)
* feat: txn for meta kvstore

* feat: txn

* chore: add unit test

* chore: more test

* chore: more test

* Update src/meta-srv/src/service/store/memory.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: by cr

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-06-26 17:12:48 +08:00
dennis zhuang
034564fd27 feat: make blob (binary) type work (#1818)
* feat: test blob type

* feat: make blob type work

* chore: comment

* Update src/sql/src/statements/insert.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* chore: by CR comments

* fix: comment

* Update src/sql/src/statements/insert.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/sql/src/statements/insert.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* fix: test

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-26 08:49:04 +00:00
Ruihang Xia
a95f8767a8 refactor: merge catalog provider & schema provider into catalog manager (#1803)
* move  to expr_factory

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move configs into service_config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move GrpcQueryHandler into distributed.rs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile and test in catalog sub-crate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix table-procedure compile and test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix query compile and tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix datanode compile and tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix catalog/query/script/servers compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix frontend compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix nextest except information_schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* support information_schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix merge errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove other structs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix format

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change deregister_table's return type to empty tuple

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-26 15:08:59 +08:00
Eugene Tolbakov
964d26e415 fix: docker build for aarch64 (#1826) 2023-06-25 18:29:00 +09:00
Yingwen
fd412b7b07 refactor!: Uses table id to locate tables in table engines (#1817)
* refactor: add table_id to get_table()/table_exists()

* refactor: Add table_id to alter table request

* refactor: Add table id to DropTableRequest

* refactor: add table id to DropTableRequest

* refactor: Use table id as key for the tables map

* refactor: use table id as file engine's map key

* refactor: Remove table reference from engine's get_table/table_exists

* style: remove unused imports

* feat!: Add table id to TableRegionalValue

* style: fix clippy

* chore: add comments and logs
2023-06-25 15:05:20 +08:00
Weny Xu
223cf31409 feat: support copying from ORC format (#1814)
* feat: support copying from ORC format

* test: add copy from orc test

* chore: add license header

* refactor: remove unimplemented macro

* chore: apply suggestions from CR

* chore: bump orc-rust to 0.2.3
2023-06-25 14:07:16 +08:00
Ruihang Xia
62f660e439 feat: implement metrics for Scan plan (#1812)
* add metrics in some interfaces

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* calc elapsed time and rows

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-25 14:06:50 +08:00
Lei, HUANG
0fb18245b8 fix: docker build (#1822) 2023-06-25 11:05:46 +08:00
Weny Xu
caed6879e6 refactor: remove redundant code (#1821) 2023-06-25 10:56:31 +08:00
Yingwen
5ab0747092 test(storage): wait task before checking scheduled task num (#1811) 2023-06-21 18:04:34 +08:00
Ruihang Xia
b1ccc7ef5d fix: prevent filter pushdown in distributed planner (#1806)
* fix: prevent filter pushdown in distributed planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix metadata

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-21 16:25:50 +08:00
Lei, HUANG
d1b5ce0d35 chore: check catalog deregister result (#1810)
* chore: check deregister result and return error on failure

* refactor: SystemCatalog::deregister_table returns Result<()>
2023-06-21 08:09:11 +00:00
Lei, HUANG
a314993ab4 chore: change logstore default config (#1809) 2023-06-21 07:34:24 +00:00
LFC
fa522bc579 fix: drop region alive countdown tasks when deregistering table (#1808) 2023-06-21 14:49:32 +08:00
Lei, HUANG
5335203360 feat: support cross compilation to aarch64 linux (#1802) 2023-06-21 14:08:45 +08:00
Ruihang Xia
23bf55a265 fix: __field__ matcher on single value column (#1805)
* fix error text and field_column_names

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add empty line

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* improve style

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-21 10:59:58 +08:00
Eugene Tolbakov
3b91fc2c64 feat: add initial implementation for status endpoint (#1789)
* feat: add initial implementation for status endpoint

* feat(status_endpoint): add more data to response

* feat(status_endpoint): use build data env vars

* feat(status_endpoint): add simple test

* fix(status_endpoint): adjust the toml indentation
2023-06-21 10:50:08 +08:00
LFC
6205616301 fix: filter table regional values with the current node id (#1800) 2023-06-20 19:17:35 +08:00
JeremyHi
e47ef1f0d2 chore: minor fix (#1801) 2023-06-20 11:03:52 +00:00
Lei, HUANG
16c1ee2618 feat: incremental database backup (#1240)
* feat: incremental database backup

* chore: rebase develop

* chore: move backup to StatementExecutor

* feat: copy database parser

* chore: remove some todos

* chore: use timestamp string instead of i64 string

* fix: typo
2023-06-20 18:26:55 +08:00
JeremyHi
323e2aed07 feat: deal with more than 128 txn (#1799) 2023-06-20 17:56:45 +08:00
LFC
cbc2620a59 feat: start region alive keepers (#1796)
* feat: start region alive keepers
2023-06-20 15:45:29 +08:00
JeremyHi
4fdee5ea3c feat: deal with node epoch (#1795)
* feat: deal with node epoch

* feat: dn send node_epoch

* Update src/meta-srv/src/handler/persist_stats_handler.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* Update src/meta-srv/src/service/store/ext.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: by cr

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-06-20 07:07:05 +00:00
dennis zhuang
30472cebae feat: prepare supports caching logical plan and inferring param types (#1776)
* feat: change do_describe function signature

* feat: infer param type and cache logical plan for MySQL prepared statements

* fix: convert_value

* fix: forgot helper

* chore: comments

* fix: typo

* test: add more tests and test date, datetime in mysql

* chore: fix CR comments

* chore: add location

* chore: by CR comments

* Update tests-integration/tests/sql.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: remove the trace

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-20 04:07:28 +00:00
Ruihang Xia
903f02bf10 ci: optimize release progress (#1794)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-20 11:39:53 +08:00
JeremyHi
1703e93e15 feat: add handler execution timer (#1791)
* feat: add handler execution timer

* fix: by cr
2023-06-20 11:25:13 +08:00
LFC
2dd86b686f feat: extend region leases in Metasrv (#1784)
* feat: extend region leases in Metasrv

* fix: resolve PR comments
2023-06-19 19:55:59 +08:00
LFC
128c6ec98c feat: region alive keeper in Datanode (#1780) 2023-06-19 14:50:33 +08:00
Lei, HUANG
960b84262b fix: abort parquet writer (#1785)
* fix: sst file size

* fix: avoid creating file when no row's been written

* chore: rename tests

* fix: some clippy issues

* fix: some cr comments
2023-06-19 03:19:31 +00:00
Lei, HUANG
69854c07c5 fix: wait for compaction task to finish (#1783) 2023-06-16 16:45:06 +08:00
JeremyHi
1eeb5b4330 feat: disable_region_failover option for metasrv (#1777) 2023-06-15 16:26:27 +08:00
LFC
9b3037fe97 feat: a countdown task for closing region in Datanode (#1775) 2023-06-14 15:50:21 +08:00
dennis zhuang
09747ea206 feat: use DataFrame to replace SQL for Prometheus remote read (#1774)
* feat: debug QueryEngineState

* feat: impl read_table to create DataFrame for a table

* fix: clippy warnings

* feat: use DataFrame to handle prometheus remote read queries

* Update src/frontend/src/instance/prometheus.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* chore: CR comments

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-06-14 07:39:28 +00:00
Lei, HUANG
fb35e09072 chore: fix compaction caused race condition (#1767)
fix: unit tests. For real, this time.
2023-06-13 21:03:09 +08:00
Weny Xu
803940cfa4 feat: enable azblob tests (#1765)
* feat: enable azblob tests

* fix: add missing arg
2023-06-13 07:44:57 +00:00
Weny Xu
420ae054b3 chore: add debug log for heartbeat (#1770) 2023-06-13 07:43:26 +00:00
Lei, HUANG
0f1e061f24 fix: compile issue on develop and workaround to fix failing tests caused by logstore file lock (#1771)
* fix: compile issue on develop and workaround to fix failing tests caused by logstore file lock

* Apply suggestions from code review

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-06-13 07:30:16 +00:00
Lei, HUANG
7961de25ad feat: persist compaction time window (#1757)
* feat: persist compaction time window

* refactor: remove useless compaction window fields

* chore: revert some useless change

* fix: some CR comments

* fix: comment out unstable sqlness test

* revert commented sqlness
2023-06-13 10:15:42 +08:00
Lei, HUANG
f7d98e533b chore: fix compaction caused race condition (#1759)
* fix: set max_files_in_l0 in unit tests to avoid compaction

* refactor: pass whole EngineConfig

* fix: comment out unstable sqlness test

* revert commented sqlness
2023-06-12 11:19:42 +00:00
Ruihang Xia
b540d640cf fix: unstable order with union operation (#1763)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-12 18:16:24 +08:00
Eugene Tolbakov
51a4d660b7 feat(to_unixtime): add timestamp types as arguments (#1632)
* feat(to_unixtime): add timestamp types as arguments

* feat(to_unixtime): change the return type

* feat(to_unixtime): address code review issues

* feat(to_unixtime): fix fmt issue
2023-06-12 17:21:49 +08:00
Ruihang Xia
1b2381502e fix: bring EnforceSorting rule forward (#1754)
* fix: bring EnforceSorting rule forward

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicated rules

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* wrap remove logic into a method

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-12 07:29:08 +00:00
Yingwen
0e937be3f5 fix(storage): Use region_write_buffer_size as default value (#1760) 2023-06-12 15:05:17 +08:00
Weny Xu
564c183607 chore: make MetaKvBackend public (#1761) 2023-06-12 14:13:26 +08:00
Ruihang Xia
8c78368374 refactor: replace #[snafu(backtrace)] with Location (#1753)
* remove snafu backtrace

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-12 11:55:33 +08:00
Lei, HUANG
67c16dd631 feat: optimize some parquet writer parameter (#1758) 2023-06-12 11:46:45 +08:00
Lei, HUANG
ddcee052b2 fix: order by optimization (#1748)
* add some debug log

* fix: use lazy parquet reader in MitoTable::scan_to_stream to avoid IO in plan stage

* fix: unit tests

* fix: order-by optimization

* add some tests

* fix: move metric names to metrics.rs

* fix: some cr comments
2023-06-12 11:45:43 +08:00
王听正
7efcf868d5 refactor: Remove MySQL related options from Datanode (#1756)
* refactor: Remove MySQL related options from Datanode

remove mysql_addr and mysql_runtime_size in datanode.rs, remove command line argument mysql_addr in cmd/src/datanode.rs

#1739

* feat: remove --mysql-addr from command line

in pre-commit, sqlness cannot find --mysql-addr because we removed it

issue#1739

* refactor: remove --mysql-addr from command line

in pre-commit, sqlness cannot find --mysql-addr because we removed it

issue#1739
2023-06-12 11:00:24 +08:00
dennis zhuang
f08f726bec test: s3 manifest (#1755)
* feat: change default manifest options

* test: s3 manifest

* feat: revert checkpoint_margin to 10

* Update src/object-store/src/test_util.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-06-09 10:28:41 +00:00
Ning Sun
7437820bdc ci: correct data type for input and event check (#1752) 2023-06-09 13:59:56 +08:00
Lei, HUANG
910c950717 fix: jemalloc error does not implement Error (#1747) 2023-06-09 04:00:50 +00:00
Zou Wei
f91cd250f8 feat: make version() show greptime info. (#1749)
* feat: impl get_version() to return greptime info.

* fix: refactor test case.
2023-06-09 11:38:52 +08:00
Yingwen
115d9eea8d chore: Log version and arguments (#1744) 2023-06-09 11:38:08 +08:00
Ning Sun
bc8f236806 ci: fix using env in job.if context (#1751) 2023-06-09 11:28:29 +08:00
Yiran
fdbda51c25 chore: update document links in README.md (#1745) 2023-06-09 10:05:24 +08:00
Ning Sun
e184826353 ci: allow triggering nightly release manually (#1746)
ci: allow triggering nightly manually
2023-06-09 10:04:44 +08:00
Yingwen
5b8e54e60e feat: Add HTTP API for cpu profiling (#1694)
* chore: print source error in mem-prof

* feat(common-pprof): add pprof crate

* feat(servers): Add pprof handler to router

refactor the mem_prof handler to avoid checking feature while
registering router

* feat(servers): pprof handler support different output type

* docs(common-pprof): Add readme

* feat(common-pprof): Build guard using code in pprof-rs's example

* feat(common-pprof): use prost

* feat: don't add timeout to perf api

* feat: add feature pprof

* feat: update readme

* test: fix tests

* feat: close region in TestBase

* feat(pprof): address comments
2023-06-07 15:25:16 +08:00
Lei, HUANG
8cda1635cc feat: make jemalloc the default allocator (#1733)
* feat: add jemalloc metrics

* fix: dep format
2023-06-06 12:11:22 +00:00
Lei, HUANG
f63ddb57c3 fix: parquet time range predicate panic (#1735)
fix: parquet reader should use store schema to build time range predicate
2023-06-06 19:11:45 +08:00
fys
d2a8fd9890 feat: add route admin api in metasrv (#1734)
* feat: add route admin api in metasrv

* fix: add license
2023-06-06 18:00:02 +08:00
LFC
91026a6820 chore: clean up some of my todos (#1723)
* chore: clean up some of my todos

* fix: ci
2023-06-06 17:25:04 +08:00
Ruihang Xia
7a60bfec2a fix: empty result type on prom query endpoint (#1732)
* adjust return type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-06 15:40:54 +08:00
Niwaka
a103614fd2 feat: support /api/v1/series for Prometheus (#1620)
* feat: support /api/v1/series for Prometheus

* chore: error handling

* feat: update tests
2023-06-06 10:29:16 +08:00
Yingwen
1b4976b077 feat: Adds some metrics for write path and flush (#1726)
* feat: more metrics

* feat: Add preprocess elapsed

* chore(storage): rename metric

* test: fix tests
2023-06-05 21:35:44 +08:00
Lei, HUANG
166fb8871e chore: bump greptimedb version 0.4.0 (#1724) 2023-06-05 18:41:53 +08:00
Yingwen
466f258266 feat(servers): collect samples by metric (#1706) 2023-06-03 17:17:52 +08:00
Ruihang Xia
94228285a7 feat: convert values to vector directly (#1704)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-03 12:41:13 +08:00
JeremyHi
3d7185749d feat: insert with stream (#1703)
* feat: insert with stream

* chore: by CR
2023-06-03 03:58:00 +00:00
LFC
5004cf6d9a feat: make grpc insert requests in a batch (#1687)
* feat: make Prometheus remote write in a batch

* rebase

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments
2023-06-02 09:06:48 +00:00
Ruihang Xia
8e69aef973 feat: serialize/deserialize support for PromQL plans (#1684)
* implement serializer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy and CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* register registry

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* enable promql plan for dist planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-02 16:14:05 +08:00
Ruihang Xia
2615718999 feat: merge scan for distributed execution (#1660)
* generate exec plan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move DatanodeClients to client crate

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* wip MergeScanExec::to_stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile errors

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix default catalog

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix expand order of new stage

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move sqlness cases containing plan out of common dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* refactor information schema to allow duplicated scan call

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: ignore two cases due to substrait

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reorganise sqlness common cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* redact round robin partition number

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

* skip transforming projection

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert common/order

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/query/src/dist_plan/merge_scan.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore region failover IT

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result again and again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* unignore some tests about projection

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* enable failover tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-06-02 06:42:54 +00:00
fys
fe6e3daf81 fix: failed to insert data with u8 (#1701)
* fix: failed to insert data with u8 field

* remove unused code

* fix cr
2023-06-02 06:01:59 +00:00
ZonaHe
b7e1778ada feat: update dashboard to v0.2.6 (#1700)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-06-02 13:26:07 +08:00
Lei, HUANG
ccd666aa9b fix: avoid writing manifest and wal if no files are actually flushed (#1698)
* fix: avoid writing manifest and wal if no files are actually flushed

* fix: simplify log
2023-06-02 13:16:59 +08:00
JeremyHi
2aa442c86d feat: exists API for KVStore (#1695)
* feat: exists API for kv

* chore: add unit test
2023-06-02 12:35:04 +08:00
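An `exists` check on a kv store is typically a thin convenience over a point lookup. A minimal sketch under that assumption — the `KvBackend` trait here is illustrative, not the project's actual API:

```rust
use std::collections::HashMap;

/// Illustrative key-value backend trait; not the project's real interface.
trait KvBackend {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>>;

    /// The convenience an exists API provides: answer "is the key present?"
    /// without handing the value back to the caller.
    fn exists(&self, key: &[u8]) -> bool {
        self.get(key).is_some()
    }
}

struct MemoryKv(HashMap<Vec<u8>, Vec<u8>>);

impl KvBackend for MemoryKv {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.0.get(key).cloned()
    }
}
```

For a remote store, a dedicated exists call can also skip transferring the value over the network, which is the usual reason to promote it to a first-class API.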
Weny Xu
f811ae4665 fix: enable region failover test (#1699)
fix: fix region failover test
2023-06-02 12:05:37 +08:00
Ruihang Xia
e5b6f8654a feat: optimizer rule to pass expected output ordering hint (#1675)
* move type conversion rule into optimizer dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement order_hint rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* it works!

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use column name instead

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* accomplish test case

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update lock file

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-02 03:43:51 +00:00
Ruihang Xia
ff6d11ddc7 chore: ignore symbol link target file (#1696)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-02 10:42:44 +08:00
Ruihang Xia
878c6bf75a fix: do not alias relation before join (#1693)
* fix: do not alias relation before join

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/promql/src/error.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-06-01 14:24:37 +00:00
LFC
ce440606a9 fix: sqlness failed due to region failover wrongly kicking in for dropped or renamed table (#1690)
fix: sqlness failed due to region failover wrongly kicking in for dropped or renamed table
2023-06-01 21:47:47 +08:00
fys
5fd7250dca fix: invalidate route cache on renaming table (#1691)
* fix: sqlness test

* remove unnecessary clone

* fix cr
2023-06-01 20:43:31 +08:00
Ruihang Xia
5a5e88353c fix: do not change timestamp index column while planning aggr (#1688)
* fix: do not change timestamp index column while planning aggr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove println

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-01 20:17:18 +08:00
Ruihang Xia
ef15de5f17 ci: always upload sqlness log (#1692)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-06-01 20:01:26 +08:00
fys
86adac1532 chore: reject table creation when partition count exceeds peer number (#1654)
* chore: table creation is rejected when partition_num exceeds peer_num

* chore: modify no_active_datanode error msg

* fix: ut

* fix sqlness test and add limit for select peer in region_failover

* upgrade greptime-proto

* self cr

* fix: cargo sqlness

* chore: add table info in select ctx for failover

* fix sqlness
2023-06-01 09:05:17 +00:00
Ning Sun
e7a410573b test: fix sqlx compatibility and adds integration test for sqlx (#1686)
* test: fix sqlx compatibility and adds integration test for sqlx

* test: correct insert statements
2023-06-01 15:43:13 +08:00
Yingwen
548f0d1e2a feat: Add app version metric (#1685)
* feat: Add app version metric

* chore: use greptimedb instead of greptime
2023-06-01 14:31:08 +08:00
Zheming Li
5467ea496f feat: Add column supports at first or after the existing columns (#1621)
* feat: Add column supports at first or after the existing columns

* Update src/common/query/Cargo.toml

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-06-01 02:13:00 +00:00
Yingwen
70e17ead68 fix: Print source error in subprocedure failure message (#1683)
* fix: print source error in subprocedure failed error

* feat: print source error in subprocedure failure message
2023-06-01 09:51:31 +08:00
dennis zhuang
ae8203fafa fix: prepare statement doesn't support insert clause (#1680)
* fix: insert clause doesn't support prepare statement

* fix: manifest dir

* fix: format

* fix: temp path
2023-05-31 20:14:58 +08:00
Ruihang Xia
ac3666b841 chore(deps): bump arrow/parquet to 40.0, datafusion to the latest HEAD (#1677)
* fix compile error

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove deprecated substrait

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update deps

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* downgrade opendal to 0.33.1

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change finish's impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update test results

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* ignore failing cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-31 18:55:02 +08:00
Weny Xu
0460f3ae30 test: add write test for region failover (#1673)
* test: add write test for region failover

* test: add written data assertion after failover

* test: support more storage types
2023-05-31 15:42:00 +08:00
Yingwen
9d179802b8 feat: Add a global TTL option for all tables (#1679)
* feat: Add a global TTL option for all tables

* docs: update config examples

* chore: print start command and options when standalone/frontend starts
2023-05-31 15:36:25 +08:00
Lei, HUANG
72b6bd11f7 feat: adapt window reader to order rules (#1671)
* feat: adapt window reader to order rules

* fix: add asc sort test case
2023-05-31 03:36:17 +00:00
Xuanwo
6b08a5f94e chore: Bump OpenDAL to v0.36 (#1678)
* chore: Bump OpenDAL to v0.36

Signed-off-by: Xuanwo <github@xuanwo.io>

* Fix

Signed-off-by: Xuanwo <github@xuanwo.io>

---------

Signed-off-by: Xuanwo <github@xuanwo.io>
2023-05-31 11:12:40 +08:00
dennis zhuang
00104bef76 feat: supports CTE query (#1674)
* feat: supports CTE query

* test: move cte test to standalone
2023-05-30 12:08:49 +00:00
Zou Wei
ae81c7329d feat: support azblob storage. (#1659)
* feat: support azblob storage.

* test: add some tests.

* refactor: use if-let.
2023-05-30 19:59:38 +08:00
Yingwen
c5f6d7c99a refactor: update proto and rename incorrect region_id fields (#1670) 2023-05-30 15:19:04 +09:00
Weny Xu
bb1b71bcf0 feat: acquire table_id from region_id (#1656)
feat: acquire table_id from region_id
2023-05-30 03:36:47 +00:00
Weny Xu
a4b884406a feat: add invalidate cache step (#1658)
* feat: add invalidate cache step

* refactor: refactor TableIdent

* chore: apply suggestions from CR
2023-05-30 11:17:59 +08:00
dennis zhuang
ab5dfd31ec feat: sql dialect for different protocols (#1631)
* feat: add SqlDialect to query context

* feat: use session in postgrel handlers

* chore: refactor sql dialect

* feat: use different dialects for different sql protocols

* feat: adds GreptimeDbDialect

* refactor: replace GenericDialect with GreptimeDbDialect

* feat: save user info to session

* fix: compile error

* fix: test
2023-05-30 09:52:35 +08:00
Yingwen
563ce59071 feat: Add request type and result code to grpc metrics (#1664) 2023-05-30 09:51:08 +08:00
LFC
51b23664f7 feat: update table metadata in lock (#1634)
* feat: use a distributed lock to guard against concurrent updates of table metadata in the region failover procedure

* fix: resolve PR comments

* fix: resolve PR comments
2023-05-30 08:59:14 +08:00
Ruihang Xia
9e21632f23 fix: clippy warning (#1669)
* fix: clippy warning

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* restore the removed common sqlness cases

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-30 08:55:24 +08:00
Ruihang Xia
b27c569ae0 refactor: add scan_to_stream() to Table trait to postpone the stream generation (#1639)
* add scan_to_stream to Table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl parquet stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* reorganise adapters

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement scan_to_stream for mito table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add location info

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: table scan

* UT pass

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl project record batch

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix information schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* resolve CR comments

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove one todo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix errors generated by merge commit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add output_ordering method to record batch stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix rustfmt

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* enhance error types

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <mrsatangel@gmail.com>
2023-05-29 20:03:47 +08:00
Weny Xu
0eaae634fa fix: invalidate table route cache (#1663) 2023-05-29 18:49:23 +08:00
JeremyHi
8b9b5a0d3a feat: broadcast with mailbox (#1661)
feat: broadcast with mailbox
2023-05-29 15:11:50 +08:00
Lei, HUANG
78fab08b51 feat: window inferer (#1648)
* feat: window inferer

* doc: add some doc

* test: add a long missing unit test case for windowed reader

* add more tests

* fix: some CR comments
2023-05-29 14:41:00 +08:00
Weny Xu
d072947ef2 refactor: move code out of loop (#1657) 2023-05-27 13:31:13 +08:00
Weny Xu
4094907c09 fix: fix type casting issue (#1652)
* fix: fix type casting issue

* chore: apply suggestion from CR
2023-05-27 00:17:56 +08:00
Ruihang Xia
0da94930d5 feat: impl literal only PromQL query (#1641)
* refactor EmptyMetric to accept expr

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl literal only query

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add empty line

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* support literal on HTTP gateway

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy (again)

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-26 23:27:03 +08:00
fys
f0a519b71b chore: reduce the number of requests for meta (#1647) 2023-05-26 17:25:18 +08:00
Yingwen
89366ba939 refactor: Holds histogram in the timer to avoid clone labels if possible (#1653)
* feat: use Histogram struct to impl Timer

* fix: fix compile errors

* feat: downgrade metrics-process

* fix: compiler errors
2023-05-26 17:12:03 +08:00
Yingwen
c042723fc9 feat: Record process metrics (#1646)
* feat(servers): Export process metrics

* chore: update metrics related deps to get the process-metrics printed

The latest process-metrics crate depends on metrics 0.21, while we use
metrics 0.20. This causes the process-metrics crate to not record metrics
when the metrics macros are used
2023-05-26 11:51:01 +08:00
Weny Xu
732784d3f8 feat: support to load missing region (#1651)
* feat: support to load missing region

* Update src/mito/src/table.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

---------

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-05-26 03:30:46 +00:00
Ning Sun
332b3677ac feat: add metrics for ingested row count (#1645) 2023-05-26 10:57:27 +08:00
Weny Xu
6cd634b105 fix: fix typo (#1649) 2023-05-26 10:24:12 +08:00
Yinnan Yao
cd1ccb110b fix: install python3-pip in Dockerfile (#1644)
When I use docker build to build the image, I get an error that pip is missing. Install python3-pip in the Dockerfile.

Fixes: #1643

Signed-off-by: yaoyinnan <yaoyinnan@foxmail.com>
2023-05-25 23:00:39 +08:00
Weny Xu
953793143b feat: add invalidate table cache handler (#1633)
* feat: add invalidate table cache handler

* feat: setup invalidate table cache handler for frontend

* test: add test for invalidate table cache handler

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* fix: fix report_interval unit
2023-05-25 17:45:45 +08:00
Yingwen
8a7998cd25 feat(servers): Add metrics based on axum's example (#1638)
Log on error
2023-05-25 17:31:48 +09:00
LFC
eb24bab5df refactor: set the filters for testing logs (#1637)
minor: set the filters for testing logs
2023-05-25 11:07:57 +08:00
fys
8f9e9686fe chore: add metrics for table route getting (#1636)
chore: add metrics for getting table_route
2023-05-25 10:02:59 +08:00
shuiyisong
61a32d1b9c chore: add boxed error for custom error map (#1635)
* chore: add boxed error for custom error map

* chore: fix typo

* chore: add comment & update display msg

* chore: change name to other error
2023-05-24 12:54:52 +00:00
Weny Xu
74a6517bd0 refactor: move the common part of the heartbeat response handler to common (#1627)
* refactor: move heartbeat response handler to common

* chore: apply suggestions from CR
2023-05-24 07:55:06 +00:00
fys
fa4a497d75 feat: add cache for catalog kv backend (#1592)
* feat: add kvbackend cache

* fix: cargo fmt
2023-05-24 15:07:29 +08:00
Ning Sun
ddca0307d1 feat: more configurable logging levels (#1630)
* feat: make logging level more configurable

* chore: resolve lint warnings

* fix: correct default level for h2

* chore: update text copy
2023-05-24 14:47:41 +08:00
Weny Xu
3dc45f1c13 feat: implement CloseRegionHandler (#1569)
* feat: implement CloseRegionHandler

* feat: register heartbeat response handlers

* test: add tests for heartbeat response handlers

* fix: drop table does not release regions

* chore: apply suggestion from CR

* fix: fix close region issue

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: modify method name and add log

* refactor: refactor HeartbeatResponseHandler

* chore: apply suggestion from CR

* refactor: remove close method from Region trait

* chore: apply suggestion from CR

* chore: remove PartialEq from CloseTableResult

* chore: apply suggestion from CR
2023-05-23 15:44:27 +08:00
dennis zhuang
7c55783e53 feat!: reorganize the storage layout (#1609)
* feat: adds data_home to DataOptions

* refactor: split out object store stuffs from datanode instance

* feat: move data_home into FileConfig

* refactor: object storage layers

* feat: adds datanode path to procedure paths

* feat: temp commit

* refactor: clean code

* fix: forgot files

* fix: forgot files

* Update src/common/test-util/src/ports.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update tests/runner/src/env.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: compile error

* chore: cr comments

* fix: dependencies order in cargo

* fix: data path in test

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-05-23 13:58:26 +08:00
shuiyisong
5b304fa692 chore: add grpc query interceptor (#1626) 2023-05-23 13:57:54 +08:00
Weny Xu
9f67ad8bce fix: fix issue where closed regions were not released (#1596)
* fix: fix close region issue

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* chore: apply suggestion from CR

* refactor: remove close method from Region trait

* chore: remove PartialEq from CloseTableResult
2023-05-23 11:40:12 +08:00
Weny Xu
e646490d16 chore: fix code styling (#1623) 2023-05-23 10:09:34 +08:00
JeremyHi
1225edb065 refactor: move rpc's commons to common-meta (#1625) 2023-05-23 10:07:24 +08:00
Lei, HUANG
8e7ec4626b refactor: remove useless error (#1624)
* refactor: remove useless error

* fix: remove useless error variant
2023-05-22 22:55:27 +08:00
LFC
f64527da22 feat: region failover procedure (#1558)
* feat: region failover procedure
2023-05-22 19:54:52 +08:00
Yingwen
6dbceb1ad5 feat: Trigger flush based on global write buffer size (#1585)
* feat(storage): Add AllocTracker

* feat(storage): flush request wip

* feat(storage): support global write buffer size

* fix(storage): Test and fix size based strategy

* test(storage): Test AllocTracker

* test(storage): Test pick_by_write_buffer_full

* docs: Add flush config example

* test(storage): Test schedule_engine_flush

* feat(storage): Add metrics for write buffer size

* chore(flush): Add log when triggering flush by global buffer

* chore(storage): track allocation in update_stats
2023-05-22 19:00:30 +08:00
Ning Sun
067c5ee7ce feat: time_zone variable for mysql connections (#1607)
* feat: add timezone info to query context

* feat: parse mysql compatible time zone string

* feat: add method to timestamp for rendering timezone aware string

* feat: use timezone from session for time string rendering

* refactor: use QueryContextRef

* feat: implement session/timezone variable read/write

* style: resolve toml format

* test: update tests

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* Update src/session/src/context.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* refactor: address review issues

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-05-22 18:30:23 +08:00
Yingwen
32ad358323 fix(table-procedure): Open table in RegisterCatalog state (#1617)
* fix(table-procedure): on_register_catalog should use open_table

* test: Test recover RegisterCatalog state

* test: Fix subprocedure does not execute in test

* feat(mito): adjust procedure log level

* refactor: rename execute_parent_procedure

execute_parent_procedure -> execute_until_suspended_or_done
2023-05-22 17:54:02 +08:00
Chuanle Chen
77497ca46a feat: support /api/v1/label/<label_name>/values from Prometheus (#1604)
* feat: support `/api/v1/label/<label_name>/values` from Prometheus

* chore: apply CR

* chore: apply CR
2023-05-22 07:24:12 +00:00
JeremyHi
e5a215de46 chore: truncate route-table (#1619) 2023-05-22 14:54:40 +08:00
Ruihang Xia
e5aad0f607 feat: distributed planner basic (#1599)
* basic skeleton

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* change QueryEngineState's constructor

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* install extension planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tidy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-22 11:48:03 +08:00
QuenKar
edf6c0bf48 refactor: add "table engine" to datanode heartbeat. (#1616)
refactor: add "table engine" to datanode heartbeat.
2023-05-22 10:09:32 +08:00
Ruihang Xia
c3eeda7d84 refactor(frontend): adjust code structure (#1615)
* move  to expr_factory

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move configs into service_config

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move GrpcQueryHandler into distributed.rs

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-20 02:09:20 +08:00
Lei, HUANG
82f2b34f4d fix: wal replay ignore manifest entries (#1612)
* fix: wal replay ignore manifest entries

* test: add ut
2023-05-19 18:12:44 +08:00
Vanish
8764ce7845 feat: add delete WAL in drop_region (#1577)
* feat: add delete WAL in drop_region

* chore: fix typo err.

* feat: mark all SSTs deleted and remove the region from StorageEngine's region map.

* test: add test_drop_region for StorageEngine.

* chore: make clippy happy

* fix: fix conflict

* chore: CR.

* chore: CR

* chore: fix clippy

* fix: temp file life time
2023-05-18 18:02:34 +08:00
localhost
d76ddc575f fix: meta admin API get catalog table name error (#1603) 2023-05-18 14:27:40 +08:00
Weny Xu
68dfea0cfd fix: fix datanode cannot start while failing to open tables (#1601) 2023-05-17 20:56:13 +08:00
fys
57c02af55b feat: change default selector in meta from "LeaseBased" to "LoadBased" (#1598)
* feat: change default selector from "LeaseBased" to "LoadBased"

* fix: ut
2023-05-17 17:48:13 +08:00
Lei, HUANG
e8c2222a76 feat: add WindowedReader (#1532)
* feat: add WindowedReader

* fix: some cr comments

* feat: filter memtable by timestamp range

* fix: add source in error variants

* fix: some CR comments

* refactor: filter memtable in MapIterWrapper

* fix: clippy
2023-05-17 17:34:29 +08:00
JeremyHi
eb95a9e78b fix: sequence out of range (#1597) 2023-05-17 14:43:54 +08:00
zyy17
4920836021 refactor: support parsing env list (#1595)
* refactor: support parsing env list

* refactor: set 'multiple = true' for metasrv_addr cli option and remove duplicated parsing
2023-05-17 14:37:08 +08:00
Huaijin
715e1a321f feat: implement /api/v1/labels for prometheus (#1580)
* feat: implement /api/v1/labels for prometheus

* fix: only gather match[]

* chore: fix typo

* chore: fix typo

* chore: change style

* fix: suggestion

* fix: suggestion

* chore: typo

* fix: fmt

* fix: add more test
2023-05-17 03:56:22 +00:00
localhost
a6ec79ee30 chore: add a uniform prefix to the metrics using the official recommendation of (#1590) 2023-05-17 11:08:49 +08:00
Lei, HUANG
e70d49b9cf feat: memtable stats (#1591)
* feat: memtable stats

* chore: add tests for timestamp subtraction

* feat: add `Value::as_timestamp` method
2023-05-17 11:07:07 +08:00
Weny Xu
ca75a7b744 fix: remove region number validation (#1593)
* fix: remove region number validation

* Update src/mito/src/engine.rs

Co-authored-by: dennis zhuang <killme2008@gmail.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-05-17 09:23:56 +08:00
localhost
3330957896 chore: add fmt for statement query (#1588)
* chore: add fmt for statement query

* chore: add test for query display
2023-05-16 16:14:11 +08:00
WU Jingdi
fb1ac0cb9c feat: support user-configurable manifest compression (#1579)
* feat: support user-configurable manifest compression

* chore: change style

* chore: enhance test
2023-05-16 11:02:59 +08:00
Niwaka
856ab5bea7 feat: make RepeatedTask invoke remove_outdated_meta method (#1578)
* feat: make RepeatedTask invoke remove_outdated_meta method

* fix: typo

* chore: improve error message
2023-05-16 10:21:35 +08:00
Eugene Tolbakov
122bd5f0ab feat(tql): add initial implementation for explain & analyze (#1427)
* feat(tql): resolve conflicts after merge, formatting and clippy issues, add sqlness tests, adjust explain with start, end, step

* feat(tql): adjust sqlness assertions
2023-05-16 07:28:24 +08:00
Ruihang Xia
2fd1075c4f fix: uses nextest in the Release CI (#1582)
* fix: uses nextest in the Release CI

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* install nextest

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update Makefile

Co-authored-by: zyy17 <zyylsxm@gmail.com>

* update workflow yaml

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: zyy17 <zyylsxm@gmail.com>
2023-05-15 21:09:09 +08:00
fys
027707d969 feat: support frontend-meta heartbeat (#1555)
* feat: support frontend heartbeat

* fix: typo "reponse" -> "response"

* add ut

* enable start heartbeat task

* chore: frontend id is specified by metasrv, not in the frontend startup parameter

* fix typo

* self-cr

* cr

* cr

* cr

* remove unnecessary headers

* use the member id in the header as the node id
2023-05-15 09:54:45 +00:00
Yingwen
8d54d40b21 feat: Add FlushPicker to flush regions periodically (#1559)
* feat: Add FlushPicker

* feat(storage): Add close to StorageEngine

* style(storage): fix clippy

* feat(storage): Close regions in StorageEngine::close

* chore(storage): Clear requests on scheduler stop

* test(storage): Test flush picker

* feat(storage): Add metrics for auto flush

* feat(storage): Add flush reason and record it in metrics

* feat: Expose flush config

docs(config): Update config example

* refactor(storage): Run auto flush task in FlushScheduler

* refactor(storage): Add FlushItem trait to make FlushPicker easy to test
2023-05-15 17:29:28 +08:00
Ning Sun
497b1f9dc9 feat: metrics for storage engine (#1574)
* feat: add storage engine region count gauge

* test: remove catalog metrics because we can't get a correct number

* feat: add metrics for log store write and compaction

* fix: address review issues
2023-05-15 15:22:00 +08:00
LFC
4ae0b5e185 test: move instances tests to "tests-integration" (#1573)
* test: move standalone and distributed instances tests from "frontend" crate to "tests-integration"

* fix: resolve PR comments
2023-05-15 12:00:43 +08:00
Lei, HUANG
cfcfc72681 refactor: remove version column (#1576) 2023-05-15 11:03:37 +08:00
Weny Xu
66903d42e1 feat: implement OpenTableHandler (#1567)
* feat: implement OpenTableHandler

* chore: apply suggestion from CR

* chore: apply suggestion from CR
2023-05-15 10:47:28 +08:00
zyy17
4fc173acf0 refactor: support layered configuration (#1535)
* refactor: add a layered configuration by using config-rs

Signed-off-by: zyy17 <zyylsxm@gmail.com>

* refactor: add 'env_var_prefix' for 'load_options()' and remove duplicate default construction in frontend

* refactor: add test_config_precedence_order in standalone

* refactor: add 'test_config_precedence_order()' test case in metasrv

* refactor: add 'test_config_precedence_order()' test case in datanode

* refactor: refine the naming '*_env_var_*' -> '*_env_vars_*'

* refactor: fix clippy error

* refactor: refine error naming 'LoadConfig' -> 'LoadLayeredConfig' and add Location

* refactor: move 'env_vars_prefix' to clap options

* fix: use '__' as environment variables separator and simplify load_layered_options()

* refactor: derive 'Default' for StartCommand and use default function to simplify the test cases

* fix: clippy error

* chore: update comments

* chore(deps): update deps info

* refactor(naming): 'env_vars_prefix' -> 'env_prefix'

* refactor: simplify the code

* refactor: change some argument type of 'load_layered_options()'

* refactor: simplify the code

* refactor: remove unnecessary 'clone()'

* refactor: add 'GREPTIMEDB_*' prefix for env_prefix

* refactor: modify configuration precedence order: cli > config file > environment variables > default values

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-05-13 22:37:47 +08:00
Huaijin
f9a4326461 fix: unwrap() on None when NULL values exist in multi-field table during prometheus query_range (#1571)
* fix: NULL value in multi-field table meets error in prometheus query_range

* fix: suggestion

* chore: change style
2023-05-12 17:36:03 +08:00
Ning Sun
4151d7a8ea fix: allow cross-schema query on information_schema (#1568) 2023-05-11 16:54:28 +08:00
LFC
a4e106380b fix: refreshing Dashboard returns 404 (#1562)
* fix: refreshing Dashboard returns 404

* fix: refreshing Dashboard returns 404
2023-05-11 15:08:20 +08:00
Ruihang Xia
7a310cb056 docs: rfc of distributed planner (#1554)
* docs: rfc of distributed planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update docs/rfcs/2023-05-09-distributed-planner.md

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-05-11 14:45:32 +08:00
LFC
8fef32f8ef feat: enable tokio console in cluster mode (#1512)
* feat: enable tokio console subscriber

* fix: resolve PR comments

* fix: resolve PR comments

* fix: resolve PR comments
2023-05-11 14:35:06 +08:00
Ning Sun
8c85fdec29 fix: correct schema/table count in catalog metrics (#1565) 2023-05-11 14:20:42 +08:00
ZonaHe
84f6b46437 feat: update dashboard to v0.2.5 (#1563)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-05-11 13:55:42 +08:00
Weny Xu
44aef6fcbd feat(datanode): implement the heartbeat response handler (#1547)
* feat(datanode): implement instruction handler

* chore: apply suggestion from CR

* refactor: refactor heartbeat response handler
2023-05-11 09:27:13 +08:00
JeremyHi
7a9dd5f0c8 feat: ignore mailbox message into stat (#1560) 2023-05-10 18:06:04 +08:00
WU Jingdi
486bb2ee8e feat: Compress manifest and checkpoint (#1497)
* feat: Compress manifest and checkpoint

* refactor: use file extension to infer compression type

* chore: apply suggestions from CR

* Update src/storage/src/manifest/storage.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: CR advices

* chore: Fix bugs, strengthen test

* chore: Fix CR, strengthen test

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-05-10 07:53:06 +00:00
Weny Xu
020c55e260 refactor: change mailbox_messages to mailbox_message (#1557) 2023-05-10 07:17:11 +00:00
Yingwen
ee3e1dbdaa feat: Use LocalScheduler framework to implement FlushScheduler (#1531)
* test: simplify countdownlatch

* feat: impl Drop for LocalScheduler

* feat(storage): Impl FlushRequest and FlushHandler

* feat(storage): Use scheduler to handle flush job

* chore(storage): remove unused code

* feat(storage): Use new type pattern for RegionMap

* feat(storage): Remove on_success callback

* feat(storage): Address CR comments and add some metrics to flush
2023-05-10 07:16:51 +00:00
dennis zhuang
aa0c5b888c docs: update readme (#1549)
* docs: update readme

* Update README.md

Co-authored-by: Ning Sun <classicning@gmail.com>

* chore: cr comments

* chore: cr comments

---------

Co-authored-by: Ning Sun <classicning@gmail.com>
2023-05-10 14:36:07 +08:00
Weny Xu
fbb7db42aa chore: unify code styling (#1523) 2023-05-10 11:10:39 +08:00
Ning Sun
a1587595d9 feat: add information_schema as exception of cross schema check (#1551)
* feat: add information_schema as a cross-schema query exception

* fix: resolve lint issue
2023-05-10 10:55:00 +08:00
Weny Xu
abd5a8ecbb chore(datasource): make CompressionType follow the style of the guide (#1522) 2023-05-10 10:50:24 +08:00
Ruihang Xia
4ddab8e982 build: change release CI to only run test on linux (#1548)
* disable all linux release

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* split linux and macos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* correct job name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add missing build job

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* run build-macos first

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable unstable test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* disable test on macos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* re-enable test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* do not dependent on build-macos

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-10 10:49:14 +08:00
Yingwen
1833e487a4 refactor: remove unnecessary async from RepeatedTask::start (#1545)
* refactor: relax RepeatedTask requirements

Some refactor:
- Remove async from start()
- Cancel task in drop
- Allow TaskFunction::call taking &mut self
- Make start/stop concurrent safe

* test(log-store): Fix log store tests (start multiple times)
2023-05-09 21:03:15 +08:00
ZonaHe
c93b5743e8 feat: update dashboard to v0.2.4 (#1553)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-05-09 20:56:20 +08:00
Weny Xu
550c494d25 fix: Copy from must follow the order of table fields issue (#1521)
* fix: Copy from must follow the order of table fields issue

* chore: apply suggestion from CR
2023-05-09 17:46:16 +08:00
Yingwen
2ab0e42d6f feat: clean procedure's state after it is done (#1543)
* feat(common-procedure): pub(crate) use proc_path

* feat(common-procedure): Implement delete_procedure

* feat(common-procedure): Clean procedure after it is finished

* chore(common-procedure): put path_string in front of try_stream

* test(common-procedure): Test cleaning up procedures

* feat(common-procedure): Clean procedure states in recover()

* feat(common-procedure): Use VecDeque for finished procedures
2023-05-09 11:44:50 +08:00
JeremyHi
05e6ca1e14 fix: the latest number of regions (#1546)
* fix: the latest number of regions

* fix: unit test
2023-05-09 10:11:26 +08:00
localhost
b9661818f2 chore: remove useless Option type in plugins (#1544)
Co-authored-by: paomian <qtang@greptime.com>
2023-05-08 21:54:24 +08:00
localhost
f86390345c chore: remove useless Option type in plugins (#1544)
Co-authored-by: paomian <qtang@greptime.com>
2023-05-08 21:53:45 +08:00
localhost
7191bb9652 chore: remove useless Option type in plugins (#1544)
Co-authored-by: paomian <qtang@greptime.com>
2023-05-08 21:52:12 +08:00
localhost
34c7f78861 chore: add configurator to http server (#1488)
* chore: add configurator params to start server function

* chore: update plugins type

---------

Co-authored-by: paomian <qtang@greptime.com>
2023-05-08 10:55:03 +00:00
JeremyHi
610651fa8f feat: meta metrics (#1538)
* chore: from_etcd_kv (better name)

* feat: kv request metric

* feat: router metric

* feat: connections metric
2023-05-08 17:50:21 +08:00
fys
c48067f88d fix: no active datanode when frontend start (#1533)
* fix: no active datanode when frontend start

* chore: add log when can not get stat_val
2023-05-08 15:02:07 +08:00
Ning Sun
ec1b95c250 docs: add play section (#1528)
* docs: add play section

* Update README.md

Co-authored-by: xiaomin tang <xtang@users.noreply.github.com>

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
Co-authored-by: xiaomin tang <xtang@users.noreply.github.com>
2023-05-08 14:26:22 +08:00
gitccl
fbf1ddd006 feat: open catalogs and schemas in parallel (#1527)
* feat: open catalogs and schemas in parallel

* fix: code review
2023-05-08 10:34:30 +08:00
Ning Sun
d679cfcb53 feat: add semantic_type to information_schema.columns (#1530) 2023-05-06 15:48:37 +08:00
discord9
2c82ded975 feat: table metrics (#1469)
* feat: Statistic

* add todo

* fmt: cargo fmt

* feat: some simple impl for MemTable

* chore: a try on adding statistics

* Update src/table/src/stats.rs

Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>

* docs: fix typo

* newlines unnecessary

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
2023-05-06 14:59:49 +08:00
Ruihang Xia
d4f3f617e4 chore(toolchain): update rust-toolchain to 2023-05-03 (#1524)
* chore(toolchain): update rust-toolchain to 2023-05-03

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update workflow yaml

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-06 11:34:09 +08:00
Ruihang Xia
6fe117d7d5 fix: vector and matrix in Prometheus use different field (#1520)
* fix empty tag

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix result type

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* make it work

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-05-05 15:54:26 +08:00
Ning Sun
b0ab641602 feat: add catalog/schema/table count as catalog metrics (#1499)
* feat: add catalog/schema/table count as catalog metrics

* test: add integration tests for catalog metrics
2023-05-05 05:54:12 +00:00
Huaijin
224ec9bd25 fix: wrong max_table_id log in remote catalog manager (#1516)
* fix: wrong max_table_id log in remote catalog manager

* chore: update link in CONTRIBUTING.md

* chore: add a new const MAX_SYS_TABLE_ID
2023-05-05 03:39:45 +00:00
Niwaka
d86b3386dc fix: incorrect show create table output (#1514)
* fix: incorrect show create table output

* feat: change CreateTable's Display if table is external

* feat: change CreateTable's Display if table is external
2023-05-05 11:29:09 +08:00
Lei, HUANG
c8301feed7 fix: respect MySQL timestamp format (#1510) 2023-05-04 18:57:38 +08:00
dennis zhuang
b1920c41a4 fix: object store cache bug (#1482)
* feat: use streaming read instead of reading whole file

* feat: enable atomic writing for object store file caching

* fix: recover existing keys from local cache

* test: recovering keys from local file cache for LruCachePolicy

* Update src/datanode/src/instance.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr comments

* feat: md5 hash caching path

* fix: test

* fix: read cache

* Update src/object-store/src/cache_policy.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-05-04 18:25:40 +08:00
Weny Xu
c471007edd feat: support to copy table from/to CSV and JSON format file (#1475)
* refactor: refactor copy from executor

* feat: support to copy from CSV and JSON format files

* feat: support to copy table to the CSV and JSON format file

* test: add tests copy from/to

* chore: apply suggestions from CR
2023-05-04 17:20:28 +08:00
Yingwen
2818f466d3 feat: Log error in GreptimeRequestHandler (#1507)
* feat(common-error): Add should_log_error

* feat(servers): log error in grpc handler
2023-05-04 15:48:38 +08:00
JeremyHi
d7a906e0bd feat: metasrv mailbox (#1481)
* refactor: id first in pusher_key

* feat: is_acceptable for multi roles

* feat: mailbox

* fix: channel for mailbox

* feat: impl mailbox via heartbeat

* chore: add unit test for mailbox

* chore: by cr

* chore: typo

* chore: refactor the mailbox API

* chore: by cr

* chore: check timeout interval to 10ms

* chore: add response header
2023-05-04 15:42:43 +08:00
Ning Sun
6e1bb9e458 feat: add support for information_schema.columns (#1500)
* feat: add support for information_schema.columns

* feat: remove information_schema from its view

* Update src/catalog/src/information_schema.rs

Co-authored-by: LFC <bayinamine@gmail.com>

* fix: error on table data type

* test: correct sqlness test for information schema

* test: add information_schema.columns sqlness tests

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-05-04 14:29:38 +08:00
Ning Sun
494ad570c5 feat: update pgwire to 0.14 (#1504) 2023-05-04 14:24:26 +08:00
Vanish
12d59e6341 chore: remove redundant code. (#1502) 2023-05-04 14:20:26 +08:00
Yingwen
479ef9d379 fix: checkpoint GC task also deletes the file with the last version (#1491)
* test(storage): use assert_eq to check scan result

* feat(storage): Add more info to manifest log

* feat: Avoid error log when unable to delete

* fix: The manifest gc task should delete files <= last_version

* feat(storage): Don't log if the error kind is not found

* feat: Add keep_last_checkpoint option
2023-05-04 14:18:38 +08:00
Niwaka
93ffe1ff33 feat: improve and distinguish different errors for IllegalInsertData (#1503)
* feat: improve and distinguish different errors for IllegalInsertData

* feat: change error code for UnexpectedValuesLength and ColumnAlreadyExists

* chore: improve readability of error message
2023-05-04 12:36:24 +08:00
Niwaka
d461328238 fix: insert distributed table if partition column has default value (#1498)
* fix: insert distributed table if partition column has default value

* Address review

* address review

* address review

* chore: introduce assert_columns

---------

Co-authored-by: WenyXu <wenymedia@gmail.com>
2023-05-02 20:50:02 +08:00
Vanish
6aae5b7286 feat: prevent sensitive information (key, password, secrets etc.) from being printed in plain text (#1501)
* feat: add secret type

* chore: replace key, password, secrets with secret type.

* chore: use secrecy

* chore: remove redundant file

* style: taplo fmt
2023-05-01 20:54:54 +08:00
Ning Sun
7dbac89000 feat: add metrics for protocol interfaces (#1495)
* feat: add metrics for various interfaces

* feat: add db label for protocols

* feat: add postgres protocol metrics

* feat: add metrics for grpcs apis

* feat: add auth failure counter for mysql/pg

* fix: add db label to grpc prometheus interface

* Apply suggestions from code review

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* feat: add error code for auth failure counter

* fix: use schema as dbname when catalog is default

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-04-28 23:42:35 +08:00
Yingwen
0b0b5a10da feat: Remove store from procedure config (#1489)
* feat(procedure): Add key prefix

* feat: Remove store config from ProcedureConfig

* refactor(procedure): Address review comments

Add proc_path! macro and rename KEY_PREFIX to PROC_PATH

* docs: Update procedure config examples
2023-04-28 22:12:57 +08:00
Yingwen
51be35a7b1 feat(mito): Combine the original and procedure's implementation (#1468)
* fix(mito): Add metrics to mito DDL procedure

* feat(mito): Use procedure's implementation to create table

* feat(mito): Use procedure's implementation to alter table

* feat(mito): Use procedure's implementation to drop table

* style(mito): Fix clippy

* test(mito): Fix tests

* feat(mito): Add TableCreator

* feat(mito): update alter table procedure

* fix(mito): alter procedure create alter op first

* feat(mito): Combine alter table code

* fix(mito): Fix deadlock

* feat(mito): Simplify drop table procedure
2023-04-28 11:48:52 +08:00
Lei, HUANG
9e4887f29f fix: disable dashboard (#1494) 2023-04-27 22:55:15 +08:00
yuanbohan
cca34aa914 chore: upgrade promql-parser version (#1484) 2023-04-27 13:10:15 +00:00
Ruihang Xia
0ac50632aa feat: use server time if it's not specified (#1480)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-27 20:54:26 +08:00
Yingwen
b1f7ad097a test: Fix s3 region in test (#1493) 2023-04-27 12:25:20 +00:00
Weny Xu
a77a4a4bd1 fix: add s3 region info (#1492)
fix: add region info
2023-04-27 19:13:01 +08:00
Weny Xu
47f1cbaaed fix: add s3 region info (#1486) 2023-04-27 17:35:34 +08:00
Yingwen
8e3c3cbc40 build: Download assets to cargo output dir (#1476)
* build: Download assets to cargo output dir

Also remove the output from the build script and only print the output
on failure

* chore: Update src/servers/build.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* build: replace pushd by cd

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-27 17:09:10 +08:00
Vanish
9f0efc748d feat: make log level and destination configurable from config files (#1444)
* feat: implement load_options.

* refactor: build by ConfigOptions.

* refactor: init_global_logging by LoggingOptions.

* chore: make clippy happy.

* refactor: use TopLevelOptions push top level options to subcommand.

* test: test TopLevelOptions.

* refactor: push Options in Box.

* refactor: push Options in Box.

* refactor: use let-else and Options.
2023-04-27 15:30:04 +08:00
Ruihang Xia
939a51aea9 feat: adopt REPLACE interceptor and quit all processes on exit (#1478)
* bump version and update test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* quit all processes on drop

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update tests/runner/src/env.rs

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-04-27 07:16:41 +00:00
Weny Xu
bf35620904 refactor: refactor BufferedWriter (#1439)
* feat: implement ApproximateBufWriter

* refactor: refactor BufferedWriter

* refactor: remove ApproximateBufWriter

* fix: fix losing pending writes issue

* chore: fmt

* chore: remove unused import

* chore: rename method name

* feat: return written row count

* chore: apply suggestions from CR

* fix: fix counting the bytes_written twice issue
2023-04-27 14:45:33 +08:00
Weny Xu
09f55e3cd8 chore: remove info log (#1483) 2023-04-27 14:05:22 +08:00
dennis zhuang
b88d8e5b82 feat: bump opendal to 0.33 (#1479) 2023-04-27 12:13:18 +08:00
Weny Xu
a709a5c842 feat: support to create parquet format external table (#1463)
* feat: support parquet format external table

* Update src/file-table-engine/src/error.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-26 16:45:37 +08:00
Lei, HUANG
fb9978e95d refactor: catalog (#1454)
* wip

* add schema_async

* remove CatalogList

* remove catalog provider and schema provider

* fix

* fix: rename table

* fix: sqlness

* fix: ignore tonic error metadata

* fix: table engine name

* feat: rename catalog_async to catalog

* respect engine name in table regional value when deregistering tables

* fix: CR
2023-04-26 08:36:40 +00:00
discord9
ef4e473e6d fix: recompile&register scripts as UDF on reboot (#1421)
* fixme: recompile somewhere else

* feat: re-compile&re-register all scripts in table

* fix: allow empty scripts table

* chore: add non-blocking somewhere

* chore: PR advices

* chore: more PR advices

* style: remove useless join

* style: remove redundant code

* refactor: use `bg` runtime instead

* style: cargo fmt
2023-04-26 16:30:58 +08:00
Ning Sun
1a245f35b9 feat: improve metrics and log level (#1470)
* refactor: tune log and metrics for meta/frontend

* feat: add panic counter
2023-04-26 13:13:40 +08:00
dennis zhuang
8d8a480dc1 fix: object store caching bug, #1466 (#1467)
* fix: object store caching bug, #1466

* fix: forgot to add S3WithCache tests
2023-04-25 21:48:51 +08:00
Lei, HUANG
197c34bc17 fix: grpc client keepalive (#1461)
fix: grpc keepalive
2023-04-25 20:07:57 +08:00
Ruihang Xia
4d9afee8ef chore(deps): update substrait dep in client (#1453)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-25 16:21:59 +08:00
Weny Xu
7f14d40798 test: add tests for external table (#1460) 2023-04-25 15:14:46 +08:00
Yingwen
eb50cee601 feat: Switch to the procedure framework (#1448)
* feat: Remove create_mock_sql_handler()

create_to_request() and alter_to_request() don't need `&self`, so
we don't need to mock the sql handler to test them

* feat: Enable procedure manager by default

* docs: Update config example

* test: Enable procedure framework in all tests

* refactor(datanode): rename methods using procedure

* test(catalog): Fix temp dir drops before test finishes

* tests: Enable procedure framework in sqlness

* test: Fix sqlness standalone rename test

* fix: Drop procedure allows table not in engine

* test: Change rename table test

* fix: add options to table meta when creating table by procedure

* test: adjust error message in schema test case

* test: Fix test_sql_api error message
2023-04-25 12:04:02 +08:00
Lei, HUANG
92c0808766 fix: frontend opt should respect http addr in config file when no com… (#1456)
* fix: frontend opt should respect http addr in config file when no command option is given

* refactor: command line options should be Option<bool>

* fix: ci
2023-04-25 03:43:42 +00:00
Ruihang Xia
f9ea6b63bf feat: impl instant query and add tests (#1452)
* feat: impl instant query and add tests

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-25 11:08:14 +08:00
fys
2287db7ff7 fix: execute sql query in another catalog (#1457) 2023-04-25 10:30:35 +08:00
shuiyisong
69acf32914 chore: add len() to Bytes and StringBytes (#1455)
* chore: add `len()` to Bytes and StringBytes

* chore: add `len()` to Bytes and StringBytes
2023-04-25 10:18:41 +08:00
Ruihang Xia
b9db2cfd83 fix: support restart sqlness in distributed mode (#1443)
* fix: support restart sqlness in distributed mode

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* move alter_table case to common dir

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* is_standalone flag

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update tests/runner/src/env.rs

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: LFC <bayinamine@gmail.com>
2023-04-24 19:36:12 +08:00
JeremyHi
6d247f73fd fix: add log on leader stepdown (#1450) 2023-04-24 19:16:57 +08:00
Ruihang Xia
2cf828da3c feat: implement Prometheus-compatible API in gRPC (#1449)
* update greptime-proto

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* remove duplicate delete enum

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* impl handler and service

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-24 18:03:48 +08:00
Weny Xu
f2167663b2 feat: support to create external table (#1372)
* feat: support to create external table

* chore: apply suggestions from CR

* test: add create external table without ts type

* chore: apply suggestions from CR

* fix: fix import typo

* refactor: move consts to table crate

* chore: apply suggestions from CR

* refactor: rename create_table_schema
2023-04-24 14:43:12 +08:00
LFC
17daf4cdff feat: support "delete" in distributed mode (#1441)
* feat: support "delete" in distributed mode

* fix: resolve PR comments
2023-04-24 12:07:50 +08:00
shuiyisong
7c6754d03e feat: meter write request (#1447)
* chore: add write meter

* chore: update meter macro

* chore: update meter framework url to https
2023-04-24 11:42:06 +08:00
zyy17
e64fea3a15 ci: upgrade nightly release tag from v0.2.0 to v0.3.0 (#1446) 2023-04-24 11:04:39 +08:00
Weny Xu
22b5a94d02 feat: support creating the physical plan for JSON and CSV files (#1424)
* feat: support creating the physical plan for JSON and CSV files

* chore: apply suggestions from CR

* chore: apply suggestions from CR

* refactor(file-table-engine): use datasource Format instead
2023-04-24 10:17:11 +08:00
Weny Xu
d374859e24 refactor: replace Copy Format with datasource Format (#1435)
* refactor: replace Copy Format with datasource Format

* chore: apply suggestions from CR

* chore: apply suggestions from CR
2023-04-23 08:31:54 +00:00
Ning Sun
c5dba29f9e refactor: remove redundant plugins argument (#1436) 2023-04-23 12:39:46 +08:00
Hao
9f442dedf9 chore: fix some typo and add deriv to plan in promql (#1438) 2023-04-23 12:21:25 +08:00
Ruihang Xia
5d77ed00bb test: add basic cases for distributed TQL (#1437)
* test: add basic cases for distributed TQL

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* drop table

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-23 03:34:42 +00:00
Zheming Li
c75845c570 fix: wrong next column in manifest (#1440)
Signed-off-by: Zheming Li <nkdudu@126.com>
2023-04-23 11:25:38 +08:00
Yingwen
1ee9ad4ca1 feat: manage multiple engine procedure in the engine manager (#1434)
* feat(table): Add engine procedure to engine manager

* feat(datanode): Get engine procedure from engine manager

* feat(table-procedure): Add source error to SubprocedureFailed

* test: Enable procedure in tests and pass all tests

* style(table-procedure): Fix clippy
2023-04-23 10:04:09 +08:00
Weny Xu
f2cc912c87 feat: implement ParquetFileReaderFactory (#1423)
* feat: implement ParquetFileReaderFactory

* refactor: use LazyParquetFileReader instead

* chore: apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-04-21 13:40:58 +08:00
dennis zhuang
2a9f482bc7 feat: show create table (#1336)
* temp commit

* feat: impl Display for CreateTable statement

* feat: impl show create table for standalone

* fix: forgot show.rs

* feat: clean code

* fix: typo

* feat: impl show create table for distributed

* test: add show create table sqlness test

* fix: typo

* fix: sqlness tests

* feat: render partition rules for distributed table

* Update src/sql/src/statements.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/sql/src/statements.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/sql/src/statements.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/sql/src/statements/create.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: by CR comments

* fix: compile error

* fix: missing column comments and extra table options

* test: add show create table test

* test: add show create table test

* chore: timestamp precision

* fix: test

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-04-21 11:37:16 +08:00
Weny Xu
d5e4662181 refactor: refactor stmt_options_to_table_options (#1403)
refactor: move stmt_options_to_table_options to query crate
2023-04-21 11:08:01 +08:00
Yingwen
9cd2cf630d feat: procedures for file table engine (#1417)
* refactor: Add table_ref() to requests as their methods

* feat: Add CreateImmutableFileTable

* feat: Add DropImmutableFileTable

* feat: Implement TableEngineProcedure for ImmutableFileTableEngine

* feat: Add common-procedure-test crate

* refactor: mito engine use common-procedure-test to test procedures

* test: Add test for create and drop table

* chore: Address review comments
2023-04-20 18:52:44 +08:00
Ruihang Xia
7152a1b79e feat: expose output_ordering on scan plan (#1425)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-20 17:58:48 +08:00
fys
f2cfd8e608 refactor: default catalog and schema are created at Metasrv (#1391)
* refactor: default catalog and schema are created at Metasrv

* fix: unit test

* fix: add license

* simplify the meta mock

* cr
2023-04-20 17:58:37 +08:00
ZonaHe
e8cd2f0e48 feat: update dashboard to v0.2.3 (#1430)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-04-20 17:51:11 +08:00
Yingwen
830367b8f4 feat: Drop table by procedure (#1401)
* feat: Add drop table procedure

* feat: support dropping table by procedure on datanode

* test: Add test for DropTableProcedure

* test: Test drop table by procedure

* chore: update comments

* fix: Make on_remove_from_catalog idempotent
2023-04-20 15:57:56 +08:00
Ruihang Xia
37678e2e02 ci: enable test on release (#1428)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-20 12:06:20 +08:00
Ruihang Xia
b6647af2e3 test: add integration case to check dashboard path (#1422)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-20 11:17:01 +08:00
1312 changed files with 133936 additions and 46368 deletions

.cargo/config.toml

@@ -12,4 +12,9 @@ rustflags = [
"-Wclippy::print_stdout",
"-Wclippy::print_stderr",
"-Wclippy::implicit_clone",
# It seems clippy has made a false positive decision here when upgrading rust toolchain to
# nightly-2023-08-07, we do need it to be borrowed mutably.
# Allow it for now; try disallow it when the toolchain is upgraded in the future.
"-Aclippy::needless_pass_by_ref_mut",
]
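For context, clippy::needless_pass_by_ref_mut is a lint introduced in recent nightlies that flags functions taking a &mut reference that is never actually mutated through, suggesting a shared borrow instead; the comment above records that the exclusive borrow is intentional in this codebase, so the lint is allowed globally rather than "fixed" call by call.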

.config/nextest.toml

@@ -1,2 +1,3 @@
[profile.default]
slow-timeout = { period = "60s", terminate-after = 3, grace-period = "30s" }
retries = { backoff = "exponential", count = 3, delay = "10s", jitter = true }
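For readers unfamiliar with nextest profiles: the configuration above marks a test as slow after a 60-second period, terminates it after three such periods (with a 30-second grace period before the kill), and retries failing tests up to three times with exponential backoff starting at 10 seconds. A minimal usage sketch, assuming cargo-nextest is installed locally:

  # The [profile.default] settings above apply automatically:
  cargo nextest run
  # Override the configured retry count for a single run:
  cargo nextest run --retries 0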

.dockerignore

@@ -20,6 +20,3 @@ out/
# Rust
target/
# Git
.git

.env.example

@@ -3,8 +3,19 @@ GT_S3_BUCKET=S3 bucket
GT_S3_ACCESS_KEY_ID=S3 access key id
GT_S3_ACCESS_KEY=S3 secret access key
GT_S3_ENDPOINT_URL=S3 endpoint url
GT_S3_REGION=S3 region
# Settings for oss test
GT_OSS_BUCKET=OSS bucket
GT_OSS_ACCESS_KEY_ID=OSS access key id
GT_OSS_ACCESS_KEY=OSS access key
GT_OSS_ENDPOINT=OSS endpoint
# Settings for azblob test
GT_AZBLOB_CONTAINER=AZBLOB container
GT_AZBLOB_ACCOUNT_NAME=AZBLOB account name
GT_AZBLOB_ACCOUNT_KEY=AZBLOB account key
GT_AZBLOB_ENDPOINT=AZBLOB endpoint
# Settings for gcs test
GT_GCS_BUCKET = GCS bucket
GT_GCS_SCOPE = GCS scope
GT_GCS_CREDENTIAL_PATH = GCS credential path
GT_GCS_ENDPOINT = GCS end point


@@ -0,0 +1,76 @@
name: Build and push dev-builder images
description: Build and push dev-builder images to DockerHub and ACR
inputs:
  dockerhub-image-registry:
    description: The dockerhub image registry to store the images
    required: false
    default: docker.io
  dockerhub-image-registry-username:
    description: The dockerhub username to login to the image registry
    required: true
  dockerhub-image-registry-token:
    description: The dockerhub token to login to the image registry
    required: true
  dockerhub-image-namespace:
    description: The dockerhub namespace of the image registry to store the images
    required: false
    default: greptime
  version:
    description: Version of the dev-builder
    required: false
    default: latest
  build-dev-builder-ubuntu:
    description: Build dev-builder-ubuntu image
    required: false
    default: 'true'
  build-dev-builder-centos:
    description: Build dev-builder-centos image
    required: false
    default: 'true'
  build-dev-builder-android:
    description: Build dev-builder-android image
    required: false
    default: 'true'
runs:
  using: composite
  steps:
    - name: Login to Dockerhub
      uses: docker/login-action@v2
      with:
        registry: ${{ inputs.dockerhub-image-registry }}
        username: ${{ inputs.dockerhub-image-registry-username }}
        password: ${{ inputs.dockerhub-image-registry-token }}
    - name: Build and push dev-builder-ubuntu image
      shell: bash
      if: ${{ inputs.build-dev-builder-ubuntu == 'true' }}
      run: |
        make dev-builder \
          BASE_IMAGE=ubuntu \
          BUILDX_MULTI_PLATFORM_BUILD=true \
          IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
          IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
          IMAGE_TAG=${{ inputs.version }}
    - name: Build and push dev-builder-centos image
      shell: bash
      if: ${{ inputs.build-dev-builder-centos == 'true' }}
      run: |
        make dev-builder \
          BASE_IMAGE=centos \
          BUILDX_MULTI_PLATFORM_BUILD=true \
          IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
          IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
          IMAGE_TAG=${{ inputs.version }}
    - name: Build and push dev-builder-android image # Only build image for amd64 platform.
      shell: bash
      if: ${{ inputs.build-dev-builder-android == 'true' }}
      run: |
        make dev-builder \
          BASE_IMAGE=android \
          IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
          IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
          IMAGE_TAG=${{ inputs.version }} && \
        docker push ${{ inputs.dockerhub-image-registry }}/${{ inputs.dockerhub-image-namespace }}/dev-builder-android:${{ inputs.version }}

.github/actions/build-greptime-binary/action.yml

@@ -0,0 +1,63 @@
name: Build greptime binary
description: Build and upload the single linux artifact
inputs:
  base-image:
    description: Base image to build greptime
    required: true
  features:
    description: Cargo features to build
    required: true
  cargo-profile:
    description: Cargo profile to build
    required: true
  artifacts-dir:
    description: Directory to store artifacts
    required: true
  version:
    description: Version of the artifact
    required: true
  working-dir:
    description: Working directory to build the artifacts
    required: false
    default: .
  build-android-artifacts:
    description: Build android artifacts
    required: false
    default: 'false'
runs:
  using: composite
  steps:
    - name: Build greptime binary
      shell: bash
      if: ${{ inputs.build-android-artifacts == 'false' }}
      run: |
        cd ${{ inputs.working-dir }} && \
        make build-by-dev-builder \
          CARGO_PROFILE=${{ inputs.cargo-profile }} \
          FEATURES=${{ inputs.features }} \
          BASE_IMAGE=${{ inputs.base-image }}
    - name: Upload artifacts
      uses: ./.github/actions/upload-artifacts
      if: ${{ inputs.build-android-artifacts == 'false' }}
      with:
        artifacts-dir: ${{ inputs.artifacts-dir }}
        target-file: ./target/${{ inputs.cargo-profile }}/greptime
        version: ${{ inputs.version }}
        working-dir: ${{ inputs.working-dir }}
    # TODO(zyy17): We can remove build-android-artifacts flag in the future.
    - name: Build greptime binary
      shell: bash
      if: ${{ inputs.build-android-artifacts == 'true' }}
      run: |
        cd ${{ inputs.working-dir }} && make strip-android-bin
    - name: Upload android artifacts
      uses: ./.github/actions/upload-artifacts
      if: ${{ inputs.build-android-artifacts == 'true' }}
      with:
        artifacts-dir: ${{ inputs.artifacts-dir }}
        target-file: ./target/aarch64-linux-android/release/greptime
        version: ${{ inputs.version }}
        working-dir: ${{ inputs.working-dir }}

.github/actions/build-greptime-images/action.yml

@@ -0,0 +1,104 @@
name: Build greptime images
description: Build and push greptime images
inputs:
  image-registry:
    description: The image registry to store the images
    required: true
  image-registry-username:
    description: The username to login to the image registry
    required: true
  image-registry-password:
    description: The password to login to the image registry
    required: true
  amd64-artifact-name:
    description: The name of the amd64 artifact for building images
    required: true
  arm64-artifact-name:
    description: The name of the arm64 artifact for building images
    required: false
    default: ""
  image-namespace:
    description: The namespace of the image registry to store the images
    required: true
  image-name:
    description: The name of the image to build
    required: true
  image-tag:
    description: The tag of the image to build
    required: true
  docker-file:
    description: The path to the Dockerfile to build
    required: true
  platforms:
    description: The supported platforms to build the image
    required: true
  push-latest-tag:
    description: Whether to push the latest tag
    required: false
    default: 'true'
runs:
  using: composite
  steps:
    - name: Login to image registry
      uses: docker/login-action@v2
      with:
        registry: ${{ inputs.image-registry }}
        username: ${{ inputs.image-registry-username }}
        password: ${{ inputs.image-registry-password }}
    - name: Set up qemu for multi-platform builds
      uses: docker/setup-qemu-action@v2
    - name: Set up buildx
      uses: docker/setup-buildx-action@v2
    - name: Download amd64 artifacts
      uses: actions/download-artifact@v3
      with:
        name: ${{ inputs.amd64-artifact-name }}
    - name: Unzip the amd64 artifacts
      shell: bash
      run: |
        tar xvf ${{ inputs.amd64-artifact-name }}.tar.gz && \
        rm ${{ inputs.amd64-artifact-name }}.tar.gz && \
        rm -rf amd64 && \
        mv ${{ inputs.amd64-artifact-name }} amd64
    - name: Download arm64 artifacts
      uses: actions/download-artifact@v3
      if: ${{ inputs.arm64-artifact-name }}
      with:
        name: ${{ inputs.arm64-artifact-name }}
    - name: Unzip the arm64 artifacts
      shell: bash
      if: ${{ inputs.arm64-artifact-name }}
      run: |
        tar xvf ${{ inputs.arm64-artifact-name }}.tar.gz && \
        rm ${{ inputs.arm64-artifact-name }}.tar.gz && \
        rm -rf arm64 && \
        mv ${{ inputs.arm64-artifact-name }} arm64
    - name: Build and push images (without latest) for amd64 and arm64
      if: ${{ inputs.push-latest-tag == 'false' }}
      uses: docker/build-push-action@v3
      with:
        context: .
        file: ${{ inputs.docker-file }}
        push: true
        platforms: ${{ inputs.platforms }}
        tags: |
          ${{ inputs.image-registry }}/${{ inputs.image-namespace }}/${{ inputs.image-name }}:${{ inputs.image-tag }}
    - name: Build and push images for amd64 and arm64
      if: ${{ inputs.push-latest-tag == 'true' }}
      uses: docker/build-push-action@v3
      with:
        context: .
        file: ${{ inputs.docker-file }}
        push: true
        platforms: ${{ inputs.platforms }}
        tags: |
          ${{ inputs.image-registry }}/${{ inputs.image-namespace }}/${{ inputs.image-name }}:latest
          ${{ inputs.image-registry }}/${{ inputs.image-namespace }}/${{ inputs.image-name }}:${{ inputs.image-tag }}

.github/actions/build-images/action.yml

@@ -0,0 +1,62 @@
name: Group for building greptimedb images
description: Group for building greptimedb images
inputs:
  image-registry:
    description: The image registry to store the images
    required: true
  image-namespace:
    description: The namespace of the image registry to store the images
    required: true
  image-name:
    description: The name of the image to build
    required: false
    default: greptimedb
  image-registry-username:
    description: The username to login to the image registry
    required: true
  image-registry-password:
    description: The password to login to the image registry
    required: true
  version:
    description: Version of the artifact
    required: true
  push-latest-tag:
    description: Whether to push the latest tag
    required: false
    default: 'true'
  dev-mode:
    description: Enable dev mode, only build standard greptime
    required: false
    default: 'false'
runs:
  using: composite
  steps:
    - name: Build and push standard images to dockerhub
      uses: ./.github/actions/build-greptime-images
      with: # The image will be used as '${{ inputs.image-registry }}/${{ inputs.image-namespace }}/${{ inputs.image-name }}:${{ inputs.version }}'
        image-registry: ${{ inputs.image-registry }}
        image-namespace: ${{ inputs.image-namespace }}
        image-registry-username: ${{ inputs.image-registry-username }}
        image-registry-password: ${{ inputs.image-registry-password }}
        image-name: ${{ inputs.image-name }}
        image-tag: ${{ inputs.version }}
        docker-file: docker/ci/ubuntu/Dockerfile
        amd64-artifact-name: greptime-linux-amd64-pyo3-${{ inputs.version }}
        arm64-artifact-name: greptime-linux-arm64-pyo3-${{ inputs.version }}
        platforms: linux/amd64,linux/arm64
        push-latest-tag: ${{ inputs.push-latest-tag }}
    - name: Build and push centos images to dockerhub
      if: ${{ inputs.dev-mode == 'false' }}
      uses: ./.github/actions/build-greptime-images
      with:
        image-registry: ${{ inputs.image-registry }}
        image-namespace: ${{ inputs.image-namespace }}
        image-registry-username: ${{ inputs.image-registry-username }}
        image-registry-password: ${{ inputs.image-registry-password }}
        image-name: ${{ inputs.image-name }}-centos
        image-tag: ${{ inputs.version }}
        docker-file: docker/ci/centos/Dockerfile
        amd64-artifact-name: greptime-linux-amd64-centos-${{ inputs.version }}
        platforms: linux/amd64
        push-latest-tag: ${{ inputs.push-latest-tag }}


@@ -0,0 +1,88 @@
name: Build linux artifacts
description: Build linux artifacts
inputs:
  arch:
    description: Architecture to build
    required: true
  cargo-profile:
    description: Cargo profile to build
    required: true
  version:
    description: Version of the artifact
    required: true
  disable-run-tests:
    description: Disable running integration tests
    required: true
  dev-mode:
    description: Enable dev mode, only build standard greptime
    required: false
    default: 'false'
  working-dir:
    description: Working directory to build the artifacts
    required: false
    default: .
runs:
  using: composite
  steps:
    - name: Run integration test
      if: ${{ inputs.disable-run-tests == 'false' }}
      shell: bash
      # NOTE: If BUILD_JOBS > 4, the build always OOMs in the EC2 instance.
      run: |
        cd ${{ inputs.working-dir }} && \
        make run-it-in-container BUILD_JOBS=4
    - name: Upload sqlness logs
      if: ${{ failure() && inputs.disable-run-tests == 'false' }} # Only upload logs when the integration tests failed.
      uses: actions/upload-artifact@v3
      with:
        name: sqlness-logs
        path: /tmp/greptime-*.log
        retention-days: 3
    - name: Build standard greptime
      uses: ./.github/actions/build-greptime-binary
      with:
        base-image: ubuntu
        features: pyo3_backend,servers/dashboard
        cargo-profile: ${{ inputs.cargo-profile }}
        artifacts-dir: greptime-linux-${{ inputs.arch }}-pyo3-${{ inputs.version }}
        version: ${{ inputs.version }}
        working-dir: ${{ inputs.working-dir }}
    - name: Build greptime without pyo3
      if: ${{ inputs.dev-mode == 'false' }}
      uses: ./.github/actions/build-greptime-binary
      with:
        base-image: ubuntu
        features: servers/dashboard
        cargo-profile: ${{ inputs.cargo-profile }}
        artifacts-dir: greptime-linux-${{ inputs.arch }}-${{ inputs.version }}
        version: ${{ inputs.version }}
        working-dir: ${{ inputs.working-dir }}
    - name: Clean up the target directory # Clean up the target directory for the centos7 base image, or it will still use the objects of the last build.
      shell: bash
      run: |
        rm -rf ./target/
    - name: Build greptime on centos base image
      uses: ./.github/actions/build-greptime-binary
      if: ${{ inputs.arch == 'amd64' && inputs.dev-mode == 'false' }} # Only build centos7 base image for amd64.
      with:
        base-image: centos
        features: servers/dashboard
        cargo-profile: ${{ inputs.cargo-profile }}
        artifacts-dir: greptime-linux-${{ inputs.arch }}-centos-${{ inputs.version }}
        version: ${{ inputs.version }}
        working-dir: ${{ inputs.working-dir }}
    - name: Build greptime on android base image
      uses: ./.github/actions/build-greptime-binary
      if: ${{ inputs.arch == 'amd64' && inputs.dev-mode == 'false' }} # Only build android base image on amd64.
      with:
        base-image: android
        artifacts-dir: greptime-android-arm64-${{ inputs.version }}
        version: ${{ inputs.version }}
        working-dir: ${{ inputs.working-dir }}
        build-android-artifacts: true


@@ -0,0 +1,89 @@
name: Build macos artifacts
description: Build macos artifacts
inputs:
  arch:
    description: Architecture to build
    required: true
  rust-toolchain:
    description: Rust toolchain to use
    required: true
  cargo-profile:
    description: Cargo profile to build
    required: true
  features:
    description: Cargo features to build
    required: true
  version:
    description: Version of the artifact
    required: true
  disable-run-tests:
    description: Disable running integration tests
    required: true
  artifacts-dir:
    description: Directory to store artifacts
    required: true
runs:
  using: composite
  steps:
    - name: Cache cargo assets
      id: cache
      uses: actions/cache@v3
      with:
        path: |
          ~/.cargo/bin/
          ~/.cargo/registry/index/
          ~/.cargo/registry/cache/
          ~/.cargo/git/db/
          target/
        key: ${{ inputs.arch }}-build-cargo-${{ hashFiles('**/Cargo.lock') }}
    - name: Install protoc
      shell: bash
      run: |
        brew install protobuf
    - name: Install rust toolchain
      uses: dtolnay/rust-toolchain@master
      with:
        toolchain: ${{ inputs.rust-toolchain }}
        targets: ${{ inputs.arch }}
    - name: Start etcd # For integration tests.
      if: ${{ inputs.disable-run-tests == 'false' }}
      shell: bash
      run: |
        brew install etcd && \
        brew services start etcd
    - name: Install latest nextest release # For integration tests.
      if: ${{ inputs.disable-run-tests == 'false' }}
      uses: taiki-e/install-action@nextest
    - name: Run integration tests
      if: ${{ inputs.disable-run-tests == 'false' }}
      shell: bash
      run: |
        make test sqlness-test
    - name: Upload sqlness logs
      if: ${{ failure() }} # Only upload logs when the integration tests failed.
      uses: actions/upload-artifact@v3
      with:
        name: sqlness-logs
        path: /tmp/greptime-*.log
        retention-days: 3
    - name: Build greptime binary
      shell: bash
      run: |
        make build \
          CARGO_PROFILE=${{ inputs.cargo-profile }} \
          FEATURES=${{ inputs.features }} \
          TARGET=${{ inputs.arch }}
    - name: Upload artifacts
      uses: ./.github/actions/upload-artifacts
      with:
        artifacts-dir: ${{ inputs.artifacts-dir }}
        target-file: target/${{ inputs.arch }}/${{ inputs.cargo-profile }}/greptime
        version: ${{ inputs.version }}


@@ -0,0 +1,80 @@
name: Build Windows artifacts
description: Build Windows artifacts
inputs:
  arch:
    description: Architecture to build
    required: true
  rust-toolchain:
    description: Rust toolchain to use
    required: true
  cargo-profile:
    description: Cargo profile to build
    required: true
  features:
    description: Cargo features to build
    required: true
  version:
    description: Version of the artifact
    required: true
  disable-run-tests:
    description: Disable running integration tests
    required: true
  artifacts-dir:
    description: Directory to store artifacts
    required: true
runs:
  using: composite
  steps:
    - uses: arduino/setup-protoc@v1
    - name: Install rust toolchain
      uses: dtolnay/rust-toolchain@master
      with:
        toolchain: ${{ inputs.rust-toolchain }}
        targets: ${{ inputs.arch }}
        components: llvm-tools-preview
    - name: Rust Cache
      uses: Swatinem/rust-cache@v2
    - name: Install Python
      uses: actions/setup-python@v4
      with:
        python-version: '3.10'
    - name: Install PyArrow Package
      shell: pwsh
      run: pip install pyarrow
    - name: Install WSL distribution
      uses: Vampire/setup-wsl@v2
      with:
        distribution: Ubuntu-22.04
    - name: Install latest nextest release # For integration tests.
      if: ${{ inputs.disable-run-tests == 'false' }}
      uses: taiki-e/install-action@nextest
    - name: Run integration tests
      if: ${{ inputs.disable-run-tests == 'false' }}
      shell: pwsh
      run: make test sqlness-test
    - name: Upload sqlness logs
      if: ${{ failure() }} # Only upload logs when the integration tests failed.
      uses: actions/upload-artifact@v3
      with:
        name: sqlness-logs
        path: ${{ runner.temp }}/greptime-*.log
        retention-days: 3
    - name: Build greptime binary
      shell: pwsh
      run: cargo build --profile ${{ inputs.cargo-profile }} --features ${{ inputs.features }} --target ${{ inputs.arch }}
    - name: Upload artifacts
      uses: ./.github/actions/upload-artifacts
      with:
        artifacts-dir: ${{ inputs.artifacts-dir }}
        target-file: target/${{ inputs.arch }}/${{ inputs.cargo-profile }}/greptime
        version: ${{ inputs.version }}


@@ -0,0 +1,50 @@
name: Publish GitHub release
description: Publish GitHub release
inputs:
  version:
    description: Version to release
    required: true
runs:
  using: composite
  steps:
    # Download artifacts from previous jobs, the artifacts will be downloaded to:
    # ${WORKING_DIR}
    # |- greptime-darwin-amd64-pyo3-v0.5.0/greptime-darwin-amd64-pyo3-v0.5.0.tar.gz
    # |- greptime-darwin-amd64-pyo3-v0.5.0.sha256sum/greptime-darwin-amd64-pyo3-v0.5.0.sha256sum
    # |- greptime-darwin-amd64-v0.5.0/greptime-darwin-amd64-v0.5.0.tar.gz
    # |- greptime-darwin-amd64-v0.5.0.sha256sum/greptime-darwin-amd64-v0.5.0.sha256sum
    # ...
    - name: Download artifacts
      uses: actions/download-artifact@v3
    - name: Create git tag for release
      if: ${{ github.event_name != 'push' }} # Meaning this is a scheduled or manual workflow.
      shell: bash
      run: |
        git tag ${{ inputs.version }}
    # Only publish release when the release tag is like v1.0.0, v1.0.1, v1.0.2, etc.
    - name: Set release arguments
      shell: bash
      run: |
        if [[ "${{ inputs.version }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
          echo "prerelease=false" >> $GITHUB_ENV
          echo "makeLatest=true" >> $GITHUB_ENV
          echo "generateReleaseNotes=false" >> $GITHUB_ENV
        else
          echo "prerelease=true" >> $GITHUB_ENV
          echo "makeLatest=false" >> $GITHUB_ENV
          echo "generateReleaseNotes=true" >> $GITHUB_ENV
        fi
    - name: Publish release
      uses: ncipollo/release-action@v1
      with:
        name: "Release ${{ inputs.version }}"
        prerelease: ${{ env.prerelease }}
        makeLatest: ${{ env.makeLatest }}
        tag: ${{ inputs.version }}
        generateReleaseNotes: ${{ env.generateReleaseNotes }}
        allowUpdates: true
        artifacts: |
          **/greptime-*/*
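As a concrete example of the branching above: a tag such as v0.4.2 matches the regex, so the release is published as latest and not marked prerelease, while a scheduled build versioned like v0.2.0-nightly-20230313 (the scheme described in .github/scripts/create-version.sh below) fails the match and is published as a prerelease with generated release notes.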


@@ -0,0 +1,138 @@
name: Release CN artifacts
description: Release artifacts to CN region
inputs:
  src-image-registry:
    description: The source image registry to store the images
    required: true
    default: docker.io
  src-image-namespace:
    description: The namespace of the source image registry to store the images
    required: true
    default: greptime
  src-image-name:
    description: The name of the source image
    required: false
    default: greptimedb
  dst-image-registry:
    description: The destination image registry to store the images
    required: true
  dst-image-namespace:
    description: The namespace of the destination image registry to store the images
    required: true
    default: greptime
  dst-image-registry-username:
    description: The username to login to the image registry
    required: true
  dst-image-registry-password:
    description: The password to login to the image registry
    required: true
  version:
    description: Version of the artifact
    required: true
  dev-mode:
    description: Enable dev mode, only push standard greptime
    required: false
    default: 'false'
  push-latest-tag:
    description: Whether to push the latest tag of the image
    required: false
    default: 'true'
  aws-cn-s3-bucket:
    description: S3 bucket to store released artifacts in CN region
    required: true
  aws-cn-access-key-id:
    description: AWS access key id in CN region
    required: true
  aws-cn-secret-access-key:
    description: AWS secret access key in CN region
    required: true
  aws-cn-region:
    description: AWS region in CN
    required: true
  upload-to-s3:
    description: Upload to S3
    required: false
    default: 'true'
  artifacts-dir:
    description: Directory to store artifacts
    required: false
    default: 'artifacts'
  update-version-info:
    description: Update the version info in S3
    required: false
    default: 'true'
  upload-max-retry-times:
    description: Max retry times for uploading artifacts to S3
    required: false
    default: "20"
  upload-retry-timeout:
    description: Timeout for uploading artifacts to S3
    required: false
    default: "30" # minutes
runs:
  using: composite
  steps:
    - name: Download artifacts
      uses: actions/download-artifact@v3
      with:
        path: ${{ inputs.artifacts-dir }}
    - name: Release artifacts to cn region
      uses: nick-invision/retry@v2
      if: ${{ inputs.upload-to-s3 == 'true' }}
      env:
        AWS_ACCESS_KEY_ID: ${{ inputs.aws-cn-access-key-id }}
        AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-cn-secret-access-key }}
        AWS_DEFAULT_REGION: ${{ inputs.aws-cn-region }}
        UPDATE_VERSION_INFO: ${{ inputs.update-version-info }}
      with:
        max_attempts: ${{ inputs.upload-max-retry-times }}
        timeout_minutes: ${{ inputs.upload-retry-timeout }}
        command: |
          ./.github/scripts/upload-artifacts-to-s3.sh \
            ${{ inputs.artifacts-dir }} \
            ${{ inputs.version }} \
            ${{ inputs.aws-cn-s3-bucket }}
    - name: Push greptimedb image from Dockerhub to ACR
      shell: bash
      env:
        DST_REGISTRY_USERNAME: ${{ inputs.dst-image-registry-username }}
        DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
      run: |
        ./.github/scripts/copy-image.sh \
          ${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}:${{ inputs.version }} \
          ${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}
    - name: Push latest greptimedb image from Dockerhub to ACR
      shell: bash
      if: ${{ inputs.push-latest-tag == 'true' }}
      env:
        DST_REGISTRY_USERNAME: ${{ inputs.dst-image-registry-username }}
        DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
      run: |
        ./.github/scripts/copy-image.sh \
          ${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}:latest \
          ${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}
    - name: Push greptimedb-centos image from DockerHub to ACR
      shell: bash
      if: ${{ inputs.dev-mode == 'false' }}
      env:
        DST_REGISTRY_USERNAME: ${{ inputs.dst-image-registry-username }}
        DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
      run: |
        ./.github/scripts/copy-image.sh \
          ${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}-centos:latest \
          ${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}
    - name: Push greptimedb-centos image from DockerHub to ACR
      shell: bash
      if: ${{ inputs.dev-mode == 'false' && inputs.push-latest-tag == 'true' }}
      env:
        DST_REGISTRY_USERNAME: ${{ inputs.dst-image-registry-username }}
        DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
      run: |
        ./.github/scripts/copy-image.sh \
          ${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}-centos:latest \
          ${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}

.github/actions/start-runner/action.yml

@@ -0,0 +1,67 @@
name: Start EC2 runner
description: Start EC2 runner
inputs:
  runner:
    description: The linux runner name
    required: true
  aws-access-key-id:
    description: AWS access key id
    required: true
  aws-secret-access-key:
    description: AWS secret access key
    required: true
  aws-region:
    description: AWS region
    required: true
  github-token:
    description: The GitHub token to clone private repository
    required: false
    default: ""
  image-id:
    description: The EC2 image id
    required: true
  security-group-id:
    description: The EC2 security group id
    required: true
  subnet-id:
    description: The EC2 subnet id
    required: true
outputs:
  label:
    description: "label"
    value: ${{ steps.start-linux-arm64-ec2-runner.outputs.label || inputs.runner }}
  ec2-instance-id:
    description: "ec2-instance-id"
    value: ${{ steps.start-linux-arm64-ec2-runner.outputs.ec2-instance-id }}
runs:
  using: composite
  steps:
    - name: Configure AWS credentials
      if: startsWith(inputs.runner, 'ec2')
      uses: aws-actions/configure-aws-credentials@v2
      with:
        aws-access-key-id: ${{ inputs.aws-access-key-id }}
        aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
        aws-region: ${{ inputs.aws-region }}
    # The EC2 runner will use the following format:
    # <vm-type>-<instance-type>-<arch>
    # like 'ec2-c6a.4xlarge-amd64'.
    - name: Get EC2 instance type
      if: startsWith(inputs.runner, 'ec2')
      id: get-ec2-instance-type
      shell: bash
      run: |
        echo "instance-type=$(echo ${{ inputs.runner }} | cut -d'-' -f2)" >> $GITHUB_OUTPUT
    - name: Start EC2 runner
      if: startsWith(inputs.runner, 'ec2')
      uses: machulav/ec2-github-runner@v2
      id: start-linux-arm64-ec2-runner
      with:
        mode: start
        ec2-image-id: ${{ inputs.image-id }}
        ec2-instance-type: ${{ steps.get-ec2-instance-type.outputs.instance-type }}
        subnet-id: ${{ inputs.subnet-id }}
        security-group-id: ${{ inputs.security-group-id }}
        github-token: ${{ inputs.github-token }}

.github/actions/stop-runner/action.yml

@@ -0,0 +1,41 @@
name: Stop EC2 runner
description: Stop EC2 runner
inputs:
  label:
    description: The linux runner name
    required: true
  ec2-instance-id:
    description: The EC2 instance id
    required: true
  aws-access-key-id:
    description: AWS access key id
    required: true
  aws-secret-access-key:
    description: AWS secret access key
    required: true
  aws-region:
    description: AWS region
    required: true
  github-token:
    description: The GitHub token to clone private repository
    required: false
    default: ""
runs:
  using: composite
  steps:
    - name: Configure AWS credentials
      if: ${{ inputs.label && inputs.ec2-instance-id }}
      uses: aws-actions/configure-aws-credentials@v2
      with:
        aws-access-key-id: ${{ inputs.aws-access-key-id }}
        aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
        aws-region: ${{ inputs.aws-region }}
    - name: Stop EC2 runner
      if: ${{ inputs.label && inputs.ec2-instance-id }}
      uses: machulav/ec2-github-runner@v2
      with:
        mode: stop
        label: ${{ inputs.label }}
        ec2-instance-id: ${{ inputs.ec2-instance-id }}
        github-token: ${{ inputs.github-token }}

.github/actions/upload-artifacts/action.yml

@@ -0,0 +1,63 @@
name: Upload artifacts
description: Upload artifacts
inputs:
artifacts-dir:
description: Directory to store artifacts
required: true
target-file:
description: The path of the target artifact
required: true
version:
description: Version of the artifact
required: true
working-dir:
description: Working directory to upload the artifacts
required: false
default: .
runs:
using: composite
steps:
- name: Create artifacts directory
working-directory: ${{ inputs.working-dir }}
shell: bash
run: |
mkdir -p ${{ inputs.artifacts-dir }} && \
mv ${{ inputs.target-file }} ${{ inputs.artifacts-dir }}
# The compressed artifacts will use the following layout:
# greptime-linux-amd64-pyo3-v0.3.0sha256sum
# greptime-linux-amd64-pyo3-v0.3.0.tar.gz
# greptime-linux-amd64-pyo3-v0.3.0
# └── greptime
- name: Compress artifacts and calculate checksum
working-directory: ${{ inputs.working-dir }}
shell: bash
run: |
tar -zcvf ${{ inputs.artifacts-dir }}.tar.gz ${{ inputs.artifacts-dir }}
- name: Calculate checksum
if: runner.os != 'Windows'
working-directory: ${{ inputs.working-dir }}
shell: bash
run: |
echo $(shasum -a 256 ${{ inputs.artifacts-dir }}.tar.gz | cut -f1 -d' ') > ${{ inputs.artifacts-dir }}.sha256sum
- name: Calculate checksum on Windows
if: runner.os == 'Windows'
working-directory: ${{ inputs.working-dir }}
shell: pwsh
run: Get-FileHash ${{ inputs.artifacts-dir }}.tar.gz -Algorithm SHA256 | select -ExpandProperty Hash > ${{ inputs.artifacts-dir }}.sha256sum
# Note: The artifacts will be double zip-compressed (related issue: https://github.com/actions/upload-artifact/issues/39).
# However, when we use 'actions/download-artifact@v3' to download the artifacts, it will be automatically unzipped.
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ inputs.artifacts-dir }}
path: ${{ inputs.working-dir }}/${{ inputs.artifacts-dir }}.tar.gz
- name: Upload checksum
uses: actions/upload-artifact@v3
with:
name: ${{ inputs.artifacts-dir }}.sha256sum
path: ${{ inputs.working-dir }}/${{ inputs.artifacts-dir }}.sha256sum
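Since the .sha256sum file written above contains only the bare hash, a consumer has to rebuild the 'hash  filename' line before feeding it to a checker. A minimal verification sketch, assuming the artifact and its checksum file were downloaded side by side (file names illustrative):
ARTIFACT=greptime-linux-amd64-pyo3-v0.3.0.tar.gz
# Rebuild the "hash  filename" line expected by shasum -c, then verify.
echo "$(cat "${ARTIFACT%.tar.gz}.sha256sum")  $ARTIFACT" | shasum -a 256 -c -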

.github/scripts/copy-image.sh vendored Executable file

@@ -0,0 +1,47 @@
#!/usr/bin/env bash
set -e
set -o pipefail
SRC_IMAGE=$1
DST_REGISTRY=$2
SKOPEO_STABLE_IMAGE="quay.io/skopeo/stable:latest"
# Check if necessary variables are set.
function check_vars() {
for var in DST_REGISTRY_USERNAME DST_REGISTRY_PASSWORD DST_REGISTRY SRC_IMAGE; do
if [ -z "${!var}" ]; then
echo "$var is not set or empty."
echo "Usage: DST_REGISTRY_USERNAME=<your-dst-registry-username> DST_REGISTRY_PASSWORD=<your-dst-registry-password> $0 <dst-registry> <src-image>"
exit 1
fi
done
}
# Copies images from DockerHub to the destination registry.
function copy_images_from_dockerhub() {
# Check if docker is installed.
if ! command -v docker &> /dev/null; then
echo "docker is not installed. Please install docker to continue."
exit 1
fi
# Extract the name and tag of the source image.
IMAGE_NAME=$(echo "$SRC_IMAGE" | sed "s/.*\///")
echo "Copying $SRC_IMAGE to $DST_REGISTRY/$IMAGE_NAME"
docker run "$SKOPEO_STABLE_IMAGE" copy -a docker://"$SRC_IMAGE" \
--dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
docker://"$DST_REGISTRY/$IMAGE_NAME"
}
function main() {
check_vars
copy_images_from_dockerhub
}
# Usage example:
# DST_REGISTRY_USERNAME=123 DST_REGISTRY_PASSWORD=456 \
# ./copy-image.sh greptime/greptimedb:v0.4.0 greptime-registry.cn-hangzhou.cr.aliyuncs.com
main
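After a copy, the destination manifest can be double-checked with skopeo inspect, reusing the same containerized skopeo (a sketch; the image reference is illustrative):
docker run quay.io/skopeo/stable:latest inspect \
  --creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
  docker://greptime-registry.cn-hangzhou.cr.aliyuncs.com/greptimedb:v0.4.0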

.github/scripts/create-version.sh vendored Executable file

@@ -0,0 +1,68 @@
#!/usr/bin/env bash
set -e
# - If it's a tag push release, the version is the tag name (${{ github.ref_name }});
# - If it's a scheduled release, the version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-$buildTime', like 'v0.2.0-nightly-20230313';
# - If it's a manual release, the version is '${{ env.NEXT_RELEASE_VERSION }}-$(git rev-parse --short HEAD)-YYYYMMDD-<epoch-seconds>', like 'v0.2.0-e5b243c-20230712-1689148800';
# - If it's a nightly build, the version is 'nightly-YYYYMMDD-$(git rev-parse --short HEAD)', like 'nightly-20230712-e5b243c'.
# create_version ${GITHUB_EVENT_NAME} ${NEXT_RELEASE_VERSION} ${NIGHTLY_RELEASE_PREFIX}
function create_version() {
# Read from environment variables.
if [ -z "$GITHUB_EVENT_NAME" ]; then
echo "GITHUB_EVENT_NAME is empty"
exit 1
fi
if [ -z "$NEXT_RELEASE_VERSION" ]; then
echo "NEXT_RELEASE_VERSION is empty"
exit 1
fi
if [ -z "$NIGHTLY_RELEASE_PREFIX" ]; then
echo "NIGHTLY_RELEASE_PREFIX is empty"
exit 1
fi
# Reuse $NEXT_RELEASE_VERSION to identify whether it's a nightly build.
# It will be like 'nightly-20230808-7d0d8dc6'.
if [ "$NEXT_RELEASE_VERSION" = nightly ]; then
echo "$NIGHTLY_RELEASE_PREFIX-$(date "+%Y%m%d")-$(git rev-parse --short HEAD)"
exit 0
fi
# Reuse $NEXT_RELEASE_VERSION to identify whether it's a dev build.
# It will be like 'dev-20230712-1689148800-f0e7216c'.
if [ "$NEXT_RELEASE_VERSION" = dev ]; then
if [ -z "$COMMIT_SHA" ]; then
echo "COMMIT_SHA is empty in dev build"
exit 1
fi
echo "dev-$(date "+%Y%m%d-%s")-$(echo "$COMMIT_SHA" | cut -c1-8)"
exit 0
fi
# Note: Only output the version string to stdout when everything is ok, so that it can be captured and used in GitHub Actions outputs.
if [ "$GITHUB_EVENT_NAME" = push ]; then
if [ -z "$GITHUB_REF_NAME" ]; then
echo "GITHUB_REF_NAME is empty in push event"
exit 1
fi
echo "$GITHUB_REF_NAME"
elif [ "$GITHUB_EVENT_NAME" = workflow_dispatch ]; then
echo "$NEXT_RELEASE_VERSION-$(git rev-parse --short HEAD)-$(date "+%Y%m%d-%s")"
elif [ "$GITHUB_EVENT_NAME" = schedule ]; then
echo "$NEXT_RELEASE_VERSION-$NIGHTLY_RELEASE_PREFIX-$(date "+%Y%m%d")"
else
echo "Unsupported GITHUB_EVENT_NAME: $GITHUB_EVENT_NAME"
exit 1
fi
}
# You can run it as in the following examples:
# GITHUB_EVENT_NAME=push NEXT_RELEASE_VERSION=v0.4.0 NIGHTLY_RELEASE_PREFIX=nightly GITHUB_REF_NAME=v0.3.0 ./create-version.sh
# GITHUB_EVENT_NAME=workflow_dispatch NEXT_RELEASE_VERSION=v0.4.0 NIGHTLY_RELEASE_PREFIX=nightly ./create-version.sh
# GITHUB_EVENT_NAME=schedule NEXT_RELEASE_VERSION=v0.4.0 NIGHTLY_RELEASE_PREFIX=nightly ./create-version.sh
# GITHUB_EVENT_NAME=schedule NEXT_RELEASE_VERSION=nightly NIGHTLY_RELEASE_PREFIX=nightly ./create-version.sh
# GITHUB_EVENT_NAME=workflow_dispatch COMMIT_SHA=f0e7216c4bb6acce9b29a21ec2d683be2e3f984a NEXT_RELEASE_VERSION=dev NIGHTLY_RELEASE_PREFIX=nightly ./create-version.sh
create_version
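Putting the branches above together, the generated versions come out in the following shapes (dates, epoch seconds and SHAs are illustrative):
#   push of tag v0.4.0           -> v0.4.0
#   workflow_dispatch            -> v0.4.0-e5b243c-20230712-1689148800
#   schedule                     -> v0.4.0-nightly-20230712
#   NEXT_RELEASE_VERSION=nightly -> nightly-20230712-e5b243c
#   NEXT_RELEASE_VERSION=dev     -> dev-20230712-1689148800-f0e7216c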

.github/scripts/upload-artifacts-to-s3.sh vendored Executable file

@@ -0,0 +1,102 @@
#!/usr/bin/env bash
set -e
set -o pipefail
ARTIFACTS_DIR=$1
VERSION=$2
AWS_S3_BUCKET=$3
RELEASE_DIRS="releases/greptimedb"
GREPTIMEDB_REPO="GreptimeTeam/greptimedb"
# Check if necessary variables are set.
function check_vars() {
for var in AWS_S3_BUCKET VERSION ARTIFACTS_DIR; do
if [ -z "${!var}" ]; then
echo "$var is not set or empty."
echo "Usage: $0 <artifacts-dir> <version> <aws-s3-bucket>"
exit 1
fi
done
}
# Uploads artifacts to AWS S3 bucket.
function upload_artifacts() {
# The bucket layout will be:
# releases/greptimedb
# ├── latest-version.txt
# ├── latest-nightly-version.txt
# ├── v0.1.0
# │ ├── greptime-darwin-amd64-pyo3-v0.1.0.sha256sum
# │ └── greptime-darwin-amd64-pyo3-v0.1.0.tar.gz
# └── v0.2.0
# ├── greptime-darwin-amd64-pyo3-v0.2.0.sha256sum
# └── greptime-darwin-amd64-pyo3-v0.2.0.tar.gz
find "$ARTIFACTS_DIR" -type f \( -name "*.tar.gz" -o -name "*.sha256sum" \) | while IFS= read -r file; do
aws s3 cp \
"$file" "s3://$AWS_S3_BUCKET/$RELEASE_DIRS/$VERSION/$(basename "$file")"
done
}
# Updates the latest version information in AWS S3 if UPDATE_VERSION_INFO is true.
function update_version_info() {
if [ "$UPDATE_VERSION_INFO" == "true" ]; then
# If it's the official release (like v1.0.0, v1.0.1, v1.0.2, etc.), update latest-version.txt.
if [[ "$VERSION" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "Updating latest-version.txt"
echo "$VERSION" > latest-version.txt
aws s3 cp \
latest-version.txt "s3://$AWS_S3_BUCKET/$RELEASE_DIRS/latest-version.txt"
fi
# If it's the nightly release, update latest-nightly-version.txt.
if [[ "$VERSION" == *"nightly"* ]]; then
echo "Updating latest-nightly-version.txt"
echo "$VERSION" > latest-nightly-version.txt
aws s3 cp \
latest-nightly-version.txt "s3://$AWS_S3_BUCKET/$RELEASE_DIRS/latest-nightly-version.txt"
fi
fi
}
# Downloads artifacts from GitHub if DOWNLOAD_ARTIFACTS_FROM_GITHUB is true.
function download_artifacts_from_github() {
if [ "$DOWNLOAD_ARTIFACTS_FROM_GITHUB" == "true" ]; then
# Check if jq is installed.
if ! command -v jq &> /dev/null; then
echo "jq is not installed. Please install jq to continue."
exit 1
fi
# Get the latest release API response.
RELEASES_API_RESPONSE=$(curl -s -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/$GREPTIMEDB_REPO/releases/latest")
# Extract download URLs for the artifacts.
# Exclude source code archives which are typically named as 'greptimedb-<version>.zip' or 'greptimedb-<version>.tar.gz'.
ASSET_URLS=$(echo "$RELEASES_API_RESPONSE" | jq -r '.assets[] | select(.name | test("greptimedb-.*\\.(zip|tar\\.gz)$") | not) | .browser_download_url')
# Download each asset.
while IFS= read -r url; do
if [ -n "$url" ]; then
curl -LJO "$url"
echo "Downloaded: $url"
fi
done <<< "$ASSET_URLS"
fi
}
function main() {
check_vars
download_artifacts_from_github
upload_artifacts
update_version_info
}
# Usage example:
# AWS_ACCESS_KEY_ID=<your_access_key_id> \
# AWS_SECRET_ACCESS_KEY=<your_secret_access_key> \
# AWS_DEFAULT_REGION=<your_region> \
# UPDATE_VERSION_INFO=true \
# DOWNLOAD_ARTIFACTS_FROM_GITHUB=false \
# ./upload-artifacts-to-s3.sh <artifacts-dir> <version> <aws-s3-bucket>
main
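Given the bucket layout above, a release can be spot-checked from the CLI (bucket name illustrative):
aws s3 ls s3://my-release-bucket/releases/greptimedb/v0.4.0/
# Prints the version recorded by update_version_info, e.g. 'v0.4.0'.
aws s3 cp s3://my-release-bucket/releases/greptimedb/latest-version.txt -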


@@ -13,11 +13,11 @@ on:
name: Build API docs
env:
RUST_TOOLCHAIN: nightly-2023-02-26
RUST_TOOLCHAIN: nightly-2023-08-07
jobs:
apidoc:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1

.github/workflows/dev-build.yml vendored Normal file

@@ -0,0 +1,337 @@
# The development build only builds the debug version of the artifacts and is triggered manually.
name: GreptimeDB Development Build
on:
workflow_dispatch: # Allows you to run this workflow manually.
inputs:
repository:
description: The public repository to build
required: false
default: GreptimeTeam/greptimedb
commit: # Note: We only pull the source code and use the current workflow to build the artifacts.
description: The commit to build
required: true
linux_amd64_runner:
type: choice
description: The runner used to build linux-amd64 artifacts
default: ec2-c6i.4xlarge-amd64
options:
- ubuntu-20.04
- ubuntu-20.04-8-cores
- ubuntu-20.04-16-cores
- ubuntu-20.04-32-cores
- ubuntu-20.04-64-cores
- ec2-c6i.xlarge-amd64 # 4C8G
- ec2-c6i.2xlarge-amd64 # 8C16G
- ec2-c6i.4xlarge-amd64 # 16C32G
- ec2-c6i.8xlarge-amd64 # 32C64G
- ec2-c6i.16xlarge-amd64 # 64C128G
linux_arm64_runner:
type: choice
description: The runner used to build linux-arm64 artifacts
default: ec2-c6g.4xlarge-arm64
options:
- ec2-c6g.xlarge-arm64 # 4C8G
- ec2-c6g.2xlarge-arm64 # 8C16G
- ec2-c6g.4xlarge-arm64 # 16C32G
- ec2-c6g.8xlarge-arm64 # 32C64G
- ec2-c6g.16xlarge-arm64 # 64C128G
skip_test:
description: Do not run integration tests during the build
type: boolean
default: true
build_linux_amd64_artifacts:
type: boolean
description: Build linux-amd64 artifacts
required: false
default: true
build_linux_arm64_artifacts:
type: boolean
description: Build linux-arm64 artifacts
required: false
default: true
release_images:
type: boolean
description: Build and push images to DockerHub and ACR
required: false
default: true
# Use env variables to control all the release process.
env:
CARGO_PROFILE: nightly
# Controls whether to run tests, including unit tests, integration tests and sqlness.
DISABLE_RUN_TESTS: ${{ inputs.skip_test || vars.DEFAULT_SKIP_TEST }}
# Always use 'dev' to indicate it's the dev build.
NEXT_RELEASE_VERSION: dev
NIGHTLY_RELEASE_PREFIX: nightly
# Use the different image name to avoid conflict with the release images.
IMAGE_NAME: greptimedb-dev
# The source code will be checked out to the following path: '${WORKING_DIR}/dev/greptimedb'.
CHECKOUT_GREPTIMEDB_PATH: dev/greptimedb
jobs:
allocate-runners:
name: Allocate runners
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-20.04
outputs:
linux-amd64-runner: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-arm64-runner: ${{ steps.start-linux-arm64-runner.outputs.label }}
# The following EC2 resource IDs will be used to release the runners.
linux-amd64-ec2-runner-label: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-amd64-ec2-runner-instance-id: ${{ steps.start-linux-amd64-runner.outputs.ec2-instance-id }}
linux-arm64-ec2-runner-label: ${{ steps.start-linux-arm64-runner.outputs.label }}
linux-arm64-ec2-runner-instance-id: ${{ steps.start-linux-arm64-runner.outputs.ec2-instance-id }}
# The 'version' is used as the global tag name of the release workflow.
version: ${{ steps.create-version.outputs.version }}
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Create version
id: create-version
run: |
version=$(./.github/scripts/create-version.sh) && \
echo $version && \
echo "version=$version" >> $GITHUB_OUTPUT
env:
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_REF_NAME: ${{ github.ref_name }}
COMMIT_SHA: ${{ inputs.commit }}
NEXT_RELEASE_VERSION: ${{ env.NEXT_RELEASE_VERSION }}
NIGHTLY_RELEASE_PREFIX: ${{ env.NIGHTLY_RELEASE_PREFIX }}
- name: Allocate linux-amd64 runner
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
id: start-linux-amd64-runner
with:
runner: ${{ inputs.linux_amd64_runner || vars.DEFAULT_AMD64_RUNNER }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
image-id: ${{ vars.EC2_RUNNER_LINUX_AMD64_IMAGE_ID }}
security-group-id: ${{ vars.EC2_RUNNER_SECURITY_GROUP_ID }}
subnet-id: ${{ vars.EC2_RUNNER_SUBNET_ID }}
- name: Allocate linux-arm64 runner
if: ${{ inputs.build_linux_arm64_artifacts || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
id: start-linux-arm64-runner
with:
runner: ${{ inputs.linux_arm64_runner || vars.DEFAULT_ARM64_RUNNER }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
image-id: ${{ vars.EC2_RUNNER_LINUX_ARM64_IMAGE_ID }}
security-group-id: ${{ vars.EC2_RUNNER_SECURITY_GROUP_ID }}
subnet-id: ${{ vars.EC2_RUNNER_SUBNET_ID }}
build-linux-amd64-artifacts:
name: Build linux-amd64 artifacts
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'schedule' }}
needs: [
allocate-runners,
]
runs-on: ${{ needs.allocate-runners.outputs.linux-amd64-runner }}
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Checkout greptimedb
uses: actions/checkout@v3
with:
repository: ${{ inputs.repository }}
ref: ${{ inputs.commit }}
path: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
- uses: ./.github/actions/build-linux-artifacts
with:
arch: amd64
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
dev-mode: true # Only build the standard greptime binary.
working-dir: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
build-linux-arm64-artifacts:
name: Build linux-arm64 artifacts
if: ${{ inputs.build_linux_arm64_artifacts || github.event_name == 'schedule' }}
needs: [
allocate-runners,
]
runs-on: ${{ needs.allocate-runners.outputs.linux-arm64-runner }}
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Checkout greptimedb
uses: actions/checkout@v3
with:
repository: ${{ inputs.repository }}
ref: ${{ inputs.commit }}
path: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
- uses: ./.github/actions/build-linux-artifacts
with:
arch: arm64
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
dev-mode: true # Only build the standard greptime binary.
working-dir: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
release-images-to-dockerhub:
name: Build and push images to DockerHub
if: ${{ inputs.release_images || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
]
runs-on: ubuntu-20.04
outputs:
build-result: ${{ steps.set-build-result.outputs.build-result }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build and push images to dockerhub
uses: ./.github/actions/build-images
with:
image-registry: docker.io
image-namespace: ${{ vars.IMAGE_NAMESPACE }}
image-name: ${{ env.IMAGE_NAME }}
image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
push-latest-tag: false # Don't push the latest tag to registry.
dev-mode: true # Only build the standard images.
- name: Set build result
id: set-build-result
run: |
echo "build-result=success" >> $GITHUB_OUTPUT
release-cn-artifacts:
name: Release artifacts to CN region
if: ${{ inputs.release_images || github.event_name == 'schedule' }}
needs: [
allocate-runners,
release-images-to-dockerhub,
]
runs-on: ubuntu-20.04
continue-on-error: true
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Release artifacts to CN region
uses: ./.github/actions/release-cn-artifacts
with:
src-image-registry: docker.io
src-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
src-image-name: ${{ env.IMAGE_NAME }}
dst-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
dst-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
dst-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
dst-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
version: ${{ needs.allocate-runners.outputs.version }}
aws-cn-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-cn-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-cn-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-cn-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
dev-mode: true # Only build the standard images(exclude centos images).
push-latest-tag: false # Don't push the latest tag to registry.
update-version-info: false # Don't update the version info in S3.
stop-linux-amd64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
name: Stop linux-amd64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-amd64-artifacts,
]
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Stop EC2 runner
uses: ./.github/actions/stop-runner
with:
label: ${{ needs.allocate-runners.outputs.linux-amd64-ec2-runner-label }}
ec2-instance-id: ${{ needs.allocate-runners.outputs.linux-amd64-ec2-runner-instance-id }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
stop-linux-arm64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
name: Stop linux-arm64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-arm64-artifacts,
]
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Stop EC2 runner
uses: ./.github/actions/stop-runner
with:
label: ${{ needs.allocate-runners.outputs.linux-arm64-ec2-runner-label }}
ec2-instance-id: ${{ needs.allocate-runners.outputs.linux-arm64-ec2-runner-instance-id }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
notification:
if: ${{ always() }} # Always run, even if dependent jobs failed.
name: Send notification to Greptime team
needs: [
release-images-to-dockerhub
]
runs-on: ubuntu-20.04
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:
- name: Notify dev build successful result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-result == 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has completed successfully."}
- name: Notify dev build failed result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.build-result != 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/${{ env.NEXT_RELEASE_VERSION }}-build.yml'."}


@@ -1,4 +1,5 @@
on:
merge_group:
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
paths-ignore:
@@ -23,13 +24,17 @@ on:
name: CI
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
RUST_TOOLCHAIN: nightly-2023-02-26
RUST_TOOLCHAIN: nightly-2023-08-07
jobs:
typos:
name: Spell Check with Typos
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: crate-ci/typos@v1.13.10
@@ -37,7 +42,7 @@ jobs:
check:
name: Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -50,73 +55,33 @@ jobs:
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run cargo check
run: cargo check --workspace --all-targets
run: cargo check --locked --workspace --all-targets
toml:
name: Toml Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
toolchain: stable
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install taplo
run: cargo install taplo-cli --version ^0.8 --locked
run: cargo +stable install taplo-cli --version ^0.8 --locked
- name: Run taplo
run: taplo format --check --option "indent_string= "
# Use coverage to run test.
# test:
# name: Test Suite
# if: github.event.pull_request.draft == false
# runs-on: ubuntu-latest
# timeout-minutes: 60
# steps:
# - uses: actions/checkout@v3
# - name: Cache LLVM and Clang
# id: cache-llvm
# uses: actions/cache@v3
# with:
# path: ./llvm
# key: llvm
# - uses: arduino/setup-protoc@v1
# with:
# repo-token: ${{ secrets.GITHUB_TOKEN }}
# - uses: KyleMayes/install-llvm-action@v1
# with:
# version: "14.0"
# cached: ${{ steps.cache-llvm.outputs.cache-hit }}
# - uses: dtolnay/rust-toolchain@master
# with:
# toolchain: ${{ env.RUST_TOOLCHAIN }}
# - name: Rust Cache
# uses: Swatinem/rust-cache@v2
# - name: Cleanup disk
# uses: curoky/cleanup-disk-action@v2.0
# with:
# retain: 'rust,llvm'
# - name: Install latest nextest release
# uses: taiki-e/install-action@nextest
# - name: Run tests
# run: cargo nextest run
# env:
# CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
# RUST_BACKTRACE: 1
# GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
# GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
# GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
# UNITTEST_LOG_DIR: "__unittest_logs"
run: taplo format --check
sqlness:
name: Sqlness Test
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-20.04-8-cores ]
timeout-minutes: 60
needs: [clippy]
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
@@ -127,30 +92,20 @@ jobs:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run etcd
run: |
ETCD_VER=v3.5.7
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo cp -a /tmp/etcd-download/etcd* /usr/local/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
- name: Run sqlness
run: cargo sqlness && ls /tmp
run: cargo sqlness
- name: Upload sqlness logs
if: always()
uses: actions/upload-artifact@v3
with:
name: sqlness-logs
path: /tmp/greptime-*.log
path: ${{ runner.temp }}/greptime-*.log
retention-days: 3
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -169,7 +124,7 @@ jobs:
clippy:
name: Clippy
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -187,9 +142,8 @@ jobs:
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
runs-on: ubuntu-20.04-8-cores
timeout-minutes: 60
needs: [clippy]
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
@@ -216,7 +170,7 @@ jobs:
- name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov
- name: Collect coverage data
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend -F dashboard
env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1
@@ -224,6 +178,7 @@ jobs:
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
GT_S3_REGION: ${{ secrets.S3_REGION }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Codecov upload
uses: codecov/codecov-action@v2


@@ -11,7 +11,7 @@ on:
jobs:
doc_issue:
if: github.event.label.name == 'doc update required'
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: create an issue in doc repo
uses: dacbd/create-issue-action@main
@@ -25,7 +25,7 @@ jobs:
${{ github.event.issue.html_url || github.event.pull_request.html_url }}
cloud_issue:
if: github.event.label.name == 'cloud followup required'
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: create an issue in cloud repo
uses: dacbd/create-issue-action@main


@@ -1,4 +1,5 @@
on:
merge_group:
pull_request:
types: [opened, synchronize, reopened, ready_for_review]
paths:
@@ -27,29 +28,43 @@ name: CI
# https://docs.github.com/en/repositories/configuring-branches-and-merges-in-your-repository/defining-the-mergeability-of-pull-requests/troubleshooting-required-status-checks#handling-skipped-but-required-checks
jobs:
typos:
name: Spell Check with Typos
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: crate-ci/typos@v1.13.10
check:
name: Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
clippy:
name: Clippy
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
sqlness:
name: Sqlness Test
if: github.event.pull_request.draft == false
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'


@@ -8,7 +8,7 @@ on:
types: [opened, synchronize, reopened, ready_for_review]
jobs:
license-header-check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: license-header-check
steps:
- uses: actions/checkout@v2

.github/workflows/nightly-build.yml vendored Normal file

@@ -0,0 +1,309 @@
# The nightly build only does the following things:
# 1. Run integration tests;
# 2. Build binaries and images for the linux-amd64 and linux-arm64 platforms;
name: GreptimeDB Nightly Build
on:
schedule:
# Trigger at 00:00 (UTC) every day from Monday through Friday.
- cron: '0 0 * * 1-5'
workflow_dispatch: # Allows you to run this workflow manually.
inputs:
linux_amd64_runner:
type: choice
description: The runner used to build linux-amd64 artifacts
default: ec2-c6i.2xlarge-amd64
options:
- ubuntu-20.04
- ubuntu-20.04-8-cores
- ubuntu-20.04-16-cores
- ubuntu-20.04-32-cores
- ubuntu-20.04-64-cores
- ec2-c6i.xlarge-amd64 # 4C8G
- ec2-c6i.2xlarge-amd64 # 8C16G
- ec2-c6i.4xlarge-amd64 # 16C32G
- ec2-c6i.8xlarge-amd64 # 32C64G
- ec2-c6i.16xlarge-amd64 # 64C128G
linux_arm64_runner:
type: choice
description: The runner used to build linux-arm64 artifacts
default: ec2-c6g.2xlarge-arm64
options:
- ec2-c6g.xlarge-arm64 # 4C8G
- ec2-c6g.2xlarge-arm64 # 8C16G
- ec2-c6g.4xlarge-arm64 # 16C32G
- ec2-c6g.8xlarge-arm64 # 32C64G
- ec2-c6g.16xlarge-arm64 # 64C128G
skip_test:
description: Do not run integration tests during the build
type: boolean
default: true
build_linux_amd64_artifacts:
type: boolean
description: Build linux-amd64 artifacts
required: false
default: false
build_linux_arm64_artifacts:
type: boolean
description: Build linux-arm64 artifacts
required: false
default: false
release_images:
type: boolean
description: Build and push images to DockerHub and ACR
required: false
default: false
# Use env variables to control all the release process.
env:
CARGO_PROFILE: nightly
# Controls whether to run tests, including unit tests, integration tests and sqlness.
DISABLE_RUN_TESTS: ${{ inputs.skip_test || vars.DEFAULT_SKIP_TEST }}
# Always use 'nightly' to indicate it's the nightly build.
NEXT_RELEASE_VERSION: nightly
NIGHTLY_RELEASE_PREFIX: nightly
jobs:
allocate-runners:
name: Allocate runners
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-20.04
outputs:
linux-amd64-runner: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-arm64-runner: ${{ steps.start-linux-arm64-runner.outputs.label }}
# The following EC2 resource IDs will be used to release the runners.
linux-amd64-ec2-runner-label: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-amd64-ec2-runner-instance-id: ${{ steps.start-linux-amd64-runner.outputs.ec2-instance-id }}
linux-arm64-ec2-runner-label: ${{ steps.start-linux-arm64-runner.outputs.label }}
linux-arm64-ec2-runner-instance-id: ${{ steps.start-linux-arm64-runner.outputs.ec2-instance-id }}
# The 'version' is used as the global tag name of the release workflow.
version: ${{ steps.create-version.outputs.version }}
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Create version
id: create-version
run: |
version=$(./.github/scripts/create-version.sh) && \
echo $version && \
echo "version=$version" >> $GITHUB_OUTPUT
env:
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_REF_NAME: ${{ github.ref_name }}
NEXT_RELEASE_VERSION: ${{ env.NEXT_RELEASE_VERSION }}
NIGHTLY_RELEASE_PREFIX: ${{ env.NIGHTLY_RELEASE_PREFIX }}
- name: Allocate linux-amd64 runner
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
id: start-linux-amd64-runner
with:
runner: ${{ inputs.linux_amd64_runner || vars.DEFAULT_AMD64_RUNNER }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
image-id: ${{ vars.EC2_RUNNER_LINUX_AMD64_IMAGE_ID }}
security-group-id: ${{ vars.EC2_RUNNER_SECURITY_GROUP_ID }}
subnet-id: ${{ vars.EC2_RUNNER_SUBNET_ID }}
- name: Allocate linux-arm64 runner
if: ${{ inputs.build_linux_arm64_artifacts || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
id: start-linux-arm64-runner
with:
runner: ${{ inputs.linux_arm64_runner || vars.DEFAULT_ARM64_RUNNER }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
image-id: ${{ vars.EC2_RUNNER_LINUX_ARM64_IMAGE_ID }}
security-group-id: ${{ vars.EC2_RUNNER_SECURITY_GROUP_ID }}
subnet-id: ${{ vars.EC2_RUNNER_SUBNET_ID }}
build-linux-amd64-artifacts:
name: Build linux-amd64 artifacts
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'schedule' }}
needs: [
allocate-runners,
]
runs-on: ${{ needs.allocate-runners.outputs.linux-amd64-runner }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: ./.github/actions/build-linux-artifacts
with:
arch: amd64
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
build-linux-arm64-artifacts:
name: Build linux-arm64 artifacts
if: ${{ inputs.build_linux_arm64_artifacts || github.event_name == 'schedule' }}
needs: [
allocate-runners,
]
runs-on: ${{ needs.allocate-runners.outputs.linux-arm64-runner }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: ./.github/actions/build-linux-artifacts
with:
arch: arm64
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
release-images-to-dockerhub:
name: Build and push images to DockerHub
if: ${{ inputs.release_images || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
]
runs-on: ubuntu-20.04
outputs:
nightly-build-result: ${{ steps.set-nightly-build-result.outputs.nightly-build-result }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build and push images to dockerhub
uses: ./.github/actions/build-images
with:
image-registry: docker.io
image-namespace: ${{ vars.IMAGE_NAMESPACE }}
image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
push-latest-tag: false # Don't push the latest tag to registry.
- name: Set nightly build result
id: set-nightly-build-result
run: |
echo "nightly-build-result=success" >> $GITHUB_OUTPUT
release-cn-artifacts:
name: Release artifacts to CN region
if: ${{ inputs.release_images || github.event_name == 'schedule' }}
needs: [
allocate-runners,
release-images-to-dockerhub,
]
runs-on: ubuntu-20.04
# Pushes to ACR occasionally fail due to transient network issues.
# However, we don't want to fail the whole workflow because of this.
# ACR has a daily sync with DockerHub, so don't worry about the image not being updated.
continue-on-error: true
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Release artifacts to CN region
uses: ./.github/actions/release-cn-artifacts
with:
src-image-registry: docker.io
src-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
src-image-name: greptimedb
dst-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
dst-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
dst-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
dst-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
version: ${{ needs.allocate-runners.outputs.version }}
aws-cn-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-cn-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-cn-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-cn-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
dev-mode: false
update-version-info: false # Don't update version info in S3.
push-latest-tag: false # Don't push the latest tag to registry.
stop-linux-amd64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
name: Stop linux-amd64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-amd64-artifacts,
]
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Stop EC2 runner
uses: ./.github/actions/stop-runner
with:
label: ${{ needs.allocate-runners.outputs.linux-amd64-ec2-runner-label }}
ec2-instance-id: ${{ needs.allocate-runners.outputs.linux-amd64-ec2-runner-instance-id }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
stop-linux-arm64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
name: Stop linux-arm64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-arm64-artifacts,
]
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Stop EC2 runner
uses: ./.github/actions/stop-runner
with:
label: ${{ needs.allocate-runners.outputs.linux-arm64-ec2-runner-label }}
ec2-instance-id: ${{ needs.allocate-runners.outputs.linux-arm64-ec2-runner-instance-id }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
notification:
if: ${{ always() }} # Always run, even if dependent jobs failed.
name: Send notification to Greptime team
needs: [
release-images-to-dockerhub
]
runs-on: ubuntu-20.04
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:
- name: Notify nightly build successful result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has completed successfully."}
- name: Notify nightly build failed result
uses: slackapi/slack-github-action@v1.23.0
if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result != 'success' }}
with:
payload: |
{"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/${{ env.NEXT_RELEASE_VERSION }}-build.yml'."}

.github/workflows/nightly-ci.yml vendored Normal file

@@ -0,0 +1,98 @@
# Nightly CI: runs tests every night for our second-tier platforms (Windows)
on:
schedule:
- cron: '0 23 * * 1-5'
workflow_dispatch:
name: Nightly CI
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
RUST_TOOLCHAIN: nightly-2023-08-07
jobs:
sqlness:
name: Sqlness Test
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ windows-latest-8-cores ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v4.1.0
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run sqlness
run: cargo sqlness
- name: Notify slack if failed
if: failure()
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
with:
payload: |
{"text": "Nightly CI failed for sqlness tests"}
- name: Upload sqlness logs
if: always()
uses: actions/upload-artifact@v3
with:
name: sqlness-logs
path: ${{ runner.temp }}/greptime-*.log
retention-days: 3
test-on-windows:
runs-on: windows-latest-8-cores
timeout-minutes: 60
steps:
- run: git config --global core.autocrlf false
- uses: actions/checkout@v4.1.0
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install Cargo Nextest
uses: taiki-e/install-action@nextest
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install PyArrow Package
run: pip install pyarrow
- name: Install WSL distribution
uses: Vampire/setup-wsl@v2
with:
distribution: Ubuntu-22.04
- name: Running tests
run: cargo nextest run -F pyo3_backend,dashboard
env:
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
GT_S3_REGION: ${{ secrets.S3_REGION }}
UNITTEST_LOG_DIR: "__unittest_logs"
- name: Notify slack if failed
if: failure()
uses: slackapi/slack-github-action@v1.23.0
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
with:
payload: |
{"text": "Nightly CI failed for cargo test"}


@@ -10,7 +10,7 @@ on:
jobs:
check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: thehanimo/pr-title-checker@v1.3.4
@@ -19,7 +19,7 @@ jobs:
pass_on_octokit_error: false
configuration_path: ".github/pr-title-checker-config.json"
breaking:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: thehanimo/pr-title-checker@v1.3.4


@@ -0,0 +1,85 @@
name: Release dev-builder images
on:
workflow_dispatch: # Allows you to run this workflow manually.
inputs:
version:
description: Version of the dev-builder
required: false
default: latest
release_dev_builder_ubuntu_image:
type: boolean
description: Release dev-builder-ubuntu image
required: false
default: false
release_dev_builder_centos_image:
type: boolean
description: Release dev-builder-centos image
required: false
default: false
release_dev_builder_android_image:
type: boolean
description: Release dev-builder-android image
required: false
default: false
jobs:
release-dev-builder-images:
name: Release dev builder images
if: ${{ inputs.release_dev_builder_ubuntu_image || inputs.release_dev_builder_centos_image || inputs.release_dev_builder_android_image }} # Only manually trigger this job.
runs-on: ubuntu-20.04-16-cores
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build and push dev builder images
uses: ./.github/actions/build-dev-builder-images
with:
version: ${{ inputs.version }}
dockerhub-image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
dockerhub-image-registry-token: ${{ secrets.DOCKERHUB_TOKEN }}
build-dev-builder-ubuntu: ${{ inputs.release_dev_builder_ubuntu_image }}
build-dev-builder-centos: ${{ inputs.release_dev_builder_centos_image }}
build-dev-builder-android: ${{ inputs.release_dev_builder_android_image }}
release-dev-builder-images-cn: # Note: Beware of issue https://github.com/containers/skopeo/issues/1874; we decided to use the latest stable skopeo container.
name: Release dev builder images to CN region
runs-on: ubuntu-20.04
needs: [
release-dev-builder-images
]
steps:
- name: Push dev-builder-ubuntu image
shell: bash
if: ${{ inputs.release_dev_builder_ubuntu_image }}
env:
DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
run: |
docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ inputs.version }} \
--dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ inputs.version }}
- name: Push dev-builder-centos image
shell: bash
if: ${{ inputs.release_dev_builder_centos_image }}
env:
DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
run: |
docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ inputs.version }} \
--dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ inputs.version }}
- name: Push dev-builder-android image
shell: bash
if: ${{ inputs.release_dev_builder_android_image }}
env:
DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
run: |
docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ inputs.version }} \
--dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ inputs.version }}
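The three push steps repeat the same skopeo invocation with only the image name varying; outside of GitHub Actions the same copies could be scripted as a loop (a sketch; the registry and namespace values are illustrative):
for img in dev-builder-ubuntu dev-builder-centos dev-builder-android; do
  docker run quay.io/skopeo/stable:latest copy -a \
    docker://docker.io/greptime/"$img":latest \
    --dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
    docker://greptime-registry.cn-hangzhou.cr.aliyuncs.com/greptime/"$img":latest
done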


@@ -1,3 +1,8 @@
name: Release
# There are two kinds of formal release:
# 1. The tag ('v*.*.*') push release: the release workflow will be triggered by the tag push event.
# 2. The scheduled release (the version will be '${{ env.NEXT_RELEASE_VERSION }}-nightly-YYYYMMDD'): the release workflow will be triggered by the schedule event.
on:
push:
tags:
@@ -5,400 +10,406 @@ on:
schedule:
# At 00:00 on Monday.
- cron: '0 0 * * 1'
# Manually triggering it only builds binaries.
workflow_dispatch:
name: Release
workflow_dispatch: # Allows you to run this workflow manually.
# Note: GitHub Actions only supports 10 inputs, and they are already used up.
inputs:
linux_amd64_runner:
type: choice
description: The runner used to build linux-amd64 artifacts
default: ec2-c6i.4xlarge-amd64
options:
- ubuntu-20.04
- ubuntu-20.04-8-cores
- ubuntu-20.04-16-cores
- ubuntu-20.04-32-cores
- ubuntu-20.04-64-cores
- ec2-c6i.xlarge-amd64 # 4C8G
- ec2-c6i.2xlarge-amd64 # 8C16G
- ec2-c6i.4xlarge-amd64 # 16C32G
- ec2-c6i.8xlarge-amd64 # 32C64G
- ec2-c6i.16xlarge-amd64 # 64C128G
linux_arm64_runner:
type: choice
description: The runner used to build linux-arm64 artifacts
default: ec2-c6g.4xlarge-arm64
options:
- ec2-c6g.xlarge-arm64 # 4C8G
- ec2-c6g.2xlarge-arm64 # 8C16G
- ec2-c6g.4xlarge-arm64 # 16C32G
- ec2-c6g.8xlarge-arm64 # 32C64G
- ec2-c6g.16xlarge-arm64 # 64C128G
macos_runner:
type: choice
description: The runner used to build macOS artifacts
default: macos-latest
options:
- macos-latest
skip_test:
description: Do not run integration tests during the build
type: boolean
default: true
build_linux_amd64_artifacts:
type: boolean
description: Build linux-amd64 artifacts
required: false
default: false
build_linux_arm64_artifacts:
type: boolean
description: Build linux-arm64 artifacts
required: false
default: false
build_macos_artifacts:
type: boolean
description: Build macOS artifacts
required: false
default: false
build_windows_artifacts:
type: boolean
description: Build Windows artifacts
required: false
default: false
publish_github_release:
type: boolean
description: Create GitHub release and upload artifacts
required: false
default: false
release_images:
type: boolean
description: Build and push images to DockerHub and ACR
required: false
default: false
# Use env variables to control all the release process.
env:
RUST_TOOLCHAIN: nightly-2023-02-26
SCHEDULED_BUILD_VERSION_PREFIX: v0.2.0
SCHEDULED_PERIOD: nightly
# The arguments of building greptime.
RUST_TOOLCHAIN: nightly-2023-08-07
CARGO_PROFILE: nightly
## FIXME(zyy17): Enable it after the tests are stable.
DISABLE_RUN_TESTS: true
# Controls whether to run tests, including unit tests, integration tests and sqlness.
DISABLE_RUN_TESTS: ${{ inputs.skip_test || vars.DEFAULT_SKIP_TEST }}
# The scheduled version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-YYYYMMDD', like v0.2.0-nightly-20230313;
NIGHTLY_RELEASE_PREFIX: nightly
# Note: The NEXT_RELEASE_VERSION should be modified manually for every formal release.
NEXT_RELEASE_VERSION: v0.5.0
jobs:
build:
name: Build binary
allocate-runners:
name: Allocate runners
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-20.04
outputs:
linux-amd64-runner: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-arm64-runner: ${{ steps.start-linux-arm64-runner.outputs.label }}
macos-runner: ${{ inputs.macos_runner || vars.DEFAULT_MACOS_RUNNER }}
windows-runner: windows-latest-8-cores
# The following EC2 resource id will be used for resource releasing.
linux-amd64-ec2-runner-label: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-amd64-ec2-runner-instance-id: ${{ steps.start-linux-amd64-runner.outputs.ec2-instance-id }}
linux-arm64-ec2-runner-label: ${{ steps.start-linux-arm64-runner.outputs.label }}
linux-arm64-ec2-runner-instance-id: ${{ steps.start-linux-arm64-runner.outputs.ec2-instance-id }}
# The 'version' use as the global tag name of the release workflow.
version: ${{ steps.create-version.outputs.version }}
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
# The create-version step creates a global 'version' variable for the whole workflow.
# - If it's a tag push release, the version is the tag name (${{ github.ref_name }});
# - If it's a scheduled release, the version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-$buildTime', like 'v0.2.0-nightly-20230313';
# - If it's a manual release, the version is '${{ env.NEXT_RELEASE_VERSION }}-<short-git-sha>-YYYYMMDD-<epoch-seconds>', like 'v0.2.0-e5b243c-20230712-1689148800';
- name: Create version
id: create-version
run: |
echo "version=$(./.github/scripts/create-version.sh)" >> $GITHUB_OUTPUT
env:
GITHUB_EVENT_NAME: ${{ github.event_name }}
GITHUB_REF_NAME: ${{ github.ref_name }}
NEXT_RELEASE_VERSION: ${{ env.NEXT_RELEASE_VERSION }}
NIGHTLY_RELEASE_PREFIX: ${{ env.NIGHTLY_RELEASE_PREFIX }}
- name: Allocate linux-amd64 runner
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
id: start-linux-amd64-runner
with:
runner: ${{ inputs.linux_amd64_runner || vars.DEFAULT_AMD64_RUNNER }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
image-id: ${{ vars.EC2_RUNNER_LINUX_AMD64_IMAGE_ID }}
security-group-id: ${{ vars.EC2_RUNNER_SECURITY_GROUP_ID }}
subnet-id: ${{ vars.EC2_RUNNER_SUBNET_ID }}
- name: Allocate linux-arm64 runner
if: ${{ inputs.build_linux_arm64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
uses: ./.github/actions/start-runner
id: start-linux-arm64-runner
with:
runner: ${{ inputs.linux_arm64_runner || vars.DEFAULT_ARM64_RUNNER }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
image-id: ${{ vars.EC2_RUNNER_LINUX_ARM64_IMAGE_ID }}
security-group-id: ${{ vars.EC2_RUNNER_SECURITY_GROUP_ID }}
subnet-id: ${{ vars.EC2_RUNNER_SUBNET_ID }}
build-linux-amd64-artifacts:
name: Build linux-amd64 artifacts
if: ${{ inputs.build_linux_amd64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [
allocate-runners,
]
runs-on: ${{ needs.allocate-runners.outputs.linux-amd64-runner }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: ./.github/actions/build-linux-artifacts
with:
arch: amd64
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
build-linux-arm64-artifacts:
name: Build linux-arm64 artifacts
if: ${{ inputs.build_linux_arm64_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [
allocate-runners,
]
runs-on: ${{ needs.allocate-runners.outputs.linux-arm64-runner }}
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: ./.github/actions/build-linux-artifacts
with:
arch: arm64
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
build-macos-artifacts:
name: Build macOS artifacts
strategy:
fail-fast: false
matrix:
# The file format is greptime-<os>-<arch>
include:
- arch: x86_64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-amd64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-arm64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64
continue-on-error: false
opts: "-F servers/dashboard"
- arch: x86_64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores
file: greptime-linux-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- arch: aarch64-apple-darwin
os: macos-latest
file: greptime-darwin-arm64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- arch: x86_64-apple-darwin
os: macos-latest
file: greptime-darwin-amd64-pyo3
continue-on-error: false
opts: "-F pyo3_backend,servers/dashboard"
- os: ${{ needs.allocate-runners.outputs.macos-runner }}
arch: aarch64-apple-darwin
features: servers/dashboard
artifacts-dir-prefix: greptime-darwin-arm64
- os: ${{ needs.allocate-runners.outputs.macos-runner }}
arch: aarch64-apple-darwin
features: pyo3_backend,servers/dashboard
artifacts-dir-prefix: greptime-darwin-arm64-pyo3
- os: ${{ needs.allocate-runners.outputs.macos-runner }}
features: servers/dashboard
arch: x86_64-apple-darwin
artifacts-dir-prefix: greptime-darwin-amd64
- os: ${{ needs.allocate-runners.outputs.macos-runner }}
features: pyo3_backend,servers/dashboard
arch: x86_64-apple-darwin
artifacts-dir-prefix: greptime-darwin-amd64-pyo3
runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.continue-on-error }}
if: github.repository == 'GreptimeTeam/greptimedb'
needs: [
allocate-runners,
]
if: ${{ inputs.build_macos_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Cache cargo assets
id: cache
uses: actions/cache@v3
- uses: actions/checkout@v3
with:
path: |
~/.cargo/bin/
~/.cargo/registry/index/
~/.cargo/registry/cache/
~/.cargo/git/db/
target/
key: ${{ matrix.arch }}-build-cargo-${{ hashFiles('**/Cargo.lock') }}
fetch-depth: 0
- name: Install Protoc for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: | # Make sure the protoc is >= 3.15
wget https://github.com/protocolbuffers/protobuf/releases/download/v21.9/protoc-21.9-linux-x86_64.zip
unzip protoc-21.9-linux-x86_64.zip -d protoc
sudo cp protoc/bin/protoc /usr/local/bin/
sudo cp -r protoc/include/google /usr/local/include/
- name: Install Protoc for macos
if: contains(matrix.arch, 'darwin')
run: |
brew install protobuf
- name: Install etcd for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
ETCD_VER=v3.5.7
DOWNLOAD_URL=https://github.com/etcd-io/etcd/releases/download
curl -L ${DOWNLOAD_URL}/${ETCD_VER}/etcd-${ETCD_VER}-linux-amd64.tar.gz -o /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
mkdir -p /tmp/etcd-download
tar xzvf /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz -C /tmp/etcd-download --strip-components=1
rm -f /tmp/etcd-${ETCD_VER}-linux-amd64.tar.gz
sudo cp -a /tmp/etcd-download/etcd* /usr/local/bin/
nohup etcd >/tmp/etcd.log 2>&1 &
- name: Install etcd for macos
if: contains(matrix.arch, 'darwin')
run: |
brew install etcd
brew services start etcd
- name: Install dependencies for linux
if: contains(matrix.arch, 'linux') && endsWith(matrix.arch, '-gnu')
run: |
sudo apt-get -y update
sudo apt-get -y install libssl-dev pkg-config g++-aarch64-linux-gnu gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu wget
# FIXME(zyy17): Should we specify the version of python when building binary for darwin?
- name: Compile Python 3.10.10 from source for linux
if: contains(matrix.arch, 'linux') && contains(matrix.opts, 'pyo3_backend')
run: |
sudo chmod +x ./docker/aarch64/compile-python.sh
sudo ./docker/aarch64/compile-python.sh ${{ matrix.arch }}
- name: Install rust toolchain
uses: dtolnay/rust-toolchain@master
- uses: ./.github/actions/build-macos-artifacts
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
targets: ${{ matrix.arch }}
arch: ${{ matrix.arch }}
rust-toolchain: ${{ env.RUST_TOOLCHAIN }}
cargo-profile: ${{ env.CARGO_PROFILE }}
features: ${{ matrix.features }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
artifacts-dir: ${{ matrix.artifacts-dir-prefix }}-${{ needs.allocate-runners.outputs.version }}
- name: Output package versions
run: protoc --version ; cargo version ; rustc --version ; gcc --version ; g++ --version
- name: Run tests
if: env.DISABLE_RUN_TESTS == 'false'
run: make unit-test integration-test sqlness-test
- name: Run cargo build with pyo3 for aarch64-linux
if: contains(matrix.arch, 'aarch64-unknown-linux-gnu') && contains(matrix.opts, 'pyo3_backend')
run: |
# TODO(zyy17): We should make PYO3_CROSS_LIB_DIR configurable.
export PYTHON_INSTALL_PATH_AMD64=${PWD}/python-3.10.10/amd64
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
export PYO3_CROSS_LIB_DIR=${PWD}/python-3.10.10/aarch64
echo "PYO3_CROSS_LIB_DIR: $PYO3_CROSS_LIB_DIR"
alias python=$PYTHON_INSTALL_PATH_AMD64/bin/python3
alias pip=$PYTHON_INSTALL_PATH_AMD64/bin/python3-pip
cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Run cargo build with pyo3 for amd64-linux
if: contains(matrix.arch, 'x86_64-unknown-linux-gnu') && contains(matrix.opts, 'pyo3_backend')
run: |
export PYTHON_INSTALL_PATH_AMD64=${PWD}/python-3.10.10/amd64
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
echo "implementation=CPython" >> pyo3.config
echo "version=3.10" >> pyo3.config
echo "implementation=CPython" >> pyo3.config
echo "shared=true" >> pyo3.config
echo "abi3=true" >> pyo3.config
echo "lib_name=python3.10" >> pyo3.config
echo "lib_dir=$PYTHON_INSTALL_PATH_AMD64/lib" >> pyo3.config
echo "executable=$PYTHON_INSTALL_PATH_AMD64/bin/python3" >> pyo3.config
echo "pointer_width=64" >> pyo3.config
echo "build_flags=" >> pyo3.config
echo "suppress_build_script_link_lines=false" >> pyo3.config
cat pyo3.config
export PYO3_CONFIG_FILE=${PWD}/pyo3.config
alias python=$PYTHON_INSTALL_PATH_AMD64/bin/python3
alias pip=$PYTHON_INSTALL_PATH_AMD64/bin/python3-pip
cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Run cargo build
if: contains(matrix.arch, 'darwin') || contains(matrix.opts, 'pyo3_backend') == false
run: cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Calculate checksum and rename binary
shell: bash
run: |
cd target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}
chmod +x greptime
tar -zcvf ${{ matrix.file }}.tgz greptime
echo $(shasum -a 256 ${{ matrix.file }}.tgz | cut -f1 -d' ') > ${{ matrix.file }}.sha256sum
- name: Upload artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.file }}
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.tgz
- name: Upload checksum of artifacts
uses: actions/upload-artifact@v3
with:
name: ${{ matrix.file }}.sha256sum
path: target/${{ matrix.arch }}/${{ env.CARGO_PROFILE }}/${{ matrix.file }}.sha256sum
docker:
name: Build docker image
needs: [build]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
build-windows-artifacts:
name: Build Windows artifacts
strategy:
fail-fast: false
matrix:
include:
- os: ${{ needs.allocate-runners.outputs.windows-runner }}
arch: x86_64-pc-windows-msvc
features: servers/dashboard
artifacts-dir-prefix: greptime-windows-amd64
- os: ${{ needs.allocate-runners.outputs.windows-runner }}
arch: x86_64-pc-windows-msvc
features: pyo3_backend,servers/dashboard
artifacts-dir-prefix: greptime-windows-amd64-pyo3
runs-on: ${{ matrix.os }}
needs: [
allocate-runners,
]
if: ${{ inputs.build_windows_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
steps:
- name: Checkout sources
uses: actions/checkout@v3
- run: git config --global core.autocrlf false
- name: Checkout sources
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Login to Dockerhub
uses: docker/login-action@v2
with:
username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
shell: bash
if: github.event_name == 'schedule'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }}
echo "IMAGE_TAG=${SCHEDULED_BUILD_VERSION:1}" >> $GITHUB_ENV
- name: Configure tag # If the release tag is v0.1.0, then the image version tag will be 0.1.0.
shell: bash
if: github.event_name != 'schedule'
run: |
VERSION=${{ github.ref_name }}
echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up buildx
uses: docker/setup-buildx-action@v2
- name: Download amd64 binary
uses: actions/download-artifact@v3
with:
name: greptime-linux-amd64-pyo3
path: amd64
- name: Build Windows artifacts
uses: ./.github/actions/build-windows-artifacts
with:
arch: ${{ matrix.arch }}
rust-toolchain: ${{ env.RUST_TOOLCHAIN }}
cargo-profile: ${{ env.CARGO_PROFILE }}
features: ${{ matrix.features }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
artifacts-dir: ${{ matrix.artifacts-dir-prefix }}-${{ needs.allocate-runners.outputs.version }}
- name: Unzip the amd64 artifacts
run: |
tar xvf amd64/greptime-linux-amd64-pyo3.tgz -C amd64/ && rm amd64/greptime-linux-amd64-pyo3.tgz
cp -r amd64 docker/ci
- name: Download arm64 binary
id: download-arm64
uses: actions/download-artifact@v3
with:
name: greptime-linux-arm64-pyo3
path: arm64
- name: Unzip the arm64 artifacts
id: unzip-arm64
if: success() || steps.download-arm64.conclusion == 'success'
run: |
tar xvf arm64/greptime-linux-arm64-pyo3.tgz -C arm64/ && rm arm64/greptime-linux-arm64-pyo3.tgz
cp -r arm64 docker/ci
- name: Build and push all
uses: docker/build-push-action@v3
if: success() || steps.unzip-arm64.conclusion == 'success' # Build and push all platforms if unzip-arm64 succeeds
with:
context: ./docker/ci/
file: ./docker/ci/Dockerfile
push: true
platforms: linux/amd64,linux/arm64
tags: |
greptime/greptimedb:latest
greptime/greptimedb:${{ env.IMAGE_TAG }}
- name: Build and push amd64 only
uses: docker/build-push-action@v3
if: success() || steps.download-arm64.conclusion == 'failure' # Only build and push amd64 platform if download-arm64 fails
with:
context: ./docker/ci/
file: ./docker/ci/Dockerfile
push: true
platforms: linux/amd64
tags: |
greptime/greptimedb:latest
greptime/greptimedb:${{ env.IMAGE_TAG }}
release:
name: Release artifacts
# Release artifacts only when all the artifacts are built successfully.
needs: [build,docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
release-images-to-dockerhub:
name: Build and push images to DockerHub
if: ${{ inputs.release_images || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
]
runs-on: ubuntu-2004-16-cores
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Download artifacts
uses: actions/download-artifact@v3
- name: Configure scheduled build version # the version would be ${SCHEDULED_BUILD_VERSION_PREFIX}-${SCHEDULED_PERIOD}-YYYYMMDD, like v0.2.0-nightly-20230313.
shell: bash
if: github.event_name == 'schedule'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-${{ env.SCHEDULED_PERIOD }}-$buildTime
echo "SCHEDULED_BUILD_VERSION=${SCHEDULED_BUILD_VERSION}" >> $GITHUB_ENV
# Only publish release when the release tag is like v1.0.0, v1.0.1, v1.0.2, etc.
- name: Set whether it is the latest release
run: |
if [[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "prerelease=false" >> $GITHUB_ENV
echo "makeLatest=true" >> $GITHUB_ENV
else
echo "prerelease=true" >> $GITHUB_ENV
echo "makeLatest=false" >> $GITHUB_ENV
fi
- name: Create scheduled build git tag
if: github.event_name == 'schedule'
run: |
git tag ${{ env.SCHEDULED_BUILD_VERSION }}
- name: Publish scheduled release # configure the different release title and tags.
uses: ncipollo/release-action@v1
if: github.event_name == 'schedule'
with:
name: "Release ${{ env.SCHEDULED_BUILD_VERSION }}"
prerelease: ${{ env.prerelease }}
makeLatest: ${{ env.makeLatest }}
tag: ${{ env.SCHEDULED_BUILD_VERSION }}
generateReleaseNotes: true
artifacts: |
**/greptime-*
- name: Publish release
uses: ncipollo/release-action@v1
if: github.event_name != 'schedule'
with:
name: "${{ github.ref_name }}"
prerelease: ${{ env.prerelease }}
makeLatest: ${{ env.makeLatest }}
generateReleaseNotes: true
artifacts: |
**/greptime-*
- name: Build and push images to dockerhub
uses: ./.github/actions/build-images
with:
image-registry: docker.io
image-namespace: ${{ vars.IMAGE_NAMESPACE }}
image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
docker-push-acr:
name: Push docker image to alibaba cloud container registry
needs: [docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
steps:
- name: Push image to alibaba cloud container registry # Use 'docker buildx imagetools create' to create a new image based on the source image.
run: |
docker buildx imagetools create \
--tag registry.cn-hangzhou.aliyuncs.com/greptime/greptimedb:latest \
--tag registry.cn-hangzhou.aliyuncs.com/greptime/greptimedb:${{ env.IMAGE_TAG }} \
greptime/greptimedb:${{ env.IMAGE_TAG }}
release-cn-artifacts:
name: Release artifacts to CN region
if: ${{ inputs.release_images || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [ # This job has to wait until all the artifacts are built.
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
build-macos-artifacts,
build-windows-artifacts,
release-images-to-dockerhub,
]
runs-on: ubuntu-20.04
# Pushing to ACR occasionally fails due to transient network issues,
# and we don't want such failures to fail the whole workflow.
# ACR syncs with DockerHub daily, so a missed push will still be picked up.
continue-on-error: true
steps:
- name: Checkout sources
uses: actions/checkout@v3
- name: Set up QEMU
uses: docker/setup-qemu-action@v2
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to alibaba cloud container registry
uses: docker/login-action@v2
with:
registry: registry.cn-hangzhou.aliyuncs.com
username: ${{ secrets.ALICLOUD_USERNAME }}
password: ${{ secrets.ALICLOUD_PASSWORD }}
- name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
shell: bash
if: github.event_name == 'schedule'
run: |
buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }}
echo "IMAGE_TAG=${SCHEDULED_BUILD_VERSION:1}" >> $GITHUB_ENV
- name: Configure tag # If the release tag is v0.1.0, then the image version tag will be 0.1.0.
shell: bash
if: github.event_name != 'schedule'
run: |
VERSION=${{ github.ref_name }}
echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV
- name: Release artifacts to CN region
uses: ./.github/actions/release-cn-artifacts
with:
src-image-registry: docker.io
src-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
src-image-name: greptimedb
dst-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
dst-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
dst-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
dst-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
version: ${{ needs.allocate-runners.outputs.version }}
aws-cn-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-cn-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-cn-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-cn-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
dev-mode: false
update-version-info: true
push-latest-tag: true
publish-github-release:
name: Create GitHub release and upload artifacts
if: ${{ inputs.publish_github_release || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [ # This job has to wait until all the artifacts are built.
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
build-macos-artifacts,
build-windows-artifacts,
release-images-to-dockerhub,
]
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Publish GitHub release
uses: ./.github/actions/publish-github-release
with:
version: ${{ needs.allocate-runners.outputs.version }}
### Stop runners ###
# We split releasing the runners into 'stop-linux-amd64-runner' and 'stop-linux-arm64-runner'
# so each EC2 instance can be terminated immediately after its job finishes, without unnecessary waiting.
stop-linux-amd64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
name: Stop linux-amd64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-amd64-artifacts,
]
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Stop EC2 runner
uses: ./.github/actions/stop-runner
with:
label: ${{ needs.allocate-runners.outputs.linux-amd64-ec2-runner-label }}
ec2-instance-id: ${{ needs.allocate-runners.outputs.linux-amd64-ec2-runner-instance-id }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
stop-linux-arm64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
name: Stop linux-arm64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-arm64-artifacts,
]
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Stop EC2 runner
uses: ./.github/actions/stop-runner
with:
label: ${{ needs.allocate-runners.outputs.linux-arm64-ec2-runner-label }}
ec2-instance-id: ${{ needs.allocate-runners.outputs.linux-arm64-ec2-runner-instance-id }}
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.EC2_RUNNER_REGION }}
github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
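A note on the checksum step in this workflow: the generated `.sha256sum` file contains only the bare digest, no filename, so a plain `shasum -c` cannot consume it directly. A minimal verification sketch for a downloaded artifact (the artifact name below is hypothetical):

```bash
# Rebuild the "<digest>  <filename>" line that `shasum -c` expects,
# since the .sha256sum file holds only the digest.
FILE=greptime-darwin-amd64-v0.4.2.tgz   # hypothetical artifact name
echo "$(cat ${FILE}.sha256sum)  ${FILE}" | shasum -a 256 -c -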

.github/workflows/size-label.yml (new file)

@@ -0,0 +1,26 @@
name: size-labeler
on: [pull_request]
jobs:
labeler:
runs-on: ubuntu-latest
name: Label the PR size
steps:
- uses: codelytv/pr-size-labeler@v1
with:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
s_label: 'Size: S'
s_max_size: '100'
m_label: 'Size: M'
m_max_size: '500'
l_label: 'Size: L'
l_max_size: '1000'
xl_label: 'Size: XL'
fail_if_xl: 'false'
message_if_xl: >
This PR exceeds the recommended size of 1000 lines.
Please make sure you are NOT addressing multiple issues with one PR.
Note this PR might be rejected due to its size.
github_api_url: 'api.github.com'
files_to_ignore: 'Cargo.lock'

.gitignore

@@ -1,6 +1,8 @@
# Generated by Cargo
# will have compiled files and executables
/target/
# also ignore if it's a symbolic link
/target
# Remove Cargo.lock from gitignore if creating an executable, leave it for libraries
# More information here https://doc.rust-lang.org/cargo/guide/cargo-toml-vs-cargo-lock.html
@@ -39,3 +41,8 @@ benchmarks/data
# dashboard files
!/src/servers/dashboard/VERSION
/src/servers/dashboard/*
# Vscode workspace
*.code-workspace
venv/


@@ -107,6 +107,6 @@ The core team will be thrilled if you participate in any way you like. When you
Also, see some extra GreptimeDB content:
- [GreptimeDB Docs](https://greptime.com/docs)
- [Learn GreptimeDB](https://greptime.com/products/db)
- [GreptimeDB Docs](https://docs.greptime.com/)
- [Learn GreptimeDB](https://greptime.com/product/db)
- [Greptime Inc. Website](https://greptime.com)

Cargo.lock (generated; diff suppressed because it is too large)

Cargo.toml

@@ -2,19 +2,24 @@
members = [
"benchmarks",
"src/api",
"src/auth",
"src/catalog",
"src/client",
"src/cmd",
"src/common/base",
"src/common/catalog",
"src/common/config",
"src/common/datasource",
"src/common/error",
"src/common/function",
"src/common/function-macro",
"src/common/macro",
"src/common/greptimedb-telemetry",
"src/common/grpc",
"src/common/grpc-expr",
"src/common/mem-prof",
"src/common/meta",
"src/common/procedure",
"src/common/procedure-test",
"src/common/query",
"src/common/recordbatch",
"src/common/runtime",
@@ -22,16 +27,19 @@ members = [
"src/common/telemetry",
"src/common/test-util",
"src/common/time",
"src/common/version",
"src/datanode",
"src/datatypes",
"src/file-table-engine",
"src/file-engine",
"src/frontend",
"src/log-store",
"src/meta-client",
"src/meta-srv",
"src/mito",
"src/mito2",
"src/object-store",
"src/operator",
"src/partition",
"src/plugins",
"src/promql",
"src/query",
"src/script",
@@ -41,48 +49,125 @@ members = [
"src/storage",
"src/store-api",
"src/table",
"src/table-procedure",
"tests-integration",
"tests/runner",
]
resolver = "2"
[workspace.package]
version = "0.2.0"
version = "0.4.2"
edition = "2021"
license = "Apache-2.0"
[workspace.dependencies]
arrow = { version = "37.0" }
arrow-array = "37.0"
arrow-flight = "37.0"
arrow-schema = { version = "37.0", features = ["serde"] }
aquamarine = "0.3"
arrow = { version = "43.0" }
arrow-array = "43.0"
arrow-flight = "43.0"
arrow-schema = { version = "43.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] }
# TODO(ruihang): use arrow-datafusion when it contains https://github.com/apache/arrow-datafusion/pull/6032
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
derive_builder = "0.12"
etcd-client = "0.11"
futures = "0.3"
futures-util = "0.3"
parquet = "37.0"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "1f1dd532a111e3834cc3019c5605e2993ffb9dc3" }
humantime-serde = "1.1"
itertools = "0.10"
lazy_static = "1.4"
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "abbd357c1e193cd270ea65ee7652334a150b628f" }
metrics = "0.20"
moka = "0.12"
once_cell = "1.18"
opentelemetry-proto = { version = "0.2", features = ["gen-tonic", "metrics", "traces"] }
parquet = "43.0"
paste = "1.0"
prost = "0.11"
raft-engine = { git = "https://github.com/tikv/raft-engine.git", rev = "22dfb426cd994602b57725ef080287d3e53db479" }
rand = "0.8"
regex = "1.8"
reqwest = { version = "0.11", default-features = false, features = [
"json",
"rustls-tls-native-roots",
"stream",
] }
serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
smallvec = "1"
snafu = { version = "0.7", features = ["backtraces"] }
sqlparser = "0.33"
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "6cf9d23d5b8fbecd65efc1d9afb7e80ad7a424da", features = [
"visitor",
] }
strum = { version = "0.25", features = ["derive"] }
tempfile = "3"
tokio = { version = "1.24.2", features = ["full"] }
tokio-util = { version = "0.7", features = ["io-util"] }
tokio = { version = "1.28", features = ["full"] }
tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.7"
tonic = { version = "0.9", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
metrics = "0.20"
## workspaces members
api = { path = "src/api" }
auth = { path = "src/auth" }
catalog = { path = "src/catalog" }
client = { path = "src/client" }
cmd = { path = "src/cmd" }
common-base = { path = "src/common/base" }
common-catalog = { path = "src/common/catalog" }
common-config = { path = "src/common/config" }
common-datasource = { path = "src/common/datasource" }
common-error = { path = "src/common/error" }
common-function = { path = "src/common/function" }
common-greptimedb-telemetry = { path = "src/common/greptimedb-telemetry" }
common-grpc = { path = "src/common/grpc" }
common-grpc-expr = { path = "src/common/grpc-expr" }
common-macro = { path = "src/common/macro" }
common-mem-prof = { path = "src/common/mem-prof" }
common-meta = { path = "src/common/meta" }
common-pprof = { path = "src/common/pprof" }
common-procedure = { path = "src/common/procedure" }
common-procedure-test = { path = "src/common/procedure-test" }
common-query = { path = "src/common/query" }
common-recordbatch = { path = "src/common/recordbatch" }
common-runtime = { path = "src/common/runtime" }
common-telemetry = { path = "src/common/telemetry" }
common-test-util = { path = "src/common/test-util" }
common-time = { path = "src/common/time" }
common-version = { path = "src/common/version" }
datanode = { path = "src/datanode" }
datatypes = { path = "src/datatypes" }
file-engine = { path = "src/file-engine" }
frontend = { path = "src/frontend" }
log-store = { path = "src/log-store" }
meta-client = { path = "src/meta-client" }
meta-srv = { path = "src/meta-srv" }
mito = { path = "src/mito" }
mito2 = { path = "src/mito2" }
object-store = { path = "src/object-store" }
operator = { path = "src/operator" }
partition = { path = "src/partition" }
plugins = { path = "src/plugins" }
promql = { path = "src/promql" }
query = { path = "src/query" }
script = { path = "src/script" }
servers = { path = "src/servers" }
session = { path = "src/session" }
sql = { path = "src/sql" }
storage = { path = "src/storage" }
store-api = { path = "src/store-api" }
substrait = { path = "src/common/substrait" }
table = { path = "src/table" }
[workspace.dependencies.meter-macros]
git = "https://github.com/GreptimeTeam/greptime-meter.git"
rev = "abbd357c1e193cd270ea65ee7652334a150b628f"
[profile.release]
debug = true
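Because arrow, parquet, and the pinned datafusion revision must move in lockstep, a quick way to confirm the bump to 43.0 left no stale copies in the dependency graph (standard cargo subcommands, run from the workspace root):

```bash
cargo tree -d          # list crates that appear at more than one version
cargo tree -i arrow    # show which dependents pull in `arrow`, and at which version
```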

Cross.toml (new file)

@@ -0,0 +1,7 @@
[build]
pre-build = [
"dpkg --add-architecture $CROSS_DEB_ARCH",
"apt update && apt install -y unzip zlib1g-dev zlib1g-dev:$CROSS_DEB_ARCH",
"curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v3.15.8/protoc-3.15.8-linux-x86_64.zip && unzip protoc-3.15.8-linux-x86_64.zip -d /usr/",
"chmod a+x /usr/bin/protoc && chmod -R a+rx /usr/include/google",
]
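This `Cross.toml` provisions `protoc` and the cross-architecture zlib inside the build container. A typical invocation, assuming the `cross` tool is installed locally:

```bash
cargo install cross --locked                              # one-time setup
cross build --release --target aarch64-unknown-linux-gnu  # picks up Cross.toml automatically
```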

Makefile

@@ -1,15 +1,104 @@
IMAGE_REGISTRY ?= greptimedb
# The arguments for building images.
CARGO_PROFILE ?=
FEATURES ?=
TARGET_DIR ?=
TARGET ?=
CARGO_BUILD_OPTS := --locked
IMAGE_REGISTRY ?= docker.io
IMAGE_NAMESPACE ?= greptime
IMAGE_TAG ?= latest
BUILDX_MULTI_PLATFORM_BUILD ?= false
BUILDX_BUILDER_NAME ?= gtbuilder
BASE_IMAGE ?= ubuntu
RUST_TOOLCHAIN ?= $(shell cat rust-toolchain.toml | grep channel | cut -d'"' -f2)
CARGO_REGISTRY_CACHE ?= ${HOME}/.cargo/registry
ARCH := $(shell uname -m | sed 's/x86_64/amd64/' | sed 's/aarch64/arm64/')
OUTPUT_DIR := $(shell if [ "$(RELEASE)" = "true" ]; then echo "release"; elif [ ! -z "$(CARGO_PROFILE)" ]; then echo "$(CARGO_PROFILE)" ; else echo "debug"; fi)
# The arguments for running integration tests.
ETCD_VERSION ?= v3.5.9
ETCD_IMAGE ?= quay.io/coreos/etcd:${ETCD_VERSION}
RETRY_COUNT ?= 3
NEXTEST_OPTS := --retries ${RETRY_COUNT}
BUILD_JOBS ?= $(shell which nproc 1>/dev/null && expr $$(nproc) / 2) # If nproc is not available, we don't set the build jobs.
ifeq ($(BUILD_JOBS), 0) # If the number of cores is less than 2, set the build jobs to 1.
BUILD_JOBS := 1
endif
ifneq ($(strip $(BUILD_JOBS)),)
NEXTEST_OPTS += --build-jobs=${BUILD_JOBS}
endif
ifneq ($(strip $(CARGO_PROFILE)),)
CARGO_BUILD_OPTS += --profile ${CARGO_PROFILE}
endif
ifneq ($(strip $(FEATURES)),)
CARGO_BUILD_OPTS += --features ${FEATURES}
endif
ifneq ($(strip $(TARGET_DIR)),)
CARGO_BUILD_OPTS += --target-dir ${TARGET_DIR}
endif
ifneq ($(strip $(TARGET)),)
CARGO_BUILD_OPTS += --target ${TARGET}
endif
ifneq ($(strip $(RELEASE)),)
CARGO_BUILD_OPTS += --release
endif
ifeq ($(BUILDX_MULTI_PLATFORM_BUILD), true)
BUILDX_MULTI_PLATFORM_BUILD_OPTS := --platform linux/amd64,linux/arm64 --push
else
BUILDX_MULTI_PLATFORM_BUILD_OPTS := -o type=docker
endif
ifneq ($(strip $(CARGO_BUILD_EXTRA_OPTS)),)
CARGO_BUILD_OPTS += ${CARGO_BUILD_EXTRA_OPTS}
endif
##@ Build
.PHONY: build
build: ## Build debug version greptime.
cargo build
cargo ${CARGO_EXTENSION} build ${CARGO_BUILD_OPTS}
.PHONY: release
release: ## Build release version greptime.
cargo build --release
.PHONY: build-by-dev-builder
build-by-dev-builder: ## Build greptime by dev-builder.
docker run --network=host \
-v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry \
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:latest \
make build \
CARGO_EXTENSION="${CARGO_EXTENSION}" \
CARGO_PROFILE=${CARGO_PROFILE} \
FEATURES=${FEATURES} \
TARGET_DIR=${TARGET_DIR} \
TARGET=${TARGET} \
RELEASE=${RELEASE} \
CARGO_BUILD_EXTRA_OPTS="${CARGO_BUILD_EXTRA_OPTS}"
.PHONY: build-android-bin
build-android-bin: ## Build greptime binary for android.
docker run --network=host \
-v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry \
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-android:latest \
make build \
CARGO_EXTENSION="ndk --platform 23 -t aarch64-linux-android" \
CARGO_PROFILE=release \
FEATURES="${FEATURES}" \
TARGET_DIR="${TARGET_DIR}" \
TARGET="${TARGET}" \
RELEASE="${RELEASE}" \
CARGO_BUILD_EXTRA_OPTS="--bin greptime --no-default-features"
.PHONY: strip-android-bin
strip-android-bin: build-android-bin ## Strip greptime binary for android.
docker run --network=host \
-v ${PWD}:/greptimedb \
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-android:latest \
bash -c '$${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip /greptimedb/target/aarch64-linux-android/release/greptime'
.PHONY: clean
clean: ## Clean the project.
@@ -21,25 +110,46 @@ fmt: ## Format all the Rust code.
.PHONY: fmt-toml
fmt-toml: ## Format all TOML files.
taplo format --option "indent_string= "
taplo format
.PHONY: check-toml
check-toml: ## Check all TOML files.
taplo format --check --option "indent_string= "
taplo format --check
.PHONY: docker-image
docker-image: ## Build docker image.
docker build --network host -f docker/Dockerfile -t ${IMAGE_REGISTRY}:${IMAGE_TAG} .
docker-image: build-by-dev-builder ## Build docker image.
mkdir -p ${ARCH} && \
cp ./target/${OUTPUT_DIR}/greptime ${ARCH}/greptime && \
docker build -f docker/ci/${BASE_IMAGE}/Dockerfile -t ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/greptimedb:${IMAGE_TAG} . && \
rm -r ${ARCH}
.PHONY: docker-image-buildx
docker-image-buildx: multi-platform-buildx ## Build docker image by buildx.
docker buildx build --builder ${BUILDX_BUILDER_NAME} \
--build-arg="CARGO_PROFILE=${CARGO_PROFILE}" \
--build-arg="FEATURES=${FEATURES}" \
--build-arg="OUTPUT_DIR=${OUTPUT_DIR}" \
-f docker/buildx/${BASE_IMAGE}/Dockerfile \
-t ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/greptimedb:${IMAGE_TAG} ${BUILDX_MULTI_PLATFORM_BUILD_OPTS} .
.PHONY: dev-builder
dev-builder: multi-platform-buildx ## Build dev-builder image.
docker buildx build --builder ${BUILDX_BUILDER_NAME} \
--build-arg="RUST_TOOLCHAIN=${RUST_TOOLCHAIN}" \
-f docker/dev-builder/${BASE_IMAGE}/Dockerfile \
-t ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:${IMAGE_TAG} ${BUILDX_MULTI_PLATFORM_BUILD_OPTS} .
.PHONY: multi-platform-buildx
multi-platform-buildx: ## Create buildx multi-platform builder.
docker buildx inspect ${BUILDX_BUILDER_NAME} || docker buildx create --name ${BUILDX_BUILDER_NAME} --driver docker-container --bootstrap --use
##@ Test
test: nextest ## Run unit and integration tests.
cargo nextest run ${NEXTEST_OPTS}
.PHONY: unit-test
unit-test: ## Run unit test.
cargo test --workspace
.PHONY: integration-test
integration-test: ## Run integration test.
cargo test integration
.PHONY: nextest
nextest: ## Install nextest tools.
cargo --list | grep nextest || cargo install cargo-nextest --locked
.PHONY: sqlness-test
sqlness-test: ## Run sqlness test.
@@ -51,12 +161,27 @@ check: ## Cargo check all the targets.
.PHONY: clippy
clippy: ## Check clippy rules.
cargo clippy --workspace --all-targets -- -D warnings
cargo clippy --workspace --all-targets -F pyo3_backend -- -D warnings
.PHONY: fmt-check
fmt-check: ## Check code format.
cargo fmt --all -- --check
.PHONY: start-etcd
start-etcd: ## Start single node etcd for testing purpose.
docker run --rm -d --network=host -p 2379-2380:2379-2380 ${ETCD_IMAGE}
.PHONY: stop-etcd
stop-etcd: ## Stop single node etcd for testing purpose.
docker stop $$(docker ps -q --filter ancestor=${ETCD_IMAGE})
.PHONY: run-it-in-container
run-it-in-container: start-etcd ## Run integration tests in dev-builder.
docker run --network=host \
-v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry -v /tmp:/tmp \
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:latest \
make test sqlness-test BUILD_JOBS=${BUILD_JOBS}
##@ General
# The help target prints out all targets with their descriptions organized
@@ -72,4 +197,4 @@ fmt-check: ## Check code format.
.PHONY: help
help: ## Display help messages.
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-20s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
@awk 'BEGIN {FS = ":.*##"; printf "\nUsage:\n make \033[36m<target>\033[0m\n"} /^[a-zA-Z_0-9-]+:.*?##/ { printf " \033[36m%-30s\033[0m %s\n", $$1, $$2 } /^##@/ { printf "\n\033[1m%s\033[0m\n", substr($$0, 5) } ' $(MAKEFILE_LIST)
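Taken together, the new targets support a containerized workflow roughly like the following (a sketch; assumes Docker with buildx is available locally):

```bash
make dev-builder                                  # build the dev-builder image once
make build-by-dev-builder CARGO_PROFILE=release   # compile greptime inside it
make docker-image                                 # package the resulting binary into an image
make run-it-in-container BUILD_JOBS=4             # run tests in the container with etcd up
```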

README.md

@@ -27,6 +27,14 @@
<a href="https://greptime.com/slack"><img src="https://img.shields.io/badge/slack-GreptimeDB-0abd59?logo=slack" alt="slack" /></a>
</p>
## Upcoming Event
Come and meet us at **KubeCon + CloudNativeCon North America 2023**!
<p align="center">
<picture>
<img alt="KubeCon + CloudNativeCon North Logo" src="./docs/banner/KCCNC_NA_2023_1000x200_Email Banner.png" width="800px">
</picture>
</p>
## What is GreptimeDB
GreptimeDB is an open-source time-series database with a special focus on
@@ -47,6 +55,10 @@ for years. Based on their best-practices, GreptimeDB is born to give you:
## Quick Start
### [GreptimePlay](https://greptime.com/playground)
Try out the features of GreptimeDB right from your browser.
### Build
#### Build from Source
@@ -92,64 +104,22 @@ Or if you built from docker:
docker run -p 4002:4002 -v "$(pwd):/tmp/greptimedb" greptime/greptimedb standalone start
```
For more startup options, greptimedb's **distributed mode** and information
about Kubernetes deployment, check our [docs](https://docs.greptime.com/).
Please see the online document site for more installation options and [operations info](https://docs.greptime.com/user-guide/operations/overview).
### Connect
### Get started
1. Connect to GreptimeDB via standard [MySQL
client](https://dev.mysql.com/downloads/mysql/):
Read the [complete getting started guide](https://docs.greptime.com/getting-started/try-out-greptimedb) on our [official document site](https://docs.greptime.com/).
```
# The standalone instance listen on port 4002 by default.
mysql -h 127.0.0.1 -P 4002
```
2. Create table:
```SQL
CREATE TABLE monitor (
host STRING,
ts TIMESTAMP,
cpu DOUBLE DEFAULT 0,
memory DOUBLE,
TIME INDEX (ts),
PRIMARY KEY(host)) ENGINE=mito WITH(regions=1);
```
3. Insert some data:
```SQL
INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host1', 66.6, 1024, 1660897955000);
INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host2', 77.7, 2048, 1660897956000);
INSERT INTO monitor(host, cpu, memory, ts) VALUES ('host3', 88.8, 4096, 1660897957000);
```
4. Query the data:
```SQL
SELECT * FROM monitor;
```
```TEXT
+-------+--------------------------+------+--------+
| host | ts | cpu | memory |
+-------+--------------------------+------+--------+
| host1 | 2022-08-19 16:32:35+0800 | 66.6 | 1024 |
| host2 | 2022-08-19 16:32:36+0800 | 77.7 | 2048 |
| host3 | 2022-08-19 16:32:37+0800 | 88.8 | 4096 |
+-------+--------------------------+------+--------+
3 rows in set (0.03 sec)
```
You can always cleanup test database by removing `/tmp/greptimedb`.
To write and query data, GreptimeDB is compatible with multiple [protocols and clients](https://docs.greptime.com/user-guide/clients/overview).
## Resources
### Installation
- [Pre-built Binaries](https://github.com/GreptimeTeam/greptimedb/releases):
For Linux and macOS, you can easily download pre-built binaries that are ready to use. In most cases, downloading the version without PyO3 is sufficient. However, if you plan to run scripts in CPython (and use Python packages like NumPy and Pandas), you will need to download the version with PyO3 and install a Python with the same version as the Python in the PyO3 version. We recommend using virtualenv for the installation process to manage multiple Python versions.
- [Pre-built Binaries](https://greptime.com/download):
For Linux and macOS, you can easily download pre-built binaries including official releases and nightly builds that are ready to use.
In most cases, downloading the version without PyO3 is sufficient. However, if you plan to run scripts in CPython (and use Python packages like NumPy and Pandas), you will need to download the version with PyO3 and install a Python interpreter whose version matches the one the PyO3 build was compiled against.
We recommend using virtualenv for the installation process to manage multiple Python versions.
- [Docker Images](https://hub.docker.com/r/greptime/greptimedb)(**recommended**): pre-built
Docker images, this is the easiest way to try GreptimeDB. By default it runs CPython script with `pyo3_backend` enabled.
- [`gtctl`](https://github.com/GreptimeTeam/gtctl): the command-line tool for
@@ -157,7 +127,7 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
### Documentation
- GreptimeDB [User Guide](https://docs.greptime.com/user-guide/concepts.html)
- GreptimeDB [User Guide](https://docs.greptime.com/user-guide/concepts/overview)
- GreptimeDB [Developer
Guide](https://docs.greptime.com/developer-guide/overview.html)
- GreptimeDB [internal code document](https://greptimedb.rs)
@@ -167,8 +137,12 @@ You can always cleanup test database by removing `/tmp/greptimedb`.
### SDK
- [GreptimeDB Java
Client](https://github.com/GreptimeTeam/greptimedb-client-java)
- [GreptimeDB C++ Client](https://github.com/GreptimeTeam/greptimedb-client-cpp)
- [GreptimeDB Erlang Client](https://github.com/GreptimeTeam/greptimedb-client-erl)
- [GreptimeDB Go Client](https://github.com/GreptimeTeam/greptimedb-client-go)
- [GreptimeDB Java Client](https://github.com/GreptimeTeam/greptimedb-client-java)
- [GreptimeDB Python Client](https://github.com/GreptimeTeam/greptimedb-client-py) (WIP)
- [GreptimeDB Rust Client](https://github.com/GreptimeTeam/greptimedb-client-rust)
## Project Status

benchmarks/Cargo.toml

@@ -6,9 +6,11 @@ license.workspace = true
[dependencies]
arrow.workspace = true
chrono.workspace = true
clap = { version = "4.0", features = ["derive"] }
client = { path = "../src/client" }
client = { workspace = true }
futures-util.workspace = true
indicatif = "0.17.1"
itertools = "0.10.5"
itertools.workspace = true
parquet.workspace = true
tokio.workspace = true


@@ -26,15 +26,17 @@ use arrow::datatypes::{DataType, Float64Type, Int64Type};
use arrow::record_batch::RecordBatch;
use clap::Parser;
use client::api::v1::column::Values;
use client::api::v1::{Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest};
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use client::api::v1::{
Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, InsertRequests, SemanticType,
};
use client::{Client, Database, Output, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use futures_util::TryStreamExt;
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
use tokio::task::JoinSet;
const CATALOG_NAME: &str = "greptime";
const SCHEMA_NAME: &str = "public";
const TABLE_NAME: &str = "nyc_taxi";
#[derive(Parser)]
#[command(name = "NYC benchmark runner")]
@@ -72,7 +74,12 @@ fn get_file_list<P: AsRef<Path>>(path: P) -> Vec<PathBuf> {
.collect()
}
fn new_table_name() -> String {
format!("nyc_taxi_{}", chrono::Utc::now().timestamp())
}
async fn write_data(
table_name: &str,
batch_size: usize,
db: &Database,
path: PathBuf,
@@ -102,13 +109,16 @@ async fn write_data(
}
let (columns, row_count) = convert_record_batch(record_batch);
let request = InsertRequest {
table_name: TABLE_NAME.to_string(),
region_number: 0,
table_name: table_name.to_string(),
columns,
row_count,
};
let requests = InsertRequests {
inserts: vec![request],
};
let now = Instant::now();
db.insert(request).await.unwrap();
db.insert(requests).await.unwrap();
let elapsed = now.elapsed();
total_rpc_elapsed_ms += elapsed.as_millis();
progress_bar.inc(row_count as _);
@@ -126,6 +136,11 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
for (array, field) in record_batch.columns().iter().zip(fields.iter()) {
let (values, datatype) = build_values(array);
let semantic_type = match field.name().as_str() {
"VendorID" => SemanticType::Tag,
"tpep_pickup_datetime" => SemanticType::Timestamp,
_ => SemanticType::Field,
};
let column = Column {
column_name: field.name().clone(),
@@ -136,8 +151,7 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
.map(|bitmap| bitmap.buffer().as_slice().to_vec())
.unwrap_or_default(),
datatype: datatype.into(),
// datatype and semantic_type are set to default
..Default::default()
semantic_type: semantic_type as i32,
};
columns.push(column);
}
@@ -183,7 +197,7 @@ fn build_values(column: &ArrayRef) -> (Values, ColumnDataType) {
let values = array.values();
(
Values {
ts_microsecond_values: values.to_vec(),
timestamp_microsecond_values: values.to_vec(),
..Default::default()
},
ColumnDataType::TimestampMicrosecond,
@@ -238,159 +252,193 @@ fn is_record_batch_full(batch: &RecordBatch) -> bool {
batch.columns().iter().all(|col| col.null_count() == 0)
}
fn create_table_expr() -> CreateTableExpr {
fn create_table_expr(table_name: &str) -> CreateTableExpr {
CreateTableExpr {
catalog_name: CATALOG_NAME.to_string(),
schema_name: SCHEMA_NAME.to_string(),
table_name: TABLE_NAME.to_string(),
table_name: table_name.to_string(),
desc: "".to_string(),
column_defs: vec![
ColumnDef {
name: "VendorID".to_string(),
datatype: ColumnDataType::Int64 as i32,
data_type: ColumnDataType::Int64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Tag as i32,
comment: String::new(),
},
ColumnDef {
name: "tpep_pickup_datetime".to_string(),
datatype: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
data_type: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: false,
default_constraint: vec![],
semantic_type: SemanticType::Timestamp as i32,
comment: String::new(),
},
ColumnDef {
name: "tpep_dropoff_datetime".to_string(),
datatype: ColumnDataType::TimestampMicrosecond as i32,
data_type: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "passenger_count".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "trip_distance".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "RatecodeID".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "store_and_fwd_flag".to_string(),
datatype: ColumnDataType::String as i32,
data_type: ColumnDataType::String as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "PULocationID".to_string(),
datatype: ColumnDataType::Int64 as i32,
data_type: ColumnDataType::Int64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "DOLocationID".to_string(),
datatype: ColumnDataType::Int64 as i32,
data_type: ColumnDataType::Int64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "payment_type".to_string(),
datatype: ColumnDataType::Int64 as i32,
data_type: ColumnDataType::Int64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "fare_amount".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "extra".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "mta_tax".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "tip_amount".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "tolls_amount".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "improvement_surcharge".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "total_amount".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "congestion_surcharge".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
ColumnDef {
name: "airport_fee".to_string(),
datatype: ColumnDataType::Float64 as i32,
data_type: ColumnDataType::Float64 as i32,
is_nullable: true,
default_constraint: vec![],
semantic_type: SemanticType::Field as i32,
comment: String::new(),
},
],
time_index: "tpep_pickup_datetime".to_string(),
primary_keys: vec!["VendorID".to_string()],
create_if_not_exists: false,
create_if_not_exists: true,
table_options: Default::default(),
region_ids: vec![0],
table_id: None,
engine: "mito".to_string(),
}
}
fn query_set() -> HashMap<String, String> {
let mut ret = HashMap::new();
ret.insert(
"count_all".to_string(),
format!("SELECT COUNT(*) FROM {TABLE_NAME};"),
);
ret.insert(
"fare_amt_by_passenger".to_string(),
format!("SELECT passenger_count, MIN(fare_amount), MAX(fare_amount), SUM(fare_amount) FROM {TABLE_NAME} GROUP BY passenger_count")
);
ret
fn query_set(table_name: &str) -> HashMap<String, String> {
HashMap::from([
(
"count_all".to_string(),
format!("SELECT COUNT(*) FROM {table_name};"),
),
(
"fare_amt_by_passenger".to_string(),
format!("SELECT passenger_count, MIN(fare_amount), MAX(fare_amount), SUM(fare_amount) FROM {table_name} GROUP BY passenger_count"),
)
])
}
async fn do_write(args: &Args, db: &Database) {
async fn do_write(args: &Args, db: &Database, table_name: &str) {
let mut file_list = get_file_list(args.path.clone().expect("Specify data path in argument"));
let mut write_jobs = JoinSet::new();
let create_table_result = db.create(create_table_expr()).await;
let create_table_result = db.create(create_table_expr(table_name)).await;
println!("Create table result: {create_table_result:?}");
let progress_bar_style = ProgressStyle::with_template(
@@ -408,7 +456,10 @@ async fn do_write(args: &Args, db: &Database) {
let db = db.clone();
let mpb = multi_progress_bar.clone();
let pb_style = progress_bar_style.clone();
write_jobs.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
let table_name = table_name.to_string();
let _ = write_jobs.spawn(async move {
write_data(&table_name, batch_size, &db, path, mpb, pb_style).await
});
}
}
while write_jobs.join_next().await.is_some() {
@@ -417,23 +468,32 @@ async fn do_write(args: &Args, db: &Database) {
let db = db.clone();
let mpb = multi_progress_bar.clone();
let pb_style = progress_bar_style.clone();
write_jobs.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
let table_name = table_name.to_string();
let _ = write_jobs.spawn(async move {
write_data(&table_name, batch_size, &db, path, mpb, pb_style).await
});
}
}
}
async fn do_query(num_iter: usize, db: &Database) {
for (query_name, query) in query_set() {
async fn do_query(num_iter: usize, db: &Database, table_name: &str) {
for (query_name, query) in query_set(table_name) {
println!("Running query: {query}");
for i in 0..num_iter {
let now = Instant::now();
let _res = db.sql(&query).await.unwrap();
let res = db.sql(&query).await.unwrap();
match res {
Output::AffectedRows(_) | Output::RecordBatches(_) => (),
Output::Stream(stream) => {
stream.try_collect::<Vec<_>>().await.unwrap();
}
}
let elapsed = now.elapsed();
println!(
"query {}, iteration {}: {}ms",
query_name,
i,
elapsed.as_millis()
elapsed.as_millis(),
);
}
}
@@ -450,13 +510,14 @@ fn main() {
.block_on(async {
let client = Client::with_urls(vec![&args.endpoint]);
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
let table_name = new_table_name();
if !args.skip_write {
do_write(&args, &db).await;
do_write(&args, &db, &table_name).await;
}
if !args.skip_read {
do_query(args.iter_num, &db).await;
do_query(args.iter_num, &db, &table_name).await;
}
})
}
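For reference, a plausible way to run the updated benchmark against a local instance (flag names are derived from the `Args` parser above; the binary name and dataset path are assumptions):

```bash
# Writes the parquet files into a freshly named nyc_taxi_<timestamp> table,
# then runs the query set for the configured number of iterations.
cargo run --release --bin nyc-taxi -- \
  --path /path/to/nyc-taxi-parquet \
  --endpoint 127.0.0.1:4001
```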

config/datanode.example.toml

@@ -1,7 +1,5 @@
# Node running mode, see `standalone.example.toml`.
mode = "distributed"
# Whether to use in-memory catalog, see `standalone.example.toml`.
enable_memory_catalog = false
# The datanode identifier, should be unique.
node_id = 42
# gRPC server address, "127.0.0.1:3001" by default.
@@ -10,31 +8,50 @@ rpc_addr = "127.0.0.1:3001"
rpc_hostname = "127.0.0.1"
# The number of gRPC server worker threads, 8 by default.
rpc_runtime_size = 8
# Start services only after regions have obtained leases.
# When enabled, this blocks datanode startup until leases arrive in heartbeats from metasrv.
require_lease_before_startup = false
[heartbeat]
# Interval for sending heartbeat messages to the Metasrv, 3 seconds by default.
interval = "3s"
# Metasrv client options.
[meta_client_options]
[meta_client]
# Metasrv address list.
metasrv_addrs = ["127.0.0.1:3002"]
# Operation timeout in milliseconds, 3000 by default.
timeout_millis = 3000
# Connect server timeout in milliseconds, 5000 by default.
connect_timeout_millis = 5000
# Heartbeat timeout, 500 milliseconds by default.
heartbeat_timeout = "500ms"
# Operation timeout, 3 seconds by default.
timeout = "3s"
# Connect server timeout, 1 second by default.
connect_timeout = "1s"
# `TCP_NODELAY` option for accepted connections, true by default.
tcp_nodelay = true
# WAL options, see `standalone.example.toml`.
[wal]
dir = "/tmp/greptimedb/wal"
file_size = "1GB"
purge_threshold = "50GB"
# WAL data directory
# dir = "/tmp/greptimedb/wal"
file_size = "256MB"
purge_threshold = "4GB"
purge_interval = "10m"
read_batch_size = 128
sync_write = false
# Storage options, see `standalone.example.toml`.
[storage]
# The working home directory.
data_home = "/tmp/greptimedb/"
type = "File"
data_dir = "/tmp/greptimedb/data/"
# TTL for all tables. Disabled by default.
# global_ttl = "7d"
# Cache configuration for object storage such as 'S3' etc.
# The local file cache directory
# cache_path = "/path/local_cache"
# The local file cache capacity in bytes.
# cache_capacity = "256MB"
# Compaction options, see `standalone.example.toml`.
[storage.compaction]
@@ -48,13 +65,46 @@ max_purge_tasks = 32
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
gc_duration = '10m'
# Procedure storage options, see `standalone.example.toml`.
# [procedure.store]
# type = "File"
# data_dir = "/tmp/greptimedb/procedure/"
# max_retry_times = 3
# retry_delay = "500ms"
# Storage flush options
[storage.flush]
# Max inflight flush tasks.
max_flush_tasks = 8
# Default write buffer size for a region.
region_write_buffer_size = "32MB"
# Interval to check whether a region needs flush.
picker_schedule_interval = "5m"
# Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
# Global write buffer size for all regions.
global_write_buffer_size = "1GB"
# Mito engine options
[[region_engine]]
[region_engine.mito]
# Number of region workers
num_workers = 8
# Request channel size of each worker
worker_channel_size = 128
# Max batch size for a worker to handle requests
worker_request_batch_size = 64
# Number of meta action updated to trigger a new checkpoint for the manifest
manifest_checkpoint_distance = 10
# Manifest compression type
manifest_compress_type = "Uncompressed"
# Max number of running background jobs
max_background_jobs = 4
# Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
# Global write buffer size for all regions.
global_write_buffer_size = "1GB"
# Global write buffer size threshold to reject write requests (default 2G).
global_write_buffer_reject_size = "2GB"
# Log options
# [logging]
# Specify logs directory.
# dir = "/tmp/greptimedb/logs"
# Specify the log level [info | debug | error | warn]
# level = "info"

config/frontend.example.toml

@@ -1,58 +1,79 @@
# Node running mode, see `standalone.example.toml`.
mode = "distributed"
[heartbeat]
# Interval for sending heartbeat tasks to the Metasrv, 5 seconds by default.
interval = "5s"
# Retry interval for failed heartbeat tasks, 5 seconds by default.
retry_interval = "5s"
# HTTP server options, see `standalone.example.toml`.
[http_options]
[http]
addr = "127.0.0.1:4000"
timeout = "30s"
body_limit = "64MB"
# gRPC server options, see `standalone.example.toml`.
[grpc_options]
[grpc]
addr = "127.0.0.1:4001"
runtime_size = 8
# MySQL server options, see `standalone.example.toml`.
[mysql_options]
[mysql]
enable = true
addr = "127.0.0.1:4002"
runtime_size = 2
# MySQL server TLS options, see `standalone.example.toml`.
[mysql_options.tls]
[mysql.tls]
mode = "disable"
cert_path = ""
key_path = ""
# PostgreSQL server options, see `standalone.example.toml`.
[postgres_options]
[postgres]
enable = true
addr = "127.0.0.1:4003"
runtime_size = 2
# PostgreSQL server TLS options, see `standalone.example.toml`.
[postgres_options.tls]
[postgres.tls]
mode = "disable"
cert_path = ""
key_path = ""
# OpenTSDB protocol options, see `standalone.example.toml`.
[opentsdb_options]
[opentsdb]
enable = true
addr = "127.0.0.1:4242"
runtime_size = 2
# InfluxDB protocol options, see `standalone.example.toml`.
[influxdb_options]
[influxdb]
enable = true
# Prometheus protocol options, see `standalone.example.toml`.
[prometheus_options]
# Prometheus remote storage options, see `standalone.example.toml`.
[prom_store]
enable = true
# Prometheus protocol options, see `standalone.example.toml`.
[prom_options]
addr = "127.0.0.1:4004"
# Metasrv client options, see `datanode.example.toml`.
[meta_client_options]
[meta_client]
metasrv_addrs = ["127.0.0.1:3002"]
timeout_millis = 3000
connect_timeout_millis = 5000
timeout = "3s"
# DDL timeouts options.
ddl_timeout = "10s"
connect_timeout = "1s"
tcp_nodelay = true
# Log options, see `standalone.example.toml`
# [logging]
# dir = "/tmp/greptimedb/logs"
# level = "info"
# Datanode options.
[datanode]
# Datanode client options.
[datanode.client]
timeout = "10s"
connect_timeout = "10s"
tcp_nodelay = true

config/metasrv.example.toml

@@ -1,11 +1,11 @@
# The working home directory.
data_home = "/tmp/metasrv/"
# The bind address of metasrv, "127.0.0.1:3002" by default.
bind_addr = "127.0.0.1:3002"
# The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost.
server_addr = "127.0.0.1:3002"
# Etcd server address, "127.0.0.1:2379" by default.
store_addr = "127.0.0.1:2379"
# Datanode lease in seconds, 15 seconds by default.
datanode_lease_secs = 15
# Datanode selector type.
# - "LeaseBased" (default value).
# - "LoadBased"
@@ -13,3 +13,25 @@ datanode_lease_secs = 15
selector = "LeaseBased"
# Store data in memory, false by default.
use_memory_store = false
# Whether to enable greptimedb telemetry, true by default.
enable_telemetry = true
# Log options, see `standalone.example.toml`
# [logging]
# dir = "/tmp/greptimedb/logs"
# level = "info"
# Procedure storage options.
[procedure]
# Max retry times of procedures.
max_retry_times = 12
# Initial retry delay of procedures; increases exponentially.
retry_delay = "500ms"
# # Datanode options.
# [datanode]
# # Datanode client options.
# [datanode.client_options]
# timeout = "10s"
# connect_timeout = "10s"
# tcp_nodelay = true
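With the three example configs above, a minimal local distributed bring-up might look like this (a sketch; etcd must be reachable at the configured `store_addr`):

```bash
nohup etcd >/tmp/etcd.log 2>&1 &                         # backing store for metasrv
greptime metasrv start -c config/metasrv.example.toml &
greptime datanode start -c config/datanode.example.toml &
greptime frontend start -c config/frontend.example.toml &
```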

config/standalone.example.toml

@@ -1,31 +1,36 @@
# Node running mode, "standalone" or "distributed".
mode = "standalone"
# Whether to use in-memory catalog, `false` by default.
enable_memory_catalog = false
# Whether to enable greptimedb telemetry, true by default.
enable_telemetry = true
# HTTP server options.
[http_options]
[http]
# Server address, "127.0.0.1:4000" by default.
addr = "127.0.0.1:4000"
# HTTP request timeout, 30s by default.
timeout = "30s"
# HTTP request body limit, 64Mb by default.
# the following units are supported: B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, PiB
body_limit = "64MB"
# gRPC server options.
[grpc_options]
[grpc]
# Server address, "127.0.0.1:4001" by default.
addr = "127.0.0.1:4001"
# The number of server worker threads, 8 by default.
runtime_size = 8
# MySQL server options.
[mysql_options]
[mysql]
# Whether to enable the MySQL server.
enable = true
# Server address, "127.0.0.1:4002" by default.
addr = "127.0.0.1:4002"
# The number of server worker threads, 2 by default.
runtime_size = 2
# MySQL server TLS options.
[mysql_options.tls]
[mysql.tls]
# TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
# - "disable" (default value)
# - "prefer"
@@ -39,14 +44,16 @@ cert_path = ""
key_path = ""
# PostgreSQL server options.
[postgres_options]
[postgres]
# Whether to enable the PostgreSQL server.
enable = true
# Server address, "127.0.0.1:4003" by default.
addr = "127.0.0.1:4003"
# The number of server worker threads, 2 by default.
runtime_size = 2
# PostgreSQL server TLS options, see the `[mysql.tls]` section.
[postgres_options.tls]
[postgres.tls]
# TLS mode.
mode = "disable"
# certificate file path.
@@ -55,35 +62,32 @@ cert_path = ""
key_path = ""
# OpenTSDB protocol options.
[opentsdb_options]
[opentsdb]
# Whether to enable the OpenTSDB protocol.
enable = true
# OpenTSDB telnet API server address, "127.0.0.1:4242" by default.
addr = "127.0.0.1:4242"
# The number of server worker threads, 2 by default.
runtime_size = 2
# InfluxDB protocol options.
[influxdb_options]
[influxdb]
# Whether to enable InfluxDB protocol in HTTP API, true by default.
enable = true
# Prometheus protocol options.
[prometheus_options]
# Prometheus remote storage options
[prom_store]
# Whether to enable Prometheus remote write and read in HTTP API, true by default.
enable = true
# Prom protocol options.
[prom_options]
# Prometheus API server address, "127.0.0.1:4004" by default.
addr = "127.0.0.1:4004"
# WAL options.
[wal]
# WAL data directory.
dir = "/tmp/greptimedb/wal"
# WAL data directory
# dir = "/tmp/greptimedb/wal"
# WAL file size in bytes.
file_size = "1GB"
# WAL purge threshold in bytes.
purge_threshold = "50GB"
file_size = "256MB"
# WAL purge threshold.
purge_threshold = "4GB"
# WAL purge interval.
purge_interval = "10m"
# WAL read batch size.
@@ -91,12 +95,32 @@ read_batch_size = 128
# Whether to sync log file after every write.
sync_write = false
# Metadata storage options.
[metadata_store]
# Kv file size in bytes.
file_size = "256MB"
# Kv purge threshold.
purge_threshold = "4GB"
# Procedure storage options.
[procedure]
# Procedure max retry times.
max_retry_times = 3
# Initial retry delay of procedures, increases exponentially
retry_delay = "500ms"
# Storage options.
[storage]
# The working home directory.
data_home = "/tmp/greptimedb/"
# Storage type.
type = "File"
# Data directory, "/tmp/greptimedb/data" by default.
data_dir = "/tmp/greptimedb/data/"
# TTL for all tables. Disabled by default.
# global_ttl = "7d"
# Cache configuration for object storage such as 'S3' etc.
# cache_path = "/path/local_cache"
# The local file cache capacity in bytes.
# cache_capacity = "256MB"
# Compaction options.
[storage.compaction]
@@ -113,18 +137,24 @@ max_purge_tasks = 32
# Create a checkpoint every <checkpoint_margin> actions.
checkpoint_margin = 10
# Region manifest logs and checkpoints gc execution duration
gc_duration = '30s'
# Whether to try creating a manifest checkpoint on region opening
checkpoint_on_startup = false
gc_duration = '10m'
# Procedure storage options.
# Uncomment to enable.
# [procedure.store]
# # Storage type.
# type = "File"
# # Procedure data path.
# data_dir = "/tmp/greptimedb/procedure/"
# # Procedure max retry times.
# max_retry_times = 3
# # Initial retry delay of procedures, increases exponentially
# retry_delay = "500ms"
# Storage flush options
[storage.flush]
# Max inflight flush tasks.
max_flush_tasks = 8
# Default write buffer size for a region.
region_write_buffer_size = "32MB"
# Interval to check whether a region needs flush.
picker_schedule_interval = "5m"
# Interval to auto flush a region if it has not flushed yet.
auto_flush_interval = "1h"
# Global write buffer size for all regions.
global_write_buffer_size = "1GB"
# Log options
# [logging]
# Specify logs directory.
# dir = "/tmp/greptimedb/logs"
# Specify the log level [info | debug | error | warn]
# level = "info"

View File

@@ -1,36 +0,0 @@
FROM ubuntu:22.04 as builder
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Install dependencies.
RUN apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
build-essential \
pkg-config \
python3 \
python3-dev \
&& pip install pyarrow
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /root/.cargo/bin/:$PATH
# Build the project in release mode.
COPY . .
RUN cargo build --release
# Export the binary to the clean image.
# TODO(zyy17): Maybe we should use a more secure container image.
FROM ubuntu:22.04 as base
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install ca-certificates
WORKDIR /greptime
COPY --from=builder /greptimedb/target/release/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENTRYPOINT ["greptime"]

View File

@@ -1,57 +0,0 @@
FROM ubuntu:22.04 as builder
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Install dependencies.
RUN apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
build-essential \
pkg-config \
wget
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /root/.cargo/bin/:$PATH
# Install cross platform toolchain
RUN apt-get -y update && \
apt-get -y install g++-aarch64-linux-gnu gcc-aarch64-linux-gnu && \
apt-get install binutils-aarch64-linux-gnu
COPY ./docker/aarch64/compile-python.sh ./docker/aarch64/
RUN chmod +x ./docker/aarch64/compile-python.sh && \
./docker/aarch64/compile-python.sh
COPY ./rust-toolchain.toml .
# Install rustup target for cross compiling.
RUN rustup target add aarch64-unknown-linux-gnu
COPY . .
# Fetch dependencies in a separate `RUN` to get a separate cache layer
RUN cargo fetch
# These three env vars are set in the script, so set them manually in the Dockerfile.
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
ENV LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib/
ENV PY_INSTALL_PATH=/greptimedb/python_arm64_build
# Set the environment variables for cross compiling.
# The cross-compiled Python is `python3` in PATH, but pyo3 needs `python` in PATH, so alias it.
# Build the project in release mode.
RUN export PYO3_CROSS_LIB_DIR=$PY_INSTALL_PATH/lib && \
alias python=python3 && \
cargo build --target aarch64-unknown-linux-gnu --release -F pyo3_backend
# Exporting the binary to the clean image
FROM ubuntu:22.04 as base
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get -y install ca-certificates
WORKDIR /greptime
COPY --from=builder /greptimedb/target/aarch64-unknown-linux-gnu/release/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENTRYPOINT ["greptime"]

View File

@@ -1,87 +0,0 @@
#!/usr/bin/env bash
set -e
# this script will download Python source code, compile it, and install it to /usr/local/lib
# then use this python to compile cross-compiled python for aarch64
ARCH=$1
PYTHON_VERSION=3.10.10
PYTHON_SOURCE_DIR=Python-${PYTHON_VERSION}
PYTHON_INSTALL_PATH_AMD64=${PWD}/python-${PYTHON_VERSION}/amd64
PYTHON_INSTALL_PATH_AARCH64=${PWD}/python-${PYTHON_VERSION}/aarch64
function download_python_source_code() {
wget https://www.python.org/ftp/python/$PYTHON_VERSION/Python-$PYTHON_VERSION.tgz
tar -xvf Python-$PYTHON_VERSION.tgz
}
function compile_for_amd64_platform() {
mkdir -p "$PYTHON_INSTALL_PATH_AMD64"
echo "Compiling for amd64 platform..."
./configure \
--prefix="$PYTHON_INSTALL_PATH_AMD64" \
--enable-shared \
ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes \
ac_cv_have_long_long_format=yes \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no
make
make install
}
# Explain the Python compile options here a bit:
# --enable-shared: enable building a shared Python library (default is no); we need it for calling from Rust
# CC, CXX, AR, LD, RANLIB: set the compiler, archiver, linker, and ranlib programs to use
# build: the machine you are building on; host: the machine you will run the compiled program on
# --with-system-ffi: build the _ctypes module using an installed ffi library, see Doc/library/ctypes.rst; not used here, TODO: could remove
# ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes:
# allow cross-compiled python to have -pthread set for CXX, see https://github.com/python/cpython/pull/22525
# ac_cv_have_long_long_format=yes: target platform supports the long long type
# --disable-ipv6: disable ipv6 support, we don't need it here
# ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no: disable pty support, we don't need it here
function compile_for_aarch64_platform() {
export LD_LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LD_LIBRARY_PATH
export LIBRARY_PATH=$PYTHON_INSTALL_PATH_AMD64/lib:$LIBRARY_PATH
export PATH=$PYTHON_INSTALL_PATH_AMD64/bin:$PATH
mkdir -p "$PYTHON_INSTALL_PATH_AARCH64"
echo "Compiling for aarch64 platform..."
echo "LD_LIBRARY_PATH: $LD_LIBRARY_PATH"
echo "LIBRARY_PATH: $LIBRARY_PATH"
echo "PATH: $PATH"
./configure --build=x86_64-linux-gnu --host=aarch64-linux-gnu \
--prefix="$PYTHON_INSTALL_PATH_AARCH64" --enable-optimizations \
CC=aarch64-linux-gnu-gcc \
CXX=aarch64-linux-gnu-g++ \
AR=aarch64-linux-gnu-ar \
LD=aarch64-linux-gnu-ld \
RANLIB=aarch64-linux-gnu-ranlib \
--enable-shared \
ac_cv_pthread_is_default=no ac_cv_pthread=yes ac_cv_cxx_thread=yes \
ac_cv_have_long_long_format=yes \
--disable-ipv6 ac_cv_file__dev_ptmx=no ac_cv_file__dev_ptc=no
make
make altinstall
}
# Main script starts here.
download_python_source_code
# Enter the python source code directory.
cd $PYTHON_SOURCE_DIR || exit 1
# Build local python first, then build cross-compiled python.
compile_for_amd64_platform
# Clean the build directory.
make clean && make distclean
# Cross compile python for aarch64.
if [ "$ARCH" = "aarch64-unknown-linux-gnu" ]; then
compile_for_aarch64_platform
fi

View File

@@ -0,0 +1,54 @@
FROM centos:7 as builder
ARG CARGO_PROFILE
ARG FEATURES
ARG OUTPUT_DIR
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Install dependencies
RUN ulimit -n 1024000 && yum groupinstall -y 'Development Tools'
RUN yum install -y epel-release \
openssl \
openssl-devel \
centos-release-scl \
rh-python38 \
rh-python38-python-devel \
which
# Install protoc
RUN curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v3.15.8/protoc-3.15.8-linux-x86_64.zip
RUN unzip protoc-3.15.8-linux-x86_64.zip -d /usr/local/
# Install Rust
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /opt/rh/rh-python38/root/usr/bin:/usr/local/bin:/root/.cargo/bin/:$PATH
# Build the project in release mode.
RUN --mount=target=.,rw \
--mount=type=cache,target=/root/.cargo/registry \
make build \
CARGO_PROFILE=${CARGO_PROFILE} \
FEATURES=${FEATURES} \
TARGET_DIR=/out/target
# Export the binary to the clean image.
FROM centos:7 as base
ARG OUTPUT_DIR
RUN yum install -y epel-release \
openssl \
openssl-devel \
centos-release-scl \
rh-python38 \
rh-python38-python-devel \
which
WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENTRYPOINT ["greptime"]

View File

@@ -0,0 +1,62 @@
FROM ubuntu:20.04 as builder
ARG CARGO_PROFILE
ARG FEATURES
ARG OUTPUT_DIR
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Add PPA for Python 3.10.
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa -y
# Install dependencies.
RUN --mount=type=cache,target=/var/cache/apt \
apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
git \
build-essential \
pkg-config \
python3.10 \
python3.10-dev \
python3-pip
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /root/.cargo/bin/:$PATH
# Build the project in release mode.
RUN --mount=target=. \
--mount=type=cache,target=/root/.cargo/registry \
make build \
CARGO_PROFILE=${CARGO_PROFILE} \
FEATURES=${FEATURES} \
TARGET_DIR=/out/target
# Export the binary to the clean image.
# TODO(zyy17): Maybe we should use a more secure container image.
FROM ubuntu:22.04 as base
ARG OUTPUT_DIR
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get \
-y install ca-certificates \
python3.10 \
python3.10-dev \
python3-pip \
curl
COPY ./docker/python/requirements.txt /etc/greptime/requirements.txt
RUN python3 -m pip install -r /etc/greptime/requirements.txt
WORKDIR /greptime
COPY --from=builder /out/target/${OUTPUT_DIR}/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENTRYPOINT ["greptime"]

View File

@@ -0,0 +1,16 @@
FROM centos:7
RUN yum install -y epel-release \
openssl \
openssl-devel \
centos-release-scl \
rh-python38 \
rh-python38-python-devel
ARG TARGETARCH
ADD $TARGETARCH/greptime /greptime/bin/
ENV PATH /greptime/bin/:$PATH
ENTRYPOINT ["greptime"]

View File

@@ -4,9 +4,10 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
ca-certificates \
python3.10 \
python3.10-dev \
python3-pip
python3-pip \
curl
COPY requirements.txt /etc/greptime/requirements.txt
COPY ./docker/python/requirements.txt /etc/greptime/requirements.txt
RUN python3 -m pip install -r /etc/greptime/requirements.txt

View File

@@ -0,0 +1,41 @@
FROM --platform=linux/amd64 saschpe/android-ndk:34-jdk17.0.8_7-ndk25.2.9519653-cmake3.22.1
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Rename libunwind to libgcc
RUN cp ${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.7/lib/linux/aarch64/libunwind.a ${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.7/lib/linux/aarch64/libgcc.a
# Install dependencies.
RUN apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
git \
build-essential \
pkg-config \
python3 \
python3-dev \
python3-pip \
&& pip3 install --upgrade pip \
&& pip3 install pyarrow
# Trust workdir
RUN git config --global --add safe.directory /greptimedb
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /root/.cargo/bin/:$PATH
# Add android toolchains
ARG RUST_TOOLCHAIN
RUN rustup toolchain install ${RUST_TOOLCHAIN}
RUN rustup target add aarch64-linux-android
# Install cargo-ndk
RUN cargo install cargo-ndk
ENV ANDROID_NDK_HOME $NDK_ROOT
# Builder entrypoint.
CMD ["cargo", "ndk", "--platform", "23", "-t", "aarch64-linux-android", "build", "--bin", "greptime", "--profile", "release", "--no-default-features"]

View File

@@ -0,0 +1,29 @@
FROM centos:7 as builder
ENV LANG en_US.utf8
# Install dependencies
RUN ulimit -n 1024000 && yum groupinstall -y 'Development Tools'
RUN yum install -y epel-release \
openssl \
openssl-devel \
centos-release-scl \
rh-python38 \
rh-python38-python-devel \
which
# Install protoc
RUN curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v3.15.8/protoc-3.15.8-linux-x86_64.zip
RUN unzip protoc-3.15.8-linux-x86_64.zip -d /usr/local/
# Install Rust
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /opt/rh/rh-python38/root/usr/bin:/usr/local/bin:/root/.cargo/bin/:$PATH
# Install Rust toolchains.
ARG RUST_TOOLCHAIN
RUN rustup toolchain install ${RUST_TOOLCHAIN}
# Install nextest.
RUN cargo install cargo-nextest --locked

View File

@@ -0,0 +1,46 @@
FROM ubuntu:20.04
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Add PPA for Python 3.10.
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa -y
# Install dependencies.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
libssl-dev \
tzdata \
protobuf-compiler \
curl \
ca-certificates \
git \
build-essential \
pkg-config \
python3.10 \
python3.10-dev
# Remove Python 3.8 and install pip.
RUN apt-get -y purge python3.8 && \
apt-get -y autoremove && \
ln -s /usr/bin/python3.10 /usr/bin/python3 && \
curl -sS https://bootstrap.pypa.io/get-pip.py | python3.10
RUN git config --global --add safe.directory /greptimedb
# Install Python dependencies.
COPY ./docker/python/requirements.txt /etc/greptime/requirements.txt
RUN python3 -m pip install -r /etc/greptime/requirements.txt
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /root/.cargo/bin/:$PATH
# Install Rust toolchains.
ARG RUST_TOOLCHAIN
RUN rustup toolchain install ${RUST_TOOLCHAIN}
# Install nextest.
RUN cargo install cargo-nextest --locked

Binary file (image, 51 KiB) not shown.

View File

@@ -0,0 +1,39 @@
# TSBS benchmark - v0.3.2
## Environment
| | |
| --- | --- |
| CPU | AMD Ryzen 7 7735HS (8 core 3.2GHz) |
| Memory | 32GB |
| Disk | SOLIDIGM SSDPFKNU010TZ |
| OS | Ubuntu 22.04.2 LTS |
## Write performance
| Write buffer size | Ingest rate (rows/s) |
| --- | --- |
| 512M | 139583.04 |
| 32M | 279250.52 |
## Query performance
| Query type | v0.3.2 write buffer 32M (ms) | v0.3.2 write buffer 512M (ms) | v0.3.1 write buffer 32M (ms) |
| --- | --- | --- | --- |
| cpu-max-all-1 | 921.12 | 241.23 | 553.63 |
| cpu-max-all-8 | 2657.66 | 502.78 | 3308.41 |
| double-groupby-1 | 28238.85 | 27367.42 | 52148.22 |
| double-groupby-5 | 33094.65 | 32421.89 | 56762.37 |
| double-groupby-all | 38565.89 | 38635.52 | 59596.80 |
| groupby-orderby-limit | 23321.60 | 22423.55 | 53983.23 |
| high-cpu-1 | 1167.04 | 254.15 | 832.41 |
| high-cpu-all | 32814.08 | 29906.94 | 62853.12 |
| lastpoint | 192045.05 | 153575.42 | NA |
| single-groupby-1-1-1 | 63.97 | 87.35 | 92.66 |
| single-groupby-1-1-12 | 666.24 | 326.98 | 781.50 |
| single-groupby-1-8-1 | 225.29 | 137.97 | 281.95 |
| single-groupby-5-1-1 | 70.40 | 81.64 | 86.15 |
| single-groupby-5-1-12 | 722.75 | 356.01 | 805.18 |
| single-groupby-5-8-1 | 285.60 | 115.88 | 326.29 |

View File

@@ -0,0 +1,61 @@
# TSBS benchmark - v0.4.0
## Environment
### Local
| | |
| ------ | ---------------------------------- |
| CPU | AMD Ryzen 7 7735HS (8 core 3.2GHz) |
| Memory | 32GB |
| Disk | SOLIDIGM SSDPFKNU010TZ |
| OS | Ubuntu 22.04.2 LTS |
### Aliyun amd64
| | |
| ------- | -------------- |
| Machine | ecs.g7.4xlarge |
| CPU | 16 core |
| Memory | 64GB |
| Disk | 100G |
| OS | Ubuntu 22.04 |
### Aliyun arm64
| | |
| ------- | ----------------- |
| Machine | ecs.g8y.4xlarge |
| CPU | 16 core |
| Memory | 64GB |
| Disk | 100G |
| OS | Ubuntu 22.04 ARM |
## Write performance
| Environment | Ingest rate (rows/s) |
| ------------------ | --------------------- |
| Local | 365280.60 |
| Aliyun g7.4xlarge | 341368.72 |
| Aliyun g8y.4xlarge | 320907.29 |
## Query performance
| Query type | Local (ms) | Aliyun g7.4xlarge (ms) | Aliyun g8y.4xlarge (ms) |
| --------------------- | ---------- | ---------------------- | ----------------------- |
| cpu-max-all-1 | 50.70 | 31.46 | 47.61 |
| cpu-max-all-8 | 262.16 | 129.26 | 152.43 |
| double-groupby-1 | 2512.71 | 1408.19 | 1586.10 |
| double-groupby-5 | 3896.15 | 2304.29 | 2585.29 |
| double-groupby-all | 5404.67 | 3337.61 | 3773.91 |
| groupby-orderby-limit | 3786.98 | 2065.72 | 2312.57 |
| high-cpu-1 | 71.96 | 37.29 | 54.01 |
| high-cpu-all | 9468.75 | 7595.69 | 8467.46 |
| lastpoint | 13379.43 | 11253.76 | 12949.40 |
| single-groupby-1-1-1 | 20.72 | 12.16 | 13.35 |
| single-groupby-1-1-12 | 28.53 | 15.67 | 21.62 |
| single-groupby-1-8-1 | 72.23 | 37.90 | 43.52 |
| single-groupby-5-1-1 | 26.75 | 15.59 | 17.48 |
| single-groupby-5-1-12 | 45.41 | 22.90 | 31.96 |
| single-groupby-5-8-1 | 107.96 | 59.76 | 69.58 |

View File

@@ -0,0 +1,137 @@
---
Feature Name: distributed-planner
Tracking Issue: TBD
Date: 2023-05-09
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
Distributed Planner
-------------------
# Summary
Enhance the logical planner to be aware of the distributed, multi-region table topology, to achieve "push computation down" execution rather than the current "pull data up" manner.
# Motivation
Querying distributively can leverage the advantages of GreptimeDB's architecture to process large datasets that exceed the capacity of a single node, or to accelerate query execution by running it in parallel. This task includes two sub-tasks:
- Be able to transform the plan so that as much computation as possible is pushed down to the data source.
- Be able to handle pipeline breakers (like `Join` or `Sort`) on multiple computation nodes.
This is a relatively complex topic. To keep this RFC focused, I'll concentrate on the first one.
# Details
## Background: Partition and Region
GreptimeDB supports table partitioning, where the partition rule is set during table creation. Each partition can be further divided into one or more physical storage units known as "regions". Both partitions and regions are divided based on rows:
``` text
┌────────────────────────────────────┐
│ │
│ Table │
│ │
└─────┬────────────┬────────────┬────┘
│ │ │
│ │ │
┌─────▼────┐ ┌─────▼────┐ ┌─────▼────┐
│ Region 1 │ │ Region 2 │ │ Region 3 │
└──────────┘ └──────────┘ └──────────┘
Row 1~10 Row 11~20 Row 21~30
```
Generally speaking, the region is the minimum unit of data distribution, and we can also use it as the unit to distribute computation. This greatly simplifies the routing logic of the distributed planner: always schedule the computation to the node that currently has the corresponding region open. It is also easy to scale out more nodes for computing, since GreptimeDB's data is persisted on a shared storage backend like S3. But this is a bit beyond the scope of this specific topic.
## Background: Commutativity
Commutativity is an attribute that describes whether two operations can exchange their order of application: $P1(P2(R)) \Leftrightarrow P2(P1(R))$. If the equation holds, we can transform one expression into another form without changing its result. This is useful for rewriting SQL expressions, and is the theoretical basis of this RFC.
Take this SQL as an example
``` sql
SELECT a FROM t WHERE a > 10;
```
Since projection and filter are commutative, this query can be translated into the following two equivalent plan trees:
```text
┌─────────────┐ ┌─────────────┐
│Projection(a)│ │Filter(a>10) │
└──────▲──────┘ └──────▲──────┘
│ │
┌──────┴──────┐ ┌──────┴──────┐
│Filter(a>10) │ │Projection(a)│
└──────▲──────┘ └──────▲──────┘
│ │
┌──────┴──────┐ ┌──────┴──────┐
│ TableScan │ │ TableScan │
└─────────────┘ └─────────────┘
```
## Merge Operation
This RFC proposes to add a new plan node, `MergeScan`, that merges results from several regions in the frontend. It wraps the abstraction of remote data and execution, and exposes a `TableScan` interface to the upper level.
``` text
┌───────┼───────┐
│ │ │
│ ┌──┴──┐ │
│ └──▲──┘ │
│ │ │
│ ┌──┴──┐ │
│ └──▲──┘ │ ┌─────────────────────────────┐
│ │ │ │ │
│ ┌────┴────┐ │ │ ┌──────────┐ ┌───┐ ┌───┐ │
│ │MergeScan◄──┼────┤ │ Region 1 │ │ │ .. │ │ │
│ └─────────┘ │ │ └──────────┘ └───┘ └───┘ │
│ │ │ │
└─Frontend──────┘ └─Remote-Sources──────────────┘
```
This merge operation simply chains all the underlying remote data sources and returns `RecordBatch`es, just like a coalesce op. Each remote source is a gRPC query to a datanode via the substrait logical plan interface; the plan is transformed and divided from the original query that arrives at the frontend.
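To make the chaining behavior concrete, here is a minimal sketch, assuming simplified stand-ins for `RecordBatch` and the remote gRPC stream (none of these types are the actual GreptimeDB API):

```rust
/// Simplified stand-in for a batch of rows.
struct RecordBatch(Vec<u64>);

/// A remote source is anything that yields record batches, e.g. a gRPC
/// stream executing the pushed-down sub-plan on one region of a datanode.
trait RemoteSource {
    fn next_batch(&mut self) -> Option<RecordBatch>;
}

/// MergeScan chains all remote sources and exposes them as one scan,
/// like a coalesce operator: no ordering or aggregation is applied.
struct MergeScan {
    sources: Vec<Box<dyn RemoteSource>>,
    current: usize,
}

impl MergeScan {
    fn next_batch(&mut self) -> Option<RecordBatch> {
        while self.current < self.sources.len() {
            if let Some(batch) = self.sources[self.current].next_batch() {
                return Some(batch);
            }
            // The current source is exhausted; move on to the next region.
            self.current += 1;
        }
        None
    }
}
```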
## Commutativity of MergeScan
Obviously, the position of `MergeScan` is the key to the distributed plan. The closer it sits to the underlying `TableScan`, the less computation is taken over by datanodes, so the goal is to pull the `MergeScan` up as far as possible. "Pull up" means exchanging `MergeScan` with its parent node in the plan tree, which means we should check the commutativity between the existing plan nodes and `MergeScan`. Here I classify all the possibilities into five categories; a sketch of them as a Rust enum follows the list:
- Commutative: $P1(P2(R)) \Leftrightarrow P2(P1(R))$
- filter
- projection
- operations that match the partition key
- Partial Commutative: $P1(P2(R)) \Leftrightarrow P1(P2(P1(R)))$
- $min(R) \rightarrow min(MERGE(min(R)))$
- $max(R) \rightarrow max(MERGE(max(R)))$
- Conditional Commutative: $P1(P2(R)) \Leftrightarrow P3(P2(P1(R)))$
- $count(R) \rightarrow sum(count(R))$
- Transformed Commutative: $P1(P2(R)) \Leftrightarrow P1(P3(R)) \Leftrightarrow P3(P1(R))$
- $avg(R) \rightarrow sum(R)/count(R)$
- Non-commutative
- sort
- join
- percentile
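As a rough illustration only (not the actual planner types), the five categories could be modeled as an enum that a rewrite rule returns for each parent node `P` of the `MergeScan`:

```rust
/// Hypothetical classification returned when testing whether MergeScan
/// can be exchanged with its parent node P.
enum Commutativity {
    /// P(Merge(R)) <=> Merge(P(R)), e.g. filter and projection.
    Commutative,
    /// P(Merge(R)) <=> P(Merge(P(R))), e.g. min and max.
    PartialCommutative,
    /// P(Merge(R)) <=> P'(Merge(P(R))), e.g. count becomes sum(count).
    ConditionalCommutative { rewritten_parent: &'static str },
    /// P must be rewritten first, e.g. avg becomes sum / count.
    TransformedCommutative { transformed_to: &'static str },
    /// Stop pulling up, e.g. sort, join, percentile.
    NonCommutative,
}
```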
## Steps to plan
After establishing the set of commutative relations for all expressions, we can begin transforming the logical plan. There are four steps:
- Add a merge node before table scan
- Evaluate commutativity in a bottom-up way, stop at the first non-commutative node
- Divide the TableScan to scan over partitions
- Execute
First, insert the `MergeScan` on top of the bottom `TableScan` node. Then examine the commutativity starting from the `MergeScan` node and transform the plan tree based on the result. Stop this process at the first non-commutative node.
``` text
┌─────────────┐ ┌─────────────┐
│ Sort │ │ Sort │
└──────▲──────┘ └──────▲──────┘
│ │
┌─────────────┐ ┌──────┴──────┐ ┌──────┴──────┐
│ Sort │ │Projection(a)│ │ MergeScan │
└──────▲──────┘ └──────▲──────┘ └──────▲──────┘
│ │ │
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│Projection(a)│ │ MergeScan │ │Projection(a)│
└──────▲──────┘ └──────▲──────┘ └──────▲──────┘
│ │ │
┌──────┴──────┐ ┌──────┴──────┐ ┌──────┴──────┐
│ TableScan │ │ TableScan │ │ TableScan │
└─────────────┘ └─────────────┘ └─────────────┘
(a) (b) (c)
```
Then, in the physical planning phase, convert the sub-tree below `MergeScan` into a remote query request and dispatch it to all the regions, letting the `MergeScan` receive the results and feed them to its parent node.
To keep the overall complexity down, any error in this procedure fails the entire query and cancels all other parts.
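To make the bottom-up pass concrete, here is a minimal sketch of the pull-up rewrite over a toy plan tree; the node set and the hard-coded commutativity are illustrative, not the actual planner API:

```rust
/// Toy plan tree: each node has at most one child, which is enough
/// to demonstrate the bottom-up walk.
#[derive(Debug)]
enum Plan {
    Sort(Box<Plan>),
    Projection(Box<Plan>),
    MergeScan(Box<Plan>),
    TableScan,
}

/// Pull MergeScan up past every commutative parent (projection here),
/// stopping at the first non-commutative node (sort here).
fn pull_up_merge(plan: Plan) -> Plan {
    match plan {
        Plan::Projection(child) => match pull_up_merge(*child) {
            // Exchange: Projection(MergeScan(x)) => MergeScan(Projection(x)).
            Plan::MergeScan(inner) => {
                Plan::MergeScan(Box::new(Plan::Projection(inner)))
            }
            other => Plan::Projection(Box::new(other)),
        },
        Plan::Sort(child) => Plan::Sort(Box::new(pull_up_merge(*child))),
        other => other,
    }
}

fn main() {
    let plan = Plan::Sort(Box::new(Plan::Projection(Box::new(
        Plan::MergeScan(Box::new(Plan::TableScan)),
    ))));
    // Prints Sort(MergeScan(Projection(TableScan))), matching tree (c).
    println!("{:?}", pull_up_merge(plan));
}
```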
# Alternatives
## Spill
If we only consider the ability to process large datasets, we could enable DataFusion's spill ability to temporarily persist intermediate data to disk, like "swap" memory. But this would lead to very slow performance and very large write amplification.
# Future Work
As described in the `Motivation` section, we can further explore the distributed planner at the physical execution level, by introducing a mechanism like Spark's shuffle to improve parallelism and reduce the stages of intermediate pipeline breakers.

View File

@@ -0,0 +1,303 @@
---
Feature Name: table-engine-refactor
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/1869
Date: 2023-07-06
Author: "Yingwen <realevenyag@gmail.com>"
---
Refactor Table Engine
----------------------
# Summary
Refactor table engines to address several historical tech debts.
# Motivation
Both `Frontend` and `Datanode` have to deal with multiple regions in a table. This results in code duplication and an additional burden on the `Datanode`.
Before:
```mermaid
graph TB
subgraph Frontend["Frontend"]
subgraph MyTable
A("region 0, 2 -> Datanode0")
B("region 1, 3 -> Datanode1")
end
end
MyTable --> MetaSrv
MetaSrv --> ETCD
MyTable-->TableEngine0
MyTable-->TableEngine1
subgraph Datanode0
Procedure0("procedure")
TableEngine0("table engine")
region0
region2
mytable0("my_table")
Procedure0-->mytable0
TableEngine0-->mytable0
mytable0-->region0
mytable0-->region2
end
subgraph Datanode1
Procedure1("procedure")
TableEngine1("table engine")
region1
region3
mytable1("my_table")
Procedure1-->mytable1
TableEngine1-->mytable1
mytable1-->region1
mytable1-->region3
end
subgraph manifest["table manifest"]
M0("my_table")
M1("regions: [0, 1, 2, 3]")
end
mytable1-->manifest
mytable0-->manifest
RegionManifest0("region manifest 0")
RegionManifest1("region manifest 1")
RegionManifest2("region manifest 2")
RegionManifest3("region manifest 3")
region0-->RegionManifest0
region1-->RegionManifest1
region2-->RegionManifest2
region3-->RegionManifest3
```
`Datanodes` can update the same manifest file for a table, since a table's regions are assigned to different nodes in the cluster. We also have to run procedures on the `Datanode` to ensure the table manifest is consistent with the region manifests. A "table" in a `Datanode` is only a subset of the table's regions; the `Datanode` is much closer to the `RegionServer` in `HBase`, which only deals with regions.
In cluster mode, we store table metadata in both etcd and the table manifest, so the table manifest becomes redundant. We can remove the table manifest if we refactor the table engines into region engines that only care about regions. What's more, we no longer need to run those procedures on the `Datanode`.
After:
```mermaid
graph TB
subgraph Frontend["Frontend"]
direction LR
subgraph MyTable
A("region 0, 2 -> Datanode0")
B("region 1, 3 -> Datanode1")
end
end
MyTable --> MetaSrv
MetaSrv --> ETCD
MyTable-->RegionEngine
MyTable-->RegionEngine1
subgraph Datanode0
RegionEngine("region engine")
region0
region2
RegionEngine-->region0
RegionEngine-->region2
end
subgraph Datanode1
RegionEngine1("region engine")
region1
region3
RegionEngine1-->region1
RegionEngine1-->region3
end
RegionManifest0("region manifest 0")
RegionManifest1("region manifest 1")
RegionManifest2("region manifest 2")
RegionManifest3("region manifest 3")
region0-->RegionManifest0
region1-->RegionManifest1
region2-->RegionManifest2
region3-->RegionManifest3
```
This RFC proposes to refactor table engines into region engines as a first step to making the `Datanode` act like a `RegionServer`.
# Details
## Overview
We plan to gradually refactor the `TableEngine` trait into `RegionEngine`. This RFC focuses on the `mito` engine, as it is the default table engine and the most complicated one.
Currently, `MitoEngine` is built upon `StorageEngine`, which manages regions of the `mito` engine. Since `MitoEngine` becomes a region engine, we can combine `StorageEngine` with `MitoEngine` to simplify our code structure.
The chart below shows the overall architecture of the `MitoEngine`.
```mermaid
classDiagram
class MitoEngine~LogStore~ {
-WorkerGroup workers
}
class MitoRegion {
+VersionControlRef version_control
-RegionId region_id
-String manifest_dir
-AtomicI64 last_flush_millis
+region_id() RegionId
+scan() ChunkReaderImpl
}
class RegionMap {
-HashMap&lt;RegionId, MitoRegionRef&gt; regions
}
class ChunkReaderImpl
class WorkerGroup {
-Vec~RegionWorker~ workers
}
class RegionWorker {
-RegionMap regions
-Sender sender
-JoinHandle handle
}
class RegionWorkerThread~LogStore~ {
-RegionMap regions
-Receiver receiver
-Wal~LogStore~ wal
-ObjectStore object_store
-MemtableBuilderRef memtable_builder
-FlushSchedulerRef~LogStore~ flush_scheduler
-FlushStrategy flush_strategy
-CompactionSchedulerRef~LogStore~ compaction_scheduler
-FilePurgerRef file_purger
}
class Wal~LogStore~ {
-LogStore log_store
}
class MitoConfig
MitoEngine~LogStore~ o-- MitoConfig
MitoEngine~LogStore~ o-- MitoRegion
MitoEngine~LogStore~ o-- WorkerGroup
MitoRegion o-- VersionControl
MitoRegion -- ChunkReaderImpl
WorkerGroup o-- RegionWorker
RegionWorker o-- RegionMap
RegionWorker -- RegionWorkerThread~LogStore~
RegionWorkerThread~LogStore~ o-- RegionMap
RegionWorkerThread~LogStore~ o-- Wal~LogStore~
```
We replace the `RegionWriter` with a `RegionWorker` that processes write requests and DDL requests.
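As a rough sketch of this worker pattern (the request type and handling below are placeholders, not the actual mito types), each worker owns a channel receiver and drains requests one by one, so all mutations to the regions it owns are naturally serialized:

```rust
use std::sync::mpsc::{channel, Receiver};
use std::thread::{self, JoinHandle};

/// Placeholder request type; the real worker handles writes and DDLs.
enum RegionRequest {
    Write(String),
    Alter(String),
    Stop,
}

/// Spawn a worker thread that processes requests sequentially, which
/// avoids extra locking on the regions the worker owns.
fn spawn_worker(receiver: Receiver<RegionRequest>) -> JoinHandle<()> {
    thread::spawn(move || {
        while let Ok(req) = receiver.recv() {
            match req {
                RegionRequest::Write(payload) => println!("write: {payload}"),
                RegionRequest::Alter(ddl) => println!("alter: {ddl}"),
                RegionRequest::Stop => break,
            }
        }
    })
}

fn main() {
    let (sender, receiver) = channel();
    let handle = spawn_worker(receiver);
    sender.send(RegionRequest::Write("row".to_string())).unwrap();
    sender.send(RegionRequest::Stop).unwrap();
    handle.join().unwrap();
}
```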
## Metadata
We also merge the region's metadata with the table's metadata, which should make the metadata much easier to maintain.
```mermaid
classDiagram
class VersionControl {
-CowCell~Version~ version
-AtomicU64 committed_sequence
}
class Version {
-RegionMetadataRef metadata
-MemtableVersionRef memtables
-LevelMetasRef ssts
-SequenceNumber flushed_sequence
-ManifestVersion manifest_version
}
class MemtableVersion {
-MemtableRef mutable
-Vec~MemtableRef~ immutables
+mutable_memtable() MemtableRef
+immutable_memtables() &[MemtableRef]
+freeze_mutable(MemtableRef new_mutable) MemtableVersion
}
class LevelMetas {
-LevelMetaVec levels
-AccessLayerRef sst_layer
-FilePurgerRef file_purger
-Option~i64~ compaction_time_window
}
class LevelMeta {
-Level level
-HashMap&lt;FileId, FileHandle&gt; files
}
class FileHandle {
-FileMeta meta
-bool compacting
-AtomicBool deleted
-AccessLayerRef sst_layer
-FilePurgerRef file_purger
}
class FileMeta {
+RegionId region_id
+FileId file_id
+Option&lt;Timestamp, Timestamp&gt; time_range
+Level level
+u64 file_size
}
VersionControl o-- Version
Version o-- RegionMetadata
Version o-- MemtableVersion
Version o-- LevelMetas
LevelMetas o-- LevelMeta
LevelMeta o-- FileHandle
FileHandle o-- FileMeta
class RegionMetadata {
+RegionId region_id
+VersionNumber version
+SchemaRef table_schema
+Vec~usize~ primary_key_indices
+Vec~usize~ value_indices
+ColumnId next_column_id
+TableOptions region_options
+DateTime~Utc~ created_on
+RegionSchemaRef region_schema
}
class RegionSchema {
-SchemaRef user_schema
-StoreSchemaRef store_schema
-ColumnsMetadataRef columns
}
class Schema
class StoreSchema {
-Vec~ColumnMetadata~ columns
-SchemaRef schema
-usize row_key_end
-usize user_column_end
}
class ColumnsMetadata {
-Vec~ColumnMetadata~ columns
-HashMap&lt;String, usize&gt; name_to_col_index
-usize row_key_end
-usize timestamp_key_index
-usize user_column_end
}
class ColumnMetadata
RegionMetadata o-- RegionSchema
RegionMetadata o-- Schema
RegionSchema o-- StoreSchema
RegionSchema o-- Schema
RegionSchema o-- ColumnsMetadata
StoreSchema o-- ColumnsMetadata
StoreSchema o-- Schema
StoreSchema o-- ColumnMetadata
ColumnsMetadata o-- ColumnMetadata
```
# Drawback
This is a breaking change.
# Future Work
- Rename `TableEngine` to `RegionEngine`
- Simplify schema relationship in the `mito` engine
- Refactor the `Datanode` into a `RegionServer`.

View File

@@ -0,0 +1,202 @@
---
Feature Name: metric-engine
Tracking Issue: TBD
Date: 2023-07-10
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
# Summary
A new metric engine that can significantly enhance our ability to handle the tremendous number of small tables in scenarios like Prometheus metrics, by leveraging a synthetic wide table that offers storage and metadata multiplexing capabilities over the existing engine.
# Motivation
The concept "Table" in GreptimeDB is a bit "heavy" compared to other time-series storage like Prometheus or VictoriaMetrics. This has lots of disadvantages in aspects from performance, footprint, and storage to cost.
# Details
## Top level description
- User Interface
This feature will add a new type of storage engine. It might be available as an option like `with ENGINE=mito`, or as an internal interface like auto-creating tables on Prometheus remote write. From the user's side, there is no difference from tables in the mito engine. All DDL like `CREATE` and `ALTER`, and DML like `SELECT`, should be supported.
- Implementation Overlook
This new engine doesn't re-implement low-level components like file R/W. It's a wrapper layer over the existing mito engine, with extra storage and metadata multiplexing capabilities. I.e., it exposes multiple tables based on one mito engine table, like this:
``` plaintext
┌───────────────┐ ┌───────────────┐ ┌───────────────┐
│ Metric Engine │ │ Metric Engine │ │ Metric Engine │
│ Table 1 │ │ Table 2 │ │ Table 3 │
└───────────────┘ └───────────────┘ └───────────────┘
▲ ▲ ▲
│ │ │
└───────────────┼───────────────────┘
┌─────────┴────────┐
│ Metric Region │
│ Engine │
│ ┌─────────────┤
│ │ Mito Region │
│ │ Engine │
└────▲─────────────┘
┌─────┴───────────────┐
│ │
│ Mito Engine Table │
│ │
└─────────────────────┘
```
The following parts will describe these implementation details:
- How to route these metric engine tables and how those tables are distributed
- How to maintain the schema and other metadata of the underlying mito engine table
- How to maintain the schema of metric engine table
- How the query goes
## Routing
Before this change, the region route rule was based on a group of partition keys. The relation of physical table to region is one-to-many.
``` rust
pub struct PartitionDef {
partition_columns: Vec<String>,
partition_bounds: Vec<PartitionBound>,
}
```
And for metric engine tables, the key difference is that we split the concepts of "physical table" and "logical table". As in the previous ASCII chart, multiple logical tables are based on one physical table, so the relationship of logical table to region becomes many-to-many. Thus, we must include the (logical) table name in the partition rules.
Considering that the partition/route interface is a generic map from a string array to a region id, all we need to do is insert the logical table name into the request:
``` rust
fn route(request: Vec<String>) -> RegionId;
```
The next question is where to do this conversion. The basic idea is to dispatch different routing behaviors based on the engine type. Since we have all the necessary information in the frontend, it's a good place to do that, and it leaves the meta server untouched. The essential change is to associate the engine type with the route rule.
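A minimal sketch of this dispatch, with made-up types (the real route request and engine representation differ):

```rust
/// Hypothetical engine marker carried with the table's metadata.
enum EngineType {
    Mito,
    Metric,
}

type RegionId = u64;

/// Stand-in for the generic string-array -> region-id routing map.
fn route(partition_values: Vec<String>) -> RegionId {
    partition_values.len() as RegionId
}

/// In the frontend: for metric engine tables, prepend the logical table
/// name to the partition values before routing; mito tables are unchanged.
fn route_for_engine(
    engine: EngineType,
    logical_table: &str,
    mut partition_values: Vec<String>,
) -> RegionId {
    if let EngineType::Metric = engine {
        partition_values.insert(0, logical_table.to_string());
    }
    route(partition_values)
}
```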
## Physical Region Schema
The idea "physical wide table" is to perform column-level multiplexing. I.e., map all logical columns to physical columns by their names.
```
┌────────────┐ ┌────────────┐ ┌────────────┐
│ Table 1 │ │ Table 2 │ │ Table 3 │
├───┬────┬───┤ ├───┬────┬───┤ ├───┬────┬───┤
│C1 │ C2 │ C3│ │C1 │ C3 │ C5├──────┐ │C2 │ C4 │ C6│
└─┬─┴──┬─┴─┬─┘ ┌────┴───┴──┬─┴───┘ │ └─┬─┴──┬─┴─┬─┘
│ │ │ │ │ │ │ │ │
│ │ │ │ └──────────┐ │ │ │ │
│ │ │ │ │ │ │ │ │
│ │ │ │ ┌─────────────────┐ │ │ │ │ │
│ │ │ │ │ Physical Table │ │ │ │ │ │
│ │ │ │ ├──┬──┬──┬──┬──┬──┘ │ │ │ │ │
└────x───x───┴─►│C1│C2│C3│C4│C5│C6◄─┼─x────x────x───┘
│ │ └──┘▲─┘▲─┴─▲└─▲└──┘ │ │ │ │
│ │ │ │ │ │ │ │ │ │
├───x──────────┘ ├───x──x─────┘ │ │ │
│ │ │ │ │ │ │ │
│ └─────────────┘ │ └───────┘ │ │
│ │ │ │
└─────────────────────x───────────────┘ │
│ │
└────────────────────┘
```
This approach is very straightforward but has one problem: it breaks when two columns share the same name but have different semantic types (time index, tag or field) or data types. E.g., `CREATE TABLE t1 (c1 timestamp(3) TIME INDEX)` and `CREATE TABLE t2 (c1 STRING PRIMARY KEY)`.
One possible workaround is to prefix each column with its data type and semantic type, like `_STRING_PK_c1`. However, considering that the primary goal at present is to support data from monitoring metrics like Prometheus remote write, it's acceptable not to support this at first, because the data types here are often simple and limited.
The next point is changing the physical table's schema. This is only needed when creating a new logical table or altering an existing one. Typically, table creation and altering are explicit, so we only need to emit an add-column request to the underlying physical table when processing the logical table's DDL. GreptimeDB can create or alter tables automatically for some protocols, but the internal logic is the same.
Also for simplicity, we don't support shrinking the underlying table at first. This can be achieved later by introducing a mechanism on the physical columns.
The frontend does not need to keep the physical table's schema.
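To illustrate the DDL path, here is a toy diff of logical columns against the physical wide table (the helper name is hypothetical):

```rust
use std::collections::HashSet;

/// When a logical table is created or altered, compute which of its
/// columns are missing from the physical wide table and must be added
/// via an add-column request.
fn columns_to_add(
    physical_columns: &HashSet<String>,
    logical_columns: &[String],
) -> Vec<String> {
    logical_columns
        .iter()
        .filter(|c| !physical_columns.contains(*c))
        .cloned()
        .collect()
}
```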
## Metadata of physical regions
Those metric engine regions need to store extra metadata, like the schemas of the logical tables or all the logical tables' names. That information is relatively simple and can be stored in a key-value format. For now, we have to use another physical mito region for metadata. This involves an issue with region scheduling: since we don't have the ability to perform affinity scheduling yet, the initial version will simply assume the data region and the metadata region are on the same instance. See alternatives - "other storage for physical region's metadata" for a possible future improvement.
Here is the schema of the metadata region and how we use it. The `CREATE TABLE` clause of the metadata region looks like the following; note that it isn't actually created via SQL.
``` sql
CREATE TABLE metadata(
ts timestamp time index,
key string primary key,
value string
);
```
The `ts` field is just a placeholder, required by the constraint that a mito region must contain a time index field; it will always be `0`. The other two fields, `key` and `value`, are used as a k-v storage. It contains two groups of keys (a small key-building sketch follows the list):
- `__table_<TABLE_NAME>` is used for marking table existence. It doesn't have value.
- `__column_<TABLE_NAME>_<COLUMN_NAME>` is used for marking column existence; the value is the column's semantic type.
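A sketch of how these keys might be built (the helper names are illustrative; the prefixes are the ones defined above):

```rust
/// Key marking that a logical table exists; it has no value.
fn table_key(table: &str) -> String {
    format!("__table_{table}")
}

/// Key marking that a column of a logical table exists; the value stored
/// under it is the column's semantic type.
fn column_key(table: &str, column: &str) -> String {
    format!("__column_{table}_{column}")
}
```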
## Physical region implementation
This RFC proposes to add a new region implementation named "MetricRegion". As shown in the first chart, it's wrapped over the existing mito region. This section describes the implementation details. First, here is a chart showing what the region hierarchy looks like:
```plaintext
┌───────────────────────┐
│ Metric Region │
│ │
│ ┌────────┬──────────┤
│ │ Mito │ Mito │
│ │ Region │ Region │
│ │ for │ for │
│ │ Data │ Metadata │
└───┴────────┴──────────┘
```
All upper levels only see the Metric Region. E.g., the Meta Server schedules on this region, and the Frontend routes requests to this Metric Region's id. To be scheduled (opened or closed, etc.), the Metric Region needs to implement its own procedures. Most of those procedures can simply be assembled from the underlying Mito Regions', but those related to data, like alter or drop, will have their own new logic.
Another point is the region id. Since the region id is used widely, from the meta server to persisted state, it's better to keep it unchanged. This means we can't use the same id for the two underlying regions; each needs its own. To achieve this, this RFC proposes a concept named "region id group". A region id group is a group of region ids that are bound to different purposes, like the two underlying regions here.
This reserves the first 8 bits of the `u32` region number for grouping. Each group has one main id (the first one) and other sub ids (the rest, non-zero ids). All components other than the region implementation itself are unaware of the existence of the region id group; they only see the main id. The region implementation is responsible for managing and using the region id group (a packing sketch follows the layout below).
```plaintext
63 31 23 0
┌────────────────────────────────────┬──────────┬──────────────────┐
│ Table Id(32) │ Group(8) │ Region Number(24)│
└────────────────────────────────────┴──────────┴──────────────────┘
Region Id(32)
```
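To make the layout concrete, here is a sketch of packing and unpacking these ids, using the field widths from the diagram (the function names are illustrative):

```rust
/// Pack a region id: 32-bit table id, 8-bit group, 24-bit region number.
fn make_region_id(table_id: u32, group: u8, region_number: u32) -> u64 {
    debug_assert!(region_number < (1 << 24));
    ((table_id as u64) << 32) | ((group as u64) << 24) | (region_number as u64)
}

/// Extract the 8-bit group from a region id.
fn group_of(region_id: u64) -> u8 {
    ((region_id >> 24) & 0xff) as u8
}

fn main() {
    let data_region = make_region_id(42, 0, 1); // main id, group 0
    let meta_region = make_region_id(42, 1, 1); // sub id for metadata
    assert_eq!(group_of(data_region), 0);
    assert_eq!(group_of(meta_region), 1);
}
```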
## Routing in meta server
From previous sections, we can conclude the following points about routing:
- Each "logical table" has its own, universe unique table id.
- Logical table doesn't have physical region, they share the same physical region with other logical tables.
- Route rule of logical table's is a strict subset of physical table's.
To associate the logical table with physical region, we need to specify necessary information in the create table request. Specifically, the table type and its parent table. This require to change our gRPC proto's definition. And once meta recognize the table to create is a logical table, it will use the parent table's region to create route entry.
And to reduce the cost of a region failover (which needs to update the physical table's route info), we'd better split the current route table structure into two parts:
```rust
region_route: Map<TableName, [RegionId]>,
node_route: Map<RegionId, NodeId>,
```
By doing this, on each failover the meta server only needs to update the second `node_route` map and can leave the first one untouched.
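A toy illustration of why the split helps (the types are simplified from the snippet above):

```rust
use std::collections::HashMap;

type RegionId = u64;
type NodeId = u64;

/// On failover, only the region -> node mapping changes; the
/// table -> regions mapping stays untouched.
fn failover(
    node_route: &mut HashMap<RegionId, NodeId>,
    failed_node: NodeId,
    replacement: NodeId,
) {
    for node in node_route.values_mut() {
        if *node == failed_node {
            *node = replacement;
        }
    }
}
```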
## Query
Like other existing components, a user query always starts in the frontend. In the planning phase, the frontend needs to fetch the related schemas of the queried tables. This part is the same as before; i.e., the changes in this RFC don't affect components above the `Table` abstraction.
# Alternatives
## Other routing method
We could also apply this "special" route rule in the meta server, but there would be no real difference from the proposed method.
## Other storage for physical region's metadata
Once we have implemented the "region family" that allows multiple physical schemas to exist in one region, we can store the metadata and table data in one region.
Before that, we could also let the `MetricRegion` hold a `KvBackend` to access the storage layer directly, but this breaks the abstraction in some way.
# Drawbacks
Since the physical storage is mixed together, it's hard to do fine-grained operations at the table level, like configuring TTL, memtable size, or compaction strategy per table, or defining different partition rules for different tables. For scenarios like this, it's better to move the table out of the metric engine and "upgrade" it to a normal mito engine table. This requires a low-cost migration process, and we have to ensure data consistency during the migration, which may require an out-of-service period.

View File

@@ -0,0 +1,175 @@
---
Feature Name: table-trait-refactor
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/2065
Date: 2023-08-04
Author: "Ruihang Xia <waynestxia@gmail.com>"
---
Refactor Table Trait
--------------------
# Summary
Refactor `Table` trait to adapt the new region server architecture and make code more straightforward.
# Motivation
The `Table` trait was designed under the assumption that both frontend and datanode keep the same concepts, with all operations served by a `Table`. However, in practice we found that not all operations are suitable to be served by a `Table`. For example, the `Table` doesn't hold actual physical data itself, so operations like write or alter are simply a proxy over the underlying regions. And in the recent refactor of the datanode ([rfc table-engine-refactor](./2023-07-06-table-engine-refactor.md)), we are changing the datanode into a region server that is only aware of `Region` things. This also calls for a refactor of the `Table` trait.
# Details
## Definitions
The current `Table` trait contains the following methods:
```rust
pub trait Table {
/// Get a reference to the schema for this table
fn schema(&self) -> SchemaRef;
/// Get a reference to the table info.
fn table_info(&self) -> TableInfoRef;
/// Get the type of this table for metadata/catalog purposes.
fn table_type(&self) -> TableType;
/// Insert values into table.
///
/// Returns number of inserted rows.
async fn insert(&self, _request: InsertRequest) -> Result<usize>;
/// Generate a record batch stream for querying.
async fn scan_to_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream>;
/// Tests whether the table provider can make use of any or all filter expressions
/// to optimise data retrieval.
fn supports_filters_pushdown(&self, filters: &[&Expr]) -> Result<Vec<FilterPushDownType>>;
/// Alter table.
async fn alter(&self, _context: AlterContext, _request: &AlterTableRequest) -> Result<()>;
/// Delete rows in the table.
///
/// Returns number of deleted rows.
async fn delete(&self, _request: DeleteRequest) -> Result<usize>;
/// Flush table.
///
/// Options:
/// - region_number: specify region to flush.
/// - wait: Whether to wait until flush is done.
async fn flush(&self, region_number: Option<RegionNumber>, wait: Option<bool>) -> Result<()>;
/// Close the table.
async fn close(&self, _regions: &[RegionNumber]) -> Result<()>;
/// Get region stats in this table.
fn region_stats(&self) -> Result<Vec<RegionStat>>;
/// Return true if contains the region
fn contains_region(&self, _region: RegionNumber) -> Result<bool>;
/// Get statistics for this table, if available
fn statistics(&self) -> Option<TableStatistics>;
async fn compact(&self, region_number: Option<RegionNumber>, wait: Option<bool>) -> Result<()>;
}
```
We can divide those methods into three categories from the perspective of functionality:
| Retrieve Metadata | Manipulate Data | Read Data |
| :------------------------: | :-------------: | :--------------: |
| `schema` | `insert` | `scan_to_stream` |
| `table_info` | `alter` | |
| `table_type` | `delete` | |
| `supports_filter_pushdown` | `flush` | |
| `region_stats` | `close` | |
| `contains_region` | `compact` | |
| `statistics` | | |
Considering that most access to metadata happens in the frontend (like routing or querying), all persisted data is stored in regions, and only the query engine needs to read data, we can divide the `Table` trait into three concepts:
- struct `Table` provides metadata:
```rust
impl Table {
/// Get a reference to the schema for this table
fn schema(&self) -> SchemaRef;
/// Get a reference to the table info.
fn table_info(&self) -> TableInfoRef;
/// Get the type of this table for metadata/catalog purposes.
fn table_type(&self) -> TableType;
/// Get statistics for this table, if available
fn statistics(&self) -> Option<TableStatistics>;
fn to_data_source(&self) -> DataSourceRef;
}
```
- Requests to region server
- `InsertRequest`
- `AlterRequest`
- `DeleteRequest`
- `FlushRequest`
- `CompactRequest`
- `CloseRequest`
- trait `DataSource` provides data (`RecordBatch`)
```rust
trait DataSource {
fn get_stream(&self, request: ScanRequest) -> Result<SendableRecordBatchStream>;
}
```
## Use `Table`
`Table` will only be used in the frontend. It's constructed from an `OpenTableRequest` or a `CreateTableRequest`.
`Table` also provides a method `to_data_source` to generate a `DataSource` from itself. But this method is only for non-`TableType::Base` tables (i.e., `TableType::View` and `TableType::Temporary`), because a `TableType::Base` table doesn't hold actual data itself; its `DataSource` should be constructed from the `Region` directly (in other words, it's a remote query).
Constructing a `DataSource` also requires some extra information, named `TableSourceProvider`:
```rust
type TableFactory = Arc<dyn Fn() -> DataSourceRef>;
pub enum TableSourceProvider {
Base,
View(LogicalPlan),
Temporary(TableFactory),
}
```
## Use `DataSource`
`DataSource` will be adapted to DataFusion's `TableProvider`, which can be `scan()`ed in a `TableScan` plan.
In the frontend this is done in the planning phase, and the datanode will have one implementation for `Region` to generate the record batch stream.
## Interact with RegionServer
Previously, persisted state change operations went through the old `Table` trait, as said before. Now they will come from the action source, like a procedure or protocol handler, directly to the region server. E.g., on alter table, the corresponding procedure will generate its `AlterRequest` and send it to the regions; a write request will be split in the frontend handler and sent to the regions. `Table` only provides necessary metadata, like route information if needed, but is no longer a necessary part.
## Implement temporary table
Temporary table is a special table that doesn't resolve to any persistent physical region (a minimal sketch follows the list). Examples are:
- the `Numbers` table for testing, which produces a record batch that contains 0-100 integers.
- tables in the information schema. They are an interface for querying the catalog's metadata. The contents are generated on the fly with information from the `CatalogManager`, which can be held in the `TableFactory`.
- function tables that produce data generated by a formula or a function, like something that always returns `sin(current_timestamp())`.
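As a toy illustration of such a table in this model (with heavily simplified stand-ins for `ScanRequest` and the stream type, which is really a `SendableRecordBatchStream`):

```rust
/// Simplified stand-ins for the real request and batch types.
struct ScanRequest;
type RecordBatch = Vec<u32>;

trait DataSource {
    fn get_stream(&self, request: ScanRequest) -> Vec<RecordBatch>;
}

/// A temporary table backed by no region: batches are generated on the fly.
struct NumbersSource;

impl DataSource for NumbersSource {
    fn get_stream(&self, _request: ScanRequest) -> Vec<RecordBatch> {
        // One batch containing 0..100, like the testing `Numbers` table.
        vec![(0..100).collect()]
    }
}
```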
## Relationship among those components
Here is a diagram to show the relationship among those components, and how they interact with each other.
```mermaid
erDiagram
CatalogManager ||--|{ Table : manages
Table ||--|{ DataStream : generates
Table ||--|{ Region : routes
Region ||--|{ DataStream : implements
DataStream }|..|| QueryEngine : adapts-to
Procedure ||--|{ Region : requests
Protocol ||--|{ Region : writes
Protocol ||--|{ QueryEngine : queries
```
# Drawback
This is a breaking change.

View File

@@ -0,0 +1,90 @@
---
Feature Name: Update Metadata in single transaction
Tracking Issue: https://github.com/GreptimeTeam/greptimedb/issues/1715
Date: 2023-08-13
Author: "Feng Yangsen <fengys1996@gmail.com>, Xu Wenkang <wenymedia@gmail.com>"
---
# Summary
Update Metadata in single transaction.
# Motivation
Currently, multiple transactions are involved during a procedure. This implementation is inefficient, and it's hard to keep the data consistent. Therefore, we can update multiple pieces of metadata in a single transaction.
# Details
Now we have the following table metadata keys:
**TableInfo**
```rust
// __table_info/{table_id}
pub struct TableInfoKey {
table_id: TableId,
}
pub struct TableInfoValue {
pub table_info: RawTableInfo,
version: u64,
}
```
**TableRoute**
```rust
// __table_route/{table_id}
pub struct NextTableRouteKey {
table_id: TableId,
}
pub struct TableRoute {
pub region_routes: Vec<RegionRoute>,
}
```
**DatanodeTable**
```rust
// __table_route/{datanode_id}/{table_id}
pub struct DatanodeTableKey {
datanode_id: DatanodeId,
table_id: TableId,
}
pub struct DatanodeTableValue {
pub table_id: TableId,
pub regions: Vec<RegionNumber>,
version: u64,
}
```
**TableNameKey**
```rust
// __table_name/{CatalogName}/{SchemaName}/{TableName}
pub struct TableNameKey<'a> {
pub catalog: &'a str,
pub schema: &'a str,
pub table: &'a str,
}
pub struct TableNameValue {
table_id: TableId,
}
```
This table metadata is only updated in the following operations.
## Region Failover
It needs to update the `TableRoute` key and `DatanodeTable` keys. If the `TableRoute` equals the snapshot of `TableRoute` taken when submitting the failover task, then we can safely update these keys.
After submitting a failover task, while it waits to acquire locks for execution, the `TableRoute` may be updated by another task. So after acquiring the lock, we get the latest `TableRoute` again and then execute only if still needed.
## Create Table DDL
Creates all of the above keys; `TableRoute` and `TableInfo` should be empty beforehand.
The **TableNameKey**'s lock will be held by the procedure framework.
## Drop Table DDL
`TableInfoKey` and `NextTableRouteKey` will be re-added with a `__removed-` prefix, and the other keys above will be deleted. The transaction will not compare any keys.
## Alter Table DDL
1. Rename table: updates `TableInfo` and `TableName`. The transaction compares `TableInfo`; the new `TableNameKey` should be empty, and `TableInfo` should equal the snapshot taken when submitting the DDL.
The locks of the old and new **TableNameKey** will be held by the procedure framework.
2. Alter table: updates `TableInfo`. `TableInfo` should equal the snapshot taken when submitting the DDL. A sketch of such a compare-and-put transaction follows.
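To sketch the shape of a single metadata transaction (an etcd-style compare-and-put; the `Txn` API below is hypothetical, not the actual kv backend):

```rust
/// Hypothetical compare-and-put transaction over a kv store.
struct Txn {
    compares: Vec<(String, Vec<u8>)>, // each key must currently equal its value
    puts: Vec<(String, Vec<u8>)>,     // applied only if all compares pass
}

impl Txn {
    fn new() -> Self {
        Txn { compares: Vec::new(), puts: Vec::new() }
    }

    fn compare_eq(mut self, key: &str, expected: &[u8]) -> Self {
        self.compares.push((key.to_string(), expected.to_vec()));
        self
    }

    fn put(mut self, key: &str, value: &[u8]) -> Self {
        self.puts.push((key.to_string(), value.to_vec()));
        self
    }
}

/// Alter table: the put is applied only if TableInfo still equals the
/// snapshot taken when the DDL was submitted, all in one transaction.
fn alter_table_txn(table_id: u64, snapshot: &[u8], new_info: &[u8]) -> Txn {
    Txn::new()
        .compare_eq(&format!("__table_info/{table_id}"), snapshot)
        .put(&format!("__table_info/{table_id}"), new_info)
}
```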

View File

@@ -1,2 +1,2 @@
[toolchain]
channel = "nightly-2023-02-26"
channel = "nightly-2023-08-07"

View File

@@ -2,14 +2,17 @@
# This script is used to download built dashboard assets from the "GreptimeTeam/dashboard" repository.
set -e
set -e -x
declare -r SCRIPT_DIR=$(cd $(dirname ${0}) >/dev/null 2>&1 && pwd)
declare -r ROOT_DIR=$(dirname ${SCRIPT_DIR})
declare -r STATIC_DIR="$ROOT_DIR/src/servers/dashboard"
OUT_DIR="${1:-$SCRIPT_DIR}"
RELEASE_VERSION="$(cat $STATIC_DIR/VERSION)"
RELEASE_VERSION="$(cat $STATIC_DIR/VERSION | tr -d '\t\r\n ')"
echo "Downloading assets to dir: $OUT_DIR"
cd $OUT_DIR
# Download the SHA256 checksum attached to the release. To verify the integrity
# of the download, this checksum will be used to check the download tar file
# containing the built dashboard assets.

View File

@@ -61,7 +61,16 @@ if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
fi
echo "Downloading ${BIN}, OS: ${OS_TYPE}, Arch: ${ARCH_TYPE}, Version: ${VERSION}"
PACKAGE_NAME="${BIN}-${OS_TYPE}-${ARCH_TYPE}-${VERSION}.tar.gz"
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz"
tar xvf ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && rm ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && echo "Run './${BIN} --help' to get started"
if [ -n "${PACKAGE_NAME}" ]; then
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${PACKAGE_NAME}"
# Extract the binary and clean the rest.
tar xvf "${PACKAGE_NAME}" && \
mv "${PACKAGE_NAME%.tar.gz}/${BIN}" "${PWD}" && \
rm -r "${PACKAGE_NAME}" && \
rm -r "${PACKAGE_NAME%.tar.gz}" && \
echo "Run './${BIN} --help' to get started"
fi
fi

View File

@@ -5,15 +5,18 @@ edition.workspace = true
license.workspace = true
[dependencies]
arrow-flight.workspace = true
common-base = { path = "../common/base" }
common-error = { path = "../common/error" }
common-time = { path = "../common/time" }
datatypes = { path = "../datatypes" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "0bebe5f69c91cdfbce85cb8f45f9fcd28185261c" }
common-base = { workspace = true }
common-error = { workspace = true }
common-macro = { workspace = true }
common-time = { workspace = true }
datatypes = { workspace = true }
greptime-proto.workspace = true
prost.workspace = true
snafu = { version = "0.7", features = ["backtraces"] }
tonic.workspace = true
[build-dependencies]
tonic-build = "0.9"
[dev-dependencies]
paste = "1.0"

View File

@@ -15,15 +15,17 @@
use std::any::Any;
use common_error::ext::ErrorExt;
use common_error::prelude::StatusCode;
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use datatypes::prelude::ConcreteDataType;
use snafu::prelude::*;
use snafu::Location;
pub type Result<T> = std::result::Result<T, Error>;
#[derive(Debug, Snafu)]
#[derive(Snafu)]
#[snafu(visibility(pub))]
#[stack_trace_debug]
pub enum Error {
#[snafu(display("Unknown proto column datatype: {}", datatype))]
UnknownColumnDataType { datatype: i32, location: Location },
@@ -34,25 +36,17 @@ pub enum Error {
location: Location,
},
#[snafu(display(
"Failed to convert column default constraint, column: {}, source: {}",
column,
source
))]
#[snafu(display("Failed to convert column default constraint, column: {}", column))]
ConvertColumnDefaultConstraint {
column: String,
#[snafu(backtrace)]
location: Location,
source: datatypes::error::Error,
},
#[snafu(display(
"Invalid column default constraint, column: {}, source: {}",
column,
source
))]
#[snafu(display("Invalid column default constraint, column: {}", column))]
InvalidColumnDefaultConstraint {
column: String,
#[snafu(backtrace)]
location: Location,
source: datatypes::error::Error,
},
}

File diff suppressed because it is too large.

View File

@@ -15,7 +15,7 @@
pub mod error;
pub mod helper;
pub mod prometheus {
pub mod prom_store {
pub mod remote {
pub use greptime_proto::prometheus::remote::*;
}
@@ -23,4 +23,5 @@ pub mod prometheus {
pub mod v1;
pub use greptime_proto;
pub use prost::DecodeError;

View File

@@ -12,7 +12,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use datatypes::schema::{ColumnDefaultConstraint, ColumnSchema};
use std::collections::HashMap;
use datatypes::schema::{ColumnDefaultConstraint, ColumnSchema, COMMENT_KEY};
use snafu::ResultExt;
use crate::error::{self, Result};
@@ -20,7 +22,7 @@ use crate::helper::ColumnDataTypeWrapper;
use crate::v1::ColumnDef;
pub fn try_as_column_schema(column_def: &ColumnDef) -> Result<ColumnSchema> {
let data_type = ColumnDataTypeWrapper::try_new(column_def.datatype)?;
let data_type = ColumnDataTypeWrapper::try_new(column_def.data_type)?;
let constraint = if column_def.default_constraint.is_empty() {
None
@@ -34,9 +36,17 @@ pub fn try_as_column_schema(column_def: &ColumnDef) -> Result<ColumnSchema> {
)
};
ColumnSchema::new(&column_def.name, data_type.into(), column_def.is_nullable)
.with_default_constraint(constraint)
.context(error::InvalidColumnDefaultConstraintSnafu {
column: &column_def.name,
})
let mut metadata = HashMap::new();
if !column_def.comment.is_empty() {
metadata.insert(COMMENT_KEY.to_string(), column_def.comment.clone());
}
Ok(
ColumnSchema::new(&column_def.name, data_type.into(), column_def.is_nullable)
.with_default_constraint(constraint)
.context(error::InvalidColumnDefaultConstraintSnafu {
column: &column_def.name,
})?
.with_metadata(metadata),
)
}
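A hedged usage sketch of the new `try_as_column_schema` behavior (illustration only: import paths and the `Result` alias are assumed from the surrounding `api` crate, and `ColumnDef` is a prost-generated message, so `Default` fills the fields not shown here):

use api::v1::{ColumnDataType, ColumnDef};
use datatypes::schema::COMMENT_KEY;

fn column_def_roundtrip() -> Result<()> {
    // Hypothetical column definition; only the fields exercised above are set.
    let column_def = ColumnDef {
        name: "host".to_string(),
        data_type: ColumnDataType::String as i32,
        is_nullable: true,
        comment: "hostname tag".to_string(),
        ..Default::default()
    };
    let schema = try_as_column_schema(&column_def)?;
    // A non-empty comment lands in the column metadata under COMMENT_KEY.
    assert_eq!(
        schema.metadata().get(COMMENT_KEY).map(String::as_str),
        Some("hostname tag")
    );
    Ok(())
}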

src/auth/Cargo.toml (new file, 25 lines)

@@ -0,0 +1,25 @@
[package]
name = "auth"
version.workspace = true
edition.workspace = true
license.workspace = true
[features]
default = []
testing = []
[dependencies]
api.workspace = true
async-trait.workspace = true
common-error.workspace = true
common-macro.workspace = true
digest = "0.10"
hex = { version = "0.4" }
secrecy = { version = "0.8", features = ["serde", "alloc"] }
sha1 = "0.10"
snafu.workspace = true
sql.workspace = true
tokio.workspace = true
[dev-dependencies]
common-test-util.workspace = true

src/auth/src/common.rs (new file, 147 lines)

@@ -0,0 +1,147 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use digest::Digest;
use secrecy::SecretString;
use sha1::Sha1;
use snafu::{ensure, OptionExt};
use crate::error::{IllegalParamSnafu, InvalidConfigSnafu, Result, UserPasswordMismatchSnafu};
use crate::user_info::DefaultUserInfo;
use crate::user_provider::static_user_provider::{StaticUserProvider, STATIC_USER_PROVIDER};
use crate::{UserInfoRef, UserProviderRef};
pub(crate) const DEFAULT_USERNAME: &str = "greptime";
/// Constructs a [`UserInfo`](crate::user_info::UserInfo) impl with the given name,
/// falling back to the default username `greptime` if `None` is provided.
pub fn userinfo_by_name(username: Option<String>) -> UserInfoRef {
DefaultUserInfo::with_name(username.unwrap_or_else(|| DEFAULT_USERNAME.to_string()))
}
pub fn user_provider_from_option(opt: &String) -> Result<UserProviderRef> {
let (name, content) = opt.split_once(':').context(InvalidConfigSnafu {
value: opt.to_string(),
msg: "UserProviderOption must be in format `<option>:<value>`",
})?;
match name {
STATIC_USER_PROVIDER => {
let provider =
StaticUserProvider::try_from(content).map(|p| Arc::new(p) as UserProviderRef)?;
Ok(provider)
}
_ => InvalidConfigSnafu {
value: name.to_string(),
msg: "Invalid UserProviderOption",
}
.fail(),
}
}
type Username<'a> = &'a str;
type HostOrIp<'a> = &'a str;
#[derive(Debug, Clone)]
pub enum Identity<'a> {
UserId(Username<'a>, Option<HostOrIp<'a>>),
}
pub type HashedPassword<'a> = &'a [u8];
pub type Salt<'a> = &'a [u8];
/// Authentication information sent by the client.
pub enum Password<'a> {
PlainText(SecretString),
MysqlNativePassword(HashedPassword<'a>, Salt<'a>),
PgMD5(HashedPassword<'a>, Salt<'a>),
}
pub fn auth_mysql(
auth_data: HashedPassword,
salt: Salt,
username: &str,
save_pwd: &[u8],
) -> Result<()> {
ensure!(
auth_data.len() == 20,
IllegalParamSnafu {
msg: "Illegal mysql password length"
}
);
// ref: https://github.com/mysql/mysql-server/blob/a246bad76b9271cb4333634e954040a970222e0a/sql/auth/password.cc#L62
let hash_stage_2 = double_sha1(save_pwd);
let tmp = sha1_two(salt, &hash_stage_2);
// xor auth_data and tmp
let mut xor_result = [0u8; 20];
for i in 0..20 {
xor_result[i] = auth_data[i] ^ tmp[i];
}
let candidate_stage_2 = sha1_one(&xor_result);
if candidate_stage_2 == hash_stage_2 {
Ok(())
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
}
.fail()
}
}
fn sha1_two(input_1: &[u8], input_2: &[u8]) -> Vec<u8> {
let mut hasher = Sha1::new();
hasher.update(input_1);
hasher.update(input_2);
hasher.finalize().to_vec()
}
fn sha1_one(data: &[u8]) -> Vec<u8> {
let mut hasher = Sha1::new();
hasher.update(data);
hasher.finalize().to_vec()
}
fn double_sha1(data: &[u8]) -> Vec<u8> {
sha1_one(&sha1_one(data))
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_sha() {
let sha_1_answer: Vec<u8> = vec![
124, 74, 141, 9, 202, 55, 98, 175, 97, 229, 149, 32, 148, 61, 194, 100, 148, 248, 148,
27,
];
let sha_1 = sha1_one("123456".as_bytes());
assert_eq!(sha_1, sha_1_answer);
let double_sha1_answer: Vec<u8> = vec![
107, 180, 131, 126, 183, 67, 41, 16, 94, 228, 86, 141, 218, 125, 198, 126, 210, 202,
42, 217,
];
let double_sha1 = double_sha1("123456".as_bytes());
assert_eq!(double_sha1, double_sha1_answer);
let sha1_2_answer: Vec<u8> = vec![
132, 115, 215, 211, 99, 186, 164, 206, 168, 152, 217, 192, 117, 47, 240, 252, 142, 244,
37, 204,
];
let sha1_2 = sha1_two("123456".as_bytes(), "654321".as_bytes());
assert_eq!(sha1_2, sha1_2_answer);
}
}
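For context, a minimal client-side sketch (not part of this diff) of the mysql_native_password scramble that `auth_mysql` verifies: the client sends SHA1(password) XOR SHA1(salt ++ SHA1(SHA1(password))), and the server un-XORs it and compares the double hash.

use sha1::{Digest, Sha1};

// Hypothetical helper mirroring the server-side check above.
fn scramble(password: &[u8], salt: &[u8]) -> [u8; 20] {
    let stage1: [u8; 20] = Sha1::digest(password).into(); // SHA1(pwd)
    let stage2 = Sha1::digest(stage1); // SHA1(SHA1(pwd))
    let mut hasher = Sha1::new();
    hasher.update(salt);
    hasher.update(stage2);
    let mask: [u8; 20] = hasher.finalize().into(); // SHA1(salt ++ stage2)
    let mut out = [0u8; 20];
    for i in 0..20 {
        out[i] = stage1[i] ^ mask[i];
    }
    out
}

// auth_mysql(&scramble(b"greptime", salt), salt, "greptime", b"greptime") should be Ok.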

src/auth/src/error.rs (new file, 93 lines)

@@ -0,0 +1,93 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_error::ext::{BoxedError, ErrorExt};
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use snafu::{Location, Snafu};
#[derive(Snafu)]
#[snafu(visibility(pub))]
#[stack_trace_debug]
pub enum Error {
#[snafu(display("Invalid config value: {}, {}", value, msg))]
InvalidConfig { value: String, msg: String },
#[snafu(display("Illegal param: {}", msg))]
IllegalParam { msg: String },
#[snafu(display("Internal state error: {}", msg))]
InternalState { msg: String },
#[snafu(display("IO error"))]
Io {
#[snafu(source)]
error: std::io::Error,
location: Location,
},
#[snafu(display("Auth failed"))]
AuthBackend {
location: Location,
source: BoxedError,
},
#[snafu(display("User not found, username: {}", username))]
UserNotFound { username: String },
#[snafu(display("Unsupported password type: {}", password_type))]
UnsupportedPasswordType { password_type: String },
#[snafu(display("Username and password does not match, username: {}", username))]
UserPasswordMismatch { username: String },
#[snafu(display(
"Access denied for user '{}' to database '{}-{}'",
username,
catalog,
schema
))]
AccessDenied {
catalog: String,
schema: String,
username: String,
},
#[snafu(display("User is not authorized to perform this action"))]
PermissionDenied { location: Location },
}
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
Error::InvalidConfig { .. } => StatusCode::InvalidArguments,
Error::IllegalParam { .. } => StatusCode::InvalidArguments,
Error::InternalState { .. } => StatusCode::Unexpected,
Error::Io { .. } => StatusCode::Internal,
Error::AuthBackend { .. } => StatusCode::Internal,
Error::UserNotFound { .. } => StatusCode::UserNotFound,
Error::UnsupportedPasswordType { .. } => StatusCode::UnsupportedPasswordType,
Error::UserPasswordMismatch { .. } => StatusCode::UserPasswordMismatch,
Error::AccessDenied { .. } => StatusCode::AccessDenied,
Error::PermissionDenied { .. } => StatusCode::PermissionDenied,
}
}
fn as_any(&self) -> &dyn std::any::Any {
self
}
}
pub type Result<T> = std::result::Result<T, Error>;
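A minimal sketch (assuming the `auth` crate layout above): with `#[snafu(visibility(pub))]`, callers can build variants through the generated context selectors, and `status_code()` decides how the failure is surfaced to clients.

use auth::error::UserNotFoundSnafu;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;

fn status_code_demo() {
    // Context selectors without a `source` field expose `build()`.
    let err = UserNotFoundSnafu { username: "nobody" }.build();
    assert_eq!(err.status_code(), StatusCode::UserNotFound);
}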

src/auth/src/lib.rs (new file, 34 lines)

@@ -0,0 +1,34 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
mod common;
pub mod error;
mod permission;
mod user_info;
mod user_provider;
#[cfg(feature = "testing")]
pub mod tests;
pub use common::{
auth_mysql, user_provider_from_option, userinfo_by_name, HashedPassword, Identity, Password,
};
pub use permission::{PermissionChecker, PermissionReq, PermissionResp};
pub use user_info::UserInfo;
pub use user_provider::UserProvider;
/// Public type aliases for the auth trait objects.
pub type UserInfoRef = std::sync::Arc<dyn UserInfo>;
pub type UserProviderRef = std::sync::Arc<dyn UserProvider>;
pub type PermissionCheckerRef = std::sync::Arc<dyn PermissionChecker>;
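A usage sketch of the re-exported `user_provider_from_option` (option string format taken from `common.rs` above; the `cmd:` credential syntax is the one exercised by this PR's tests):

use auth::{user_provider_from_option, UserProviderRef};

fn build_static_provider() -> UserProviderRef {
    // `<provider>:<value>`; the static provider accepts inline `cmd:` pairs
    // or a `file:` path to a `name=password` list.
    let opt = "static_user_provider:cmd:root=123456,admin=654321".to_string();
    user_provider_from_option(&opt).expect("valid provider option")
}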

src/auth/src/permission.rs (new file, 64 lines)

@@ -0,0 +1,64 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::Debug;
use api::v1::greptime_request::Request;
use sql::statements::statement::Statement;
use crate::error::{PermissionDeniedSnafu, Result};
use crate::{PermissionCheckerRef, UserInfoRef};
#[derive(Debug, Clone)]
pub enum PermissionReq<'a> {
GrpcRequest(&'a Request),
SqlStatement(&'a Statement),
PromQuery,
Opentsdb,
LineProtocol,
PromStoreWrite,
PromStoreRead,
Otlp,
}
#[derive(Debug)]
pub enum PermissionResp {
Allow,
Reject,
}
pub trait PermissionChecker: Send + Sync {
fn check_permission(
&self,
user_info: Option<UserInfoRef>,
req: PermissionReq,
) -> Result<PermissionResp>;
}
impl PermissionChecker for Option<&PermissionCheckerRef> {
fn check_permission(
&self,
user_info: Option<UserInfoRef>,
req: PermissionReq,
) -> Result<PermissionResp> {
match self {
Some(checker) => match checker.check_permission(user_info, req) {
Ok(PermissionResp::Reject) => PermissionDeniedSnafu.fail(),
Ok(PermissionResp::Allow) => Ok(PermissionResp::Allow),
Err(e) => Err(e),
},
None => Ok(PermissionResp::Allow),
}
}
}
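A short sketch of the blanket impl above: an unconfigured checker (`None`) allows every request, which keeps permission checking opt-in for deployments without a checker plugin.

use auth::{PermissionChecker, PermissionCheckerRef, PermissionReq, PermissionResp};

fn allow_when_unconfigured() {
    let checker: Option<&PermissionCheckerRef> = None;
    let resp = checker.check_permission(None, PermissionReq::PromQuery);
    assert!(matches!(resp, Ok(PermissionResp::Allow)));
}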


@@ -11,13 +11,14 @@
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use secrecy::ExposeSecret;
use servers::auth::user_provider::auth_mysql;
use servers::auth::{
AccessDeniedSnafu, Identity, Password, UnsupportedPasswordTypeSnafu, UserNotFoundSnafu,
UserPasswordMismatchSnafu, UserProvider,
use crate::error::{
AccessDeniedSnafu, Result, UnsupportedPasswordTypeSnafu, UserNotFoundSnafu,
UserPasswordMismatchSnafu,
};
use session::context::UserInfo;
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, Identity, Password, UserInfoRef, UserProvider};
pub struct DatabaseAuthInfo<'a> {
pub catalog: &'a str,
@@ -55,17 +56,13 @@ impl UserProvider for MockUserProvider {
"mock_user_provider"
}
async fn authenticate(
&self,
id: Identity<'_>,
password: Password<'_>,
) -> servers::auth::Result<UserInfo> {
async fn authenticate(&self, id: Identity<'_>, password: Password<'_>) -> Result<UserInfoRef> {
match id {
Identity::UserId(username, _host) => match password {
Password::PlainText(password) => {
if username == "greptime" {
if password == "greptime" {
Ok(UserInfo::new("greptime"))
if password.expose_secret() == "greptime" {
Ok(DefaultUserInfo::with_name("greptime"))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
@@ -81,7 +78,7 @@ impl UserProvider for MockUserProvider {
}
Password::MysqlNativePassword(auth_data, salt) => {
auth_mysql(auth_data, salt, username, "greptime".as_bytes())
.map(|_| UserInfo::new(username))
.map(|_| DefaultUserInfo::with_name(username))
}
_ => UnsupportedPasswordTypeSnafu {
password_type: "mysql_native_password",
@@ -91,12 +88,7 @@ impl UserProvider for MockUserProvider {
}
}
async fn authorize(
&self,
catalog: &str,
schema: &str,
user_info: &UserInfo,
) -> servers::auth::Result<()> {
async fn authorize(&self, catalog: &str, schema: &str, user_info: &UserInfoRef) -> Result<()> {
if catalog == self.catalog && schema == self.schema && user_info.username() == self.username
{
Ok(())
@@ -113,6 +105,8 @@ impl UserProvider for MockUserProvider {
#[tokio::test]
async fn test_auth_by_plain_text() {
use crate::error;
let user_provider = MockUserProvider::default();
assert_eq!("mock_user_provider", user_provider.name());
@@ -120,11 +114,11 @@ async fn test_auth_by_plain_text() {
let auth_result = user_provider
.authenticate(
Identity::UserId("greptime", None),
Password::PlainText("greptime"),
Password::PlainText("greptime".to_string().into()),
)
.await;
assert!(auth_result.is_ok());
assert_eq!("greptime", auth_result.unwrap().username());
.await
.unwrap();
assert_eq!("greptime", auth_result.username());
// auth failed, unsupported password type
let auth_result = user_provider
@@ -136,33 +130,33 @@ async fn test_auth_by_plain_text() {
assert!(auth_result.is_err());
assert!(matches!(
auth_result.err().unwrap(),
servers::auth::Error::UnsupportedPasswordType { .. }
error::Error::UnsupportedPasswordType { .. }
));
// auth failed, err: user not exist.
let auth_result = user_provider
.authenticate(
Identity::UserId("not_exist_username", None),
Password::PlainText("greptime"),
Password::PlainText("greptime".to_string().into()),
)
.await;
assert!(auth_result.is_err());
assert!(matches!(
auth_result.err().unwrap(),
servers::auth::Error::UserNotFound { .. }
error::Error::UserNotFound { .. }
));
// auth failed, err: wrong password
let auth_result = user_provider
.authenticate(
Identity::UserId("greptime", None),
Password::PlainText("wrong_password"),
Password::PlainText("wrong_password".to_string().into()),
)
.await;
assert!(auth_result.is_err());
assert!(matches!(
auth_result.err().unwrap(),
servers::auth::Error::UserPasswordMismatch { .. }
error::Error::UserPasswordMismatch { .. }
))
}
@@ -175,8 +169,8 @@ async fn test_schema_validate() {
username: "test_user",
});
let right_user = UserInfo::new("test_user");
let wrong_user = UserInfo::default();
let right_user = DefaultUserInfo::with_name("test_user");
let wrong_user = DefaultUserInfo::with_name("greptime");
// check catalog
let re = validator
@@ -192,6 +186,8 @@ async fn test_schema_validate() {
let re = validator.authorize("greptime", "public", &wrong_user).await;
assert!(re.is_err());
// check ok
let re = validator.authorize("greptime", "public", &right_user).await;
assert!(re.is_ok());
validator
.authorize("greptime", "public", &right_user)
.await
.unwrap();
}

src/auth/src/user_info.rs (new file, 47 lines)

@@ -0,0 +1,47 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::fmt::Debug;
use std::sync::Arc;
use crate::UserInfoRef;
pub trait UserInfo: Debug + Sync + Send {
fn as_any(&self) -> &dyn Any;
fn username(&self) -> &str;
}
#[derive(Debug)]
pub(crate) struct DefaultUserInfo {
username: String,
}
impl DefaultUserInfo {
pub(crate) fn with_name(username: impl Into<String>) -> UserInfoRef {
Arc::new(Self {
username: username.into(),
})
}
}
impl UserInfo for DefaultUserInfo {
fn as_any(&self) -> &dyn Any {
self
}
fn username(&self) -> &str {
self.username.as_str()
}
}
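Since `DefaultUserInfo` stays crate-private, external callers obtain a `UserInfoRef` through `auth::userinfo_by_name` (defined in `common.rs` above); a small sketch:

use auth::UserInfo;

fn default_user() {
    // No name given, so the default `greptime` username is used.
    let info = auth::userinfo_by_name(None);
    assert_eq!(info.username(), "greptime");
}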

src/auth/src/user_provider.rs (new file, 46 lines)

@@ -0,0 +1,46 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub(crate) mod static_user_provider;
use crate::common::{Identity, Password};
use crate::error::Result;
use crate::UserInfoRef;
#[async_trait::async_trait]
pub trait UserProvider: Send + Sync {
fn name(&self) -> &str;
/// Checks whether a user is valid and allowed to access the database.
async fn authenticate(&self, id: Identity<'_>, password: Password<'_>) -> Result<UserInfoRef>;
/// Checks whether a connection request
/// from a certain user to a certain catalog/schema is legal.
/// This method should be called after [authenticate()](UserProvider::authenticate()).
async fn authorize(&self, catalog: &str, schema: &str, user_info: &UserInfoRef) -> Result<()>;
/// Combination of [authenticate()](UserProvider::authenticate()) and [authorize()](UserProvider::authorize()).
/// In most cases it's preferred for both convenience and performance.
async fn auth(
&self,
id: Identity<'_>,
password: Password<'_>,
catalog: &str,
schema: &str,
) -> Result<UserInfoRef> {
let user_info = self.authenticate(id, password).await?;
self.authorize(catalog, schema, &user_info).await?;
Ok(user_info)
}
}
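A hypothetical, permissive implementation of the trait above (illustration only; a real provider must verify credentials):

use async_trait::async_trait;
use auth::error::Result;
use auth::{userinfo_by_name, Identity, Password, UserInfoRef, UserProvider};

struct AllowAllProvider;

#[async_trait]
impl UserProvider for AllowAllProvider {
    fn name(&self) -> &str {
        "allow_all_provider"
    }

    // Accept any identity and ignore the password entirely.
    async fn authenticate(&self, id: Identity<'_>, _pwd: Password<'_>) -> Result<UserInfoRef> {
        let Identity::UserId(username, _host) = id;
        Ok(userinfo_by_name(Some(username.to_string())))
    }

    // Authorize every catalog/schema pair; the default `auth` method
    // (authenticate + authorize) is inherited from the trait.
    async fn authorize(&self, _catalog: &str, _schema: &str, _user: &UserInfoRef) -> Result<()> {
        Ok(())
    }
}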


@@ -19,19 +19,17 @@ use std::io::BufRead;
use std::path::Path;
use async_trait::async_trait;
use digest;
use digest::Digest;
use session::context::UserInfo;
use sha1::Sha1;
use secrecy::ExposeSecret;
use snafu::{ensure, OptionExt, ResultExt};
use crate::auth::{
Error, HashedPassword, Identity, IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Password,
Result, Salt, UnsupportedPasswordTypeSnafu, UserNotFoundSnafu, UserPasswordMismatchSnafu,
UserProvider,
use crate::error::{
Error, IllegalParamSnafu, InvalidConfigSnafu, IoSnafu, Result, UnsupportedPasswordTypeSnafu,
UserNotFoundSnafu, UserPasswordMismatchSnafu,
};
use crate::user_info::DefaultUserInfo;
use crate::{auth_mysql, Identity, Password, UserInfoRef, UserProvider};
pub const STATIC_USER_PROVIDER: &str = "static_user_provider";
pub(crate) const STATIC_USER_PROVIDER: &str = "static_user_provider";
impl TryFrom<&str> for StaticUserProvider {
type Error = Error;
@@ -53,7 +51,7 @@ impl TryFrom<&str> for StaticUserProvider {
let file = File::open(path).context(IoSnafu)?;
let credential = io::BufReader::new(file)
.lines()
.filter_map(|line| line.ok())
.map_while(std::result::Result::ok)
.filter_map(|line| {
if let Some((k, v)) = line.split_once('=') {
Some((k.to_string(), v.as_bytes().to_vec()))
@@ -90,7 +88,7 @@ impl TryFrom<&str> for StaticUserProvider {
}
}
pub struct StaticUserProvider {
pub(crate) struct StaticUserProvider {
users: HashMap<String, Vec<u8>>,
}
@@ -104,7 +102,7 @@ impl UserProvider for StaticUserProvider {
&self,
input_id: Identity<'_>,
input_pwd: Password<'_>,
) -> Result<UserInfo> {
) -> Result<UserInfoRef> {
match input_id {
Identity::UserId(username, _) => {
ensure!(
@@ -120,13 +118,13 @@ impl UserProvider for StaticUserProvider {
match input_pwd {
Password::PlainText(pwd) => {
ensure!(
!pwd.is_empty(),
!pwd.expose_secret().is_empty(),
IllegalParamSnafu {
msg: "blank password"
}
);
return if save_pwd == pwd.as_bytes() {
Ok(UserInfo::new(username))
return if save_pwd == pwd.expose_secret().as_bytes() {
Ok(DefaultUserInfo::with_name(username))
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
@@ -135,14 +133,8 @@ impl UserProvider for StaticUserProvider {
};
}
Password::MysqlNativePassword(auth_data, salt) => {
ensure!(
auth_data.len() == 20,
IllegalParamSnafu {
msg: "Illegal MySQL native password format, length != 20"
}
);
auth_mysql(auth_data, salt, username, save_pwd)
.map(|_| UserInfo::new(username))
.map(|_| DefaultUserInfo::with_name(username))
}
Password::PgMD5(_, _) => UnsupportedPasswordTypeSnafu {
password_type: "pg_md5",
@@ -153,106 +145,47 @@ impl UserProvider for StaticUserProvider {
}
}
async fn authorize(&self, _catalog: &str, _schema: &str, _user_info: &UserInfo) -> Result<()> {
async fn authorize(
&self,
_catalog: &str,
_schema: &str,
_user_info: &UserInfoRef,
) -> Result<()> {
// default allow all
Ok(())
}
}
pub fn auth_mysql(
auth_data: HashedPassword,
salt: Salt,
username: &str,
save_pwd: &[u8],
) -> Result<()> {
// ref: https://github.com/mysql/mysql-server/blob/a246bad76b9271cb4333634e954040a970222e0a/sql/auth/password.cc#L62
let hash_stage_2 = double_sha1(save_pwd);
let tmp = sha1_two(salt, &hash_stage_2);
// xor auth_data and tmp
let mut xor_result = [0u8; 20];
for i in 0..20 {
xor_result[i] = auth_data[i] ^ tmp[i];
}
let candidate_stage_2 = sha1_one(&xor_result);
if candidate_stage_2 == hash_stage_2 {
Ok(())
} else {
UserPasswordMismatchSnafu {
username: username.to_string(),
}
.fail()
}
}
fn sha1_two(input_1: &[u8], input_2: &[u8]) -> Vec<u8> {
let mut hasher = Sha1::new();
hasher.update(input_1);
hasher.update(input_2);
hasher.finalize().to_vec()
}
fn sha1_one(data: &[u8]) -> Vec<u8> {
let mut hasher = Sha1::new();
hasher.update(data);
hasher.finalize().to_vec()
}
fn double_sha1(data: &[u8]) -> Vec<u8> {
sha1_one(&sha1_one(data))
}
#[cfg(test)]
pub mod test {
use std::fs::File;
use std::io::{LineWriter, Write};
use common_test_util::temp_dir::create_temp_dir;
use session::context::UserInfo;
use crate::auth::user_provider::{double_sha1, sha1_one, sha1_two, StaticUserProvider};
use crate::auth::{Identity, Password, UserProvider};
#[test]
fn test_sha() {
let sha_1_answer: Vec<u8> = vec![
124, 74, 141, 9, 202, 55, 98, 175, 97, 229, 149, 32, 148, 61, 194, 100, 148, 248, 148,
27,
];
let sha_1 = sha1_one("123456".as_bytes());
assert_eq!(sha_1, sha_1_answer);
let double_sha1_answer: Vec<u8> = vec![
107, 180, 131, 126, 183, 67, 41, 16, 94, 228, 86, 141, 218, 125, 198, 126, 210, 202,
42, 217,
];
let double_sha1 = double_sha1("123456".as_bytes());
assert_eq!(double_sha1, double_sha1_answer);
let sha1_2_answer: Vec<u8> = vec![
132, 115, 215, 211, 99, 186, 164, 206, 168, 152, 217, 192, 117, 47, 240, 252, 142, 244,
37, 204,
];
let sha1_2 = sha1_two("123456".as_bytes(), "654321".as_bytes());
assert_eq!(sha1_2, sha1_2_answer);
}
use crate::user_info::DefaultUserInfo;
use crate::user_provider::static_user_provider::StaticUserProvider;
use crate::user_provider::{Identity, Password};
use crate::UserProvider;
async fn test_authenticate(provider: &dyn UserProvider, username: &str, password: &str) {
let re = provider
.authenticate(
Identity::UserId(username, None),
Password::PlainText(password),
Password::PlainText(password.to_string().into()),
)
.await;
assert!(re.is_ok());
let _ = re.unwrap();
}
#[tokio::test]
async fn test_authorize() {
let user_info = DefaultUserInfo::with_name("root");
let provider = StaticUserProvider::try_from("cmd:root=123456,admin=654321").unwrap();
let re = provider
.authorize("catalog", "schema", &UserInfo::new("root"))
.await;
assert!(re.is_ok());
provider
.authorize("catalog", "schema", &user_info)
.await
.unwrap();
}
#[tokio::test]
@@ -269,7 +202,6 @@ pub mod test {
{
// write a tmp file
let file = File::create(&file_path);
assert!(file.is_ok());
let file = file.unwrap();
let mut lw = LineWriter::new(file);
assert!(lw
@@ -278,7 +210,7 @@ pub mod test {
admin=654321",
)
.is_ok());
assert!(lw.flush().is_ok());
lw.flush().unwrap();
}
let param = format!("file:{file_path}");

src/auth/tests/mod.rs (new file, 61 lines)

@@ -0,0 +1,61 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(assert_matches)]
use std::assert_matches::assert_matches;
use std::sync::Arc;
use api::v1::greptime_request::Request;
use auth::error::Error::InternalState;
use auth::{PermissionChecker, PermissionCheckerRef, PermissionReq, PermissionResp, UserInfoRef};
use sql::statements::show::{ShowDatabases, ShowKind};
use sql::statements::statement::Statement;
struct DummyPermissionChecker;
impl PermissionChecker for DummyPermissionChecker {
fn check_permission(
&self,
_user_info: Option<UserInfoRef>,
req: PermissionReq,
) -> auth::error::Result<PermissionResp> {
match req {
PermissionReq::GrpcRequest(_) => Ok(PermissionResp::Allow),
PermissionReq::SqlStatement(_) => Ok(PermissionResp::Reject),
_ => Err(InternalState {
msg: "testing".to_string(),
}),
}
}
}
#[test]
fn test_permission_checker() {
let checker: PermissionCheckerRef = Arc::new(DummyPermissionChecker);
let grpc_result = checker.check_permission(
None,
PermissionReq::GrpcRequest(&Request::Query(Default::default())),
);
assert_matches!(grpc_result, Ok(PermissionResp::Allow));
let sql_result = checker.check_permission(
None,
PermissionReq::SqlStatement(&Statement::ShowDatabases(ShowDatabases::new(ShowKind::All))),
);
assert_matches!(sql_result, Ok(PermissionResp::Reject));
let err_result = checker.check_permission(None, PermissionReq::Opentsdb);
assert_matches!(err_result, Err(InternalState { msg }) if msg == "testing");
}


@@ -4,44 +4,50 @@ version.workspace = true
edition.workspace = true
license.workspace = true
[features]
testing = []
[dependencies]
api = { path = "../api" }
api = { workspace = true }
arc-swap = "1.0"
arrow-schema.workspace = true
async-stream.workspace = true
async-trait = "0.1"
backoff = { version = "0.4", features = ["tokio"] }
common-catalog = { path = "../common/catalog" }
common-error = { path = "../common/error" }
common-grpc = { path = "../common/grpc" }
common-query = { path = "../common/query" }
common-recordbatch = { path = "../common/recordbatch" }
common-runtime = { path = "../common/runtime" }
common-telemetry = { path = "../common/telemetry" }
common-time = { path = "../common/time" }
common-catalog = { workspace = true }
common-error = { workspace = true }
common-grpc = { workspace = true }
common-macro = { workspace = true }
common-meta = { workspace = true }
common-query = { workspace = true }
common-recordbatch = { workspace = true }
common-runtime = { workspace = true }
common-telemetry = { workspace = true }
common-time = { workspace = true }
dashmap = "5.4"
datafusion.workspace = true
datatypes = { path = "../datatypes" }
datatypes = { workspace = true }
futures = "0.3"
futures-util.workspace = true
key-lock = "0.1"
lazy_static = "1.4"
meta-client = { path = "../meta-client" }
lazy_static.workspace = true
meta-client = { workspace = true }
metrics.workspace = true
moka = { workspace = true, features = ["future"] }
parking_lot = "0.12"
regex = "1.6"
serde = "1.0"
partition.workspace = true
regex.workspace = true
serde.workspace = true
serde_json = "1.0"
session = { path = "../session" }
session = { workspace = true }
snafu = { version = "0.7", features = ["backtraces"] }
storage = { path = "../storage" }
table = { path = "../table" }
store-api = { workspace = true }
table = { workspace = true }
tokio.workspace = true
[dev-dependencies]
common-test-util = { path = "../common/test-util" }
catalog = { workspace = true, features = ["testing"] }
chrono.workspace = true
log-store = { path = "../log-store" }
mito = { path = "../mito", features = ["test"] }
object-store = { path = "../object-store" }
storage = { path = "../storage" }
common-test-util = { workspace = true }
log-store = { workspace = true }
object-store = { workspace = true }
storage = { workspace = true }
tokio.workspace = true


@@ -1,324 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! Catalog adapter between datafusion and greptime query engine.
use std::any::Any;
use std::sync::Arc;
use async_trait::async_trait;
use common_error::prelude::BoxedError;
use datafusion::catalog::catalog::{
CatalogList as DfCatalogList, CatalogProvider as DfCatalogProvider,
};
use datafusion::catalog::schema::SchemaProvider as DfSchemaProvider;
use datafusion::datasource::TableProvider as DfTableProvider;
use datafusion::error::Result as DataFusionResult;
use snafu::ResultExt;
use table::table::adapter::{DfTableProviderAdapter, TableAdapter};
use table::TableRef;
use crate::error::{self, Result, SchemaProviderOperationSnafu};
use crate::{
CatalogListRef, CatalogProvider, CatalogProviderRef, SchemaProvider, SchemaProviderRef,
};
pub struct DfCatalogListAdapter {
catalog_list: CatalogListRef,
}
impl DfCatalogListAdapter {
pub fn new(catalog_list: CatalogListRef) -> DfCatalogListAdapter {
DfCatalogListAdapter { catalog_list }
}
}
impl DfCatalogList for DfCatalogListAdapter {
fn as_any(&self) -> &dyn Any {
self
}
fn register_catalog(
&self,
name: String,
catalog: Arc<dyn DfCatalogProvider>,
) -> Option<Arc<dyn DfCatalogProvider>> {
let catalog_adapter = Arc::new(CatalogProviderAdapter {
df_catalog_provider: catalog,
});
self.catalog_list
.register_catalog(name, catalog_adapter)
.expect("datafusion does not accept fallible catalog access") // TODO(hl): datafusion register catalog does not handles errors
.map(|catalog_provider| Arc::new(DfCatalogProviderAdapter { catalog_provider }) as _)
}
fn catalog_names(&self) -> Vec<String> {
// TODO(hl): datafusion register catalog does not handle errors
self.catalog_list
.catalog_names()
.expect("datafusion does not accept fallible catalog access")
}
fn catalog(&self, name: &str) -> Option<Arc<dyn DfCatalogProvider>> {
self.catalog_list
.catalog(name)
.expect("datafusion does not accept fallible catalog access") // TODO(hl): datafusion register catalog does not handles errors
.map(|catalog_provider| Arc::new(DfCatalogProviderAdapter { catalog_provider }) as _)
}
}
/// Datafusion's CatalogProvider -> greptime CatalogProvider
struct CatalogProviderAdapter {
df_catalog_provider: Arc<dyn DfCatalogProvider>,
}
impl CatalogProvider for CatalogProviderAdapter {
fn as_any(&self) -> &dyn Any {
self
}
fn schema_names(&self) -> Result<Vec<String>> {
Ok(self.df_catalog_provider.schema_names())
}
fn register_schema(
&self,
_name: String,
_schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>> {
todo!("register_schema is not supported in Datafusion catalog provider")
}
fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
Ok(self
.df_catalog_provider
.schema(name)
.map(|df_schema_provider| Arc::new(SchemaProviderAdapter { df_schema_provider }) as _))
}
}
/// Greptime CatalogProvider -> datafusion's CatalogProvider
pub struct DfCatalogProviderAdapter {
catalog_provider: CatalogProviderRef,
}
impl DfCatalogProviderAdapter {
pub fn new(catalog_provider: CatalogProviderRef) -> Self {
Self { catalog_provider }
}
}
impl DfCatalogProvider for DfCatalogProviderAdapter {
fn as_any(&self) -> &dyn Any {
self
}
fn schema_names(&self) -> Vec<String> {
self.catalog_provider
.schema_names()
.expect("datafusion does not accept fallible catalog access")
}
fn schema(&self, name: &str) -> Option<Arc<dyn DfSchemaProvider>> {
self.catalog_provider
.schema(name)
.expect("datafusion does not accept fallible catalog access")
.map(|schema_provider| Arc::new(DfSchemaProviderAdapter { schema_provider }) as _)
}
}
/// Greptime SchemaProvider -> datafusion SchemaProvider
struct DfSchemaProviderAdapter {
schema_provider: Arc<dyn SchemaProvider>,
}
#[async_trait]
impl DfSchemaProvider for DfSchemaProviderAdapter {
fn as_any(&self) -> &dyn Any {
self
}
fn table_names(&self) -> Vec<String> {
self.schema_provider
.table_names()
.expect("datafusion does not accept fallible catalog access")
}
async fn table(&self, name: &str) -> Option<Arc<dyn DfTableProvider>> {
self.schema_provider
.table(name)
.await
.expect("datafusion does not accept fallible catalog access")
.map(|table| Arc::new(DfTableProviderAdapter::new(table)) as _)
}
fn register_table(
&self,
name: String,
table: Arc<dyn DfTableProvider>,
) -> DataFusionResult<Option<Arc<dyn DfTableProvider>>> {
let table = Arc::new(TableAdapter::new(table)?);
match self.schema_provider.register_table(name, table)? {
Some(p) => Ok(Some(Arc::new(DfTableProviderAdapter::new(p)))),
None => Ok(None),
}
}
fn deregister_table(&self, name: &str) -> DataFusionResult<Option<Arc<dyn DfTableProvider>>> {
match self.schema_provider.deregister_table(name)? {
Some(p) => Ok(Some(Arc::new(DfTableProviderAdapter::new(p)))),
None => Ok(None),
}
}
fn table_exist(&self, name: &str) -> bool {
self.schema_provider
.table_exist(name)
.expect("datafusion does not accept fallible catalog access")
}
}
/// Datafusion SchemaProvider -> greptime SchemaProvider
struct SchemaProviderAdapter {
df_schema_provider: Arc<dyn DfSchemaProvider>,
}
#[async_trait]
impl SchemaProvider for SchemaProviderAdapter {
fn as_any(&self) -> &dyn Any {
self
}
/// Retrieves the list of available table names in this schema.
fn table_names(&self) -> Result<Vec<String>> {
Ok(self.df_schema_provider.table_names())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let table = self.df_schema_provider.table(name).await;
let table = table.map(|table_provider| {
match table_provider
.as_any()
.downcast_ref::<DfTableProviderAdapter>()
{
Some(adapter) => adapter.table(),
None => {
// TODO(yingwen): Avoid panic here.
let adapter =
TableAdapter::new(table_provider).expect("convert datafusion table");
Arc::new(adapter) as _
}
}
});
Ok(table)
}
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
let table_provider = Arc::new(DfTableProviderAdapter::new(table.clone()));
Ok(self
.df_schema_provider
.register_table(name, table_provider)
.context(error::DatafusionSnafu {
msg: "Fail to register table to datafusion",
})
.map_err(BoxedError::new)
.context(SchemaProviderOperationSnafu)?
.map(|_| table))
}
fn rename_table(&self, _name: &str, _new_name: String) -> Result<TableRef> {
todo!()
}
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
self.df_schema_provider
.deregister_table(name)
.context(error::DatafusionSnafu {
msg: "Fail to deregister table from datafusion",
})
.map_err(BoxedError::new)
.context(SchemaProviderOperationSnafu)?
.map(|table| {
let adapter = TableAdapter::new(table)
.context(error::TableSchemaMismatchSnafu)
.map_err(BoxedError::new)
.context(SchemaProviderOperationSnafu)?;
Ok(Arc::new(adapter) as _)
})
.transpose()
}
fn table_exist(&self, name: &str) -> Result<bool> {
Ok(self.df_schema_provider.table_exist(name))
}
}
#[cfg(test)]
mod tests {
use table::table::numbers::NumbersTable;
use super::*;
use crate::local::{new_memory_catalog_list, MemoryCatalogProvider, MemorySchemaProvider};
#[test]
#[should_panic]
pub fn test_register_schema() {
let adapter = CatalogProviderAdapter {
df_catalog_provider: Arc::new(
datafusion::catalog::catalog::MemoryCatalogProvider::new(),
),
};
adapter
.register_schema(
"whatever".to_string(),
Arc::new(MemorySchemaProvider::new()),
)
.unwrap();
}
#[tokio::test]
async fn test_register_table() {
let adapter = DfSchemaProviderAdapter {
schema_provider: Arc::new(MemorySchemaProvider::new()),
};
adapter
.register_table(
"test_table".to_string(),
Arc::new(DfTableProviderAdapter::new(Arc::new(
NumbersTable::default(),
))),
)
.unwrap();
adapter.table("test_table").await.unwrap();
}
#[test]
pub fn test_register_catalog() {
let catalog_list = DfCatalogListAdapter {
catalog_list: new_memory_catalog_list().unwrap(),
};
assert!(catalog_list
.register_catalog(
"test_catalog".to_string(),
Arc::new(DfCatalogProviderAdapter {
catalog_provider: Arc::new(MemoryCatalogProvider::new()),
}),
)
.is_none());
catalog_list.catalog("test_catalog").unwrap();
}
}


@@ -16,37 +16,52 @@ use std::any::Any;
use std::fmt::Debug;
use common_error::ext::{BoxedError, ErrorExt};
use common_error::prelude::{Snafu, StatusCode};
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use datafusion::error::DataFusionError;
use datatypes::prelude::ConcreteDataType;
use snafu::Location;
use snafu::{Location, Snafu};
use table::metadata::TableId;
use tokio::task::JoinError;
use crate::DeregisterTableRequest;
#[derive(Debug, Snafu)]
#[derive(Snafu)]
#[snafu(visibility(pub))]
#[stack_trace_debug]
pub enum Error {
#[snafu(display("Failed to open system catalog table, source: {}", source))]
#[snafu(display("Failed to list catalogs"))]
ListCatalogs {
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to list {}'s schemas", catalog))]
ListSchemas {
location: Location,
catalog: String,
source: BoxedError,
},
#[snafu(display("Failed to re-compile script due to internal error"))]
CompileScriptInternal {
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to open system catalog table"))]
OpenSystemCatalog {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
#[snafu(display("Failed to create system catalog table, source: {}", source))]
#[snafu(display("Failed to create system catalog table"))]
CreateSystemCatalog {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
#[snafu(display(
"Failed to create table, table info: {}, source: {}",
table_info,
source
))]
#[snafu(display("Failed to create table, table info: {}", table_info))]
CreateTable {
table_info: String,
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -77,16 +92,17 @@ pub enum Error {
#[snafu(display("Catalog value is not present"))]
EmptyValue { location: Location },
#[snafu(display("Failed to deserialize value, source: {}", source))]
#[snafu(display("Failed to deserialize value"))]
ValueDeserialize {
source: serde_json::error::Error,
#[snafu(source)]
error: serde_json::error::Error,
location: Location,
},
#[snafu(display("Table engine not found: {}, source: {}", engine_name, source))]
#[snafu(display("Table engine not found: {}", engine_name))]
TableEngineNotFound {
engine_name: String,
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
@@ -106,7 +122,7 @@ pub enum Error {
#[snafu(display("Table `{}` already exists", table))]
TableExists { table: String, location: Location },
#[snafu(display("Table `{}` not exist", table))]
#[snafu(display("Table not found: {}", table))]
TableNotExist { table: String, location: Location },
#[snafu(display("Schema {} already exists", schema))]
@@ -121,15 +137,18 @@ pub enum Error {
#[snafu(display("Operation {} not supported", op))]
NotSupported { op: String, location: Location },
#[snafu(display("Failed to open table, table info: {}, source: {}", table_info, source))]
#[snafu(display("Failed to open table {table_id}"))]
OpenTable {
table_info: String,
#[snafu(backtrace)]
table_id: TableId,
location: Location,
source: table::error::Error,
},
#[snafu(display("Failed to open table in parallel, source: {}", source))]
ParallelOpenTable { source: JoinError },
#[snafu(display("Failed to open table in parallel"))]
ParallelOpenTable {
#[snafu(source)]
error: JoinError,
},
#[snafu(display("Table not found while opening table, table info: {}", table_info))]
TableNotFound {
@@ -139,119 +158,85 @@ pub enum Error {
#[snafu(display("Failed to read system catalog table records"))]
ReadSystemCatalog {
#[snafu(backtrace)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display("Failed to create recordbatch, source: {}", source))]
#[snafu(display("Failed to create recordbatch"))]
CreateRecordBatch {
#[snafu(backtrace)]
location: Location,
source: common_recordbatch::error::Error,
},
#[snafu(display(
"Failed to insert table creation record to system catalog, source: {}",
source
))]
#[snafu(display("Failed to insert table creation record to system catalog"))]
InsertCatalogRecord {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
#[snafu(display(
"Failed to deregister table, request: {:?}, source: {}",
request,
source
))]
DeregisterTable {
request: DeregisterTableRequest,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Illegal catalog manager state: {}", msg))]
IllegalManagerState { location: Location, msg: String },
#[snafu(display("Failed to scan system catalog table, source: {}", source))]
#[snafu(display("Failed to scan system catalog table"))]
SystemCatalogTableScan {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
#[snafu(display("Failure during SchemaProvider operation, source: {}", source))]
SchemaProviderOperation {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("{source}"))]
#[snafu(display(""))]
Internal {
#[snafu(backtrace)]
location: Location,
source: BoxedError,
},
#[snafu(display("Failed to execute system catalog table scan, source: {}", source))]
#[snafu(display("Failed to upgrade weak catalog manager reference"))]
UpgradeWeakCatalogManagerRef { location: Location },
#[snafu(display("Failed to execute system catalog table scan"))]
SystemCatalogTableScanExec {
#[snafu(backtrace)]
location: Location,
source: common_query::error::Error,
},
#[snafu(display("Cannot parse catalog value, source: {}", source))]
#[snafu(display("Cannot parse catalog value"))]
InvalidCatalogValue {
#[snafu(backtrace)]
location: Location,
source: common_catalog::error::Error,
},
#[snafu(display("Failed to perform metasrv operation, source: {}", source))]
#[snafu(display("Failed to perform metasrv operation"))]
MetaSrv {
#[snafu(backtrace)]
location: Location,
source: meta_client::error::Error,
},
#[snafu(display("Invalid table info in catalog, source: {}", source))]
#[snafu(display("Invalid table info in catalog"))]
InvalidTableInfoInCatalog {
#[snafu(backtrace)]
location: Location,
source: datatypes::error::Error,
},
#[snafu(display("Failed to serialize or deserialize catalog entry: {}", source))]
CatalogEntrySerde {
#[snafu(backtrace)]
source: common_catalog::error::Error,
},
#[snafu(display("Illegal access to catalog: {} and schema: {}", catalog, schema))]
QueryAccessDenied { catalog: String, schema: String },
#[snafu(display(
"Failed to get region stats, catalog: {}, schema: {}, table: {}, source: {}",
catalog,
schema,
table,
source
))]
RegionStats {
catalog: String,
schema: String,
table: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Invalid system table definition: {err_msg}"))]
InvalidSystemTableDef { err_msg: String, location: Location },
#[snafu(display("{}: {}", msg, source))]
#[snafu(display(""))]
Datafusion {
msg: String,
source: DataFusionError,
#[snafu(source)]
error: DataFusionError,
location: Location,
},
#[snafu(display("Table schema mismatch, source: {}", source))]
#[snafu(display("Table schema mismatch"))]
TableSchemaMismatch {
#[snafu(backtrace)]
location: Location,
source: table::error::Error,
},
#[snafu(display("A generic error has occurred, msg: {}", msg))]
Generic { msg: String, location: Location },
#[snafu(display("Table metadata manager error"))]
TableMetadataManager {
source: common_meta::error::Error,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -262,24 +247,22 @@ impl ErrorExt for Error {
Error::InvalidKey { .. }
| Error::SchemaNotFound { .. }
| Error::TableNotFound { .. }
| Error::IllegalManagerState { .. }
| Error::CatalogNotFound { .. }
| Error::InvalidEntryType { .. }
| Error::InvalidSystemTableDef { .. }
| Error::ParallelOpenTable { .. } => StatusCode::Unexpected,
Error::SystemCatalog { .. }
| Error::EmptyValue { .. }
| Error::ValueDeserialize { .. } => StatusCode::StorageUnavailable,
Error::SystemCatalogTypeMismatch { .. } => StatusCode::Internal,
Error::Generic { .. }
| Error::SystemCatalogTypeMismatch { .. }
| Error::UpgradeWeakCatalogManagerRef { .. } => StatusCode::Internal,
Error::ReadSystemCatalog { source, .. } | Error::CreateRecordBatch { source } => {
source.status_code()
}
Error::InvalidCatalogValue { source, .. } | Error::CatalogEntrySerde { source } => {
Error::ReadSystemCatalog { source, .. } | Error::CreateRecordBatch { source, .. } => {
source.status_code()
}
Error::InvalidCatalogValue { source, .. } => source.status_code(),
Error::TableExists { .. } => StatusCode::TableAlreadyExists,
Error::TableNotExist { .. } => StatusCode::TableNotFound,
@@ -287,26 +270,30 @@ impl ErrorExt for Error {
StatusCode::InvalidArguments
}
Error::ListCatalogs { source, .. } | Error::ListSchemas { source, .. } => {
source.status_code()
}
Error::OpenSystemCatalog { source, .. }
| Error::CreateSystemCatalog { source, .. }
| Error::InsertCatalogRecord { source, .. }
| Error::OpenTable { source, .. }
| Error::CreateTable { source, .. }
| Error::DeregisterTable { source, .. }
| Error::RegionStats { source, .. }
| Error::TableSchemaMismatch { source } => source.status_code(),
| Error::TableSchemaMismatch { source, .. } => source.status_code(),
Error::MetaSrv { source, .. } => source.status_code(),
Error::SystemCatalogTableScan { source } => source.status_code(),
Error::SystemCatalogTableScanExec { source } => source.status_code(),
Error::InvalidTableInfoInCatalog { source } => source.status_code(),
Error::SchemaProviderOperation { source } | Error::Internal { source } => {
Error::SystemCatalogTableScan { source, .. } => source.status_code(),
Error::SystemCatalogTableScanExec { source, .. } => source.status_code(),
Error::InvalidTableInfoInCatalog { source, .. } => source.status_code(),
Error::CompileScriptInternal { source, .. } | Error::Internal { source, .. } => {
source.status_code()
}
Error::Unimplemented { .. } | Error::NotSupported { .. } => StatusCode::Unsupported,
Error::QueryAccessDenied { .. } => StatusCode::AccessDenied,
Error::Datafusion { .. } => StatusCode::EngineExecuteQuery,
Error::TableMetadataManager { source, .. } => source.status_code(),
}
}


@@ -1,379 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::fmt::{Display, Formatter};
use common_catalog::error::{
DeserializeCatalogEntryValueSnafu, Error, InvalidCatalogSnafu, SerializeCatalogEntryValueSnafu,
};
use lazy_static::lazy_static;
use regex::Regex;
use serde::{Deserialize, Serialize, Serializer};
use snafu::{ensure, OptionExt, ResultExt};
use table::metadata::{RawTableInfo, TableId, TableVersion};
pub const CATALOG_KEY_PREFIX: &str = "__c";
pub const SCHEMA_KEY_PREFIX: &str = "__s";
pub const TABLE_GLOBAL_KEY_PREFIX: &str = "__tg";
pub const TABLE_REGIONAL_KEY_PREFIX: &str = "__tr";
const ALPHANUMERICS_NAME_PATTERN: &str = "[a-zA-Z_][a-zA-Z0-9_]*";
lazy_static! {
static ref CATALOG_KEY_PATTERN: Regex = Regex::new(&format!(
"^{CATALOG_KEY_PREFIX}-({ALPHANUMERICS_NAME_PATTERN})$"
))
.unwrap();
}
lazy_static! {
static ref SCHEMA_KEY_PATTERN: Regex = Regex::new(&format!(
"^{SCHEMA_KEY_PREFIX}-({ALPHANUMERICS_NAME_PATTERN})-({ALPHANUMERICS_NAME_PATTERN})$"
))
.unwrap();
}
lazy_static! {
static ref TABLE_GLOBAL_KEY_PATTERN: Regex = Regex::new(&format!(
"^{TABLE_GLOBAL_KEY_PREFIX}-({ALPHANUMERICS_NAME_PATTERN})-({ALPHANUMERICS_NAME_PATTERN})-({ALPHANUMERICS_NAME_PATTERN})$"
))
.unwrap();
}
lazy_static! {
static ref TABLE_REGIONAL_KEY_PATTERN: Regex = Regex::new(&format!(
"^{TABLE_REGIONAL_KEY_PREFIX}-({ALPHANUMERICS_NAME_PATTERN})-({ALPHANUMERICS_NAME_PATTERN})-({ALPHANUMERICS_NAME_PATTERN})-([0-9]+)$"
))
.unwrap();
}
pub fn build_catalog_prefix() -> String {
format!("{CATALOG_KEY_PREFIX}-")
}
pub fn build_schema_prefix(catalog_name: impl AsRef<str>) -> String {
format!("{SCHEMA_KEY_PREFIX}-{}-", catalog_name.as_ref())
}
pub fn build_table_global_prefix(
catalog_name: impl AsRef<str>,
schema_name: impl AsRef<str>,
) -> String {
format!(
"{TABLE_GLOBAL_KEY_PREFIX}-{}-{}-",
catalog_name.as_ref(),
schema_name.as_ref()
)
}
pub fn build_table_regional_prefix(
catalog_name: impl AsRef<str>,
schema_name: impl AsRef<str>,
) -> String {
format!(
"{}-{}-{}-",
TABLE_REGIONAL_KEY_PREFIX,
catalog_name.as_ref(),
schema_name.as_ref()
)
}
/// Table global info has only one key across all datanodes, so it does not have a `node_id` field.
#[derive(Clone)]
pub struct TableGlobalKey {
pub catalog_name: String,
pub schema_name: String,
pub table_name: String,
}
impl Display for TableGlobalKey {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str(TABLE_GLOBAL_KEY_PREFIX)?;
f.write_str("-")?;
f.write_str(&self.catalog_name)?;
f.write_str("-")?;
f.write_str(&self.schema_name)?;
f.write_str("-")?;
f.write_str(&self.table_name)
}
}
impl TableGlobalKey {
pub fn parse<S: AsRef<str>>(s: S) -> Result<Self, Error> {
let key = s.as_ref();
let captures = TABLE_GLOBAL_KEY_PATTERN
.captures(key)
.context(InvalidCatalogSnafu { key })?;
ensure!(captures.len() == 4, InvalidCatalogSnafu { key });
Ok(Self {
catalog_name: captures[1].to_string(),
schema_name: captures[2].to_string(),
table_name: captures[3].to_string(),
})
}
}
/// Table global info contains necessary info for a datanode to create table regions, including
/// table id, table meta(schema...), region id allocation across datanodes.
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
pub struct TableGlobalValue {
/// Id of the datanode that created the global table info kv. Only for debugging.
pub node_id: u64,
/// Allocation of region ids across all datanodes.
pub regions_id_map: HashMap<u64, Vec<u32>>,
pub table_info: RawTableInfo,
}
impl TableGlobalValue {
pub fn table_id(&self) -> TableId {
self.table_info.ident.table_id
}
}
/// Table regional info that varies between datanodes, so it contains a `node_id` field.
pub struct TableRegionalKey {
pub catalog_name: String,
pub schema_name: String,
pub table_name: String,
pub node_id: u64,
}
impl Display for TableRegionalKey {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str(TABLE_REGIONAL_KEY_PREFIX)?;
f.write_str("-")?;
f.write_str(&self.catalog_name)?;
f.write_str("-")?;
f.write_str(&self.schema_name)?;
f.write_str("-")?;
f.write_str(&self.table_name)?;
f.write_str("-")?;
f.serialize_u64(self.node_id)
}
}
impl TableRegionalKey {
pub fn parse<S: AsRef<str>>(s: S) -> Result<Self, Error> {
let key = s.as_ref();
let captures = TABLE_REGIONAL_KEY_PATTERN
.captures(key)
.context(InvalidCatalogSnafu { key })?;
ensure!(captures.len() == 5, InvalidCatalogSnafu { key });
let node_id = captures[4]
.to_string()
.parse()
.map_err(|_| InvalidCatalogSnafu { key }.build())?;
Ok(Self {
catalog_name: captures[1].to_string(),
schema_name: captures[2].to_string(),
table_name: captures[3].to_string(),
node_id,
})
}
}
/// Regional table info of specific datanode, including table version on that datanode and
/// region ids allocated by metasrv.
#[derive(Debug, Serialize, Deserialize, Clone)]
pub struct TableRegionalValue {
pub version: TableVersion,
pub regions_ids: Vec<u32>,
}
pub struct CatalogKey {
pub catalog_name: String,
}
impl Display for CatalogKey {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str(CATALOG_KEY_PREFIX)?;
f.write_str("-")?;
f.write_str(&self.catalog_name)
}
}
impl CatalogKey {
pub fn parse(s: impl AsRef<str>) -> Result<Self, Error> {
let key = s.as_ref();
let captures = CATALOG_KEY_PATTERN
.captures(key)
.context(InvalidCatalogSnafu { key })?;
ensure!(captures.len() == 2, InvalidCatalogSnafu { key });
Ok(Self {
catalog_name: captures[1].to_string(),
})
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct CatalogValue;
pub struct SchemaKey {
pub catalog_name: String,
pub schema_name: String,
}
impl Display for SchemaKey {
fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {
f.write_str(SCHEMA_KEY_PREFIX)?;
f.write_str("-")?;
f.write_str(&self.catalog_name)?;
f.write_str("-")?;
f.write_str(&self.schema_name)
}
}
impl SchemaKey {
pub fn parse(s: impl AsRef<str>) -> Result<Self, Error> {
let key = s.as_ref();
let captures = SCHEMA_KEY_PATTERN
.captures(key)
.context(InvalidCatalogSnafu { key })?;
ensure!(captures.len() == 3, InvalidCatalogSnafu { key });
Ok(Self {
catalog_name: captures[1].to_string(),
schema_name: captures[2].to_string(),
})
}
}
#[derive(Debug, Serialize, Deserialize)]
pub struct SchemaValue;
macro_rules! define_catalog_value {
( $($val_ty: ty), *) => {
$(
impl $val_ty {
pub fn parse(s: impl AsRef<str>) -> Result<Self, Error> {
serde_json::from_str(s.as_ref())
.context(DeserializeCatalogEntryValueSnafu { raw: s.as_ref() })
}
pub fn from_bytes(bytes: impl AsRef<[u8]>) -> Result<Self, Error> {
Self::parse(&String::from_utf8_lossy(bytes.as_ref()))
}
pub fn as_bytes(&self) -> Result<Vec<u8>, Error> {
Ok(serde_json::to_string(self)
.context(SerializeCatalogEntryValueSnafu)?
.into_bytes())
}
}
)*
}
}
define_catalog_value!(
TableRegionalValue,
TableGlobalValue,
CatalogValue,
SchemaValue
);
#[cfg(test)]
mod tests {
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnSchema, RawSchema, Schema};
use table::metadata::{RawTableMeta, TableIdent, TableType};
use super::*;
#[test]
fn test_parse_catalog_key() {
let key = "__c-C";
let catalog_key = CatalogKey::parse(key).unwrap();
assert_eq!("C", catalog_key.catalog_name);
assert_eq!(key, catalog_key.to_string());
}
#[test]
fn test_parse_schema_key() {
let key = "__s-C-S";
let schema_key = SchemaKey::parse(key).unwrap();
assert_eq!("C", schema_key.catalog_name);
assert_eq!("S", schema_key.schema_name);
assert_eq!(key, schema_key.to_string());
}
#[test]
fn test_parse_table_key() {
let key = "__tg-C-S-T";
let entry = TableGlobalKey::parse(key).unwrap();
assert_eq!("C", entry.catalog_name);
assert_eq!("S", entry.schema_name);
assert_eq!("T", entry.table_name);
assert_eq!(key, &entry.to_string());
}
#[test]
fn test_build_prefix() {
assert_eq!("__c-", build_catalog_prefix());
assert_eq!("__s-CATALOG-", build_schema_prefix("CATALOG"));
assert_eq!(
"__tg-CATALOG-SCHEMA-",
build_table_global_prefix("CATALOG", "SCHEMA")
);
}
#[test]
fn test_serialize_schema() {
let schema = Schema::new(vec![ColumnSchema::new(
"name",
ConcreteDataType::string_datatype(),
true,
)]);
let meta = RawTableMeta {
schema: RawSchema::from(&schema),
engine: "mito".to_string(),
created_on: chrono::DateTime::default(),
primary_key_indices: vec![0, 1],
next_column_id: 3,
engine_options: Default::default(),
value_indices: vec![2, 3],
options: Default::default(),
region_numbers: vec![1],
};
let table_info = RawTableInfo {
ident: TableIdent {
table_id: 42,
version: 1,
},
name: "table_1".to_string(),
desc: Some("blah".to_string()),
catalog_name: "catalog_1".to_string(),
schema_name: "schema_1".to_string(),
meta,
table_type: TableType::Base,
};
let value = TableGlobalValue {
node_id: 0,
regions_id_map: HashMap::from([(0, vec![1, 2, 3])]),
table_info,
};
let serialized = serde_json::to_string(&value).unwrap();
let deserialized = TableGlobalValue::parse(serialized).unwrap();
assert_eq!(value, deserialized);
}
#[test]
fn test_table_global_value_compatibility() {
let s = r#"{"node_id":1,"regions_id_map":{"1":[0]},"table_info":{"ident":{"table_id":1098,"version":1},"name":"container_cpu_limit","desc":"Created on insertion","catalog_name":"greptime","schema_name":"dd","meta":{"schema":{"column_schemas":[{"name":"container_id","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"container_name","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"docker_image","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"host","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"image_name","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"image_tag","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"interval","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"runtime","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"short_image","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"type","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"dd_value","data_type":{"Float64":{}},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}},{"name":"ts","data_type":{"Timestamp":{"Millisecond":null}},"is_nullable":false,"is_time_index":true,"default_constraint":null,"metadata":{"greptime:time_index":"true"}},{"name":"git.repository_url","data_type":{"String":null},"is_nullable":true,"is_time_index":false,"default_constraint":null,"metadata":{}}],"timestamp_index":11,"version":1},"primary_key_indices":[0,1,2,3,4,5,6,7,8,9,12],"value_indices":[10,11],"engine":"mito","next_column_id":12,"region_numbers":[],"engine_options":{},"options":{},"created_on":"1970-01-01T00:00:00Z"},"table_type":"Base"}}"#;
TableGlobalValue::parse(s).unwrap();
}
}
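The prefixes asserted above suggest these helpers are thin formatters over fixed key prefixes. A minimal sketch consistent with the assertions, assuming the literals `__c`, `__s` and `__tg` (illustrative, not necessarily the crate's actual constants):

const CATALOG_KEY_PREFIX: &str = "__c";
const SCHEMA_KEY_PREFIX: &str = "__s";
const TABLE_GLOBAL_KEY_PREFIX: &str = "__tg";

/// Prefix for scanning all catalog keys: `__c-`.
pub fn build_catalog_prefix() -> String {
    format!("{CATALOG_KEY_PREFIX}-")
}

/// Prefix for scanning all schemas under a catalog: `__s-<catalog>-`.
pub fn build_schema_prefix(catalog_name: impl AsRef<str>) -> String {
    format!("{SCHEMA_KEY_PREFIX}-{}-", catalog_name.as_ref())
}

/// Prefix for scanning all global table entries under a schema: `__tg-<catalog>-<schema>-`.
pub fn build_table_global_prefix(
    catalog_name: impl AsRef<str>,
    schema_name: impl AsRef<str>,
) -> String {
    format!(
        "{TABLE_GLOBAL_KEY_PREFIX}-{}-{}-",
        catalog_name.as_ref(),
        schema_name.as_ref()
    )
}

fn main() {
    assert_eq!("__c-", build_catalog_prefix());
    assert_eq!("__s-CATALOG-", build_schema_prefix("CATALOG"));
    assert_eq!("__tg-CATALOG-SCHEMA-", build_table_global_prefix("CATALOG", "SCHEMA"));
}

The trailing `-` in each prefix keeps range scans unambiguous: a scan for `__s-CATALOG-` cannot match a catalog whose name merely starts with `CATALOG`.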


@@ -12,69 +12,170 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod columns;
mod tables;
use std::any::Any;
use std::sync::Arc;
use std::collections::HashMap;
use std::sync::{Arc, Weak};
use async_trait::async_trait;
use datafusion::datasource::streaming::{PartitionStream, StreamingTable};
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_error::ext::BoxedError;
use common_recordbatch::{RecordBatchStreamAdaptor, SendableRecordBatchStream};
use datatypes::schema::SchemaRef;
use futures_util::StreamExt;
use snafu::ResultExt;
use table::table::adapter::TableAdapter;
use store_api::data_source::DataSource;
use store_api::storage::{ScanRequest, TableId};
use table::error::{SchemaConversionSnafu, TablesRecordBatchSnafu};
use table::metadata::{
FilterPushDownType, TableInfoBuilder, TableInfoRef, TableMetaBuilder, TableType,
};
use table::thin_table::{ThinTable, ThinTableAdapter};
use table::TableRef;
use crate::error::{DatafusionSnafu, Result, TableSchemaMismatchSnafu};
use self::columns::InformationSchemaColumns;
use crate::error::Result;
use crate::information_schema::tables::InformationSchemaTables;
use crate::{CatalogProviderRef, SchemaProvider};
use crate::CatalogManager;
const TABLES: &str = "tables";
pub const TABLES: &str = "tables";
pub const COLUMNS: &str = "columns";
pub(crate) struct InformationSchemaProvider {
pub struct InformationSchemaProvider {
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaProvider {
pub(crate) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
pub fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
catalog_name,
catalog_provider,
catalog_manager,
}
}
/// Builds a map of [TableRef]s in the information schema,
/// including the `tables` and `columns` tables.
pub fn build(
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> HashMap<String, TableRef> {
let provider = Self::new(catalog_name, catalog_manager);
let mut schema = HashMap::new();
schema.insert(TABLES.to_owned(), provider.table(TABLES).unwrap());
schema.insert(COLUMNS.to_owned(), provider.table(COLUMNS).unwrap());
schema
}
pub fn table(&self, name: &str) -> Option<TableRef> {
self.information_table(name).map(|table| {
let table_info = Self::table_info(self.catalog_name.clone(), &table);
let filter_pushdown = FilterPushDownType::Unsupported;
let thin_table = ThinTable::new(table_info, filter_pushdown);
let data_source = Arc::new(InformationTableDataSource::new(table));
Arc::new(ThinTableAdapter::new(thin_table, data_source)) as _
})
}
fn information_table(&self, name: &str) -> Option<InformationTableRef> {
match name.to_ascii_lowercase().as_str() {
TABLES => Some(Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
COLUMNS => Some(Arc::new(InformationSchemaColumns::new(
self.catalog_name.clone(),
self.catalog_manager.clone(),
)) as _),
_ => None,
}
}
fn table_info(catalog_name: String, table: &InformationTableRef) -> TableInfoRef {
let table_meta = TableMetaBuilder::default()
.schema(table.schema())
.primary_key_indices(vec![])
.next_column_id(0)
.build()
.unwrap();
let table_info = TableInfoBuilder::default()
.table_id(table.table_id())
.name(table.table_name().to_owned())
.catalog_name(catalog_name)
.schema_name(INFORMATION_SCHEMA_NAME.to_owned())
.meta(table_meta)
.table_type(table.table_type())
.build()
.unwrap();
Arc::new(table_info)
}
}
#[async_trait]
impl SchemaProvider for InformationSchemaProvider {
fn as_any(&self) -> &dyn Any {
self
trait InformationTable {
fn table_id(&self) -> TableId;
fn table_name(&self) -> &'static str;
fn schema(&self) -> SchemaRef;
fn to_stream(&self) -> Result<SendableRecordBatchStream>;
fn table_type(&self) -> TableType {
TableType::Temporary
}
}
type InformationTableRef = Arc<dyn InformationTable + Send + Sync>;
struct InformationTableDataSource {
table: InformationTableRef,
}
impl InformationTableDataSource {
fn new(table: InformationTableRef) -> Self {
Self { table }
}
fn table_names(&self) -> Result<Vec<String>> {
Ok(vec![TABLES.to_string()])
fn try_project(&self, projection: &[usize]) -> std::result::Result<SchemaRef, BoxedError> {
let schema = self
.table
.schema()
.try_project(projection)
.context(SchemaConversionSnafu)
.map_err(BoxedError::new)?;
Ok(Arc::new(schema))
}
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let table = if name.eq_ignore_ascii_case(TABLES) {
Arc::new(InformationSchemaTables::new(
self.catalog_name.clone(),
self.catalog_provider.clone(),
))
} else {
return Ok(None);
impl DataSource for InformationTableDataSource {
fn get_stream(
&self,
request: ScanRequest,
) -> std::result::Result<SendableRecordBatchStream, BoxedError> {
let projection = request.projection;
let projected_schema = match &projection {
Some(projection) => self.try_project(projection)?,
None => self.table.schema(),
};
let table = Arc::new(
StreamingTable::try_new(table.schema().clone(), vec![table]).with_context(|_| {
DatafusionSnafu {
msg: format!("Failed to get InformationSchema table '{name}'"),
}
})?,
);
let table = TableAdapter::new(table).context(TableSchemaMismatchSnafu)?;
Ok(Some(Arc::new(table)))
}
let stream = self
.table
.to_stream()
.map_err(BoxedError::new)
.context(TablesRecordBatchSnafu)
.map_err(BoxedError::new)?
.map(move |batch| match &projection {
Some(p) => batch.and_then(|b| b.try_project(p)),
None => batch,
});
fn table_exist(&self, name: &str) -> Result<bool> {
Ok(matches!(name.to_ascii_lowercase().as_str(), TABLES))
let stream = RecordBatchStreamAdaptor {
schema: projected_schema,
stream: Box::pin(stream),
output_ordering: None,
};
Ok(Box::pin(stream))
}
}
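The shape of this refactor is worth isolating: a lightweight `InformationTable` produces rows, and a generic `DataSource` wrapper applies the scan projection on top, so each virtual table only implements its own data. A self-contained sketch of the same pattern with simplified stand-in types (`MiniTable`, `MiniSource` and `Row` are illustrative, not crate types):

use std::sync::Arc;

type Row = Vec<String>;

/// Simplified stand-in for `InformationTable`: a name plus a way to produce rows.
trait MiniTable: Send + Sync {
    fn name(&self) -> &'static str;
    fn rows(&self) -> Vec<Row>;
}

/// Simplified stand-in for `InformationTableDataSource`: wraps a table and
/// applies an optional column projection, mirroring `get_stream` above.
struct MiniSource {
    table: Arc<dyn MiniTable>,
}

impl MiniSource {
    fn scan(&self, projection: Option<&[usize]>) -> Vec<Row> {
        let rows = self.table.rows();
        match projection {
            Some(p) => rows
                .into_iter()
                .map(|row| p.iter().map(|&i| row[i].clone()).collect())
                .collect(),
            None => rows,
        }
    }
}

struct Tables;
impl MiniTable for Tables {
    fn name(&self) -> &'static str { "tables" }
    fn rows(&self) -> Vec<Row> {
        vec![vec!["greptime".into(), "public".into(), "numbers".into()]]
    }
}

fn main() {
    let source = MiniSource { table: Arc::new(Tables) };
    println!("scanning {}", source.table.name());
    // Project only the third column, as a ScanRequest projection would.
    assert_eq!(source.scan(Some(&[2])), vec![vec!["numbers".to_string()]]);
}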


@@ -0,0 +1,264 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::{
INFORMATION_SCHEMA_COLUMNS_TABLE_ID, INFORMATION_SCHEMA_NAME, SEMANTIC_TYPE_FIELD,
SEMANTIC_TYPE_PRIMARY_KEY, SEMANTIC_TYPE_TIME_INDEX,
};
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, DataType};
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{StringVectorBuilder, VectorRef};
use snafu::{OptionExt, ResultExt};
use store_api::storage::TableId;
use super::tables::InformationSchemaTables;
use super::{InformationTable, COLUMNS, TABLES};
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::CatalogManager;
pub(super) struct InformationSchemaColumns {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
}
const TABLE_CATALOG: &str = "table_catalog";
const TABLE_SCHEMA: &str = "table_schema";
const TABLE_NAME: &str = "table_name";
const COLUMN_NAME: &str = "column_name";
const DATA_TYPE: &str = "data_type";
const SEMANTIC_TYPE: &str = "semantic_type";
impl InformationSchemaColumns {
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
}
}
fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new(TABLE_CATALOG, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_SCHEMA, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(TABLE_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(COLUMN_NAME, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(DATA_TYPE, ConcreteDataType::string_datatype(), false),
ColumnSchema::new(SEMANTIC_TYPE, ConcreteDataType::string_datatype(), false),
]))
}
fn builder(&self) -> InformationSchemaColumnsBuilder {
InformationSchemaColumnsBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_manager.clone(),
)
}
}
impl InformationTable for InformationSchemaColumns {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_COLUMNS_TABLE_ID
}
fn table_name(&self) -> &'static str {
COLUMNS
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
struct InformationSchemaColumnsBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
column_names: StringVectorBuilder,
data_types: StringVectorBuilder,
semantic_types: StringVectorBuilder,
}
impl InformationSchemaColumnsBuilder {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_manager,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
column_names: StringVectorBuilder::with_capacity(42),
data_types: StringVectorBuilder::with_capacity(42),
semantic_types: StringVectorBuilder::with_capacity(42),
}
}
/// Constructs the `information_schema.columns` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
if !catalog_manager
.schema_exists(&catalog_name, &schema_name)
.await?
{
continue;
}
for table_name in catalog_manager
.table_names(&catalog_name, &schema_name)
.await?
{
let (keys, schema) = if let Some(table) = catalog_manager
.table(&catalog_name, &schema_name, &table_name)
.await?
{
let keys = &table.table_info().meta.primary_key_indices;
let schema = table.schema();
(keys.clone(), schema)
} else {
// TODO: this specific branch is only a workaround for FrontendCatalogManager.
if schema_name == INFORMATION_SCHEMA_NAME {
if table_name == COLUMNS {
(vec![], InformationSchemaColumns::schema())
} else if table_name == TABLES {
(vec![], InformationSchemaTables::schema())
} else {
continue;
}
} else {
continue;
}
};
for (idx, column) in schema.column_schemas().iter().enumerate() {
let semantic_type = if column.is_time_index() {
SEMANTIC_TYPE_TIME_INDEX
} else if keys.contains(&idx) {
SEMANTIC_TYPE_PRIMARY_KEY
} else {
SEMANTIC_TYPE_FIELD
};
self.add_column(
&catalog_name,
&schema_name,
&table_name,
&column.name,
column.data_type.name(),
semantic_type,
);
}
}
}
self.finish()
}
fn add_column(
&mut self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
column_name: &str,
data_type: &str,
semantic_type: &str,
) {
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
self.table_names.push(Some(table_name));
self.column_names.push(Some(column_name));
self.data_types.push(Some(data_type));
self.semantic_types.push(Some(semantic_type));
}
fn finish(&mut self) -> Result<RecordBatch> {
let columns: Vec<VectorRef> = vec![
Arc::new(self.catalog_names.finish()),
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.column_names.finish()),
Arc::new(self.data_types.finish()),
Arc::new(self.semantic_types.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}
}
impl DfPartitionStream for InformationSchemaColumns {
fn schema(&self) -> &ArrowSchemaRef {
self.schema.arrow_schema()
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
))
}
}
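The only branching logic in the builder above is how a column's semantic type is derived: the time index wins over primary-key membership, which wins over the plain field default. A distilled sketch of that ordering (the string values are illustrative; the real ones are the `SEMANTIC_TYPE_*` constants from `common_catalog::consts`):

/// Mirrors the classification in `make_tables`: time index first,
/// then primary key, then plain field.
fn semantic_type(is_time_index: bool, primary_key_indices: &[usize], idx: usize) -> &'static str {
    if is_time_index {
        "TIME INDEX"
    } else if primary_key_indices.contains(&idx) {
        "PRIMARY KEY"
    } else {
        "FIELD"
    }
}

fn main() {
    let keys = [0, 1];
    assert_eq!(semantic_type(true, &keys, 3), "TIME INDEX");
    assert_eq!(semantic_type(false, &keys, 1), "PRIMARY KEY");
    assert_eq!(semantic_type(false, &keys, 5), "FIELD");
}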


@@ -12,64 +12,110 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use std::sync::{Arc, Weak};
use arrow_schema::SchemaRef as ArrowSchemaRef;
use common_catalog::consts::INFORMATION_SCHEMA_NAME;
use common_catalog::consts::{
INFORMATION_SCHEMA_COLUMNS_TABLE_ID, INFORMATION_SCHEMA_NAME,
INFORMATION_SCHEMA_TABLES_TABLE_ID,
};
use common_error::ext::BoxedError;
use common_query::physical_plan::TaskContext;
use common_recordbatch::RecordBatch;
use datafusion::datasource::streaming::PartitionStream as DfPartitionStream;
use common_recordbatch::adapter::RecordBatchStreamAdapter;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatchStreamAdapter;
use datafusion::physical_plan::streaming::PartitionStream as DfPartitionStream;
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder};
use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
use store_api::storage::TableId;
use table::metadata::TableType;
use crate::error::{CreateRecordBatchSnafu, Result};
use crate::information_schema::TABLES;
use crate::CatalogProviderRef;
use super::{COLUMNS, TABLES};
use crate::error::{
CreateRecordBatchSnafu, InternalSnafu, Result, UpgradeWeakCatalogManagerRefSnafu,
};
use crate::information_schema::InformationTable;
use crate::CatalogManager;
pub(super) struct InformationSchemaTables {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_manager: Weak<dyn CatalogManager>,
}
impl InformationSchemaTables {
pub(super) fn new(catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
let schema = Arc::new(Schema::new(vec![
pub(super) fn new(catalog_name: String, catalog_manager: Weak<dyn CatalogManager>) -> Self {
Self {
schema: Self::schema(),
catalog_name,
catalog_manager,
}
}
pub(crate) fn schema() -> SchemaRef {
Arc::new(Schema::new(vec![
ColumnSchema::new("table_catalog", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_schema", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_name", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_type", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_id", ConcreteDataType::uint32_datatype(), true),
ColumnSchema::new("engine", ConcreteDataType::string_datatype(), true),
]));
Self {
schema,
catalog_name,
catalog_provider,
}
]))
}
fn builder(&self) -> InformationSchemaTablesBuilder {
InformationSchemaTablesBuilder::new(
self.schema.clone(),
self.catalog_name.clone(),
self.catalog_provider.clone(),
self.catalog_manager.clone(),
)
}
}
impl InformationTable for InformationSchemaTables {
fn table_id(&self) -> TableId {
INFORMATION_SCHEMA_TABLES_TABLE_ID
}
fn table_name(&self) -> &'static str {
TABLES
}
fn schema(&self) -> SchemaRef {
self.schema.clone()
}
fn to_stream(&self) -> Result<SendableRecordBatchStream> {
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
let stream = Box::pin(DfRecordBatchStreamAdapter::new(
schema,
futures::stream::once(async move {
builder
.make_tables()
.await
.map(|x| x.into_df_record_batch())
.map_err(Into::into)
}),
));
Ok(Box::pin(
RecordBatchStreamAdapter::try_new(stream)
.map_err(BoxedError::new)
.context(InternalSnafu)?,
))
}
}
/// Builds the `information_schema.tables` table row by row
///
/// Columns are based on <https://www.postgresql.org/docs/current/infoschema-columns.html>
struct InformationSchemaTablesBuilder {
schema: SchemaRef,
catalog_name: String,
catalog_provider: CatalogProviderRef,
catalog_manager: Weak<dyn CatalogManager>,
catalog_names: StringVectorBuilder,
schema_names: StringVectorBuilder,
@@ -80,11 +126,15 @@ struct InformationSchemaTablesBuilder {
}
impl InformationSchemaTablesBuilder {
fn new(schema: SchemaRef, catalog_name: String, catalog_provider: CatalogProviderRef) -> Self {
fn new(
schema: SchemaRef,
catalog_name: String,
catalog_manager: Weak<dyn CatalogManager>,
) -> Self {
Self {
schema,
catalog_name,
catalog_provider,
catalog_manager,
catalog_names: StringVectorBuilder::with_capacity(42),
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
@@ -97,37 +147,63 @@ impl InformationSchemaTablesBuilder {
/// Construct the `information_schema.tables` virtual table
async fn make_tables(&mut self) -> Result<RecordBatch> {
let catalog_name = self.catalog_name.clone();
let catalog_manager = self
.catalog_manager
.upgrade()
.context(UpgradeWeakCatalogManagerRefSnafu)?;
for schema_name in self.catalog_provider.schema_names()? {
if schema_name == INFORMATION_SCHEMA_NAME {
for schema_name in catalog_manager.schema_names(&catalog_name).await? {
if !catalog_manager
.schema_exists(&catalog_name, &schema_name)
.await?
{
continue;
}
let Some(schema) = self.catalog_provider.schema(&schema_name)? else { continue };
for table_name in schema.table_names()? {
let Some(table) = schema.table(&table_name).await? else { continue };
let table_info = table.table_info();
self.add_table(
&catalog_name,
&schema_name,
&table_name,
table.table_type(),
Some(table_info.ident.table_id),
Some(&table_info.meta.engine),
);
for table_name in catalog_manager
.table_names(&catalog_name, &schema_name)
.await?
{
if let Some(table) = catalog_manager
.table(&catalog_name, &schema_name, &table_name)
.await?
{
let table_info = table.table_info();
self.add_table(
&catalog_name,
&schema_name,
&table_name,
table.table_type(),
Some(table_info.ident.table_id),
Some(&table_info.meta.engine),
);
} else {
// TODO: this specific branch is only a workaround for FrontendCatalogManager.
if schema_name == INFORMATION_SCHEMA_NAME {
if table_name == COLUMNS {
self.add_table(
&catalog_name,
&schema_name,
&table_name,
TableType::Temporary,
Some(INFORMATION_SCHEMA_COLUMNS_TABLE_ID),
None,
);
} else if table_name == TABLES {
self.add_table(
&catalog_name,
&schema_name,
&table_name,
TableType::Temporary,
Some(INFORMATION_SCHEMA_TABLES_TABLE_ID),
None,
);
}
}
};
}
}
// Add a final list for the information schema tables themselves
self.add_table(
&catalog_name,
INFORMATION_SCHEMA_NAME,
TABLES,
TableType::View,
None,
None,
);
self.finish()
}
@@ -171,7 +247,7 @@ impl DfPartitionStream for InformationSchemaTables {
}
fn execute(&self, _: Arc<TaskContext>) -> DfSendableRecordBatchStream {
let schema = self.schema().clone();
let schema = self.schema.arrow_schema().clone();
let mut builder = self.builder();
Box::pin(DfRecordBatchStreamAdapter::new(
schema,


@@ -0,0 +1,22 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub use client::{CachedMetaKvBackend, MetaKvBackend};
mod client;
mod manager;
#[cfg(feature = "testing")]
pub mod mock;
pub use manager::KvBackendCatalogManager;


@@ -0,0 +1,333 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::fmt::Debug;
use std::sync::Arc;
use std::time::Duration;
use common_error::ext::BoxedError;
use common_meta::cache_invalidator::KvCacheInvalidator;
use common_meta::error::Error::{CacheNotGet, GetKvCache};
use common_meta::error::{CacheNotGetSnafu, Error, ExternalSnafu, Result};
use common_meta::kv_backend::{KvBackend, KvBackendRef, TxnService};
use common_meta::rpc::store::{
BatchDeleteRequest, BatchDeleteResponse, BatchGetRequest, BatchGetResponse, BatchPutRequest,
BatchPutResponse, CompareAndPutRequest, CompareAndPutResponse, DeleteRangeRequest,
DeleteRangeResponse, MoveValueRequest, MoveValueResponse, PutRequest, PutResponse,
RangeRequest, RangeResponse,
};
use common_meta::rpc::KeyValue;
use common_telemetry::{debug, timer};
use meta_client::client::MetaClient;
use moka::future::{Cache, CacheBuilder};
use snafu::{OptionExt, ResultExt};
use crate::metrics::{METRIC_CATALOG_KV_GET, METRIC_CATALOG_KV_REMOTE_GET};
const CACHE_MAX_CAPACITY: u64 = 10000;
const CACHE_TTL_SECOND: u64 = 10 * 60;
const CACHE_TTI_SECOND: u64 = 5 * 60;
pub type CacheBackendRef = Arc<Cache<Vec<u8>, KeyValue>>;
pub struct CachedMetaKvBackend {
kv_backend: KvBackendRef,
cache: CacheBackendRef,
name: String,
}
impl TxnService for CachedMetaKvBackend {
type Error = Error;
}
#[async_trait::async_trait]
impl KvBackend for CachedMetaKvBackend {
fn name(&self) -> &str {
&self.name
}
fn as_any(&self) -> &dyn Any {
self
}
async fn range(&self, req: RangeRequest) -> Result<RangeResponse> {
self.kv_backend.range(req).await
}
async fn put(&self, req: PutRequest) -> Result<PutResponse> {
let key = &req.key.clone();
let ret = self.kv_backend.put(req).await;
if ret.is_ok() {
self.invalidate_key(key).await;
}
ret
}
async fn batch_put(&self, req: BatchPutRequest) -> Result<BatchPutResponse> {
let keys = req
.kvs
.iter()
.map(|kv| kv.key().to_vec())
.collect::<Vec<_>>();
let resp = self.kv_backend.batch_put(req).await;
if resp.is_ok() {
for key in keys {
self.invalidate_key(&key).await;
}
}
resp
}
async fn batch_get(&self, req: BatchGetRequest) -> Result<BatchGetResponse> {
self.kv_backend.batch_get(req).await
}
async fn compare_and_put(&self, req: CompareAndPutRequest) -> Result<CompareAndPutResponse> {
let key = &req.key.clone();
let ret = self.kv_backend.compare_and_put(req).await;
if ret.is_ok() {
self.invalidate_key(key).await;
}
ret
}
async fn delete_range(&self, mut req: DeleteRangeRequest) -> Result<DeleteRangeResponse> {
let prev_kv = req.prev_kv;
req.prev_kv = true;
let resp = self.kv_backend.delete_range(req).await;
match resp {
Ok(mut resp) => {
for prev_kv in resp.prev_kvs.iter() {
self.invalidate_key(prev_kv.key()).await;
}
if !prev_kv {
resp.prev_kvs = vec![];
}
Ok(resp)
}
Err(e) => Err(e),
}
}
async fn batch_delete(&self, mut req: BatchDeleteRequest) -> Result<BatchDeleteResponse> {
let prev_kv = req.prev_kv;
req.prev_kv = true;
let resp = self.kv_backend.batch_delete(req).await;
match resp {
Ok(mut resp) => {
for prev_kv in resp.prev_kvs.iter() {
self.invalidate_key(prev_kv.key()).await;
}
if !prev_kv {
resp.prev_kvs = vec![];
}
Ok(resp)
}
Err(e) => Err(e),
}
}
async fn move_value(&self, req: MoveValueRequest) -> Result<MoveValueResponse> {
let from_key = &req.from_key.clone();
let to_key = &req.to_key.clone();
let ret = self.kv_backend.move_value(req).await;
if ret.is_ok() {
self.invalidate_key(from_key).await;
self.invalidate_key(to_key).await;
}
ret
}
async fn get(&self, key: &[u8]) -> Result<Option<KeyValue>> {
let _timer = timer!(METRIC_CATALOG_KV_GET);
let init = async {
let _timer = timer!(METRIC_CATALOG_KV_REMOTE_GET);
self.kv_backend.get(key).await.map(|val| {
val.with_context(|| CacheNotGetSnafu {
key: String::from_utf8_lossy(key),
})
})?
};
// currently moka doesn't have `optionally_try_get_with_by_ref`
// TODO(fys): change to moka method when available
// https://github.com/moka-rs/moka/issues/254
match self.cache.try_get_with_by_ref(key, init).await {
Ok(val) => Ok(Some(val)),
Err(e) => match e.as_ref() {
CacheNotGet { .. } => Ok(None),
_ => Err(e),
},
}
.map_err(|e| GetKvCache {
err_msg: e.to_string(),
})
}
}
#[async_trait::async_trait]
impl KvCacheInvalidator for CachedMetaKvBackend {
async fn invalidate_key(&self, key: &[u8]) {
self.cache.invalidate(key).await;
debug!("invalidated cache key: {}", String::from_utf8_lossy(key));
}
}
impl CachedMetaKvBackend {
pub fn new(client: Arc<MetaClient>) -> Self {
let kv_backend = Arc::new(MetaKvBackend { client });
Self::wrap(kv_backend)
}
pub fn wrap(kv_backend: KvBackendRef) -> Self {
let cache = Arc::new(
CacheBuilder::new(CACHE_MAX_CAPACITY)
.time_to_live(Duration::from_secs(CACHE_TTL_SECOND))
.time_to_idle(Duration::from_secs(CACHE_TTI_SECOND))
.build(),
);
let name = format!("CachedKvBackend({})", kv_backend.name());
Self {
kv_backend,
cache,
name,
}
}
pub fn cache(&self) -> &CacheBackendRef {
&self.cache
}
}
#[derive(Debug)]
pub struct MetaKvBackend {
pub client: Arc<MetaClient>,
}
impl TxnService for MetaKvBackend {
type Error = Error;
}
/// Implement the `KvBackend` trait for `MetaKvBackend` instead of opendal's `Accessor`, since
/// `MetaClient`'s range method can return both keys and values, which reduces IO overhead
/// compared to `Accessor`'s list and get methods.
#[async_trait::async_trait]
impl KvBackend for MetaKvBackend {
fn name(&self) -> &str {
"MetaKvBackend"
}
async fn range(&self, req: RangeRequest) -> Result<RangeResponse> {
self.client
.range(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn get(&self, key: &[u8]) -> Result<Option<KeyValue>> {
let mut response = self
.client
.range(RangeRequest::new().with_key(key))
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)?;
Ok(response.take_kvs().get_mut(0).map(|kv| KeyValue {
key: kv.take_key(),
value: kv.take_value(),
}))
}
async fn batch_put(&self, req: BatchPutRequest) -> Result<BatchPutResponse> {
self.client
.batch_put(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn put(&self, req: PutRequest) -> Result<PutResponse> {
self.client
.put(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn delete_range(&self, req: DeleteRangeRequest) -> Result<DeleteRangeResponse> {
self.client
.delete_range(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn batch_delete(&self, req: BatchDeleteRequest) -> Result<BatchDeleteResponse> {
self.client
.batch_delete(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn batch_get(&self, req: BatchGetRequest) -> Result<BatchGetResponse> {
self.client
.batch_get(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn compare_and_put(
&self,
request: CompareAndPutRequest,
) -> Result<CompareAndPutResponse> {
self.client
.compare_and_put(request)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
async fn move_value(&self, req: MoveValueRequest) -> Result<MoveValueResponse> {
self.client
.move_value(req)
.await
.map_err(BoxedError::new)
.context(ExternalSnafu)
}
fn as_any(&self) -> &dyn Any {
self
}
}
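The caching strategy above is read-through with invalidate-on-write: reads populate the cache via `try_get_with_by_ref`, and every mutation (`put`, `batch_put`, `compare_and_put`, the deletes, `move_value`) evicts the affected keys instead of updating them. A dependency-free sketch of that shape, with std types standing in for moka and the meta client (`Remote` and `Cached` are illustrative names):

use std::collections::HashMap;
use std::sync::RwLock;

/// Stand-in for the remote kv store (`MetaKvBackend` above).
#[derive(Default)]
struct Remote(RwLock<HashMap<String, String>>);

/// Read-through cache with invalidate-on-write, mirroring `CachedMetaKvBackend`.
#[derive(Default)]
struct Cached {
    remote: Remote,
    cache: RwLock<HashMap<String, String>>,
}

impl Cached {
    fn get(&self, key: &str) -> Option<String> {
        if let Some(v) = self.cache.read().unwrap().get(key) {
            return Some(v.clone()); // cache hit, no remote round trip
        }
        let v = self.remote.0.read().unwrap().get(key).cloned()?;
        self.cache.write().unwrap().insert(key.to_string(), v.clone());
        Some(v)
    }

    fn put(&self, key: &str, value: &str) {
        self.remote.0.write().unwrap().insert(key.into(), value.into());
        // Invalidate rather than update: the next read re-fetches the
        // authoritative value, so concurrent writers cannot leave a stale entry.
        self.cache.write().unwrap().remove(key);
    }
}

fn main() {
    let kv = Cached::default();
    kv.put("t", "v1");
    assert_eq!(kv.get("t"), Some("v1".into())); // populates cache
    kv.put("t", "v2");                          // evicts cached "v1"
    assert_eq!(kv.get("t"), Some("v2".into()));
}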


@@ -0,0 +1,292 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::collections::BTreeSet;
use std::sync::{Arc, Weak};
use common_catalog::consts::{DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, NUMBERS_TABLE_ID};
use common_error::ext::BoxedError;
use common_meta::cache_invalidator::{CacheInvalidator, CacheInvalidatorRef, Context};
use common_meta::datanode_manager::DatanodeManagerRef;
use common_meta::error::Result as MetaResult;
use common_meta::key::catalog_name::CatalogNameKey;
use common_meta::key::schema_name::SchemaNameKey;
use common_meta::key::table_name::TableNameKey;
use common_meta::key::{TableMetadataManager, TableMetadataManagerRef};
use common_meta::kv_backend::KvBackendRef;
use common_meta::table_name::TableName;
use futures_util::TryStreamExt;
use partition::manager::{PartitionRuleManager, PartitionRuleManagerRef};
use snafu::prelude::*;
use table::dist_table::DistTable;
use table::metadata::TableId;
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
use table::TableRef;
use crate::error::{
self as catalog_err, ListCatalogsSnafu, ListSchemasSnafu, Result as CatalogResult,
TableMetadataManagerSnafu,
};
use crate::information_schema::{InformationSchemaProvider, COLUMNS, TABLES};
use crate::CatalogManager;
/// Access all existing catalogs, schemas and tables.
///
/// The result comes from two sources: all user tables are kept in
/// a kv-backend, which persists the metadata of a table, while system tables
/// come from `SystemCatalog`, which is static and read-only.
#[derive(Clone)]
pub struct KvBackendCatalogManager {
// TODO(LFC): Maybe use a real implementation for Standalone mode.
// Now we use `NoopKvCacheInvalidator` for Standalone mode. In Standalone mode, the KV backend
// is implemented by RaftEngine. Maybe we need a cache for it?
cache_invalidator: CacheInvalidatorRef,
partition_manager: PartitionRuleManagerRef,
table_metadata_manager: TableMetadataManagerRef,
datanode_manager: DatanodeManagerRef,
/// A sub-CatalogManager that handles system tables
system_catalog: SystemCatalog,
}
#[async_trait::async_trait]
impl CacheInvalidator for KvBackendCatalogManager {
async fn invalidate_table_name(&self, ctx: &Context, table_name: TableName) -> MetaResult<()> {
self.cache_invalidator
.invalidate_table_name(ctx, table_name)
.await
}
async fn invalidate_table_id(&self, ctx: &Context, table_id: TableId) -> MetaResult<()> {
self.cache_invalidator
.invalidate_table_id(ctx, table_id)
.await
}
}
impl KvBackendCatalogManager {
pub fn new(
backend: KvBackendRef,
cache_invalidator: CacheInvalidatorRef,
datanode_manager: DatanodeManagerRef,
) -> Arc<Self> {
Arc::new_cyclic(|me| Self {
partition_manager: Arc::new(PartitionRuleManager::new(backend.clone())),
table_metadata_manager: Arc::new(TableMetadataManager::new(backend)),
cache_invalidator,
datanode_manager,
system_catalog: SystemCatalog {
catalog_manager: me.clone(),
},
})
}
pub fn partition_manager(&self) -> PartitionRuleManagerRef {
self.partition_manager.clone()
}
pub fn table_metadata_manager_ref(&self) -> &TableMetadataManagerRef {
&self.table_metadata_manager
}
pub fn datanode_manager(&self) -> DatanodeManagerRef {
self.datanode_manager.clone()
}
}
#[async_trait::async_trait]
impl CatalogManager for KvBackendCatalogManager {
async fn catalog_names(&self) -> CatalogResult<Vec<String>> {
let stream = self
.table_metadata_manager
.catalog_manager()
.catalog_names()
.await;
let keys = stream
.try_collect::<Vec<_>>()
.await
.map_err(BoxedError::new)
.context(ListCatalogsSnafu)?;
Ok(keys)
}
async fn schema_names(&self, catalog: &str) -> CatalogResult<Vec<String>> {
let stream = self
.table_metadata_manager
.schema_manager()
.schema_names(catalog)
.await;
let mut keys = stream
.try_collect::<BTreeSet<_>>()
.await
.map_err(BoxedError::new)
.context(ListSchemasSnafu { catalog })?
.into_iter()
.collect::<Vec<_>>();
keys.extend_from_slice(&self.system_catalog.schema_names());
Ok(keys)
}
async fn table_names(&self, catalog: &str, schema: &str) -> CatalogResult<Vec<String>> {
let mut tables = self
.table_metadata_manager
.table_name_manager()
.tables(catalog, schema)
.await
.context(TableMetadataManagerSnafu)?
.into_iter()
.map(|(k, _)| k)
.collect::<Vec<String>>();
tables.extend_from_slice(&self.system_catalog.table_names(schema));
Ok(tables)
}
async fn catalog_exists(&self, catalog: &str) -> CatalogResult<bool> {
self.table_metadata_manager
.catalog_manager()
.exists(CatalogNameKey::new(catalog))
.await
.context(TableMetadataManagerSnafu)
}
async fn schema_exists(&self, catalog: &str, schema: &str) -> CatalogResult<bool> {
if self.system_catalog.schema_exist(schema) {
return Ok(true);
}
self.table_metadata_manager
.schema_manager()
.exists(SchemaNameKey::new(catalog, schema))
.await
.context(TableMetadataManagerSnafu)
}
async fn table_exists(&self, catalog: &str, schema: &str, table: &str) -> CatalogResult<bool> {
if self.system_catalog.table_exist(schema, table) {
return Ok(true);
}
let key = TableNameKey::new(catalog, schema, table);
self.table_metadata_manager
.table_name_manager()
.get(key)
.await
.context(TableMetadataManagerSnafu)
.map(|x| x.is_some())
}
async fn table(
&self,
catalog: &str,
schema: &str,
table_name: &str,
) -> CatalogResult<Option<TableRef>> {
if let Some(table) = self.system_catalog.table(catalog, schema, table_name) {
return Ok(Some(table));
}
let key = TableNameKey::new(catalog, schema, table_name);
let Some(table_name_value) = self
.table_metadata_manager
.table_name_manager()
.get(key)
.await
.context(TableMetadataManagerSnafu)?
else {
return Ok(None);
};
let table_id = table_name_value.table_id();
let Some(table_info_value) = self
.table_metadata_manager
.table_info_manager()
.get(table_id)
.await
.context(TableMetadataManagerSnafu)?
.map(|v| v.into_inner())
else {
return Ok(None);
};
let table_info = Arc::new(
table_info_value
.table_info
.try_into()
.context(catalog_err::InvalidTableInfoInCatalogSnafu)?,
);
Ok(Some(DistTable::table(table_info)))
}
fn as_any(&self) -> &dyn Any {
self
}
}
// TODO: This struct can hold a static map of all system tables when
// the upper layer (e.g., procedure) can inform the catalog manager
// that a new catalog has been created.
/// Existing system tables:
/// - public.numbers
/// - information_schema.tables
/// - information_schema.columns
#[derive(Clone)]
struct SystemCatalog {
catalog_manager: Weak<KvBackendCatalogManager>,
}
impl SystemCatalog {
fn schema_names(&self) -> Vec<String> {
vec![INFORMATION_SCHEMA_NAME.to_string()]
}
fn table_names(&self, schema: &str) -> Vec<String> {
if schema == INFORMATION_SCHEMA_NAME {
vec![TABLES.to_string(), COLUMNS.to_string()]
} else if schema == DEFAULT_SCHEMA_NAME {
vec![NUMBERS_TABLE_NAME.to_string()]
} else {
vec![]
}
}
fn schema_exist(&self, schema: &str) -> bool {
schema == INFORMATION_SCHEMA_NAME
}
fn table_exist(&self, schema: &str, table: &str) -> bool {
if schema == INFORMATION_SCHEMA_NAME {
table == TABLES || table == COLUMNS
} else if schema == DEFAULT_SCHEMA_NAME {
table == NUMBERS_TABLE_NAME
} else {
false
}
}
fn table(&self, catalog: &str, schema: &str, table_name: &str) -> Option<TableRef> {
if schema == INFORMATION_SCHEMA_NAME {
let information_schema_provider =
InformationSchemaProvider::new(catalog.to_string(), self.catalog_manager.clone());
information_schema_provider.table(table_name)
} else if schema == DEFAULT_SCHEMA_NAME && table_name == NUMBERS_TABLE_NAME {
Some(NumbersTable::table(NUMBERS_TABLE_ID))
} else {
None
}
}
}
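`Arc::new_cyclic` above is the piece that lets `SystemCatalog` keep a `Weak` back-reference to the manager that owns it without creating a reference cycle; the information-schema tables later `upgrade()` that weak handle on every scan. A minimal self-contained sketch of the construction (`Manager` and `System` are illustrative stand-ins):

use std::sync::{Arc, Weak};

struct Manager {
    system: System,
}

struct System {
    // Weak back-reference: upgrading fails once the manager is dropped,
    // which is exactly the UpgradeWeakCatalogManagerRefSnafu case above.
    manager: Weak<Manager>,
}

fn main() {
    let manager = Arc::new_cyclic(|me: &Weak<Manager>| Manager {
        system: System { manager: me.clone() },
    });
    // The child can reach its owner without an Arc cycle keeping both alive.
    assert!(manager.system.manager.upgrade().is_some());
    let weak = manager.system.manager.clone();
    drop(manager);
    assert!(weak.upgrade().is_none());
}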


@@ -0,0 +1,128 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::sync::{Arc, RwLock as StdRwLock};
use common_recordbatch::RecordBatch;
use datatypes::data_type::ConcreteDataType;
use datatypes::schema::{ColumnSchema, Schema};
use datatypes::vectors::StringVector;
use table::engine::{CloseTableResult, EngineContext, TableEngine};
use table::metadata::TableId;
use table::requests::{
AlterTableRequest, CloseTableRequest, CreateTableRequest, DropTableRequest, OpenTableRequest,
TruncateTableRequest,
};
use table::test_util::MemTable;
use table::TableRef;
#[derive(Default)]
pub struct MockTableEngine {
tables: StdRwLock<HashMap<TableId, TableRef>>,
}
#[async_trait::async_trait]
impl TableEngine for MockTableEngine {
fn name(&self) -> &str {
"MockTableEngine"
}
/// Create a table with only one column
async fn create_table(
&self,
_ctx: &EngineContext,
request: CreateTableRequest,
) -> table::Result<TableRef> {
let table_id = request.id;
let schema = Arc::new(Schema::new(vec![ColumnSchema::new(
"name",
ConcreteDataType::string_datatype(),
true,
)]));
let data = vec![Arc::new(StringVector::from(vec!["a", "b", "c"])) as _];
let record_batch = RecordBatch::new(schema, data).unwrap();
let table = MemTable::new_with_catalog(
&request.table_name,
record_batch,
table_id,
request.catalog_name,
request.schema_name,
vec![0],
);
let mut tables = self.tables.write().unwrap();
let _ = tables.insert(table_id, table.clone() as TableRef);
Ok(table)
}
async fn open_table(
&self,
_ctx: &EngineContext,
request: OpenTableRequest,
) -> table::Result<Option<TableRef>> {
Ok(self.tables.read().unwrap().get(&request.table_id).cloned())
}
async fn alter_table(
&self,
_ctx: &EngineContext,
_request: AlterTableRequest,
) -> table::Result<TableRef> {
unimplemented!()
}
fn get_table(
&self,
_ctx: &EngineContext,
table_id: TableId,
) -> table::Result<Option<TableRef>> {
Ok(self.tables.read().unwrap().get(&table_id).cloned())
}
fn table_exists(&self, _ctx: &EngineContext, table_id: TableId) -> bool {
self.tables.read().unwrap().contains_key(&table_id)
}
async fn drop_table(
&self,
_ctx: &EngineContext,
_request: DropTableRequest,
) -> table::Result<bool> {
unimplemented!()
}
async fn close_table(
&self,
_ctx: &EngineContext,
request: CloseTableRequest,
) -> table::Result<CloseTableResult> {
let _ = self.tables.write().unwrap().remove(&request.table_id);
Ok(CloseTableResult::Released(vec![]))
}
async fn close(&self) -> table::Result<()> {
Ok(())
}
async fn truncate_table(
&self,
_ctx: &EngineContext,
_request: TruncateTableRequest,
) -> table::Result<bool> {
Ok(true)
}
}
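Everything this mock does reduces to a `RwLock<HashMap<TableId, TableRef>>`: create inserts, open/get read, close removes. A self-contained miniature of that test-double pattern (types simplified; `MiniEngine` and the `String` table handle are illustrative):

use std::collections::HashMap;
use std::sync::{Arc, RwLock};

type TableId = u32;
type TableRef = Arc<String>; // stand-in for a real table handle

#[derive(Default)]
struct MiniEngine {
    tables: RwLock<HashMap<TableId, TableRef>>,
}

impl MiniEngine {
    fn create(&self, id: TableId, name: &str) -> TableRef {
        let table = Arc::new(name.to_string());
        // Mirrors MockTableEngine::create_table: last write wins.
        self.tables.write().unwrap().insert(id, table.clone());
        table
    }

    fn get(&self, id: TableId) -> Option<TableRef> {
        self.tables.read().unwrap().get(&id).cloned()
    }

    fn close(&self, id: TableId) {
        let _ = self.tables.write().unwrap().remove(&id);
    }
}

fn main() {
    let engine = MiniEngine::default();
    engine.create(1, "numbers");
    assert!(engine.get(1).is_some());
    engine.close(1);
    assert!(engine.get(1).is_none());
}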


@@ -12,103 +12,43 @@
// See the License for the specific language governing permissions and
// limitations under the License.
#![feature(trait_upcasting)]
#![feature(assert_matches)]
#![feature(try_blocks)]
use std::any::Any;
use std::fmt::{Debug, Formatter};
use std::sync::Arc;
use api::v1::meta::{RegionStat, TableName};
use common_telemetry::{info, warn};
use snafu::ResultExt;
use table::engine::{EngineContext, TableEngineRef};
use futures::future::BoxFuture;
use table::metadata::TableId;
use table::requests::CreateTableRequest;
use table::TableRef;
use crate::error::{CreateTableSnafu, Result};
pub use crate::schema::{SchemaProvider, SchemaProviderRef};
use crate::error::Result;
pub mod datafusion;
pub mod error;
pub mod helper;
pub(crate) mod information_schema;
pub mod local;
pub mod remote;
pub mod schema;
pub mod system;
pub mod information_schema;
pub mod kvbackend;
pub mod memory;
mod metrics;
pub mod table_source;
pub mod tables;
/// Represent a list of named catalogs
pub trait CatalogList: Sync + Send {
/// Returns the catalog list as [`Any`](std::any::Any)
/// so that it can be downcast to a specific implementation.
fn as_any(&self) -> &dyn Any;
/// Adds a new catalog to this catalog list
/// If a catalog of the same name existed before, it is replaced in the list and returned.
fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>>;
/// Retrieves the list of available catalog names
fn catalog_names(&self) -> Result<Vec<String>>;
/// Retrieves a specific catalog by name, provided it exists.
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>>;
}
/// Represents a catalog, comprising a number of named schemas.
pub trait CatalogProvider: Sync + Send {
/// Returns the catalog provider as [`Any`](std::any::Any)
/// so that it can be downcast to a specific implementation.
fn as_any(&self) -> &dyn Any;
/// Retrieves the list of available schema names in this catalog.
fn schema_names(&self) -> Result<Vec<String>>;
/// Registers schema to this catalog.
fn register_schema(
&self,
name: String,
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>>;
/// Retrieves a specific schema from the catalog by name, provided it exists.
fn schema(&self, name: &str) -> Result<Option<SchemaProviderRef>>;
}
pub type CatalogListRef = Arc<dyn CatalogList>;
pub type CatalogProviderRef = Arc<dyn CatalogProvider>;
#[async_trait::async_trait]
pub trait CatalogManager: CatalogList {
/// Starts a catalog manager.
async fn start(&self) -> Result<()>;
pub trait CatalogManager: Send + Sync {
fn as_any(&self) -> &dyn Any;
/// Registers a table within the given catalog/schema in the catalog manager,
/// returns whether the table was registered.
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool>;
async fn catalog_names(&self) -> Result<Vec<String>>;
/// Deregisters a table within the given catalog/schema from the catalog manager,
/// returns whether the table was deregistered.
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool>;
async fn schema_names(&self, catalog: &str) -> Result<Vec<String>>;
/// Registers a schema with the given catalog name and schema name. Returns whether the
/// schema was registered.
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool>;
async fn table_names(&self, catalog: &str, schema: &str) -> Result<Vec<String>>;
/// Renames a table to [RenameTableRequest::new_table_name], returns whether the table was renamed.
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool>;
async fn catalog_exists(&self, catalog: &str) -> Result<bool>;
/// Register a system table, should be called before starting the manager.
async fn register_system_table(&self, request: RegisterSystemTableRequest)
-> error::Result<()>;
async fn schema_exists(&self, catalog: &str, schema: &str) -> Result<bool>;
fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>>;
async fn table_exists(&self, catalog: &str, schema: &str, table: &str) -> Result<bool>;
/// Returns the table by catalog, schema and table name.
async fn table(
@@ -122,7 +62,8 @@ pub trait CatalogManager: CatalogList {
pub type CatalogManagerRef = Arc<dyn CatalogManager>;
/// Hook called after system table opening.
pub type OpenSystemTableHook = Arc<dyn Fn(TableRef) -> Result<()> + Send + Sync>;
pub type OpenSystemTableHook =
Box<dyn Fn(TableRef) -> BoxFuture<'static, Result<()>> + Send + Sync>;
/// Register system table request:
/// - When system table is already created and registered, the hook will be called
@@ -171,109 +112,13 @@ pub struct DeregisterTableRequest {
}
#[derive(Debug, Clone)]
pub struct RegisterSchemaRequest {
pub struct DeregisterSchemaRequest {
pub catalog: String,
pub schema: String,
}
pub trait CatalogProviderFactory {
fn create(&self, catalog_name: String) -> CatalogProviderRef;
}
pub trait SchemaProviderFactory {
fn create(&self, catalog_name: String, schema_name: String) -> SchemaProviderRef;
}
pub(crate) async fn handle_system_table_request<'a, M: CatalogManager>(
manager: &'a M,
engine: TableEngineRef,
sys_table_requests: &'a mut Vec<RegisterSystemTableRequest>,
) -> Result<()> {
for req in sys_table_requests.drain(..) {
let catalog_name = &req.create_table_request.catalog_name;
let schema_name = &req.create_table_request.schema_name;
let table_name = &req.create_table_request.table_name;
let table_id = req.create_table_request.id;
let table = manager.table(catalog_name, schema_name, table_name).await?;
let table = if let Some(table) = table {
table
} else {
let table = engine
.create_table(&EngineContext::default(), req.create_table_request.clone())
.await
.with_context(|_| CreateTableSnafu {
table_info: common_catalog::format_full_table_name(
catalog_name,
schema_name,
table_name,
),
})?;
manager
.register_table(RegisterTableRequest {
catalog: catalog_name.clone(),
schema: schema_name.clone(),
table_name: table_name.clone(),
table_id,
table: table.clone(),
})
.await?;
info!("Created and registered system table: {table_name}");
table
};
if let Some(hook) = req.open_hook {
(hook)(table)?;
}
}
Ok(())
}
/// The stats of regions in the datanode.
/// The number of regions can be obtained from the length of the returned vec.
///
/// Ignores any errors that occur while iterating regions. The intention of this method is to
/// collect region stats to be carried in the Datanode's heartbeat to Metasrv, so it is a
/// best-effort job.
pub async fn datanode_stat(catalog_manager: &CatalogManagerRef) -> (u64, Vec<RegionStat>) {
let mut region_number: u64 = 0;
let mut region_stats = Vec::new();
let Ok(catalog_names) = catalog_manager.catalog_names() else { return (region_number, region_stats) };
for catalog_name in catalog_names {
let Ok(Some(catalog)) = catalog_manager.catalog(&catalog_name) else { continue };
let Ok(schema_names) = catalog.schema_names() else { continue };
for schema_name in schema_names {
let Ok(Some(schema)) = catalog.schema(&schema_name) else { continue };
let Ok(table_names) = schema.table_names() else { continue };
for table_name in table_names {
let Ok(Some(table)) = schema.table(&table_name).await else { continue };
let region_numbers = &table.table_info().meta.region_numbers;
region_number += region_numbers.len() as u64;
match table.region_stats() {
Ok(stats) => {
let stats = stats.into_iter().map(|stat| RegionStat {
region_id: stat.region_id,
table_name: Some(TableName {
catalog_name: catalog_name.clone(),
schema_name: schema_name.clone(),
table_name: table_name.clone(),
}),
approximate_bytes: stat.disk_usage_bytes as i64,
..Default::default()
});
region_stats.extend(stats);
}
Err(e) => {
warn!("Failed to get region status, err: {:?}", e);
}
};
}
}
}
(region_number, region_stats)
#[derive(Debug, Clone)]
pub struct RegisterSchemaRequest {
pub catalog: String,
pub schema: String,
}
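One subtle change in this file is that `OpenSystemTableHook` becomes asynchronous: instead of `Arc<dyn Fn(TableRef) -> Result<()>>` it now returns a `BoxFuture`, so a hook can await catalog operations. A minimal sketch of that callback shape, assuming `futures` and `tokio` as dependencies and using `String` as a stand-in for `TableRef`:

use futures::future::{BoxFuture, FutureExt};

type Result<T> = std::result::Result<T, String>;

/// Async hook: the same shape as the new OpenSystemTableHook above.
type Hook = Box<dyn Fn(String) -> BoxFuture<'static, Result<()>> + Send + Sync>;

fn make_hook() -> Hook {
    Box::new(|table| {
        async move {
            // A real hook could await catalog registration here.
            println!("opened system table: {table}");
            Ok(())
        }
        .boxed()
    })
}

#[tokio::main]
async fn main() -> Result<()> {
    let hook = make_hook();
    hook("scripts".to_string()).await
}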


@@ -1,611 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::Arc;
use common_catalog::consts::{
DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, MIN_USER_TABLE_ID,
MITO_ENGINE, SYSTEM_CATALOG_NAME, SYSTEM_CATALOG_TABLE_NAME,
};
use common_catalog::format_full_table_name;
use common_recordbatch::{RecordBatch, SendableRecordBatchStream};
use common_telemetry::{error, info};
use datatypes::prelude::ScalarVector;
use datatypes::vectors::{BinaryVector, UInt8Vector};
use futures_util::lock::Mutex;
use snafu::{ensure, OptionExt, ResultExt};
use table::engine::manager::TableEngineManagerRef;
use table::engine::EngineContext;
use table::metadata::TableId;
use table::requests::OpenTableRequest;
use table::table::numbers::NumbersTable;
use table::table::TableIdProvider;
use table::TableRef;
use crate::error::{
self, CatalogNotFoundSnafu, IllegalManagerStateSnafu, OpenTableSnafu, ReadSystemCatalogSnafu,
Result, SchemaExistsSnafu, SchemaNotFoundSnafu, SystemCatalogSnafu,
SystemCatalogTypeMismatchSnafu, TableEngineNotFoundSnafu, TableExistsSnafu, TableNotExistSnafu,
TableNotFoundSnafu,
};
use crate::local::memory::{MemoryCatalogManager, MemoryCatalogProvider, MemorySchemaProvider};
use crate::system::{
decode_system_catalog, Entry, SystemCatalogTable, TableEntry, ENTRY_TYPE_INDEX, KEY_INDEX,
VALUE_INDEX,
};
use crate::tables::SystemCatalog;
use crate::{
handle_system_table_request, CatalogList, CatalogManager, CatalogProvider, CatalogProviderRef,
DeregisterTableRequest, RegisterSchemaRequest, RegisterSystemTableRequest,
RegisterTableRequest, RenameTableRequest, SchemaProvider, SchemaProviderRef,
};
/// A `CatalogManager` consists of a system catalog and a bunch of user catalogs.
pub struct LocalCatalogManager {
system: Arc<SystemCatalog>,
catalogs: Arc<MemoryCatalogManager>,
engine_manager: TableEngineManagerRef,
next_table_id: AtomicU32,
init_lock: Mutex<bool>,
register_lock: Mutex<()>,
system_table_requests: Mutex<Vec<RegisterSystemTableRequest>>,
}
impl LocalCatalogManager {
/// Create a new [CatalogManager] with given user catalogs and mito engine
pub async fn try_new(engine_manager: TableEngineManagerRef) -> Result<Self> {
let engine = engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
let table = SystemCatalogTable::new(engine.clone()).await?;
let memory_catalog_list = crate::local::memory::new_memory_catalog_list()?;
let system_catalog = Arc::new(SystemCatalog::new(table));
Ok(Self {
system: system_catalog,
catalogs: memory_catalog_list,
engine_manager,
next_table_id: AtomicU32::new(MIN_USER_TABLE_ID),
init_lock: Mutex::new(false),
register_lock: Mutex::new(()),
system_table_requests: Mutex::new(Vec::default()),
})
}
/// Scan all entries from system catalog table
pub async fn init(&self) -> Result<()> {
self.init_system_catalog()?;
let system_records = self.system.information_schema.system.records().await?;
let entries = self.collect_system_catalog_entries(system_records).await?;
let max_table_id = self.handle_system_catalog_entries(entries).await?;
info!(
"All system catalog entries processed, max table id: {}",
max_table_id
);
self.next_table_id
.store((max_table_id + 1).max(MIN_USER_TABLE_ID), Ordering::Relaxed);
*self.init_lock.lock().await = true;
// Processing system table hooks
let mut sys_table_requests = self.system_table_requests.lock().await;
let engine = self
.engine_manager
.engine(MITO_ENGINE)
.context(TableEngineNotFoundSnafu {
engine_name: MITO_ENGINE,
})?;
handle_system_table_request(self, engine, &mut sys_table_requests).await?;
Ok(())
}
fn init_system_catalog(&self) -> Result<()> {
let system_schema = Arc::new(MemorySchemaProvider::new());
system_schema.register_table(
SYSTEM_CATALOG_TABLE_NAME.to_string(),
self.system.information_schema.system.clone(),
)?;
let system_catalog = Arc::new(MemoryCatalogProvider::new());
system_catalog.register_schema(INFORMATION_SCHEMA_NAME.to_string(), system_schema)?;
self.catalogs
.register_catalog(SYSTEM_CATALOG_NAME.to_string(), system_catalog)?;
let default_catalog = Arc::new(MemoryCatalogProvider::new());
let default_schema = Arc::new(MemorySchemaProvider::new());
// Add numbers table for test
let table = Arc::new(NumbersTable::default());
default_schema.register_table("numbers".to_string(), table)?;
default_catalog.register_schema(DEFAULT_SCHEMA_NAME.to_string(), default_schema)?;
self.catalogs
.register_catalog(DEFAULT_CATALOG_NAME.to_string(), default_catalog)?;
Ok(())
}
/// Collect stream of system catalog entries to `Vec<Entry>`
async fn collect_system_catalog_entries(
&self,
stream: SendableRecordBatchStream,
) -> Result<Vec<Entry>> {
let record_batch = common_recordbatch::util::collect(stream)
.await
.context(ReadSystemCatalogSnafu)?;
let rbs = record_batch
.into_iter()
.map(Self::record_batch_to_entry)
.collect::<Result<Vec<_>>>()?;
Ok(rbs.into_iter().flat_map(Vec::into_iter).collect::<_>())
}
/// Convert `RecordBatch` to a vector of `Entry`.
fn record_batch_to_entry(rb: RecordBatch) -> Result<Vec<Entry>> {
ensure!(
rb.num_columns() >= 6,
SystemCatalogSnafu {
msg: format!("Length mismatch: {}", rb.num_columns())
}
);
let entry_type = rb
.column(ENTRY_TYPE_INDEX)
.as_any()
.downcast_ref::<UInt8Vector>()
.with_context(|| SystemCatalogTypeMismatchSnafu {
data_type: rb.column(ENTRY_TYPE_INDEX).data_type(),
})?;
let key = rb
.column(KEY_INDEX)
.as_any()
.downcast_ref::<BinaryVector>()
.with_context(|| SystemCatalogTypeMismatchSnafu {
data_type: rb.column(KEY_INDEX).data_type(),
})?;
let value = rb
.column(VALUE_INDEX)
.as_any()
.downcast_ref::<BinaryVector>()
.with_context(|| SystemCatalogTypeMismatchSnafu {
data_type: rb.column(VALUE_INDEX).data_type(),
})?;
let mut res = Vec::with_capacity(rb.num_rows());
for ((t, k), v) in entry_type
.iter_data()
.zip(key.iter_data())
.zip(value.iter_data())
{
let entry = decode_system_catalog(t, k, v)?;
res.push(entry);
}
Ok(res)
}
/// Processes records from the system catalog table and returns the max table id persisted
/// in it.
async fn handle_system_catalog_entries(&self, entries: Vec<Entry>) -> Result<TableId> {
let entries = Self::sort_entries(entries);
let mut max_table_id = 0;
for entry in entries {
match entry {
Entry::Catalog(c) => {
self.catalogs.register_catalog_if_absent(
c.catalog_name.clone(),
Arc::new(MemoryCatalogProvider::new()),
);
info!("Register catalog: {}", c.catalog_name);
}
Entry::Schema(s) => {
let catalog =
self.catalogs
.catalog(&s.catalog_name)?
.context(CatalogNotFoundSnafu {
catalog_name: &s.catalog_name,
})?;
catalog.register_schema(
s.schema_name.clone(),
Arc::new(MemorySchemaProvider::new()),
)?;
info!("Registered schema: {:?}", s);
}
Entry::Table(t) => {
self.open_and_register_table(&t).await?;
info!("Registered table: {:?}", t);
max_table_id = max_table_id.max(t.table_id);
}
}
}
Ok(max_table_id)
}
/// Sorts catalog entries to ensure catalog entries come first, then schema entries,
/// and table entries last.
fn sort_entries(mut entries: Vec<Entry>) -> Vec<Entry> {
entries.sort();
entries
}
async fn open_and_register_table(&self, t: &TableEntry) -> Result<()> {
let catalog = self
.catalogs
.catalog(&t.catalog_name)?
.context(CatalogNotFoundSnafu {
catalog_name: &t.catalog_name,
})?;
let schema = catalog
.schema(&t.schema_name)?
.context(SchemaNotFoundSnafu {
catalog: &t.catalog_name,
schema: &t.schema_name,
})?;
let context = EngineContext {};
let request = OpenTableRequest {
catalog_name: t.catalog_name.clone(),
schema_name: t.schema_name.clone(),
table_name: t.table_name.clone(),
table_id: t.table_id,
};
let engine = self
.engine_manager
.engine(&t.engine)
.context(TableEngineNotFoundSnafu {
engine_name: &t.engine,
})?;
let option = engine
.open_table(&context, request)
.await
.with_context(|_| OpenTableSnafu {
table_info: format!(
"{}.{}.{}, id: {}",
&t.catalog_name, &t.schema_name, &t.table_name, t.table_id
),
})?
.with_context(|| TableNotFoundSnafu {
table_info: format!(
"{}.{}.{}, id: {}",
&t.catalog_name, &t.schema_name, &t.table_name, t.table_id
),
})?;
schema.register_table(t.table_name.clone(), option)?;
Ok(())
}
}
impl CatalogList for LocalCatalogManager {
fn as_any(&self) -> &dyn Any {
self
}
fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>> {
self.catalogs.register_catalog(name, catalog)
}
fn catalog_names(&self) -> Result<Vec<String>> {
self.catalogs.catalog_names()
}
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>> {
if name.eq_ignore_ascii_case(SYSTEM_CATALOG_NAME) {
Ok(Some(self.system.clone()))
} else {
self.catalogs.catalog(name)
}
}
}
#[async_trait::async_trait]
impl TableIdProvider for LocalCatalogManager {
async fn next_table_id(&self) -> table::Result<TableId> {
Ok(self.next_table_id.fetch_add(1, Ordering::Relaxed))
}
}
#[async_trait::async_trait]
impl CatalogManager for LocalCatalogManager {
/// Starts the [LocalCatalogManager] and loads all information from the system catalog table.
/// Make sure the table engine is initialized before starting this manager.
async fn start(&self) -> Result<()> {
self.init().await
}
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool> {
let started = self.init_lock.lock().await;
ensure!(
*started,
IllegalManagerStateSnafu {
msg: "Catalog manager not started",
}
);
let catalog_name = &request.catalog;
let schema_name = &request.schema;
let catalog = self
.catalogs
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
{
let _lock = self.register_lock.lock().await;
if let Some(existing) = schema.table(&request.table_name).await? {
if existing.table_info().ident.table_id != request.table_id {
error!(
"Unexpected table register request: {:?}, existing: {:?}",
request,
existing.table_info()
);
return TableExistsSnafu {
table: format_full_table_name(
catalog_name,
schema_name,
&request.table_name,
),
}
.fail();
}
// A table with the same name and the same table id is already registered; ignore the request.
Ok(false)
} else {
let engine = request.table.table_info().meta.engine.to_string();
// table does not exist
self.system
.register_table(
catalog_name.clone(),
schema_name.clone(),
request.table_name.clone(),
request.table_id,
engine,
)
.await?;
schema.register_table(request.table_name, request.table)?;
Ok(true)
}
}
}
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool> {
let started = self.init_lock.lock().await;
ensure!(
*started,
IllegalManagerStateSnafu {
msg: "Catalog manager not started",
}
);
let catalog_name = &request.catalog;
let schema_name = &request.schema;
let catalog = self
.catalogs
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
let _lock = self.register_lock.lock().await;
ensure!(
!schema.table_exist(&request.new_table_name)?,
TableExistsSnafu {
table: &request.new_table_name
}
);
let old_table = schema
.table(&request.table_name)
.await?
.context(TableNotExistSnafu {
table: &request.table_name,
})?;
let engine = old_table.table_info().meta.engine.to_string();
// rename table in system catalog
self.system
.register_table(
catalog_name.clone(),
schema_name.clone(),
request.new_table_name.clone(),
request.table_id,
engine,
)
.await?;
let renamed = schema
.rename_table(&request.table_name, request.new_table_name.clone())
.is_ok();
Ok(renamed)
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
{
let started = *self.init_lock.lock().await;
ensure!(started, IllegalManagerStateSnafu { msg: "not started" });
}
{
// Bind the guard to `_lock` (not `_`) so the register lock is held for the whole block.
let _lock = self.register_lock.lock().await;
let DeregisterTableRequest {
catalog,
schema,
table_name,
} = &request;
let table_id = self
.catalogs
.table(catalog, schema, table_name)
.await?
.with_context(|| error::TableNotExistSnafu {
table: format_full_table_name(catalog, schema, table_name),
})?
.table_info()
.ident
.table_id;
if !self.system.deregister_table(&request, table_id).await? {
return Ok(false);
}
self.catalogs.deregister_table(request).await
}
}
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool> {
let started = self.init_lock.lock().await;
ensure!(
*started,
IllegalManagerStateSnafu {
msg: "Catalog manager not started",
}
);
let catalog_name = &request.catalog;
let schema_name = &request.schema;
let catalog = self
.catalogs
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
{
let _lock = self.register_lock.lock().await;
ensure!(
catalog.schema(schema_name)?.is_none(),
SchemaExistsSnafu {
schema: schema_name,
}
);
self.system
.register_schema(request.catalog, schema_name.clone())
.await?;
catalog.register_schema(request.schema, Arc::new(MemorySchemaProvider::new()))?;
Ok(true)
}
}
async fn register_system_table(&self, request: RegisterSystemTableRequest) -> Result<()> {
ensure!(
!*self.init_lock.lock().await,
IllegalManagerStateSnafu {
msg: "Catalog manager already started",
}
);
let mut sys_table_requests = self.system_table_requests.lock().await;
sys_table_requests.push(request);
Ok(())
}
fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>> {
self.catalogs
.catalog(catalog)?
.context(CatalogNotFoundSnafu {
catalog_name: catalog,
})?
.schema(schema)
}
async fn table(
&self,
catalog_name: &str,
schema_name: &str,
table_name: &str,
) -> Result<Option<TableRef>> {
let catalog = self
.catalogs
.catalog(catalog_name)?
.context(CatalogNotFoundSnafu { catalog_name })?;
let schema = catalog
.schema(schema_name)?
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?;
schema.table(table_name).await
}
}
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use mito::engine::MITO_ENGINE;
use super::*;
use crate::system::{CatalogEntry, SchemaEntry};
#[test]
fn test_sort_entry() {
let vec = vec![
Entry::Table(TableEntry {
catalog_name: "C1".to_string(),
schema_name: "S1".to_string(),
table_name: "T1".to_string(),
table_id: 1,
engine: MITO_ENGINE.to_string(),
}),
Entry::Catalog(CatalogEntry {
catalog_name: "C2".to_string(),
}),
Entry::Schema(SchemaEntry {
catalog_name: "C1".to_string(),
schema_name: "S1".to_string(),
}),
Entry::Schema(SchemaEntry {
catalog_name: "C2".to_string(),
schema_name: "S2".to_string(),
}),
Entry::Catalog(CatalogEntry {
catalog_name: "".to_string(),
}),
Entry::Table(TableEntry {
catalog_name: "C1".to_string(),
schema_name: "S1".to_string(),
table_name: "T2".to_string(),
table_id: 2,
engine: MITO_ENGINE.to_string(),
}),
];
let res = LocalCatalogManager::sort_entries(vec);
assert_matches!(res[0], Entry::Catalog(..));
assert_matches!(res[1], Entry::Catalog(..));
assert_matches!(res[2], Entry::Schema(..));
assert_matches!(res[3], Entry::Schema(..));
assert_matches!(res[4], Entry::Table(..));
assert_matches!(res[5], Entry::Table(..));
}
}


@@ -1,534 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::sync::atomic::{AtomicU32, Ordering};
use std::sync::{Arc, RwLock};
use async_trait::async_trait;
use common_catalog::consts::MIN_USER_TABLE_ID;
use common_telemetry::error;
use snafu::{ensure, OptionExt};
use table::metadata::TableId;
use table::table::TableIdProvider;
use table::TableRef;
use crate::error::{
self, CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu, TableNotFoundSnafu,
};
use crate::schema::SchemaProvider;
use crate::{
CatalogList, CatalogManager, CatalogProvider, CatalogProviderRef, DeregisterTableRequest,
RegisterSchemaRequest, RegisterSystemTableRequest, RegisterTableRequest, RenameTableRequest,
SchemaProviderRef,
};
/// Simple in-memory list of catalogs
pub struct MemoryCatalogManager {
/// Collection of catalogs containing schemas and ultimately Tables
pub catalogs: RwLock<HashMap<String, CatalogProviderRef>>,
pub table_id: AtomicU32,
}
impl Default for MemoryCatalogManager {
fn default() -> Self {
let manager = Self {
table_id: AtomicU32::new(MIN_USER_TABLE_ID),
catalogs: Default::default(),
};
let default_catalog = Arc::new(MemoryCatalogProvider::new());
manager
.register_catalog("greptime".to_string(), default_catalog.clone())
.unwrap();
default_catalog
.register_schema("public".to_string(), Arc::new(MemorySchemaProvider::new()))
.unwrap();
manager
}
}
#[async_trait::async_trait]
impl TableIdProvider for MemoryCatalogManager {
async fn next_table_id(&self) -> table::error::Result<TableId> {
Ok(self.table_id.fetch_add(1, Ordering::Relaxed))
}
}
#[async_trait::async_trait]
impl CatalogManager for MemoryCatalogManager {
async fn start(&self) -> Result<()> {
self.table_id.store(MIN_USER_TABLE_ID, Ordering::Relaxed);
Ok(())
}
async fn register_table(&self, request: RegisterTableRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.clone();
let schema = catalog
.schema(&request.schema)?
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
schema
.register_table(request.table_name, request.table)
.map(|v| v.is_none())
}
async fn rename_table(&self, request: RenameTableRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.clone();
let schema = catalog
.schema(&request.schema)?
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
Ok(schema
.rename_table(&request.table_name, request.new_table_name)
.is_ok())
}
async fn deregister_table(&self, request: DeregisterTableRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.clone();
let schema = catalog
.schema(&request.schema)?
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
schema
.deregister_table(&request.table_name)
.map(|v| v.is_some())
}
async fn register_schema(&self, request: RegisterSchemaRequest) -> Result<bool> {
let catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get(&request.catalog)
.context(CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?;
catalog.register_schema(request.schema, Arc::new(MemorySchemaProvider::new()))?;
Ok(true)
}
async fn register_system_table(&self, _request: RegisterSystemTableRequest) -> Result<()> {
// TODO(ruihang): support register system table request
Ok(())
}
fn schema(&self, catalog: &str, schema: &str) -> Result<Option<SchemaProviderRef>> {
let catalogs = self.catalogs.read().unwrap();
if let Some(c) = catalogs.get(catalog) {
c.schema(schema)
} else {
Ok(None)
}
}
async fn table(
&self,
catalog: &str,
schema: &str,
table_name: &str,
) -> Result<Option<TableRef>> {
let catalog = {
let c = self.catalogs.read().unwrap();
let Some(c) = c.get(catalog) else { return Ok(None) };
c.clone()
};
match catalog.schema(schema)? {
None => Ok(None),
Some(s) => s.table(table_name).await,
}
}
}
impl MemoryCatalogManager {
/// Registers a catalog if absent, returning `None` if no catalog with the same name
/// was already registered, or `Some` holding the previously registered catalog.
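/// A hedged usage sketch (`manager` is a `MemoryCatalogManager` and `provider` a
/// `CatalogProviderRef`; both names are illustrative):
/// ```ignore
/// assert!(manager.register_catalog_if_absent("c1".to_string(), provider.clone()).is_none());
/// // The second call keeps and returns the previously registered provider.
/// assert!(manager.register_catalog_if_absent("c1".to_string(), provider).is_some());
/// ```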
pub fn register_catalog_if_absent(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Option<CatalogProviderRef> {
let mut catalogs = self.catalogs.write().unwrap();
let entry = catalogs.entry(name);
match entry {
Entry::Occupied(v) => Some(v.get().clone()),
Entry::Vacant(v) => {
v.insert(catalog);
None
}
}
}
}
impl CatalogList for MemoryCatalogManager {
fn as_any(&self) -> &dyn Any {
self
}
fn register_catalog(
&self,
name: String,
catalog: CatalogProviderRef,
) -> Result<Option<CatalogProviderRef>> {
let mut catalogs = self.catalogs.write().unwrap();
Ok(catalogs.insert(name, catalog))
}
fn catalog_names(&self) -> Result<Vec<String>> {
let catalogs = self.catalogs.read().unwrap();
Ok(catalogs.keys().map(|s| s.to_string()).collect())
}
fn catalog(&self, name: &str) -> Result<Option<CatalogProviderRef>> {
let catalogs = self.catalogs.read().unwrap();
Ok(catalogs.get(name).cloned())
}
}
impl Default for MemoryCatalogProvider {
fn default() -> Self {
Self::new()
}
}
/// Simple in-memory implementation of a catalog.
pub struct MemoryCatalogProvider {
schemas: RwLock<HashMap<String, Arc<dyn SchemaProvider>>>,
}
impl MemoryCatalogProvider {
/// Instantiates a new MemoryCatalogProvider with an empty collection of schemas.
pub fn new() -> Self {
Self {
schemas: RwLock::new(HashMap::new()),
}
}
}
impl CatalogProvider for MemoryCatalogProvider {
fn as_any(&self) -> &dyn Any {
self
}
fn schema_names(&self) -> Result<Vec<String>> {
let schemas = self.schemas.read().unwrap();
Ok(schemas.keys().cloned().collect())
}
fn register_schema(
&self,
name: String,
schema: SchemaProviderRef,
) -> Result<Option<SchemaProviderRef>> {
let mut schemas = self.schemas.write().unwrap();
ensure!(
!schemas.contains_key(&name),
error::SchemaExistsSnafu { schema: &name }
);
Ok(schemas.insert(name, schema))
}
fn schema(&self, name: &str) -> Result<Option<Arc<dyn SchemaProvider>>> {
let schemas = self.schemas.read().unwrap();
Ok(schemas.get(name).cloned())
}
}
/// Simple in-memory implementation of a schema.
pub struct MemorySchemaProvider {
tables: RwLock<HashMap<String, TableRef>>,
}
impl MemorySchemaProvider {
/// Instantiates a new MemorySchemaProvider with an empty collection of tables.
pub fn new() -> Self {
Self {
tables: RwLock::new(HashMap::new()),
}
}
}
impl Default for MemorySchemaProvider {
fn default() -> Self {
Self::new()
}
}
#[async_trait]
impl SchemaProvider for MemorySchemaProvider {
fn as_any(&self) -> &dyn Any {
self
}
fn table_names(&self) -> Result<Vec<String>> {
let tables = self.tables.read().unwrap();
Ok(tables.keys().cloned().collect())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
let tables = self.tables.read().unwrap();
Ok(tables.get(name).cloned())
}
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
let mut tables = self.tables.write().unwrap();
if let Some(existing) = tables.get(name.as_str()) {
// If a table with the same name but a different table id exists, it's a fatal bug.
if existing.table_info().ident.table_id != table.table_info().ident.table_id {
error!(
"Unexpected table register: {:?}, existing: {:?}",
table.table_info(),
existing.table_info()
);
return TableExistsSnafu { table: name }.fail()?;
}
Ok(Some(existing.clone()))
} else {
Ok(tables.insert(name, table))
}
}
fn rename_table(&self, name: &str, new_name: String) -> Result<TableRef> {
let mut tables = self.tables.write().unwrap();
let Some(table) = tables.remove(name) else {
return TableNotFoundSnafu {
table_info: name.to_string(),
}
.fail()?;
};
let e = match tables.entry(new_name) {
Entry::Vacant(e) => e,
Entry::Occupied(e) => {
return TableExistsSnafu { table: e.key() }.fail();
}
};
e.insert(table.clone());
Ok(table)
}
fn deregister_table(&self, name: &str) -> Result<Option<TableRef>> {
let mut tables = self.tables.write().unwrap();
Ok(tables.remove(name))
}
fn table_exist(&self, name: &str) -> Result<bool> {
let tables = self.tables.read().unwrap();
Ok(tables.contains_key(name))
}
}
/// Creates an in-memory catalog manager for tests.
pub fn new_memory_catalog_list() -> Result<Arc<MemoryCatalogManager>> {
Ok(Arc::new(MemoryCatalogManager::default()))
}
#[cfg(test)]
mod tests {
use common_catalog::consts::*;
use common_error::ext::ErrorExt;
use common_error::prelude::StatusCode;
use table::table::numbers::NumbersTable;
use super::*;
#[tokio::test]
async fn test_new_memory_catalog_list() {
let catalog_list = new_memory_catalog_list().unwrap();
let default_catalog = catalog_list.catalog(DEFAULT_CATALOG_NAME).unwrap().unwrap();
let default_schema = default_catalog
.schema(DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap();
default_schema
.register_table("numbers".to_string(), Arc::new(NumbersTable::default()))
.unwrap();
let table = default_schema.table("numbers").await.unwrap();
assert!(table.is_some());
assert!(default_schema.table("not_exists").await.unwrap().is_none());
}
#[tokio::test]
async fn test_mem_provider() {
let provider = MemorySchemaProvider::new();
let table_name = "numbers";
assert!(!provider.table_exist(table_name).unwrap());
assert!(provider.deregister_table(table_name).unwrap().is_none());
let test_table = NumbersTable::default();
// register table successfully
assert!(provider
.register_table(table_name.to_string(), Arc::new(test_table))
.unwrap()
.is_none());
assert!(provider.table_exist(table_name).unwrap());
let other_table = NumbersTable::new(12);
let result = provider.register_table(table_name.to_string(), Arc::new(other_table));
let err = result.err().unwrap();
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
#[tokio::test]
async fn test_mem_provider_rename_table() {
let provider = MemorySchemaProvider::new();
let table_name = "num";
assert!(!provider.table_exist(table_name).unwrap());
let test_table: TableRef = Arc::new(NumbersTable::default());
// register test table
assert!(provider
.register_table(table_name.to_string(), test_table.clone())
.unwrap()
.is_none());
assert!(provider.table_exist(table_name).unwrap());
// rename test table
let new_table_name = "numbers";
provider
.rename_table(table_name, new_table_name.to_string())
.unwrap();
// test old table name not exist
assert!(!provider.table_exist(table_name).unwrap());
assert!(provider.deregister_table(table_name).unwrap().is_none());
// test new table name exists
assert!(provider.table_exist(new_table_name).unwrap());
let registered_table = provider.table(new_table_name).await.unwrap().unwrap();
assert_eq!(
registered_table.table_info().ident.table_id,
test_table.table_info().ident.table_id
);
let other_table = Arc::new(NumbersTable::new(2));
let result = provider.register_table(new_table_name.to_string(), other_table);
let err = result.err().unwrap();
assert_eq!(StatusCode::TableAlreadyExists, err.status_code());
}
#[tokio::test]
async fn test_catalog_rename_table() {
let catalog = MemoryCatalogManager::default();
let schema = catalog
.schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap();
// register table
let table_name = "num";
let table_id = 2333;
let table: TableRef = Arc::new(NumbersTable::new(table_id));
let register_table_req = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
table_id,
table,
};
assert!(catalog.register_table(register_table_req).await.unwrap());
assert!(schema.table_exist(table_name).unwrap());
// rename table
let new_table_name = "numbers_new";
let rename_table_req = RenameTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
new_table_name: new_table_name.to_string(),
table_id,
};
assert!(catalog.rename_table(rename_table_req).await.unwrap());
assert!(!schema.table_exist(table_name).unwrap());
assert!(schema.table_exist(new_table_name).unwrap());
let registered_table = catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, new_table_name)
.await
.unwrap()
.unwrap();
assert_eq!(registered_table.table_info().ident.table_id, table_id);
}
#[test]
pub fn test_register_if_absent() {
let list = MemoryCatalogManager::default();
assert!(list
.register_catalog_if_absent(
"test_catalog".to_string(),
Arc::new(MemoryCatalogProvider::new())
)
.is_none());
list.register_catalog_if_absent(
"test_catalog".to_string(),
Arc::new(MemoryCatalogProvider::new()),
)
.unwrap();
list.as_any()
.downcast_ref::<MemoryCatalogManager>()
.unwrap();
}
#[tokio::test]
pub async fn test_catalog_deregister_table() {
let catalog = MemoryCatalogManager::default();
let schema = catalog
.schema(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME)
.unwrap()
.unwrap();
let register_table_req = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "numbers".to_string(),
table_id: 2333,
table: Arc::new(NumbersTable::default()),
};
catalog.register_table(register_table_req).await.unwrap();
assert!(schema.table_exist("numbers").unwrap());
let deregister_table_req = DeregisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: "numbers".to_string(),
};
catalog
.deregister_table(deregister_table_req)
.await
.unwrap();
assert!(!schema.table_exist("numbers").unwrap());
}
}

src/catalog/src/memory.rs

@@ -0,0 +1,17 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
pub mod manager;
pub use manager::{new_memory_catalog_manager, MemoryCatalogManager};


@@ -0,0 +1,369 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::any::Any;
use std::collections::hash_map::Entry;
use std::collections::HashMap;
use std::sync::{Arc, RwLock, Weak};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME};
use metrics::{decrement_gauge, increment_gauge};
use snafu::OptionExt;
use table::TableRef;
use crate::error::{CatalogNotFoundSnafu, Result, SchemaNotFoundSnafu, TableExistsSnafu};
use crate::information_schema::InformationSchemaProvider;
use crate::{CatalogManager, DeregisterTableRequest, RegisterSchemaRequest, RegisterTableRequest};
type SchemaEntries = HashMap<String, HashMap<String, TableRef>>;
/// Simple in-memory list of catalogs
#[derive(Clone)]
pub struct MemoryCatalogManager {
/// Collection of catalogs containing schemas and ultimately Tables
catalogs: Arc<RwLock<HashMap<String, SchemaEntries>>>,
}
#[async_trait::async_trait]
impl CatalogManager for MemoryCatalogManager {
async fn schema_exists(&self, catalog: &str, schema: &str) -> Result<bool> {
self.schema_exist_sync(catalog, schema)
}
async fn table(
&self,
catalog: &str,
schema: &str,
table_name: &str,
) -> Result<Option<TableRef>> {
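// A `try` block (nightly `try_blocks` feature): each `?` short-circuits to `None`
// when the catalog, schema, or table is missing.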
let result = try {
self.catalogs
.read()
.unwrap()
.get(catalog)?
.get(schema)?
.get(table_name)
.cloned()?
};
Ok(result)
}
async fn catalog_exists(&self, catalog: &str) -> Result<bool> {
self.catalog_exist_sync(catalog)
}
async fn table_exists(&self, catalog: &str, schema: &str, table: &str) -> Result<bool> {
let catalogs = self.catalogs.read().unwrap();
Ok(catalogs
.get(catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: catalog,
})?
.get(schema)
.with_context(|| SchemaNotFoundSnafu { catalog, schema })?
.contains_key(table))
}
async fn catalog_names(&self) -> Result<Vec<String>> {
Ok(self.catalogs.read().unwrap().keys().cloned().collect())
}
async fn schema_names(&self, catalog_name: &str) -> Result<Vec<String>> {
Ok(self
.catalogs
.read()
.unwrap()
.get(catalog_name)
.with_context(|| CatalogNotFoundSnafu { catalog_name })?
.keys()
.cloned()
.collect())
}
async fn table_names(&self, catalog_name: &str, schema_name: &str) -> Result<Vec<String>> {
Ok(self
.catalogs
.read()
.unwrap()
.get(catalog_name)
.with_context(|| CatalogNotFoundSnafu { catalog_name })?
.get(schema_name)
.with_context(|| SchemaNotFoundSnafu {
catalog: catalog_name,
schema: schema_name,
})?
.keys()
.cloned()
.collect())
}
fn as_any(&self) -> &dyn Any {
self
}
}
impl MemoryCatalogManager {
pub fn new() -> Arc<Self> {
Arc::new(Self {
catalogs: Default::default(),
})
}
/// Creates a manager with the default setup
/// (e.g. the default catalog/schema and the information schema).
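/// A minimal sketch of what the default setup provides (constants from
/// `common_catalog::consts`; assumes the `CatalogManager` trait is in scope):
/// ```ignore
/// let manager = MemoryCatalogManager::with_default_setup();
/// assert!(manager.catalog_exists(DEFAULT_CATALOG_NAME).await?);
/// assert!(manager.schema_exists(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME).await?);
/// ```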
pub fn with_default_setup() -> Arc<Self> {
let manager = Arc::new(Self {
catalogs: Default::default(),
});
// Safety: the default catalog and schema are registered in order, so no CatalogNotFound error can occur.
manager.register_catalog_sync(DEFAULT_CATALOG_NAME).unwrap();
manager
.register_schema_sync(RegisterSchemaRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
})
.unwrap();
manager
}
fn schema_exist_sync(&self, catalog: &str, schema: &str) -> Result<bool> {
Ok(self
.catalogs
.read()
.unwrap()
.get(catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: catalog,
})?
.contains_key(schema))
}
fn catalog_exist_sync(&self, catalog: &str) -> Result<bool> {
Ok(self.catalogs.read().unwrap().contains_key(catalog))
}
/// Registers a catalog if it does not exist; returns `false` if the catalog already exists.
pub fn register_catalog_sync(&self, name: &str) -> Result<bool> {
let name = name.to_string();
let mut catalogs = self.catalogs.write().unwrap();
match catalogs.entry(name.clone()) {
Entry::Vacant(e) => {
let arc_self = Arc::new(self.clone());
let catalog = arc_self.create_catalog_entry(name);
e.insert(catalog);
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_CATALOG_COUNT, 1.0);
Ok(true)
}
Entry::Occupied(_) => Ok(false),
}
}
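/// Deregisters a table. A missing table is a no-op rather than an error; the
/// table-count gauge is decremented only when an entry was actually removed.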
pub fn deregister_table_sync(&self, request: DeregisterTableRequest) -> Result<()> {
let mut catalogs = self.catalogs.write().unwrap();
let schema = catalogs
.get_mut(&request.catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.get_mut(&request.schema)
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
let result = schema.remove(&request.table_name);
if result.is_some() {
decrement_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(&request.catalog, &request.schema)],
);
}
Ok(())
}
/// Registers a schema if it does not exist.
/// Returns an error if the catalog does not exist,
/// and `false` if the schema already exists.
pub fn register_schema_sync(&self, request: RegisterSchemaRequest) -> Result<bool> {
let mut catalogs = self.catalogs.write().unwrap();
let catalog = catalogs
.get_mut(&request.catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?;
match catalog.entry(request.schema) {
Entry::Vacant(e) => {
e.insert(HashMap::new());
increment_gauge!(crate::metrics::METRIC_CATALOG_MANAGER_SCHEMA_COUNT, 1.0);
Ok(true)
}
Entry::Occupied(_) => Ok(false),
}
}
/// Registers a table; returns an error if the catalog or schema does not exist,
/// or if a table with the same name is already registered.
pub fn register_table_sync(&self, request: RegisterTableRequest) -> Result<bool> {
let mut catalogs = self.catalogs.write().unwrap();
let schema = catalogs
.get_mut(&request.catalog)
.with_context(|| CatalogNotFoundSnafu {
catalog_name: &request.catalog,
})?
.get_mut(&request.schema)
.with_context(|| SchemaNotFoundSnafu {
catalog: &request.catalog,
schema: &request.schema,
})?;
if schema.contains_key(&request.table_name) {
return TableExistsSnafu {
table: &request.table_name,
}
.fail();
}
schema.insert(request.table_name, request.table);
increment_gauge!(
crate::metrics::METRIC_CATALOG_MANAGER_TABLE_COUNT,
1.0,
&[crate::metrics::db_label(&request.catalog, &request.schema)],
);
Ok(true)
}
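/// Seeds a new catalog with its `information_schema` provider, which holds a weak
/// reference back to this manager to avoid an `Arc` reference cycle.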
fn create_catalog_entry(self: &Arc<Self>, catalog: String) -> SchemaEntries {
let information_schema = InformationSchemaProvider::build(
catalog,
Arc::downgrade(self) as Weak<dyn CatalogManager>,
);
let mut catalog = HashMap::new();
catalog.insert(INFORMATION_SCHEMA_NAME.to_string(), information_schema);
catalog
}
#[cfg(any(test, feature = "testing"))]
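/// Test helper: builds a manager pre-populated with `table`, creating its catalog
/// and schema on demand before registering the table.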
pub fn new_with_table(table: TableRef) -> Arc<Self> {
let manager = Self::with_default_setup();
let catalog = &table.table_info().catalog_name;
let schema = &table.table_info().schema_name;
if !manager.catalog_exist_sync(catalog).unwrap() {
manager.register_catalog_sync(catalog).unwrap();
}
if !manager.schema_exist_sync(catalog, schema).unwrap() {
manager
.register_schema_sync(RegisterSchemaRequest {
catalog: catalog.to_string(),
schema: schema.to_string(),
})
.unwrap();
}
let request = RegisterTableRequest {
catalog: catalog.to_string(),
schema: schema.to_string(),
table_name: table.table_info().name.clone(),
table_id: table.table_info().ident.table_id,
table,
};
let _ = manager.register_table_sync(request).unwrap();
manager
}
}
/// Creates an in-memory catalog manager with the default setup for tests.
Ok(MemoryCatalogManager::with_default_setup())
}
#[cfg(test)]
mod tests {
use common_catalog::consts::*;
use table::table::numbers::{NumbersTable, NUMBERS_TABLE_NAME};
use super::*;
#[tokio::test]
async fn test_new_memory_catalog_list() {
let catalog_list = new_memory_catalog_manager().unwrap();
let register_request = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: NUMBERS_TABLE_NAME.to_string(),
table_id: NUMBERS_TABLE_ID,
table: NumbersTable::table(NUMBERS_TABLE_ID),
};
catalog_list.register_table_sync(register_request).unwrap();
let table = catalog_list
.table(
DEFAULT_CATALOG_NAME,
DEFAULT_SCHEMA_NAME,
NUMBERS_TABLE_NAME,
)
.await
.unwrap();
let _ = table.unwrap();
assert!(catalog_list
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, "not_exists")
.await
.unwrap()
.is_none());
}
#[test]
pub fn test_register_catalog_sync() {
let list = MemoryCatalogManager::with_default_setup();
assert!(list.register_catalog_sync("test_catalog").unwrap());
assert!(!list.register_catalog_sync("test_catalog").unwrap());
}
#[tokio::test]
pub async fn test_catalog_deregister_table() {
let catalog = MemoryCatalogManager::with_default_setup();
let table_name = "foo_table";
let register_table_req = RegisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
table_id: 2333,
table: NumbersTable::table(2333),
};
catalog.register_table_sync(register_table_req).unwrap();
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap()
.is_some());
let deregister_table_req = DeregisterTableRequest {
catalog: DEFAULT_CATALOG_NAME.to_string(),
schema: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
};
catalog.deregister_table_sync(deregister_table_req).unwrap();
assert!(catalog
.table(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, table_name)
.await
.unwrap()
.is_none());
}
}


@@ -0,0 +1,29 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_catalog::build_db_string;
pub(crate) const METRIC_DB_LABEL: &str = "db";
pub(crate) const METRIC_CATALOG_MANAGER_CATALOG_COUNT: &str = "catalog.catalog_count";
pub(crate) const METRIC_CATALOG_MANAGER_SCHEMA_COUNT: &str = "catalog.schema_count";
pub(crate) const METRIC_CATALOG_MANAGER_TABLE_COUNT: &str = "catalog.table_count";
pub(crate) const METRIC_CATALOG_KV_REMOTE_GET: &str = "catalog.kv.get.remote";
pub(crate) const METRIC_CATALOG_KV_GET: &str = "catalog.kv.get";
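/// Builds the `db` metric label for a catalog/schema pair; the label value format
/// is delegated to `common_catalog::build_db_string`.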
#[inline]
pub(crate) fn db_label(catalog: &str, schema: &str) -> (&'static str, String) {
(METRIC_DB_LABEL, build_db_string(catalog, schema))
}


@@ -1,131 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::Debug;
use std::pin::Pin;
use std::sync::Arc;
pub use client::MetaKvBackend;
use futures::Stream;
use futures_util::StreamExt;
pub use manager::{RemoteCatalogManager, RemoteCatalogProvider, RemoteSchemaProvider};
use crate::error::Error;
mod client;
mod manager;
#[derive(Debug, Clone)]
pub struct Kv(pub Vec<u8>, pub Vec<u8>);
pub type ValueIter<'a, E> = Pin<Box<dyn Stream<Item = Result<Kv, E>> + Send + 'a>>;
#[async_trait::async_trait]
pub trait KvBackend: Send + Sync {
fn range<'a, 'b>(&'a self, key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b;
async fn set(&self, key: &[u8], val: &[u8]) -> Result<(), Error>;
/// Compares and sets the value of a key. `expect` is the expected current value; if the
/// backend's value associated with the key equals `expect`, the value is updated to `val`.
///
/// - If the compare-and-set operation successfully updated the value, this method returns `Ok(Ok(()))`.
/// - If the associated value is not the same as `expect`, no value is updated and
///   `Ok(Err(Option<Vec<u8>>))` is returned; the inner option carries the current value
///   of the key, if any.
/// - If any error happens during the operation, an `Err(Error)` is returned.
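/// A hedged usage sketch (`backend` is any `KvBackend`; keys and values are illustrative):
/// ```ignore
/// match backend.compare_and_set(b"k", b"v0", b"v1").await? {
///     Ok(()) => { /* the value was `v0` and has been set to `v1` */ }
///     Err(current) => { /* not updated; `current` holds the actual value, if any */ }
/// }
/// ```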
async fn compare_and_set(
&self,
key: &[u8],
expect: &[u8],
val: &[u8],
) -> Result<Result<(), Option<Vec<u8>>>, Error>;
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<(), Error>;
async fn delete(&self, key: &[u8]) -> Result<(), Error> {
self.delete_range(key, &[]).await
}
/// The default `get` implementation is built on top of the `range` method.
async fn get(&self, key: &[u8]) -> Result<Option<Kv>, Error> {
let mut iter = self.range(key);
while let Some(r) = iter.next().await {
let kv = r?;
if kv.0 == key {
return Ok(Some(kv));
}
}
Ok(None)
}
}
pub type KvBackendRef = Arc<dyn KvBackend>;
#[cfg(test)]
mod tests {
use async_stream::stream;
use super::*;
struct MockKvBackend {}
#[async_trait::async_trait]
impl KvBackend for MockKvBackend {
fn range<'a, 'b>(&'a self, _key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b,
{
Box::pin(stream!({
for i in 0..3 {
yield Ok(Kv(
i.to_string().as_bytes().to_vec(),
i.to_string().as_bytes().to_vec(),
))
}
}))
}
async fn set(&self, _key: &[u8], _val: &[u8]) -> Result<(), Error> {
unimplemented!()
}
async fn compare_and_set(
&self,
_key: &[u8],
_expect: &[u8],
_val: &[u8],
) -> Result<Result<(), Option<Vec<u8>>>, Error> {
unimplemented!()
}
async fn delete_range(&self, _key: &[u8], _end: &[u8]) -> Result<(), Error> {
unimplemented!()
}
}
#[tokio::test]
async fn test_get() {
let backend = MockKvBackend {};
let result = backend.get(0.to_string().as_bytes()).await;
assert_eq!(0.to_string().as_bytes(), result.unwrap().unwrap().0);
let result = backend.get(1.to_string().as_bytes()).await;
assert_eq!(1.to_string().as_bytes(), result.unwrap().unwrap().0);
let result = backend.get(2.to_string().as_bytes()).await;
assert_eq!(2.to_string().as_bytes(), result.unwrap().unwrap().0);
let result = backend.get(3.to_string().as_bytes()).await;
assert!(result.unwrap().is_none());
}
}


@@ -1,108 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::Debug;
use std::sync::Arc;
use async_stream::stream;
use common_telemetry::info;
use meta_client::client::MetaClient;
use meta_client::rpc::{CompareAndPutRequest, DeleteRangeRequest, PutRequest, RangeRequest};
use snafu::ResultExt;
use crate::error::{Error, MetaSrvSnafu};
use crate::remote::{Kv, KvBackend, ValueIter};
#[derive(Debug)]
pub struct MetaKvBackend {
pub client: Arc<MetaClient>,
}
/// Implements the `KvBackend` trait for `MetaKvBackend` instead of opendal's `Accessor`, since
/// `MetaClient`'s range method can return both keys and values, which reduces IO overhead
/// compared with `Accessor`'s `list` and `get` methods.
#[async_trait::async_trait]
impl KvBackend for MetaKvBackend {
fn range<'a, 'b>(&'a self, key: &[u8]) -> ValueIter<'b, Error>
where
'a: 'b,
{
let key = key.to_vec();
Box::pin(stream!({
let mut resp = self
.client
.range(RangeRequest::new().with_prefix(key))
.await
.context(MetaSrvSnafu)?;
let kvs = resp.take_kvs();
for mut kv in kvs.into_iter() {
yield Ok(Kv(kv.take_key(), kv.take_value()))
}
}))
}
async fn get(&self, key: &[u8]) -> Result<Option<Kv>, Error> {
let mut response = self
.client
.range(RangeRequest::new().with_key(key))
.await
.context(MetaSrvSnafu)?;
Ok(response
.take_kvs()
.get_mut(0)
.map(|kv| Kv(kv.take_key(), kv.take_value())))
}
async fn set(&self, key: &[u8], val: &[u8]) -> Result<(), Error> {
let req = PutRequest::new()
.with_key(key.to_vec())
.with_value(val.to_vec());
let _ = self.client.put(req).await.context(MetaSrvSnafu)?;
Ok(())
}
async fn delete_range(&self, key: &[u8], end: &[u8]) -> Result<(), Error> {
let req = DeleteRangeRequest::new().with_range(key.to_vec(), end.to_vec());
let resp = self.client.delete_range(req).await.context(MetaSrvSnafu)?;
info!(
"Delete range, key: {}, end: {}, deleted: {}",
String::from_utf8_lossy(key),
String::from_utf8_lossy(end),
resp.deleted()
);
Ok(())
}
async fn compare_and_set(
&self,
key: &[u8],
expect: &[u8],
val: &[u8],
) -> Result<Result<(), Option<Vec<u8>>>, Error> {
let request = CompareAndPutRequest::new()
.with_key(key.to_vec())
.with_expect(expect.to_vec())
.with_value(val.to_vec());
let mut response = self
.client
.compare_and_put(request)
.await
.context(MetaSrvSnafu)?;
if response.is_success() {
Ok(Ok(()))
} else {
Ok(Err(response.take_prev_kv().map(|v| v.value().to_vec())))
}
}
}

Some files were not shown because too many files have changed in this diff.