Compare commits

...

67 Commits

Author SHA1 Message Date
Weny Xu
d57b144b2f chore: change test_remove_outdated_meta_task sleep time to 40ms (#2620)
chore: change test_remove_outdated_meta_task sleep time to 300ms
2023-10-18 11:33:35 +00:00
WU Jingdi
46e106bcc3 feat: allow nest range expr in Range Query (#2557)
* feat: enable range expr nesting

* fix: change range expr rewrite format

* chore: organize range query tests

* chore: change range expr name(e.g. MAX(v) RANGE 5s FILL 6)

* chore: add range query test

* chore: fix code advice

* chore: fix ca
2023-10-18 07:03:26 +00:00
localhost
a7507a2b12 chore: change telemetry report url to resolve connectivity issues (#2608)
chore: change otel report url to resolve connectivity issues
2023-10-18 06:58:54 +00:00
Wei
5b8e5066a0 refactor: make ReadableSize more readable. (#2614)
* refactor: ReadableSize is readable.

* docs: Update src/common/base/src/readable_size.rs

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-10-18 06:32:50 +00:00
Weny Xu
dcd481e6a4 feat: stop the procedure manager if a new leader is elected (#2576)
* feat: stop the procedure manager if a new leader is elected

* chore: apply suggestions from CR

* chore: apply suggestions

* chore: apply suggestions from CR

* feat: add should_report to GreptimeDBTelemetry

Signed-off-by: WenyXu <wenymedia@gmail.com>

* refactor: refactor subscribing leader change loop

---------

Signed-off-by: WenyXu <wenymedia@gmail.com>
2023-10-18 06:12:28 +00:00
zyy17
3217b56cc1 ci: release new version '0.4.0' -> '0.4.1' (#2611) 2023-10-17 07:33:41 +00:00
shuiyisong
eccad647d0 chore: add export data to migrate tool (#2610)
* chore: add export data to migrate tool

* chore: export copy from sql too
2023-10-17 06:33:58 +00:00
Yun Chen
829db8c5c1 fix!: align frontend cmd name to rpc_* (#2609)
fix: align frontend cmd name to rpc_*
2023-10-17 06:18:18 +00:00
Ruihang Xia
9056c3a6aa feat: implement greptime cli export (#2535)
* feat: implement greptime cli export

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* read information schema

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* parse database name from cli params

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-17 01:56:52 +00:00
ZhangJian He
d9e7b898a3 feat: add walconfig dir back (#2606)
Signed-off-by: ZhangJian He <shoothzj@gmail.com>
2023-10-16 11:26:06 +00:00
zyy17
59d4081f7a ci: correct image name of dev build (#2603) 2023-10-16 03:54:44 +00:00
zyy17
6e87ac0a0e ci: refine release-cn-artifacts action (#2600)
* ci: add copy-image.sh and upload-artifacts-to-s3.sh

* ci: remove unused options in dev build

* ci: use 'upload-artifacts-to-s3.sh' and 'copy-image.sh' in release-cn-artifacts action

* refactor: refine copy-image.sh
2023-10-13 17:04:06 +08:00
shuiyisong
d89cfd0d4d fix: auth in standalone mode (#2591)
chore: user_provider in standalone mode
2023-10-13 08:37:58 +00:00
Yingwen
8a0054aa89 fix: make nyc-taxi bench work again (#2599)
* fix: invalid requests created by nyc-taxi

* feat: add timestamp to table name

* style: fix clippy

* chore: re-export deps for client

* fix: wait result

* chore: no need to define a prefix constant
2023-10-13 08:16:26 +00:00
Yun Chen
f859932745 fix: convert to ReadableSize & Durations (#2594)
* fix: convert to ReadableSize & Durations

* fix: change more grpc sender/recv message size to ReadableSize

fix: format

fix: cargo fmt

fix: change cmd test to use durations

fix: revert metaclient change

fix: convert default fields in meta client options

fix: human serde meta client durations

* fix: remove millisecond postfix in heartbeat option

* fix: humantime serde on heartbeat

* fix: update config example

* fix: update integration test config

* fix: address pr comments

* fix: fix pr comment on default annotation
2023-10-13 03:28:29 +00:00
Ruihang Xia
9a8fc08e6a docs(benchmark): update 0.4.0 tsbs result (#2597)
* docs(benchmark): update 0.4.0 tsbs result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-13 03:08:14 +00:00
Ruihang Xia
825e4beead build(ci): pin linux runner to ubuntu-20.04 (#2586)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-12 18:08:05 +08:00
zyy17
0a23b40321 ci: downgrade builder version: ubuntu 22.04 -> ubuntu 20.04 for compatibility with older glibc versions (>=2.31) (#2592) 2023-10-12 16:46:25 +08:00
Ruihang Xia
cf6ef0a30d chore(cli): deregister cli attach command (#2589)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-12 08:11:17 +00:00
dennis zhuang
65a659d136 fix: ensure data_home directory created (#2588)
fix: ensure data_home directory created before creating metadata store, #2587
2023-10-12 07:32:55 +00:00
Ruihang Xia
62bcb45787 feat!: change config name from kv_store to metadata_store (#2585)
feat: change config name from kv_store to metadata_store

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-12 06:55:09 +00:00
zyy17
94f3542a4f ci: fix skopeo running errors (#2581)
ci: fix skopeo auth error
2023-10-12 06:13:56 +00:00
LFC
fc3bc5327d ci: release Windows artifacts (#2574)
* ci: release Windows artifacts

* ci: release Windows artifacts
2023-10-12 14:10:59 +08:00
Ning Sun
9e33ddceea ci: run windows tests every night instead of every commit (#2577)
ci: move windows ci to nightly-ci
2023-10-12 02:53:42 +00:00
zyy17
c9bdf4ff9f ci: refine the process of releasing dev-builder images (#2580)
* fix: fix error of releasing android builder image

* fix: run skopeo error

* ci: add 'release-dev-builder-images-cn' job

* ci: add 'disable_building_images'

* fix: add vars

* ci: use skopeo container

* ci: update opts default values
2023-10-12 02:41:54 +00:00
dennis zhuang
0a9972aa9a fix: cache capacity unit in sample config (#2575) 2023-10-11 11:02:39 +00:00
zyy17
76d5b710c8 ci: add more options for releasing dev-builder images (#2573) 2023-10-11 16:24:50 +08:00
zyy17
fe02366ce6 fix: remove unused options and add 'build-android-artifacts' (#2572) 2023-10-11 15:32:58 +08:00
zyy17
d7aeb369a6 refactor: add new action 'release-cn-artifacts' (#2554)
* refactor: add new action 'release-cn-artifacts'

* refactor: refine naming: 'release-artifacts' -> 'publish-github-release'

Signed-off-by: zyy17 <zyylsxm@gmail.com>

---------

Signed-off-by: zyy17 <zyylsxm@gmail.com>
2023-10-11 03:42:04 +00:00
zyy17
9284bb7a2b ci: separate the job of building dev-builder images (#2569) 2023-10-11 11:09:53 +08:00
liyang
e23dd5a44f fix: fix readme document link (#2566) 2023-10-11 02:45:43 +00:00
zyy17
c60b59adc8 chore: add the steps of building android binary (#2567) 2023-10-11 02:31:11 +00:00
Lei, HUANG
c9c2b3c91f fix: revert memtable pk rb cache to rwlock (#2565)
* fix: revert memtable pk rb cache to rwlock

* feat: refine
2023-10-10 20:51:05 +08:00
Yingwen
7f75190fce chore: update Cargo.lock (#2564) 2023-10-10 16:28:50 +08:00
Yingwen
0a394c73a2 chore: bump version to 0.4.0 (#2563) 2023-10-10 16:16:15 +08:00
JeremyHi
ae95f23e05 feat: add metrics for region server (#2552)
* feat: add metrics for region server

* fix: add comment and remove unused code
2023-10-10 07:40:16 +00:00
Lei, HUANG
6b39f5923d feat: add compaction metrics (#2560)
* feat: add compaction metrics

* feat: add compaction request total count

* fix: CR comments
2023-10-10 07:38:39 +00:00
JeremyHi
ed725d030f fix: support multi addrs while using etcd (#2562)
fix: support multi addrs while using etcd
2023-10-10 07:30:48 +00:00
Wei
4fe7e162af fix: human_time mismatch (#2558)
* fix: human_time mismatch.

* fix: add comment
2023-10-10 07:22:12 +00:00
Yingwen
8a5ef826b9 fix(mito): Do not write to memtables if writing wal is failed (#2561)
* feat: add writes total metrics

* fix: don't write memtable if write ctx is failed

* feat: write rows metrics
2023-10-10 06:55:57 +00:00
Ruihang Xia
07be50403e feat: add basic metrics to query (#2559)
* add metrics to merge scan

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* count series in promql

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak label name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tweak label name

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* document metric label

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-10 06:55:25 +00:00
Lei, HUANG
8bdef9a348 feat: memtable filter push down (#2539)
* feat: memtable support filter pushdown to prune primary keys

* fix: switch to next time series when pk not selected

* fix: allow predicate evaluation failure

* fix: some clippy warnings

* fix: panic when no primary key in schema

* feat: cache decoded record batch for primary key

* refactor: use arcswap instead of rwlock

* fix: format toml
2023-10-10 04:03:10 +00:00
Yingwen
d4577e7372 feat(mito): add metrics to mito engine (#2556)
* feat: allow discarding a timer

* feat: flush metrics

* feat: flush bytes and region count metrics

* refactor: add as_str to get static string

* feat: add handle request elapsed metrics

* feat: add some write related metrics

* style: fix clippy
2023-10-10 03:53:17 +00:00
dennis zhuang
88f26673f0 fix: adds back http_timeout for frontend subcommand (#2555) 2023-10-10 03:05:16 +00:00
Baasit
19f300fc5a feat: renaming kv directory to metadata (#2549)
* fix: renamed kv directory to metadata directory

* fix: changed function name

* fix: changed function name
2023-10-09 11:43:17 +00:00
Weny Xu
cc83764331 fix: check table exists before allocating table id (#2546)
* fix: check table exists before allocating table_id

* chore: apply suggestions from CR
2023-10-09 11:40:10 +00:00
Yingwen
81aa7a4caf chore(mito): change default batch size/row group size (#2550) 2023-10-09 11:10:12 +00:00
Yingwen
d68dd1f3eb fix: schema validation is skipped once we need to fill a column (#2548)
* test: test different order

* test: add tests for missing and invalid columns

* fix: do not skip schema validation while missing columns

* chore: use field_columns()

* test: add tests for different column order
2023-10-09 09:20:51 +00:00
Lei, HUANG
9b3470b049 feat: android image builder dockerfile (#2541)
* feat: android image builder dockerfile

* feat: add building android dev-builder to ci config file

* fix: add build arg

* feat: use makefile to build image and add strip command
2023-10-09 09:10:14 +00:00
Weny Xu
8cc862ff8a refactor: refactor cache invalidator (#2540) 2023-10-09 08:19:18 +00:00
Weny Xu
81ccb58fb4 refactor!: compare with origin bytes during the transactions (#2538)
* refactor: compare with origin bytes during the transaction

* refactor: use serialize_str instead

* Update src/common/meta/src/key.rs

Co-authored-by: JeremyHi <jiachun_feng@proton.me>

* chore: apply suggestions from CR

---------

Co-authored-by: JeremyHi <jiachun_feng@proton.me>
2023-10-09 08:17:19 +00:00
Weny Xu
ce3c10a86e refactor: de/encode protobuf-encoded byte array with base64 (#2545) 2023-10-09 05:31:44 +00:00
shuiyisong
007f7ba03c refactor: extract plugins crate (#2487)
* chore: move frontend plugins fn

* chore: move datanode plugins to fn

* chore: add opt plugins

* chore: add plugins to meta-srv

* chore: setup meta plugins, wait for router extension

* chore: try use configurator for grpc too

* chore: minor fix fmt

* chore: minor fix fmt

* chore: add start meta_srv for hook

* chore: merge develop

* chore: minor fix

* chore: replace Arc<Plugins> with PluginsRef

* chore: fix header

* chore: remove empty file

* chore: modify comments

* chore: remove PluginsRef type alias

* chore: remove `OptPlugins`
2023-10-09 04:54:27 +00:00
Weny Xu
dfe68a7e0b refactor: check push result out of loop (#2511)
* refactor: check push result out of loop

* chore: apply suggestions from CR
2023-10-09 02:49:48 +00:00
Ruihang Xia
d5e4fcaaff feat: dist plan optimize part 2 (#2543)
* allow udf and scalar fn

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* put CountWildcardRule before dist planner

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* bump datafusion to fix first_value/last_value

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness result

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use retain instead

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-10-09 02:18:36 +00:00
Yingwen
17b385a985 fix: compiler errors under pprof and mem-prof features (#2537)
* fix: compiler errors under pprof feature

* fix: compiler errors under mem-prof feature
2023-10-08 08:28:45 +00:00
shuiyisong
067917845f fix: carry dbname from frontend to datanode (#2520)
* chore: add dbname in region request header for tracking purpose

* chore: fix handle read

* chore: add write meter

* chore: add meter-core to dep

* chore: add converter between RegionRequestHeader and QueryContext & update proto version
2023-10-08 06:30:23 +00:00
Weny Xu
a680133acc feat: enable no delay for mysql, opentsdb, http (#2530)
* refactor: enable no delay for mysql, opentsdb, http

* Apply suggestions from code review

Co-authored-by: Yingwen <realevenyag@gmail.com>

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-10-08 06:19:52 +00:00
Yingwen
0593c3bde3 fix(mito): pruning for mito2 (#2525)
* fix: pruning for mito2

* chore: refactor projection parameters; add some tests; customize row group size for each flush task.

* chore: pass whole RegionFlushRequest

---------

Co-authored-by: Lei, HUANG <mrsatangel@gmail.com>
2023-10-08 03:45:15 +00:00
Lei, HUANG
0292445476 fix: timestamp range filter (#2533)
* fix: timestamp range filter

* fix: rebase develop

* fix: some style issues
2023-10-08 03:29:02 +00:00
dennis zhuang
ff15bc41d6 feat: improve object storage cache (#2522)
* feat: refactor object storage cache with moka

* chore: minor fixes

* fix: concurrent issues and invalidate cache after write/delete

* chore: minor changes

* fix: cargo lock

* refactor: rename

* chore: change DEFAULT_OBJECT_STORE_CACHE_SIZE to 256Mib

* fix: typo

* chore: style

* fix: toml format

* chore: toml

* fix: toml format

* Update src/object-store/src/layers/lru_cache/read_cache.rs

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>

* chore: update Cargo.toml

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: update src/object-store/Cargo.toml

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: refactor and apply suggestions

* fix: typo

* feat: adds back allow list for caching

* chore: cr suggestion

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: cr suggestion

Co-authored-by: Yingwen <realevenyag@gmail.com>

* refactor: wrap inner Accessor with Arc

* chore: remove run_pending_task in read and write path

* chore: the arc is unnecessary

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-10-08 03:27:49 +00:00
Yingwen
657542c0b8 feat(mito): Cache repeated vector for tags (#2523)
* feat: add vector_cache to CacheManager

* feat: cache repeated vectors

* feat: skip decoding pk if output doesn't contain tags

* test: add TestRegionMetadataBuilder

* test: test ProjectionMapper

* test: test vector cache

* test: test projection mapper convert

* style: fix clippy

* feat: do not cache vector if it is too large

* docs: update comment
2023-10-07 11:36:00 +00:00
Ning Sun
0ad3fb6040 fix: mysql timezone settings (#2534)
* fix: restore time zone settings for mysql

* test: add integration test for time zone

* test: fix unit test for check
2023-10-07 10:21:32 +00:00
Bamboo1
b44e39f897 feat: the schema of RegionMetadata is not output during debug (#2498)
* feat: the schema of RegionMetadata is not output during debug because column_metadatas contains duplicate information

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: the id_to_index of RegionMetadata is not output during debug

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: add debug trait

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* feat: use default debug in ConcreteDataType

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: add std::fmt

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* test: add debug trait test

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: typo

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: resolve conversation

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* chore: format

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

* fix: test bug

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>

---------

Signed-off-by: ZhuZiyi <zyzhu2001@gmail.com>
2023-10-07 08:01:54 +00:00
Weny Xu
f50f2a84a9 fix: open region missing options (#2473)
* fix: open region missing options

* refactor: remove redundant clone

* chore: apply suggestions from CR

* chore: apply suggestions

* chore: apply suggestions

* test: add test for initialize_region_server

* feat: introduce RegionInfo
2023-10-07 07:17:16 +00:00
Yingwen
fe783c7c1f perf(mito): Use a heap to merge batches for the same key (#2521)
* feat: merge by heap

* fix: fix heap order

* feat: avoid pop/push next and refactor some functions

* feat: replace merge_batches and fix tests

* test: add test that a key is deleted

* fix: skip empty batch

* style: clippy

* chore: fix typos
2023-10-07 02:56:08 +00:00
Weny Xu
00fe7d104e feat: enable tcp no_delay by default for internal services (#2527) 2023-10-07 02:35:28 +00:00
235 changed files with 7864 additions and 2624 deletions

View File

@@ -1,93 +0,0 @@
name: Build and push dev-builder image
description: Build and push dev-builder image to DockerHub and ACR
inputs:
dockerhub-image-registry:
description: The dockerhub image registry to store the images
required: false
default: docker.io
dockerhub-image-registry-username:
description: The dockerhub username to login to the image registry
required: true
dockerhub-image-registry-token:
description: The dockerhub token to login to the image registry
required: true
dockerhub-image-namespace:
description: The dockerhub namespace of the image registry to store the images
required: false
default: greptime
acr-image-registry:
description: The ACR image registry to store the images
required: true
acr-image-registry-username:
description: The ACR username to login to the image registry
required: true
acr-image-registry-password:
description: The ACR password to login to the image registry
required: true
acr-image-namespace:
description: The ACR namespace of the image registry to store the images
required: false
default: greptime
version:
description: Version of the dev-builder
required: false
default: latest
runs:
using: composite
steps:
- name: Login to Dockerhub
uses: docker/login-action@v2
with:
registry: ${{ inputs.dockerhub-image-registry }}
username: ${{ inputs.dockerhub-image-registry-username }}
password: ${{ inputs.dockerhub-image-registry-token }}
- name: Build and push ubuntu dev builder image to dockerhub
shell: bash
run:
make dev-builder \
BASE_IMAGE=ubuntu \
BUILDX_MULTI_PLATFORM_BUILD=true \
IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
IMAGE_TAG=${{ inputs.version }}
- name: Build and push centos dev builder image to dockerhub
shell: bash
run:
make dev-builder \
BASE_IMAGE=centos \
BUILDX_MULTI_PLATFORM_BUILD=true \
IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
IMAGE_TAG=${{ inputs.version }}
- name: Login to ACR
uses: docker/login-action@v2
continue-on-error: true
with:
registry: ${{ inputs.acr-image-registry }}
username: ${{ inputs.acr-image-registry-username }}
password: ${{ inputs.acr-image-registry-password }}
- name: Build and push ubuntu dev builder image to ACR
shell: bash
continue-on-error: true
run: # buildx will cache the images that were already built, so it will not take a long time to build the images again.
make dev-builder \
BASE_IMAGE=ubuntu \
BUILDX_MULTI_PLATFORM_BUILD=true \
IMAGE_REGISTRY=${{ inputs.acr-image-registry }} \
IMAGE_NAMESPACE=${{ inputs.acr-image-namespace }} \
IMAGE_TAG=${{ inputs.version }}
- name: Build and push centos dev builder image to ACR
shell: bash
continue-on-error: true
run: # buildx will cache the images that were already built, so it will not take a long time to build the images again.
make dev-builder \
BASE_IMAGE=centos \
BUILDX_MULTI_PLATFORM_BUILD=true \
IMAGE_REGISTRY=${{ inputs.acr-image-registry }} \
IMAGE_NAMESPACE=${{ inputs.acr-image-namespace }} \
IMAGE_TAG=${{ inputs.version }}

View File

@@ -0,0 +1,76 @@
name: Build and push dev-builder images
description: Build and push dev-builder images to DockerHub and ACR
inputs:
dockerhub-image-registry:
description: The dockerhub image registry to store the images
required: false
default: docker.io
dockerhub-image-registry-username:
description: The dockerhub username to login to the image registry
required: true
dockerhub-image-registry-token:
description: The dockerhub token to login to the image registry
required: true
dockerhub-image-namespace:
description: The dockerhub namespace of the image registry to store the images
required: false
default: greptime
version:
description: Version of the dev-builder
required: false
default: latest
build-dev-builder-ubuntu:
description: Build dev-builder-ubuntu image
required: false
default: 'true'
build-dev-builder-centos:
description: Build dev-builder-centos image
required: false
default: 'true'
build-dev-builder-android:
description: Build dev-builder-android image
required: false
default: 'true'
runs:
using: composite
steps:
- name: Login to Dockerhub
uses: docker/login-action@v2
with:
registry: ${{ inputs.dockerhub-image-registry }}
username: ${{ inputs.dockerhub-image-registry-username }}
password: ${{ inputs.dockerhub-image-registry-token }}
- name: Build and push dev-builder-ubuntu image
shell: bash
if: ${{ inputs.build-dev-builder-ubuntu == 'true' }}
run: |
make dev-builder \
BASE_IMAGE=ubuntu \
BUILDX_MULTI_PLATFORM_BUILD=true \
IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
IMAGE_TAG=${{ inputs.version }}
- name: Build and push dev-builder-centos image
shell: bash
if: ${{ inputs.build-dev-builder-centos == 'true' }}
run: |
make dev-builder \
BASE_IMAGE=centos \
BUILDX_MULTI_PLATFORM_BUILD=true \
IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
IMAGE_TAG=${{ inputs.version }}
- name: Build and push dev-builder-android image # Only build image for amd64 platform.
shell: bash
if: ${{ inputs.build-dev-builder-android == 'true' }}
run: |
make dev-builder \
BASE_IMAGE=android \
IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
IMAGE_TAG=${{ inputs.version }} && \
docker push ${{ inputs.dockerhub-image-registry }}/${{ inputs.dockerhub-image-namespace }}/dev-builder-android:${{ inputs.version }}
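
Because BUILDX_MULTI_PLATFORM_BUILD=true builds and pushes a multi-arch manifest directly, the published image can be checked after the job finishes. A minimal sketch, assuming the ubuntu variant is published as dev-builder-ubuntu under the default registry and namespace shown above:

```bash
# Inspect the multi-arch manifest pushed by the dev-builder-ubuntu step.
# The image name and tag are assumptions based on the action's default inputs.
docker buildx imagetools inspect docker.io/greptime/dev-builder-ubuntu:latest
```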

View File

@@ -16,35 +16,20 @@ inputs:
version:
description: Version of the artifact
required: true
release-to-s3-bucket:
description: S3 bucket to store released artifacts
required: true
aws-access-key-id:
description: AWS access key id
required: true
aws-secret-access-key:
description: AWS secret access key
required: true
aws-region:
description: AWS region
required: true
upload-to-s3:
description: Upload to S3
required: false
default: 'true'
upload-latest-artifacts:
description: Upload the latest artifacts to S3
required: false
default: 'true'
working-dir:
description: Working directory to build the artifacts
required: false
default: .
build-android-artifacts:
description: Build android artifacts
required: false
default: 'false'
runs:
using: composite
steps:
- name: Build greptime binary
shell: bash
if: ${{ inputs.build-android-artifacts == 'false' }}
run: |
cd ${{ inputs.working-dir }} && \
make build-by-dev-builder \
@@ -54,14 +39,25 @@ runs:
- name: Upload artifacts
uses: ./.github/actions/upload-artifacts
if: ${{ inputs.build-android-artifacts == 'false' }}
with:
artifacts-dir: ${{ inputs.artifacts-dir }}
target-file: ./target/${{ inputs.cargo-profile }}/greptime
version: ${{ inputs.version }}
release-to-s3-bucket: ${{ inputs.release-to-s3-bucket }}
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
aws-region: ${{ inputs.aws-region }}
upload-to-s3: ${{ inputs.upload-to-s3 }}
upload-latest-artifacts: ${{ inputs.upload-latest-artifacts }}
working-dir: ${{ inputs.working-dir }}
# TODO(zyy17): We can remove build-android-artifacts flag in the future.
- name: Build greptime binary
shell: bash
if: ${{ inputs.build-android-artifacts == 'true' }}
run: |
cd ${{ inputs.working-dir }} && make strip-android-bin
- name: Upload android artifacts
uses: ./.github/actions/upload-artifacts
if: ${{ inputs.build-android-artifacts == 'true' }}
with:
artifacts-dir: ${{ inputs.artifacts-dir }}
target-file: ./target/aarch64-linux-android/release/greptime
version: ${{ inputs.version }}
working-dir: ${{ inputs.working-dir }}

View File

@@ -13,30 +13,10 @@ inputs:
disable-run-tests:
description: Disable running integration tests
required: true
release-to-s3-bucket:
description: S3 bucket to store released artifacts
required: true
aws-access-key-id:
description: AWS access key id
required: true
aws-secret-access-key:
description: AWS secret access key
required: true
aws-region:
description: AWS region
required: true
dev-mode:
description: Enable dev mode, only build standard greptime
required: false
default: 'false'
upload-to-s3:
description: Upload to S3
required: false
default: 'true'
upload-latest-artifacts:
description: Upload the latest artifacts to S3
required: false
default: 'true'
working-dir:
description: Working directory to build the artifacts
required: false
@@ -68,12 +48,6 @@ runs:
cargo-profile: ${{ inputs.cargo-profile }}
artifacts-dir: greptime-linux-${{ inputs.arch }}-pyo3-${{ inputs.version }}
version: ${{ inputs.version }}
release-to-s3-bucket: ${{ inputs.release-to-s3-bucket }}
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
aws-region: ${{ inputs.aws-region }}
upload-to-s3: ${{ inputs.upload-to-s3 }}
upload-latest-artifacts: ${{ inputs.upload-latest-artifacts }}
working-dir: ${{ inputs.working-dir }}
- name: Build greptime without pyo3
@@ -85,12 +59,6 @@ runs:
cargo-profile: ${{ inputs.cargo-profile }}
artifacts-dir: greptime-linux-${{ inputs.arch }}-${{ inputs.version }}
version: ${{ inputs.version }}
release-to-s3-bucket: ${{ inputs.release-to-s3-bucket }}
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
aws-region: ${{ inputs.aws-region }}
upload-to-s3: ${{ inputs.upload-to-s3 }}
upload-latest-artifacts: ${{ inputs.upload-latest-artifacts }}
working-dir: ${{ inputs.working-dir }}
- name: Clean up the target directory # Clean up the target directory for the centos7 base image, or it will still use the objects of last build.
@@ -107,10 +75,14 @@ runs:
cargo-profile: ${{ inputs.cargo-profile }}
artifacts-dir: greptime-linux-${{ inputs.arch }}-centos-${{ inputs.version }}
version: ${{ inputs.version }}
release-to-s3-bucket: ${{ inputs.release-to-s3-bucket }}
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
aws-region: ${{ inputs.aws-region }}
upload-to-s3: ${{ inputs.upload-to-s3 }}
upload-latest-artifacts: ${{ inputs.upload-latest-artifacts }}
working-dir: ${{ inputs.working-dir }}
- name: Build greptime on android base image
uses: ./.github/actions/build-greptime-binary
if: ${{ inputs.arch == 'amd64' && inputs.dev-mode == 'false' }} # Only build android base image on amd64.
with:
base-image: android
artifacts-dir: greptime-android-arm64-${{ inputs.version }}
version: ${{ inputs.version }}
working-dir: ${{ inputs.working-dir }}
build-android-artifacts: true

View File

@@ -19,25 +19,9 @@ inputs:
disable-run-tests:
description: Disable running integration tests
required: true
release-to-s3-bucket:
description: S3 bucket to store released artifacts
required: true
artifacts-dir:
description: Directory to store artifacts
required: true
aws-access-key-id:
description: AWS access key id
required: true
aws-secret-access-key:
description: AWS secret access key
required: true
aws-region:
description: AWS region
required: true
upload-to-s3:
description: Upload to S3
required: false
default: 'true'
runs:
using: composite
steps:
@@ -103,8 +87,3 @@ runs:
artifacts-dir: ${{ inputs.artifacts-dir }}
target-file: target/${{ inputs.arch }}/${{ inputs.cargo-profile }}/greptime
version: ${{ inputs.version }}
release-to-s3-bucket: ${{ inputs.release-to-s3-bucket }}
aws-access-key-id: ${{ inputs.aws-access-key-id }}
aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
aws-region: ${{ inputs.aws-region }}
upload-to-s3: ${{ inputs.upload-to-s3 }}

View File

@@ -0,0 +1,80 @@
name: Build Windows artifacts
description: Build Windows artifacts
inputs:
arch:
description: Architecture to build
required: true
rust-toolchain:
description: Rust toolchain to use
required: true
cargo-profile:
description: Cargo profile to build
required: true
features:
description: Cargo features to build
required: true
version:
description: Version of the artifact
required: true
disable-run-tests:
description: Disable running integration tests
required: true
artifacts-dir:
description: Directory to store artifacts
required: true
runs:
using: composite
steps:
- uses: arduino/setup-protoc@v1
- name: Install rust toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ inputs.rust-toolchain }}
targets: ${{ inputs.arch }}
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install PyArrow Package
shell: pwsh
run: pip install pyarrow
- name: Install WSL distribution
uses: Vampire/setup-wsl@v2
with:
distribution: Ubuntu-22.04
- name: Install latest nextest release # For integration tests.
if: ${{ inputs.disable-run-tests == 'false' }}
uses: taiki-e/install-action@nextest
- name: Run integration tests
if: ${{ inputs.disable-run-tests == 'false' }}
shell: pwsh
run: make test sqlness-test
- name: Upload sqlness logs
if: ${{ failure() }} # Only upload logs when the integration tests failed.
uses: actions/upload-artifact@v3
with:
name: sqlness-logs
path: ${{ runner.temp }}/greptime-*.log
retention-days: 3
- name: Build greptime binary
shell: pwsh
run: cargo build --profile ${{ inputs.cargo-profile }} --features ${{ inputs.features }} --target ${{ inputs.arch }}
- name: Upload artifacts
uses: ./.github/actions/upload-artifacts
with:
artifacts-dir: ${{ inputs.artifacts-dir }}
target-file: target/${{ inputs.arch }}/${{ inputs.cargo-profile }}/greptime
version: ${{ inputs.version }}

View File

@@ -1,5 +1,5 @@
name: Release artifacts
description: Release artifacts
name: Publish GitHub release
description: Publish GitHub release
inputs:
version:
description: Version to release

View File

@@ -0,0 +1,138 @@
name: Release CN artifacts
description: Release artifacts to CN region
inputs:
src-image-registry:
description: The source image registry to store the images
required: true
default: docker.io
src-image-namespace:
description: The namespace of the source image registry to store the images
required: true
default: greptime
src-image-name:
description: The name of the source image
required: false
default: greptimedb
dst-image-registry:
description: The destination image registry to store the images
required: true
dst-image-namespace:
description: The namespace of the destination image registry to store the images
required: true
default: greptime
dst-image-registry-username:
description: The username to login to the image registry
required: true
dst-image-registry-password:
description: The password to login to the image registry
required: true
version:
description: Version of the artifact
required: true
dev-mode:
description: Enable dev mode, only push standard greptime
required: false
default: 'false'
push-latest-tag:
description: Whether to push the latest tag of the image
required: false
default: 'true'
aws-cn-s3-bucket:
description: S3 bucket to store released artifacts in CN region
required: true
aws-cn-access-key-id:
description: AWS access key id in CN region
required: true
aws-cn-secret-access-key:
description: AWS secret access key in CN region
required: true
aws-cn-region:
description: AWS region in CN
required: true
upload-to-s3:
description: Upload to S3
required: false
default: 'true'
artifacts-dir:
description: Directory to store artifacts
required: false
default: 'artifacts'
update-version-info:
description: Update the version info in S3
required: false
default: 'true'
upload-max-retry-times:
description: Max retry times for uploading artifacts to S3
required: false
default: "20"
upload-retry-timeout:
description: Timeout for uploading artifacts to S3
required: false
default: "30" # minutes
runs:
using: composite
steps:
- name: Download artifacts
uses: actions/download-artifact@v3
with:
path: ${{ inputs.artifacts-dir }}
- name: Release artifacts to cn region
uses: nick-invision/retry@v2
if: ${{ inputs.upload-to-s3 == 'true' }}
env:
AWS_ACCESS_KEY_ID: ${{ inputs.aws-cn-access-key-id }}
AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-cn-secret-access-key }}
AWS_DEFAULT_REGION: ${{ inputs.aws-cn-region }}
UPDATE_VERSION_INFO: ${{ inputs.update-version-info }}
with:
max_attempts: ${{ inputs.upload-max-retry-times }}
timeout_minutes: ${{ inputs.upload-retry-timeout }}
command: |
./.github/scripts/upload-artifacts-to-s3.sh \
${{ inputs.artifacts-dir }} \
${{ inputs.version }} \
${{ inputs.aws-cn-s3-bucket }}
- name: Push greptimedb image from Dockerhub to ACR
shell: bash
env:
DST_REGISTRY_USERNAME: ${{ inputs.dst-image-registry-username }}
DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
run: |
./.github/scripts/copy-image.sh \
${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}:${{ inputs.version }} \
${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}
- name: Push latest greptimedb image from Dockerhub to ACR
shell: bash
if: ${{ inputs.push-latest-tag == 'true' }}
env:
DST_REGISTRY_USERNAME: ${{ inputs.dst-image-registry-username }}
DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
run: |
./.github/scripts/copy-image.sh \
${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}:latest \
${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}
- name: Push greptimedb-centos image from DockerHub to ACR
shell: bash
if: ${{ inputs.dev-mode == 'false' }}
env:
DST_REGISTRY_USERNAME: ${{ inputs.dst-image-registry-username }}
DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
run: |
./.github/scripts/copy-image.sh \
${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}-centos:latest \
${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}
- name: Push greptimedb-centos image from DockerHub to ACR
shell: bash
if: ${{ inputs.dev-mode == 'false' && inputs.push-latest-tag == 'true' }}
env:
DST_REGISTRY_USERNAME: ${{ inputs.dst-image-registry-username }}
DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
run: |
./.github/scripts/copy-image.sh \
${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}-centos:latest \
${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}
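
Once the copy steps above have run, the pushed tag can be checked on the destination registry. A hedged sketch using the same skopeo container the action relies on; the registry host, namespace, and tag are illustrative values taken from examples elsewhere in this change, not outputs of the action:

```bash
# Verify that the copied image is visible in the destination (ACR) registry.
# Registry host, namespace, and tag are example values, not action outputs.
docker run quay.io/skopeo/stable:latest inspect \
  --creds "$DST_REGISTRY_USERNAME:$DST_REGISTRY_PASSWORD" \
  docker://greptime-registry.cn-hangzhou.cr.aliyuncs.com/greptime/greptimedb:v0.4.0
```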

View File

@@ -10,34 +10,6 @@ inputs:
version:
description: Version of the artifact
required: true
release-to-s3-bucket:
description: S3 bucket to store released artifacts
required: true
aws-access-key-id:
description: AWS access key id
required: true
aws-secret-access-key:
description: AWS secret access key
required: true
aws-region:
description: AWS region
required: true
upload-to-s3:
description: Upload to S3
required: false
default: 'true'
upload-latest-artifacts:
description: Upload the latest artifacts to S3
required: false
default: 'true'
upload-max-retry-times:
description: Max retry times for uploading artifacts to S3
required: false
default: "20"
upload-retry-timeout:
description: Timeout for uploading artifacts to S3
required: false
default: "30" # minutes
working-dir:
description: Working directory to upload the artifacts
required: false
@@ -61,9 +33,21 @@ runs:
working-directory: ${{ inputs.working-dir }}
shell: bash
run: |
tar -zcvf ${{ inputs.artifacts-dir }}.tar.gz ${{ inputs.artifacts-dir }} && \
tar -zcvf ${{ inputs.artifacts-dir }}.tar.gz ${{ inputs.artifacts-dir }}
- name: Calculate checksum
if: runner.os != 'Windows'
working-directory: ${{ inputs.working-dir }}
shell: bash
run: |
echo $(shasum -a 256 ${{ inputs.artifacts-dir }}.tar.gz | cut -f1 -d' ') > ${{ inputs.artifacts-dir }}.sha256sum
- name: Calculate checksum on Windows
if: runner.os == 'Windows'
working-directory: ${{ inputs.working-dir }}
shell: pwsh
run: Get-FileHash ${{ inputs.artifacts-dir }}.tar.gz -Algorithm SHA256 | select -ExpandProperty Hash > ${{ inputs.artifacts-dir }}.sha256sum
# Note: The artifacts will be double zip compressed (related issue: https://github.com/actions/upload-artifact/issues/39).
# However, when we use 'actions/download-artifact@v3' to download the artifacts, it will be automatically unzipped.
- name: Upload artifacts
@@ -77,49 +61,3 @@ runs:
with:
name: ${{ inputs.artifacts-dir }}.sha256sum
path: ${{ inputs.working-dir }}/${{ inputs.artifacts-dir }}.sha256sum
- name: Upload artifacts to S3
if: ${{ inputs.upload-to-s3 == 'true' }}
uses: nick-invision/retry@v2
env:
AWS_ACCESS_KEY_ID: ${{ inputs.aws-access-key-id }}
AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-secret-access-key }}
AWS_DEFAULT_REGION: ${{ inputs.aws-region }}
with:
max_attempts: ${{ inputs.upload-max-retry-times }}
timeout_minutes: ${{ inputs.upload-retry-timeout }}
# The bucket layout will be:
# releases/greptimedb
# ├── v0.1.0
# │ ├── greptime-darwin-amd64-pyo3-v0.1.0.sha256sum
# │ └── greptime-darwin-amd64-pyo3-v0.1.0.tar.gz
# └── v0.2.0
# ├── greptime-darwin-amd64-pyo3-v0.2.0.sha256sum
# └── greptime-darwin-amd64-pyo3-v0.2.0.tar.gz
command: |
cd ${{ inputs.working-dir }} && \
aws s3 cp \
${{ inputs.artifacts-dir }}.tar.gz \
s3://${{ inputs.release-to-s3-bucket }}/releases/greptimedb/${{ inputs.version }}/${{ inputs.artifacts-dir }}.tar.gz && \
aws s3 cp \
${{ inputs.artifacts-dir }}.sha256sum \
s3://${{ inputs.release-to-s3-bucket }}/releases/greptimedb/${{ inputs.version }}/${{ inputs.artifacts-dir }}.sha256sum
- name: Upload latest artifacts to S3
if: ${{ inputs.upload-to-s3 == 'true' && inputs.upload-latest-artifacts == 'true' }} # We'll also upload the latest artifacts to S3 in the scheduled and formal release.
uses: nick-invision/retry@v2
env:
AWS_ACCESS_KEY_ID: ${{ inputs.aws-access-key-id }}
AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-secret-access-key }}
AWS_DEFAULT_REGION: ${{ inputs.aws-region }}
with:
max_attempts: ${{ inputs.upload-max-retry-times }}
timeout_minutes: ${{ inputs.upload-retry-timeout }}
command: |
cd ${{ inputs.working-dir }} && \
aws s3 cp \
${{ inputs.artifacts-dir }}.tar.gz \
s3://${{ inputs.release-to-s3-bucket }}/releases/greptimedb/latest/${{ inputs.artifacts-dir }}.tar.gz && \
aws s3 cp \
${{ inputs.artifacts-dir }}.sha256sum \
s3://${{ inputs.release-to-s3-bucket }}/releases/greptimedb/latest/${{ inputs.artifacts-dir }}.sha256sum
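
Since the "Calculate checksum" steps above store only the bare hex digest in <artifacts-dir>.sha256sum, a consumer has to rebuild the "<digest>  <file>" line that shasum -c expects. A minimal sketch with a hypothetical artifact name:

```bash
# Verify a downloaded release tarball against its .sha256sum (bare digest file).
FILE=greptime-linux-amd64-v0.4.1.tar.gz        # hypothetical artifact name
SUM_FILE="${FILE%.tar.gz}.sha256sum"           # produced by the action above
echo "$(cat "$SUM_FILE")  $FILE" | shasum -a 256 -c -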

.github/scripts/copy-image.sh vendored Executable file
View File

@@ -0,0 +1,47 @@
#!/usr/bin/env bash
set -e
set -o pipefail
SRC_IMAGE=$1
DST_REGISTRY=$2
SKOPEO_STABLE_IMAGE="quay.io/skopeo/stable:latest"
# Check if necessary variables are set.
function check_vars() {
for var in DST_REGISTRY_USERNAME DST_REGISTRY_PASSWORD DST_REGISTRY SRC_IMAGE; do
if [ -z "${!var}" ]; then
echo "$var is not set or empty."
echo "Usage: DST_REGISTRY_USERNAME=<your-dst-registry-username> DST_REGISTRY_PASSWORD=<your-dst-registry-password> $0 <dst-registry> <src-image>"
exit 1
fi
done
}
# Copies images from DockerHub to the destination registry.
function copy_images_from_dockerhub() {
# Check if docker is installed.
if ! command -v docker &> /dev/null; then
echo "docker is not installed. Please install docker to continue."
exit 1
fi
# Extract the name and tag of the source image.
IMAGE_NAME=$(echo "$SRC_IMAGE" | sed "s/.*\///")
echo "Copying $SRC_IMAGE to $DST_REGISTRY/$IMAGE_NAME"
docker run "$SKOPEO_STABLE_IMAGE" copy -a docker://"$SRC_IMAGE" \
--dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
docker://"$DST_REGISTRY/$IMAGE_NAME"
}
function main() {
check_vars
copy_images_from_dockerhub
}
# Usage example:
# DST_REGISTRY_USERNAME=123 DST_REGISTRY_PASSWORD=456 \
# ./copy-image.sh greptime/greptimedb:v0.4.0 greptime-registry.cn-hangzhou.cr.aliyuncs.com
main
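
The sed "s/.*\///" above keeps only the part of the source reference after the last slash, so the image lands under the destination registry with its original name and tag. A quick sketch of that extraction for the usage-example values:

```bash
# Show what IMAGE_NAME becomes for the example source image.
SRC_IMAGE="greptime/greptimedb:v0.4.0"
IMAGE_NAME=$(echo "$SRC_IMAGE" | sed "s/.*\///")
echo "$IMAGE_NAME"   # prints: greptimedb:v0.4.0
```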

.github/scripts/upload-artifacts-to-s3.sh vendored Executable file
View File

@@ -0,0 +1,102 @@
#!/usr/bin/env bash
set -e
set -o pipefail
ARTIFACTS_DIR=$1
VERSION=$2
AWS_S3_BUCKET=$3
RELEASE_DIRS="releases/greptimedb"
GREPTIMEDB_REPO="GreptimeTeam/greptimedb"
# Check if necessary variables are set.
function check_vars() {
for var in AWS_S3_BUCKET VERSION ARTIFACTS_DIR; do
if [ -z "${!var}" ]; then
echo "$var is not set or empty."
echo "Usage: $0 <artifacts-dir> <version> <aws-s3-bucket>"
exit 1
fi
done
}
# Uploads artifacts to AWS S3 bucket.
function upload_artifacts() {
# The bucket layout will be:
# releases/greptimedb
# ├── latest-version.txt
# ├── latest-nightly-version.txt
# ├── v0.1.0
# │ ├── greptime-darwin-amd64-pyo3-v0.1.0.sha256sum
# │ └── greptime-darwin-amd64-pyo3-v0.1.0.tar.gz
# └── v0.2.0
# ├── greptime-darwin-amd64-pyo3-v0.2.0.sha256sum
# └── greptime-darwin-amd64-pyo3-v0.2.0.tar.gz
find "$ARTIFACTS_DIR" -type f \( -name "*.tar.gz" -o -name "*.sha256sum" \) | while IFS= read -r file; do
aws s3 cp \
"$file" "s3://$AWS_S3_BUCKET/$RELEASE_DIRS/$VERSION/$(basename "$file")"
done
}
# Updates the latest version information in AWS S3 if UPDATE_VERSION_INFO is true.
function update_version_info() {
if [ "$UPDATE_VERSION_INFO" == "true" ]; then
# If it's an official release (like v1.0.0, v1.0.1, v1.0.2, etc.), update latest-version.txt.
if [[ "$VERSION" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "Updating latest-version.txt"
echo "$VERSION" > latest-version.txt
aws s3 cp \
latest-version.txt "s3://$AWS_S3_BUCKET/$RELEASE_DIRS/latest-version.txt"
fi
# If it's the nightly release, update latest-nightly-version.txt.
if [[ "$VERSION" == *"nightly"* ]]; then
echo "Updating latest-nightly-version.txt"
echo "$VERSION" > latest-nightly-version.txt
aws s3 cp \
latest-nightly-version.txt "s3://$AWS_S3_BUCKET/$RELEASE_DIRS/latest-nightly-version.txt"
fi
fi
}
# Downloads artifacts from Github if DOWNLOAD_ARTIFACTS_FROM_GITHUB is true.
function download_artifacts_from_github() {
if [ "$DOWNLOAD_ARTIFACTS_FROM_GITHUB" == "true" ]; then
# Check if jq is installed.
if ! command -v jq &> /dev/null; then
echo "jq is not installed. Please install jq to continue."
exit 1
fi
# Get the latest release API response.
RELEASES_API_RESPONSE=$(curl -s -H "Accept: application/vnd.github.v3+json" "https://api.github.com/repos/$GREPTIMEDB_REPO/releases/latest")
# Extract download URLs for the artifacts.
# Exclude source code archives which are typically named as 'greptimedb-<version>.zip' or 'greptimedb-<version>.tar.gz'.
ASSET_URLS=$(echo "$RELEASES_API_RESPONSE" | jq -r '.assets[] | select(.name | test("greptimedb-.*\\.(zip|tar\\.gz)$") | not) | .browser_download_url')
# Download each asset.
while IFS= read -r url; do
if [ -n "$url" ]; then
curl -LJO "$url"
echo "Downloaded: $url"
fi
done <<< "$ASSET_URLS"
fi
}
function main() {
check_vars
download_artifacts_from_github
upload_artifacts
update_version_info
}
# Usage example:
# AWS_ACCESS_KEY_ID=<your_access_key_id> \
# AWS_SECRET_ACCESS_KEY=<your_secret_access_key> \
# AWS_DEFAULT_REGION=<your_region> \
# UPDATE_VERSION_INFO=true \
# DOWNLOAD_ARTIFACTS_FROM_GITHUB=false \
# ./upload-artifacts-to-s3.sh <artifacts-dir> <version> <aws-s3-bucket>
main
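
Given the bucket layout documented in upload_artifacts, the uploaded files and the version pointers can be listed back with plain aws s3 commands. A hedged sketch; the bucket name and version are placeholders:

```bash
# List the artifacts uploaded for one release version (placeholder bucket/version).
aws s3 ls "s3://my-release-bucket/releases/greptimedb/v0.4.1/"
# Print the version currently recorded in latest-version.txt to stdout.
aws s3 cp "s3://my-release-bucket/releases/greptimedb/latest-version.txt" -
```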

View File

@@ -17,7 +17,7 @@ env:
jobs:
apidoc:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1

View File

@@ -16,11 +16,11 @@ on:
description: The runner uses to build linux-amd64 artifacts
default: ec2-c6i.4xlarge-amd64
options:
- ubuntu-latest
- ubuntu-latest-8-cores
- ubuntu-latest-16-cores
- ubuntu-latest-32-cores
- ubuntu-latest-64-cores
- ubuntu-20.04
- ubuntu-20.04-8-cores
- ubuntu-20.04-16-cores
- ubuntu-20.04-32-cores
- ubuntu-20.04-64-cores
- ec2-c6i.xlarge-amd64 # 4C8G
- ec2-c6i.2xlarge-amd64 # 8C16G
- ec2-c6i.4xlarge-amd64 # 16C32G
@@ -78,7 +78,7 @@ jobs:
allocate-runners:
name: Allocate runners
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
outputs:
linux-amd64-runner: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-arm64-runner: ${{ steps.start-linux-arm64-runner.outputs.label }}
@@ -164,12 +164,7 @@ jobs:
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
release-to-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
dev-mode: true # Only build the standard greptime binary.
upload-to-s3: false # No need to upload to S3.
working-dir: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
build-linux-arm64-artifacts:
@@ -198,12 +193,7 @@ jobs:
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
release-to-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
dev-mode: true # Only build the standard greptime binary.
upload-to-s3: false # No need to upload to S3.
working-dir: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
release-images-to-dockerhub:
@@ -214,7 +204,7 @@ jobs:
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
]
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
outputs:
build-result: ${{ steps.set-build-result.outputs.build-result }}
steps:
@@ -239,41 +229,44 @@ jobs:
run: |
echo "build-result=success" >> $GITHUB_OUTPUT
release-images-to-acr:
name: Build and push images to ACR
release-cn-artifacts:
name: Release artifacts to CN region
if: ${{ inputs.release_images || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
release-images-to-dockerhub,
]
runs-on: ubuntu-latest
# When we push to ACR, it's easy to fail due to some unknown network issues.
# However, we don't want to fail the whole workflow because of this.
# The ACR has a daily sync with DockerHub, so don't worry about the image not being updated.
runs-on: ubuntu-20.04
continue-on-error: true
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build and push images to ACR
uses: ./.github/actions/build-images
- name: Release artifacts to CN region
uses: ./.github/actions/release-cn-artifacts
with:
image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
image-namespace: ${{ vars.IMAGE_NAMESPACE }}
image-name: ${{ env.IMAGE_NAME }}
image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
src-image-registry: docker.io
src-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
src-image-name: ${{ env.IMAGE_NAME }}
dst-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
dst-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
dst-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
dst-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
version: ${{ needs.allocate-runners.outputs.version }}
push-latest-tag: false # Don't push the latest tag to registry.
dev-mode: true # Only build the standard images.
aws-cn-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-cn-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-cn-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-cn-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
dev-mode: true # Only build the standard images(exclude centos images).
push-latest-tag: false # Don't push the latest tag to registry.
update-version-info: false # Don't update the version info in S3.
stop-linux-amd64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
name: Stop linux-amd64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-amd64-artifacts,
@@ -298,7 +291,7 @@ jobs:
name: Stop linux-arm64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-arm64-artifacts,
@@ -325,7 +318,7 @@ jobs:
needs: [
release-images-to-dockerhub
]
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:

View File

@@ -34,7 +34,7 @@ env:
jobs:
typos:
name: Spell Check with Typos
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: crate-ci/typos@v1.13.10
@@ -42,7 +42,7 @@ jobs:
check:
name: Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -60,7 +60,7 @@ jobs:
toml:
name: Toml Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -80,7 +80,7 @@ jobs:
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ ubuntu-latest-8-cores, windows-latest-8-cores ]
os: [ ubuntu-20.04-8-cores ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -105,7 +105,7 @@ jobs:
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -124,7 +124,7 @@ jobs:
clippy:
name: Clippy
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -142,7 +142,7 @@ jobs:
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest-8-cores
runs-on: ubuntu-20.04-8-cores
timeout-minutes: 60
steps:
- uses: actions/checkout@v3
@@ -188,43 +188,3 @@ jobs:
flags: rust
fail_ci_if_error: false
verbose: true
test-on-windows:
if: github.event.pull_request.draft == false
runs-on: windows-latest-8-cores
timeout-minutes: 60
steps:
- run: git config --global core.autocrlf false
- uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install Cargo Nextest
uses: taiki-e/install-action@nextest
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install PyArrow Package
run: pip install pyarrow
- name: Install WSL distribution
uses: Vampire/setup-wsl@v2
with:
distribution: Ubuntu-22.04
- name: Running tests
run: cargo nextest run -F pyo3_backend,dashboard
env:
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
GT_S3_REGION: ${{ secrets.S3_REGION }}
UNITTEST_LOG_DIR: "__unittest_logs"

View File

@@ -11,7 +11,7 @@ on:
jobs:
doc_issue:
if: github.event.label.name == 'doc update required'
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: create an issue in doc repo
uses: dacbd/create-issue-action@main
@@ -25,7 +25,7 @@ jobs:
${{ github.event.issue.html_url || github.event.pull_request.html_url }}
cloud_issue:
if: github.event.label.name == 'cloud followup required'
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- name: create an issue in cloud repo
uses: dacbd/create-issue-action@main

View File

@@ -30,7 +30,7 @@ name: CI
jobs:
typos:
name: Spell Check with Typos
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
- uses: crate-ci/typos@v1.13.10
@@ -38,33 +38,33 @@ jobs:
check:
name: Check
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
fmt:
name: Rustfmt
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
clippy:
name: Clippy
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
coverage:
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'
sqlness:
name: Sqlness Test
if: github.event.pull_request.draft == false
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- run: 'echo "No action required"'

View File

@@ -8,7 +8,7 @@ on:
types: [opened, synchronize, reopened, ready_for_review]
jobs:
license-header-check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
name: license-header-check
steps:
- uses: actions/checkout@v2

View File

@@ -14,11 +14,11 @@ on:
description: The runner uses to build linux-amd64 artifacts
default: ec2-c6i.2xlarge-amd64
options:
- ubuntu-latest
- ubuntu-latest-8-cores
- ubuntu-latest-16-cores
- ubuntu-latest-32-cores
- ubuntu-latest-64-cores
- ubuntu-20.04
- ubuntu-20.04-8-cores
- ubuntu-20.04-16-cores
- ubuntu-20.04-32-cores
- ubuntu-20.04-64-cores
- ec2-c6i.xlarge-amd64 # 4C8G
- ec2-c6i.2xlarge-amd64 # 8C16G
- ec2-c6i.4xlarge-amd64 # 16C32G
@@ -70,7 +70,7 @@ jobs:
allocate-runners:
name: Allocate runners
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
outputs:
linux-amd64-runner: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-arm64-runner: ${{ steps.start-linux-arm64-runner.outputs.label }}
@@ -147,11 +147,6 @@ jobs:
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
release-to-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
upload-latest-artifacts: false
build-linux-arm64-artifacts:
name: Build linux-arm64 artifacts
@@ -171,11 +166,6 @@ jobs:
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
release-to-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
upload-latest-artifacts: false
release-images-to-dockerhub:
name: Build and push images to DockerHub
@@ -185,7 +175,7 @@ jobs:
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
]
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
outputs:
nightly-build-result: ${{ steps.set-nightly-build-result.outputs.nightly-build-result }}
steps:
@@ -208,15 +198,14 @@ jobs:
run: |
echo "nightly-build-result=success" >> $GITHUB_OUTPUT
release-images-to-acr:
name: Build and push images to ACR
release-cn-artifacts:
name: Release artifacts to CN region
if: ${{ inputs.release_images || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
release-images-to-dockerhub,
]
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
# When we push to ACR, it's easy to fail due to some unknown network issues.
# However, we don't want to fail the whole workflow because of this.
# The ACR has a daily sync with DockerHub, so don't worry about the image not being updated.
@@ -226,21 +215,30 @@ jobs:
with:
fetch-depth: 0
- name: Build and push images to ACR
uses: ./.github/actions/build-images
- name: Release artifacts to CN region
uses: ./.github/actions/release-cn-artifacts
with:
image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
image-namespace: ${{ vars.IMAGE_NAMESPACE }}
image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
src-image-registry: docker.io
src-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
src-image-name: greptimedb
dst-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
dst-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
dst-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
dst-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
version: ${{ needs.allocate-runners.outputs.version }}
push-latest-tag: false # Don't push the latest tag to registry.
aws-cn-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-cn-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-cn-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-cn-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
dev-mode: false
update-version-info: false # Don't update version info in S3.
push-latest-tag: false # Don't push the latest tag to registry.
stop-linux-amd64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
name: Stop linux-amd64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-amd64-artifacts,
@@ -265,7 +263,7 @@ jobs:
name: Stop linux-arm64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-arm64-artifacts,
@@ -292,7 +290,7 @@ jobs:
needs: [
release-images-to-dockerhub
]
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
env:
SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
steps:

.github/workflows/nightly-ci.yml (new file)

@@ -0,0 +1,82 @@
# Nightly CI: runs tests every night for our second-tier platforms (Windows)
on:
schedule:
- cron: '0 23 * * 1-5'
workflow_dispatch:
name: Nightly CI
concurrency:
group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
cancel-in-progress: true
env:
RUST_TOOLCHAIN: nightly-2023-08-07
jobs:
sqlness:
name: Sqlness Test
runs-on: ${{ matrix.os }}
strategy:
matrix:
os: [ windows-latest-8-cores ]
timeout-minutes: 60
steps:
- uses: actions/checkout@v4.1.0
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Run sqlness
run: cargo sqlness
- name: Upload sqlness logs
if: always()
uses: actions/upload-artifact@v3
with:
name: sqlness-logs
path: ${{ runner.temp }}/greptime-*.log
retention-days: 3
test-on-windows:
runs-on: windows-latest-8-cores
timeout-minutes: 60
steps:
- run: git config --global core.autocrlf false
- uses: actions/checkout@v4.1.0
- uses: arduino/setup-protoc@v1
with:
repo-token: ${{ secrets.GITHUB_TOKEN }}
- name: Install Rust toolchain
uses: dtolnay/rust-toolchain@master
with:
toolchain: ${{ env.RUST_TOOLCHAIN }}
components: llvm-tools-preview
- name: Rust Cache
uses: Swatinem/rust-cache@v2
- name: Install Cargo Nextest
uses: taiki-e/install-action@nextest
- name: Install Python
uses: actions/setup-python@v4
with:
python-version: '3.10'
- name: Install PyArrow Package
run: pip install pyarrow
- name: Install WSL distribution
uses: Vampire/setup-wsl@v2
with:
distribution: Ubuntu-22.04
- name: Running tests
run: cargo nextest run -F pyo3_backend,dashboard
env:
RUST_BACKTRACE: 1
CARGO_INCREMENTAL: 0
GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
GT_S3_REGION: ${{ secrets.S3_REGION }}
UNITTEST_LOG_DIR: "__unittest_logs"
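The cron expression '0 23 * * 1-5' above fires at 23:00 UTC on weekdays, and since the workflow also declares `workflow_dispatch` it can be started by hand. A minimal sketch using the GitHub CLI, assuming `gh` is installed and authenticated for the repository:

```bash
# Trigger the Nightly CI workflow manually instead of waiting for the cron schedule.
gh workflow run nightly-ci.yml

# Check the most recent runs of this workflow.
gh run list --workflow=nightly-ci.yml --limit 3
```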


@@ -10,7 +10,7 @@ on:
jobs:
check:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: thehanimo/pr-title-checker@v1.3.4
@@ -19,7 +19,7 @@ jobs:
pass_on_octokit_error: false
configuration_path: ".github/pr-title-checker-config.json"
breaking:
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
timeout-minutes: 10
steps:
- uses: thehanimo/pr-title-checker@v1.3.4


@@ -0,0 +1,85 @@
name: Release dev-builder images
on:
workflow_dispatch: # Allows you to run this workflow manually.
inputs:
version:
description: Version of the dev-builder
required: false
default: latest
release_dev_builder_ubuntu_image:
type: boolean
description: Release dev-builder-ubuntu image
required: false
default: false
release_dev_builder_centos_image:
type: boolean
description: Release dev-builder-centos image
required: false
default: false
release_dev_builder_android_image:
type: boolean
description: Release dev-builder-android image
required: false
default: false
jobs:
release-dev-builder-images:
name: Release dev builder images
if: ${{ inputs.release_dev_builder_ubuntu_image || inputs.release_dev_builder_centos_image || inputs.release_dev_builder_android_image }} # Only manually trigger this job.
runs-on: ubuntu-20.04-16-cores
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build and push dev builder images
uses: ./.github/actions/build-dev-builder-images
with:
version: ${{ inputs.version }}
dockerhub-image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
dockerhub-image-registry-token: ${{ secrets.DOCKERHUB_TOKEN }}
build-dev-builder-ubuntu: ${{ inputs.release_dev_builder_ubuntu_image }}
build-dev-builder-centos: ${{ inputs.release_dev_builder_centos_image }}
build-dev-builder-android: ${{ inputs.release_dev_builder_android_image }}
release-dev-builder-images-cn: # Note: be careful of https://github.com/containers/skopeo/issues/1874; we decided to use the latest stable skopeo container.
name: Release dev builder images to CN region
runs-on: ubuntu-20.04
needs: [
release-dev-builder-images
]
steps:
- name: Push dev-builder-ubuntu image
shell: bash
if: ${{ inputs.release_dev_builder_ubuntu_image }}
env:
DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
run: |
docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ inputs.version }} \
--dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ inputs.version }}
- name: Push dev-builder-centos image
shell: bash
if: ${{ inputs.release_dev_builder_centos_image }}
env:
DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
run: |
docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ inputs.version }} \
--dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ inputs.version }}
- name: Push dev-builder-android image
shell: bash
if: ${{ inputs.release_dev_builder_android_image }}
env:
DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
run: |
docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ inputs.version }} \
--dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ inputs.version }}
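Each of the three push steps above follows the same pattern: run skopeo from its official container and copy every architecture of the image (`-a`) from Docker Hub to the destination registry. A standalone sketch of that pattern, with placeholder registry, namespace, credential, and tag values instead of the workflow's variables:

```bash
# Copy all architectures of an image from Docker Hub to another registry via skopeo.
# Registry, namespace, credentials, and tag below are placeholders for illustration only.
DST_REGISTRY_USERNAME=example-user
DST_REGISTRY_PASSWORD=example-password
docker run quay.io/skopeo/stable:latest copy -a \
  --dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
  docker://docker.io/example-namespace/dev-builder-ubuntu:latest \
  docker://registry.example.com/example-namespace/dev-builder-ubuntu:latest
```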


@@ -18,11 +18,11 @@ on:
description: The runner used to build linux-amd64 artifacts
default: ec2-c6i.4xlarge-amd64
options:
- ubuntu-latest
- ubuntu-latest-8-cores
- ubuntu-latest-16-cores
- ubuntu-latest-32-cores
- ubuntu-latest-64-cores
- ubuntu-20.04
- ubuntu-20.04-8-cores
- ubuntu-20.04-16-cores
- ubuntu-20.04-32-cores
- ubuntu-20.04-64-cores
- ec2-c6i.xlarge-amd64 # 4C8G
- ec2-c6i.2xlarge-amd64 # 8C16G
- ec2-c6i.4xlarge-amd64 # 16C32G
@@ -63,7 +63,12 @@ on:
description: Build macos artifacts
required: false
default: false
release_artifacts:
build_windows_artifacts:
type: boolean
description: Build Windows artifacts
required: false
default: false
publish_github_release:
type: boolean
description: Create GitHub release and upload artifacts
required: false
@@ -73,11 +78,6 @@ on:
description: Build and push images to DockerHub and ACR
required: false
default: false
release_dev_builder_image:
type: boolean
description: Release dev-builder image
required: false
default: false
# Use env variables to control all the release process.
env:
@@ -91,17 +91,18 @@ env:
# The scheduled version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-YYYYMMDD', like v0.2.0-nightly-20230313;
NIGHTLY_RELEASE_PREFIX: nightly
# Note: The NEXT_RELEASE_VERSION should be modified manually by every formal release.
NEXT_RELEASE_VERSION: v0.4.0
NEXT_RELEASE_VERSION: v0.5.0
jobs:
allocate-runners:
name: Allocate runners
if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
outputs:
linux-amd64-runner: ${{ steps.start-linux-amd64-runner.outputs.label }}
linux-arm64-runner: ${{ steps.start-linux-arm64-runner.outputs.label }}
macos-runner: ${{ inputs.macos_runner || vars.DEFAULT_MACOS_RUNNER }}
windows-runner: windows-latest-8-cores
# The following EC2 resource IDs are used later to release the runners.
linux-amd64-ec2-runner-label: ${{ steps.start-linux-amd64-runner.outputs.label }}
@@ -177,11 +178,6 @@ jobs:
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
release-to-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
upload-to-s3: ${{ vars.UPLOAD_TO_S3 }}
build-linux-arm64-artifacts:
name: Build linux-arm64 artifacts
@@ -201,11 +197,6 @@ jobs:
cargo-profile: ${{ env.CARGO_PROFILE }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
release-to-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
upload-to-s3: ${{ vars.UPLOAD_TO_S3 }}
build-macos-artifacts:
name: Build macOS artifacts
@@ -247,12 +238,43 @@ jobs:
features: ${{ matrix.features }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
release-to-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
artifacts-dir: ${{ matrix.artifacts-dir-prefix }}-${{ needs.allocate-runners.outputs.version }}
aws-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
upload-to-s3: ${{ vars.UPLOAD_TO_S3 }}
build-windows-artifacts:
name: Build Windows artifacts
strategy:
fail-fast: false
matrix:
include:
- os: ${{ needs.allocate-runners.outputs.windows-runner }}
arch: x86_64-pc-windows-msvc
features: servers/dashboard
artifacts-dir-prefix: greptime-windows-amd64
- os: ${{ needs.allocate-runners.outputs.windows-runner }}
arch: x86_64-pc-windows-msvc
features: pyo3_backend,servers/dashboard
artifacts-dir-prefix: greptime-windows-amd64-pyo3
runs-on: ${{ matrix.os }}
needs: [
allocate-runners,
]
if: ${{ inputs.build_windows_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
steps:
- run: git config --global core.autocrlf false
- uses: actions/checkout@v3
with:
fetch-depth: 0
- uses: ./.github/actions/build-windows-artifacts
with:
arch: ${{ matrix.arch }}
rust-toolchain: ${{ env.RUST_TOOLCHAIN }}
cargo-profile: ${{ env.CARGO_PROFILE }}
features: ${{ matrix.features }}
version: ${{ needs.allocate-runners.outputs.version }}
disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
artifacts-dir: ${{ matrix.artifacts-dir-prefix }}-${{ needs.allocate-runners.outputs.version }}
release-images-to-dockerhub:
name: Build and push images to DockerHub
@@ -277,15 +299,14 @@ jobs:
image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
version: ${{ needs.allocate-runners.outputs.version }}
release-images-to-acr:
name: Build and push images to ACR
release-cn-artifacts:
name: Release artifacts to CN region
if: ${{ inputs.release_images || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
build-linux-arm64-artifacts,
release-images-to-dockerhub,
]
runs-on: ubuntu-2004-16-cores
runs-on: ubuntu-20.04
# Pushes to ACR can fail due to transient, unknown network issues.
# However, we don't want to fail the whole workflow because of this.
# The ACR has a daily sync with DockerHub, so don't worry about the image not being updated.
@@ -295,18 +316,28 @@ jobs:
with:
fetch-depth: 0
- name: Build and push images to ACR
uses: ./.github/actions/build-images
- name: Release artifacts to CN region
uses: ./.github/actions/release-cn-artifacts
with:
image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
image-namespace: ${{ vars.IMAGE_NAMESPACE }}
image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
src-image-registry: docker.io
src-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
src-image-name: greptimedb
dst-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
dst-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
dst-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
dst-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
version: ${{ needs.allocate-runners.outputs.version }}
aws-cn-s3-bucket: ${{ vars.AWS_RELEASE_BUCKET }}
aws-cn-access-key-id: ${{ secrets.AWS_CN_ACCESS_KEY_ID }}
aws-cn-secret-access-key: ${{ secrets.AWS_CN_SECRET_ACCESS_KEY }}
aws-cn-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
dev-mode: false
update-version-info: true
push-latest-tag: true
release-artifacts:
publish-github-release:
name: Create GitHub release and upload artifacts
if: ${{ inputs.release_artifacts || github.event_name == 'push' || github.event_name == 'schedule' }}
if: ${{ inputs.publish_github_release || github.event_name == 'push' || github.event_name == 'schedule' }}
needs: [
allocate-runners,
build-linux-amd64-artifacts,
@@ -314,36 +345,17 @@ jobs:
build-macos-artifacts,
release-images-to-dockerhub,
]
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
steps:
- uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Release artifacts
uses: ./.github/actions/release-artifacts
- name: Publish GitHub release
uses: ./.github/actions/publish-github-release
with:
version: ${{ needs.allocate-runners.outputs.version }}
release-dev-builder-image:
name: Release dev builder image
if: ${{ inputs.release_dev_builder_image }} # Only manually trigger this job.
runs-on: ubuntu-latest-16-cores
steps:
- name: Checkout
uses: actions/checkout@v3
with:
fetch-depth: 0
- name: Build and push dev builder image
uses: ./.github/actions/build-dev-builder-image
with:
dockerhub-image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
dockerhub-image-registry-token: ${{ secrets.DOCKERHUB_TOKEN }}
acr-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
acr-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
acr-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
### Stop runners ###
# It's necessary to split the runner-releasing work into 'stop-linux-amd64-runner' and 'stop-linux-arm64-runner',
# because we can then terminate each EC2 instance immediately after its job finishes, without unnecessary waiting.
@@ -351,7 +363,7 @@ jobs:
name: Stop linux-amd64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-amd64-artifacts,
@@ -376,7 +388,7 @@ jobs:
name: Stop linux-arm64 runner
# Only run this job when the runner is allocated.
if: ${{ always() }}
runs-on: ubuntu-latest
runs-on: ubuntu-20.04
needs: [
allocate-runners,
build-linux-arm64-artifacts,

Cargo.lock (generated file)

@@ -204,7 +204,7 @@ checksum = "8f1f8f5a6f3d50d89e3797d7593a50f96bb2aaa20ca0cc7be1fb673232c91d72"
[[package]]
name = "api"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"common-base",
"common-error",
@@ -579,26 +579,6 @@ dependencies = [
"zstd-safe 6.0.6",
]
[[package]]
name = "async-io"
version = "1.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0fc5b45d93ef0529756f812ca52e44c221b35341892d3dcc34132ac02f3dd2af"
dependencies = [
"async-lock",
"autocfg",
"cfg-if 1.0.0",
"concurrent-queue",
"futures-lite",
"log",
"parking",
"polling",
"rustix 0.37.23",
"slab",
"socket2 0.4.9",
"waker-fn",
]
[[package]]
name = "async-lock"
version = "2.8.0"
@@ -686,7 +666,7 @@ dependencies = [
[[package]]
name = "auth"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"async-trait",
@@ -859,11 +839,13 @@ dependencies = [
[[package]]
name = "benchmarks"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"arrow",
"chrono",
"clap 4.4.1",
"client",
"futures-util",
"indicatif",
"itertools 0.10.5",
"parquet",
@@ -1240,7 +1222,7 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]]
name = "catalog"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arc-swap",
@@ -1524,7 +1506,7 @@ checksum = "cd7cc57abe963c6d3b9d8be5b06ba7c8957a930305ca90304f24ef040aa6f961"
[[package]]
name = "client"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arrow-flight",
@@ -1554,7 +1536,7 @@ dependencies = [
"rand",
"session",
"snafu",
"substrait 0.4.0-nightly",
"substrait 0.4.1",
"substrait 0.7.5",
"tokio",
"tokio-stream",
@@ -1591,7 +1573,7 @@ dependencies = [
[[package]]
name = "cmd"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"anymap",
"async-trait",
@@ -1627,6 +1609,7 @@ dependencies = [
"mito2",
"nu-ansi-term",
"partition",
"plugins",
"prost",
"query",
"rand",
@@ -1638,7 +1621,7 @@ dependencies = [
"servers",
"session",
"snafu",
"substrait 0.4.0-nightly",
"substrait 0.4.1",
"table",
"temp-env",
"tikv-jemallocator",
@@ -1671,7 +1654,7 @@ checksum = "55b672471b4e9f9e95499ea597ff64941a309b2cdbffcc46f2cc5e2d971fd335"
[[package]]
name = "common-base"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"anymap",
"bitvec",
@@ -1686,7 +1669,7 @@ dependencies = [
[[package]]
name = "common-catalog"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"chrono",
"common-error",
@@ -1699,7 +1682,7 @@ dependencies = [
[[package]]
name = "common-config"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"common-base",
"humantime-serde",
@@ -1708,7 +1691,7 @@ dependencies = [
[[package]]
name = "common-datasource"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"arrow",
"arrow-schema",
@@ -1737,7 +1720,7 @@ dependencies = [
[[package]]
name = "common-error"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"snafu",
"strum 0.25.0",
@@ -1745,7 +1728,7 @@ dependencies = [
[[package]]
name = "common-function"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"arc-swap",
"chrono-tz 0.6.3",
@@ -1768,7 +1751,7 @@ dependencies = [
[[package]]
name = "common-greptimedb-telemetry"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"async-trait",
"common-error",
@@ -1787,7 +1770,7 @@ dependencies = [
[[package]]
name = "common-grpc"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arrow-flight",
@@ -1817,7 +1800,7 @@ dependencies = [
[[package]]
name = "common-grpc-expr"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"async-trait",
@@ -1836,7 +1819,7 @@ dependencies = [
[[package]]
name = "common-macro"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"arc-swap",
"backtrace",
@@ -1853,7 +1836,7 @@ dependencies = [
[[package]]
name = "common-mem-prof"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"common-error",
"common-macro",
@@ -1866,12 +1849,14 @@ dependencies = [
[[package]]
name = "common-meta"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arrow-flight",
"async-stream",
"async-trait",
"base64 0.21.3",
"bytes",
"chrono",
"common-catalog",
"common-error",
@@ -1902,7 +1887,7 @@ dependencies = [
[[package]]
name = "common-procedure"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"async-stream",
"async-trait",
@@ -1926,7 +1911,7 @@ dependencies = [
[[package]]
name = "common-procedure-test"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"async-trait",
"common-procedure",
@@ -1934,7 +1919,7 @@ dependencies = [
[[package]]
name = "common-query"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"async-trait",
@@ -1957,7 +1942,7 @@ dependencies = [
[[package]]
name = "common-recordbatch"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"common-error",
"common-macro",
@@ -1974,7 +1959,7 @@ dependencies = [
[[package]]
name = "common-runtime"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"async-trait",
"common-error",
@@ -1991,7 +1976,7 @@ dependencies = [
[[package]]
name = "common-telemetry"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"backtrace",
"common-error",
@@ -2018,7 +2003,7 @@ dependencies = [
[[package]]
name = "common-test-util"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"once_cell",
"rand",
@@ -2027,7 +2012,7 @@ dependencies = [
[[package]]
name = "common-time"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"arrow",
"chrono",
@@ -2042,7 +2027,7 @@ dependencies = [
[[package]]
name = "common-version"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"build-data",
]
@@ -2496,7 +2481,7 @@ dependencies = [
[[package]]
name = "datafusion"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"ahash 0.8.3",
"arrow",
@@ -2544,7 +2529,7 @@ dependencies = [
[[package]]
name = "datafusion-common"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"arrow",
"arrow-array",
@@ -2558,7 +2543,7 @@ dependencies = [
[[package]]
name = "datafusion-execution"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"dashmap",
"datafusion-common",
@@ -2575,7 +2560,7 @@ dependencies = [
[[package]]
name = "datafusion-expr"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"ahash 0.8.3",
"arrow",
@@ -2589,7 +2574,7 @@ dependencies = [
[[package]]
name = "datafusion-optimizer"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"arrow",
"async-trait",
@@ -2606,7 +2591,7 @@ dependencies = [
[[package]]
name = "datafusion-physical-expr"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"ahash 0.8.3",
"arrow",
@@ -2641,7 +2626,7 @@ dependencies = [
[[package]]
name = "datafusion-row"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"arrow",
"datafusion-common",
@@ -2652,7 +2637,7 @@ dependencies = [
[[package]]
name = "datafusion-sql"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"arrow",
"arrow-schema",
@@ -2665,7 +2650,7 @@ dependencies = [
[[package]]
name = "datafusion-substrait"
version = "27.0.0"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=c0b0fca548e99d020c76e1a1cd7132aab26000e1#c0b0fca548e99d020c76e1a1cd7132aab26000e1"
source = "git+https://github.com/waynexia/arrow-datafusion.git?rev=b6f3b28b6fe91924cc8dd3d83726b766f2a706ec#b6f3b28b6fe91924cc8dd3d83726b766f2a706ec"
dependencies = [
"async-recursion",
"chrono",
@@ -2680,7 +2665,7 @@ dependencies = [
[[package]]
name = "datanode"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arrow-flight",
@@ -2739,7 +2724,7 @@ dependencies = [
"sql",
"storage",
"store-api",
"substrait 0.4.0-nightly",
"substrait 0.4.1",
"table",
"tokio",
"tokio-stream",
@@ -2753,7 +2738,7 @@ dependencies = [
[[package]]
name = "datatypes"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"arrow",
"arrow-array",
@@ -3216,7 +3201,7 @@ dependencies = [
[[package]]
name = "file-engine"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"async-trait",
@@ -3326,7 +3311,7 @@ dependencies = [
[[package]]
name = "frontend"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arc-swap",
@@ -3390,7 +3375,7 @@ dependencies = [
"storage",
"store-api",
"strfmt",
"substrait 0.4.0-nightly",
"substrait 0.4.1",
"table",
"tokio",
"toml 0.7.6",
@@ -3526,21 +3511,6 @@ version = "0.3.28"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4fff74096e71ed47f8e023204cfd0aa1289cd54ae5430a9523be060cdb849964"
[[package]]
name = "futures-lite"
version = "1.13.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "49a9d51ce47660b1e808d3c990b4709f2f415d928835a17dfd16991515c46bce"
dependencies = [
"fastrand 1.9.0",
"futures-core",
"futures-io",
"memchr",
"parking",
"pin-project-lite",
"waker-fn",
]
[[package]]
name = "futures-macro"
version = "0.3.28"
@@ -4210,7 +4180,7 @@ checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b"
[[package]]
name = "greptime-proto"
version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=693128abe9adc70ba636010a172c9da55b206bba#693128abe9adc70ba636010a172c9da55b206bba"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=1f1dd532a111e3834cc3019c5605e2993ffb9dc3#1f1dd532a111e3834cc3019c5605e2993ffb9dc3"
dependencies = [
"prost",
"serde",
@@ -5012,12 +4982,6 @@ version = "0.1.4"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f051f77a7c8e6957c0696eac88f26b0117e54f52d3fc682ab19397a8812846a4"
[[package]]
name = "linux-raw-sys"
version = "0.3.8"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ef53942eb7bf7ff43a617b3e2c1c4a5ecf5944a7c1bc12d7ee39bbb15e5c1519"
[[package]]
name = "linux-raw-sys"
version = "0.4.5"
@@ -5042,7 +5006,7 @@ checksum = "b5e6163cb8c49088c2c36f57875e58ccd8c87c7427f7fbd50ea6710b2f3f2e8f"
[[package]]
name = "log-store"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"async-stream",
"async-trait",
@@ -5122,15 +5086,6 @@ dependencies = [
"vob",
]
[[package]]
name = "lru"
version = "0.9.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "71e7d46de488603ffdd5f30afbc64fbba2378214a2c3a2fb83abf3d33126df17"
dependencies = [
"hashbrown 0.13.2",
]
[[package]]
name = "lru"
version = "0.10.1"
@@ -5321,7 +5276,7 @@ dependencies = [
[[package]]
name = "meta-client"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"async-trait",
@@ -5334,6 +5289,7 @@ dependencies = [
"datatypes",
"etcd-client",
"futures",
"humantime-serde",
"meta-srv",
"rand",
"serde",
@@ -5350,7 +5306,7 @@ dependencies = [
[[package]]
name = "meta-srv"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"anymap",
"api",
@@ -5542,11 +5498,12 @@ dependencies = [
[[package]]
name = "mito2"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"anymap",
"api",
"aquamarine",
"arc-swap",
"async-channel",
"async-compat",
"async-stream",
@@ -5598,12 +5555,12 @@ dependencies = [
[[package]]
name = "moka"
version = "0.11.3"
version = "0.12.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "fa6e72583bf6830c956235bff0d5afec8cf2952f579ebad18ae7821a917d950f"
checksum = "8dc65d4615c08c8a13d91fd404b5a2a4485ba35b4091e3315cf8798d280c2f29"
dependencies = [
"async-io",
"async-lock",
"async-trait",
"crossbeam-channel",
"crossbeam-epoch",
"crossbeam-utils",
@@ -5612,7 +5569,6 @@ dependencies = [
"parking_lot 0.12.1",
"quanta 0.11.1",
"rustc_version",
"scheduled-thread-pool",
"skeptic",
"smallvec",
"tagptr",
@@ -5666,7 +5622,7 @@ dependencies = [
"futures-sink",
"futures-util",
"lazy_static",
"lru 0.10.1",
"lru",
"mio",
"mysql_common",
"once_cell",
@@ -6004,19 +5960,19 @@ dependencies = [
[[package]]
name = "object-store"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"anyhow",
"async-trait",
"bytes",
"common-runtime",
"common-telemetry",
"common-test-util",
"futures",
"lru 0.9.0",
"md5",
"metrics",
"moka",
"opendal",
"pin-project",
"tokio",
"uuid",
]
@@ -6228,7 +6184,7 @@ dependencies = [
[[package]]
name = "operator"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"async-compat",
@@ -6273,7 +6229,7 @@ dependencies = [
"sqlparser 0.34.0",
"storage",
"store-api",
"substrait 0.4.0-nightly",
"substrait 0.4.1",
"table",
"tokio",
"tonic 0.9.2",
@@ -6397,12 +6353,6 @@ dependencies = [
"winapi",
]
[[package]]
name = "parking"
version = "2.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "14f2252c834a40ed9bb5422029649578e63aa341ac401f74e719dd1afda8394e"
[[package]]
name = "parking_lot"
version = "0.11.2"
@@ -6499,7 +6449,7 @@ dependencies = [
[[package]]
name = "partition"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"async-trait",
@@ -6823,6 +6773,18 @@ dependencies = [
"plotters-backend",
]
[[package]]
name = "plugins"
version = "0.4.1"
dependencies = [
"auth",
"common-base",
"datanode",
"frontend",
"meta-srv",
"snafu",
]
[[package]]
name = "pmutil"
version = "0.5.3"
@@ -6834,22 +6796,6 @@ dependencies = [
"syn 1.0.109",
]
[[package]]
name = "polling"
version = "2.8.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4b2d323e8ca7996b3e23126511a523f7e62924d93ecd5ae73b333815b0eb3dce"
dependencies = [
"autocfg",
"bitflags 1.3.2",
"cfg-if 1.0.0",
"concurrent-queue",
"libc",
"log",
"pin-project-lite",
"windows-sys 0.48.0",
]
[[package]]
name = "portable-atomic"
version = "0.3.20"
@@ -7079,7 +7025,7 @@ dependencies = [
[[package]]
name = "promql"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"async-recursion",
"async-trait",
@@ -7093,6 +7039,7 @@ dependencies = [
"datatypes",
"futures",
"greptime-proto",
"metrics",
"promql-parser",
"prost",
"query",
@@ -7340,7 +7287,7 @@ dependencies = [
[[package]]
name = "query"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"ahash 0.8.3",
"api",
@@ -7397,7 +7344,7 @@ dependencies = [
"stats-cli",
"store-api",
"streaming-stats",
"substrait 0.4.0-nightly",
"substrait 0.4.1",
"table",
"tokio",
"tokio-stream",
@@ -8058,20 +8005,6 @@ dependencies = [
"windows-sys 0.45.0",
]
[[package]]
name = "rustix"
version = "0.37.23"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4d69718bf81c6127a49dc64e44a742e8bb9213c0ff8869a22c308f84c1d4ab06"
dependencies = [
"bitflags 1.3.2",
"errno 0.3.3",
"io-lifetimes",
"libc",
"linux-raw-sys 0.3.8",
"windows-sys 0.48.0",
]
[[package]]
name = "rustix"
version = "0.38.10"
@@ -8577,15 +8510,6 @@ dependencies = [
"windows-sys 0.48.0",
]
[[package]]
name = "scheduled-thread-pool"
version = "0.2.7"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3cbc66816425a074528352f5789333ecff06ca41b36b0b0efdfbb29edc391a19"
dependencies = [
"parking_lot 0.12.1",
]
[[package]]
name = "schemars"
version = "0.8.13"
@@ -8619,7 +8543,7 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]]
name = "script"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arc-swap",
@@ -8899,7 +8823,7 @@ dependencies = [
[[package]]
name = "servers"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"aide",
"api",
@@ -8993,8 +8917,9 @@ dependencies = [
[[package]]
name = "session"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arc-swap",
"auth",
"common-catalog",
@@ -9270,7 +9195,7 @@ dependencies = [
[[package]]
name = "sql"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"common-base",
@@ -9321,7 +9246,7 @@ dependencies = [
[[package]]
name = "sqlness-runner"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"async-trait",
"clap 4.4.1",
@@ -9341,13 +9266,13 @@ dependencies = [
[[package]]
name = "sqlparser"
version = "0.34.0"
source = "git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=296a4f6c73b129d6f565a42a2e5e53c6bc2b9da4#296a4f6c73b129d6f565a42a2e5e53c6bc2b9da4"
source = "git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=6cf9d23d5b8fbecd65efc1d9afb7e80ad7a424da#6cf9d23d5b8fbecd65efc1d9afb7e80ad7a424da"
dependencies = [
"lazy_static",
"log",
"regex",
"sqlparser 0.35.0",
"sqlparser_derive 0.1.1 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=296a4f6c73b129d6f565a42a2e5e53c6bc2b9da4)",
"sqlparser_derive 0.1.1 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=6cf9d23d5b8fbecd65efc1d9afb7e80ad7a424da)",
]
[[package]]
@@ -9374,7 +9299,7 @@ dependencies = [
[[package]]
name = "sqlparser_derive"
version = "0.1.1"
source = "git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=296a4f6c73b129d6f565a42a2e5e53c6bc2b9da4#296a4f6c73b129d6f565a42a2e5e53c6bc2b9da4"
source = "git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=6cf9d23d5b8fbecd65efc1d9afb7e80ad7a424da#6cf9d23d5b8fbecd65efc1d9afb7e80ad7a424da"
dependencies = [
"proc-macro2",
"quote",
@@ -9527,7 +9452,7 @@ dependencies = [
[[package]]
name = "storage"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"arc-swap",
@@ -9581,7 +9506,7 @@ dependencies = [
[[package]]
name = "store-api"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"aquamarine",
@@ -9600,6 +9525,7 @@ dependencies = [
"serde",
"serde_json",
"snafu",
"strum 0.25.0",
"tokio",
]
@@ -9718,7 +9644,7 @@ dependencies = [
[[package]]
name = "substrait"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"async-recursion",
"async-trait",
@@ -9876,7 +9802,7 @@ dependencies = [
[[package]]
name = "table"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"anymap",
"async-trait",
@@ -9982,7 +9908,7 @@ dependencies = [
[[package]]
name = "tests-integration"
version = "0.4.0-nightly"
version = "0.4.1"
dependencies = [
"api",
"async-trait",
@@ -10035,7 +9961,7 @@ dependencies = [
"sql",
"sqlx",
"store-api",
"substrait 0.4.0-nightly",
"substrait 0.4.1",
"table",
"tempfile",
"tokio",
@@ -11196,12 +11122,6 @@ version = "0.3.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f8e76fae08f03f96e166d2dfda232190638c10e0383841252416f9cfe2ae60e6"
[[package]]
name = "waker-fn"
version = "1.1.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9d5b2c62b4012a3e1eca5a7e077d13b3bf498c4073e33ccd58626607748ceeca"
[[package]]
name = "walkdir"
version = "2.3.3"


@@ -39,6 +39,7 @@ members = [
"src/object-store",
"src/operator",
"src/partition",
"src/plugins",
"src/promql",
"src/query",
"src/script",
@@ -54,40 +55,43 @@ members = [
resolver = "2"
[workspace.package]
version = "0.4.0-nightly"
version = "0.4.1"
edition = "2021"
license = "Apache-2.0"
[workspace.dependencies]
aquamarine = "0.3"
arrow = { version = "43.0" }
etcd-client = "0.11"
arrow-array = "43.0"
arrow-flight = "43.0"
arrow-schema = { version = "43.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] }
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "c0b0fca548e99d020c76e1a1cd7132aab26000e1" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "c0b0fca548e99d020c76e1a1cd7132aab26000e1" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "c0b0fca548e99d020c76e1a1cd7132aab26000e1" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "c0b0fca548e99d020c76e1a1cd7132aab26000e1" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "c0b0fca548e99d020c76e1a1cd7132aab26000e1" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "c0b0fca548e99d020c76e1a1cd7132aab26000e1" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "c0b0fca548e99d020c76e1a1cd7132aab26000e1" }
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b6f3b28b6fe91924cc8dd3d83726b766f2a706ec" }
derive_builder = "0.12"
etcd-client = "0.11"
futures = "0.3"
futures-util = "0.3"
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "693128abe9adc70ba636010a172c9da55b206bba" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "1f1dd532a111e3834cc3019c5605e2993ffb9dc3" }
humantime-serde = "1.1"
itertools = "0.10"
lazy_static = "1.4"
moka = { version = "0.11" }
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "abbd357c1e193cd270ea65ee7652334a150b628f" }
metrics = "0.20"
moka = "0.12"
once_cell = "1.18"
opentelemetry-proto = { version = "0.2", features = ["gen-tonic", "metrics"] }
parquet = "43.0"
paste = "1.0"
prost = "0.11"
raft-engine = { git = "https://github.com/tikv/raft-engine.git", rev = "22dfb426cd994602b57725ef080287d3e53db479" }
rand = "0.8"
regex = "1.8"
reqwest = { version = "0.11", default-features = false, features = [
@@ -99,7 +103,7 @@ serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0"
smallvec = "1"
snafu = { version = "0.7", features = ["backtraces"] }
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "296a4f6c73b129d6f565a42a2e5e53c6bc2b9da4", features = [
sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "6cf9d23d5b8fbecd65efc1d9afb7e80ad7a424da", features = [
"visitor",
] }
strum = { version = "0.25", features = ["derive"] }
@@ -109,8 +113,6 @@ tokio-util = { version = "0.7", features = ["io-util", "compat"] }
toml = "0.7"
tonic = { version = "0.9", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
metrics = "0.20"
meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "abbd357c1e193cd270ea65ee7652334a150b628f" }
## workspaces members
api = { path = "src/api" }
auth = { path = "src/auth" }
@@ -123,19 +125,18 @@ common-config = { path = "src/common/config" }
common-datasource = { path = "src/common/datasource" }
common-error = { path = "src/common/error" }
common-function = { path = "src/common/function" }
common-macro = { path = "src/common/macro" }
common-greptimedb-telemetry = { path = "src/common/greptimedb-telemetry" }
common-grpc = { path = "src/common/grpc" }
common-grpc-expr = { path = "src/common/grpc-expr" }
common-macro = { path = "src/common/macro" }
common-mem-prof = { path = "src/common/mem-prof" }
common-meta = { path = "src/common/meta" }
common-pprof = { path = "src/common/pprof" }
common-procedure = { path = "src/common/procedure" }
common-procedure-test = { path = "src/common/procedure-test" }
common-pprof = { path = "src/common/pprof" }
common-query = { path = "src/common/query" }
common-recordbatch = { path = "src/common/recordbatch" }
common-runtime = { path = "src/common/runtime" }
substrait = { path = "src/common/substrait" }
common-telemetry = { path = "src/common/telemetry" }
common-test-util = { path = "src/common/test-util" }
common-time = { path = "src/common/time" }
@@ -149,20 +150,20 @@ meta-client = { path = "src/meta-client" }
meta-srv = { path = "src/meta-srv" }
mito = { path = "src/mito" }
mito2 = { path = "src/mito2" }
operator = { path = "src/operator" }
object-store = { path = "src/object-store" }
operator = { path = "src/operator" }
partition = { path = "src/partition" }
plugins = { path = "src/plugins" }
promql = { path = "src/promql" }
query = { path = "src/query" }
raft-engine = { git = "https://github.com/tikv/raft-engine.git", rev = "22dfb426cd994602b57725ef080287d3e53db479" }
script = { path = "src/script" }
servers = { path = "src/servers" }
session = { path = "src/session" }
sql = { path = "src/sql" }
storage = { path = "src/storage" }
store-api = { path = "src/store-api" }
substrait = { path = "src/common/substrait" }
table = { path = "src/table" }
table-procedure = { path = "src/table-procedure" }
[workspace.dependencies.meter-macros]
git = "https://github.com/GreptimeTeam/greptime-meter.git"


@@ -55,11 +55,15 @@ else
BUILDX_MULTI_PLATFORM_BUILD_OPTS := -o type=docker
endif
ifneq ($(strip $(CARGO_BUILD_EXTRA_OPTS)),)
CARGO_BUILD_OPTS += ${CARGO_BUILD_EXTRA_OPTS}
endif
##@ Build
.PHONY: build
build: ## Build debug version greptime.
cargo build ${CARGO_BUILD_OPTS}
cargo ${CARGO_EXTENSION} build ${CARGO_BUILD_OPTS}
.PHONY: build-by-dev-builder
build-by-dev-builder: ## Build greptime by dev-builder.
@@ -67,11 +71,34 @@ build-by-dev-builder: ## Build greptime by dev-builder.
-v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry \
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:latest \
make build \
CARGO_EXTENSION="${CARGO_EXTENSION}" \
CARGO_PROFILE=${CARGO_PROFILE} \
FEATURES=${FEATURES} \
TARGET_DIR=${TARGET_DIR} \
TARGET=${TARGET} \
RELEASE=${RELEASE}
RELEASE=${RELEASE} \
CARGO_BUILD_EXTRA_OPTS="${CARGO_BUILD_EXTRA_OPTS}"
.PHONY: build-android-bin
build-android-bin: ## Build greptime binary for android.
docker run --network=host \
-v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry \
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-android:latest \
make build \
CARGO_EXTENSION="ndk --platform 23 -t aarch64-linux-android" \
CARGO_PROFILE=release \
FEATURES="${FEATURES}" \
TARGET_DIR="${TARGET_DIR}" \
TARGET="${TARGET}" \
RELEASE="${RELEASE}" \
CARGO_BUILD_EXTRA_OPTS="--bin greptime --no-default-features"
.PHONY: strip-android-bin
strip-android-bin: build-android-bin ## Strip greptime binary for android.
docker run --network=host \
-v ${PWD}:/greptimedb \
-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-android:latest \
bash -c '$${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip /greptimedb/target/aarch64-linux-android/release/greptime'
.PHONY: clean
clean: ## Clean the project.
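For local use, the new Android targets shown above can be driven directly from make; a minimal sketch, assuming Docker is available and the dev-builder-android image is reachable under the default `IMAGE_REGISTRY`/`IMAGE_NAMESPACE`:

```bash
# Build an aarch64 Android binary inside dev-builder-android, then strip it.
# strip-android-bin depends on build-android-bin, so one invocation does both.
make strip-android-bin

# The stripped binary ends up at target/aarch64-linux-android/release/greptime.
```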


@@ -104,11 +104,11 @@ Or if you built from docker:
docker run -p 4002:4002 -v "$(pwd):/tmp/greptimedb" greptime/greptimedb standalone start
```
Please see [the online document site](https://docs.greptime.com/getting-started/overview#install-greptimedb) for more installation options and [operations info](https://docs.greptime.com/user-guide/operations/overview).
Please see the online document site for more installation options and [operations info](https://docs.greptime.com/user-guide/operations/overview).
### Get started
Read the [complete getting started guide](https://docs.greptime.com/getting-started/overview#connect) on our [official document site](https://docs.greptime.com/).
Read the [complete getting started guide](https://docs.greptime.com/getting-started/try-out-greptimedb) on our [official document site](https://docs.greptime.com/).
To write and query data, GreptimeDB is compatible with multiple [protocols and clients](https://docs.greptime.com/user-guide/clients/overview).


@@ -6,8 +6,10 @@ license.workspace = true
[dependencies]
arrow.workspace = true
chrono.workspace = true
clap = { version = "4.0", features = ["derive"] }
client = { workspace = true }
futures-util.workspace = true
indicatif = "0.17.1"
itertools.workspace = true
parquet.workspace = true


@@ -29,14 +29,14 @@ use client::api::v1::column::Values;
use client::api::v1::{
Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, InsertRequests, SemanticType,
};
use client::{Client, Database, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use client::{Client, Database, Output, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use futures_util::TryStreamExt;
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
use tokio::task::JoinSet;
const CATALOG_NAME: &str = "greptime";
const SCHEMA_NAME: &str = "public";
const TABLE_NAME: &str = "nyc_taxi";
#[derive(Parser)]
#[command(name = "NYC benchmark runner")]
@@ -74,7 +74,12 @@ fn get_file_list<P: AsRef<Path>>(path: P) -> Vec<PathBuf> {
.collect()
}
fn new_table_name() -> String {
format!("nyc_taxi_{}", chrono::Utc::now().timestamp())
}
async fn write_data(
table_name: &str,
batch_size: usize,
db: &Database,
path: PathBuf,
@@ -104,7 +109,7 @@ async fn write_data(
}
let (columns, row_count) = convert_record_batch(record_batch);
let request = InsertRequest {
table_name: TABLE_NAME.to_string(),
table_name: table_name.to_string(),
columns,
row_count,
};
@@ -113,7 +118,7 @@ async fn write_data(
};
let now = Instant::now();
let _ = db.insert(requests).await.unwrap();
db.insert(requests).await.unwrap();
let elapsed = now.elapsed();
total_rpc_elapsed_ms += elapsed.as_millis();
progress_bar.inc(row_count as _);
@@ -131,6 +136,11 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
for (array, field) in record_batch.columns().iter().zip(fields.iter()) {
let (values, datatype) = build_values(array);
let semantic_type = match field.name().as_str() {
"VendorID" => SemanticType::Tag,
"tpep_pickup_datetime" => SemanticType::Timestamp,
_ => SemanticType::Field,
};
let column = Column {
column_name: field.name().clone(),
@@ -141,8 +151,7 @@ fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
.map(|bitmap| bitmap.buffer().as_slice().to_vec())
.unwrap_or_default(),
datatype: datatype.into(),
// datatype and semantic_type are set to default
..Default::default()
semantic_type: semantic_type as i32,
};
columns.push(column);
}
@@ -243,11 +252,11 @@ fn is_record_batch_full(batch: &RecordBatch) -> bool {
batch.columns().iter().all(|col| col.null_count() == 0)
}
fn create_table_expr() -> CreateTableExpr {
fn create_table_expr(table_name: &str) -> CreateTableExpr {
CreateTableExpr {
catalog_name: CATALOG_NAME.to_string(),
schema_name: SCHEMA_NAME.to_string(),
table_name: TABLE_NAME.to_string(),
table_name: table_name.to_string(),
desc: "".to_string(),
column_defs: vec![
ColumnDef {
@@ -261,7 +270,7 @@ fn create_table_expr() -> CreateTableExpr {
ColumnDef {
name: "tpep_pickup_datetime".to_string(),
data_type: ColumnDataType::TimestampMicrosecond as i32,
is_nullable: true,
is_nullable: false,
default_constraint: vec![],
semantic_type: SemanticType::Timestamp as i32,
comment: String::new(),
@@ -405,31 +414,31 @@ fn create_table_expr() -> CreateTableExpr {
],
time_index: "tpep_pickup_datetime".to_string(),
primary_keys: vec!["VendorID".to_string()],
create_if_not_exists: false,
create_if_not_exists: true,
table_options: Default::default(),
table_id: None,
engine: "mito".to_string(),
}
}
fn query_set() -> HashMap<String, String> {
fn query_set(table_name: &str) -> HashMap<String, String> {
HashMap::from([
(
"count_all".to_string(),
format!("SELECT COUNT(*) FROM {TABLE_NAME};"),
format!("SELECT COUNT(*) FROM {table_name};"),
),
(
"fare_amt_by_passenger".to_string(),
format!("SELECT passenger_count, MIN(fare_amount), MAX(fare_amount), SUM(fare_amount) FROM {TABLE_NAME} GROUP BY passenger_count"),
format!("SELECT passenger_count, MIN(fare_amount), MAX(fare_amount), SUM(fare_amount) FROM {table_name} GROUP BY passenger_count"),
)
])
}
async fn do_write(args: &Args, db: &Database) {
async fn do_write(args: &Args, db: &Database, table_name: &str) {
let mut file_list = get_file_list(args.path.clone().expect("Specify data path in argument"));
let mut write_jobs = JoinSet::new();
let create_table_result = db.create(create_table_expr()).await;
let create_table_result = db.create(create_table_expr(table_name)).await;
println!("Create table result: {create_table_result:?}");
let progress_bar_style = ProgressStyle::with_template(
@@ -447,8 +456,10 @@ async fn do_write(args: &Args, db: &Database) {
let db = db.clone();
let mpb = multi_progress_bar.clone();
let pb_style = progress_bar_style.clone();
let _ = write_jobs
.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
let table_name = table_name.to_string();
let _ = write_jobs.spawn(async move {
write_data(&table_name, batch_size, &db, path, mpb, pb_style).await
});
}
}
while write_jobs.join_next().await.is_some() {
@@ -457,24 +468,32 @@ async fn do_write(args: &Args, db: &Database) {
let db = db.clone();
let mpb = multi_progress_bar.clone();
let pb_style = progress_bar_style.clone();
let _ = write_jobs
.spawn(async move { write_data(batch_size, &db, path, mpb, pb_style).await });
let table_name = table_name.to_string();
let _ = write_jobs.spawn(async move {
write_data(&table_name, batch_size, &db, path, mpb, pb_style).await
});
}
}
}
async fn do_query(num_iter: usize, db: &Database) {
for (query_name, query) in query_set() {
async fn do_query(num_iter: usize, db: &Database, table_name: &str) {
for (query_name, query) in query_set(table_name) {
println!("Running query: {query}");
for i in 0..num_iter {
let now = Instant::now();
let _res = db.sql(&query).await.unwrap();
let res = db.sql(&query).await.unwrap();
match res {
Output::AffectedRows(_) | Output::RecordBatches(_) => (),
Output::Stream(stream) => {
stream.try_collect::<Vec<_>>().await.unwrap();
}
}
let elapsed = now.elapsed();
println!(
"query {}, iteration {}: {}ms",
query_name,
i,
elapsed.as_millis()
elapsed.as_millis(),
);
}
}
@@ -491,13 +510,14 @@ fn main() {
.block_on(async {
let client = Client::with_urls(vec![&args.endpoint]);
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
let table_name = new_table_name();
if !args.skip_write {
do_write(&args, &db).await;
do_write(&args, &db, &table_name).await;
}
if !args.skip_read {
do_query(args.iter_num, &db).await;
do_query(args.iter_num, &db, &table_name).await;
}
})
}
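With the table name now generated per run, repeated benchmark invocations no longer collide on `nyc_taxi`. A hypothetical invocation is sketched below; the binary name, port, and flag spellings are assumptions derived from the clap `Args` fields (`endpoint`, `path`, `iter_num`, `skip_write`, `skip_read`) visible in the diff, not confirmed CLI options:

```bash
# Hypothetical run of the nyc-taxi benchmark against a local GreptimeDB gRPC endpoint.
# Flag names and the endpoint/port are illustrative assumptions.
cargo run --release --bin nyc-taxi -- \
  --endpoint 127.0.0.1:4001 \
  --path /path/to/nyc-taxi-parquet \
  --iter-num 5
```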


@@ -13,19 +13,19 @@ rpc_runtime_size = 8
require_lease_before_startup = false
[heartbeat]
# Interval for sending heartbeat messages to the Metasrv in milliseconds, 3000 by default.
interval_millis = 3000
# Interval for sending heartbeat messages to the Metasrv, 3 seconds by default.
interval = "3s"
# Metasrv client options.
[meta_client]
# Metasrv address list.
metasrv_addrs = ["127.0.0.1:3002"]
# Heartbeat timeout in milliseconds, 500 by default.
heartbeat_timeout_millis = 500
# Operation timeout in milliseconds, 3000 by default.
timeout_millis = 3000
# Connect server timeout in milliseconds, 5000 by default.
connect_timeout_millis = 1000
# Heartbeat timeout, 500 milliseconds by default.
heartbeat_timeout = "500ms"
# Operation timeout, 3 seconds by default.
timeout = "3s"
# Connect server timeout, 1 second by default.
connect_timeout = "1s"
# `TCP_NODELAY` option for accepted connections, true by default.
tcp_nodelay = true
@@ -47,6 +47,12 @@ type = "File"
# TTL for all tables. Disabled by default.
# global_ttl = "7d"
# Cache configuration for object storage such as 'S3' etc.
# The local file cache directory
# cache_path = "/path/local_cache"
# The local file cache capacity in bytes.
# cache_capacity = "256MB"
# Compaction options, see `standalone.example.toml`.
[storage.compaction]
max_inflight_tasks = 4


@@ -2,10 +2,10 @@
mode = "distributed"
[heartbeat]
# Interval for sending heartbeat task to the Metasrv in milliseconds, 5000 by default.
interval_millis = 5000
# Interval for retry sending heartbeat task in milliseconds, 5000 by default.
retry_interval_millis = 5000
# Interval for sending heartbeat task to the Metasrv, 5 seconds by default.
interval = "5s"
# Interval for retry sending heartbeat task, 5 seconds by default.
retry_interval = "5s"
# HTTP server options, see `standalone.example.toml`.
[http]
@@ -59,10 +59,10 @@ enable = true
# Metasrv client options, see `datanode.example.toml`.
[meta_client]
metasrv_addrs = ["127.0.0.1:3002"]
timeout_millis = 3000
timeout = "3s"
# DDL timeouts options.
ddl_timeout_millis = 10000
connect_timeout_millis = 1000
ddl_timeout = "10s"
connect_timeout = "1s"
tcp_nodelay = true
# Log options, see `standalone.example.toml`


@@ -32,6 +32,6 @@ retry_delay = "500ms"
# [datanode]
# # Datanode client options.
# [datanode.client_options]
# timeout_millis = 10000
# connect_timeout_millis = 10000
# timeout = "10s"
# connect_timeout = "10s"
# tcp_nodelay = true


@@ -82,6 +82,8 @@ enable = true
# WAL options.
[wal]
# WAL data directory
# dir = "/tmp/greptimedb/wal"
# WAL file size in bytes.
file_size = "256MB"
# WAL purge threshold.
@@ -93,8 +95,8 @@ read_batch_size = 128
# Whether to sync log file after every write.
sync_write = false
# Kv options.
[kv_store]
# Metadata storage options.
[metadata_store]
# Kv file size in bytes.
file_size = "256MB"
# Kv purge threshold.
@@ -115,6 +117,10 @@ data_home = "/tmp/greptimedb/"
type = "File"
# TTL for all tables. Disabled by default.
# global_ttl = "7d"
# Cache configuration for object storage such as 'S3' etc.
# cache_path = "/path/local_cache"
# The local file cache capacity in bytes.
# cache_capacity = "256MB"
# Compaction options.
[storage.compaction]


@@ -1,4 +1,4 @@
FROM ubuntu:22.04 as builder
FROM ubuntu:20.04 as builder
ARG CARGO_PROFILE
ARG FEATURES
@@ -7,6 +7,11 @@ ARG OUTPUT_DIR
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Add PPA for Python 3.10.
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa -y
# Install dependencies.
RUN --mount=type=cache,target=/var/cache/apt \
apt-get update && apt-get install -y \


@@ -0,0 +1,41 @@
FROM --platform=linux/amd64 saschpe/android-ndk:34-jdk17.0.8_7-ndk25.2.9519653-cmake3.22.1
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Provide libgcc.a by copying libunwind.a (recent NDKs ship libunwind instead of libgcc)
RUN cp ${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.7/lib/linux/aarch64/libunwind.a ${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.7/lib/linux/aarch64/libgcc.a
# Install dependencies.
RUN apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
git \
build-essential \
pkg-config \
python3 \
python3-dev \
python3-pip \
&& pip3 install --upgrade pip \
&& pip3 install pyarrow
# Trust workdir
RUN git config --global --add safe.directory /greptimedb
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
ENV PATH /root/.cargo/bin/:$PATH
# Add android toolchains
ARG RUST_TOOLCHAIN
RUN rustup toolchain install ${RUST_TOOLCHAIN}
RUN rustup target add aarch64-linux-android
# Install cargo-ndk
RUN cargo install cargo-ndk
ENV ANDROID_NDK_HOME $NDK_ROOT
# Builder entrypoint.
CMD ["cargo", "ndk", "--platform", "23", "-t", "aarch64-linux-android", "build", "--bin", "greptime", "--profile", "release", "--no-default-features"]


@@ -1,8 +1,13 @@
FROM ubuntu:22.04
FROM ubuntu:20.04
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Add PPA for Python 3.10.
RUN apt-get update && \
DEBIAN_FRONTEND=noninteractive apt-get install -y software-properties-common && \
add-apt-repository ppa:deadsnakes/ppa -y
# Install dependencies.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
libssl-dev \


@@ -0,0 +1,61 @@
# TSBS benchmark - v0.4.0
## Environment
### Local
| | |
| ------ | ---------------------------------- |
| CPU | AMD Ryzen 7 7735HS (8 core 3.2GHz) |
| Memory | 32GB |
| Disk | SOLIDIGM SSDPFKNU010TZ |
| OS | Ubuntu 22.04.2 LTS |
### Aliyun amd64
| | |
| ------- | -------------- |
| Machine | ecs.g7.4xlarge |
| CPU | 16 core |
| Memory | 64GB |
| Disk | 100G |
| OS | Ubuntu 22.04 |
### Aliyun arm64
| | |
| ------- | ----------------- |
| Machine | ecs.g8y.4xlarge |
| CPU | 16 core |
| Memory | 64GB |
| Disk | 100G |
| OS | Ubuntu 22.04 ARM |
## Write performance
| Environment | Ingest rate (rows/s) |
| ------------------ | --------------------- |
| Local | 365280.60 |
| Aliyun g7.4xlarge | 341368.72 |
| Aliyun g8y.4xlarge | 320907.29 |
## Query performance
| Query type | Local (ms) | Aliyun g7.4xlarge (ms) | Aliyun g8y.4xlarge (ms) |
| --------------------- | ---------- | ---------------------- | ----------------------- |
| cpu-max-all-1 | 50.70 | 31.46 | 47.61 |
| cpu-max-all-8 | 262.16 | 129.26 | 152.43 |
| double-groupby-1 | 2512.71 | 1408.19 | 1586.10 |
| double-groupby-5 | 3896.15 | 2304.29 | 2585.29 |
| double-groupby-all | 5404.67 | 3337.61 | 3773.91 |
| groupby-orderby-limit | 3786.98 | 2065.72 | 2312.57 |
| high-cpu-1 | 71.96 | 37.29 | 54.01 |
| high-cpu-all | 9468.75 | 7595.69 | 8467.46 |
| lastpoint | 13379.43 | 11253.76 | 12949.40 |
| single-groupby-1-1-1 | 20.72 | 12.16 | 13.35 |
| single-groupby-1-1-12 | 28.53 | 15.67 | 21.62 |
| single-groupby-1-8-1 | 72.23 | 37.90 | 43.52 |
| single-groupby-5-1-1 | 26.75 | 15.59 | 17.48 |
| single-groupby-5-1-12 | 45.41 | 22.90 | 31.96 |
| single-groupby-5-8-1 | 107.96 | 59.76 | 69.58 |


@@ -4,8 +4,6 @@ version.workspace = true
edition.workspace = true
license.workspace = true
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[features]
default = []
testing = []


@@ -18,9 +18,7 @@ use std::sync::{Arc, Weak};
use common_catalog::consts::{DEFAULT_SCHEMA_NAME, INFORMATION_SCHEMA_NAME, NUMBERS_TABLE_ID};
use common_error::ext::BoxedError;
use common_meta::cache_invalidator::{
CacheInvalidator, Context, KvCacheInvalidatorRef, TableMetadataCacheInvalidator,
};
use common_meta::cache_invalidator::{CacheInvalidator, CacheInvalidatorRef, Context};
use common_meta::datanode_manager::DatanodeManagerRef;
use common_meta::error::Result as MetaResult;
use common_meta::key::catalog_name::CatalogNameKey;
@@ -54,7 +52,7 @@ pub struct KvBackendCatalogManager {
// TODO(LFC): Maybe use a real implementation for Standalone mode.
// Now we use `NoopKvCacheInvalidator` for Standalone mode. In Standalone mode, the KV backend
// is implemented by RaftEngine. Maybe we need a cache for it?
table_metadata_cache_invalidator: TableMetadataCacheInvalidator,
cache_invalidator: CacheInvalidatorRef,
partition_manager: PartitionRuleManagerRef,
table_metadata_manager: TableMetadataManagerRef,
datanode_manager: DatanodeManagerRef,
@@ -65,13 +63,13 @@ pub struct KvBackendCatalogManager {
#[async_trait::async_trait]
impl CacheInvalidator for KvBackendCatalogManager {
async fn invalidate_table_name(&self, ctx: &Context, table_name: TableName) -> MetaResult<()> {
self.table_metadata_cache_invalidator
self.cache_invalidator
.invalidate_table_name(ctx, table_name)
.await
}
async fn invalidate_table_id(&self, ctx: &Context, table_id: TableId) -> MetaResult<()> {
self.table_metadata_cache_invalidator
self.cache_invalidator
.invalidate_table_id(ctx, table_id)
.await
}
@@ -80,15 +78,13 @@ impl CacheInvalidator for KvBackendCatalogManager {
impl KvBackendCatalogManager {
pub fn new(
backend: KvBackendRef,
backend_cache_invalidator: KvCacheInvalidatorRef,
cache_invalidator: CacheInvalidatorRef,
datanode_manager: DatanodeManagerRef,
) -> Arc<Self> {
Arc::new_cyclic(|me| Self {
partition_manager: Arc::new(PartitionRuleManager::new(backend.clone())),
table_metadata_manager: Arc::new(TableMetadataManager::new(backend)),
table_metadata_cache_invalidator: TableMetadataCacheInvalidator::new(
backend_cache_invalidator.clone(),
),
cache_invalidator,
datanode_manager,
system_catalog: SystemCatalog {
catalog_manager: me.clone(),
@@ -107,12 +103,6 @@ impl KvBackendCatalogManager {
pub fn datanode_manager(&self) -> DatanodeManagerRef {
self.datanode_manager.clone()
}
pub async fn invalidate_schema(&self, catalog: &str, schema: &str) {
self.table_metadata_cache_invalidator
.invalidate_schema(catalog, schema)
.await
}
}
#[async_trait::async_trait]
@@ -229,6 +219,7 @@ impl CatalogManager for KvBackendCatalogManager {
.get(table_id)
.await
.context(TableMetadataManagerSnafu)?
.map(|v| v.into_inner())
else {
return Ok(None);
};


@@ -139,11 +139,19 @@ impl Client {
}
fn max_grpc_recv_message_size(&self) -> usize {
self.inner.channel_manager.config().max_recv_message_size
self.inner
.channel_manager
.config()
.max_recv_message_size
.as_bytes() as usize
}
fn max_grpc_send_message_size(&self) -> usize {
self.inner.channel_manager.config().max_send_message_size
self.inner
.channel_manager
.config()
.max_send_message_size
.as_bytes() as usize
}
pub(crate) fn make_flight_client(&self) -> Result<FlightClient> {


@@ -167,11 +167,14 @@ impl Database {
}
}
pub async fn sql(&self, sql: &str) -> Result<Output> {
pub async fn sql<S>(&self, sql: S) -> Result<Output>
where
S: AsRef<str>,
{
let _timer = timer!(metrics::METRIC_GRPC_SQL);
self.do_get(
Request::Query(QueryRequest {
query: Some(Query::Sql(sql.to_string())),
query: Some(Query::Sql(sql.as_ref().to_string())),
}),
0,
)
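
The signature change above makes `Database::sql` generic over `S: AsRef<str>`, so callers can pass either a `&str` or an owned `String` without converting first. A minimal sketch of the same pattern; `run_sql` is a made-up stand-in, not the real client API:

```rust
// The `S: AsRef<str>` pattern adopted by `Database::sql`: one signature accepts both
// borrowed and owned strings. `run_sql` is illustrative only.
fn run_sql<S: AsRef<str>>(sql: S) -> String {
    format!("executing: {}", sql.as_ref())
}

fn main() {
    let from_str = run_sql("show databases");                  // &str
    let from_string = run_sql(String::from("show databases")); // String
    assert_eq!(from_str, from_string);
}
```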


@@ -26,6 +26,8 @@ use api::v1::greptime_response::Response;
use api::v1::{AffectedRows, GreptimeResponse};
pub use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::status_code::StatusCode;
pub use common_query::Output;
pub use common_recordbatch::{RecordBatches, SendableRecordBatchStream};
use snafu::OptionExt;
pub use self::client::Client;


@@ -49,6 +49,7 @@ metrics.workspace = true
mito2 = { workspace = true }
nu-ansi-term = "0.46"
partition = { workspace = true }
plugins.workspace = true
prost.workspace = true
query = { workspace = true }
rand.workspace = true


@@ -14,6 +14,7 @@
mod bench;
mod cmd;
mod export;
mod helper;
mod repl;
// TODO(weny): Removes it
@@ -27,6 +28,7 @@ use common_telemetry::logging::LoggingOptions;
pub use repl::Repl;
use upgrade::UpgradeCommand;
use self::export::ExportCommand;
use crate::error::Result;
use crate::options::{Options, TopLevelOptions};
@@ -78,17 +80,19 @@ impl Command {
#[derive(Parser)]
enum SubCommand {
Attach(AttachCommand),
// Attach(AttachCommand),
Upgrade(UpgradeCommand),
Bench(BenchTableMetadataCommand),
Export(ExportCommand),
}
impl SubCommand {
async fn build(self) -> Result<Instance> {
match self {
SubCommand::Attach(cmd) => cmd.build().await,
// SubCommand::Attach(cmd) => cmd.build().await,
SubCommand::Upgrade(cmd) => cmd.build().await,
SubCommand::Bench(cmd) => cmd.build().await,
SubCommand::Export(cmd) => cmd.build().await,
}
}
}
@@ -104,51 +108,9 @@ pub(crate) struct AttachCommand {
}
impl AttachCommand {
#[allow(dead_code)]
async fn build(self) -> Result<Instance> {
let repl = Repl::try_new(&self).await?;
Ok(Instance::Repl(repl))
}
}
#[cfg(test)]
mod tests {
use super::*;
#[test]
fn test_load_options() {
let cmd = Command {
cmd: SubCommand::Attach(AttachCommand {
grpc_addr: String::from(""),
meta_addr: None,
disable_helper: false,
}),
};
let opts = cmd.load_options(TopLevelOptions::default()).unwrap();
let logging_opts = opts.logging_options();
assert_eq!("/tmp/greptimedb/logs", logging_opts.dir);
assert!(logging_opts.level.is_none());
assert!(!logging_opts.enable_jaeger_tracing);
}
#[test]
fn test_top_level_options() {
let cmd = Command {
cmd: SubCommand::Attach(AttachCommand {
grpc_addr: String::from(""),
meta_addr: None,
disable_helper: false,
}),
};
let opts = cmd
.load_options(TopLevelOptions {
log_dir: Some("/tmp/greptimedb/test/logs".to_string()),
log_level: Some("debug".to_string()),
})
.unwrap();
let logging_opts = opts.logging_options();
assert_eq!("/tmp/greptimedb/test/logs", logging_opts.dir);
assert_eq!("debug", logging_opts.level.as_ref().unwrap());
}
}

src/cmd/src/cli/export.rs (new file, 395 lines)

@@ -0,0 +1,395 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::path::Path;
use std::sync::Arc;
use async_trait::async_trait;
use clap::{Parser, ValueEnum};
use client::{Client, Database, DEFAULT_SCHEMA_NAME};
use common_query::Output;
use common_recordbatch::util::collect;
use common_telemetry::{debug, error, info, warn};
use datatypes::scalars::ScalarVector;
use datatypes::vectors::{StringVector, Vector};
use snafu::{OptionExt, ResultExt};
use tokio::fs::File;
use tokio::io::AsyncWriteExt;
use tokio::sync::Semaphore;
use crate::cli::{Instance, Tool};
use crate::error::{
CollectRecordBatchesSnafu, ConnectServerSnafu, EmptyResultSnafu, Error, FileIoSnafu,
InvalidDatabaseNameSnafu, NotDataFromOutputSnafu, RequestDatabaseSnafu, Result,
};
type TableReference = (String, String, String);
#[derive(Debug, Default, Clone, ValueEnum)]
enum ExportTarget {
/// Corresponding to `SHOW CREATE TABLE`
#[default]
CreateTable,
/// Corresponding to `EXPORT TABLE`
TableData,
}
#[derive(Debug, Default, Parser)]
pub struct ExportCommand {
/// Server address to connect
#[clap(long)]
addr: String,
/// Directory to put the exported data. E.g.: /tmp/greptimedb-export
#[clap(long)]
output_dir: String,
/// The name of the catalog to export. Defaults to "greptime-*".
#[clap(long, default_value = "")]
database: String,
/// Parallelism of the export.
#[clap(long, short = 'j', default_value = "1")]
export_jobs: usize,
/// Max retry times for each job.
#[clap(long, default_value = "3")]
max_retry: usize,
/// Things to export
#[clap(long, short = 't', value_enum)]
target: ExportTarget,
}
impl ExportCommand {
pub async fn build(&self) -> Result<Instance> {
let client = Client::with_urls([self.addr.clone()]);
client
.health_check()
.await
.with_context(|_| ConnectServerSnafu {
addr: self.addr.clone(),
})?;
let (catalog, schema) = split_database(&self.database)?;
let database_client = Database::new(
catalog.clone(),
schema.clone().unwrap_or(DEFAULT_SCHEMA_NAME.to_string()),
client,
);
Ok(Instance::Tool(Box::new(Export {
client: database_client,
catalog,
schema,
output_dir: self.output_dir.clone(),
parallelism: self.export_jobs,
target: self.target.clone(),
})))
}
}
pub struct Export {
client: Database,
catalog: String,
schema: Option<String>,
output_dir: String,
parallelism: usize,
target: ExportTarget,
}
impl Export {
/// Iterate over all db names.
///
/// Note: a `db_name` here is catalog + schema.
async fn iter_db_names(&self) -> Result<Vec<(String, String)>> {
if let Some(schema) = &self.schema {
Ok(vec![(self.catalog.clone(), schema.clone())])
} else {
let mut client = self.client.clone();
client.set_catalog(self.catalog.clone());
let result =
client
.sql("show databases")
.await
.with_context(|_| RequestDatabaseSnafu {
sql: "show databases".to_string(),
})?;
let Output::Stream(stream) = result else {
NotDataFromOutputSnafu.fail()?
};
let record_batch = collect(stream)
.await
.context(CollectRecordBatchesSnafu)?
.pop()
.context(EmptyResultSnafu)?;
let schemas = record_batch
.column(0)
.as_any()
.downcast_ref::<StringVector>()
.unwrap();
let mut result = Vec::with_capacity(schemas.len());
for i in 0..schemas.len() {
let schema = schemas.get_data(i).unwrap().to_owned();
result.push((self.catalog.clone(), schema));
}
Ok(result)
}
}
/// Return a list of [`TableReference`] to be exported.
/// Includes all tables under the given `catalog` and `schema`
async fn get_table_list(&self, catalog: &str, schema: &str) -> Result<Vec<TableReference>> {
// TODO: SQL injection hurts
let sql = format!(
"select table_catalog, table_schema, table_name from \
information_schema.tables where table_type = \'BASE TABLE\'\
and table_catalog = \'{catalog}\' and table_schema = \'{schema}\'",
);
let mut client = self.client.clone();
client.set_catalog(catalog);
client.set_schema(schema);
let result = client
.sql(&sql)
.await
.with_context(|_| RequestDatabaseSnafu { sql })?;
let Output::Stream(stream) = result else {
NotDataFromOutputSnafu.fail()?
};
let Some(record_batch) = collect(stream)
.await
.context(CollectRecordBatchesSnafu)?
.pop()
else {
return Ok(vec![]);
};
debug!("Fetched table list: {}", record_batch.pretty_print());
if record_batch.num_rows() == 0 {
return Ok(vec![]);
}
let mut result = Vec::with_capacity(record_batch.num_rows());
let catalog_column = record_batch
.column(0)
.as_any()
.downcast_ref::<StringVector>()
.unwrap();
let schema_column = record_batch
.column(1)
.as_any()
.downcast_ref::<StringVector>()
.unwrap();
let table_column = record_batch
.column(2)
.as_any()
.downcast_ref::<StringVector>()
.unwrap();
for i in 0..record_batch.num_rows() {
let catalog = catalog_column.get_data(i).unwrap().to_owned();
let schema = schema_column.get_data(i).unwrap().to_owned();
let table = table_column.get_data(i).unwrap().to_owned();
result.push((catalog, schema, table));
}
Ok(result)
}
async fn show_create_table(&self, catalog: &str, schema: &str, table: &str) -> Result<String> {
let sql = format!("show create table {}.{}.{}", catalog, schema, table);
let mut client = self.client.clone();
client.set_catalog(catalog);
client.set_schema(schema);
let result = client
.sql(&sql)
.await
.with_context(|_| RequestDatabaseSnafu { sql })?;
let Output::Stream(stream) = result else {
NotDataFromOutputSnafu.fail()?
};
let record_batch = collect(stream)
.await
.context(CollectRecordBatchesSnafu)?
.pop()
.context(EmptyResultSnafu)?;
let create_table = record_batch
.column(1)
.as_any()
.downcast_ref::<StringVector>()
.unwrap()
.get_data(0)
.unwrap();
Ok(format!("{create_table};\n"))
}
async fn export_create_table(&self) -> Result<()> {
let semaphore = Arc::new(Semaphore::new(self.parallelism));
let db_names = self.iter_db_names().await?;
let db_count = db_names.len();
let mut tasks = Vec::with_capacity(db_names.len());
for (catalog, schema) in db_names {
let semaphore_moved = semaphore.clone();
tasks.push(async move {
let _permit = semaphore_moved.acquire().await.unwrap();
let table_list = self.get_table_list(&catalog, &schema).await?;
let table_count = table_list.len();
tokio::fs::create_dir_all(&self.output_dir)
.await
.context(FileIoSnafu)?;
let output_file =
Path::new(&self.output_dir).join(format!("{catalog}-{schema}.sql"));
let mut file = File::create(output_file).await.context(FileIoSnafu)?;
for (c, s, t) in table_list {
match self.show_create_table(&c, &s, &t).await {
Err(e) => {
error!(e; "Failed to export table {}.{}.{}", c, s, t)
}
Ok(create_table) => {
file.write_all(create_table.as_bytes())
.await
.context(FileIoSnafu)?;
}
}
}
info!("finished exporting {catalog}.{schema} with {table_count} tables",);
Ok::<(), Error>(())
});
}
let success = futures::future::join_all(tasks)
.await
.into_iter()
.filter(|r| match r {
Ok(_) => true,
Err(e) => {
error!(e; "export job failed");
false
}
})
.count();
info!("success {success}/{db_count} jobs");
Ok(())
}
async fn export_table_data(&self) -> Result<()> {
let semaphore = Arc::new(Semaphore::new(self.parallelism));
let db_names = self.iter_db_names().await?;
let db_count = db_names.len();
let mut tasks = Vec::with_capacity(db_names.len());
for (catalog, schema) in db_names {
let semaphore_moved = semaphore.clone();
tasks.push(async move {
let _permit = semaphore_moved.acquire().await.unwrap();
tokio::fs::create_dir_all(&self.output_dir)
.await
.context(FileIoSnafu)?;
let output_dir = Path::new(&self.output_dir).join(format!("{catalog}-{schema}/"));
let mut client = self.client.clone();
client.set_catalog(catalog.clone());
client.set_schema(schema.clone());
// copy database to the output directory
let sql = format!(
"copy database {} to '{}' with (format='parquet');",
schema,
output_dir.to_str().unwrap()
);
client
.sql(sql.clone())
.await
.context(RequestDatabaseSnafu { sql })?;
info!("finished exporting {catalog}.{schema} data");
// export copy from sql
let dir_filenames = match output_dir.read_dir() {
Ok(dir) => dir,
Err(_) => {
warn!("empty database {catalog}.{schema}");
return Ok(());
}
};
let copy_from_file =
Path::new(&self.output_dir).join(format!("{catalog}-{schema}_copy_from.sql"));
let mut file = File::create(copy_from_file).await.context(FileIoSnafu)?;
let copy_from_sql = dir_filenames
.into_iter()
.map(|file| {
let file = file.unwrap();
let filename = file.file_name().into_string().unwrap();
format!(
"copy {} from '{}' with (format='parquet');\n",
filename.replace(".parquet", ""),
file.path().to_str().unwrap()
)
})
.collect::<Vec<_>>()
.join("");
file.write_all(copy_from_sql.as_bytes())
.await
.context(FileIoSnafu)?;
info!("finished exporting {catalog}.{schema} copy_from.sql");
Ok::<(), Error>(())
});
}
let success = futures::future::join_all(tasks)
.await
.into_iter()
.filter(|r| match r {
Ok(_) => true,
Err(e) => {
error!(e; "export job failed");
false
}
})
.count();
info!("success {success}/{db_count} jobs");
Ok(())
}
}
#[async_trait]
impl Tool for Export {
async fn do_work(&self) -> Result<()> {
match self.target {
ExportTarget::CreateTable => self.export_create_table().await,
ExportTarget::TableData => self.export_table_data().await,
}
}
}
/// Split the database flag at `-` into a catalog and an optional schema.
fn split_database(database: &str) -> Result<(String, Option<String>)> {
let (catalog, schema) = database
.split_once('-')
.with_context(|| InvalidDatabaseNameSnafu {
database: database.to_string(),
})?;
if schema == "*" {
Ok((catalog.to_string(), None))
} else {
Ok((catalog.to_string(), Some(schema.to_string())))
}
}
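
For reference, a hypothetical standalone rendition of `split_database` that shows the expected inputs and outputs; the real function above returns the crate's `Result` with an `InvalidDatabaseName` error instead of `None`:

```rust
// Simplified stand-in for `split_database`: "<catalog>-<schema>" splits into a catalog and
// an optional schema, where "*" means "all schemas under the catalog".
fn split_database(database: &str) -> Option<(String, Option<String>)> {
    let (catalog, schema) = database.split_once('-')?;
    if schema == "*" {
        Some((catalog.to_string(), None))
    } else {
        Some((catalog.to_string(), Some(schema.to_string())))
    }
}

fn main() {
    // Export a single schema.
    assert_eq!(
        split_database("greptime-public"),
        Some(("greptime".to_string(), Some("public".to_string())))
    );
    // Export every schema under the catalog.
    assert_eq!(split_database("greptime-*"), Some(("greptime".to_string(), None)));
    // Missing `-` separator: rejected (an error in the real implementation).
    assert_eq!(split_database("greptime"), None);
}
```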

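Both `export_create_table` and `export_table_data` use the same bounded-parallelism shape: a `tokio::sync::Semaphore` caps how many jobs run at once while `join_all` drives every job and the successes are counted. A minimal sketch of that pattern, assuming `tokio` (with the `full` feature) and `futures` as dependencies; the job body is a toy placeholder:

```rust
// Bounded parallelism as used by the export tool: at most `parallelism` jobs hold a
// semaphore permit at any time, and failures are filtered out of the final count.
use std::sync::Arc;
use tokio::sync::Semaphore;

#[tokio::main]
async fn main() {
    let parallelism = 2;
    let semaphore = Arc::new(Semaphore::new(parallelism));
    let tasks = (0..8u32).map(|i| {
        let semaphore = semaphore.clone();
        async move {
            // Each job waits for a permit before doing its work.
            let _permit = semaphore.acquire().await.unwrap();
            // Pretend to export one database here.
            Ok::<u32, ()>(i)
        }
    });
    let success = futures::future::join_all(tasks)
        .await
        .into_iter()
        .filter(|r| r.is_ok())
        .count();
    println!("success {success}/8 jobs");
}
```
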

@@ -257,7 +257,7 @@ async fn create_query_engine(meta_addr: &str) -> Result<DatafusionQueryEngine> {
cached_meta_backend.clone(),
datanode_clients,
);
let plugins: Arc<Plugins> = Default::default();
let plugins: Plugins = Default::default();
let state = Arc::new(QueryEngineState::new(
catalog_list,
None,


@@ -20,7 +20,7 @@ use client::api::v1::meta::TableRouteValue;
use common_meta::ddl::utils::region_storage_path;
use common_meta::error as MetaError;
use common_meta::key::catalog_name::{CatalogNameKey, CatalogNameValue};
use common_meta::key::datanode_table::{DatanodeTableKey, DatanodeTableValue};
use common_meta::key::datanode_table::{DatanodeTableKey, DatanodeTableValue, RegionInfo};
use common_meta::key::schema_name::{SchemaNameKey, SchemaNameValue};
use common_meta::key::table_info::{TableInfoKey, TableInfoValue};
use common_meta::key::table_name::{TableNameKey, TableNameValue};
@@ -405,8 +405,11 @@ impl MigrateTableMetadata {
DatanodeTableValue::new(
table_id,
regions,
engine.to_string(),
region_storage_path.clone(),
RegionInfo {
engine: engine.to_string(),
region_storage_path: region_storage_path.clone(),
region_options: (&value.table_info.meta.options).into(),
},
),
)
})


@@ -31,6 +31,10 @@ pub struct Instance {
impl Instance {
pub async fn start(&mut self) -> Result<()> {
plugins::start_datanode_plugins(self.datanode.plugins())
.await
.context(StartDatanodeSnafu)?;
self.datanode.start().await.context(StartDatanodeSnafu)
}
@@ -92,6 +96,8 @@ struct StartCommand {
#[clap(long)]
data_home: Option<String>,
#[clap(long)]
wal_dir: Option<String>,
#[clap(long)]
http_addr: Option<String>,
#[clap(long)]
http_timeout: Option<u64>,
@@ -145,6 +151,10 @@ impl StartCommand {
opts.storage.data_home = data_home.clone();
}
if let Some(wal_dir) = &self.wal_dir {
opts.wal.dir = Some(wal_dir.clone());
}
if let Some(http_addr) = &self.http_addr {
opts.http.addr = http_addr.clone();
}
@@ -159,11 +169,15 @@ impl StartCommand {
Ok(Options::Datanode(Box::new(opts)))
}
async fn build(self, opts: DatanodeOptions) -> Result<Instance> {
async fn build(self, mut opts: DatanodeOptions) -> Result<Instance> {
let plugins = plugins::setup_datanode_plugins(&mut opts)
.await
.context(StartDatanodeSnafu)?;
logging::info!("Datanode start command: {:#?}", self);
logging::info!("Datanode options: {:#?}", opts);
let datanode = DatanodeBuilder::new(opts, None, Default::default())
let datanode = DatanodeBuilder::new(opts, None, plugins)
.build()
.await
.context(StartDatanodeSnafu)?;
@@ -180,6 +194,7 @@ mod tests {
use common_base::readable_size::ReadableSize;
use common_test_util::temp_dir::create_named_temp_file;
use datanode::config::{CompactionConfig, FileConfig, ObjectStoreConfig, RegionManifestConfig};
use servers::heartbeat_options::HeartbeatOptions;
use servers::Mode;
use super::*;
@@ -196,11 +211,14 @@ mod tests {
rpc_hostname = "127.0.0.1"
rpc_runtime_size = 8
[heartbeat]
interval = "300ms"
[meta_client]
metasrv_addrs = ["127.0.0.1:3002"]
timeout_millis = 3000
connect_timeout_millis = 5000
ddl_timeout_millis= 10000
timeout = "3s"
connect_timeout = "5s"
ddl_timeout = "10s"
tcp_nodelay = true
[wal]
@@ -243,25 +261,33 @@ mod tests {
assert_eq!("127.0.0.1:3001".to_string(), options.rpc_addr);
assert_eq!(Some(42), options.node_id);
assert_eq!("/other/wal", options.wal.dir.unwrap());
assert_eq!(Duration::from_secs(600), options.wal.purge_interval);
assert_eq!(1024 * 1024 * 1024, options.wal.file_size.0);
assert_eq!(1024 * 1024 * 1024 * 50, options.wal.purge_threshold.0);
assert!(!options.wal.sync_write);
let HeartbeatOptions {
interval: heart_beat_interval,
..
} = options.heartbeat;
assert_eq!(300, heart_beat_interval.as_millis());
let MetaClientOptions {
metasrv_addrs: metasrv_addr,
timeout_millis,
connect_timeout_millis,
timeout,
connect_timeout,
ddl_timeout,
tcp_nodelay,
ddl_timeout_millis,
..
} = options.meta_client.unwrap();
assert_eq!(vec!["127.0.0.1:3002".to_string()], metasrv_addr);
assert_eq!(5000, connect_timeout_millis);
assert_eq!(10000, ddl_timeout_millis);
assert_eq!(3000, timeout_millis);
assert_eq!(5000, connect_timeout.as_millis());
assert_eq!(10000, ddl_timeout.as_millis());
assert_eq!(3000, timeout.as_millis());
assert!(tcp_nodelay);
assert_eq!("/tmp/greptimedb/", options.storage.data_home);
assert!(matches!(
@@ -355,8 +381,8 @@ mod tests {
rpc_runtime_size = 8
[meta_client]
timeout_millis = 3000
connect_timeout_millis = 5000
timeout = "3s"
connect_timeout = "5s"
tcp_nodelay = true
[wal]
@@ -420,6 +446,7 @@ mod tests {
|| {
let command = StartCommand {
config_file: Some(file.path().to_str().unwrap().to_string()),
wal_dir: Some("/other/wal/dir".to_string()),
env_prefix: env_prefix.to_string(),
..Default::default()
};
@@ -447,6 +474,9 @@ mod tests {
// Should be read from config file, config file > env > default values.
assert_eq!(opts.storage.compaction.max_purge_tasks, 32);
// Should be read from cli, cli > config file > env > default values.
assert_eq!(opts.wal.dir.unwrap(), "/other/wal/dir");
// Should be default value.
assert_eq!(
opts.storage.manifest.checkpoint_margin,


@@ -37,6 +37,18 @@ pub enum Error {
source: common_meta::error::Error,
},
#[snafu(display("Failed to start procedure manager"))]
StartProcedureManager {
location: Location,
source: common_procedure::error::Error,
},
#[snafu(display("Failed to stop procedure manager"))]
StopProcedureManager {
location: Location,
source: common_procedure::error::Error,
},
#[snafu(display("Failed to start datanode"))]
StartDatanode {
location: Location,
@@ -85,12 +97,6 @@ pub enum Error {
#[snafu(display("Illegal config: {}", msg))]
IllegalConfig { msg: String, location: Location },
#[snafu(display("Illegal auth config"))]
IllegalAuthConfig {
location: Location,
source: auth::error::Error,
},
#[snafu(display("Unsupported selector type: {}", selector_type))]
UnsupportedSelectorType {
selector_type: String,
@@ -180,12 +186,45 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to connect server at {addr}"))]
ConnectServer {
addr: String,
source: client::error::Error,
location: Location,
},
#[snafu(display("Failed to serde json"))]
SerdeJson {
#[snafu(source)]
error: serde_json::error::Error,
location: Location,
},
#[snafu(display("Expect data from output, but got another thing"))]
NotDataFromOutput { location: Location },
#[snafu(display("Empty result from output"))]
EmptyResult { location: Location },
#[snafu(display("Failed to manipulate file"))]
FileIo {
location: Location,
#[snafu(source)]
error: std::io::Error,
},
#[snafu(display("Invalid database name: {}", database))]
InvalidDatabaseName {
location: Location,
database: String,
},
#[snafu(display("Failed to create directory {}", dir))]
CreateDir {
dir: String,
#[snafu(source)]
error: std::io::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -204,13 +243,18 @@ impl ErrorExt for Error {
Error::IterStream { source, .. } | Error::InitMetadata { source, .. } => {
source.status_code()
}
Error::ConnectServer { source, .. } => source.status_code(),
Error::MissingConfig { .. }
| Error::LoadLayeredConfig { .. }
| Error::IllegalConfig { .. }
| Error::InvalidReplCommand { .. }
| Error::IllegalAuthConfig { .. }
| Error::ConnectEtcd { .. } => StatusCode::InvalidArguments,
| Error::ConnectEtcd { .. }
| Error::NotDataFromOutput { .. }
| Error::CreateDir { .. }
| Error::EmptyResult { .. }
| Error::InvalidDatabaseName { .. } => StatusCode::InvalidArguments,
Error::StartProcedureManager { source, .. }
| Error::StopProcedureManager { source, .. } => source.status_code(),
Error::ReplCreation { .. } | Error::Readline { .. } => StatusCode::Internal,
Error::RequestDatabase { source, .. } => source.status_code(),
Error::CollectRecordBatches { source, .. }
@@ -222,7 +266,7 @@ impl ErrorExt for Error {
Error::SubstraitEncodeLogicalPlan { source, .. } => source.status_code(),
Error::StartCatalogManager { source, .. } => source.status_code(),
Error::SerdeJson { .. } => StatusCode::Unexpected,
Error::SerdeJson { .. } | Error::FileIo { .. } => StatusCode::Unexpected,
}
}


@@ -12,11 +12,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use std::time::Duration;
use auth::UserProviderRef;
use clap::Parser;
use common_base::Plugins;
use common_telemetry::logging;
use frontend::frontend::FrontendOptions;
use frontend::instance::{FrontendInstance, Instance as FeInstance};
@@ -25,7 +23,7 @@ use servers::tls::{TlsMode, TlsOption};
use servers::Mode;
use snafu::ResultExt;
use crate::error::{self, IllegalAuthConfigSnafu, Result};
use crate::error::{self, Result, StartFrontendSnafu};
use crate::options::{Options, TopLevelOptions};
pub struct Instance {
@@ -34,10 +32,11 @@ pub struct Instance {
impl Instance {
pub async fn start(&mut self) -> Result<()> {
self.frontend
.start()
plugins::start_frontend_plugins(self.frontend.plugins().clone())
.await
.context(error::StartFrontendSnafu)
.context(StartFrontendSnafu)?;
self.frontend.start().await.context(StartFrontendSnafu)
}
pub async fn stop(&self) -> Result<()> {
@@ -88,7 +87,9 @@ pub struct StartCommand {
#[clap(long)]
http_addr: Option<String>,
#[clap(long)]
grpc_addr: Option<String>,
http_timeout: Option<u64>,
#[clap(long)]
rpc_addr: Option<String>,
#[clap(long)]
mysql_addr: Option<String>,
#[clap(long)]
@@ -141,11 +142,15 @@ impl StartCommand {
opts.http.addr = addr.clone()
}
if let Some(http_timeout) = self.http_timeout {
opts.http.timeout = Duration::from_secs(http_timeout)
}
if let Some(disable_dashboard) = self.disable_dashboard {
opts.http.disable_dashboard = disable_dashboard;
}
if let Some(addr) = &self.grpc_addr {
if let Some(addr) = &self.rpc_addr {
opts.grpc.addr = addr.clone()
}
@@ -177,38 +182,32 @@ impl StartCommand {
opts.mode = Mode::Distributed;
}
opts.user_provider = self.user_provider.clone();
Ok(Options::Frontend(Box::new(opts)))
}
async fn build(self, opts: FrontendOptions) -> Result<Instance> {
async fn build(self, mut opts: FrontendOptions) -> Result<Instance> {
let plugins = plugins::setup_frontend_plugins(&mut opts)
.await
.context(StartFrontendSnafu)?;
logging::info!("Frontend start command: {:#?}", self);
logging::info!("Frontend options: {:#?}", opts);
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let mut instance = FeInstance::try_new_distributed(&opts, plugins.clone())
.await
.context(error::StartFrontendSnafu)?;
.context(StartFrontendSnafu)?;
instance
.build_servers(&opts)
.await
.context(error::StartFrontendSnafu)?;
.context(StartFrontendSnafu)?;
Ok(Instance { frontend: instance })
}
}
pub fn load_frontend_plugins(user_provider: &Option<String>) -> Result<Plugins> {
let plugins = Plugins::new();
if let Some(provider) = user_provider {
let provider = auth::user_provider_from_option(provider).context(IllegalAuthConfigSnafu)?;
plugins.insert::<UserProviderRef>(provider);
}
Ok(plugins)
}
#[cfg(test)]
mod tests {
use std::io::Write;
@@ -218,6 +217,7 @@ mod tests {
use common_base::readable_size::ReadableSize;
use common_test_util::temp_dir::create_named_temp_file;
use frontend::service_config::GrpcOptions;
use servers::http::HttpOptions;
use super::*;
use crate::options::ENV_VAR_SEP;
@@ -303,14 +303,17 @@ mod tests {
#[tokio::test]
async fn test_try_from_start_command_to_anymap() {
let command = StartCommand {
let mut fe_opts = FrontendOptions {
http: HttpOptions {
disable_dashboard: false,
..Default::default()
},
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
disable_dashboard: Some(false),
..Default::default()
};
let plugins = load_frontend_plugins(&command.user_provider);
let plugins = plugins.unwrap();
let plugins = plugins::setup_frontend_plugins(&mut fe_opts).await.unwrap();
let provider = plugins.get::<UserProviderRef>().unwrap();
let result = provider
.authenticate(
@@ -350,8 +353,8 @@ mod tests {
addr = "127.0.0.1:4000"
[meta_client]
timeout_millis = 3000
connect_timeout_millis = 5000
timeout = "3s"
connect_timeout = "5s"
tcp_nodelay = true
[mysql]


@@ -20,7 +20,7 @@ use meta_srv::bootstrap::MetaSrvInstance;
use meta_srv::metasrv::MetaSrvOptions;
use snafu::ResultExt;
use crate::error::{self, Result};
use crate::error::{self, Result, StartMetaServerSnafu};
use crate::options::{Options, TopLevelOptions};
pub struct Instance {
@@ -29,10 +29,10 @@ pub struct Instance {
impl Instance {
pub async fn start(&mut self) -> Result<()> {
self.instance
.start()
plugins::start_meta_srv_plugins(self.instance.plugins())
.await
.context(error::StartMetaServerSnafu)
.context(StartMetaServerSnafu)?;
self.instance.start().await.context(StartMetaServerSnafu)
}
pub async fn stop(&self) -> Result<()> {
@@ -158,12 +158,15 @@ impl StartCommand {
Ok(Options::Metasrv(Box::new(opts)))
}
async fn build(self, opts: MetaSrvOptions) -> Result<Instance> {
logging::info!("MetaSrv start command: {:#?}", self);
async fn build(self, mut opts: MetaSrvOptions) -> Result<Instance> {
let plugins = plugins::setup_meta_srv_plugins(&mut opts)
.await
.context(StartMetaServerSnafu)?;
logging::info!("MetaSrv start command: {:#?}", self);
logging::info!("MetaSrv options: {:#?}", opts);
let instance = MetaSrvInstance::new(opts)
let instance = MetaSrvInstance::new(opts, plugins)
.await
.context(error::BuildMetaServerSnafu)?;


@@ -30,7 +30,7 @@ pub const ENV_LIST_SEP: &str = ",";
pub struct MixOptions {
pub data_home: String,
pub procedure: ProcedureConfig,
pub kv_store: KvStoreConfig,
pub metadata_store: KvStoreConfig,
pub frontend: FrontendOptions,
pub datanode: DatanodeOptions,
pub logging: LoggingOptions,
@@ -144,8 +144,8 @@ mod tests {
mysql_runtime_size = 2
[meta_client]
timeout_millis = 3000
connect_timeout_millis = 5000
timeout = "3s"
connect_timeout = "5s"
tcp_nodelay = true
[wal]
@@ -263,6 +263,9 @@ mod tests {
]
);
// Should be the values from config file, not environment variables.
assert_eq!(opts.wal.dir.unwrap(), "/tmp/greptimedb/wal");
// Should be default values.
assert_eq!(opts.node_id, None);
},


@@ -13,12 +13,13 @@
// limitations under the License.
use std::sync::Arc;
use std::{fs, path};
use catalog::kvbackend::KvBackendCatalogManager;
use catalog::CatalogManagerRef;
use clap::Parser;
use common_base::Plugins;
use common_config::{kv_store_dir, KvStoreConfig, WalConfig};
use common_config::{metadata_store_dir, KvStoreConfig, WalConfig};
use common_meta::cache_invalidator::DummyKvCacheInvalidator;
use common_meta::kv_backend::KvBackendRef;
use common_procedure::ProcedureManagerRef;
@@ -41,10 +42,10 @@ use servers::Mode;
use snafu::ResultExt;
use crate::error::{
IllegalConfigSnafu, InitMetadataSnafu, Result, ShutdownDatanodeSnafu, ShutdownFrontendSnafu,
StartDatanodeSnafu, StartFrontendSnafu,
CreateDirSnafu, IllegalConfigSnafu, InitMetadataSnafu, Result, ShutdownDatanodeSnafu,
ShutdownFrontendSnafu, StartDatanodeSnafu, StartFrontendSnafu, StartProcedureManagerSnafu,
StopProcedureManagerSnafu,
};
use crate::frontend::load_frontend_plugins;
use crate::options::{MixOptions, Options, TopLevelOptions};
#[derive(Parser)]
@@ -96,9 +97,10 @@ pub struct StandaloneOptions {
pub prom_store: PromStoreOptions,
pub wal: WalConfig,
pub storage: StorageConfig,
pub kv_store: KvStoreConfig,
pub metadata_store: KvStoreConfig,
pub procedure: ProcedureConfig,
pub logging: LoggingOptions,
pub user_provider: Option<String>,
/// Options for different store engines.
pub region_engine: Vec<RegionEngineConfig>,
}
@@ -117,9 +119,10 @@ impl Default for StandaloneOptions {
prom_store: PromStoreOptions::default(),
wal: WalConfig::default(),
storage: StorageConfig::default(),
kv_store: KvStoreConfig::default(),
metadata_store: KvStoreConfig::default(),
procedure: ProcedureConfig::default(),
logging: LoggingOptions::default(),
user_provider: None,
region_engine: vec![
RegionEngineConfig::Mito(MitoConfig::default()),
RegionEngineConfig::File(FileEngineConfig::default()),
@@ -141,6 +144,7 @@ impl StandaloneOptions {
prom_store: self.prom_store,
meta_client: None,
logging: self.logging,
user_provider: self.user_provider,
..Default::default()
}
}
@@ -160,6 +164,7 @@ impl StandaloneOptions {
pub struct Instance {
datanode: Datanode,
frontend: FeInstance,
procedure_manager: ProcedureManagerRef,
}
impl Instance {
@@ -168,6 +173,11 @@ impl Instance {
self.datanode.start().await.context(StartDatanodeSnafu)?;
info!("Datanode instance started");
self.procedure_manager
.start()
.await
.context(StartProcedureManagerSnafu)?;
self.frontend.start().await.context(StartFrontendSnafu)?;
Ok(())
}
@@ -178,6 +188,11 @@ impl Instance {
.await
.context(ShutdownFrontendSnafu)?;
self.procedure_manager
.stop()
.await
.context(StopProcedureManagerSnafu)?;
self.datanode
.shutdown()
.await
@@ -278,7 +293,10 @@ impl StartCommand {
if self.influxdb_enable {
opts.influxdb.enable = self.influxdb_enable;
}
let kv_store = opts.kv_store.clone();
opts.user_provider = self.user_provider.clone();
let metadata_store = opts.metadata_store.clone();
let procedure = opts.procedure.clone();
let frontend = opts.clone().frontend_options();
let logging = opts.logging.clone();
@@ -286,7 +304,7 @@ impl StartCommand {
Ok(Options::Standalone(Box::new(MixOptions {
procedure,
kv_store,
metadata_store,
data_home: datanode.storage.data_home.to_string(),
frontend,
datanode,
@@ -298,8 +316,11 @@ impl StartCommand {
#[allow(unused_variables)]
#[allow(clippy::diverging_sub_expression)]
async fn build(self, opts: MixOptions) -> Result<Instance> {
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let fe_opts = opts.frontend;
let mut fe_opts = opts.frontend;
let fe_plugins = plugins::setup_frontend_plugins(&mut fe_opts)
.await
.context(StartFrontendSnafu)?;
let dn_opts = opts.datanode;
info!("Standalone start command: {:#?}", self);
@@ -308,14 +329,22 @@ impl StartCommand {
fe_opts, dn_opts
);
let kv_dir = kv_store_dir(&opts.data_home);
let (kv_store, procedure_manager) =
FeInstance::try_build_standalone_components(kv_dir, opts.kv_store, opts.procedure)
.await
.context(StartFrontendSnafu)?;
// Ensure the data_home directory exists.
fs::create_dir_all(path::Path::new(&opts.data_home)).context(CreateDirSnafu {
dir: &opts.data_home,
})?;
let metadata_dir = metadata_store_dir(&opts.data_home);
let (kv_store, procedure_manager) = FeInstance::try_build_standalone_components(
metadata_dir,
opts.metadata_store,
opts.procedure,
)
.await
.context(StartFrontendSnafu)?;
let datanode =
DatanodeBuilder::new(dn_opts.clone(), Some(kv_store.clone()), plugins.clone())
DatanodeBuilder::new(dn_opts.clone(), Some(kv_store.clone()), Default::default())
.build()
.await
.context(StartDatanodeSnafu)?;
@@ -335,9 +364,9 @@ impl StartCommand {
// TODO: build frontend instance like in distributed mode
let mut frontend = build_frontend(
plugins,
fe_plugins,
kv_store,
procedure_manager,
procedure_manager.clone(),
catalog_manager,
region_server,
)
@@ -348,13 +377,17 @@ impl StartCommand {
.await
.context(StartFrontendSnafu)?;
Ok(Instance { datanode, frontend })
Ok(Instance {
datanode,
frontend,
procedure_manager,
})
}
}
/// Build frontend instance in standalone mode
async fn build_frontend(
plugins: Arc<Plugins>,
plugins: Plugins,
kv_store: KvBackendRef,
procedure_manager: ProcedureManagerRef,
catalog_manager: CatalogManagerRef,
@@ -388,13 +421,13 @@ mod tests {
#[tokio::test]
async fn test_try_from_start_command_to_anymap() {
let command = StartCommand {
let mut fe_opts = FrontendOptions {
user_provider: Some("static_user_provider:cmd:test=test".to_string()),
..Default::default()
};
let plugins = load_frontend_plugins(&command.user_provider);
let plugins = plugins.unwrap();
let plugins = plugins::setup_frontend_plugins(&mut fe_opts).await.unwrap();
let provider = plugins.get::<UserProviderRef>().unwrap();
let result = provider
.authenticate(
@@ -476,6 +509,8 @@ mod tests {
assert_eq!(None, fe_opts.mysql.reject_no_database);
assert!(fe_opts.influxdb.enable);
assert_eq!("/tmp/greptimedb/test/wal", dn_opts.wal.dir.unwrap());
match &dn_opts.storage.store {
datanode::config::ObjectStoreConfig::S3(s3_config) => {
assert_eq!(
@@ -609,7 +644,7 @@ mod tests {
assert_eq!(options.influxdb, default_options.influxdb);
assert_eq!(options.prom_store, default_options.prom_store);
assert_eq!(options.wal, default_options.wal);
assert_eq!(options.kv_store, default_options.kv_store);
assert_eq!(options.metadata_store, default_options.metadata_store);
assert_eq!(options.procedure, default_options.procedure);
assert_eq!(options.logging, default_options.logging);
assert_eq!(options.region_engine, default_options.region_engine);


@@ -23,6 +23,8 @@ use std::sync::{Arc, Mutex, MutexGuard};
pub use bit_vec::BitVec;
/// [`Plugins`] is a wrapper around `Arc` contents.
/// It is cloneable and can be treated like an `Arc` struct.
#[derive(Default, Clone)]
pub struct Plugins {
inner: Arc<Mutex<anymap::Map<dyn Any + Send + Sync>>>,


@@ -14,10 +14,9 @@
// See the License for the specific language governing permissions and
// limitations under the License.
// This file is copied from https://github.com/tikv/raft-engine/blob/8dd2a39f359ff16f5295f35343f626e0c10132fa/src/util.rs without any modification.
// This file is copied from https://github.com/tikv/raft-engine/blob/8dd2a39f359ff16f5295f35343f626e0c10132fa/src/util.rs
use std::fmt;
use std::fmt::{Display, Write};
use std::fmt::{self, Debug, Display, Write};
use std::ops::{Div, Mul};
use std::str::FromStr;
@@ -34,7 +33,7 @@ pub const GIB: u64 = MIB * BINARY_DATA_MAGNITUDE;
pub const TIB: u64 = GIB * BINARY_DATA_MAGNITUDE;
pub const PIB: u64 = TIB * BINARY_DATA_MAGNITUDE;
#[derive(Clone, Debug, Copy, PartialEq, Eq, PartialOrd)]
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd)]
pub struct ReadableSize(pub u64);
impl ReadableSize {
@@ -155,6 +154,12 @@ impl FromStr for ReadableSize {
}
}
impl Debug for ReadableSize {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(f, "{}", self)
}
}
impl Display for ReadableSize {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
if self.0 >= PIB {

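The hunk above drops `Debug` from the derive list and adds a manual impl that forwards to `Display`, so `{:?}` on config structs prints the human-readable size rather than the raw byte count. A toy sketch of the forwarding pattern; the `Size` type and its `MiB` formatting are illustrative, not the real `ReadableSize` output:

```rust
// Debug-forwards-to-Display: `{:?}` and `{}` produce the same human-readable string.
use std::fmt::{self, Debug, Display};

struct Size(u64);

impl Display for Size {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "{}MiB", self.0 / (1024 * 1024))
    }
}

impl Debug for Size {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        // Delegate to Display so config dumps stay readable.
        write!(f, "{}", self)
    }
}

fn main() {
    let s = Size(256 * 1024 * 1024);
    assert_eq!(format!("{s}"), "256MiB");
    assert_eq!(format!("{s:?}"), "256MiB");
}
```
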

@@ -20,6 +20,8 @@ use serde::{Deserialize, Serialize};
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]
#[serde(default)]
pub struct WalConfig {
// wal directory
pub dir: Option<String>,
// wal file size in bytes
pub file_size: ReadableSize,
// wal purge threshold in bytes
@@ -36,7 +38,8 @@ pub struct WalConfig {
impl Default for WalConfig {
fn default() -> Self {
Self {
file_size: ReadableSize::mb(256), // log file size 256MB
dir: None,
file_size: ReadableSize::mb(256), // log file size 256MB
purge_threshold: ReadableSize::gb(4), // purge threshold 4GB
purge_interval: Duration::from_secs(600),
read_batch_size: 128,
@@ -45,8 +48,8 @@ impl Default for WalConfig {
}
}
pub fn kv_store_dir(store_dir: &str) -> String {
format!("{store_dir}/kv")
pub fn metadata_store_dir(store_dir: &str) -> String {
format!("{store_dir}/metadata")
}
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]


@@ -58,9 +58,15 @@ impl Function for RangeFunction {
"range_fn"
}
// range_fn will never be used; return_type could be an arbitrary value and is not important
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::float64_datatype())
// The first argument to range_fn is the expression to be evaluated
fn return_type(&self, input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
input_types
.first()
.cloned()
.ok_or(DataFusionError::Internal(
"No expr found in range_fn".into(),
))
.context(GeneralDataFusionSnafu)
}
/// `range_fn` will never be used. As long as a legal signature is returned, the specific content of the signature does not matter.


@@ -15,6 +15,8 @@
use std::env;
use std::io::ErrorKind;
use std::path::{Path, PathBuf};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;
use std::time::Duration;
use common_runtime::error::{Error, Result};
@@ -24,7 +26,7 @@ use reqwest::{Client, Response};
use serde::{Deserialize, Serialize};
/// The URL to report telemetry data.
pub const TELEMETRY_URL: &str = "https://api.greptime.cloud/db/otel/statistics";
pub const TELEMETRY_URL: &str = "https://telemetry.greptimestats.com/db/otel/statistics";
/// The local installation uuid cache file
const UUID_FILE_NAME: &str = ".greptimedb-telemetry-uuid";
@@ -36,13 +38,26 @@ const GREPTIMEDB_TELEMETRY_CLIENT_CONNECT_TIMEOUT: Duration = Duration::from_sec
const GREPTIMEDB_TELEMETRY_CLIENT_REQUEST_TIMEOUT: Duration = Duration::from_secs(10);
pub enum GreptimeDBTelemetryTask {
Enable(RepeatedTask<Error>),
Enable((RepeatedTask<Error>, Arc<AtomicBool>)),
Disable,
}
impl GreptimeDBTelemetryTask {
pub fn enable(interval: Duration, task_fn: BoxedTaskFunction<Error>) -> Self {
GreptimeDBTelemetryTask::Enable(RepeatedTask::new(interval, task_fn))
pub fn should_report(&self, value: bool) {
match self {
GreptimeDBTelemetryTask::Enable((_, should_report)) => {
should_report.store(value, Ordering::Relaxed);
}
GreptimeDBTelemetryTask::Disable => {}
}
}
pub fn enable(
interval: Duration,
task_fn: BoxedTaskFunction<Error>,
should_report: Arc<AtomicBool>,
) -> Self {
GreptimeDBTelemetryTask::Enable((RepeatedTask::new(interval, task_fn), should_report))
}
pub fn disable() -> Self {
@@ -51,7 +66,7 @@ impl GreptimeDBTelemetryTask {
pub fn start(&self) -> Result<()> {
match self {
GreptimeDBTelemetryTask::Enable(task) => {
GreptimeDBTelemetryTask::Enable((task, _)) => {
print_anonymous_usage_data_disclaimer();
task.start(common_runtime::bg_runtime())
}
@@ -61,7 +76,7 @@ impl GreptimeDBTelemetryTask {
pub async fn stop(&self) -> Result<()> {
match self {
GreptimeDBTelemetryTask::Enable(task) => task.stop().await,
GreptimeDBTelemetryTask::Enable((task, _)) => task.stop().await,
GreptimeDBTelemetryTask::Disable => Ok(()),
}
}
@@ -191,6 +206,7 @@ pub struct GreptimeDBTelemetry {
client: Option<Client>,
working_home: Option<String>,
telemetry_url: &'static str,
should_report: Arc<AtomicBool>,
}
#[async_trait::async_trait]
@@ -200,13 +216,19 @@ impl TaskFunction<Error> for GreptimeDBTelemetry {
}
async fn call(&mut self) -> Result<()> {
self.report_telemetry_info().await;
if self.should_report.load(Ordering::Relaxed) {
self.report_telemetry_info().await;
}
Ok(())
}
}
impl GreptimeDBTelemetry {
pub fn new(working_home: Option<String>, statistics: Box<dyn Collector + Send + Sync>) -> Self {
pub fn new(
working_home: Option<String>,
statistics: Box<dyn Collector + Send + Sync>,
should_report: Arc<AtomicBool>,
) -> Self {
let client = Client::builder()
.connect_timeout(GREPTIMEDB_TELEMETRY_CLIENT_CONNECT_TIMEOUT)
.timeout(GREPTIMEDB_TELEMETRY_CLIENT_REQUEST_TIMEOUT)
@@ -216,6 +238,7 @@ impl GreptimeDBTelemetry {
statistics,
client: client.ok(),
telemetry_url: TELEMETRY_URL,
should_report,
}
}
@@ -250,7 +273,8 @@ impl GreptimeDBTelemetry {
mod tests {
use std::convert::Infallible;
use std::env;
use std::sync::atomic::AtomicUsize;
use std::sync::atomic::{AtomicBool, AtomicUsize};
use std::sync::Arc;
use std::time::Duration;
use common_test_util::ports;
@@ -370,7 +394,11 @@ mod tests {
let working_home = working_home_temp.path().to_str().unwrap().to_string();
let test_statistic = Box::new(TestStatistic);
let mut test_report = GreptimeDBTelemetry::new(Some(working_home.clone()), test_statistic);
let mut test_report = GreptimeDBTelemetry::new(
Some(working_home.clone()),
test_statistic,
Arc::new(AtomicBool::new(true)),
);
let url = Box::leak(format!("{}:{}", "http://localhost", port).into_boxed_str());
test_report.telemetry_url = url;
let response = test_report.report_telemetry_info().await.unwrap();
@@ -384,7 +412,11 @@ mod tests {
assert_eq!(1, body.nodes.unwrap());
let failed_statistic = Box::new(FailedStatistic);
let mut failed_report = GreptimeDBTelemetry::new(Some(working_home), failed_statistic);
let mut failed_report = GreptimeDBTelemetry::new(
Some(working_home),
failed_statistic,
Arc::new(AtomicBool::new(true)),
);
failed_report.telemetry_url = url;
let response = failed_report.report_telemetry_info().await;
assert!(response.is_none());
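
The telemetry changes above thread an `Arc<AtomicBool>` through `GreptimeDBTelemetryTask` and `GreptimeDBTelemetry`, and `call` only reports while the flag is set, so whoever owns the flag can switch reporting on and off at runtime. A minimal std-only sketch of that gate; `Reporter::call` stands in for the real `TaskFunction::call`:

```rust
// A shared AtomicBool gate: reporting happens only while the flag is true.
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

struct Reporter {
    should_report: Arc<AtomicBool>,
}

impl Reporter {
    fn call(&self) -> Option<&'static str> {
        if self.should_report.load(Ordering::Relaxed) {
            Some("reported")
        } else {
            None
        }
    }
}

fn main() {
    let flag = Arc::new(AtomicBool::new(false));
    let reporter = Reporter { should_report: flag.clone() };
    assert_eq!(reporter.call(), None);   // reporting disabled
    flag.store(true, Ordering::Relaxed); // flip the shared flag
    assert_eq!(reporter.call(), Some("reported"));
}
```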


@@ -16,6 +16,7 @@ use std::sync::atomic::{AtomicBool, AtomicU64, AtomicUsize, Ordering};
use std::sync::Arc;
use std::time::Duration;
use common_base::readable_size::ReadableSize;
use common_telemetry::info;
use dashmap::mapref::entry::Entry;
use dashmap::DashMap;
@@ -31,8 +32,8 @@ use crate::error::{CreateChannelSnafu, InvalidConfigFilePathSnafu, InvalidTlsCon
const RECYCLE_CHANNEL_INTERVAL_SECS: u64 = 60;
pub const DEFAULT_GRPC_REQUEST_TIMEOUT_SECS: u64 = 10;
pub const DEFAULT_GRPC_CONNECT_TIMEOUT_SECS: u64 = 1;
pub const DEFAULT_MAX_GRPC_RECV_MESSAGE_SIZE: usize = 512 * 1024 * 1024;
pub const DEFAULT_MAX_GRPC_SEND_MESSAGE_SIZE: usize = 512 * 1024 * 1024;
pub const DEFAULT_MAX_GRPC_RECV_MESSAGE_SIZE: ReadableSize = ReadableSize::mb(512);
pub const DEFAULT_MAX_GRPC_SEND_MESSAGE_SIZE: ReadableSize = ReadableSize::mb(512);
lazy_static! {
static ref ID: AtomicU64 = AtomicU64::new(0);
@@ -250,9 +251,9 @@ pub struct ChannelConfig {
pub tcp_nodelay: bool,
pub client_tls: Option<ClientTlsOption>,
// Max gRPC receiving(decoding) message size
pub max_recv_message_size: usize,
pub max_recv_message_size: ReadableSize,
// Max gRPC sending(encoding) message size
pub max_send_message_size: usize,
pub max_send_message_size: ReadableSize,
}
impl Default for ChannelConfig {


@@ -12,6 +12,8 @@ api = { workspace = true }
arrow-flight.workspace = true
async-stream.workspace = true
async-trait.workspace = true
base64 = "0.21"
bytes = "1.4"
common-catalog = { workspace = true }
common-error = { workspace = true }
common-grpc-expr.workspace = true


@@ -17,7 +17,6 @@ use std::sync::Arc;
use table::metadata::TableId;
use crate::error::Result;
use crate::key::schema_name::SchemaNameKey;
use crate::key::table_info::TableInfoKey;
use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteKey;
@@ -68,36 +67,25 @@ impl CacheInvalidator for DummyCacheInvalidator {
}
}
#[derive(Clone)]
pub struct TableMetadataCacheInvalidator(KvCacheInvalidatorRef);
impl TableMetadataCacheInvalidator {
pub fn new(kv_cache_invalidator: KvCacheInvalidatorRef) -> Self {
Self(kv_cache_invalidator)
}
pub async fn invalidate_schema(&self, catalog: &str, schema: &str) {
let key = SchemaNameKey::new(catalog, schema).as_raw_key();
self.0.invalidate_key(&key).await;
}
}
#[async_trait::async_trait]
impl CacheInvalidator for TableMetadataCacheInvalidator {
impl<T> CacheInvalidator for T
where
T: KvCacheInvalidator,
{
async fn invalidate_table_name(&self, _ctx: &Context, table_name: TableName) -> Result<()> {
let key: TableNameKey = (&table_name).into();
self.0.invalidate_key(&key.as_raw_key()).await;
self.invalidate_key(&key.as_raw_key()).await;
Ok(())
}
async fn invalidate_table_id(&self, _ctx: &Context, table_id: TableId) -> Result<()> {
let key = TableInfoKey::new(table_id);
self.0.invalidate_key(&key.as_raw_key()).await;
self.invalidate_key(&key.as_raw_key()).await;
let key = &TableRouteKey { table_id };
self.0.invalidate_key(&key.as_raw_key()).await;
self.invalidate_key(&key.as_raw_key()).await;
Ok(())
}
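
The refactor above removes the `TableMetadataCacheInvalidator` wrapper in favour of a blanket `impl<T> CacheInvalidator for T where T: KvCacheInvalidator`, so any KV-level invalidator gets the table-level trait for free. A simplified synchronous sketch of the pattern; the trait methods and key formats below are toy stand-ins, not the real ones:

```rust
// Blanket impl: every type that can invalidate raw keys automatically implements the
// table-level trait, with no per-type wrapper struct.
trait KvCacheInvalidator {
    fn invalidate_key(&self, key: &[u8]);
}

trait CacheInvalidator {
    fn invalidate_table_id(&self, table_id: u32);
}

impl<T: KvCacheInvalidator> CacheInvalidator for T {
    fn invalidate_table_id(&self, table_id: u32) {
        // Invalidate both keys derived from the table id (toy key layout).
        self.invalidate_key(format!("table_info/{table_id}").as_bytes());
        self.invalidate_key(format!("table_route/{table_id}").as_bytes());
    }
}

struct NoopInvalidator;

impl KvCacheInvalidator for NoopInvalidator {
    fn invalidate_key(&self, _key: &[u8]) {}
}

fn main() {
    // NoopInvalidator gets CacheInvalidator "for free" via the blanket impl.
    NoopInvalidator.invalidate_table_id(1024);
}
```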


@@ -45,6 +45,7 @@ use crate::error::{
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteValue;
use crate::key::DeserializedValueWithBytes;
use crate::metrics;
use crate::rpc::ddl::AlterTableTask;
use crate::rpc::router::{find_leader_regions, find_leaders};
@@ -63,7 +64,7 @@ impl AlterTableProcedure {
pub fn new(
cluster_id: u64,
task: AlterTableTask,
table_info_value: TableInfoValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
context: DdlContext,
) -> Result<Self> {
let alter_kind = task
@@ -191,7 +192,8 @@ impl AlterTableProcedure {
.await?
.with_context(|| TableRouteNotFoundSnafu {
table_name: table_ref.to_string(),
})?;
})?
.into_inner();
let leaders = find_leaders(&region_routes);
let mut alter_region_tasks = Vec::with_capacity(leaders.len());
@@ -413,7 +415,7 @@ pub struct AlterTableData {
state: AlterTableState,
task: AlterTableTask,
/// Table info value before alteration.
table_info_value: TableInfoValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
cluster_id: u64,
/// Next column id of the table if the task adds columns to the table.
next_column_id: Option<ColumnId>,
@@ -422,7 +424,7 @@ pub struct AlterTableData {
impl AlterTableData {
pub fn new(
task: AlterTableTask,
table_info_value: TableInfoValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
cluster_id: u64,
next_column_id: Option<ColumnId>,
) -> Self {


@@ -39,6 +39,7 @@ use crate::error::{self, Result};
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteValue;
use crate::key::DeserializedValueWithBytes;
use crate::metrics;
use crate::rpc::ddl::DropTableTask;
use crate::rpc::router::{find_leader_regions, find_leaders, RegionRoute};
@@ -55,8 +56,8 @@ impl DropTableProcedure {
pub fn new(
cluster_id: u64,
task: DropTableTask,
table_route_value: TableRouteValue,
table_info_value: TableInfoValue,
table_route_value: DeserializedValueWithBytes<TableRouteValue>,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
context: DdlContext,
) -> Self {
Self {
@@ -231,16 +232,16 @@ pub struct DropTableData {
pub state: DropTableState,
pub cluster_id: u64,
pub task: DropTableTask,
pub table_route_value: TableRouteValue,
pub table_info_value: TableInfoValue,
pub table_route_value: DeserializedValueWithBytes<TableRouteValue>,
pub table_info_value: DeserializedValueWithBytes<TableInfoValue>,
}
impl DropTableData {
pub fn new(
cluster_id: u64,
task: DropTableTask,
table_route_value: TableRouteValue,
table_info_value: TableInfoValue,
table_route_value: DeserializedValueWithBytes<TableRouteValue>,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
) -> Self {
Self {
state: DropTableState::Prepare,


@@ -35,6 +35,7 @@ use crate::ddl::DdlContext;
use crate::error::{Result, TableNotFoundSnafu};
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::DeserializedValueWithBytes;
use crate::metrics;
use crate::rpc::ddl::TruncateTableTask;
use crate::rpc::router::{find_leader_regions, find_leaders, RegionRoute};
@@ -90,7 +91,7 @@ impl TruncateTableProcedure {
pub(crate) fn new(
cluster_id: u64,
task: TruncateTableTask,
table_info_value: TableInfoValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
region_routes: Vec<RegionRoute>,
context: DdlContext,
) -> Self {
@@ -188,7 +189,7 @@ pub struct TruncateTableData {
state: TruncateTableState,
cluster_id: u64,
task: TruncateTableTask,
table_info_value: TableInfoValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
region_routes: Vec<RegionRoute>,
}
@@ -196,7 +197,7 @@ impl TruncateTableData {
pub fn new(
cluster_id: u64,
task: TruncateTableTask,
table_info_value: TableInfoValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
region_routes: Vec<RegionRoute>,
) -> Self {
Self {


@@ -35,7 +35,7 @@ use crate::error::{
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteValue;
use crate::key::TableMetadataManagerRef;
use crate::key::{DeserializedValueWithBytes, TableMetadataManagerRef};
use crate::rpc::ddl::DdlTask::{AlterTable, CreateTable, DropTable, TruncateTable};
use crate::rpc::ddl::{
AlterTableTask, CreateTableTask, DropTableTask, SubmitDdlTaskRequest, SubmitDdlTaskResponse,
@@ -144,7 +144,7 @@ impl DdlManager {
&self,
cluster_id: u64,
alter_table_task: AlterTableTask,
table_info_value: TableInfoValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
) -> Result<ProcedureId> {
let context = self.create_context();
@@ -176,8 +176,8 @@ impl DdlManager {
&self,
cluster_id: u64,
drop_table_task: DropTableTask,
table_info_value: TableInfoValue,
table_route_value: TableRouteValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
table_route_value: DeserializedValueWithBytes<TableRouteValue>,
) -> Result<ProcedureId> {
let context = self.create_context();
@@ -198,7 +198,7 @@ impl DdlManager {
&self,
cluster_id: u64,
truncate_table_task: TruncateTableTask,
table_info_value: TableInfoValue,
table_info_value: DeserializedValueWithBytes<TableInfoValue>,
region_routes: Vec<RegionRoute>,
) -> Result<ProcedureId> {
let context = self.create_context();
@@ -252,7 +252,7 @@ async fn handle_truncate_table_task(
table_name: table_ref.to_string(),
})?;
let table_route = table_route_value.region_routes;
let table_route = table_route_value.into_inner().region_routes;
let id = ddl_manager
.submit_truncate_table_task(


@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::fmt::{Display, Formatter};
use serde::{Deserialize, Serialize};
@@ -73,13 +74,15 @@ impl Display for OpenRegion {
pub struct OpenRegion {
pub region_ident: RegionIdent,
pub region_storage_path: String,
pub options: HashMap<String, String>,
}
impl OpenRegion {
pub fn new(region_ident: RegionIdent, path: &str) -> Self {
pub fn new(region_ident: RegionIdent, path: &str, options: HashMap<String, String>) -> Self {
Self {
region_ident,
region_storage_path: path.to_string(),
options,
}
}
}
@@ -127,12 +130,13 @@ mod tests {
engine: "mito2".to_string(),
},
"test/foo",
HashMap::new(),
));
let serialized = serde_json::to_string(&open_region).unwrap();
assert_eq!(
r#"{"OpenRegion":{"region_ident":{"cluster_id":1,"datanode_id":2,"table_id":1024,"region_number":1,"engine":"mito2"},"region_storage_path":"test/foo"}}"#,
r#"{"OpenRegion":{"region_ident":{"cluster_id":1,"datanode_id":2,"table_id":1024,"region_number":1,"engine":"mito2"},"region_storage_path":"test/foo","options":{}}}"#,
serialized
);


@@ -55,13 +55,18 @@ pub mod table_region;
#[allow(deprecated)]
pub mod table_route;
use std::collections::BTreeMap;
use std::collections::{BTreeMap, HashMap};
use std::fmt::Debug;
use std::ops::Deref;
use std::sync::Arc;
use bytes::Bytes;
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use datanode_table::{DatanodeTableKey, DatanodeTableManager, DatanodeTableValue};
use lazy_static::lazy_static;
use regex::Regex;
use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};
use snafu::{ensure, OptionExt, ResultExt};
use store_api::storage::RegionNumber;
use table::metadata::{RawTableInfo, TableId};
@@ -69,6 +74,7 @@ use table_info::{TableInfoKey, TableInfoManager, TableInfoValue};
use table_name::{TableNameKey, TableNameManager, TableNameValue};
use self::catalog_name::{CatalogManager, CatalogNameKey, CatalogNameValue};
use self::datanode_table::RegionInfo;
use self::schema_name::{SchemaManager, SchemaNameKey, SchemaNameValue};
use self::table_route::{TableRouteManager, TableRouteValue};
use crate::ddl::utils::region_storage_path;
@@ -154,6 +160,115 @@ macro_rules! ensure_values {
};
}
/// A struct containing a deserialized value (`inner`) and the original bytes it was decoded from.
///
/// - Serialize behaviors:
///
/// The `inner` field will be ignored.
///
/// - Deserialize behaviors:
///
/// The `inner` field will be deserialized from the `bytes` field.
pub struct DeserializedValueWithBytes<T: DeserializeOwned + Serialize> {
// The original bytes of the inner value.
bytes: Bytes,
// The value deserialized from the original bytes.
inner: T,
}
impl<T: DeserializeOwned + Serialize> Deref for DeserializedValueWithBytes<T> {
type Target = T;
fn deref(&self) -> &Self::Target {
&self.inner
}
}
impl<T: DeserializeOwned + Serialize + Debug> Debug for DeserializedValueWithBytes<T> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
write!(
f,
"DeserializedValueWithBytes(inner: {:?}, bytes: {:?})",
self.inner, self.bytes
)
}
}
impl<T: DeserializeOwned + Serialize> Serialize for DeserializedValueWithBytes<T> {
/// - Serialize behaviors:
///
/// The `inner` field will be ignored.
fn serialize<S>(&self, serializer: S) -> std::result::Result<S::Ok, S::Error>
where
S: serde::Serializer,
{
// Safety: The original bytes are always JSON encoded.
// It's more efficient than `serialize_bytes`.
serializer.serialize_str(&String::from_utf8_lossy(&self.bytes))
}
}
impl<'de, T: DeserializeOwned + Serialize> Deserialize<'de> for DeserializedValueWithBytes<T> {
/// - Deserialize behaviors:
///
/// The `inner` field will be deserialized from the `bytes` field.
fn deserialize<D>(deserializer: D) -> std::result::Result<Self, D::Error>
where
D: serde::Deserializer<'de>,
{
let buf = String::deserialize(deserializer)?;
let bytes = Bytes::from(buf);
let value = DeserializedValueWithBytes::from_inner_bytes(bytes)
.map_err(|err| serde::de::Error::custom(err.to_string()))?;
Ok(value)
}
}
impl<T: Serialize + DeserializeOwned + Clone> Clone for DeserializedValueWithBytes<T> {
fn clone(&self) -> Self {
Self {
bytes: self.bytes.clone(),
inner: self.inner.clone(),
}
}
}
impl<T: Serialize + DeserializeOwned> DeserializedValueWithBytes<T> {
/// Returns a struct containing a deserialized value and the original `bytes`.
/// It accepts the original bytes of the inner value.
pub fn from_inner_bytes(bytes: Bytes) -> Result<Self> {
let inner = serde_json::from_slice(&bytes).context(error::SerdeJsonSnafu)?;
Ok(Self { bytes, inner })
}
/// Returns a struct containing a deserialized value and the original `bytes`.
/// It accepts a byte slice holding the inner value's original encoding.
pub fn from_inner_slice(bytes: &[u8]) -> Result<Self> {
Self::from_inner_bytes(Bytes::copy_from_slice(bytes))
}
pub fn into_inner(self) -> T {
self.inner
}
/// Returns a copy of the original `bytes`.
pub fn into_bytes(&self) -> Vec<u8> {
self.bytes.to_vec()
}
/// Note: used for test purposes only.
pub fn from_inner(inner: T) -> Self {
let bytes = serde_json::to_vec(&inner).unwrap();
Self {
bytes: Bytes::from(bytes),
inner,
}
}
}
impl TableMetadataManager {
pub fn new(kv_backend: KvBackendRef) -> Self {
TableMetadataManager {
@@ -211,7 +326,10 @@ impl TableMetadataManager {
pub async fn get_full_table_info(
&self,
table_id: TableId,
) -> Result<(Option<TableInfoValue>, Option<TableRouteValue>)> {
) -> Result<(
Option<DeserializedValueWithBytes<TableInfoValue>>,
Option<DeserializedValueWithBytes<TableRouteValue>>,
)> {
let (get_table_route_txn, table_route_decoder) =
self.table_route_manager.build_get_txn(table_id);
@@ -256,6 +374,7 @@ impl TableMetadataManager {
.table_name_manager()
.build_create_txn(&table_name, table_id)?;
let region_options = (&table_info.meta.options).into();
// Creates table info.
let table_info_value = TableInfoValue::new(table_info);
let (create_table_info_txn, on_create_table_info_failure) = self
@@ -268,6 +387,7 @@ impl TableMetadataManager {
table_id,
&engine,
&region_storage_path,
region_options,
distribution,
)?;
@@ -288,15 +408,17 @@ impl TableMetadataManager {
// Checks whether metadata was already created.
if !r.succeeded {
let remote_table_info =
on_create_table_info_failure(&r.responses)?.context(error::UnexpectedSnafu {
let remote_table_info = on_create_table_info_failure(&r.responses)?
.context(error::UnexpectedSnafu {
err_msg: "Reads the empty table info during the create table metadata",
})?;
})?
.into_inner();
let remote_table_route =
on_create_table_route_failure(&r.responses)?.context(error::UnexpectedSnafu {
let remote_table_route = on_create_table_route_failure(&r.responses)?
.context(error::UnexpectedSnafu {
err_msg: "Reads the empty table route during the create table metadata",
})?;
})?
.into_inner();
let op_name = "the creating table metadata";
ensure_values!(remote_table_info, table_info_value, op_name);
@@ -310,8 +432,8 @@ impl TableMetadataManager {
/// The caller MUST ensure it has the exclusive access to `TableNameKey`.
pub async fn delete_table_metadata(
&self,
table_info_value: &TableInfoValue,
table_route_value: &TableRouteValue,
table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
table_route_value: &DeserializedValueWithBytes<TableRouteValue>,
) -> Result<()> {
let table_info = &table_info_value.table_info;
let table_id = table_info.ident.table_id;
@@ -361,7 +483,7 @@ impl TableMetadataManager {
/// and the new `TableNameKey` MUST be empty.
pub async fn rename_table(
&self,
current_table_info_value: TableInfoValue,
current_table_info_value: DeserializedValueWithBytes<TableInfoValue>,
new_table_name: String,
) -> Result<()> {
let current_table_info = &current_table_info_value.table_info;
@@ -386,9 +508,11 @@ impl TableMetadataManager {
table_id,
)?;
let new_table_info_value = current_table_info_value.with_update(move |table_info| {
table_info.name = new_table_name;
});
let new_table_info_value = current_table_info_value
.inner
.with_update(move |table_info| {
table_info.name = new_table_name;
});
// Updates table info.
let (update_table_info_txn, on_update_table_info_failure) = self
@@ -401,10 +525,11 @@ impl TableMetadataManager {
// Checks whether metadata was already updated.
if !r.succeeded {
let remote_table_info =
on_update_table_info_failure(&r.responses)?.context(error::UnexpectedSnafu {
let remote_table_info = on_update_table_info_failure(&r.responses)?
.context(error::UnexpectedSnafu {
err_msg: "Reads the empty table info during the rename table metadata",
})?;
})?
.into_inner();
let op_name = "the renaming table metadata";
ensure_values!(remote_table_info, new_table_info_value, op_name);
@@ -416,7 +541,7 @@ impl TableMetadataManager {
/// Updates table info and returns an error if different metadata exists.
pub async fn update_table_info(
&self,
current_table_info_value: TableInfoValue,
current_table_info_value: DeserializedValueWithBytes<TableInfoValue>,
new_table_info: RawTableInfo,
) -> Result<()> {
let table_id = current_table_info_value.table_info.ident.table_id;
@@ -432,10 +557,11 @@ impl TableMetadataManager {
// Checks whether metadata was already updated.
if !r.succeeded {
let remote_table_info =
on_update_table_info_failure(&r.responses)?.context(error::UnexpectedSnafu {
let remote_table_info = on_update_table_info_failure(&r.responses)?
.context(error::UnexpectedSnafu {
err_msg: "Reads the empty table info during the updating table info",
})?;
})?
.into_inner();
let op_name = "the updating table info";
ensure_values!(remote_table_info, new_table_info_value, op_name);
@@ -446,10 +572,10 @@ impl TableMetadataManager {
pub async fn update_table_route(
&self,
table_id: TableId,
engine: &str,
region_storage_path: &str,
current_table_route_value: TableRouteValue,
region_info: RegionInfo,
current_table_route_value: DeserializedValueWithBytes<TableRouteValue>,
new_region_routes: Vec<RegionRoute>,
new_region_options: &HashMap<String, String>,
) -> Result<()> {
// Updates the datanode table key value pairs.
let current_region_distribution =
@@ -458,10 +584,10 @@ impl TableMetadataManager {
let update_datanode_table_txn = self.datanode_table_manager().build_update_txn(
table_id,
engine,
region_storage_path,
region_info,
current_region_distribution,
new_region_distribution,
new_region_options,
)?;
// Updates the table_route.
@@ -477,10 +603,11 @@ impl TableMetadataManager {
// Checks whether metadata was already updated.
if !r.succeeded {
let remote_table_route =
on_update_table_route_failure(&r.responses)?.context(error::UnexpectedSnafu {
let remote_table_route = on_update_table_route_failure(&r.responses)?
.context(error::UnexpectedSnafu {
err_msg: "Reads the empty table route during the updating table route",
})?;
})?
.into_inner();
let op_name = "the updating table route";
ensure_values!(remote_table_route, new_table_route_value, op_name);
@@ -553,9 +680,10 @@ impl_optional_meta_value! {
#[cfg(test)]
mod tests {
use std::collections::BTreeMap;
use std::collections::{BTreeMap, HashMap};
use std::sync::Arc;
use bytes::Bytes;
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnSchema, SchemaBuilder};
use futures::TryStreamExt;
@@ -563,14 +691,43 @@ mod tests {
use super::datanode_table::DatanodeTableKey;
use crate::ddl::utils::region_storage_path;
use crate::key::datanode_table::RegionInfo;
use crate::key::table_info::TableInfoValue;
use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteValue;
use crate::key::{to_removed_key, TableMetadataManager};
use crate::key::{to_removed_key, DeserializedValueWithBytes, TableMetadataManager};
use crate::kv_backend::memory::MemoryKvBackend;
use crate::peer::Peer;
use crate::rpc::router::{region_distribution, Region, RegionRoute};
#[test]
fn test_deserialized_value_with_bytes() {
let region_route = new_test_region_route();
let region_routes = vec![region_route.clone()];
let expected_region_routes =
TableRouteValue::new(vec![region_route.clone(), region_route.clone()]);
let expected = serde_json::to_vec(&expected_region_routes).unwrap();
// Serialize behaviors:
// The inner field will be ignored.
let value = DeserializedValueWithBytes {
// ignored
inner: TableRouteValue::new(region_routes.clone()),
bytes: Bytes::from(expected.clone()),
};
let encoded = serde_json::to_vec(&value).unwrap();
// Deserialize behaviors:
// The inner field will be deserialized from the bytes field.
let decoded: DeserializedValueWithBytes<TableRouteValue> =
serde_json::from_slice(&encoded).unwrap();
assert_eq!(decoded.inner, expected_region_routes);
assert_eq!(decoded.bytes, expected);
}
#[test]
fn test_to_removed_key() {
let key = "test_key";
@@ -660,8 +817,14 @@ mod tests {
.await
.unwrap();
assert_eq!(remote_table_info.unwrap().table_info, table_info);
assert_eq!(remote_table_route.unwrap().region_routes, region_routes);
assert_eq!(
remote_table_info.unwrap().into_inner().table_info,
table_info
);
assert_eq!(
remote_table_route.unwrap().into_inner().region_routes,
region_routes
);
}
#[tokio::test]
@@ -674,7 +837,8 @@ mod tests {
new_test_table_info(region_routes.iter().map(|r| r.region.id.region_number())).into();
let table_id = table_info.ident.table_id;
let datanode_id = 2;
let table_route_value = TableRouteValue::new(region_routes.clone());
let table_route_value =
DeserializedValueWithBytes::from_inner(TableRouteValue::new(region_routes.clone()));
// creates metadata.
table_metadata_manager
@@ -682,7 +846,8 @@ mod tests {
.await
.unwrap();
let table_info_value = TableInfoValue::new(table_info.clone());
let table_info_value =
DeserializedValueWithBytes::from_inner(TableInfoValue::new(table_info.clone()));
// deletes metadata.
table_metadata_manager
@@ -723,7 +888,8 @@ mod tests {
.get_removed(table_id)
.await
.unwrap()
.unwrap();
.unwrap()
.into_inner();
assert_eq!(removed_table_info.table_info, table_info);
let removed_table_route = table_metadata_manager
@@ -731,7 +897,8 @@ mod tests {
.get_removed(table_id)
.await
.unwrap()
.unwrap();
.unwrap()
.into_inner();
assert_eq!(removed_table_route.region_routes, region_routes);
}
@@ -750,7 +917,9 @@ mod tests {
.await
.unwrap();
let new_table_name = "another_name".to_string();
let table_info_value = TableInfoValue::new(table_info.clone());
let table_info_value =
DeserializedValueWithBytes::from_inner(TableInfoValue::new(table_info.clone()));
table_metadata_manager
.rename_table(table_info_value.clone(), new_table_name.clone())
.await
@@ -762,7 +931,8 @@ mod tests {
.unwrap();
let mut modified_table_info = table_info.clone();
modified_table_info.name = "hi".to_string();
let modified_table_info_value = table_info_value.update(modified_table_info);
let modified_table_info_value =
DeserializedValueWithBytes::from_inner(table_info_value.update(modified_table_info));
// if the table_info_value is wrong, it should return an error.
// The ABA problem.
assert!(table_metadata_manager
@@ -816,7 +986,8 @@ mod tests {
.unwrap();
let mut new_table_info = table_info.clone();
new_table_info.name = "hi".to_string();
let current_table_info_value = TableInfoValue::new(table_info.clone());
let current_table_info_value =
DeserializedValueWithBytes::from_inner(TableInfoValue::new(table_info.clone()));
// should be ok.
table_metadata_manager
.update_table_info(current_table_info_value.clone(), new_table_info.clone())
@@ -834,12 +1005,15 @@ mod tests {
.get(table_id)
.await
.unwrap()
.unwrap();
.unwrap()
.into_inner();
assert_eq!(updated_table_info.table_info, new_table_info);
let mut wrong_table_info = table_info.clone();
wrong_table_info.name = "wrong".to_string();
let wrong_table_info_value = current_table_info_value.update(wrong_table_info);
let wrong_table_info_value = DeserializedValueWithBytes::from_inner(
current_table_info_value.update(wrong_table_info),
);
// if the current_table_info_value is wrong, it should return an error.
// The ABA problem.
assert!(table_metadata_manager
@@ -878,7 +1052,8 @@ mod tests {
let engine = table_info.meta.engine.as_str();
let region_storage_path =
region_storage_path(&table_info.catalog_name, &table_info.schema_name);
let current_table_route_value = TableRouteValue::new(region_routes.clone());
let current_table_route_value =
DeserializedValueWithBytes::from_inner(TableRouteValue::new(region_routes.clone()));
// creates metadata.
table_metadata_manager
.create_table_metadata(table_info.clone(), region_routes.clone())
@@ -894,10 +1069,14 @@ mod tests {
table_metadata_manager
.update_table_route(
table_id,
engine,
&region_storage_path,
RegionInfo {
engine: engine.to_string(),
region_storage_path: region_storage_path.to_string(),
region_options: HashMap::new(),
},
current_table_route_value.clone(),
new_region_routes.clone(),
&HashMap::new(),
)
.await
.unwrap();
@@ -907,24 +1086,36 @@ mod tests {
table_metadata_manager
.update_table_route(
table_id,
engine,
&region_storage_path,
RegionInfo {
engine: engine.to_string(),
region_storage_path: region_storage_path.to_string(),
region_options: HashMap::new(),
},
current_table_route_value.clone(),
new_region_routes.clone(),
&HashMap::new(),
)
.await
.unwrap();
let current_table_route_value = current_table_route_value.update(new_region_routes.clone());
let current_table_route_value = DeserializedValueWithBytes::from_inner(
current_table_route_value
.inner
.update(new_region_routes.clone()),
);
let new_region_routes = vec![new_region_route(2, 4), new_region_route(5, 5)];
// it should be ok.
table_metadata_manager
.update_table_route(
table_id,
engine,
&region_storage_path,
RegionInfo {
engine: engine.to_string(),
region_storage_path: region_storage_path.to_string(),
region_options: HashMap::new(),
},
current_table_route_value.clone(),
new_region_routes.clone(),
&HashMap::new(),
)
.await
.unwrap();
@@ -932,19 +1123,24 @@ mod tests {
// if the current_table_route_value is wrong, it should return an error.
// The ABA problem.
let wrong_table_route_value = current_table_route_value.update(vec![
new_region_route(1, 1),
new_region_route(2, 2),
new_region_route(3, 3),
new_region_route(4, 4),
]);
let wrong_table_route_value =
DeserializedValueWithBytes::from_inner(current_table_route_value.update(vec![
new_region_route(1, 1),
new_region_route(2, 2),
new_region_route(3, 3),
new_region_route(4, 4),
]));
assert!(table_metadata_manager
.update_table_route(
table_id,
engine,
&region_storage_path,
RegionInfo {
engine: engine.to_string(),
region_storage_path: region_storage_path.to_string(),
region_options: HashMap::new(),
},
wrong_table_route_value,
new_region_routes
new_region_routes,
&HashMap::new(),
)
.await
.is_err());
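
The wrapper above exists so that a read-modify-write cycle can compare-and-swap on the exact bytes that were read, rather than on a re-serialization that may differ byte-for-byte even when the value is semantically identical. A minimal standalone sketch of the same idea, assuming only serde and serde_json; WithBytes and RouteValue are illustrative names, not the crate's API:

use std::ops::Deref;

use serde::de::DeserializeOwned;
use serde::{Deserialize, Serialize};

/// Keeps the exact bytes read from the KV backend next to the decoded value,
/// so a later compare-and-swap can match on those bytes verbatim.
struct WithBytes<T> {
    bytes: Vec<u8>,
    inner: T,
}

impl<T: DeserializeOwned> WithBytes<T> {
    fn from_slice(bytes: &[u8]) -> serde_json::Result<Self> {
        let inner = serde_json::from_slice(bytes)?;
        Ok(Self { bytes: bytes.to_vec(), inner })
    }
}

impl<T> Deref for WithBytes<T> {
    type Target = T;
    fn deref(&self) -> &T {
        &self.inner
    }
}

#[derive(Serialize, Deserialize, Debug, PartialEq)]
struct RouteValue {
    regions: Vec<u32>,
    version: u64,
}

fn main() {
    // Note the extra whitespace: re-serializing `inner` would not reproduce it.
    let stored = br#"{ "regions": [1, 2, 3], "version": 7 }"#;
    let value: WithBytes<RouteValue> = WithBytes::from_slice(stored).unwrap();

    assert_eq!(value.version, 7);                 // Deref gives access to the decoded value.
    assert_eq!(value.bytes, stored.to_vec());     // The original encoding survives for CAS.
    assert_ne!(serde_json::to_vec(&value.inner).unwrap(), stored.to_vec());
}

Deref keeps read-only call sites unchanged, while an into-bytes-style accessor can feed the preserved bytes into the compare conditions used by the table_info and table_route managers below.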

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::sync::Arc;
use futures::stream::BoxStream;
@@ -32,6 +33,21 @@ use crate::rpc::store::RangeRequest;
use crate::rpc::KeyValue;
use crate::DatanodeId;
#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Default)]
/// RegionInfo
/// For compatibility reasons, DON'T modify the field names.
pub struct RegionInfo {
#[serde(default)]
// The table engine; it SHOULD be immutable after creation.
pub engine: String,
// The region storage path; it SHOULD be immutable after creation.
#[serde(default)]
pub region_storage_path: String,
// The region options.
#[serde(default)]
pub region_options: HashMap<String, String>,
}
pub struct DatanodeTableKey {
datanode_id: DatanodeId,
table_id: TableId,
@@ -85,25 +101,17 @@ impl TableMetaKey for DatanodeTableKey {
pub struct DatanodeTableValue {
pub table_id: TableId,
pub regions: Vec<RegionNumber>,
#[serde(default)]
pub engine: String,
#[serde(default)]
pub region_storage_path: String,
#[serde(flatten)]
pub region_info: RegionInfo,
version: u64,
}
impl DatanodeTableValue {
pub fn new(
table_id: TableId,
regions: Vec<RegionNumber>,
engine: String,
region_storage_path: String,
) -> Self {
pub fn new(table_id: TableId, regions: Vec<RegionNumber>, region_info: RegionInfo) -> Self {
Self {
table_id,
regions,
engine,
region_storage_path,
region_info,
version: 0,
}
}
@@ -156,6 +164,7 @@ impl DatanodeTableManager {
table_id: TableId,
engine: &str,
region_storage_path: &str,
region_options: HashMap<String, String>,
distribution: RegionDistribution,
) -> Result<Txn> {
let txns = distribution
@@ -165,8 +174,11 @@ impl DatanodeTableManager {
let val = DatanodeTableValue::new(
table_id,
regions,
engine.to_string(),
region_storage_path.to_string(),
RegionInfo {
engine: engine.to_string(),
region_storage_path: region_storage_path.to_string(),
region_options: region_options.clone(),
},
);
Ok(TxnOp::Put(key.as_raw_key(), val.try_as_raw_value()?))
@@ -182,10 +194,10 @@ impl DatanodeTableManager {
pub(crate) fn build_update_txn(
&self,
table_id: TableId,
engine: &str,
region_storage_path: &str,
region_info: RegionInfo,
current_region_distribution: RegionDistribution,
new_region_distribution: RegionDistribution,
new_region_options: &HashMap<String, String>,
) -> Result<Txn> {
let mut opts = Vec::new();
@@ -197,33 +209,20 @@ impl DatanodeTableManager {
opts.push(TxnOp::Delete(raw_key))
}
}
let need_update_options = region_info.region_options != *new_region_options;
for (datanode, regions) in new_region_distribution.into_iter() {
if let Some(current_region) = current_region_distribution.get(&datanode) {
// Updates if need.
if *current_region != regions {
let key = DatanodeTableKey::new(datanode, table_id);
let raw_key = key.as_raw_key();
let val = DatanodeTableValue::new(
table_id,
regions,
engine.to_string(),
region_storage_path.to_string(),
)
.try_as_raw_value()?;
opts.push(TxnOp::Put(raw_key, val));
}
} else {
// New datanodes
let need_update =
if let Some(current_region) = current_region_distribution.get(&datanode) {
// Updates if need.
*current_region != regions || need_update_options
} else {
true
};
if need_update {
let key = DatanodeTableKey::new(datanode, table_id);
let raw_key = key.as_raw_key();
let val = DatanodeTableValue::new(
table_id,
regions,
engine.to_string(),
region_storage_path.to_string(),
)
.try_as_raw_value()?;
let val = DatanodeTableValue::new(table_id, regions, region_info.clone())
.try_as_raw_value()?;
opts.push(TxnOp::Put(raw_key, val));
}
}
@@ -270,11 +269,10 @@ mod tests {
let value = DatanodeTableValue {
table_id: 42,
regions: vec![1, 2, 3],
engine: Default::default(),
region_storage_path: Default::default(),
region_info: RegionInfo::default(),
version: 1,
};
let literal = br#"{"table_id":42,"regions":[1,2,3],"engine":"","region_storage_path":"","version":1}"#;
let literal = br#"{"table_id":42,"regions":[1,2,3],"engine":"","region_storage_path":"","region_options":{},"version":1}"#;
let raw_value = value.try_as_raw_value().unwrap();
assert_eq!(raw_value, literal);
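
The #[serde(flatten)] plus #[serde(default)] combination is what keeps older DatanodeTableValue payloads (written before region_options existed) decodable while leaving the stored JSON flat, as the updated literal above reflects. A minimal sketch under those assumptions, using serde and serde_json with simplified field types:

use std::collections::HashMap;

use serde::{Deserialize, Serialize};

#[derive(Debug, Default, PartialEq, Serialize, Deserialize)]
struct RegionInfo {
    #[serde(default)]
    engine: String,
    #[serde(default)]
    region_storage_path: String,
    #[serde(default)]
    region_options: HashMap<String, String>,
}

#[derive(Debug, PartialEq, Serialize, Deserialize)]
struct DatanodeTableValue {
    table_id: u32,
    regions: Vec<u32>,
    #[serde(flatten)]
    region_info: RegionInfo,
    version: u64,
}

fn main() {
    // A value written before `region_options` existed still decodes,
    // and the missing map falls back to `Default`.
    let old = r#"{"table_id":42,"regions":[1,2,3],"engine":"mito2","region_storage_path":"test/foo","version":1}"#;
    let decoded: DatanodeTableValue = serde_json::from_str(old).unwrap();
    assert_eq!(decoded.region_info.engine, "mito2");
    assert!(decoded.region_info.region_options.is_empty());

    // Re-encoding keeps the flat layout, now with the extra `region_options` key.
    let encoded = serde_json::to_string(&decoded).unwrap();
    assert!(encoded.contains(r#""region_options":{}"#));
}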

View File

@@ -16,7 +16,7 @@ use serde::{Deserialize, Serialize};
use table::engine::TableReference;
use table::metadata::{RawTableInfo, TableId};
use super::TABLE_INFO_KEY_PREFIX;
use super::{DeserializedValueWithBytes, TABLE_INFO_KEY_PREFIX};
use crate::error::Result;
use crate::key::{to_removed_key, TableMetaKey};
use crate::kv_backend::txn::{Compare, CompareOp, Txn, TxnOp, TxnOpResponse};
@@ -103,7 +103,7 @@ impl TableInfoManager {
table_id: TableId,
) -> (
Txn,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<TableInfoValue>>,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<DeserializedValueWithBytes<TableInfoValue>>>,
) {
let key = TableInfoKey::new(table_id);
let raw_key = key.as_raw_key();
@@ -119,7 +119,7 @@ impl TableInfoManager {
table_info_value: &TableInfoValue,
) -> Result<(
Txn,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<TableInfoValue>>,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<DeserializedValueWithBytes<TableInfoValue>>>,
)> {
let key = TableInfoKey::new(table_id);
let raw_key = key.as_raw_key();
@@ -143,15 +143,15 @@ impl TableInfoManager {
pub(crate) fn build_update_txn(
&self,
table_id: TableId,
current_table_info_value: &TableInfoValue,
current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
new_table_info_value: &TableInfoValue,
) -> Result<(
Txn,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<TableInfoValue>>,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<DeserializedValueWithBytes<TableInfoValue>>>,
)> {
let key = TableInfoKey::new(table_id);
let raw_key = key.as_raw_key();
let raw_value = current_table_info_value.try_as_raw_value()?;
let raw_value = current_table_info_value.into_bytes();
let txn = Txn::new()
.when(vec![Compare::with_value(
@@ -172,11 +172,11 @@ impl TableInfoManager {
pub(crate) fn build_delete_txn(
&self,
table_id: TableId,
table_info_value: &TableInfoValue,
table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
) -> Result<Txn> {
let key = TableInfoKey::new(table_id);
let raw_key = key.as_raw_key();
let raw_value = table_info_value.try_as_raw_value()?;
let raw_value = table_info_value.into_bytes();
let removed_key = to_removed_key(&String::from_utf8_lossy(&raw_key));
let txn = Txn::new().and_then(vec![
@@ -189,7 +189,8 @@ impl TableInfoManager {
fn build_decode_fn(
raw_key: Vec<u8>,
) -> impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<TableInfoValue>> {
) -> impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<DeserializedValueWithBytes<TableInfoValue>>>
{
move |kvs: &Vec<TxnOpResponse>| {
kvs.iter()
.filter_map(|resp| {
@@ -201,29 +202,35 @@ impl TableInfoManager {
})
.flat_map(|r| &r.kvs)
.find(|kv| kv.key == raw_key)
.map(|kv| TableInfoValue::try_from_raw_value(&kv.value))
.map(|kv| DeserializedValueWithBytes::from_inner_slice(&kv.value))
.transpose()
}
}
#[cfg(test)]
pub async fn get_removed(&self, table_id: TableId) -> Result<Option<TableInfoValue>> {
pub async fn get_removed(
&self,
table_id: TableId,
) -> Result<Option<DeserializedValueWithBytes<TableInfoValue>>> {
let key = TableInfoKey::new(table_id).to_string();
let removed_key = to_removed_key(&key).into_bytes();
self.kv_backend
.get(&removed_key)
.await?
.map(|x| TableInfoValue::try_from_raw_value(&x.value))
.map(|x| DeserializedValueWithBytes::from_inner_slice(&x.value))
.transpose()
}
pub async fn get(&self, table_id: TableId) -> Result<Option<TableInfoValue>> {
pub async fn get(
&self,
table_id: TableId,
) -> Result<Option<DeserializedValueWithBytes<TableInfoValue>>> {
let key = TableInfoKey::new(table_id);
let raw_key = key.as_raw_key();
self.kv_backend
.get(&raw_key)
.await?
.map(|x| TableInfoValue::try_from_raw_value(&x.value))
.map(|x| DeserializedValueWithBytes::from_inner_slice(&x.value))
.transpose()
}
}
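
Switching the compare value to into_bytes() means the txn condition matches the bytes actually read from the backend instead of a fresh serialization. A toy in-memory compare-and-swap illustrating why that matters; KvStore and compare_and_put are hypothetical helpers, not the kv_backend API:

use std::collections::HashMap;

/// A toy KV store with a compare-and-swap primitive, standing in for the
/// metadata backend used by the managers above.
#[derive(Default)]
struct KvStore {
    data: HashMap<Vec<u8>, Vec<u8>>,
}

impl KvStore {
    fn get(&self, key: &[u8]) -> Option<Vec<u8>> {
        self.data.get(key).cloned()
    }

    /// Writes `new` only if the stored bytes are exactly `expected`.
    fn compare_and_put(&mut self, key: &[u8], expected: &[u8], new: Vec<u8>) -> bool {
        match self.data.get(key) {
            Some(current) if current.as_slice() == expected => {
                self.data.insert(key.to_vec(), new);
                true
            }
            _ => false,
        }
    }
}

fn main() {
    let mut kv = KvStore::default();
    kv.data.insert(b"table_info/1024".to_vec(), br#"{ "name": "t" }"#.to_vec());

    // Read: keep the exact bytes alongside whatever we decode from them.
    let original = kv.get(b"table_info/1024").unwrap();

    // Update: compare on the original bytes, not on a re-serialized value.
    let ok = kv.compare_and_put(b"table_info/1024", &original, br#"{"name":"renamed"}"#.to_vec());
    assert!(ok);

    // A second writer still holding the old bytes now loses the race.
    let stale = kv.compare_and_put(b"table_info/1024", &original, br#"{"name":"other"}"#.to_vec());
    assert!(!stale);
}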

View File

@@ -17,6 +17,7 @@ use std::fmt::Display;
use serde::{Deserialize, Serialize};
use table::metadata::TableId;
use super::DeserializedValueWithBytes;
use crate::error::Result;
use crate::key::{to_removed_key, RegionDistribution, TableMetaKey, TABLE_ROUTE_PREFIX};
use crate::kv_backend::txn::{Compare, CompareOp, Txn, TxnOp, TxnOpResponse};
@@ -81,7 +82,7 @@ impl TableRouteManager {
table_id: TableId,
) -> (
Txn,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<TableRouteValue>>,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<DeserializedValueWithBytes<TableRouteValue>>>,
) {
let key = TableRouteKey::new(table_id);
let raw_key = key.as_raw_key();
@@ -97,7 +98,7 @@ impl TableRouteManager {
table_route_value: &TableRouteValue,
) -> Result<(
Txn,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<TableRouteValue>>,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<DeserializedValueWithBytes<TableRouteValue>>>,
)> {
let key = TableRouteKey::new(table_id);
let raw_key = key.as_raw_key();
@@ -121,15 +122,15 @@ impl TableRouteManager {
pub(crate) fn build_update_txn(
&self,
table_id: TableId,
current_table_route_value: &TableRouteValue,
current_table_route_value: &DeserializedValueWithBytes<TableRouteValue>,
new_table_route_value: &TableRouteValue,
) -> Result<(
Txn,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<TableRouteValue>>,
impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<DeserializedValueWithBytes<TableRouteValue>>>,
)> {
let key = TableRouteKey::new(table_id);
let raw_key = key.as_raw_key();
let raw_value = current_table_route_value.try_as_raw_value()?;
let raw_value = current_table_route_value.into_bytes();
let new_raw_value: Vec<u8> = new_table_route_value.try_as_raw_value()?;
let txn = Txn::new()
@@ -148,11 +149,11 @@ impl TableRouteManager {
pub(crate) fn build_delete_txn(
&self,
table_id: TableId,
table_route_value: &TableRouteValue,
table_route_value: &DeserializedValueWithBytes<TableRouteValue>,
) -> Result<Txn> {
let key = TableRouteKey::new(table_id);
let raw_key = key.as_raw_key();
let raw_value = table_route_value.try_as_raw_value()?;
let raw_value = table_route_value.into_bytes();
let removed_key = to_removed_key(&String::from_utf8_lossy(&raw_key));
let txn = Txn::new().and_then(vec![
@@ -165,7 +166,8 @@ impl TableRouteManager {
fn build_decode_fn(
raw_key: Vec<u8>,
) -> impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<TableRouteValue>> {
) -> impl FnOnce(&Vec<TxnOpResponse>) -> Result<Option<DeserializedValueWithBytes<TableRouteValue>>>
{
move |response: &Vec<TxnOpResponse>| {
response
.iter()
@@ -178,28 +180,34 @@ impl TableRouteManager {
})
.flat_map(|r| &r.kvs)
.find(|kv| kv.key == raw_key)
.map(|kv| TableRouteValue::try_from_raw_value(&kv.value))
.map(|kv| DeserializedValueWithBytes::from_inner_slice(&kv.value))
.transpose()
}
}
pub async fn get(&self, table_id: TableId) -> Result<Option<TableRouteValue>> {
pub async fn get(
&self,
table_id: TableId,
) -> Result<Option<DeserializedValueWithBytes<TableRouteValue>>> {
let key = TableRouteKey::new(table_id);
self.kv_backend
.get(&key.as_raw_key())
.await?
.map(|kv| TableRouteValue::try_from_raw_value(&kv.value))
.map(|kv| DeserializedValueWithBytes::from_inner_slice(&kv.value))
.transpose()
}
#[cfg(test)]
pub async fn get_removed(&self, table_id: TableId) -> Result<Option<TableRouteValue>> {
pub async fn get_removed(
&self,
table_id: TableId,
) -> Result<Option<DeserializedValueWithBytes<TableRouteValue>>> {
let key = TableRouteKey::new(table_id).to_string();
let removed_key = to_removed_key(&key).into_bytes();
self.kv_backend
.get(&removed_key)
.await?
.map(|x| TableRouteValue::try_from_raw_value(&x.value))
.map(|x| DeserializedValueWithBytes::from_inner_slice(&x.value))
.transpose()
}
@@ -209,7 +217,7 @@ impl TableRouteManager {
) -> Result<Option<RegionDistribution>> {
self.get(table_id)
.await?
.map(|table_route| region_distribution(&table_route.region_routes))
.map(|table_route| region_distribution(&table_route.into_inner().region_routes))
.transpose()
}
}

View File

@@ -21,6 +21,8 @@ use api::v1::meta::{
SubmitDdlTaskResponse as PbSubmitDdlTaskResponse, TruncateTableTask as PbTruncateTableTask,
};
use api::v1::{AlterExpr, CreateTableExpr, DropTableExpr, TruncateTableExpr};
use base64::engine::general_purpose;
use base64::Engine as _;
use prost::Message;
use serde::{Deserialize, Serialize};
use snafu::{OptionExt, ResultExt};
@@ -287,7 +289,8 @@ impl Serialize for CreateTableTask {
table_info,
};
let buf = pb.encode_to_vec();
serializer.serialize_bytes(&buf)
let encoded = general_purpose::STANDARD_NO_PAD.encode(buf);
serializer.serialize_str(&encoded)
}
}
@@ -296,7 +299,10 @@ impl<'de> Deserialize<'de> for CreateTableTask {
where
D: serde::Deserializer<'de>,
{
let buf = Vec::<u8>::deserialize(deserializer)?;
let encoded = String::deserialize(deserializer)?;
let buf = general_purpose::STANDARD_NO_PAD
.decode(encoded)
.map_err(|err| serde::de::Error::custom(err.to_string()))?;
let expr: PbCreateTableTask = PbCreateTableTask::decode(&*buf)
.map_err(|err| serde::de::Error::custom(err.to_string()))?;
@@ -353,7 +359,8 @@ impl Serialize for AlterTableTask {
alter_table: Some(self.alter_table.clone()),
};
let buf = pb.encode_to_vec();
serializer.serialize_bytes(&buf)
let encoded = general_purpose::STANDARD_NO_PAD.encode(buf);
serializer.serialize_str(&encoded)
}
}
@@ -362,7 +369,10 @@ impl<'de> Deserialize<'de> for AlterTableTask {
where
D: serde::Deserializer<'de>,
{
let buf = Vec::<u8>::deserialize(deserializer)?;
let encoded = String::deserialize(deserializer)?;
let buf = general_purpose::STANDARD_NO_PAD
.decode(encoded)
.map_err(|err| serde::de::Error::custom(err.to_string()))?;
let expr: PbAlterTableTask = PbAlterTableTask::decode(&*buf)
.map_err(|err| serde::de::Error::custom(err.to_string()))?;
@@ -425,12 +435,12 @@ impl TryFrom<PbTruncateTableTask> for TruncateTableTask {
mod tests {
use std::sync::Arc;
use api::v1::CreateTableExpr;
use api::v1::{AlterExpr, CreateTableExpr};
use datatypes::schema::SchemaBuilder;
use table::metadata::RawTableInfo;
use table::test_util::table_info::test_table_info;
use super::CreateTableTask;
use super::{AlterTableTask, CreateTableTask};
#[test]
fn test_basic_ser_de_create_table_task() {
@@ -447,4 +457,16 @@ mod tests {
let de = serde_json::from_slice(&output).unwrap();
assert_eq!(task, de);
}
#[test]
fn test_basic_ser_de_alter_table_task() {
let task = AlterTableTask {
alter_table: AlterExpr::default(),
};
let output = serde_json::to_vec(&task).unwrap();
let de = serde_json::from_slice(&output).unwrap();
assert_eq!(task, de);
}
}
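
Base64-encoding the protobuf payload keeps the serde representation a plain string, which is friendlier to JSON-based storage than serialize_bytes (serde_json would emit an array of numbers). A standalone sketch of the same round-trip with the base64 crate's STANDARD_NO_PAD engine; the Task struct and its raw payload stand in for the prost-encoded messages in the diff:

use base64::engine::general_purpose;
use base64::Engine as _;
use serde::de::Error as _;
use serde::{Deserialize, Deserializer, Serialize, Serializer};

/// Stands in for a protobuf-encoded task; in the diff this would be
/// the output of `encode_to_vec()`.
#[derive(Debug, PartialEq)]
struct Task {
    payload: Vec<u8>,
}

impl Serialize for Task {
    fn serialize<S: Serializer>(&self, serializer: S) -> Result<S::Ok, S::Error> {
        // Encode the binary payload as a base64 string so it survives JSON.
        let encoded = general_purpose::STANDARD_NO_PAD.encode(&self.payload);
        serializer.serialize_str(&encoded)
    }
}

impl<'de> Deserialize<'de> for Task {
    fn deserialize<D: Deserializer<'de>>(deserializer: D) -> Result<Self, D::Error> {
        let encoded = String::deserialize(deserializer)?;
        let payload = general_purpose::STANDARD_NO_PAD
            .decode(encoded)
            .map_err(|err| D::Error::custom(err.to_string()))?;
        Ok(Task { payload })
    }
}

fn main() {
    let task = Task { payload: vec![0x08, 0x96, 0x01] };
    let json = serde_json::to_string(&task).unwrap();
    // The payload is now a printable string instead of a byte array.
    assert!(json.starts_with('"'));
    let back: Task = serde_json::from_str(&json).unwrap();
    assert_eq!(back, task);
}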

View File

@@ -34,6 +34,9 @@ pub enum Error {
#[snafu(display("Loader {} is already registered", name))]
LoaderConflict { name: String, location: Location },
#[snafu(display("Procedure Manager is stopped"))]
ManagerNotStart { location: Location },
#[snafu(display("Failed to serialize to json"))]
ToJson {
#[snafu(source)]
@@ -148,7 +151,8 @@ impl ErrorExt for Error {
| Error::FromJson { .. }
| Error::RetryTimesExceeded { .. }
| Error::RetryLater { .. }
| Error::WaitWatcher { .. } => StatusCode::Internal,
| Error::WaitWatcher { .. }
| Error::ManagerNotStart { .. } => StatusCode::Internal,
Error::LoaderConflict { .. } | Error::DuplicateProcedure { .. } => {
StatusCode::InvalidArguments
}

View File

@@ -14,6 +14,8 @@
//! Common traits and structures for the procedure framework.
#![feature(assert_matches)]
pub mod error;
pub mod local;
pub mod options;

View File

@@ -16,20 +16,21 @@ mod lock;
mod runner;
use std::collections::{HashMap, VecDeque};
use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::{Arc, Mutex, RwLock};
use std::time::{Duration, Instant};
use async_trait::async_trait;
use backon::ExponentialBuilder;
use common_runtime::{RepeatedTask, TaskFunction};
use common_telemetry::logging;
use common_telemetry::{info, logging};
use snafu::{ensure, ResultExt};
use tokio::sync::watch::{self, Receiver, Sender};
use tokio::sync::Notify;
use tokio::sync::{Mutex as TokioMutex, Notify};
use crate::error::{
DuplicateProcedureSnafu, Error, LoaderConflictSnafu, Result, StartRemoveOutdatedMetaTaskSnafu,
StopRemoveOutdatedMetaTaskSnafu,
DuplicateProcedureSnafu, Error, LoaderConflictSnafu, ManagerNotStartSnafu, Result,
StartRemoveOutdatedMetaTaskSnafu, StopRemoveOutdatedMetaTaskSnafu,
};
use crate::local::lock::LockMap;
use crate::local::runner::Runner;
@@ -135,6 +136,8 @@ pub(crate) struct ManagerContext {
messages: Mutex<HashMap<ProcedureId, ProcedureMessage>>,
/// Ids and finished time of finished procedures.
finished_procedures: Mutex<VecDeque<(ProcedureId, Instant)>>,
/// Running flag.
running: Arc<AtomicBool>,
}
#[async_trait]
@@ -153,9 +156,29 @@ impl ManagerContext {
procedures: RwLock::new(HashMap::new()),
messages: Mutex::new(HashMap::new()),
finished_procedures: Mutex::new(VecDeque::new()),
running: Arc::new(AtomicBool::new(false)),
}
}
#[cfg(test)]
pub(crate) fn set_running(&self) {
self.running.store(true, Ordering::Relaxed);
}
/// Set the running flag.
pub(crate) fn start(&self) {
self.running.store(true, Ordering::Relaxed);
}
pub(crate) fn stop(&self) {
self.running.store(false, Ordering::Relaxed);
}
/// Returns whether the `ProcedureManager` is running.
pub(crate) fn running(&self) -> bool {
self.running.load(Ordering::Relaxed)
}
/// Returns true if the procedure with specific `procedure_id` exists.
fn contains_procedure(&self, procedure_id: ProcedureId) -> bool {
let procedures = self.procedures.read().unwrap();
@@ -368,29 +391,37 @@ pub struct LocalManager {
procedure_store: Arc<ProcedureStore>,
max_retry_times: usize,
retry_delay: Duration,
remove_outdated_meta_task: RepeatedTask<Error>,
/// GC task.
remove_outdated_meta_task: TokioMutex<Option<RepeatedTask<Error>>>,
config: ManagerConfig,
}
impl LocalManager {
/// Create a new [LocalManager] with specific `config`.
pub fn new(config: ManagerConfig, state_store: StateStoreRef) -> LocalManager {
let manager_ctx = Arc::new(ManagerContext::new());
let remove_outdated_meta_task = RepeatedTask::new(
config.remove_outdated_meta_task_interval,
Box::new(RemoveOutdatedMetaFunction {
manager_ctx: manager_ctx.clone(),
ttl: config.remove_outdated_meta_ttl,
}),
);
LocalManager {
manager_ctx,
procedure_store: Arc::new(ProcedureStore::new(&config.parent_path, state_store)),
max_retry_times: config.max_retry_times,
retry_delay: config.retry_delay,
remove_outdated_meta_task,
remove_outdated_meta_task: TokioMutex::new(None),
config,
}
}
/// Builds the remove-outdated-meta task.
pub fn build_remove_outdated_meta_task(&self) -> RepeatedTask<Error> {
RepeatedTask::new(
self.config.remove_outdated_meta_task_interval,
Box::new(RemoveOutdatedMetaFunction {
manager_ctx: self.manager_ctx.clone(),
ttl: self.config.remove_outdated_meta_ttl,
}),
)
}
/// Submit a root procedure with given `procedure_id`.
fn submit_root(
&self,
@@ -398,6 +429,8 @@ impl LocalManager {
step: u32,
procedure: BoxedProcedure,
) -> Result<Watcher> {
ensure!(self.manager_ctx.running(), ManagerNotStartSnafu);
let meta = Arc::new(ProcedureMeta::new(procedure_id, None, procedure.lock_key()));
let runner = Runner {
meta: meta.clone(),
@@ -426,44 +459,8 @@ impl LocalManager {
Ok(watcher)
}
}
#[async_trait]
impl ProcedureManager for LocalManager {
fn register_loader(&self, name: &str, loader: BoxedProcedureLoader) -> Result<()> {
let mut loaders = self.manager_ctx.loaders.lock().unwrap();
ensure!(!loaders.contains_key(name), LoaderConflictSnafu { name });
let _ = loaders.insert(name.to_string(), loader);
Ok(())
}
fn start(&self) -> Result<()> {
self.remove_outdated_meta_task
.start(common_runtime::bg_runtime())
.context(StartRemoveOutdatedMetaTaskSnafu)?;
Ok(())
}
async fn stop(&self) -> Result<()> {
self.remove_outdated_meta_task
.stop()
.await
.context(StopRemoveOutdatedMetaTaskSnafu)?;
Ok(())
}
async fn submit(&self, procedure: ProcedureWithId) -> Result<Watcher> {
let procedure_id = procedure.id;
ensure!(
!self.manager_ctx.contains_procedure(procedure_id),
DuplicateProcedureSnafu { procedure_id }
);
self.submit_root(procedure.id, 0, procedure.procedure)
}
/// Recovers unfinished procedures and reruns them.
async fn recover(&self) -> Result<()> {
logging::info!("LocalManager start to recover");
let recover_start = Instant::now();
@@ -519,6 +516,64 @@ impl ProcedureManager for LocalManager {
Ok(())
}
}
#[async_trait]
impl ProcedureManager for LocalManager {
fn register_loader(&self, name: &str, loader: BoxedProcedureLoader) -> Result<()> {
let mut loaders = self.manager_ctx.loaders.lock().unwrap();
ensure!(!loaders.contains_key(name), LoaderConflictSnafu { name });
let _ = loaders.insert(name.to_string(), loader);
Ok(())
}
async fn start(&self) -> Result<()> {
let mut task = self.remove_outdated_meta_task.lock().await;
if task.is_some() {
return Ok(());
}
let task_inner = self.build_remove_outdated_meta_task();
task_inner
.start(common_runtime::bg_runtime())
.context(StartRemoveOutdatedMetaTaskSnafu)?;
*task = Some(task_inner);
self.manager_ctx.start();
info!("LocalManager is start.");
self.recover().await
}
async fn stop(&self) -> Result<()> {
let mut task = self.remove_outdated_meta_task.lock().await;
if let Some(task) = task.take() {
task.stop().await.context(StopRemoveOutdatedMetaTaskSnafu)?;
}
self.manager_ctx.stop();
info!("LocalManager is stopped.");
Ok(())
}
async fn submit(&self, procedure: ProcedureWithId) -> Result<Watcher> {
let procedure_id = procedure.id;
ensure!(
!self.manager_ctx.contains_procedure(procedure_id),
DuplicateProcedureSnafu { procedure_id }
);
self.submit_root(procedure.id, 0, procedure.procedure)
}
async fn procedure_state(&self, procedure_id: ProcedureId) -> Result<Option<ProcedureState>> {
Ok(self.manager_ctx.state(procedure_id))
@@ -569,12 +624,14 @@ pub(crate) mod test_util {
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use common_error::mock::MockError;
use common_error::status_code::StatusCode;
use common_test_util::temp_dir::create_temp_dir;
use super::*;
use crate::error::Error;
use crate::error::{self, Error};
use crate::store::state_store::ObjectStateStore;
use crate::{Context, Procedure, Status};
@@ -691,6 +748,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
manager.manager_ctx.start();
manager
.register_loader("ProcedureToLoad", ProcedureToLoad::loader())
@@ -714,6 +772,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(object_store.clone()));
let manager = LocalManager::new(config, state_store);
manager.manager_ctx.start();
manager
.register_loader("ProcedureToLoad", ProcedureToLoad::loader())
@@ -762,6 +821,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
manager.manager_ctx.start();
let procedure_id = ProcedureId::random();
assert!(manager
@@ -812,6 +872,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
manager.manager_ctx.start();
#[derive(Debug)]
struct MockProcedure {
@@ -864,6 +925,66 @@ mod tests {
}
#[tokio::test]
async fn test_procedure_manager_stopped() {
let dir = create_temp_dir("procedure_manager_stopped");
let config = ManagerConfig {
parent_path: "data/".to_string(),
max_retry_times: 3,
retry_delay: Duration::from_millis(500),
..Default::default()
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
let procedure_id = ProcedureId::random();
assert_matches!(
manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(procedure),
})
.await
.unwrap_err(),
error::Error::ManagerNotStart { .. }
);
}
#[tokio::test]
async fn test_procedure_manager_restart() {
let dir = create_temp_dir("procedure_manager_restart");
let config = ManagerConfig {
parent_path: "data/".to_string(),
max_retry_times: 3,
retry_delay: Duration::from_millis(500),
..Default::default()
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
manager.start().await.unwrap();
manager.stop().await.unwrap();
manager.start().await.unwrap();
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
let procedure_id = ProcedureId::random();
assert!(manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(procedure),
})
.await
.is_ok());
assert!(manager
.procedure_state(procedure_id)
.await
.unwrap()
.is_some());
}
#[tokio::test(flavor = "multi_thread")]
async fn test_remove_outdated_meta_task() {
let dir = create_temp_dir("remove_outdated_meta_task");
let object_store = test_util::new_object_store(&dir);
@@ -876,6 +997,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(object_store.clone()));
let manager = LocalManager::new(config, state_store);
manager.manager_ctx.set_running();
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
@@ -889,8 +1011,9 @@ mod tests {
.is_ok());
let mut watcher = manager.procedure_watcher(procedure_id).unwrap();
watcher.changed().await.unwrap();
manager.start().unwrap();
tokio::time::sleep(Duration::from_millis(10)).await;
manager.start().await.unwrap();
tokio::time::sleep(Duration::from_millis(300)).await;
assert!(manager
.procedure_state(procedure_id)
.await
@@ -902,6 +1025,8 @@ mod tests {
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
let procedure_id = ProcedureId::random();
manager.manager_ctx.set_running();
assert!(manager
.submit(ProcedureWithId {
id: procedure_id,
@@ -911,11 +1036,33 @@ mod tests {
.is_ok());
let mut watcher = manager.procedure_watcher(procedure_id).unwrap();
watcher.changed().await.unwrap();
tokio::time::sleep(Duration::from_millis(10)).await;
tokio::time::sleep(Duration::from_millis(300)).await;
assert!(manager
.procedure_state(procedure_id)
.await
.unwrap()
.is_some());
// After restart
let mut procedure = ProcedureToLoad::new("submit");
procedure.lock_key = LockKey::single("test.submit");
let procedure_id = ProcedureId::random();
assert!(manager
.submit(ProcedureWithId {
id: procedure_id,
procedure: Box::new(procedure),
})
.await
.is_ok());
let mut watcher = manager.procedure_watcher(procedure_id).unwrap();
watcher.changed().await.unwrap();
manager.start().await.unwrap();
tokio::time::sleep(Duration::from_millis(300)).await;
assert!(manager
.procedure_state(procedure_id)
.await
.unwrap()
.is_none());
}
}
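
The manager now lazily creates the GC task under a tokio mutex and gates submissions on a running flag, so start is idempotent and stop can be followed by another start. A condensed sketch of that lifecycle, assuming the tokio crate with its default runtime; Manager and GcTask are simplified stand-ins for the diff's types:

use std::sync::atomic::{AtomicBool, Ordering};

use tokio::sync::Mutex;

/// Stand-in for a repeated background task: something started once and stopped later.
struct GcTask;

struct Manager {
    running: AtomicBool,
    gc_task: Mutex<Option<GcTask>>,
}

impl Manager {
    fn new() -> Self {
        Self {
            running: AtomicBool::new(false),
            gc_task: Mutex::new(None),
        }
    }

    /// Starting twice is harmless: the second call finds the task already present.
    async fn start(&self) {
        let mut task = self.gc_task.lock().await;
        if task.is_none() {
            *task = Some(GcTask); // spawn the background GC task here
        }
        self.running.store(true, Ordering::Relaxed);
    }

    async fn stop(&self) {
        if let Some(_gc) = self.gc_task.lock().await.take() {
            // await the task's shutdown here
        }
        self.running.store(false, Ordering::Relaxed);
    }

    /// Submissions are rejected unless the manager has been started.
    fn submit(&self) -> Result<(), &'static str> {
        if !self.running.load(Ordering::Relaxed) {
            return Err("manager is not started");
        }
        Ok(())
    }
}

#[tokio::main]
async fn main() {
    let manager = Manager::new();
    assert!(manager.submit().is_err()); // rejected before `start`

    manager.start().await;
    manager.start().await; // restart-safe
    assert!(manager.submit().is_ok());

    manager.stop().await;
    assert!(manager.submit().is_err());
}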

View File

@@ -19,7 +19,7 @@ use backon::{BackoffBuilder, ExponentialBuilder};
use common_telemetry::logging;
use tokio::time;
use crate::error::{ProcedurePanicSnafu, Result};
use crate::error::{self, ProcedurePanicSnafu, Result};
use crate::local::{ManagerContext, ProcedureMeta, ProcedureMetaRef};
use crate::store::ProcedureStore;
use crate::ProcedureState::Retrying;
@@ -102,7 +102,6 @@ impl Drop for ProcedureGuard {
}
}
// TODO(yingwen): Support cancellation.
pub(crate) struct Runner {
pub(crate) meta: ProcedureMetaRef,
pub(crate) procedure: BoxedProcedure,
@@ -114,6 +113,11 @@ pub(crate) struct Runner {
}
impl Runner {
/// Returns whether the `ProcedureManager` is running.
pub(crate) fn running(&self) -> bool {
self.manager_ctx.running()
}
/// Run the procedure.
pub(crate) async fn run(mut self) {
// Ensure we can update the procedure state.
@@ -152,6 +156,12 @@ impl Runner {
let procedure_ids = self.manager_ctx.procedures_in_tree(&self.meta);
// Clean resources.
self.manager_ctx.on_procedures_finish(&procedure_ids);
// If `ProcedureManager` is stopped, it stops the current task immediately without deleting the procedure.
if !self.running() {
return;
}
for id in procedure_ids {
if let Err(e) = self.store.delete_procedure(id).await {
logging::error!(
@@ -186,6 +196,13 @@ impl Runner {
let mut retry = self.exponential_builder.build();
let mut retry_times = 0;
loop {
// Don't store state if `ProcedureManager` is stopped.
if !self.running() {
self.meta.set_state(ProcedureState::Failed {
error: Arc::new(error::ManagerNotStartSnafu {}.build()),
});
return;
}
match self.execute_once(ctx).await {
ExecResult::Done | ExecResult::Failed => return,
ExecResult::Continue => (),
@@ -238,6 +255,14 @@ impl Runner {
status.need_persist(),
);
// Don't store state if `ProcedureManager` is stopped.
if !self.running() {
self.meta.set_state(ProcedureState::Failed {
error: Arc::new(error::ManagerNotStartSnafu {}.build()),
});
return ExecResult::Failed;
}
if status.need_persist() {
if let Err(err) = self.persist_procedure().await {
self.meta.set_state(ProcedureState::retrying(Arc::new(err)));
@@ -272,6 +297,14 @@ impl Runner {
e.is_retry_later(),
);
// Don't store state if `ProcedureManager` is stopped.
if !self.running() {
self.meta.set_state(ProcedureState::Failed {
error: Arc::new(error::ManagerNotStartSnafu {}.build()),
});
return ExecResult::Failed;
}
if e.is_retry_later() {
self.meta.set_state(ProcedureState::retrying(Arc::new(e)));
return ExecResult::RetryLater;
@@ -581,6 +614,7 @@ mod tests {
let object_store = test_util::new_object_store(&dir);
let procedure_store = Arc::new(ProcedureStore::from_object_store(object_store.clone()));
let mut runner = new_runner(meta, Box::new(normal), procedure_store.clone());
runner.manager_ctx.start();
let res = runner.execute_once(&ctx).await;
assert!(res.is_continue(), "{res:?}");
@@ -641,6 +675,7 @@ mod tests {
let object_store = test_util::new_object_store(&dir);
let procedure_store = Arc::new(ProcedureStore::from_object_store(object_store.clone()));
let mut runner = new_runner(meta, Box::new(suspend), procedure_store);
runner.manager_ctx.start();
let res = runner.execute_once(&ctx).await;
assert!(res.is_continue(), "{res:?}");
@@ -742,6 +777,7 @@ mod tests {
let procedure_store = Arc::new(ProcedureStore::from_object_store(object_store.clone()));
let mut runner = new_runner(meta.clone(), Box::new(parent), procedure_store.clone());
let manager_ctx = Arc::new(ManagerContext::new());
manager_ctx.start();
// Manually add this procedure to the manager ctx.
assert!(manager_ctx.try_insert_procedure(meta));
// Replace the manager ctx.
@@ -769,6 +805,70 @@ mod tests {
}
}
#[tokio::test]
async fn test_running_is_stopped() {
let exec_fn = move |_| async move { Ok(Status::Executing { persist: true }) }.boxed();
let normal = ProcedureAdapter {
data: "normal".to_string(),
lock_key: LockKey::single("catalog.schema.table"),
exec_fn,
};
let dir = create_temp_dir("test_running_is_stopped");
let meta = normal.new_meta(ROOT_ID);
let ctx = context_without_provider(meta.id);
let object_store = test_util::new_object_store(&dir);
let procedure_store = Arc::new(ProcedureStore::from_object_store(object_store.clone()));
let mut runner = new_runner(meta, Box::new(normal), procedure_store.clone());
runner.manager_ctx.start();
let res = runner.execute_once(&ctx).await;
assert!(res.is_continue(), "{res:?}");
check_files(
&object_store,
&procedure_store,
ctx.procedure_id,
&["0000000000.step"],
)
.await;
runner.manager_ctx.stop();
let res = runner.execute_once(&ctx).await;
assert!(res.is_failed());
// Shouldn't write any files
check_files(
&object_store,
&procedure_store,
ctx.procedure_id,
&["0000000000.step"],
)
.await;
}
#[tokio::test]
async fn test_running_is_stopped_on_error() {
let exec_fn =
|_| async { Err(Error::external(MockError::new(StatusCode::Unexpected))) }.boxed();
let normal = ProcedureAdapter {
data: "fail".to_string(),
lock_key: LockKey::single("catalog.schema.table"),
exec_fn,
};
let dir = create_temp_dir("test_running_is_stopped_on_error");
let meta = normal.new_meta(ROOT_ID);
let ctx = context_without_provider(meta.id);
let object_store = test_util::new_object_store(&dir);
let procedure_store = Arc::new(ProcedureStore::from_object_store(object_store.clone()));
let mut runner = new_runner(meta, Box::new(normal), procedure_store.clone());
runner.manager_ctx.stop();
let res = runner.execute_once(&ctx).await;
assert!(res.is_failed(), "{res:?}");
// Shouldn't write any files
check_files(&object_store, &procedure_store, ctx.procedure_id, &[]).await;
}
#[tokio::test]
async fn test_execute_on_error() {
let exec_fn =
@@ -785,6 +885,7 @@ mod tests {
let object_store = test_util::new_object_store(&dir);
let procedure_store = Arc::new(ProcedureStore::from_object_store(object_store.clone()));
let mut runner = new_runner(meta.clone(), Box::new(fail), procedure_store.clone());
runner.manager_ctx.start();
let res = runner.execute_once(&ctx).await;
assert!(res.is_failed(), "{res:?}");
@@ -826,6 +927,7 @@ mod tests {
let object_store = test_util::new_object_store(&dir);
let procedure_store = Arc::new(ProcedureStore::from_object_store(object_store.clone()));
let mut runner = new_runner(meta.clone(), Box::new(retry_later), procedure_store.clone());
runner.manager_ctx.start();
let res = runner.execute_once(&ctx).await;
assert!(res.is_retry_later(), "{res:?}");
@@ -863,6 +965,8 @@ mod tests {
Box::new(exceed_max_retry_later),
procedure_store,
);
runner.manager_ctx.start();
runner.exponential_builder = ExponentialBuilder::default()
.with_min_delay(Duration::from_millis(1))
.with_max_times(3);
@@ -933,8 +1037,8 @@ mod tests {
let object_store = test_util::new_object_store(&dir);
let procedure_store = Arc::new(ProcedureStore::from_object_store(object_store.clone()));
let mut runner = new_runner(meta.clone(), Box::new(parent), procedure_store);
let manager_ctx = Arc::new(ManagerContext::new());
manager_ctx.start();
// Manually add this procedure to the manager ctx.
assert!(manager_ctx.try_insert_procedure(meta.clone()));
// Replace the manager ctx.
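
The runner's new checks boil down to: never persist or delete procedure state once the manager has been stopped, and surface the stop as a failure the caller can observe. A minimal std-only sketch of that gating; run_step and the persisted_steps vector are illustrative, not the runner's real API:

use std::sync::atomic::{AtomicBool, Ordering};

#[derive(Debug, PartialEq)]
enum ExecResult {
    Done,
    Failed,
}

/// A trimmed-down execution step: anything that would persist state first
/// checks whether the manager is still running.
fn run_step(
    running: &AtomicBool,
    persisted_steps: &mut Vec<&'static str>,
    step: &'static str,
) -> ExecResult {
    if !running.load(Ordering::Relaxed) {
        // Manager stopped: fail without touching the store, so a later
        // restart can recover from the last persisted step.
        return ExecResult::Failed;
    }
    persisted_steps.push(step);
    ExecResult::Done
}

fn main() {
    let running = AtomicBool::new(true);
    let mut persisted = Vec::new();

    assert_eq!(run_step(&running, &mut persisted, "0000000000.step"), ExecResult::Done);

    running.store(false, Ordering::Relaxed); // manager.stop()
    assert_eq!(run_step(&running, &mut persisted, "0000000001.step"), ExecResult::Failed);

    // Only the first step ever reached the store.
    assert_eq!(persisted, vec!["0000000000.step"]);
}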

View File

@@ -279,8 +279,14 @@ pub trait ProcedureManager: Send + Sync + 'static {
/// Registers loader for specific procedure type `name`.
fn register_loader(&self, name: &str, loader: BoxedProcedureLoader) -> Result<()>;
fn start(&self) -> Result<()>;
/// Starts the background GC task.
///
/// Recovers unfinished procedures and reruns them.
///
/// Callers should ensure all loaders are registered.
async fn start(&self) -> Result<()>;
/// Stops the background GC task.
async fn stop(&self) -> Result<()>;
/// Submits a procedure to execute.
@@ -288,11 +294,6 @@ pub trait ProcedureManager: Send + Sync + 'static {
/// Returns a [Watcher] to watch the created procedure.
async fn submit(&self, procedure: ProcedureWithId) -> Result<Watcher>;
/// Recovers unfinished procedures and reruns them.
///
/// Callers should ensure all loaders are registered.
async fn recover(&self) -> Result<()>;
/// Query the procedure state.
///
/// Returns `Ok(None)` if the procedure doesn't exist.

View File

@@ -71,6 +71,7 @@ mod tests {
};
let state_store = Arc::new(ObjectStateStore::new(test_util::new_object_store(&dir)));
let manager = LocalManager::new(config, state_store);
manager.start().await.unwrap();
#[derive(Debug)]
struct MockProcedure {

View File

@@ -13,8 +13,10 @@
// limitations under the License.
use std::collections::HashMap;
use std::slice;
use std::sync::Arc;
use datafusion::arrow::util::pretty::pretty_format_batches;
use datatypes::schema::SchemaRef;
use datatypes::value::Value;
use datatypes::vectors::{Helper, VectorRef};
@@ -169,6 +171,13 @@ impl RecordBatch {
Ok(vectors)
}
/// Pretty-prints this record batch as a table.
pub fn pretty_print(&self) -> String {
pretty_format_batches(slice::from_ref(&self.df_record_batch))
.map(|t| t.to_string())
.unwrap_or("failed to pretty display a record batch".to_string())
}
}
impl Serialize for RecordBatch {
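
pretty_print wraps arrow's pretty_format_batches over a one-element slice and falls back to a fixed message on error. A small usage sketch, assuming the arrow crate with its prettyprint feature enabled (datafusion re-exports it as datafusion::arrow); the schema and data here are made up for illustration:

use std::sync::Arc;

use arrow::array::{ArrayRef, Int64Array, StringArray};
use arrow::datatypes::{DataType, Field, Schema};
use arrow::record_batch::RecordBatch;
use arrow::util::pretty::pretty_format_batches;

fn main() -> arrow::error::Result<()> {
    let schema = Arc::new(Schema::new(vec![
        Field::new("host", DataType::Utf8, false),
        Field::new("value", DataType::Int64, false),
    ]));
    let batch = RecordBatch::try_new(
        schema,
        vec![
            Arc::new(StringArray::from(vec!["a", "b"])) as ArrayRef,
            Arc::new(Int64Array::from(vec![1, 2])) as ArrayRef,
        ],
    )?;

    // `pretty_format_batches` takes a slice of batches and returns a Display-able table.
    let table = pretty_format_batches(std::slice::from_ref(&batch))
        .map(|t| t.to_string())
        .unwrap_or_else(|_| "failed to pretty display a record batch".to_string());
    println!("{table}");
    Ok(())
}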

View File

@@ -14,6 +14,7 @@
// metric stuffs, inspired by databend
use std::fmt;
use std::sync::{Arc, Once, RwLock};
use std::time::{Duration, Instant};
@@ -63,6 +64,7 @@ pub fn try_handle() -> Option<PrometheusHandle> {
pub struct Timer {
start: Instant,
histogram: Histogram,
observed: bool,
}
impl From<Histogram> for Timer {
@@ -71,12 +73,22 @@ impl From<Histogram> for Timer {
}
}
impl fmt::Debug for Timer {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
f.debug_struct("Timer")
.field("start", &self.start)
.field("observed", &self.observed)
.finish()
}
}
impl Timer {
/// Creates a timer from given histogram.
pub fn from_histogram(histogram: Histogram) -> Self {
Self {
start: Instant::now(),
histogram,
observed: false,
}
}
@@ -85,6 +97,7 @@ impl Timer {
Self {
start: Instant::now(),
histogram: register_histogram!(name),
observed: false,
}
}
@@ -93,6 +106,7 @@ impl Timer {
Self {
start: Instant::now(),
histogram: register_histogram!(name, labels),
observed: false,
}
}
@@ -100,11 +114,18 @@ impl Timer {
pub fn elapsed(&self) -> Duration {
self.start.elapsed()
}
/// Discards the timer result.
pub fn discard(mut self) {
self.observed = true;
}
}
impl Drop for Timer {
fn drop(&mut self) {
self.histogram.record(self.elapsed())
if !self.observed {
self.histogram.record(self.elapsed())
}
}
}
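
With the observed flag, the timer records into its histogram at most once, on drop, and discard() lets callers keep a measurement out of the metrics entirely (e.g. on error paths). The same drop-guard pattern in a std-only sketch; the Histogram here is a plain Vec standing in for the metrics crate's handle:

use std::time::{Duration, Instant};

/// Stand-in for a metrics histogram: just collects observed durations.
#[derive(Default)]
struct Histogram {
    samples: Vec<Duration>,
}

struct Timer<'a> {
    start: Instant,
    histogram: &'a mut Histogram,
    observed: bool,
}

impl<'a> Timer<'a> {
    fn new(histogram: &'a mut Histogram) -> Self {
        Self { start: Instant::now(), histogram, observed: false }
    }

    /// Drops the timer without recording anything.
    fn discard(mut self) {
        self.observed = true;
    }
}

impl Drop for Timer<'_> {
    fn drop(&mut self) {
        // Record exactly once, and only if the caller didn't opt out.
        if !self.observed {
            self.histogram.samples.push(self.start.elapsed());
        }
    }
}

fn main() {
    let mut recorded = Histogram::default();
    {
        let _timer = Timer::new(&mut recorded); // records on scope exit
    }
    assert_eq!(recorded.samples.len(), 1);

    let mut skipped = Histogram::default();
    {
        let timer = Timer::new(&mut skipped);
        timer.discard(); // error path: keep this sample out of the metrics
    }
    assert!(skipped.samples.is_empty());
}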

View File

@@ -93,18 +93,22 @@ impl From<Date> for DateTime {
}
impl DateTime {
pub fn new(val: i64) -> Self {
Self(val)
/// Create a new [DateTime] from milliseconds elapsed since "1970-01-01 00:00:00 UTC" (UNIX Epoch).
pub fn new(millis: i64) -> Self {
Self(millis)
}
/// Get the milliseconds elapsed since "1970-01-01 00:00:00 UTC" (UNIX Epoch).
pub fn val(&self) -> i64 {
self.0
}
/// Convert to [NaiveDateTime].
pub fn to_chrono_datetime(&self) -> Option<NaiveDateTime> {
NaiveDateTime::from_timestamp_millis(self.0)
}
/// Convert to [common_time::date].
pub fn to_date(&self) -> Option<Date> {
self.to_chrono_datetime().map(|d| Date::from(d.date()))
}
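
The doc comments pin DateTime's inner i64 to milliseconds since the UNIX epoch. A quick check of that convention with chrono 0.4 (the same from_timestamp_millis call the diff relies on); the sample value is arbitrary:

use chrono::NaiveDateTime;

fn main() {
    // 2023-10-18 00:00:00 UTC expressed as milliseconds since the UNIX epoch.
    let millis: i64 = 1_697_587_200_000;

    let datetime = NaiveDateTime::from_timestamp_millis(millis)
        .expect("value is within chrono's supported range");
    assert_eq!(datetime.to_string(), "2023-10-18 00:00:00");

    // The date part mirrors the `to_date` conversion in the diff.
    let date = datetime.date();
    assert_eq!(date.to_string(), "2023-10-18");
}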

View File

@@ -37,7 +37,7 @@ use storage::config::{
};
use storage::scheduler::SchedulerConfig;
pub const DEFAULT_OBJECT_STORE_CACHE_SIZE: ReadableSize = ReadableSize(1024);
pub const DEFAULT_OBJECT_STORE_CACHE_SIZE: ReadableSize = ReadableSize::mb(256);
/// Default data home in file storage
const DEFAULT_DATA_HOME: &str = "/tmp/greptimedb";
@@ -90,6 +90,15 @@ impl Default for StorageConfig {
#[serde(default)]
pub struct FileConfig {}
#[derive(Debug, Clone, Serialize, Deserialize, Default)]
#[serde(default)]
pub struct ObjectStorageCacheConfig {
/// The local file cache directory
pub cache_path: Option<String>,
/// The cache capacity in bytes
pub cache_capacity: Option<ReadableSize>,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(default)]
pub struct S3Config {
@@ -101,8 +110,8 @@ pub struct S3Config {
pub secret_access_key: SecretString,
pub endpoint: Option<String>,
pub region: Option<String>,
pub cache_path: Option<String>,
pub cache_capacity: Option<ReadableSize>,
#[serde(flatten)]
pub cache: ObjectStorageCacheConfig,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -115,8 +124,8 @@ pub struct OssConfig {
#[serde(skip_serializing)]
pub access_key_secret: SecretString,
pub endpoint: String,
pub cache_path: Option<String>,
pub cache_capacity: Option<ReadableSize>,
#[serde(flatten)]
pub cache: ObjectStorageCacheConfig,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -130,8 +139,8 @@ pub struct AzblobConfig {
pub account_key: SecretString,
pub endpoint: String,
pub sas_token: Option<String>,
pub cache_path: Option<String>,
pub cache_capacity: Option<ReadableSize>,
#[serde(flatten)]
pub cache: ObjectStorageCacheConfig,
}
#[derive(Debug, Clone, Serialize, Deserialize)]
@@ -143,8 +152,8 @@ pub struct GcsConfig {
#[serde(skip_serializing)]
pub credential_path: SecretString,
pub endpoint: String,
pub cache_path: Option<String>,
pub cache_capacity: Option<ReadableSize>,
#[serde(flatten)]
pub cache: ObjectStorageCacheConfig,
}
impl Default for S3Config {
@@ -156,8 +165,7 @@ impl Default for S3Config {
secret_access_key: SecretString::from(String::default()),
endpoint: Option::default(),
region: Option::default(),
cache_path: Option::default(),
cache_capacity: Option::default(),
cache: ObjectStorageCacheConfig::default(),
}
}
}
@@ -170,8 +178,7 @@ impl Default for OssConfig {
access_key_id: SecretString::from(String::default()),
access_key_secret: SecretString::from(String::default()),
endpoint: String::default(),
cache_path: Option::default(),
cache_capacity: Option::default(),
cache: ObjectStorageCacheConfig::default(),
}
}
}
@@ -184,9 +191,8 @@ impl Default for AzblobConfig {
account_name: SecretString::from(String::default()),
account_key: SecretString::from(String::default()),
endpoint: String::default(),
cache_path: Option::default(),
cache_capacity: Option::default(),
sas_token: Option::default(),
cache: ObjectStorageCacheConfig::default(),
}
}
}
@@ -199,8 +205,7 @@ impl Default for GcsConfig {
scope: String::default(),
credential_path: SecretString::from(String::default()),
endpoint: String::default(),
cache_path: Option::default(),
cache_capacity: Option::default(),
cache: ObjectStorageCacheConfig::default(),
}
}
}
@@ -328,9 +333,9 @@ pub struct DatanodeOptions {
pub rpc_hostname: Option<String>,
pub rpc_runtime_size: usize,
// Max gRPC receiving(decoding) message size
pub rpc_max_recv_message_size: usize,
pub rpc_max_recv_message_size: ReadableSize,
// Max gRPC sending(encoding) message size
pub rpc_max_send_message_size: usize,
pub rpc_max_send_message_size: ReadableSize,
pub heartbeat: HeartbeatOptions,
pub http: HttpOptions,
pub meta_client: Option<MetaClientOptions>,
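
Because the new ObjectStorageCacheConfig is embedded with #[serde(flatten)], existing configuration files keep their flat cache_path / cache_capacity keys. A trimmed-down sketch of that behaviour, assuming the serde and toml crates; field names other than the cache ones are illustrative, and ReadableSize is replaced with a plain string to stay self-contained.

use serde::Deserialize;

#[derive(Debug, Default, Deserialize)]
#[serde(default)]
struct ObjectStorageCacheConfig {
    cache_path: Option<String>,
    cache_capacity: Option<String>, // "256MB" etc.; ReadableSize in the real config
}

#[derive(Debug, Deserialize)]
struct S3Config {
    bucket: String,
    #[serde(flatten)]
    cache: ObjectStorageCacheConfig,
}

fn main() {
    // The cache keys stay at the top level of the [storage.store] table.
    let cfg: S3Config = toml::from_str(
        r#"
        bucket = "my-bucket"
        cache_path = "/tmp/s3_cache"
        cache_capacity = "256MB"
        "#,
    )
    .unwrap();
    assert_eq!(cfg.cache.cache_path.as_deref(), Some("/tmp/s3_cache"));
    assert_eq!(cfg.cache.cache_capacity.as_deref(), Some("256MB"));
}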

View File

@@ -14,13 +14,11 @@
//! Datanode implementation.
use std::collections::HashMap;
use std::path::Path;
use std::sync::Arc;
use catalog::kvbackend::MetaKvBackend;
use catalog::memory::MemoryCatalogManager;
use common_base::readable_size::ReadableSize;
use common_base::Plugins;
use common_error::ext::BoxedError;
use common_greptimedb_telemetry::GreptimeDBTelemetryTask;
@@ -63,8 +61,6 @@ use crate::region_server::RegionServer;
use crate::server::Services;
use crate::store;
pub const DEFAULT_OBJECT_STORE_CACHE_SIZE: ReadableSize = ReadableSize(1024);
const OPEN_REGION_PARALLELISM: usize = 16;
/// Datanode service.
@@ -76,6 +72,7 @@ pub struct Datanode {
region_server: RegionServer,
greptimedb_telemetry_task: Arc<GreptimeDBTelemetryTask>,
leases_notifier: Option<Arc<Notify>>,
plugins: Plugins,
}
impl Datanode {
@@ -141,11 +138,15 @@ impl Datanode {
pub fn region_server(&self) -> RegionServer {
self.region_server.clone()
}
pub fn plugins(&self) -> Plugins {
self.plugins.clone()
}
}
pub struct DatanodeBuilder {
opts: DatanodeOptions,
plugins: Arc<Plugins>,
plugins: Plugins,
meta_client: Option<MetaClient>,
kv_backend: Option<KvBackendRef>,
}
@@ -153,11 +154,7 @@ pub struct DatanodeBuilder {
impl DatanodeBuilder {
/// `kv_backend` is optional. If absent, the builder will try to build one
/// by using the given `opts`
pub fn new(
opts: DatanodeOptions,
kv_backend: Option<KvBackendRef>,
plugins: Arc<Plugins>,
) -> Self {
pub fn new(opts: DatanodeOptions, kv_backend: Option<KvBackendRef>, plugins: Plugins) -> Self {
Self {
opts,
plugins,
@@ -266,6 +263,7 @@ impl DatanodeBuilder {
greptimedb_telemetry_task,
region_event_receiver,
leases_notifier,
plugins: self.plugins.clone(),
})
}
@@ -286,8 +284,9 @@ impl DatanodeBuilder {
for region_number in table_value.regions {
regions.push((
RegionId::new(table_value.table_id, region_number),
table_value.engine.clone(),
table_value.region_storage_path.clone(),
table_value.region_info.engine.clone(),
table_value.region_info.region_storage_path.clone(),
table_value.region_info.region_options.clone(),
));
}
}
@@ -296,7 +295,7 @@ impl DatanodeBuilder {
let semaphore = Arc::new(tokio::sync::Semaphore::new(OPEN_REGION_PARALLELISM));
let mut tasks = vec![];
for (region_id, engine, store_path) in regions {
for (region_id, engine, store_path, options) in regions {
let region_dir = region_dir(&store_path, region_id);
let semaphore_moved = semaphore.clone();
tasks.push(async move {
@@ -307,7 +306,7 @@ impl DatanodeBuilder {
RegionRequest::Open(RegionOpenRequest {
engine: engine.clone(),
region_dir,
options: HashMap::new(),
options,
}),
)
.await?;
@@ -330,7 +329,7 @@ impl DatanodeBuilder {
async fn new_region_server(
opts: &DatanodeOptions,
plugins: Arc<Plugins>,
plugins: Plugins,
log_store: Arc<RaftEngineLogStore>,
event_listener: RegionServerEventListenerRef,
) -> Result<RegionServer> {
@@ -363,10 +362,15 @@ impl DatanodeBuilder {
Ok(region_server)
}
// internal utils
/// Build [RaftEngineLogStore]
async fn build_log_store(opts: &DatanodeOptions) -> Result<Arc<RaftEngineLogStore>> {
let data_home = normalize_dir(&opts.storage.data_home);
let wal_dir = format!("{}{WAL_DIR}", data_home);
let wal_dir = match &opts.wal.dir {
Some(dir) => dir.clone(),
None => format!("{}{WAL_DIR}", data_home),
};
let wal_config = opts.wal.clone();
// create WAL directory
@@ -410,3 +414,80 @@ impl DatanodeBuilder {
Ok(engines)
}
}
#[cfg(test)]
mod tests {
use std::assert_matches::assert_matches;
use std::collections::{BTreeMap, HashMap};
use std::sync::Arc;
use common_base::Plugins;
use common_meta::key::datanode_table::DatanodeTableManager;
use common_meta::kv_backend::memory::MemoryKvBackend;
use common_meta::kv_backend::KvBackendRef;
use store_api::region_request::RegionRequest;
use store_api::storage::RegionId;
use crate::config::DatanodeOptions;
use crate::datanode::DatanodeBuilder;
use crate::tests::{mock_region_server, MockRegionEngine};
async fn setup_table_datanode(kv: &KvBackendRef) {
let mgr = DatanodeTableManager::new(kv.clone());
let txn = mgr
.build_create_txn(
1028,
"mock",
"foo/bar/weny",
HashMap::from([("foo".to_string(), "bar".to_string())]),
BTreeMap::from([(0, vec![0, 1, 2])]),
)
.unwrap();
let r = kv.txn(txn).await.unwrap();
assert!(r.succeeded);
}
#[tokio::test]
async fn test_initialize_region_server() {
let mut mock_region_server = mock_region_server();
let (mock_region, mut mock_region_handler) = MockRegionEngine::new();
mock_region_server.register_engine(mock_region.clone());
let builder = DatanodeBuilder::new(
DatanodeOptions {
node_id: Some(0),
..Default::default()
},
None,
Plugins::default(),
);
let kv = Arc::new(MemoryKvBackend::default()) as _;
setup_table_datanode(&kv).await;
builder
.initialize_region_server(&mock_region_server, kv.clone(), false)
.await
.unwrap();
for i in 0..3 {
let (region_id, req) = mock_region_handler.recv().await.unwrap();
assert_eq!(region_id, RegionId::new(1028, i));
if let RegionRequest::Open(req) = req {
assert_eq!(
req.options,
HashMap::from([("foo".to_string(), "bar".to_string())])
)
} else {
unreachable!()
}
}
assert_matches!(
mock_region_handler.try_recv(),
Err(tokio::sync::mpsc::error::TryRecvError::Empty)
);
}
}
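
The WAL directory resolution in build_log_store now honors an explicitly configured wal.dir and only falls back to the data home. A hedged sketch of that fallback, assuming WAL_DIR names a "wal/" subdirectory and that normalize_dir leaves a trailing slash:

fn resolve_wal_dir(configured_dir: Option<&str>, normalized_data_home: &str) -> String {
    const WAL_DIR: &str = "wal/"; // assumption: the real constant points at the wal subdirectory
    match configured_dir {
        Some(dir) => dir.to_string(),
        None => format!("{normalized_data_home}{WAL_DIR}"),
    }
}

fn main() {
    assert_eq!(resolve_wal_dir(None, "/tmp/greptimedb/"), "/tmp/greptimedb/wal/");
    assert_eq!(resolve_wal_dir(Some("/data/wal"), "/tmp/greptimedb/"), "/data/wal");
}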

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::atomic::AtomicBool;
use std::sync::Arc;
use async_trait::async_trait;
@@ -60,6 +61,8 @@ pub async fn get_greptimedb_telemetry_task(
if !enable || cfg!(test) || cfg!(debug_assertions) {
return Arc::new(GreptimeDBTelemetryTask::disable());
}
// Always enable.
let should_report = Arc::new(AtomicBool::new(true));
match mode {
Mode::Standalone => Arc::new(GreptimeDBTelemetryTask::enable(
@@ -70,7 +73,9 @@ pub async fn get_greptimedb_telemetry_task(
uuid: default_get_uuid(&working_home),
retry: 0,
}),
should_report.clone(),
)),
should_report,
)),
Mode::Distributed => Arc::new(GreptimeDBTelemetryTask::disable()),
}
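
The diff threads a shared should_report flag into GreptimeDBTelemetryTask::enable. In this file it is always true, but since the same Arc<AtomicBool> is presumably flipped elsewhere (the metasrv leader-election path named in the commit list), reporting can be muted without stopping the task. A minimal sketch of that sharing:

use std::sync::atomic::{AtomicBool, Ordering};
use std::sync::Arc;

fn main() {
    let should_report = Arc::new(AtomicBool::new(true));

    // The reporter keeps one handle...
    let reporter_flag = Arc::clone(&should_report);
    let report = move || {
        if reporter_flag.load(Ordering::Relaxed) {
            println!("sending telemetry report");
        } else {
            println!("reporting muted");
        }
    };

    report();                                      // sends
    // ...and the other handle is flipped when reporting should stop.
    should_report.store(false, Ordering::Relaxed);
    report();                                      // muted
}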

View File

@@ -69,7 +69,7 @@ impl HeartbeatTask {
) -> Result<Self> {
let region_alive_keeper = Arc::new(RegionAliveKeeper::new(
region_server.clone(),
opts.heartbeat.interval_millis,
opts.heartbeat.interval.as_millis() as u64,
));
let resp_handler_executor = Arc::new(HandlerGroupExecutor::new(vec![
Arc::new(ParseMailboxMessageHandler),
@@ -86,7 +86,7 @@ impl HeartbeatTask {
running: Arc::new(AtomicBool::new(false)),
meta_client: Arc::new(meta_client),
region_server,
interval: opts.heartbeat.interval_millis,
interval: opts.heartbeat.interval.as_millis() as u64,
resp_handler_executor,
region_alive_keeper,
})
@@ -332,14 +332,14 @@ pub async fn new_metasrv_client(
let member_id = node_id;
let config = ChannelConfig::new()
.timeout(Duration::from_millis(meta_config.timeout_millis))
.connect_timeout(Duration::from_millis(meta_config.connect_timeout_millis))
.timeout(meta_config.timeout)
.connect_timeout(meta_config.connect_timeout)
.tcp_nodelay(meta_config.tcp_nodelay);
let channel_manager = ChannelManager::with_config(config.clone());
let heartbeat_channel_manager = ChannelManager::with_config(
config
.timeout(Duration::from_millis(meta_config.heartbeat_timeout_millis))
.connect_timeout(Duration::from_millis(meta_config.heartbeat_timeout_millis)),
.timeout(meta_config.timeout)
.connect_timeout(meta_config.connect_timeout),
);
let mut meta_client = MetaClientBuilder::new(cluster_id, member_id, Role::Datanode)

View File

@@ -12,8 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use async_trait::async_trait;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
@@ -49,12 +47,13 @@ impl RegionHeartbeatResponseHandler {
Instruction::OpenRegion(OpenRegion {
region_ident,
region_storage_path,
options,
}) => {
let region_id = Self::region_ident_to_region_id(&region_ident);
let open_region_req = RegionRequest::Open(RegionOpenRequest {
engine: region_ident.engine,
region_dir: region_dir(&region_storage_path, region_id),
options: HashMap::new(),
options,
});
Ok((region_id, open_region_req))
}

View File

@@ -14,5 +14,7 @@
//! datanode metrics
pub const HANDLE_SQL_ELAPSED: &str = "datanode.handle_sql_elapsed";
pub const HANDLE_PROMQL_ELAPSED: &str = "datanode.handle_promql_elapsed";
/// The elapsed time of handling a request in the region_server.
pub const HANDLE_REGION_REQUEST_ELAPSED: &str = "datanode.handle_region_request_elapsed";
/// Region request type label.
pub const REGION_REQUEST_TYPE: &str = "datanode.region_request_type";

View File

@@ -28,7 +28,7 @@ use common_query::physical_plan::DfPhysicalPlanAdapter;
use common_query::{DfPhysicalPlan, Output};
use common_recordbatch::SendableRecordBatchStream;
use common_runtime::Runtime;
use common_telemetry::{info, warn};
use common_telemetry::{info, timer, warn};
use dashmap::DashMap;
use datafusion::catalog::schema::SchemaProvider;
use datafusion::catalog::{CatalogList, CatalogProvider};
@@ -44,7 +44,7 @@ use query::QueryEngineRef;
use servers::error::{self as servers_error, ExecuteGrpcRequestSnafu, Result as ServerResult};
use servers::grpc::flight::{FlightCraft, FlightRecordBatchStream, TonicStream};
use servers::grpc::region_server::RegionServerHandler;
use session::context::QueryContext;
use session::context::{QueryContextBuilder, QueryContextRef};
use snafu::{OptionExt, ResultExt};
use store_api::metadata::RegionMetadataRef;
use store_api::region_engine::RegionEngineRef;
@@ -227,7 +227,11 @@ impl RegionServerInner {
region_id: RegionId,
request: RegionRequest,
) -> Result<Output> {
// TODO(ruihang): add some metrics
let request_type = request.request_type();
let _timer = timer!(
crate::metrics::HANDLE_REGION_REQUEST_ELAPSED,
&[(crate::metrics::REGION_REQUEST_TYPE, request_type),]
);
let region_change = match &request {
RegionRequest::Create(create) => RegionChange::Register(create.engine.clone()),
@@ -285,12 +289,17 @@ impl RegionServerInner {
// TODO(ruihang): add metrics and set trace id
let QueryRequest {
header: _,
header,
region_id,
plan,
} = request;
let region_id = RegionId::from_u64(region_id);
let ctx: QueryContextRef = header
.as_ref()
.map(|h| Arc::new(h.into()))
.unwrap_or_else(|| QueryContextBuilder::default().build());
// build dummy catalog list
let engine = self
.region_map
@@ -306,7 +315,7 @@ impl RegionServerInner {
.context(DecodeLogicalPlanSnafu)?;
let result = self
.query_engine
.execute(logical_plan.into(), QueryContext::arc())
.execute(logical_plan.into(), ctx)
.await
.context(ExecuteLogicalPlanSnafu)?;
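
handle_read now derives the query context from the optional request header instead of always using QueryContext::arc(). A simplified sketch of that fallback; the struct and the tuple "header" below are illustrative stand-ins for the real proto and session types.

use std::sync::Arc;

#[derive(Debug, Default)]
struct QueryContext {
    current_catalog: String,
    current_schema: String,
}

// Use the header's catalog/schema when present, otherwise a default context.
fn context_from_header(header: Option<(&str, &str)>) -> Arc<QueryContext> {
    header
        .map(|(catalog, schema)| {
            Arc::new(QueryContext {
                current_catalog: catalog.to_string(),
                current_schema: schema.to_string(),
            })
        })
        .unwrap_or_else(|| Arc::new(QueryContext::default()))
}

fn main() {
    assert_eq!(context_from_header(None).current_catalog, "");
    assert_eq!(context_from_header(Some(("greptime", "public"))).current_schema, "public");
}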

View File

@@ -40,8 +40,8 @@ impl Services {
let region_server_handler = Some(Arc::new(region_server.clone()) as _);
let runtime = region_server.runtime();
let grpc_config = GrpcServerConfig {
max_recv_message_size: opts.rpc_max_recv_message_size,
max_send_message_size: opts.rpc_max_send_message_size,
max_recv_message_size: opts.rpc_max_recv_message_size.as_bytes() as usize,
max_send_message_size: opts.rpc_max_send_message_size.as_bytes() as usize,
};
Ok(Self {
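
The gRPC server still takes raw usize byte counts, so the ReadableSize options are converted at this boundary. A stand-in sketch of that conversion; ReadableSize here is a minimal local copy, not the real common_base type.

#[derive(Clone, Copy, Debug)]
struct ReadableSize(u64); // bytes

impl ReadableSize {
    const fn mb(n: u64) -> Self {
        ReadableSize(n * 1024 * 1024)
    }
    fn as_bytes(&self) -> u64 {
        self.0
    }
}

fn main() {
    let max_recv = ReadableSize::mb(512);
    // As in GrpcServerConfig { max_recv_message_size: opts.rpc_max_recv_message_size.as_bytes() as usize, .. }
    let max_recv_message_size = max_recv.as_bytes() as usize;
    assert_eq!(max_recv_message_size, 512 * 1024 * 1024);
}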

View File

@@ -76,29 +76,33 @@ async fn create_object_store_with_cache(
) -> Result<ObjectStore> {
let (cache_path, cache_capacity) = match store_config {
ObjectStoreConfig::S3(s3_config) => {
let path = s3_config.cache_path.as_ref();
let path = s3_config.cache.cache_path.as_ref();
let capacity = s3_config
.cache
.cache_capacity
.unwrap_or(DEFAULT_OBJECT_STORE_CACHE_SIZE);
(path, capacity)
}
ObjectStoreConfig::Oss(oss_config) => {
let path = oss_config.cache_path.as_ref();
let path = oss_config.cache.cache_path.as_ref();
let capacity = oss_config
.cache
.cache_capacity
.unwrap_or(DEFAULT_OBJECT_STORE_CACHE_SIZE);
(path, capacity)
}
ObjectStoreConfig::Azblob(azblob_config) => {
let path = azblob_config.cache_path.as_ref();
let path = azblob_config.cache.cache_path.as_ref();
let capacity = azblob_config
.cache
.cache_capacity
.unwrap_or(DEFAULT_OBJECT_STORE_CACHE_SIZE);
(path, capacity)
}
ObjectStoreConfig::Gcs(gcs_config) => {
let path = gcs_config.cache_path.as_ref();
let path = gcs_config.cache.cache_path.as_ref();
let capacity = gcs_config
.cache
.cache_capacity
.unwrap_or(DEFAULT_OBJECT_STORE_CACHE_SIZE);
(path, capacity)
@@ -119,6 +123,12 @@ async fn create_object_store_with_cache(
let cache_layer = LruCacheLayer::new(Arc::new(cache_store), cache_capacity.0 as usize)
.await
.context(error::InitBackendSnafu)?;
info!(
"Enabled local object storage cache, path: {}, capacity: {}.",
path, cache_capacity
);
Ok(object_store.layer(cache_layer))
} else {
Ok(object_store)
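
With every backend now carrying the same ObjectStorageCacheConfig, the per-backend lookup above reduces to reading one shared field plus the 256MB default. A simplified sketch of that shape; the enum and field types are trimmed down, and the real code matches on the full ObjectStoreConfig.

#[derive(Default)]
struct ObjectStorageCacheConfig {
    cache_path: Option<String>,
    cache_capacity: Option<u64>, // bytes; ReadableSize in the real config
}

enum ObjectStoreConfig {
    S3 { cache: ObjectStorageCacheConfig },
    Gcs { cache: ObjectStorageCacheConfig },
}

const DEFAULT_OBJECT_STORE_CACHE_SIZE: u64 = 256 * 1024 * 1024; // 256MB

fn cache_settings(cfg: &ObjectStoreConfig) -> (Option<&String>, u64) {
    // Every variant exposes the same `cache` field, so one arm per backend suffices.
    let cache = match cfg {
        ObjectStoreConfig::S3 { cache } | ObjectStoreConfig::Gcs { cache } => cache,
    };
    (
        cache.cache_path.as_ref(),
        cache.cache_capacity.unwrap_or(DEFAULT_OBJECT_STORE_CACHE_SIZE),
    )
}

fn main() {
    let cfg = ObjectStoreConfig::S3 {
        cache: ObjectStorageCacheConfig { cache_path: Some("/tmp/cache".into()), cache_capacity: None },
    };
    let (path, capacity) = cache_settings(&cfg);
    assert_eq!(path.map(String::as_str), Some("/tmp/cache"));
    assert_eq!(capacity, DEFAULT_OBJECT_STORE_CACHE_SIZE);
}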

View File

@@ -13,10 +13,12 @@
// limitations under the License.
use std::any::Any;
use std::collections::HashMap;
use std::sync::Arc;
use api::v1::meta::HeartbeatResponse;
use async_trait::async_trait;
use common_error::ext::BoxedError;
use common_function::scalars::aggregate::AggregateFunctionMetaRef;
use common_function::scalars::FunctionRef;
use common_meta::heartbeat::handler::{
@@ -26,6 +28,7 @@ use common_meta::heartbeat::mailbox::{HeartbeatMailbox, MessageMeta};
use common_meta::instruction::{Instruction, OpenRegion, RegionIdent};
use common_query::prelude::ScalarUdf;
use common_query::Output;
use common_recordbatch::SendableRecordBatchStream;
use common_runtime::Runtime;
use query::dataframe::DataFrame;
use query::plan::LogicalPlan;
@@ -33,7 +36,12 @@ use query::planner::LogicalPlanner;
use query::query_engine::DescribeResult;
use query::QueryEngine;
use session::context::QueryContextRef;
use store_api::metadata::RegionMetadataRef;
use store_api::region_engine::RegionEngine;
use store_api::region_request::RegionRequest;
use store_api::storage::{RegionId, ScanRequest};
use table::TableRef;
use tokio::sync::mpsc::{Receiver, Sender};
use crate::event_listener::NoopRegionServerEventListener;
use crate::region_server::RegionServer;
@@ -79,6 +87,7 @@ fn open_region_instruction() -> Instruction {
engine: "mito2".to_string(),
},
"path/dir",
HashMap::new(),
))
}
@@ -129,3 +138,52 @@ pub fn mock_region_server() -> RegionServer {
Box::new(NoopRegionServerEventListener),
)
}
pub struct MockRegionEngine {
sender: Sender<(RegionId, RegionRequest)>,
}
impl MockRegionEngine {
pub fn new() -> (Arc<Self>, Receiver<(RegionId, RegionRequest)>) {
let (tx, rx) = tokio::sync::mpsc::channel(8);
(Arc::new(Self { sender: tx }), rx)
}
}
#[async_trait::async_trait]
impl RegionEngine for MockRegionEngine {
fn name(&self) -> &str {
"mock"
}
async fn handle_request(
&self,
region_id: RegionId,
request: RegionRequest,
) -> Result<Output, BoxedError> {
let _ = self.sender.send((region_id, request)).await;
Ok(Output::AffectedRows(0))
}
async fn handle_query(
&self,
_region_id: RegionId,
_request: ScanRequest,
) -> Result<SendableRecordBatchStream, BoxedError> {
unimplemented!()
}
async fn get_metadata(&self, _region_id: RegionId) -> Result<RegionMetadataRef, BoxedError> {
unimplemented!()
}
async fn stop(&self) -> Result<(), BoxedError> {
Ok(())
}
fn set_writable(&self, _region_id: RegionId, _writable: bool) -> Result<(), BoxedError> {
Ok(())
}
}

View File

@@ -17,6 +17,7 @@ mod constraint;
mod raw;
use std::collections::HashMap;
use std::fmt;
use std::sync::Arc;
use arrow::datatypes::{Field, Schema as ArrowSchema};
@@ -32,7 +33,7 @@ pub use crate::schema::raw::RawSchema;
pub const VERSION_KEY: &str = "greptime:version";
/// A common schema, should be immutable.
#[derive(Debug, Clone, PartialEq, Eq)]
#[derive(Clone, PartialEq, Eq)]
pub struct Schema {
column_schemas: Vec<ColumnSchema>,
name_to_index: HashMap<String, usize>,
@@ -48,6 +49,17 @@ pub struct Schema {
version: u32,
}
impl fmt::Debug for Schema {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
f.debug_struct("Schema")
.field("column_schemas", &self.column_schemas)
.field("name_to_index", &self.name_to_index)
.field("timestamp_index", &self.timestamp_index)
.field("version", &self.version)
.finish()
}
}
impl Schema {
/// Initial version of the schema.
pub const INITIAL_VERSION: u32 = 0;

View File

@@ -13,6 +13,7 @@
// limitations under the License.
use std::collections::HashMap;
use std::fmt;
use arrow::datatypes::Field;
use serde::{Deserialize, Serialize};
@@ -33,7 +34,7 @@ pub const COMMENT_KEY: &str = "greptime:storage:comment";
const DEFAULT_CONSTRAINT_KEY: &str = "greptime:default_constraint";
/// Schema of a column, used as an immutable struct.
#[derive(Debug, Clone, PartialEq, Eq, Serialize, Deserialize)]
#[derive(Clone, PartialEq, Eq, Serialize, Deserialize)]
pub struct ColumnSchema {
pub name: String,
pub data_type: ConcreteDataType,
@@ -43,6 +44,30 @@ pub struct ColumnSchema {
metadata: Metadata,
}
impl fmt::Debug for ColumnSchema {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!(
f,
"{} {} {}",
self.name,
self.data_type,
if self.is_nullable { "null" } else { "not null" },
)?;
// Add default constraint if present
if let Some(default_constraint) = &self.default_constraint {
write!(f, " default={:?}", default_constraint)?;
}
// Add metadata if present
if !self.metadata.is_empty() {
write!(f, " metadata={:?}", self.metadata)?;
}
Ok(())
}
}
impl ColumnSchema {
pub fn new<T: Into<String>>(
name: T,
@@ -394,4 +419,18 @@ mod tests {
let column_schema = ColumnSchema::new("test", ConcreteDataType::int32_datatype(), false);
assert!(column_schema.create_default().unwrap().is_none());
}
#[test]
fn test_debug_for_column_schema() {
let column_schema_int8 =
ColumnSchema::new("test_column_1", ConcreteDataType::int8_datatype(), true);
let column_schema_int32 =
ColumnSchema::new("test_column_2", ConcreteDataType::int32_datatype(), false);
let formatted_int8 = format!("{:?}", column_schema_int8);
let formatted_int32 = format!("{:?}", column_schema_int32);
assert_eq!(formatted_int8, "test_column_1 Int8 null");
assert_eq!(formatted_int32, "test_column_2 Int32 not null");
}
}

View File

@@ -182,9 +182,6 @@ pub enum Error {
#[snafu(display("Failed to find leaders when altering table, table: {}", table))]
LeaderNotFound { table: String, location: Location },
#[snafu(display("Table already exists: `{}`", table))]
TableAlreadyExist { table: String, location: Location },
#[snafu(display("Failed to found context value: {}", key))]
ContextValueNotFound { key: String, location: Location },
@@ -272,6 +269,9 @@ pub enum Error {
source: operator::error::Error,
location: Location,
},
#[snafu(display("Invalid auth config"))]
IllegalAuthConfig { source: auth::error::Error },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -288,6 +288,7 @@ impl ErrorExt for Error {
| Error::ColumnNotFound { .. }
| Error::MissingMetasrvOpts { .. }
| Error::UnsupportedFormat { .. }
| Error::IllegalAuthConfig { .. }
| Error::EmptyData { .. }
| Error::ColumnNoneDefaultValue { .. }
| Error::IncompleteGrpcRequest { .. } => StatusCode::InvalidArguments,
@@ -341,7 +342,6 @@ impl ErrorExt for Error {
| Error::ExecLogicalPlan { source, .. } => source.status_code(),
Error::LeaderNotFound { .. } => StatusCode::StorageUnavailable,
Error::TableAlreadyExist { .. } => StatusCode::TableAlreadyExists,
Error::InvokeRegionServer { source, .. } => source.status_code(),
Error::External { source, .. } => source.status_code(),

View File

@@ -41,6 +41,7 @@ pub struct FrontendOptions {
pub meta_client: Option<MetaClientOptions>,
pub logging: LoggingOptions,
pub datanode: DatanodeOptions,
pub user_provider: Option<String>,
}
impl Default for FrontendOptions {
@@ -60,6 +61,7 @@ impl Default for FrontendOptions {
meta_client: None,
logging: LoggingOptions::default(),
datanode: DatanodeOptions::default(),
user_provider: None,
}
}
}

View File

@@ -49,8 +49,8 @@ impl HeartbeatTask {
) -> Self {
HeartbeatTask {
meta_client,
report_interval: heartbeat_opts.interval_millis,
retry_interval: heartbeat_opts.retry_interval_millis,
report_interval: heartbeat_opts.interval.as_millis() as u64,
retry_interval: heartbeat_opts.retry_interval.as_millis() as u64,
resp_handler_executor,
}
}

View File

@@ -13,9 +13,7 @@
// limitations under the License.
use async_trait::async_trait;
use common_meta::cache_invalidator::{
CacheInvalidator, Context, KvCacheInvalidatorRef, TableMetadataCacheInvalidator,
};
use common_meta::cache_invalidator::{CacheInvalidatorRef, Context};
use common_meta::error::Result as MetaResult;
use common_meta::heartbeat::handler::{
HandleControl, HeartbeatResponseHandler, HeartbeatResponseHandlerContext,
@@ -26,7 +24,7 @@ use futures::future::Either;
#[derive(Clone)]
pub struct InvalidateTableCacheHandler {
table_metadata_cache_invalidator: TableMetadataCacheInvalidator,
cache_invalidator: CacheInvalidatorRef,
}
#[async_trait]
@@ -41,7 +39,7 @@ impl HeartbeatResponseHandler for InvalidateTableCacheHandler {
async fn handle(&self, ctx: &mut HeartbeatResponseHandlerContext) -> MetaResult<HandleControl> {
let mailbox = ctx.mailbox.clone();
let cache_invalidator = self.table_metadata_cache_invalidator.clone();
let cache_invalidator = self.cache_invalidator.clone();
let (meta, invalidator) = match ctx.incoming_message.take() {
Some((meta, Instruction::InvalidateTableIdCache(table_id))) => (
@@ -86,11 +84,7 @@ impl HeartbeatResponseHandler for InvalidateTableCacheHandler {
}
impl InvalidateTableCacheHandler {
pub fn new(backend_cache_invalidator: KvCacheInvalidatorRef) -> Self {
Self {
table_metadata_cache_invalidator: TableMetadataCacheInvalidator::new(
backend_cache_invalidator,
),
}
pub fn new(cache_invalidator: CacheInvalidatorRef) -> Self {
Self { cache_invalidator }
}
}

View File

@@ -22,7 +22,6 @@ mod script;
mod standalone;
use std::collections::HashMap;
use std::sync::Arc;
use std::time::Duration;
use api::v1::meta::Role;
use async_trait::async_trait;
@@ -122,9 +121,7 @@ pub struct Instance {
script_executor: Arc<ScriptExecutor>,
statement_executor: Arc<StatementExecutor>,
query_engine: QueryEngineRef,
/// plugins: this map holds extensions to customize query or auth
/// behaviours.
plugins: Arc<Plugins>,
plugins: Plugins,
servers: Arc<ServerHandlers>,
heartbeat_task: Option<HeartbeatTask>,
inserter: InserterRef,
@@ -132,10 +129,7 @@ pub struct Instance {
}
impl Instance {
pub async fn try_new_distributed(
opts: &FrontendOptions,
plugins: Arc<Plugins>,
) -> Result<Self> {
pub async fn try_new_distributed(opts: &FrontendOptions, plugins: Plugins) -> Result<Self> {
let meta_client = Self::create_meta_client(opts).await?;
let datanode_clients = Arc::new(DatanodeClients::default());
@@ -146,7 +140,7 @@ impl Instance {
pub async fn try_new_distributed_with(
meta_client: Arc<MetaClient>,
datanode_clients: Arc<DatanodeClients>,
plugins: Arc<Plugins>,
plugins: Plugins,
opts: &FrontendOptions,
) -> Result<Self> {
let meta_backend = Arc::new(CachedMetaKvBackend::new(meta_client.clone()));
@@ -236,14 +230,12 @@ impl Instance {
);
let channel_config = ChannelConfig::new()
.timeout(Duration::from_millis(meta_client_options.timeout_millis))
.connect_timeout(Duration::from_millis(
meta_client_options.connect_timeout_millis,
))
.timeout(meta_client_options.timeout)
.connect_timeout(meta_client_options.connect_timeout)
.tcp_nodelay(meta_client_options.tcp_nodelay);
let ddl_channel_config = channel_config.clone().timeout(Duration::from_millis(
meta_client_options.ddl_timeout_millis,
));
let ddl_channel_config = channel_config
.clone()
.timeout(meta_client_options.ddl_timeout);
let channel_manager = ChannelManager::with_config(channel_config);
let ddl_channel_manager = ChannelManager::with_config(ddl_channel_config);
@@ -297,7 +289,7 @@ impl Instance {
kv_backend: KvBackendRef,
procedure_manager: ProcedureManagerRef,
catalog_manager: CatalogManagerRef,
plugins: Arc<Plugins>,
plugins: Plugins,
region_server: RegionServer,
) -> Result<Self> {
let partition_manager = Arc::new(PartitionRuleManager::new(kv_backend.clone()));
@@ -377,7 +369,7 @@ impl Instance {
&self.catalog_manager
}
pub fn plugins(&self) -> Arc<Plugins> {
pub fn plugins(&self) -> Plugins {
self.plugins.clone()
}
@@ -593,7 +585,7 @@ impl PrometheusHandler for Instance {
}
pub fn check_permission(
plugins: Arc<Plugins>,
plugins: Plugins,
stmt: &Statement,
query_ctx: &QueryContextRef,
) -> Result<()> {
@@ -664,6 +656,7 @@ fn validate_param(name: &ObjectName, query_ctx: &QueryContextRef) -> Result<()>
mod tests {
use std::collections::HashMap;
use common_base::Plugins;
use query::query_engine::options::QueryOptions;
use session::context::QueryContext;
use sql::dialect::GreptimeDbDialect;
@@ -674,11 +667,10 @@ mod tests {
#[test]
fn test_exec_validation() {
let query_ctx = QueryContext::arc();
let plugins = Plugins::new();
let plugins: Plugins = Plugins::new();
plugins.insert(QueryOptions {
disallow_cross_schema_query: true,
});
let plugins = Arc::new(plugins);
let sql = r#"
SELECT * FROM demo;
@@ -704,7 +696,7 @@ mod tests {
re.unwrap();
}
fn replace_test(template_sql: &str, plugins: Arc<Plugins>, query_ctx: &QueryContextRef) {
fn replace_test(template_sql: &str, plugins: Plugins, query_ctx: &QueryContextRef) {
// test right
let right = vec![("", ""), ("", "public."), ("greptime.", "public.")];
for (catalog, schema) in right {
@@ -732,7 +724,7 @@ mod tests {
template.format(&vars).unwrap()
}
fn do_test(sql: &str, plugins: Arc<Plugins>, query_ctx: &QueryContextRef, is_ok: bool) {
fn do_test(sql: &str, plugins: Plugins, query_ctx: &QueryContextRef, is_ok: bool) {
let stmt = &parse_stmt(sql, &GreptimeDbDialect {}).unwrap()[0];
let re = check_permission(plugins, stmt, query_ctx);
if is_ok {
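
Plugins is now passed and stored by value rather than behind an Arc, which only makes sense if the type is itself a cheap handle around shared state. The sketch below shows such a cheaply clonable plugin registry under that assumption; it is not the real common_base::Plugins.

use std::any::{Any, TypeId};
use std::collections::HashMap;
use std::sync::{Arc, RwLock};

#[derive(Clone, Default)]
struct Plugins {
    // Shared interior: cloning Plugins only copies the Arc pointer.
    inner: Arc<RwLock<HashMap<TypeId, Arc<dyn Any + Send + Sync>>>>,
}

impl Plugins {
    fn insert<T: Any + Send + Sync>(&self, value: T) {
        self.inner.write().unwrap().insert(TypeId::of::<T>(), Arc::new(value));
    }

    fn get<T: Any + Send + Sync + Clone>(&self) -> Option<T> {
        self.inner
            .read()
            .unwrap()
            .get(&TypeId::of::<T>())
            .and_then(|v| v.downcast_ref::<T>().cloned())
    }
}

fn main() {
    let plugins = Plugins::default();
    plugins.insert(42u32);
    let copy = plugins.clone();            // cheap: shares the same Arc
    assert_eq!(copy.get::<u32>(), Some(42u32));
}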

View File

@@ -20,7 +20,6 @@ use auth::UserProviderRef;
use common_base::Plugins;
use common_runtime::Builder as RuntimeBuilder;
use common_telemetry::info;
use servers::configurator::ConfiguratorRef;
use servers::error::InternalIoSnafu;
use servers::grpc::{GrpcServer, GrpcServerConfig};
use servers::http::HttpServerBuilder;
@@ -47,7 +46,7 @@ impl Services {
pub(crate) async fn build<T>(
opts: &FrontendOptions,
instance: Arc<T>,
plugins: Arc<Plugins>,
plugins: Plugins,
) -> Result<ServerHandlers>
where
T: FrontendInstance,
@@ -69,8 +68,8 @@ impl Services {
);
let grpc_config = GrpcServerConfig {
max_recv_message_size: opts.max_recv_message_size,
max_send_message_size: opts.max_send_message_size,
max_recv_message_size: opts.max_recv_message_size.as_bytes() as usize,
max_send_message_size: opts.max_send_message_size.as_bytes() as usize,
};
let grpc_server = GrpcServer::new(
Some(grpc_config),
@@ -120,7 +119,7 @@ impl Services {
let http_server = http_server_builder
.with_metrics_handler(MetricsHandler)
.with_script_handler(instance.clone())
.with_configurator(plugins.get::<ConfiguratorRef>())
.with_plugins(plugins)
.with_greptime_config_options(opts.to_toml_string())
.build();
result.push((Box::new(http_server), http_addr));

View File

@@ -12,6 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use common_base::readable_size::ReadableSize;
use common_grpc::channel_manager::{
DEFAULT_MAX_GRPC_RECV_MESSAGE_SIZE, DEFAULT_MAX_GRPC_SEND_MESSAGE_SIZE,
};
@@ -22,9 +23,9 @@ pub struct GrpcOptions {
pub addr: String,
pub runtime_size: usize,
// Max gRPC receiving(decoding) message size
pub max_recv_message_size: usize,
pub max_recv_message_size: ReadableSize,
// Max gRPC sending(encoding) message size
pub max_send_message_size: usize,
pub max_send_message_size: ReadableSize,
}
impl Default for GrpcOptions {

View File

@@ -14,6 +14,7 @@ common-macro = { workspace = true }
common-meta = { workspace = true }
common-telemetry = { workspace = true }
etcd-client.workspace = true
humantime-serde.workspace = true
rand.workspace = true
serde.workspace = true
serde_json.workspace = true

View File

@@ -12,6 +12,8 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use serde::{Deserialize, Serialize};
pub mod client;
@@ -21,31 +23,45 @@ pub mod error;
#[derive(Clone, Debug, Serialize, Deserialize, PartialEq, Eq)]
pub struct MetaClientOptions {
pub metasrv_addrs: Vec<String>,
pub timeout_millis: u64,
#[serde(default = "default_heartbeat_timeout_millis")]
pub heartbeat_timeout_millis: u64,
#[serde(default = "default_ddl_timeout_millis")]
pub ddl_timeout_millis: u64,
pub connect_timeout_millis: u64,
#[serde(default = "default_timeout")]
#[serde(with = "humantime_serde")]
pub timeout: Duration,
#[serde(default = "default_heartbeat_timeout")]
#[serde(with = "humantime_serde")]
pub heartbeat_timeout: Duration,
#[serde(default = "default_ddl_timeout")]
#[serde(with = "humantime_serde")]
pub ddl_timeout: Duration,
#[serde(default = "default_connect_timeout")]
#[serde(with = "humantime_serde")]
pub connect_timeout: Duration,
pub tcp_nodelay: bool,
}
fn default_heartbeat_timeout_millis() -> u64 {
500u64
fn default_heartbeat_timeout() -> Duration {
Duration::from_millis(500u64)
}
fn default_ddl_timeout_millis() -> u64 {
10_000u64
fn default_ddl_timeout() -> Duration {
Duration::from_millis(10_000u64)
}
fn default_connect_timeout() -> Duration {
Duration::from_millis(1_000u64)
}
fn default_timeout() -> Duration {
Duration::from_millis(3_000u64)
}
impl Default for MetaClientOptions {
fn default() -> Self {
Self {
metasrv_addrs: vec!["127.0.0.1:3002".to_string()],
timeout_millis: 3_000u64,
heartbeat_timeout_millis: default_heartbeat_timeout_millis(),
ddl_timeout_millis: default_ddl_timeout_millis(),
connect_timeout_millis: 1_000u64,
timeout: default_timeout(),
heartbeat_timeout: default_heartbeat_timeout(),
ddl_timeout: default_ddl_timeout(),
connect_timeout: default_connect_timeout(),
tcp_nodelay: true,
}
}
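
With the humantime_serde attributes above, the duration options are written as human-readable strings ("3s", "500ms") instead of raw millisecond integers. A trimmed-down sketch, assuming the serde, toml, and humantime-serde crates and keeping only two of the real MetaClientOptions fields:

use std::time::Duration;
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct MetaClientOptions {
    #[serde(with = "humantime_serde")]
    timeout: Duration,
    #[serde(with = "humantime_serde")]
    heartbeat_timeout: Duration,
}

fn main() {
    let cfg: MetaClientOptions = toml::from_str(
        r#"
        timeout = "3s"
        heartbeat_timeout = "500ms"
        "#,
    )
    .unwrap();
    assert_eq!(cfg.timeout, Duration::from_secs(3));
    assert_eq!(cfg.heartbeat_timeout, Duration::from_millis(500));
}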

Some files were not shown because too many files have changed in this diff.