Compare commits

...

35 Commits

Author SHA1 Message Date
Yingwen
eff07d5986 Merge pull request #16 from GreptimeTeam/feat/grpc-unary-insert
feat: GRPC unary insert method
2023-03-17 19:26:48 +08:00
luofucong
40c55e4da7 feat: GRPC unary insert method 2023-03-17 19:11:46 +08:00
Yingwen
8d113550cf Merge pull request #15 from GreptimeTeam/ci/release-yaml
ci: Adjust release yaml
2023-03-15 11:23:31 +08:00
evenyag
15a0ed0853 ci: Adjust release yaml 2023-03-14 19:36:55 +08:00
Ruihang Xia
44493e9d8c feat: impl flush on shutdown (#14)
* feat: impl flush on shutdown

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* powerful if-else!

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-14 18:29:38 +08:00
Lei, HUANG
efd15839d4 Merge pull request #13 from GreptimeTeam/chore/flush-message
chore(servers): Change flush message
2023-03-14 17:34:22 +08:00
evenyag
1f62b36537 chore(servers): Change flush message 2023-03-14 17:16:39 +08:00
Ruihang Xia
7b8e65ce93 chore: merge public repo (#12)
* feat: implement table flush (#1121)

* feat: add flush method for trait

* feat: implement flush via grpc

* chore: move table_dir/region_name/region_id to table crate

* chore: Update src/mito/src/table.rs

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>

* fix: use correct env var (#1166)

* fix: use correct env var

* fix: move COPY up so rustup know it's nightly

* fix: add `pyo3_backend` in GHA yml

* chore: name for `TODO`

* temp: not set `pyo3_backend` before find DSO

* fix: release linux with pyo3_backend

* fix: failed to run subquery wrapped in two parentheses (#1157)

* refactor: add the separate GitHub Action job to push the image to the UCloud registry (#1170)

* refactor: make the cmd hold the application instance (#1159)

* fix: export 'PYO3_CROSS_LIB_DIR' when cargo build for aarch64-linux and refactor matrix opts (#1171)

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
Co-authored-by: discord9 <55937128+discord9@users.noreply.github.com>
Co-authored-by: LFC <luofucong@greptime.com>
Co-authored-by: zyy17 <zyylsxm@gmail.com>
2023-03-14 16:42:40 +08:00
Yingwen
6475339ad0 Merge pull request #10 from GreptimeTeam/feat/manual-flush-http
feat: manual flush http API
2023-03-14 16:37:56 +08:00
Lei, HUANG
0bd802c70d Update src/servers/src/error.rs
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-14 16:26:47 +08:00
Lei, HUANG
28d07c7a2e Merge pull request #9 from GreptimeTeam/docs/edge-example-toml
docs(config): Add edge example
2023-03-14 16:25:29 +08:00
Lei, HUANG
dc33b0c0ce Merge pull request #11 from GreptimeTeam/chore/adjust-log
chore(storage): Adjust log level
2023-03-14 16:25:05 +08:00
evenyag
4b4f8f27e8 chore(storage): Adjust log level 2023-03-14 16:14:25 +08:00
Lei, HUANG
c994e0de88 fix: format 2023-03-14 16:13:57 +08:00
Lei, HUANG
d1ba9ca126 fix: unit tests 2023-03-14 16:11:25 +08:00
evenyag
0877dabce2 docs(config): Add edge example 2023-03-14 16:02:11 +08:00
Lei, HUANG
8b9671f376 feat: manual flush http API 2023-03-14 15:49:37 +08:00
Yingwen
dcf66d9d52 chore(datanode): derive serde default for Wal/CompactionConfig (#8) 2023-03-14 15:22:13 +08:00
Yingwen
65b61e78ad Merge pull request #6 from GreptimeTeam/docs/project-version
docs: Set greptimedb-edge version to 0.1.0
2023-03-14 15:01:29 +08:00
evenyag
3638704f95 docs: Set greptimedb-edge version to 0.1.0 2023-03-14 14:38:00 +08:00
Lei, HUANG
8a2f4256bf Merge pull request #5 from GreptimeTeam/feat/wait-flush-done
feat: Region writer wait flush done
2023-03-14 14:28:39 +08:00
evenyag
83aeadc506 feat: Region writer wait flush done 2023-03-14 14:09:22 +08:00
Lei, HUANG
f556052951 Merge pull request #3 from GreptimeTeam/feat/merge-public
feat: Merge develop branch of public repo
2023-03-14 14:05:38 +08:00
LFC
8658d428e0 fix: failed to run subquery wrapped in two parentheses (#1157) 2023-03-14 11:52:25 +08:00
discord9
e8e11072f8 fix: use correct env var (#1166)
* fix: use correct env var

* fix: move COPY up so rustup know it's nightly

* fix: add `pyo3_backend` in GHA yml

* chore: name for `TODO`

* temp: not set `pyo3_backend` before find DSO

* fix: release linux with pyo3_backend
2023-03-14 11:52:25 +08:00
Weny Xu
6f0f72c377 feat: implement table flush (#1121)
* feat: add flush method for trait

* feat: implement flush via grpc

* chore: move table_dir/region_name/region_id to table crate

* chore: Update src/mito/src/table.rs

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-03-14 11:52:25 +08:00
Yingwen
32030a8194 Merge pull request #4 from GreptimeTeam/ci/remove-unnecessary
ci: Remove api doc ci and coverage statistics
2023-03-14 11:51:21 +08:00
evenyag
0f7cde2411 ci: Remove api doc ci and coverage statistics 2023-03-14 11:46:48 +08:00
Lei, HUANG
1ece402ec8 Merge pull request #2 from GreptimeTeam/feat/skip-wal
feat: skip wal for user table
2023-03-14 11:39:20 +08:00
Lei, HUANG
7ee54b3e69 fix: don't skip wal in test 2023-03-14 11:19:17 +08:00
Lei, HUANG
9b4dcba8cf fix: check errors 2023-03-14 10:51:16 +08:00
Lei, HUANG
c3bcb1111f fix: fmt 2023-03-13 20:24:25 +08:00
Lei, HUANG
a4ebd03a61 feat: skip wal for user table 2023-03-13 20:18:41 +08:00
Lei, HUANG
e7daf1226f feat: tune parquet parameters 2023-03-13 20:17:23 +08:00
Lei, HUANG
05c0ea9a59 feat: tune parquet parameters (#1)
* feat: tune parquet parameters

* Update src/storage/src/sst/parquet.rs

---------

Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
2023-03-13 20:11:06 +08:00
83 changed files with 1920 additions and 697 deletions


@@ -1,42 +1,42 @@
on: # on:
push: # push:
branches: # branches:
- develop # - develop
paths-ignore: # paths-ignore:
- 'docs/**' # - 'docs/**'
- 'config/**' # - 'config/**'
- '**.md' # - '**.md'
- '.dockerignore' # - '.dockerignore'
- 'docker/**' # - 'docker/**'
- '.gitignore' # - '.gitignore'
name: Build API docs # name: Build API docs
env: # env:
RUST_TOOLCHAIN: nightly-2023-02-26 # RUST_TOOLCHAIN: nightly-2023-02-26
jobs: # jobs:
apidoc: # apidoc:
runs-on: ubuntu-latest # runs-on: ubuntu-latest
steps: # steps:
- uses: actions/checkout@v3 # - uses: actions/checkout@v3
- uses: arduino/setup-protoc@v1 # - uses: arduino/setup-protoc@v1
with: # with:
repo-token: ${{ secrets.GITHUB_TOKEN }} # repo-token: ${{ secrets.GITHUB_TOKEN }}
- uses: dtolnay/rust-toolchain@master # - uses: dtolnay/rust-toolchain@master
with: # with:
toolchain: ${{ env.RUST_TOOLCHAIN }} # toolchain: ${{ env.RUST_TOOLCHAIN }}
- run: cargo doc --workspace --no-deps --document-private-items # - run: cargo doc --workspace --no-deps --document-private-items
- run: | # - run: |
cat <<EOF > target/doc/index.html # cat <<EOF > target/doc/index.html
<!DOCTYPE html> # <!DOCTYPE html>
<html> # <html>
<head> # <head>
<meta http-equiv="refresh" content="0; url='greptime/'" /> # <meta http-equiv="refresh" content="0; url='greptime/'" />
</head> # </head>
<body></body></html> # <body></body></html>
EOF # EOF
- name: Publish dist directory # - name: Publish dist directory
uses: JamesIves/github-pages-deploy-action@v4 # uses: JamesIves/github-pages-deploy-action@v4
with: # with:
folder: target/doc # folder: target/doc


@@ -213,10 +213,11 @@ jobs:
python-version: '3.10' python-version: '3.10'
- name: Install PyArrow Package - name: Install PyArrow Package
run: pip install pyarrow run: pip install pyarrow
- name: Install cargo-llvm-cov # - name: Install cargo-llvm-cov
uses: taiki-e/install-action@cargo-llvm-cov # uses: taiki-e/install-action@cargo-llvm-cov
- name: Collect coverage data - name: Collect coverage data
run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend run: cargo nextest run -F pyo3_backend
# run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend
env: env:
CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld" CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
RUST_BACKTRACE: 1 RUST_BACKTRACE: 1


@@ -2,9 +2,9 @@ on:
push: push:
tags: tags:
- "v*.*.*" - "v*.*.*"
schedule: # schedule:
# At 00:00 on Monday. # # At 00:00 on Monday.
- cron: '0 0 * * 1' # - cron: '0 0 * * 1'
workflow_dispatch: workflow_dispatch:
name: Release name: Release
@@ -29,21 +29,23 @@ jobs:
os: ubuntu-2004-16-cores os: ubuntu-2004-16-cores
file: greptime-linux-amd64 file: greptime-linux-amd64
continue-on-error: false continue-on-error: false
# opts: "-F pyo3_backend"
- arch: aarch64-unknown-linux-gnu - arch: aarch64-unknown-linux-gnu
os: ubuntu-2004-16-cores os: ubuntu-2004-16-cores
file: greptime-linux-arm64 file: greptime-linux-arm64
continue-on-error: true continue-on-error: true
- arch: aarch64-apple-darwin # opts: "-F pyo3_backend"
os: macos-latest # - arch: aarch64-apple-darwin
file: greptime-darwin-arm64 # os: macos-latest
continue-on-error: true # file: greptime-darwin-arm64
- arch: x86_64-apple-darwin # continue-on-error: true
os: macos-latest # - arch: x86_64-apple-darwin
file: greptime-darwin-amd64 # os: macos-latest
continue-on-error: true # file: greptime-darwin-amd64
# continue-on-error: true
runs-on: ${{ matrix.os }} runs-on: ${{ matrix.os }}
continue-on-error: ${{ matrix.continue-on-error }} continue-on-error: ${{ matrix.continue-on-error }}
if: github.repository == 'GreptimeTeam/greptimedb' if: github.repository == 'GreptimeTeam/greptimedb-edge'
steps: steps:
- name: Checkout sources - name: Checkout sources
uses: actions/checkout@v3 uses: actions/checkout@v3
@@ -103,8 +105,6 @@ jobs:
run: | run: |
sudo chmod +x ./docker/aarch64/compile-python.sh sudo chmod +x ./docker/aarch64/compile-python.sh
sudo ./docker/aarch64/compile-python.sh sudo ./docker/aarch64/compile-python.sh
export PYO3_CROSS_LIB_DIR=${PWD}/python310-aarch64/lib
echo $PYO3_CROSS_LIB_DIR
- name: Install rust toolchain - name: Install rust toolchain
uses: dtolnay/rust-toolchain@master uses: dtolnay/rust-toolchain@master
@@ -118,8 +118,18 @@ jobs:
- name: Run tests - name: Run tests
run: make unit-test integration-test sqlness-test run: make unit-test integration-test sqlness-test
- name: Run cargo build for aarch64-linux
if: contains(matrix.arch, 'aarch64-unknown-linux-gnu')
run: |
# TODO(zyy17): We should make PYO3_CROSS_LIB_DIR configurable.
export PYO3_CROSS_LIB_DIR=$(pwd)/python_arm64_build/lib
echo "PYO3_CROSS_LIB_DIR: $PYO3_CROSS_LIB_DIR"
alias python=python3
cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Run cargo build - name: Run cargo build
run: cargo build ${{ matrix.opts }} --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} if: contains(matrix.arch, 'aarch64-unknown-linux-gnu') == false
run: cargo build --profile ${{ env.CARGO_PROFILE }} --locked --target ${{ matrix.arch }} ${{ matrix.opts }}
- name: Calculate checksum and rename binary - name: Calculate checksum and rename binary
shell: bash shell: bash
@@ -144,7 +154,7 @@ jobs:
name: Release artifacts name: Release artifacts
needs: [build] needs: [build]
runs-on: ubuntu-latest runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' if: github.repository == 'GreptimeTeam/greptimedb-edge'
steps: steps:
- name: Checkout sources - name: Checkout sources
uses: actions/checkout@v3 uses: actions/checkout@v3
@@ -183,100 +193,142 @@ jobs:
files: | files: |
**/greptime-* **/greptime-*
docker: # docker:
name: Build docker image # name: Build docker image
needs: [build] # needs: [build]
runs-on: ubuntu-latest # runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' # if: github.repository == 'GreptimeTeam/greptimedb'
steps: # steps:
- name: Checkout sources # - name: Checkout sources
uses: actions/checkout@v3 # uses: actions/checkout@v3
- name: Login to UCloud Container Registry # - name: Login to UCloud Container Registry
uses: docker/login-action@v2 # uses: docker/login-action@v2
with: # with:
registry: uhub.service.ucloud.cn # registry: uhub.service.ucloud.cn
username: ${{ secrets.UCLOUD_USERNAME }} # username: ${{ secrets.UCLOUD_USERNAME }}
password: ${{ secrets.UCLOUD_PASSWORD }} # password: ${{ secrets.UCLOUD_PASSWORD }}
- name: Login to Dockerhub # - name: Login to Dockerhub
uses: docker/login-action@v2 # uses: docker/login-action@v2
with: # with:
username: ${{ secrets.DOCKERHUB_USERNAME }} # username: ${{ secrets.DOCKERHUB_USERNAME }}
password: ${{ secrets.DOCKERHUB_TOKEN }} # password: ${{ secrets.DOCKERHUB_TOKEN }}
- name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD} # - name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
shell: bash # shell: bash
if: github.event_name == 'schedule' # if: github.event_name == 'schedule'
run: | # run: |
buildTime=`date "+%Y%m%d"` # buildTime=`date "+%Y%m%d"`
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }} # SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }}
echo "IMAGE_TAG=${SCHEDULED_BUILD_VERSION:1}" >> $GITHUB_ENV # echo "IMAGE_TAG=${SCHEDULED_BUILD_VERSION:1}" >> $GITHUB_ENV
- name: Configure tag # If the release tag is v0.1.0, then the image version tag will be 0.1.0. # - name: Configure tag # If the release tag is v0.1.0, then the image version tag will be 0.1.0.
shell: bash # shell: bash
if: github.event_name != 'schedule' # if: github.event_name != 'schedule'
run: | # run: |
VERSION=${{ github.ref_name }} # VERSION=${{ github.ref_name }}
echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV # echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV
- name: Set up QEMU # - name: Set up QEMU
uses: docker/setup-qemu-action@v2 # uses: docker/setup-qemu-action@v2
- name: Set up buildx # - name: Set up buildx
uses: docker/setup-buildx-action@v2 # uses: docker/setup-buildx-action@v2
- name: Download amd64 binary # - name: Download amd64 binary
uses: actions/download-artifact@v3 # uses: actions/download-artifact@v3
with: # with:
name: greptime-linux-amd64 # name: greptime-linux-amd64
path: amd64 # path: amd64
- name: Unzip the amd64 artifacts # - name: Unzip the amd64 artifacts
run: | # run: |
cd amd64 # cd amd64
tar xvf greptime-linux-amd64.tgz # tar xvf greptime-linux-amd64.tgz
rm greptime-linux-amd64.tgz # rm greptime-linux-amd64.tgz
- name: Download arm64 binary # - name: Download arm64 binary
id: download-arm64 # id: download-arm64
uses: actions/download-artifact@v3 # uses: actions/download-artifact@v3
with: # with:
name: greptime-linux-arm64 # name: greptime-linux-arm64
path: arm64 # path: arm64
- name: Unzip the arm64 artifacts # - name: Unzip the arm64 artifacts
id: unzip-arm64 # id: unzip-arm64
if: success() || steps.download-arm64.conclusion == 'success' # if: success() || steps.download-arm64.conclusion == 'success'
run: | # run: |
cd arm64 # cd arm64
tar xvf greptime-linux-arm64.tgz # tar xvf greptime-linux-arm64.tgz
rm greptime-linux-arm64.tgz # rm greptime-linux-arm64.tgz
- name: Build and push all # - name: Build and push all
uses: docker/build-push-action@v3 # uses: docker/build-push-action@v3
if: success() || steps.unzip-arm64.conclusion == 'success' # Build and push all platform if unzip-arm64 succeeds # if: success() || steps.unzip-arm64.conclusion == 'success' # Build and push all platform if unzip-arm64 succeeds
with: # with:
context: . # context: .
file: ./docker/ci/Dockerfile # file: ./docker/ci/Dockerfile
push: true # push: true
platforms: linux/amd64,linux/arm64 # platforms: linux/amd64,linux/arm64
tags: | # tags: |
greptime/greptimedb:latest # greptime/greptimedb:latest
greptime/greptimedb:${{ env.IMAGE_TAG }} # greptime/greptimedb:${{ env.IMAGE_TAG }}
uhub.service.ucloud.cn/greptime/greptimedb:latest
uhub.service.ucloud.cn/greptime/greptimedb:${{ env.IMAGE_TAG }}
- name: Build and push amd64 only # - name: Build and push amd64 only
uses: docker/build-push-action@v3 # uses: docker/build-push-action@v3
if: success() || steps.download-arm64.conclusion == 'failure' # Only build and push amd64 platform if download-arm64 fails # if: success() || steps.download-arm64.conclusion == 'failure' # Only build and push amd64 platform if download-arm64 fails
with: # with:
context: . # context: .
file: ./docker/ci/Dockerfile # file: ./docker/ci/Dockerfile
push: true # push: true
platforms: linux/amd64 # platforms: linux/amd64
tags: | # tags: |
greptime/greptimedb:latest # greptime/greptimedb:latest
greptime/greptimedb:${{ env.IMAGE_TAG }} # greptime/greptimedb:${{ env.IMAGE_TAG }}
uhub.service.ucloud.cn/greptime/greptimedb:latest
uhub.service.ucloud.cn/greptime/greptimedb:${{ env.IMAGE_TAG }} # docker-push-uhub:
# name: Push docker image to UCloud Container Registry
# needs: [docker]
# runs-on: ubuntu-latest
# if: github.repository == 'GreptimeTeam/greptimedb'
# # Push to uhub may fail(500 error), but we don't want to block the release process. The failed job will be retried manually.
# continue-on-error: true
# steps:
# - name: Checkout sources
# uses: actions/checkout@v3
# - name: Set up QEMU
# uses: docker/setup-qemu-action@v2
# - name: Set up Docker Buildx
# uses: docker/setup-buildx-action@v2
# - name: Login to UCloud Container Registry
# uses: docker/login-action@v2
# with:
# registry: uhub.service.ucloud.cn
# username: ${{ secrets.UCLOUD_USERNAME }}
# password: ${{ secrets.UCLOUD_PASSWORD }}
# - name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
# shell: bash
# if: github.event_name == 'schedule'
# run: |
# buildTime=`date "+%Y%m%d"`
# SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-$buildTime-${{ env.SCHEDULED_PERIOD }}
# echo "IMAGE_TAG=${SCHEDULED_BUILD_VERSION:1}" >> $GITHUB_ENV
# - name: Configure tag # If the release tag is v0.1.0, then the image version tag will be 0.1.0.
# shell: bash
# if: github.event_name != 'schedule'
# run: |
# VERSION=${{ github.ref_name }}
# echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV
# - name: Push image to uhub # Use 'docker buildx imagetools create' to create a new image base on source image.
# run: |
# docker buildx imagetools create \
# --tag uhub.service.ucloud.cn/greptime/greptimedb:latest \
# --tag uhub.service.ucloud.cn/greptime/greptimedb:${{ env.IMAGE_TAG }} \
# greptime/greptimedb:${{ env.IMAGE_TAG }}

Cargo.lock (generated, 226 changed lines)

@@ -135,7 +135,7 @@ checksum = "8f1f8f5a6f3d50d89e3797d7593a50f96bb2aaa20ca0cc7be1fb673232c91d72"
[[package]] [[package]]
name = "api" name = "api"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arrow-flight", "arrow-flight",
"common-base", "common-base",
@@ -190,9 +190,9 @@ checksum = "8da52d66c7071e2e3fa2a1e5c6d088fec47b593032b254f5e980de8ea54454d6"
[[package]] [[package]]
name = "arrow" name = "arrow"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3724c874f1517cf898cd1c3ad18ab5071edf893c48e73139ab1e16cf0f2affe" checksum = "f410d3907b6b3647b9e7bca4551274b2e3d716aa940afb67b7287257401da921"
dependencies = [ dependencies = [
"ahash 0.8.3", "ahash 0.8.3",
"arrow-arith", "arrow-arith",
@@ -214,9 +214,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-arith" name = "arrow-arith"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e958823b8383ca14d0a2e973de478dd7674cd9f72837f8c41c132a0fda6a4e5e" checksum = "f87391cf46473c9bc53dab68cb8872c3a81d4dfd1703f1c8aa397dba9880a043"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -229,9 +229,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-array" name = "arrow-array"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db670eab50e76654065b5aed930f4367101fcddcb2223802007d1e0b4d5a2579" checksum = "d35d5475e65c57cffba06d0022e3006b677515f99b54af33a7cd54f6cdd4a5b5"
dependencies = [ dependencies = [
"ahash 0.8.3", "ahash 0.8.3",
"arrow-buffer", "arrow-buffer",
@@ -245,9 +245,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-buffer" name = "arrow-buffer"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9f0e01c931882448c0407bd32311a624b9f099739e94e786af68adc97016b5f2" checksum = "68b4ec72eda7c0207727df96cf200f539749d736b21f3e782ece113e18c1a0a7"
dependencies = [ dependencies = [
"half 2.2.1", "half 2.2.1",
"num", "num",
@@ -255,9 +255,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-cast" name = "arrow-cast"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "4bf35d78836c93f80d9362f3ccb47ff5e2c5ecfc270ff42cdf1ef80334961d44" checksum = "0a7285272c9897321dfdba59de29f5b05aeafd3cdedf104a941256d155f6d304"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -271,9 +271,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-csv" name = "arrow-csv"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "0a6aa7c2531d89d01fed8c469a9b1bf97132a0bdf70b4724fe4bbb4537a50880" checksum = "981ee4e7f6a120da04e00d0b39182e1eeacccb59c8da74511de753c56b7fddf7"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -290,9 +290,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-data" name = "arrow-data"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "ea50db4d1e1e4c2da2bfdea7b6d2722eef64267d5ab680d815f7ae42428057f5" checksum = "27cc673ee6989ea6e4b4e8c7d461f7e06026a096c8f0b1a7288885ff71ae1e56"
dependencies = [ dependencies = [
"arrow-buffer", "arrow-buffer",
"arrow-schema", "arrow-schema",
@@ -302,9 +302,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-flight" name = "arrow-flight"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6ad4c883d509d89f05b2891ad889729f17ab2191b5fd22b0cf3660a28cc40af5" checksum = "bd16945f8f3be0f6170b8ced60d414e56239d91a16a3f8800bc1504bc58b2592"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -325,9 +325,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-ipc" name = "arrow-ipc"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a4042fe6585155d1ec28a8e4937ec901a3ca7a19a22b9f6cd3f551b935cd84f5" checksum = "e37b8b69d9e59116b6b538e8514e0ec63a30f08b617ce800d31cb44e3ef64c1a"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -339,9 +339,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-json" name = "arrow-json"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "7c907c4ab4f26970a3719dc06e78e8054a01d0c96da3664d23b941e201b33d2b" checksum = "80c3fa0bed7cfebf6d18e46b733f9cb8a1cb43ce8e6539055ca3e1e48a426266"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -358,9 +358,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-ord" name = "arrow-ord"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "e131b447242a32129efc7932f58ed8931b42f35d8701c1a08f9f524da13b1d3c" checksum = "d247dce7bed6a8d6a3c6debfa707a3a2f694383f0c692a39d736a593eae5ef94"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -372,9 +372,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-row" name = "arrow-row"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b591ef70d76f4ac28dd7666093295fece0e5f9298f49af51ea49c001e1635bb6" checksum = "8d609c0181f963cea5c70fddf9a388595b5be441f3aa1d1cdbf728ca834bbd3a"
dependencies = [ dependencies = [
"ahash 0.8.3", "ahash 0.8.3",
"arrow-array", "arrow-array",
@@ -387,9 +387,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-schema" name = "arrow-schema"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "eb327717d87eb94be5eff3b0cb8987f54059d343ee5235abf7f143c85f54cfc8" checksum = "64951898473bfb8e22293e83a44f02874d2257514d49cd95f9aa4afcff183fbc"
dependencies = [ dependencies = [
"bitflags", "bitflags",
"serde", "serde",
@@ -397,9 +397,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-select" name = "arrow-select"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "79d3c389d1cea86793934f31594f914c8547d82e91e3411d4833ad0aac3266a7" checksum = "2a513d89c2e1ac22b28380900036cf1f3992c6443efc5e079de631dcf83c6888"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -410,9 +410,9 @@ dependencies = [
[[package]] [[package]]
name = "arrow-string" name = "arrow-string"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "30ee67790496dd310ddbf5096870324431e89aa76453e010020ac29b1184d356" checksum = "5288979b2705dae1114c864d73150629add9153b9b8f1d7ee3963db94c372ba5"
dependencies = [ dependencies = [
"arrow-array", "arrow-array",
"arrow-buffer", "arrow-buffer",
@@ -477,6 +477,8 @@ dependencies = [
"pin-project-lite", "pin-project-lite",
"tokio", "tokio",
"xz2", "xz2",
"zstd 0.11.2+zstd.1.5.2",
"zstd-safe 5.0.2+zstd.1.5.2",
] ]
[[package]] [[package]]
@@ -752,7 +754,7 @@ dependencies = [
[[package]] [[package]]
name = "benchmarks" name = "benchmarks"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arrow", "arrow",
"clap 4.1.8", "clap 4.1.8",
@@ -1086,7 +1088,7 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]] [[package]]
name = "catalog" name = "catalog"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"arc-swap", "arc-swap",
@@ -1337,7 +1339,7 @@ dependencies = [
[[package]] [[package]]
name = "client" name = "client"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"arrow-flight", "arrow-flight",
@@ -1360,7 +1362,7 @@ dependencies = [
"prost", "prost",
"rand", "rand",
"snafu", "snafu",
"substrait 0.1.1", "substrait 0.1.0",
"substrait 0.4.1", "substrait 0.4.1",
"tokio", "tokio",
"tonic", "tonic",
@@ -1390,7 +1392,7 @@ dependencies = [
[[package]] [[package]]
name = "cmd" name = "cmd"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"anymap", "anymap",
"build-data", "build-data",
@@ -1418,7 +1420,7 @@ dependencies = [
"servers", "servers",
"session", "session",
"snafu", "snafu",
"substrait 0.1.1", "substrait 0.1.0",
"tikv-jemalloc-ctl", "tikv-jemalloc-ctl",
"tikv-jemallocator", "tikv-jemallocator",
"tokio", "tokio",
@@ -1454,7 +1456,7 @@ checksum = "55b672471b4e9f9e95499ea597ff64941a309b2cdbffcc46f2cc5e2d971fd335"
[[package]] [[package]]
name = "common-base" name = "common-base"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"anymap", "anymap",
"bitvec", "bitvec",
@@ -1468,7 +1470,7 @@ dependencies = [
[[package]] [[package]]
name = "common-catalog" name = "common-catalog"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"chrono", "chrono",
@@ -1485,7 +1487,7 @@ dependencies = [
[[package]] [[package]]
name = "common-error" name = "common-error"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"snafu", "snafu",
"strum", "strum",
@@ -1493,7 +1495,7 @@ dependencies = [
[[package]] [[package]]
name = "common-function" name = "common-function"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arc-swap", "arc-swap",
"chrono-tz", "chrono-tz",
@@ -1516,7 +1518,7 @@ dependencies = [
[[package]] [[package]]
name = "common-function-macro" name = "common-function-macro"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arc-swap", "arc-swap",
"common-query", "common-query",
@@ -1530,7 +1532,7 @@ dependencies = [
[[package]] [[package]]
name = "common-grpc" name = "common-grpc"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"arrow-flight", "arrow-flight",
@@ -1556,7 +1558,7 @@ dependencies = [
[[package]] [[package]]
name = "common-grpc-expr" name = "common-grpc-expr"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -1574,7 +1576,7 @@ dependencies = [
[[package]] [[package]]
name = "common-mem-prof" name = "common-mem-prof"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"common-error", "common-error",
"snafu", "snafu",
@@ -1587,7 +1589,7 @@ dependencies = [
[[package]] [[package]]
name = "common-procedure" name = "common-procedure"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"common-error", "common-error",
@@ -1607,7 +1609,7 @@ dependencies = [
[[package]] [[package]]
name = "common-query" name = "common-query"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"common-base", "common-base",
@@ -1625,7 +1627,7 @@ dependencies = [
[[package]] [[package]]
name = "common-recordbatch" name = "common-recordbatch"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"common-error", "common-error",
"datafusion", "datafusion",
@@ -1641,7 +1643,7 @@ dependencies = [
[[package]] [[package]]
name = "common-runtime" name = "common-runtime"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"common-error", "common-error",
"common-telemetry", "common-telemetry",
@@ -1655,7 +1657,7 @@ dependencies = [
[[package]] [[package]]
name = "common-telemetry" name = "common-telemetry"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"backtrace", "backtrace",
"common-error", "common-error",
@@ -1677,14 +1679,14 @@ dependencies = [
[[package]] [[package]]
name = "common-test-util" name = "common-test-util"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"tempfile", "tempfile",
] ]
[[package]] [[package]]
name = "common-time" name = "common-time"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"chrono", "chrono",
"common-error", "common-error",
@@ -2106,7 +2108,7 @@ dependencies = [
[[package]] [[package]]
name = "datafusion" name = "datafusion"
version = "19.0.0" version = "19.0.0"
source = "git+https://github.com/apache/arrow-datafusion.git?rev=fad360df0132a2fcb264a7c07b2b02f0b1dfc644#fad360df0132a2fcb264a7c07b2b02f0b1dfc644" source = "git+https://github.com/MichaelScofield/arrow-datafusion.git?rev=d7b3c730049f2561755f9d855f638cb580c38eff#d7b3c730049f2561755f9d855f638cb580c38eff"
dependencies = [ dependencies = [
"ahash 0.8.3", "ahash 0.8.3",
"arrow", "arrow",
@@ -2117,6 +2119,7 @@ dependencies = [
"chrono", "chrono",
"dashmap", "dashmap",
"datafusion-common", "datafusion-common",
"datafusion-execution",
"datafusion-expr", "datafusion-expr",
"datafusion-optimizer", "datafusion-optimizer",
"datafusion-physical-expr", "datafusion-physical-expr",
@@ -2147,12 +2150,13 @@ dependencies = [
"url", "url",
"uuid", "uuid",
"xz2", "xz2",
"zstd 0.12.3+zstd.1.5.2",
] ]
[[package]] [[package]]
name = "datafusion-common" name = "datafusion-common"
version = "19.0.0" version = "19.0.0"
source = "git+https://github.com/apache/arrow-datafusion.git?rev=fad360df0132a2fcb264a7c07b2b02f0b1dfc644#fad360df0132a2fcb264a7c07b2b02f0b1dfc644" source = "git+https://github.com/MichaelScofield/arrow-datafusion.git?rev=d7b3c730049f2561755f9d855f638cb580c38eff#d7b3c730049f2561755f9d855f638cb580c38eff"
dependencies = [ dependencies = [
"arrow", "arrow",
"chrono", "chrono",
@@ -2162,10 +2166,27 @@ dependencies = [
"sqlparser", "sqlparser",
] ]
[[package]]
name = "datafusion-execution"
version = "19.0.0"
source = "git+https://github.com/MichaelScofield/arrow-datafusion.git?rev=d7b3c730049f2561755f9d855f638cb580c38eff#d7b3c730049f2561755f9d855f638cb580c38eff"
dependencies = [
"dashmap",
"datafusion-common",
"datafusion-expr",
"hashbrown 0.13.2",
"log",
"object_store",
"parking_lot",
"rand",
"tempfile",
"url",
]
[[package]] [[package]]
name = "datafusion-expr" name = "datafusion-expr"
version = "19.0.0" version = "19.0.0"
source = "git+https://github.com/apache/arrow-datafusion.git?rev=fad360df0132a2fcb264a7c07b2b02f0b1dfc644#fad360df0132a2fcb264a7c07b2b02f0b1dfc644" source = "git+https://github.com/MichaelScofield/arrow-datafusion.git?rev=d7b3c730049f2561755f9d855f638cb580c38eff#d7b3c730049f2561755f9d855f638cb580c38eff"
dependencies = [ dependencies = [
"ahash 0.8.3", "ahash 0.8.3",
"arrow", "arrow",
@@ -2177,7 +2198,7 @@ dependencies = [
[[package]] [[package]]
name = "datafusion-optimizer" name = "datafusion-optimizer"
version = "19.0.0" version = "19.0.0"
source = "git+https://github.com/apache/arrow-datafusion.git?rev=fad360df0132a2fcb264a7c07b2b02f0b1dfc644#fad360df0132a2fcb264a7c07b2b02f0b1dfc644" source = "git+https://github.com/MichaelScofield/arrow-datafusion.git?rev=d7b3c730049f2561755f9d855f638cb580c38eff#d7b3c730049f2561755f9d855f638cb580c38eff"
dependencies = [ dependencies = [
"arrow", "arrow",
"async-trait", "async-trait",
@@ -2194,7 +2215,7 @@ dependencies = [
[[package]] [[package]]
name = "datafusion-physical-expr" name = "datafusion-physical-expr"
version = "19.0.0" version = "19.0.0"
source = "git+https://github.com/apache/arrow-datafusion.git?rev=fad360df0132a2fcb264a7c07b2b02f0b1dfc644#fad360df0132a2fcb264a7c07b2b02f0b1dfc644" source = "git+https://github.com/MichaelScofield/arrow-datafusion.git?rev=d7b3c730049f2561755f9d855f638cb580c38eff#d7b3c730049f2561755f9d855f638cb580c38eff"
dependencies = [ dependencies = [
"ahash 0.8.3", "ahash 0.8.3",
"arrow", "arrow",
@@ -2214,6 +2235,7 @@ dependencies = [
"md-5", "md-5",
"num-traits", "num-traits",
"paste", "paste",
"petgraph",
"rand", "rand",
"regex", "regex",
"sha2", "sha2",
@@ -2224,7 +2246,7 @@ dependencies = [
[[package]] [[package]]
name = "datafusion-row" name = "datafusion-row"
version = "19.0.0" version = "19.0.0"
source = "git+https://github.com/apache/arrow-datafusion.git?rev=fad360df0132a2fcb264a7c07b2b02f0b1dfc644#fad360df0132a2fcb264a7c07b2b02f0b1dfc644" source = "git+https://github.com/MichaelScofield/arrow-datafusion.git?rev=d7b3c730049f2561755f9d855f638cb580c38eff#d7b3c730049f2561755f9d855f638cb580c38eff"
dependencies = [ dependencies = [
"arrow", "arrow",
"datafusion-common", "datafusion-common",
@@ -2235,7 +2257,7 @@ dependencies = [
[[package]] [[package]]
name = "datafusion-sql" name = "datafusion-sql"
version = "19.0.0" version = "19.0.0"
source = "git+https://github.com/apache/arrow-datafusion.git?rev=fad360df0132a2fcb264a7c07b2b02f0b1dfc644#fad360df0132a2fcb264a7c07b2b02f0b1dfc644" source = "git+https://github.com/MichaelScofield/arrow-datafusion.git?rev=d7b3c730049f2561755f9d855f638cb580c38eff#d7b3c730049f2561755f9d855f638cb580c38eff"
dependencies = [ dependencies = [
"arrow-schema", "arrow-schema",
"datafusion-common", "datafusion-common",
@@ -2246,7 +2268,7 @@ dependencies = [
[[package]] [[package]]
name = "datanode" name = "datanode"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"async-compat", "async-compat",
@@ -2297,7 +2319,7 @@ dependencies = [
"sql", "sql",
"storage", "storage",
"store-api", "store-api",
"substrait 0.1.1", "substrait 0.1.0",
"table", "table",
"table-procedure", "table-procedure",
"tokio", "tokio",
@@ -2311,7 +2333,7 @@ dependencies = [
[[package]] [[package]]
name = "datatypes" name = "datatypes"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arrow", "arrow",
"arrow-schema", "arrow-schema",
@@ -2759,7 +2781,7 @@ dependencies = [
[[package]] [[package]]
name = "frontend" name = "frontend"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"async-stream", "async-stream",
@@ -2802,7 +2824,7 @@ dependencies = [
"sql", "sql",
"store-api", "store-api",
"strfmt", "strfmt",
"substrait 0.1.1", "substrait 0.1.0",
"table", "table",
"tokio", "tokio",
"toml", "toml",
@@ -3063,7 +3085,7 @@ checksum = "d2fabcfbdc87f4758337ca535fb41a6d701b65693ce38287d856d1674551ec9b"
[[package]] [[package]]
name = "greptime-proto" name = "greptime-proto"
version = "0.1.0" version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=3a715150563b89d5dfc81a5838eac1f66a5658a1#3a715150563b89d5dfc81a5838eac1f66a5658a1" source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=0a7b790ed41364b5599dff806d1080bd59c5c9f6#0a7b790ed41364b5599dff806d1080bd59c5c9f6"
dependencies = [ dependencies = [
"prost", "prost",
"tonic", "tonic",
@@ -3710,7 +3732,7 @@ dependencies = [
[[package]] [[package]]
name = "log-store" name = "log-store"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arc-swap", "arc-swap",
"async-stream", "async-stream",
@@ -3953,7 +3975,7 @@ dependencies = [
[[package]] [[package]]
name = "meta-client" name = "meta-client"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -3980,7 +4002,7 @@ dependencies = [
[[package]] [[package]]
name = "meta-srv" name = "meta-srv"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"anymap", "anymap",
"api", "api",
@@ -4115,7 +4137,7 @@ dependencies = [
[[package]] [[package]]
name = "mito" name = "mito"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"anymap", "anymap",
"arc-swap", "arc-swap",
@@ -4518,7 +4540,7 @@ dependencies = [
[[package]] [[package]]
name = "object-store" name = "object-store"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"async-trait", "async-trait",
@@ -4785,9 +4807,9 @@ dependencies = [
[[package]] [[package]]
name = "parquet" name = "parquet"
version = "33.0.0" version = "34.0.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b1b076829801167d889795cd1957989055543430fa1469cb1f6e32b789bfc764" checksum = "7ac135ecf63ebb5f53dda0921b0b76d6048b3ef631a5f4760b9e8f863ff00cfa"
dependencies = [ dependencies = [
"ahash 0.8.3", "ahash 0.8.3",
"arrow-array", "arrow-array",
@@ -4813,7 +4835,7 @@ dependencies = [
"thrift 0.17.0", "thrift 0.17.0",
"tokio", "tokio",
"twox-hash", "twox-hash",
"zstd", "zstd 0.12.3+zstd.1.5.2",
] ]
[[package]] [[package]]
@@ -4840,7 +4862,7 @@ dependencies = [
[[package]] [[package]]
name = "partition" name = "partition"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"common-catalog", "common-catalog",
"common-error", "common-error",
@@ -5361,7 +5383,7 @@ dependencies = [
[[package]] [[package]]
name = "promql" name = "promql"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"async-recursion", "async-recursion",
"async-trait", "async-trait",
@@ -5406,9 +5428,9 @@ dependencies = [
[[package]] [[package]]
name = "prost-build" name = "prost-build"
version = "0.11.6" version = "0.11.7"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "a3f8ad728fb08fe212df3c05169e940fbb6d9d16a877ddde14644a983ba2012e" checksum = "a24be1d23b4552a012093e1b93697b73d644ae9590e3253d878d0e77d411b614"
dependencies = [ dependencies = [
"bytes", "bytes",
"heck 0.4.1", "heck 0.4.1",
@@ -5593,7 +5615,7 @@ dependencies = [
[[package]] [[package]]
name = "query" name = "query"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"approx_eq", "approx_eq",
"arc-swap", "arc-swap",
@@ -6606,7 +6628,7 @@ checksum = "1792db035ce95be60c3f8853017b3999209281c24e2ba5bc8e59bf97a0c590c1"
[[package]] [[package]]
name = "script" name = "script"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arrow", "arrow",
"async-trait", "async-trait",
@@ -6836,7 +6858,7 @@ dependencies = [
[[package]] [[package]]
name = "servers" name = "servers"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"aide", "aide",
"api", "api",
@@ -6912,7 +6934,7 @@ dependencies = [
[[package]] [[package]]
name = "session" name = "session"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arc-swap", "arc-swap",
"common-catalog", "common-catalog",
@@ -7149,7 +7171,7 @@ dependencies = [
[[package]] [[package]]
name = "sql" name = "sql"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"catalog", "catalog",
@@ -7184,7 +7206,7 @@ dependencies = [
[[package]] [[package]]
name = "sqlness-runner" name = "sqlness-runner"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"client", "client",
@@ -7201,9 +7223,9 @@ dependencies = [
[[package]] [[package]]
name = "sqlparser" name = "sqlparser"
version = "0.30.0" version = "0.32.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "db67dc6ef36edb658196c3fef0464a80b53dbbc194a904e81f9bd4190f9ecc5b" checksum = "0366f270dbabb5cc2e4c88427dc4c08bba144f81e32fbd459a013f26a4d16aa0"
dependencies = [ dependencies = [
"log", "log",
"sqlparser_derive", "sqlparser_derive",
@@ -7262,7 +7284,7 @@ dependencies = [
[[package]] [[package]]
name = "storage" name = "storage"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"arc-swap", "arc-swap",
"arrow", "arrow",
@@ -7310,7 +7332,7 @@ dependencies = [
[[package]] [[package]]
name = "store-api" name = "store-api"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"async-stream", "async-stream",
"async-trait", "async-trait",
@@ -7442,7 +7464,7 @@ dependencies = [
[[package]] [[package]]
name = "substrait" name = "substrait"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"async-recursion", "async-recursion",
"async-trait", "async-trait",
@@ -7536,7 +7558,7 @@ dependencies = [
[[package]] [[package]]
name = "table" name = "table"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"anymap", "anymap",
"async-trait", "async-trait",
@@ -7553,6 +7575,7 @@ dependencies = [
"datafusion", "datafusion",
"datafusion-common", "datafusion-common",
"datafusion-expr", "datafusion-expr",
"datafusion-physical-expr",
"datatypes", "datatypes",
"derive_builder 0.11.2", "derive_builder 0.11.2",
"futures", "futures",
@@ -7571,7 +7594,7 @@ dependencies = [
[[package]] [[package]]
name = "table-procedure" name = "table-procedure"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"catalog", "catalog",
@@ -7653,7 +7676,7 @@ dependencies = [
[[package]] [[package]]
name = "tests-integration" name = "tests-integration"
version = "0.1.1" version = "0.1.0"
dependencies = [ dependencies = [
"api", "api",
"axum", "axum",
@@ -9097,13 +9120,32 @@ version = "1.5.7"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "c394b5bd0c6f669e7275d9c20aa90ae064cb22e75a1cad54e1b34088034b149f" checksum = "c394b5bd0c6f669e7275d9c20aa90ae064cb22e75a1cad54e1b34088034b149f"
[[package]]
name = "zstd"
version = "0.11.2+zstd.1.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "20cc960326ece64f010d2d2107537f26dc589a6573a316bd5b1dba685fa5fde4"
dependencies = [
"zstd-safe 5.0.2+zstd.1.5.2",
]
[[package]] [[package]]
name = "zstd" name = "zstd"
version = "0.12.3+zstd.1.5.2" version = "0.12.3+zstd.1.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "76eea132fb024e0e13fd9c2f5d5d595d8a967aa72382ac2f9d39fcc95afd0806" checksum = "76eea132fb024e0e13fd9c2f5d5d595d8a967aa72382ac2f9d39fcc95afd0806"
dependencies = [ dependencies = [
"zstd-safe", "zstd-safe 6.0.4+zstd.1.5.4",
]
[[package]]
name = "zstd-safe"
version = "5.0.2+zstd.1.5.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1d2a5585e04f9eea4b2a3d1eca508c4dee9592a89ef6f450c11719da0726f4db"
dependencies = [
"libc",
"zstd-sys",
] ]
[[package]] [[package]]


@@ -45,33 +45,34 @@ members = [
] ]
[workspace.package] [workspace.package]
version = "0.1.1" version = "0.1.0"
edition = "2021" edition = "2021"
license = "Apache-2.0" license = "Apache-2.0"
[workspace.dependencies] [workspace.dependencies]
arrow = { version = "33.0" } arrow = { version = "34.0" }
arrow-array = "33.0" arrow-array = "34.0"
arrow-flight = "33.0" arrow-flight = "34.0"
arrow-schema = { version = "33.0", features = ["serde"] } arrow-schema = { version = "34.0", features = ["serde"] }
async-stream = "0.3" async-stream = "0.3"
async-trait = "0.1" async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] } chrono = { version = "0.4", features = ["serde"] }
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "fad360df0132a2fcb264a7c07b2b02f0b1dfc644" } # TODO(LFC): Use official DataFusion, when https://github.com/apache/arrow-datafusion/pull/5542 got merged
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "fad360df0132a2fcb264a7c07b2b02f0b1dfc644" } datafusion = { git = "https://github.com/MichaelScofield/arrow-datafusion.git", rev = "d7b3c730049f2561755f9d855f638cb580c38eff" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "fad360df0132a2fcb264a7c07b2b02f0b1dfc644" } datafusion-common = { git = "https://github.com/MichaelScofield/arrow-datafusion.git", rev = "d7b3c730049f2561755f9d855f638cb580c38eff" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "fad360df0132a2fcb264a7c07b2b02f0b1dfc644" } datafusion-expr = { git = "https://github.com/MichaelScofield/arrow-datafusion.git", rev = "d7b3c730049f2561755f9d855f638cb580c38eff" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "fad360df0132a2fcb264a7c07b2b02f0b1dfc644" } datafusion-optimizer = { git = "https://github.com/MichaelScofield/arrow-datafusion.git", rev = "d7b3c730049f2561755f9d855f638cb580c38eff" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "fad360df0132a2fcb264a7c07b2b02f0b1dfc644" } datafusion-physical-expr = { git = "https://github.com/MichaelScofield/arrow-datafusion.git", rev = "d7b3c730049f2561755f9d855f638cb580c38eff" }
datafusion-sql = { git = "https://github.com/MichaelScofield/arrow-datafusion.git", rev = "d7b3c730049f2561755f9d855f638cb580c38eff" }
futures = "0.3" futures = "0.3"
futures-util = "0.3" futures-util = "0.3"
parquet = "33.0" parquet = "34.0"
paste = "1.0" paste = "1.0"
prost = "0.11" prost = "0.11"
serde = { version = "1.0", features = ["derive"] } serde = { version = "1.0", features = ["derive"] }
serde_json = "1.0" serde_json = "1.0"
snafu = { version = "0.7", features = ["backtraces"] } snafu = { version = "0.7", features = ["backtraces"] }
sqlparser = "0.30" sqlparser = "0.32"
tempfile = "3" tempfile = "3"
tokio = { version = "1.24.2", features = ["full"] } tokio = { version = "1.24.2", features = ["full"] }
tokio-util = "0.7" tokio-util = "0.7"

config/edge.example.toml (new file, 11 lines)

@@ -0,0 +1,11 @@
# WAL options.
[wal]
# WAL data directory.
dir = "/tmp/greptimedb/wal"
# Storage options.
[storage]
# Storage type.
type = "File"
# Data directory, "/tmp/greptimedb/data" by default.
data_dir = "/tmp/greptimedb/data"
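
The new edge example config above covers the datanode's WAL and storage options, and pairs with commit dcf66d9d52, "chore(datanode): derive serde default for Wal/CompactionConfig (#8)". The following is a minimal, hypothetical sketch of how such a file could be deserialized with struct-level #[serde(default)]; the struct names and the serde/toml wiring are illustrative assumptions, not GreptimeDB's actual configuration types.

```rust
// Hypothetical sketch only: illustrative types, not GreptimeDB's real config structs.
// Requires the `serde` (with the "derive" feature) and `toml` crates.
use serde::Deserialize;

#[derive(Debug, Default, Deserialize)]
#[serde(default)] // missing fields fall back to Default, in the spirit of commit dcf66d9d52
struct WalConfig {
    dir: String,
}

#[derive(Debug, Default, Deserialize)]
#[serde(default)]
struct StorageConfig {
    r#type: String,
    data_dir: String,
}

#[derive(Debug, Default, Deserialize)]
#[serde(default)]
struct EdgeConfig {
    wal: WalConfig,
    storage: StorageConfig,
}

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let text = std::fs::read_to_string("config/edge.example.toml")?;
    let config: EdgeConfig = toml::from_str(&text)?;
    println!("wal dir: {}, data dir: {}", config.wal.dir, config.storage.data_dir);
    Ok(())
}
```

With struct-level defaults, an entirely omitted [wal] or [storage] section simply yields the struct's Default values instead of a deserialization error.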


@@ -22,20 +22,27 @@ RUN apt-get -y update && \
apt-get -y install g++-aarch64-linux-gnu gcc-aarch64-linux-gnu && \ apt-get -y install g++-aarch64-linux-gnu gcc-aarch64-linux-gnu && \
apt-get install binutils-aarch64-linux-gnu apt-get install binutils-aarch64-linux-gnu
COPY ./docker/aarch64/compile-python.sh ./docker/aarch64/
RUN chmod +x ./docker/aarch64/compile-python.sh && \
./docker/aarch64/compile-python.sh
COPY ./rust-toolchain.toml .
# Install rustup target for cross compiling.
RUN rustup target add aarch64-unknown-linux-gnu
COPY . . COPY . .
# Update dependency, using separate `RUN` to separate cache
RUN cargo fetch
# This three env var is set in script, so I set it manually in dockerfile. # This three env var is set in script, so I set it manually in dockerfile.
ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/ ENV LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
ENV LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib/ ENV LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib/
ENV PY_INSTALL_PATH=${PWD}/python_arm64_build ENV PY_INSTALL_PATH=/greptimedb/python_arm64_build
RUN chmod +x ./docker/aarch64/compile-python.sh && \
./docker/aarch64/compile-python.sh
# Install rustup target for cross compiling.
RUN rustup target add aarch64-unknown-linux-gnu
# Set the environment variable for cross compiling and compile it # Set the environment variable for cross compiling and compile it
# Build the project in release mode. Set Net fetch with git cli to true to avoid git error. # cross compiled python is `python3` in path, but pyo3 need `python` in path so alias it
# Build the project in release mode.
RUN export PYO3_CROSS_LIB_DIR=$PY_INSTALL_PATH/lib && \ RUN export PYO3_CROSS_LIB_DIR=$PY_INSTALL_PATH/lib && \
alias python=python3 && \ alias python=python3 && \
CARGO_NET_GIT_FETCH_WITH_CLI=1 && \
cargo build --target aarch64-unknown-linux-gnu --release -F pyo3_backend cargo build --target aarch64-unknown-linux-gnu --release -F pyo3_backend
# Exporting the binary to the clean image # Exporting the binary to the clean image


@@ -26,7 +26,7 @@ make install
cd .. cd ..
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/ export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/lib/
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib/ export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/lib/
export PY_INSTALL_PATH=${PWD}/python_arm64_build export PY_INSTALL_PATH=$(pwd)/python_arm64_build
cd Python-3.10.10 && \ cd Python-3.10.10 && \
make clean && \ make clean && \
make distclean && \ make distclean && \


@@ -10,7 +10,7 @@ common-base = { path = "../common/base" }
common-error = { path = "../common/error" } common-error = { path = "../common/error" }
common-time = { path = "../common/time" } common-time = { path = "../common/time" }
datatypes = { path = "../datatypes" } datatypes = { path = "../datatypes" }
greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "3a715150563b89d5dfc81a5838eac1f66a5658a1" } greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "0a7b790ed41364b5599dff806d1080bd59c5c9f6" }
prost.workspace = true prost.workspace = true
snafu = { version = "0.7", features = ["backtraces"] } snafu = { version = "0.7", features = ["backtraces"] }
tonic.workspace = true tonic.workspace = true


@@ -14,6 +14,7 @@
use std::sync::Arc; use std::sync::Arc;
use api::v1::greptime_database_client::GreptimeDatabaseClient;
use arrow_flight::flight_service_client::FlightServiceClient; use arrow_flight::flight_service_client::FlightServiceClient;
use common_grpc::channel_manager::ChannelManager; use common_grpc::channel_manager::ChannelManager;
use parking_lot::RwLock; use parking_lot::RwLock;
@@ -23,6 +24,10 @@ use tonic::transport::Channel;
use crate::load_balance::{LoadBalance, Loadbalancer}; use crate::load_balance::{LoadBalance, Loadbalancer};
use crate::{error, Result}; use crate::{error, Result};
pub(crate) struct DatabaseClient {
pub(crate) inner: GreptimeDatabaseClient<Channel>,
}
pub(crate) struct FlightClient { pub(crate) struct FlightClient {
addr: String, addr: String,
client: FlightServiceClient<Channel>, client: FlightServiceClient<Channel>,
@@ -118,7 +123,7 @@ impl Client {
self.inner.set_peers(urls); self.inner.set_peers(urls);
} }
pub(crate) fn make_client(&self) -> Result<FlightClient> { fn find_channel(&self) -> Result<(String, Channel)> {
let addr = self let addr = self
.inner .inner
.get_peer() .get_peer()
@@ -131,11 +136,23 @@ impl Client {
.channel_manager .channel_manager
.get(&addr) .get(&addr)
.context(error::CreateChannelSnafu { addr: &addr })?; .context(error::CreateChannelSnafu { addr: &addr })?;
Ok((addr, channel))
}
pub(crate) fn make_flight_client(&self) -> Result<FlightClient> {
let (addr, channel) = self.find_channel()?;
Ok(FlightClient { Ok(FlightClient {
addr, addr,
client: FlightServiceClient::new(channel), client: FlightServiceClient::new(channel),
}) })
} }
pub(crate) fn make_database_client(&self) -> Result<DatabaseClient> {
let (_, channel) = self.find_channel()?;
Ok(DatabaseClient {
inner: GreptimeDatabaseClient::new(channel),
})
}
} }
#[cfg(test)] #[cfg(test)]


@@ -12,15 +12,14 @@
// See the License for the specific language governing permissions and // See the License for the specific language governing permissions and
// limitations under the License. // limitations under the License.
use std::str::FromStr;
use api::v1::auth_header::AuthScheme; use api::v1::auth_header::AuthScheme;
use api::v1::ddl_request::Expr as DdlExpr; use api::v1::ddl_request::Expr as DdlExpr;
use api::v1::greptime_request::Request; use api::v1::greptime_request::Request;
use api::v1::query_request::Query; use api::v1::query_request::Query;
use api::v1::{ use api::v1::{
AlterExpr, AuthHeader, CreateTableExpr, DdlRequest, DropTableExpr, GreptimeRequest, greptime_response, AffectedRows, AlterExpr, AuthHeader, CreateTableExpr, DdlRequest,
InsertRequest, PromRangeQuery, QueryRequest, RequestHeader, DropTableExpr, FlushTableExpr, GreptimeRequest, InsertRequest, PromRangeQuery, QueryRequest,
RequestHeader,
}; };
use arrow_flight::{FlightData, Ticket}; use arrow_flight::{FlightData, Ticket};
use common_error::prelude::*; use common_error::prelude::*;
@@ -31,7 +30,9 @@ use futures_util::{TryFutureExt, TryStreamExt};
use prost::Message; use prost::Message;
use snafu::{ensure, ResultExt}; use snafu::{ensure, ResultExt};
use crate::error::{ConvertFlightDataSnafu, IllegalFlightMessagesSnafu}; use crate::error::{
ConvertFlightDataSnafu, IllegalDatabaseResponseSnafu, IllegalFlightMessagesSnafu,
};
use crate::{error, Client, Result}; use crate::{error, Client, Result};
#[derive(Clone, Debug)] #[derive(Clone, Debug)]
@@ -78,8 +79,26 @@ impl Database {
}); });
} }
pub async fn insert(&self, request: InsertRequest) -> Result<Output> { pub async fn insert(&self, request: InsertRequest) -> Result<u32> {
self.do_get(Request::Insert(request)).await let mut client = self.client.make_database_client()?.inner;
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),
schema: self.schema.clone(),
authorization: self.ctx.auth_header.clone(),
}),
request: Some(Request::Insert(request)),
};
let response = client
.handle(request)
.await?
.into_inner()
.response
.context(IllegalDatabaseResponseSnafu {
err_msg: "GreptimeResponse is empty",
})?;
let greptime_response::Response::AffectedRows(AffectedRows { value }) = response;
Ok(value)
} }
pub async fn sql(&self, sql: &str) -> Result<Output> { pub async fn sql(&self, sql: &str) -> Result<Output> {
@@ -135,6 +154,13 @@ impl Database {
.await .await
} }
pub async fn flush_table(&self, expr: FlushTableExpr) -> Result<Output> {
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::FlushTable(expr)),
}))
.await
}
async fn do_get(&self, request: Request) -> Result<Output> { async fn do_get(&self, request: Request) -> Result<Output> {
let request = GreptimeRequest { let request = GreptimeRequest {
header: Some(RequestHeader { header: Some(RequestHeader {
@@ -148,7 +174,7 @@ impl Database {
ticket: request.encode_to_vec().into(),
};
- let mut client = self.client.make_client()?;
let mut client = self.client.make_flight_client()?;
// TODO(LFC): Streaming get flight data.
let flight_data: Vec<FlightData> = client
@@ -157,22 +183,22 @@ impl Database {
.and_then(|response| response.into_inner().try_collect())
.await
.map_err(|e| {
- let code = get_metadata_value(&e, INNER_ERROR_CODE)
- .and_then(|s| StatusCode::from_str(&s).ok())
- .unwrap_or(StatusCode::Unknown);
- let msg = get_metadata_value(&e, INNER_ERROR_MSG).unwrap_or(e.to_string());
- error::ExternalSnafu { code, msg }
let tonic_code = e.code();
let e: error::Error = e.into();
let code = e.status_code();
let msg = e.to_string();
error::ServerSnafu { code, msg }
.fail::<()>()
.map_err(BoxedError::new)
.context(error::FlightGetSnafu {
- tonic_code: e.code(),
tonic_code,
addr: client.addr(),
})
.map_err(|error| {
logging::error!(
"Failed to do Flight get, addr: {}, code: {}, source: {}",
client.addr(),
- e.code(),
tonic_code,
error
);
error
@@ -203,12 +229,6 @@ impl Database {
}
}
- fn get_metadata_value(e: &tonic::Status, key: &str) -> Option<String> {
- e.metadata()
- .get(key)
- .and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
- }
#[derive(Default, Debug, Clone)]
pub struct FlightContext {
auth_header: Option<AuthHeader>,

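The unary path above changes the client surface: Database::insert now goes through the new GreptimeDatabase service and returns the affected row count, while flush_table is sent as a DDL request. A usage sketch, not part of the diff; the connected client and the prepared insert_request are assumed to exist:

    // Sketch only: assumes an already-connected Client and a prepared InsertRequest.
    let db = Database::new("greptime", "public", client);

    // Unary gRPC insert: returns the affected row count as u32.
    let rows = db.insert(insert_request).await?;
    println!("inserted {rows} rows");

    // Flush a table's memtables to SST files via a DDL request.
    let _ = db
        .flush_table(FlushTableExpr {
            catalog_name: "greptime".to_string(),
            schema_name: "public".to_string(),
            table_name: "my_table".to_string(),
            region_id: None,
        })
        .await?;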

@@ -13,9 +13,10 @@
// limitations under the License.
use std::any::Any;
use std::str::FromStr;
use common_error::prelude::*;
- use tonic::Code;
use tonic::{Code, Status};
#[derive(Debug, Snafu)]
#[snafu(visibility(pub))]
@@ -68,6 +69,13 @@ pub enum Error {
/// Error deserialized from gRPC metadata
#[snafu(display("{}", msg))]
ExternalError { code: StatusCode, msg: String },
// Server error carried in Tonic Status's metadata.
#[snafu(display("{}", msg))]
Server { code: StatusCode, msg: String },
#[snafu(display("Illegal Database response: {err_msg}"))]
IllegalDatabaseResponse { err_msg: String },
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -77,7 +85,10 @@ impl ErrorExt for Error {
match self {
Error::IllegalFlightMessages { .. }
| Error::ColumnDataType { .. }
- | Error::MissingField { .. } => StatusCode::Internal,
| Error::MissingField { .. }
| Error::IllegalDatabaseResponse { .. } => StatusCode::Internal,
Error::Server { code, .. } => *code,
Error::FlightGet { source, .. } => source.status_code(),
Error::CreateChannel { source, .. } | Error::ConvertFlightData { source } => {
source.status_code()
@@ -95,3 +106,21 @@ impl ErrorExt for Error {
self
}
}
impl From<Status> for Error {
fn from(e: Status) -> Self {
fn get_metadata_value(e: &Status, key: &str) -> Option<String> {
e.metadata()
.get(key)
.and_then(|v| String::from_utf8(v.as_bytes().to_vec()).ok())
}
let code = get_metadata_value(&e, INNER_ERROR_CODE)
.and_then(|s| StatusCode::from_str(&s).ok())
.unwrap_or(StatusCode::Unknown);
let msg = get_metadata_value(&e, INNER_ERROR_MSG).unwrap_or(e.to_string());
Self::Server { code, msg }
}
}
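In effect, a tonic Status that carries the server's status code in its metadata is mapped to Error::Server with that code, and anything else falls back to StatusCode::Unknown. A rough sketch of the fallback case, not taken from the diff:

    // Sketch: a plain Status without the INNER_ERROR_* metadata keys
    // becomes Error::Server { code: StatusCode::Unknown, msg: <Status text> }.
    let status = tonic::Status::internal("flush failed");
    let err: Error = status.into();
    assert!(matches!(err, Error::Server { code: StatusCode::Unknown, .. }));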


@@ -30,9 +30,39 @@ struct Command {
subcmd: SubCommand,
}
pub enum Application {
Datanode(datanode::Instance),
Frontend(frontend::Instance),
Metasrv(metasrv::Instance),
Standalone(standalone::Instance),
Cli(cli::Instance),
}
impl Application {
async fn run(&mut self) -> Result<()> {
match self {
Application::Datanode(instance) => instance.run().await,
Application::Frontend(instance) => instance.run().await,
Application::Metasrv(instance) => instance.run().await,
Application::Standalone(instance) => instance.run().await,
Application::Cli(instance) => instance.run().await,
}
}
async fn stop(&self) -> Result<()> {
match self {
Application::Datanode(instance) => instance.stop().await,
Application::Frontend(instance) => instance.stop().await,
Application::Metasrv(instance) => instance.stop().await,
Application::Standalone(instance) => instance.stop().await,
Application::Cli(instance) => instance.stop().await,
}
}
}
impl Command {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Application> {
- self.subcmd.run().await
self.subcmd.build().await
}
}
@@ -51,13 +81,28 @@ enum SubCommand {
}
impl SubCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Application> {
match self {
- SubCommand::Datanode(cmd) => cmd.run().await,
- SubCommand::Frontend(cmd) => cmd.run().await,
- SubCommand::Metasrv(cmd) => cmd.run().await,
- SubCommand::Standalone(cmd) => cmd.run().await,
- SubCommand::Cli(cmd) => cmd.run().await,
SubCommand::Datanode(cmd) => {
let app = cmd.build().await?;
Ok(Application::Datanode(app))
}
SubCommand::Frontend(cmd) => {
let app = cmd.build().await?;
Ok(Application::Frontend(app))
}
SubCommand::Metasrv(cmd) => {
let app = cmd.build().await?;
Ok(Application::Metasrv(app))
}
SubCommand::Standalone(cmd) => {
let app = cmd.build().await?;
Ok(Application::Standalone(app))
}
SubCommand::Cli(cmd) => {
let app = cmd.build().await?;
Ok(Application::Cli(app))
}
}
}
}
@@ -104,13 +149,18 @@ async fn main() -> Result<()> {
common_telemetry::init_default_metrics_recorder();
let _guard = common_telemetry::init_global_logging(app_name, log_dir, log_level, false);
let mut app = cmd.build().await?;
tokio::select! {
- result = cmd.run() => {
result = app.run() => {
if let Err(err) = result {
error!(err; "Fatal error occurs!");
}
}
_ = tokio::signal::ctrl_c() => {
if let Err(err) = app.stop().await {
error!(err; "Fatal error occurs!");
}
info!("Goodbye!");
}
}

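Condensed, the new lifecycle in main is: build the selected Application first, run it under tokio::select!, and call stop when Ctrl-C arrives. A simplified restatement of the diff above:

    let mut app = cmd.build().await?;     // construct without starting
    tokio::select! {
        result = app.run() => {
            if let Err(err) = result {
                error!(err; "Fatal error occurs!");
            }
        }
        _ = tokio::signal::ctrl_c() => {
            if let Err(err) = app.stop().await {   // graceful shutdown; the datanode flushes tables here
                error!(err; "Fatal error occurs!");
            }
            info!("Goodbye!");
        }
    }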

@@ -17,10 +17,25 @@ mod helper;
mod repl;
use clap::Parser;
- use repl::Repl;
pub use repl::Repl;
use crate::error::Result;
pub struct Instance {
repl: Repl,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
self.repl.run().await
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle cli shutdown
Ok(())
}
}
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
@@ -28,8 +43,8 @@ pub struct Command {
}
impl Command {
- pub async fn run(self) -> Result<()> {
pub async fn build(self) -> Result<Instance> {
- self.cmd.run().await
self.cmd.build().await
}
}
@@ -39,9 +54,9 @@ enum SubCommand {
}
impl SubCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
- SubCommand::Attach(cmd) => cmd.run().await,
SubCommand::Attach(cmd) => cmd.build().await,
}
}
}
@@ -57,8 +72,8 @@ pub(crate) struct AttachCommand {
}
impl AttachCommand {
- async fn run(self) -> Result<()> {
- let mut repl = Repl::try_new(&self).await?;
- repl.run().await
async fn build(self) -> Result<Instance> {
let repl = Repl::try_new(&self).await?;
Ok(Instance { repl })
}
}


@@ -50,7 +50,7 @@ use crate::error::{
};
/// Captures the state of the repl, gathers commands and executes them one by one
- pub(crate) struct Repl {
pub struct Repl {
/// Rustyline editor for interacting with user on command line
rl: Editor<RustylineHelper>,


@@ -24,6 +24,21 @@ use snafu::ResultExt;
use crate::error::{Error, MissingConfigSnafu, Result, StartDatanodeSnafu};
use crate::toml_loader;
pub struct Instance {
datanode: Datanode,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
self.datanode.start().await.context(StartDatanodeSnafu)
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle datanode shutdown
Ok(())
}
}
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
@@ -31,8 +46,8 @@ pub struct Command {
}
impl Command {
- pub async fn run(self) -> Result<()> {
pub async fn build(self) -> Result<Instance> {
- self.subcmd.run().await
self.subcmd.build().await
}
}
@@ -42,9 +57,9 @@ enum SubCommand {
}
impl SubCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
- SubCommand::Start(cmd) => cmd.run().await,
SubCommand::Start(cmd) => cmd.build().await,
}
}
}
@@ -72,19 +87,16 @@ struct StartCommand {
}
impl StartCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
logging::info!("Datanode start command: {:#?}", self);
let opts: DatanodeOptions = self.try_into()?;
logging::info!("Datanode options: {:#?}", opts);
- Datanode::new(opts)
- .await
- .context(StartDatanodeSnafu)?
- .start()
- .await
- .context(StartDatanodeSnafu)
let datanode = Datanode::new(opts).await.context(StartDatanodeSnafu)?;
Ok(Instance { datanode })
}
}


@@ -26,12 +26,24 @@ pub enum Error {
source: datanode::error::Error,
},
#[snafu(display("Failed to stop datanode, source: {}", source))]
StopDatanode {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to start frontend, source: {}", source))]
StartFrontend {
#[snafu(backtrace)]
source: frontend::error::Error,
},
#[snafu(display("Failed to build meta server, source: {}", source))]
BuildMetaServer {
#[snafu(backtrace)]
source: meta_srv::error::Error,
},
#[snafu(display("Failed to start meta server, source: {}", source))]
StartMetaServer {
#[snafu(backtrace)]
@@ -138,6 +150,7 @@ impl ErrorExt for Error {
Error::StartDatanode { source } => source.status_code(),
Error::StartFrontend { source } => source.status_code(),
Error::StartMetaServer { source } => source.status_code(),
Error::BuildMetaServer { source } => source.status_code(),
Error::UnsupportedSelectorType { source, .. } => source.status_code(),
Error::ReadConfig { .. } | Error::ParseConfig { .. } | Error::MissingConfig { .. } => {
StatusCode::InvalidArguments
@@ -156,6 +169,7 @@ impl ErrorExt for Error {
source.status_code()
}
Error::SubstraitEncodeLogicalPlan { source } => source.status_code(),
Error::StopDatanode { source } => source.status_code(),
}
}


@@ -19,7 +19,7 @@ use common_base::Plugins;
use frontend::frontend::FrontendOptions;
use frontend::grpc::GrpcOptions;
use frontend::influxdb::InfluxdbOptions;
- use frontend::instance::{FrontendInstance, Instance};
use frontend::instance::{FrontendInstance, Instance as FeInstance};
use frontend::mysql::MysqlOptions;
use frontend::opentsdb::OpentsdbOptions;
use frontend::postgres::PostgresOptions;
@@ -34,6 +34,24 @@ use snafu::ResultExt;
use crate::error::{self, IllegalAuthConfigSnafu, Result};
use crate::toml_loader;
pub struct Instance {
frontend: FeInstance,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
self.frontend
.start()
.await
.context(error::StartFrontendSnafu)
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle frontend shutdown
Ok(())
}
}
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
@@ -41,8 +59,8 @@ pub struct Command {
}
impl Command {
- pub async fn run(self) -> Result<()> {
pub async fn build(self) -> Result<Instance> {
- self.subcmd.run().await
self.subcmd.build().await
}
}
@@ -52,9 +70,9 @@ enum SubCommand {
}
impl SubCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
- SubCommand::Start(cmd) => cmd.run().await,
SubCommand::Start(cmd) => cmd.build().await,
}
}
}
@@ -90,11 +108,11 @@ pub struct StartCommand {
}
impl StartCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
let opts: FrontendOptions = self.try_into()?;
- let mut instance = Instance::try_new_distributed(&opts, plugins.clone())
let mut instance = FeInstance::try_new_distributed(&opts, plugins.clone())
.await
.context(error::StartFrontendSnafu)?;
@@ -103,7 +121,7 @@ impl StartCommand {
.await
.context(error::StartFrontendSnafu)?;
- instance.start().await.context(error::StartFrontendSnafu)
Ok(Instance { frontend: instance })
}
}


@@ -14,13 +14,32 @@
use clap::Parser;
use common_telemetry::{info, logging, warn};
- use meta_srv::bootstrap;
use meta_srv::bootstrap::MetaSrvInstance;
use meta_srv::metasrv::MetaSrvOptions;
use snafu::ResultExt;
use crate::error::{Error, Result};
use crate::{error, toml_loader};
pub struct Instance {
instance: MetaSrvInstance,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
self.instance
.start()
.await
.context(error::StartMetaServerSnafu)?;
Ok(())
}
pub async fn stop(&self) -> Result<()> {
// TODO: handle metasrv shutdown
Ok(())
}
}
#[derive(Parser)]
pub struct Command {
#[clap(subcommand)]
@@ -28,8 +47,8 @@ pub struct Command {
}
impl Command {
- pub async fn run(self) -> Result<()> {
pub async fn build(self) -> Result<Instance> {
- self.subcmd.run().await
self.subcmd.build().await
}
}
@@ -39,9 +58,9 @@ enum SubCommand {
}
impl SubCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
- SubCommand::Start(cmd) => cmd.run().await,
SubCommand::Start(cmd) => cmd.build().await,
}
}
}
@@ -63,16 +82,17 @@ struct StartCommand {
}
impl StartCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
logging::info!("MetaSrv start command: {:#?}", self);
let opts: MetaSrvOptions = self.try_into()?;
logging::info!("MetaSrv options: {:#?}", opts);
- bootstrap::bootstrap_meta_srv(opts)
- .await
- .context(error::StartMetaServerSnafu)
let instance = MetaSrvInstance::new(opts)
.await
.context(error::BuildMetaServerSnafu)?;
Ok(Instance { instance })
}
}


@@ -16,6 +16,7 @@ use std::sync::Arc;
use clap::Parser;
use common_base::Plugins;
use common_error::prelude::BoxedError;
use common_telemetry::info;
use datanode::datanode::{
CompactionConfig, Datanode, DatanodeOptions, ObjectStoreConfig, ProcedureConfig, WalConfig,
@@ -36,7 +37,9 @@ use servers::tls::{TlsMode, TlsOption};
use servers::Mode;
use snafu::ResultExt;
- use crate::error::{Error, IllegalConfigSnafu, Result, StartDatanodeSnafu, StartFrontendSnafu};
use crate::error::{
Error, IllegalConfigSnafu, Result, StartDatanodeSnafu, StartFrontendSnafu, StopDatanodeSnafu,
};
use crate::frontend::load_frontend_plugins;
use crate::toml_loader;
@@ -47,8 +50,8 @@ pub struct Command {
}
impl Command {
- pub async fn run(self) -> Result<()> {
pub async fn build(self) -> Result<Instance> {
- self.subcmd.run().await
self.subcmd.build().await
}
}
@@ -58,9 +61,9 @@ enum SubCommand {
}
impl SubCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
match self {
- SubCommand::Start(cmd) => cmd.run().await,
SubCommand::Start(cmd) => cmd.build().await,
}
}
}
@@ -133,6 +136,40 @@ impl StandaloneOptions {
}
}
pub struct Instance {
datanode: Datanode,
frontend: FeInstance,
}
impl Instance {
pub async fn run(&mut self) -> Result<()> {
// Start datanode instance before starting services, to avoid requests come in before internal components are started.
self.datanode
.start_instance()
.await
.context(StartDatanodeSnafu)?;
info!("Datanode instance started");
self.frontend.start().await.context(StartFrontendSnafu)?;
Ok(())
}
pub async fn stop(&self) -> Result<()> {
self.datanode
.shutdown()
.await
.map_err(BoxedError::new)
.context(StopDatanodeSnafu)?;
self.frontend
.shutdown()
.await
.map_err(BoxedError::new)
.context(StopDatanodeSnafu)?;
Ok(())
}
}
#[derive(Debug, Parser)]
struct StartCommand {
#[clap(long)]
@@ -164,7 +201,7 @@ struct StartCommand {
}
impl StartCommand {
- async fn run(self) -> Result<()> {
async fn build(self) -> Result<Instance> {
let enable_memory_catalog = self.enable_memory_catalog;
let config_file = self.config_file.clone();
let plugins = Arc::new(load_frontend_plugins(&self.user_provider)?);
@@ -184,25 +221,18 @@ impl StartCommand {
fe_opts, dn_opts
);
- let mut datanode = Datanode::new(dn_opts.clone())
let datanode = Datanode::new(dn_opts.clone())
.await
.context(StartDatanodeSnafu)?;
- let mut frontend = build_frontend(plugins.clone(), datanode.get_instance()).await?;
- // Start datanode instance before starting services, to avoid requests come in before internal components are started.
- datanode
- .start_instance()
- .await
- .context(StartDatanodeSnafu)?;
- info!("Datanode instance started");
let mut frontend = build_frontend(plugins.clone(), datanode.get_instance()).await?;
frontend
.build_servers(&fe_opts, plugins)
.await
.context(StartFrontendSnafu)?;
- frontend.start().await.context(StartFrontendSnafu)?;
- Ok(())
Ok(Instance { datanode, frontend })
}
}


@@ -77,6 +77,7 @@ impl Default for ObjectStoreConfig {
}
#[derive(Debug, Clone, Serialize, Deserialize)]
#[serde(default)]
pub struct WalConfig {
// wal directory
pub dir: String,
@@ -108,6 +109,7 @@ impl Default for WalConfig {
/// Options for table compaction
#[derive(Debug, Clone, Serialize, Deserialize, Eq, PartialEq)]
#[serde(default)]
pub struct CompactionConfig {
/// Max task number that can concurrently run.
pub max_inflight_tasks: usize,

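With #[serde(default)], a config file only has to set the fields it wants to override; everything else comes from the struct's Default impl. A sketch, assuming the toml crate and only the dir field shown above:

    // Sketch: missing WalConfig fields fall back to WalConfig::default().
    let wal: WalConfig = toml::from_str(r#"dir = "/tmp/greptimedb/wal""#).unwrap();
    assert_eq!(wal.dir, "/tmp/greptimedb/wal");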

@@ -169,6 +169,13 @@ pub enum Error {
source: TableError,
},
#[snafu(display("Failed to flush table: {}, source: {}", table_name, source))]
FlushTable {
table_name: String,
#[snafu(backtrace)]
source: TableError,
},
#[snafu(display("Failed to start server, source: {}", source))]
StartServer {
#[snafu(backtrace)]
@@ -539,6 +546,7 @@ impl ErrorExt for Error {
source.status_code()
}
DropTable { source, .. } => source.status_code(),
FlushTable { source, .. } => source.status_code(),
Insert { source, .. } => source.status_code(),
Delete { source, .. } => source.status_code(),


@@ -37,12 +37,14 @@ use object_store::services::{Fs as FsBuilder, Oss as OSSBuilder, S3 as S3Builder
use object_store::{util, ObjectStore, ObjectStoreBuilder};
use query::query_engine::{QueryEngineFactory, QueryEngineRef};
use servers::Mode;
use session::context::QueryContext;
use snafu::prelude::*;
use storage::compaction::{CompactionHandler, CompactionSchedulerRef, SimplePicker};
use storage::config::EngineConfig as StorageEngineConfig;
use storage::scheduler::{LocalScheduler, SchedulerConfig};
use storage::EngineImpl;
use store_api::logstore::LogStore;
use table::requests::FlushTableRequest;
use table::table::numbers::NumbersTable;
use table::table::TableIdProviderRef;
use table::Table;
@@ -56,7 +58,7 @@ use crate::error::{
};
use crate::heartbeat::HeartbeatTask;
use crate::script::ScriptExecutor;
- use crate::sql::SqlHandler;
use crate::sql::{SqlHandler, SqlRequest};
mod grpc;
mod script;
@@ -233,6 +235,8 @@ impl Instance {
.context(ShutdownInstanceSnafu)?;
}
self.flush_tables().await?;
self.sql_handler
.close()
.await
@@ -240,6 +244,42 @@ impl Instance {
.context(ShutdownInstanceSnafu)
}
pub async fn flush_tables(&self) -> Result<()> {
info!("going to flush all schemas");
let schema_list = self
.catalog_manager
.catalog(DEFAULT_CATALOG_NAME)
.map_err(BoxedError::new)
.context(ShutdownInstanceSnafu)?
.expect("Default schema not found")
.schema_names()
.map_err(BoxedError::new)
.context(ShutdownInstanceSnafu)?;
let flush_requests = schema_list
.into_iter()
.map(|schema_name| {
SqlRequest::FlushTable(FlushTableRequest {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name,
table_name: None,
region_number: None,
})
})
.collect::<Vec<_>>();
let flush_result = futures::future::try_join_all(
flush_requests
.into_iter()
.map(|request| self.sql_handler.execute(request, QueryContext::arc())),
)
.await
.map_err(BoxedError::new)
.context(ShutdownInstanceSnafu);
info!("flush success: {}", flush_result.is_ok());
flush_result?;
Ok(())
}
pub fn sql_handler(&self) -> &SqlHandler {
&self.sql_handler
}


@@ -127,7 +127,7 @@ impl Instance {
DdlExpr::Alter(expr) => self.handle_alter(expr).await,
DdlExpr::CreateDatabase(expr) => self.handle_create_database(expr, query_ctx).await,
DdlExpr::DropTable(expr) => self.handle_drop_table(expr).await,
- DdlExpr::FlushTable(_) => todo!(),
DdlExpr::FlushTable(expr) => self.handle_flush_table(expr).await,
}
}
}


@@ -12,13 +12,13 @@
// See the License for the specific language governing permissions and
// limitations under the License.
- use api::v1::{AlterExpr, CreateTableExpr, DropTableExpr};
use api::v1::{AlterExpr, CreateTableExpr, DropTableExpr, FlushTableExpr};
use common_grpc_expr::{alter_expr_to_request, create_expr_to_request};
use common_query::Output;
use common_telemetry::info;
use session::context::QueryContext;
use snafu::prelude::*;
- use table::requests::DropTableRequest;
use table::requests::{DropTableRequest, FlushTableRequest};
use crate::error::{
AlterExprToRequestSnafu, BumpTableIdSnafu, CreateExprToRequestSnafu,
@@ -82,6 +82,24 @@ impl Instance {
.execute(SqlRequest::DropTable(req), QueryContext::arc())
.await
}
pub(crate) async fn handle_flush_table(&self, expr: FlushTableExpr) -> Result<Output> {
let table_name = if expr.table_name.trim().is_empty() {
None
} else {
Some(expr.table_name)
};
let req = FlushTableRequest {
catalog_name: expr.catalog_name,
schema_name: expr.schema_name,
table_name,
region_number: expr.region_id,
};
self.sql_handler()
.execute(SqlRequest::FlushTable(req), QueryContext::arc())
.await
}
}
#[cfg(test)]
@@ -136,7 +154,6 @@ mod tests {
}
#[test]
fn test_create_column_schema() {
let column_def = ColumnDef {
name: "a".to_string(),


@@ -39,6 +39,7 @@ mod copy_table_from;
mod create;
mod delete;
mod drop_table;
mod flush_table;
pub(crate) mod insert;
#[derive(Debug)]
@@ -48,6 +49,7 @@ pub enum SqlRequest {
CreateDatabase(CreateDatabaseRequest),
Alter(AlterTableRequest),
DropTable(DropTableRequest),
FlushTable(FlushTableRequest),
ShowDatabases(ShowDatabases),
ShowTables(ShowTables),
DescribeTable(DescribeTable),
@@ -116,6 +118,7 @@ impl SqlHandler {
})?;
describe_table(table).context(ExecuteSqlSnafu)
}
SqlRequest::FlushTable(req) => self.flush_table(req).await,
};
if let Err(e) = &result {
error!(e; "{query_ctx}");


@@ -0,0 +1,83 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_catalog::consts::DEFAULT_SCHEMA_NAME;
use common_query::Output;
use snafu::{OptionExt, ResultExt};
use table::engine::TableReference;
use table::requests::FlushTableRequest;
use crate::error::{self, CatalogSnafu, DatabaseNotFoundSnafu, Result};
use crate::sql::SqlHandler;
impl SqlHandler {
pub(crate) async fn flush_table(&self, req: FlushTableRequest) -> Result<Output> {
if let Some(table) = &req.table_name {
self.flush_table_inner(
&req.catalog_name,
&req.schema_name,
table,
req.region_number,
)
.await?;
} else {
let schema = self
.catalog_manager
.schema(&req.catalog_name, &req.schema_name)
.context(CatalogSnafu)?
.context(DatabaseNotFoundSnafu {
catalog: &req.catalog_name,
schema: &req.schema_name,
})?;
let all_table_names = schema.table_names().context(CatalogSnafu)?;
futures::future::join_all(all_table_names.iter().map(|table| {
self.flush_table_inner(
&req.catalog_name,
&req.schema_name,
table,
req.region_number,
)
}))
.await
.into_iter()
.collect::<Result<Vec<_>>>()?;
}
Ok(Output::AffectedRows(0))
}
async fn flush_table_inner(
&self,
catalog: &str,
schema: &str,
table: &str,
region: Option<u32>,
) -> Result<()> {
if schema == DEFAULT_SCHEMA_NAME && table == "numbers" {
return Ok(());
}
let table_ref = TableReference {
catalog,
schema,
table,
};
let full_table_name = table_ref.to_string();
let table = self.get_table(&table_ref)?;
table.flush(region).await.context(error::FlushTableSnafu {
table_name: full_table_name,
})
}
}
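A sketch of the request this handler consumes; the field names are the ones used above, and sql_handler stands in for a handle obtained elsewhere:

    // Sketch: flush every region of one table; setting table_name to None
    // would flush all tables in the schema instead.
    let req = FlushTableRequest {
        catalog_name: "greptime".to_string(),
        schema_name: "public".to_string(),
        table_name: Some("my_table".to_string()),
        region_number: None,
    };
    let _ = sql_handler
        .execute(SqlRequest::FlushTable(req), QueryContext::arc())
        .await?;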


@@ -19,8 +19,8 @@ use std::sync::Arc;
use api::helper::ColumnDataTypeWrapper;
use api::v1::{
- column_def, AlterExpr, CreateDatabaseExpr, CreateTableExpr, DropTableExpr, InsertRequest,
- TableId,
column_def, AlterExpr, CreateDatabaseExpr, CreateTableExpr, DropTableExpr, FlushTableExpr,
InsertRequest, TableId,
};
use async_trait::async_trait;
use catalog::helper::{SchemaKey, SchemaValue};
@@ -39,7 +39,7 @@ use meta_client::client::MetaClient;
use meta_client::rpc::router::DeleteRequest as MetaDeleteRequest;
use meta_client::rpc::{
CompareAndPutRequest, CreateRequest as MetaCreateRequest, Partition as MetaPartition,
- RouteResponse, TableName,
RouteRequest, RouteResponse, TableName,
};
use partition::partition::{PartitionBound, PartitionDef};
use query::error::QueryExecutionSnafu;
@@ -259,6 +259,61 @@ impl DistInstance {
Ok(Output::AffectedRows(1))
}
async fn flush_table(&self, table_name: TableName, region_id: Option<u32>) -> Result<Output> {
let _ = self
.catalog_manager
.table(
&table_name.catalog_name,
&table_name.schema_name,
&table_name.table_name,
)
.await
.context(CatalogSnafu)?
.with_context(|| TableNotFoundSnafu {
table_name: table_name.to_string(),
})?;
let route_response = self
.meta_client
.route(RouteRequest {
table_names: vec![table_name.clone()],
})
.await
.context(RequestMetaSnafu)?;
let expr = FlushTableExpr {
catalog_name: table_name.catalog_name.clone(),
schema_name: table_name.schema_name.clone(),
table_name: table_name.table_name.clone(),
region_id,
};
for table_route in &route_response.table_routes {
let should_send_rpc = table_route.region_routes.iter().any(|route| {
if let Some(region_id) = region_id {
region_id == route.region.id as u32
} else {
true
}
});
if !should_send_rpc {
continue;
}
for datanode in table_route.find_leaders() {
debug!("Flushing table {table_name} on Datanode {datanode:?}");
let client = self.datanode_clients.get_client(&datanode).await;
let client = Database::new(&expr.catalog_name, &expr.schema_name, client);
client
.flush_table(expr.clone())
.await
.context(RequestDatanodeSnafu)?;
}
}
Ok(Output::AffectedRows(0))
}
async fn handle_statement(
&self,
stmt: Statement,


@@ -57,7 +57,11 @@ impl GrpcQueryHandler for DistInstance {
TableName::new(&expr.catalog_name, &expr.schema_name, &expr.table_name);
self.drop_table(table_name).await
}
- DdlExpr::FlushTable(_) => todo!(),
DdlExpr::FlushTable(expr) => {
let table_name =
TableName::new(&expr.catalog_name, &expr.schema_name, &expr.table_name);
self.flush_table(table_name, expr.region_id).await
}
}
}
}


@@ -91,14 +91,15 @@ mod test {
use api::v1::ddl_request::Expr as DdlExpr;
use api::v1::{
alter_expr, AddColumn, AddColumns, AlterExpr, Column, ColumnDataType, ColumnDef,
- CreateDatabaseExpr, CreateTableExpr, DdlRequest, DropTableExpr, InsertRequest,
- QueryRequest,
CreateDatabaseExpr, CreateTableExpr, DdlRequest, DropTableExpr, FlushTableExpr,
InsertRequest, QueryRequest,
};
use catalog::helper::{TableGlobalKey, TableGlobalValue};
use common_query::Output;
use common_recordbatch::RecordBatches;
use query::parser::QueryLanguageParser;
use session::context::QueryContext;
use tests::{has_parquet_file, test_region_dir};
use super::*;
use crate::table::DistTable;
@@ -352,6 +353,108 @@ CREATE TABLE {table_name} (
test_insert_and_query_on_auto_created_table(instance).await
}
#[tokio::test(flavor = "multi_thread")]
async fn test_distributed_flush_table() {
common_telemetry::init_default_ut_logging();
let instance = tests::create_distributed_instance("test_distributed_flush_table").await;
let data_tmp_dirs = instance.data_tmp_dirs();
let frontend = instance.frontend.as_ref();
let table_name = "my_dist_table";
let sql = format!(
r"
CREATE TABLE {table_name} (
a INT,
ts TIMESTAMP,
TIME INDEX (ts)
) PARTITION BY RANGE COLUMNS(a) (
PARTITION r0 VALUES LESS THAN (10),
PARTITION r1 VALUES LESS THAN (20),
PARTITION r2 VALUES LESS THAN (50),
PARTITION r3 VALUES LESS THAN (MAXVALUE),
)"
);
create_table(frontend, sql).await;
test_insert_and_query_on_existing_table(frontend, table_name).await;
flush_table(frontend, "greptime", "public", table_name, None).await;
// Wait for previous task finished
flush_table(frontend, "greptime", "public", table_name, None).await;
let table_id = 1024;
let table = instance
.frontend
.catalog_manager()
.table("greptime", "public", table_name)
.await
.unwrap()
.unwrap();
let table = table.as_any().downcast_ref::<DistTable>().unwrap();
let TableGlobalValue { regions_id_map, .. } = table
.table_global_value(&TableGlobalKey {
catalog_name: "greptime".to_string(),
schema_name: "public".to_string(),
table_name: table_name.to_string(),
})
.await
.unwrap()
.unwrap();
let region_to_dn_map = regions_id_map
.iter()
.map(|(k, v)| (v[0], *k))
.collect::<HashMap<u32, u64>>();
for (region, dn) in region_to_dn_map.iter() {
// data_tmp_dirs -> dn: 1..4
let data_tmp_dir = data_tmp_dirs.get((*dn - 1) as usize).unwrap();
let region_dir = test_region_dir(
data_tmp_dir.path().to_str().unwrap(),
"greptime",
"public",
table_id,
*region,
);
has_parquet_file(&region_dir);
}
}
#[tokio::test(flavor = "multi_thread")]
async fn test_standalone_flush_table() {
common_telemetry::init_default_ut_logging();
let standalone = tests::create_standalone_instance("test_standalone_flush_table").await;
let instance = &standalone.instance;
let data_tmp_dir = standalone.data_tmp_dir();
let table_name = "my_table";
let sql = format!("CREATE TABLE {table_name} (a INT, ts TIMESTAMP, TIME INDEX (ts))");
create_table(instance, sql).await;
test_insert_and_query_on_existing_table(instance, table_name).await;
let table_id = 1024;
let region_id = 0;
let region_dir = test_region_dir(
data_tmp_dir.path().to_str().unwrap(),
"greptime",
"public",
table_id,
region_id,
);
assert!(!has_parquet_file(&region_dir));
flush_table(instance, "greptime", "public", "my_table", None).await;
// Wait for previous task finished
flush_table(instance, "greptime", "public", "my_table", None).await;
assert!(has_parquet_file(&region_dir));
}
async fn create_table(frontend: &Instance, sql: String) {
let request = Request::Query(QueryRequest {
query: Some(Query::Sql(sql)),
@@ -360,6 +463,26 @@ CREATE TABLE {table_name} (
assert!(matches!(output, Output::AffectedRows(0)));
}
async fn flush_table(
frontend: &Instance,
catalog_name: &str,
schema_name: &str,
table_name: &str,
region_id: Option<u32>,
) {
let request = Request::Ddl(DdlRequest {
expr: Some(DdlExpr::FlushTable(FlushTableExpr {
catalog_name: catalog_name.to_string(),
schema_name: schema_name.to_string(),
table_name: table_name.to_string(),
region_id,
})),
});
let output = query(frontend, request).await;
assert!(matches!(output, Output::AffectedRows(0)));
}
async fn test_insert_and_query_on_existing_table(instance: &Instance, table_name: &str) {
let insert = InsertRequest {
table_name: table_name.to_string(),


@@ -152,6 +152,7 @@ impl Services {
let mut http_server = HttpServer::new(
ServerSqlQueryHandlerAdaptor::arc(instance.clone()),
ServerGrpcQueryHandlerAdaptor::arc(instance.clone()),
http_options.clone(),
);
if let Some(user_provider) = user_provider.clone() {


@@ -140,8 +140,11 @@ impl Table for DistTable {
Ok(Arc::new(dist_scan))
}
- fn supports_filter_pushdown(&self, _filter: &Expr) -> table::Result<FilterPushDownType> {
- Ok(FilterPushDownType::Inexact)
fn supports_filters_pushdown(
&self,
filters: &[&Expr],
) -> table::Result<Vec<FilterPushDownType>> {
Ok(vec![FilterPushDownType::Inexact; filters.len()])
}
async fn alter(&self, context: AlterContext, request: &AlterTableRequest) -> table::Result<()> {

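The trait method now takes all filters at once and returns one pushdown kind per filter. A caller-side sketch with hypothetical filter_a/filter_b expressions:

    // Sketch: every filter is reported as Inexact by DistTable.
    let kinds = table.supports_filters_pushdown(&[&filter_a, &filter_b])?;
    assert_eq!(kinds.len(), 2);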

@@ -74,8 +74,7 @@ impl DistTable {
let mut success = 0;
for join in joins {
- let object_result = join.await.context(error::JoinTaskSnafu)??;
- let Output::AffectedRows(rows) = object_result else { unreachable!() };
let rows = join.await.context(error::JoinTaskSnafu)?? as usize;
success += rows;
}
Ok(Output::AffectedRows(success))


@@ -47,7 +47,7 @@ impl DatanodeInstance {
Self { table, db }
}
- pub(crate) async fn grpc_insert(&self, request: InsertRequest) -> client::Result<Output> {
pub(crate) async fn grpc_insert(&self, request: InsertRequest) -> client::Result<u32> {
self.db.insert(request).await
}


@@ -34,6 +34,7 @@ use partition::route::TableRoutes;
use servers::grpc::GrpcServer;
use servers::query_handler::grpc::ServerGrpcQueryHandlerAdaptor;
use servers::Mode;
use table::engine::{region_name, table_dir};
use tonic::transport::Server;
use tower::service_fn;
@@ -56,11 +57,23 @@ pub(crate) struct MockDistributedInstance {
_guards: Vec<TestGuard>,
}
impl MockDistributedInstance {
pub fn data_tmp_dirs(&self) -> Vec<&TempDir> {
self._guards.iter().map(|g| &g._data_tmp_dir).collect()
}
}
pub(crate) struct MockStandaloneInstance {
pub(crate) instance: Arc<Instance>,
_guard: TestGuard,
}
impl MockStandaloneInstance {
pub fn data_tmp_dir(&self) -> &TempDir {
&self._guard._data_tmp_dir
}
}
pub(crate) async fn create_standalone_instance(test_name: &str) -> MockStandaloneInstance {
let (opts, guard) = create_tmp_dir_and_datanode_opts(test_name);
let datanode_instance = DatanodeInstance::new(&opts).await.unwrap();
@@ -112,15 +125,15 @@ pub(crate) async fn create_datanode_client(
// create a mock datanode grpc service, see example here:
// https://github.com/hyperium/tonic/blob/master/examples/src/mock/mock.rs
- let datanode_service = GrpcServer::new(
let grpc_server = GrpcServer::new(
ServerGrpcQueryHandlerAdaptor::arc(datanode_instance),
None,
runtime,
- )
- .create_service();
);
tokio::spawn(async move {
Server::builder()
- .add_service(datanode_service)
.add_service(grpc_server.create_flight_service())
.add_service(grpc_server.create_database_service())
.serve_with_incoming(futures::stream::iter(vec![Ok::<_, std::io::Error>(server)]))
.await
});
@@ -269,3 +282,29 @@ pub(crate) async fn create_distributed_instance(test_name: &str) -> MockDistribu
_guards: test_guards,
}
}
pub fn test_region_dir(
dir: &str,
catalog_name: &str,
schema_name: &str,
table_id: u32,
region_id: u32,
) -> String {
let table_dir = table_dir(catalog_name, schema_name, table_id);
let region_name = region_name(table_id, region_id);
format!("{}/{}/{}", dir, table_dir, region_name)
}
pub fn has_parquet_file(sst_dir: &str) -> bool {
for entry in std::fs::read_dir(sst_dir).unwrap() {
let entry = entry.unwrap();
let path = entry.path();
if !path.is_dir() {
assert_eq!("parquet", path.extension().unwrap());
return true;
}
}
false
}


@@ -39,18 +39,45 @@ use crate::service::store::kv::ResettableKvStoreRef;
use crate::service::store::memory::MemStore;
use crate::{error, Result};
- // Bootstrap the rpc server to serve incoming request
- pub async fn bootstrap_meta_srv(opts: MetaSrvOptions) -> Result<()> {
- let meta_srv = make_meta_srv(opts.clone()).await?;
- bootstrap_meta_srv_with_router(opts, router(meta_srv)).await
- }
- pub async fn bootstrap_meta_srv_with_router(opts: MetaSrvOptions, router: Router) -> Result<()> {
- let listener = TcpListener::bind(&opts.bind_addr)
- .await
- .context(error::TcpBindSnafu {
- addr: &opts.bind_addr,
- })?;
#[derive(Clone)]
pub struct MetaSrvInstance {
meta_srv: MetaSrv,
opts: MetaSrvOptions,
}
impl MetaSrvInstance {
pub async fn new(opts: MetaSrvOptions) -> Result<MetaSrvInstance> {
let meta_srv = build_meta_srv(&opts).await?;
Ok(MetaSrvInstance { meta_srv, opts })
}
pub async fn start(&self) -> Result<()> {
self.meta_srv.start().await;
bootstrap_meta_srv_with_router(&self.opts.bind_addr, router(self.meta_srv.clone())).await?;
Ok(())
}
pub async fn close(&self) -> Result<()> {
// TODO: shutdown the router
self.meta_srv.shutdown();
Ok(())
}
}
// Bootstrap the rpc server to serve incoming request
pub async fn bootstrap_meta_srv(opts: MetaSrvOptions) -> Result<()> {
let meta_srv = make_meta_srv(&opts).await?;
bootstrap_meta_srv_with_router(&opts.bind_addr, router(meta_srv)).await
}
pub async fn bootstrap_meta_srv_with_router(bind_addr: &str, router: Router) -> Result<()> {
let listener = TcpListener::bind(bind_addr)
.await
.context(error::TcpBindSnafu { addr: bind_addr })?;
let listener = TcpListenerStream::new(listener);
router
@@ -72,7 +99,7 @@ pub fn router(meta_srv: MetaSrv) -> Router {
.add_service(admin::make_admin_service(meta_srv))
}
- pub async fn make_meta_srv(opts: MetaSrvOptions) -> Result<MetaSrv> {
pub async fn build_meta_srv(opts: &MetaSrvOptions) -> Result<MetaSrv> {
let (kv_store, election, lock) = if opts.use_memory_store {
(Arc::new(MemStore::new()) as _, None, None)
} else {
@@ -107,7 +134,7 @@ pub async fn make_meta_srv(opts: MetaSrvOptions) -> Result<MetaSrv> {
};
let meta_srv = MetaSrvBuilder::new()
- .options(opts)
.options(opts.clone())
.kv_store(kv_store)
.in_memory(in_memory)
.selector(selector)
@@ -117,6 +144,12 @@ pub async fn make_meta_srv(opts: MetaSrvOptions) -> Result<MetaSrv> {
.build()
.await;
Ok(meta_srv)
}
pub async fn make_meta_srv(opts: &MetaSrvOptions) -> Result<MetaSrv> {
let meta_srv = build_meta_srv(opts).await?;
meta_srv.start().await;
Ok(meta_srv)

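MetaSrvInstance wraps the previously free-standing bootstrap functions so the cmd crate can build, start, and later close the meta server as one object. Sketch of the intended call sequence:

    // Sketch: build first, then serve; close() shuts the MetaSrv down.
    let instance = MetaSrvInstance::new(opts).await?;
    instance.start().await?;   // starts MetaSrv and serves the gRPC router on opts.bind_addr
    // ... later, during shutdown:
    instance.close().await?;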

@@ -31,13 +31,14 @@ use snafu::{ensure, OptionExt, ResultExt};
use store_api::storage::{
ColumnDescriptorBuilder, ColumnFamilyDescriptor, ColumnFamilyDescriptorBuilder, ColumnId,
CreateOptions, EngineContext as StorageEngineContext, OpenOptions, Region,
- RegionDescriptorBuilder, RegionId, RowKeyDescriptor, RowKeyDescriptorBuilder, StorageEngine,
RegionDescriptorBuilder, RowKeyDescriptor, RowKeyDescriptorBuilder, StorageEngine,
};
- use table::engine::{EngineContext, TableEngine, TableEngineProcedure, TableReference};
use table::engine::{
region_id, region_name, table_dir, EngineContext, TableEngine, TableEngineProcedure,
TableReference,
};
use table::error::TableOperationSnafu;
- use table::metadata::{
- TableId, TableInfo, TableInfoBuilder, TableMetaBuilder, TableType, TableVersion,
- };
use table::metadata::{TableInfo, TableInfoBuilder, TableMetaBuilder, TableType, TableVersion};
use table::requests::{
AlterKind, AlterTableRequest, CreateTableRequest, DropTableRequest, OpenTableRequest,
};
@@ -59,22 +60,6 @@ pub const MITO_ENGINE: &str = "mito";
pub const INIT_COLUMN_ID: ColumnId = 0;
const INIT_TABLE_VERSION: TableVersion = 0;
/// Generate region name in the form of "{TABLE_ID}_{REGION_NUMBER}"
#[inline]
fn region_name(table_id: TableId, n: u32) -> String {
format!("{table_id}_{n:010}")
}
#[inline]
fn region_id(table_id: TableId, n: u32) -> RegionId {
(u64::from(table_id) << 32) | u64::from(n)
}
#[inline]
fn table_dir(catalog_name: &str, schema_name: &str, table_id: TableId) -> String {
format!("{catalog_name}/{schema_name}/{table_id}/")
}
/// [TableEngine] implementation.
///
/// About mito <https://en.wikipedia.org/wiki/Alfa_Romeo_MiTo>.

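The removed region_name/region_id/table_dir helpers are not gone; they are now provided by table::engine (see the updated use block above) so that procedures and tests compute the same region paths. From the format strings that were removed here:

    // Values implied by the removed helpers (now provided by table::engine).
    assert_eq!(region_name(1024, 0), "1024_0000000000");             // "{table_id}_{region:010}"
    assert_eq!(region_id(1024, 1), (1024u64 << 32) | 1);             // table id in the high 32 bits
    assert_eq!(table_dir("greptime", "public", 1024), "greptime/public/1024/");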

@@ -25,6 +25,7 @@ use store_api::storage::{
ColumnId, CreateOptions, EngineContext, OpenOptions, RegionDescriptorBuilder, RegionNumber,
StorageEngine,
};
use table::engine::{region_id, table_dir};
use table::metadata::{TableInfoBuilder, TableMetaBuilder, TableType};
use table::requests::CreateTableRequest;
@@ -146,7 +147,7 @@ impl<S: StorageEngine> CreateMitoTable<S> {
/// Creates regions for the table.
async fn on_create_regions(&mut self) -> Result<Status> {
let engine_ctx = EngineContext::default();
- let table_dir = engine::table_dir(
let table_dir = table_dir(
&self.data.request.catalog_name,
&self.data.request.schema_name,
self.data.request.id,
@@ -203,7 +204,7 @@ impl<S: StorageEngine> CreateMitoTable<S> {
}
// We need to create that region.
- let region_id = engine::region_id(self.data.request.id, *number);
let region_id = region_id(self.data.request.id, *number);
let region_desc = RegionDescriptorBuilder::default()
.id(region_id)
.name(region_name.clone())
@@ -234,7 +235,7 @@ impl<S: StorageEngine> CreateMitoTable<S> {
/// Writes metadata to the table manifest.
async fn on_write_table_manifest(&mut self) -> Result<Status> {
- let table_dir = engine::table_dir(
let table_dir = table_dir(
&self.data.request.catalog_name,
&self.data.request.schema_name,
self.data.request.id,


@@ -31,14 +31,28 @@ use storage::region::RegionImpl;
use storage::EngineImpl;
use store_api::manifest::Manifest;
use store_api::storage::ReadContext;
- use table::requests::{AddColumnRequest, AlterKind, DeleteRequest, TableOptions};
use table::requests::{
AddColumnRequest, AlterKind, DeleteRequest, FlushTableRequest, TableOptions,
};
use super::*;
- use crate::table::test_util;
use crate::table::test_util::{
- new_insert_request, schema_for_test, TestEngineComponents, TABLE_NAME,
self, new_insert_request, schema_for_test, setup_table, TestEngineComponents, TABLE_NAME,
};
pub fn has_parquet_file(sst_dir: &str) -> bool {
for entry in std::fs::read_dir(sst_dir).unwrap() {
let entry = entry.unwrap();
let path = entry.path();
if !path.is_dir() {
assert_eq!("parquet", path.extension().unwrap());
return true;
}
}
false
}
async fn setup_table_with_column_default_constraint() -> (TempDir, String, TableRef) {
let table_name = "test_default_constraint";
let column_schemas = vec![
@@ -752,3 +766,76 @@ async fn test_table_delete_rows() {
+-------+-----+--------+-------------------------+"
);
}
#[tokio::test]
async fn test_flush_table_all_regions() {
let TestEngineComponents {
table_ref: table,
dir,
..
} = test_util::setup_test_engine_and_table().await;
setup_table(table.clone()).await;
let table_id = 1u32;
let region_name = region_name(table_id, 0);
let table_info = table.table_info();
let table_dir = table_dir(&table_info.catalog_name, &table_info.schema_name, table_id);
let region_dir = format!(
"{}/{}/{}",
dir.path().to_str().unwrap(),
table_dir,
region_name
);
assert!(!has_parquet_file(&region_dir));
// Trigger flush all region
table.flush(None).await.unwrap();
// Trigger again, wait for the previous task finished
table.flush(None).await.unwrap();
assert!(has_parquet_file(&region_dir));
}
#[tokio::test]
async fn test_flush_table_with_region_id() {
let TestEngineComponents {
table_ref: table,
dir,
..
} = test_util::setup_test_engine_and_table().await;
setup_table(table.clone()).await;
let table_id = 1u32;
let region_name = region_name(table_id, 0);
let table_info = table.table_info();
let table_dir = table_dir(&table_info.catalog_name, &table_info.schema_name, table_id);
let region_dir = format!(
"{}/{}/{}",
dir.path().to_str().unwrap(),
table_dir,
region_name
);
assert!(!has_parquet_file(&region_dir));
let req = FlushTableRequest {
region_number: Some(0),
..Default::default()
};
// Trigger flush all region
table.flush(req.region_number).await.unwrap();
// Trigger again, wait for the previous task finished
table.flush(req.region_number).await.unwrap();
assert!(has_parquet_file(&region_dir));
}


@@ -208,8 +208,8 @@ impl<R: Region> Table for MitoTable<R> {
Ok(Arc::new(SimpleTableScan::new(stream)))
}
- fn supports_filter_pushdown(&self, _filter: &Expr) -> table::error::Result<FilterPushDownType> {
- Ok(FilterPushDownType::Inexact)
fn supports_filters_pushdown(&self, filters: &[&Expr]) -> TableResult<Vec<FilterPushDownType>> {
Ok(vec![FilterPushDownType::Inexact; filters.len()])
}
/// Alter table changes the schemas of the table.
@@ -323,6 +323,25 @@ impl<R: Region> Table for MitoTable<R> {
Ok(rows_deleted)
}
async fn flush(&self, region_number: Option<RegionNumber>) -> TableResult<()> {
if let Some(region_number) = region_number {
if let Some(region) = self.regions.get(&region_number) {
region
.flush()
.await
.map_err(BoxedError::new)
.context(table_error::TableOperationSnafu)?;
}
} else {
futures::future::try_join_all(self.regions.values().map(|region| region.flush()))
.await
.map_err(BoxedError::new)
.context(table_error::TableOperationSnafu)?;
}
Ok(())
}
async fn close(&self) -> TableResult<()> {
futures::future::try_join_all(self.regions.values().map(|region| region.close()))
.await

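MitoTable::flush above flushes either a single region or all regions of the table; the new tests in the previous file exercise both paths. Minimal usage sketch:

    // Sketch: flush everything, or only region 0, and wait for completion.
    table.flush(None).await?;       // all regions
    table.flush(Some(0)).await?;    // one region, by RegionNumber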

@@ -20,7 +20,7 @@ use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_test_util::temp_dir::{create_temp_dir, TempDir};
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnSchema, RawSchema, Schema, SchemaBuilder, SchemaRef};
- use datatypes::vectors::VectorRef;
use datatypes::vectors::{Float64Vector, StringVector, TimestampMillisecondVector, VectorRef};
use log_store::NoopLogStore;
use object_store::services::Fs as Builder;
use object_store::{ObjectStore, ObjectStoreBuilder};
@@ -30,7 +30,7 @@ use storage::EngineImpl;
use table::engine::{EngineContext, TableEngine};
use table::metadata::{TableInfo, TableInfoBuilder, TableMetaBuilder, TableType};
use table::requests::{CreateTableRequest, InsertRequest, TableOptions};
- use table::TableRef;
use table::{Table, TableRef};
use crate::config::EngineConfig;
use crate::engine::{MitoEngine, MITO_ENGINE};
@@ -178,3 +178,19 @@ pub async fn setup_mock_engine_and_table(
(mock_engine, table_engine, table, object_store, dir)
}
pub async fn setup_table(table: Arc<dyn Table>) {
let mut columns_values: HashMap<String, VectorRef> = HashMap::with_capacity(4);
let hosts: VectorRef = Arc::new(StringVector::from(vec!["host1", "host2", "host3", "host4"]));
let cpus: VectorRef = Arc::new(Float64Vector::from_vec(vec![1.0, 2.0, 3.0, 4.0]));
let memories: VectorRef = Arc::new(Float64Vector::from_vec(vec![1.0, 2.0, 3.0, 4.0]));
let tss: VectorRef = Arc::new(TimestampMillisecondVector::from_vec(vec![1, 2, 2, 1]));
columns_values.insert("host".to_string(), hosts.clone());
columns_values.insert("cpu".to_string(), cpus.clone());
columns_values.insert("memory".to_string(), memories.clone());
columns_values.insert("ts".to_string(), tss.clone());
let insert_req = new_insert_request("demo".to_string(), columns_values);
assert_eq!(4, table.insert(insert_req).await.unwrap());
}

View File

@@ -200,6 +200,10 @@ impl Region for MockRegion {
fn disk_usage_bytes(&self) -> u64 { fn disk_usage_bytes(&self) -> u64 {
0 0
} }
async fn flush(&self) -> Result<()> {
unimplemented!()
}
} }
impl MockRegionInner { impl MockRegionInner {

View File

@@ -23,7 +23,7 @@ use datafusion::arrow::datatypes::{DataType, TimeUnit};
use datafusion::common::{DFField, DFSchema, DFSchemaRef, Result as DataFusionResult, Statistics}; use datafusion::common::{DFField, DFSchema, DFSchemaRef, Result as DataFusionResult, Statistics};
use datafusion::error::DataFusionError; use datafusion::error::DataFusionError;
use datafusion::execution::context::TaskContext; use datafusion::execution::context::TaskContext;
- use datafusion::logical_expr::{LogicalPlan, UserDefinedLogicalNode};
+ use datafusion::logical_expr::{LogicalPlan, UserDefinedLogicalNodeCore};
use datafusion::physical_expr::PhysicalSortExpr; use datafusion::physical_expr::PhysicalSortExpr;
use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet}; use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet};
use datafusion::physical_plan::{ use datafusion::physical_plan::{
@@ -37,7 +37,7 @@ use futures::Stream;
use crate::extension_plan::Millisecond; use crate::extension_plan::Millisecond;
- #[derive(Debug, Clone)]
+ #[derive(Debug, Clone, PartialEq, Eq, Hash)]
pub struct EmptyMetric { pub struct EmptyMetric {
start: Millisecond, start: Millisecond,
end: Millisecond, end: Millisecond,
@@ -86,9 +86,9 @@ impl EmptyMetric {
} }
} }
- impl UserDefinedLogicalNode for EmptyMetric {
- fn as_any(&self) -> &dyn Any {
- self as _
+ impl UserDefinedLogicalNodeCore for EmptyMetric {
+ fn name(&self) -> &str {
+ "EmptyMetric"
} }
fn inputs(&self) -> Vec<&LogicalPlan> { fn inputs(&self) -> Vec<&LogicalPlan> {
@@ -111,12 +111,8 @@ impl UserDefinedLogicalNode for EmptyMetric {
) )
} }
- fn from_template(
- &self,
- _exprs: &[datafusion::prelude::Expr],
- _inputs: &[LogicalPlan],
- ) -> Arc<dyn UserDefinedLogicalNode> {
- Arc::new(self.clone())
+ fn from_template(&self, _expr: &[Expr], _inputs: &[LogicalPlan]) -> Self {
+ self.clone()
}
}
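The hunk above, and the four that follow for InstantManipulate, SeriesNormalize, RangeManipulate and SeriesDivide, are the same mechanical migration: the plans now implement DataFusion's `UserDefinedLogicalNodeCore` instead of `UserDefinedLogicalNode`, `name()` replaces the `as_any()` hook, and `from_template` returns `Self` by value rather than an `Arc<dyn UserDefinedLogicalNode>`. The new `PartialEq, Eq, Hash` derives suggest the core trait now carries those bounds. A minimal sketch of the new shape, assuming the remaining trait methods carried over unchanged from the old trait (method set not confirmed by this diff):

use std::fmt;

use datafusion::common::DFSchemaRef;
use datafusion::logical_expr::{Expr, LogicalPlan, UserDefinedLogicalNodeCore};

// Eq and Hash are now required on the node itself.
#[derive(Debug, Clone, PartialEq, Eq, Hash)]
struct PassThrough {
    input: LogicalPlan,
}

impl UserDefinedLogicalNodeCore for PassThrough {
    fn name(&self) -> &str {
        "PassThrough" // replaces the old `as_any` downcast hook
    }

    fn inputs(&self) -> Vec<&LogicalPlan> {
        vec![&self.input]
    }

    fn schema(&self) -> &DFSchemaRef {
        self.input.schema()
    }

    fn expressions(&self) -> Vec<Expr> {
        vec![]
    }

    fn fmt_for_explain(&self, f: &mut fmt::Formatter) -> fmt::Result {
        write!(f, "PassThrough")
    }

    // Returns `Self`; the blanket impl of `UserDefinedLogicalNode` takes care of the
    // `Arc` wrapping that each plan used to do by hand.
    fn from_template(&self, _exprs: &[Expr], inputs: &[LogicalPlan]) -> Self {
        Self { input: inputs[0].clone() }
    }
}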

View File

@@ -24,7 +24,7 @@ use datafusion::arrow::record_batch::RecordBatch;
use datafusion::common::DFSchemaRef; use datafusion::common::DFSchemaRef;
use datafusion::error::{DataFusionError, Result as DataFusionResult}; use datafusion::error::{DataFusionError, Result as DataFusionResult};
use datafusion::execution::context::TaskContext; use datafusion::execution::context::TaskContext;
- use datafusion::logical_expr::{Expr, LogicalPlan, UserDefinedLogicalNode};
+ use datafusion::logical_expr::{Expr, LogicalPlan, UserDefinedLogicalNodeCore};
use datafusion::physical_expr::PhysicalSortExpr; use datafusion::physical_expr::PhysicalSortExpr;
use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet}; use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet};
use datafusion::physical_plan::{ use datafusion::physical_plan::{
@@ -42,7 +42,7 @@ use crate::extension_plan::Millisecond;
/// This plan will try to align the input time series, for every timestamp between /// This plan will try to align the input time series, for every timestamp between
/// `start` and `end` with step `interval`. Find in the `lookback` range if data /// `start` and `end` with step `interval`. Find in the `lookback` range if data
/// is missing at the given timestamp. /// is missing at the given timestamp.
- #[derive(Debug)]
+ #[derive(Debug, PartialEq, Eq, Hash)]
pub struct InstantManipulate { pub struct InstantManipulate {
start: Millisecond, start: Millisecond,
end: Millisecond, end: Millisecond,
@@ -52,9 +52,9 @@ pub struct InstantManipulate {
input: LogicalPlan, input: LogicalPlan,
} }
- impl UserDefinedLogicalNode for InstantManipulate {
- fn as_any(&self) -> &dyn Any {
- self as _
+ impl UserDefinedLogicalNodeCore for InstantManipulate {
+ fn name(&self) -> &str {
+ "InstantManipulate"
} }
fn inputs(&self) -> Vec<&LogicalPlan> { fn inputs(&self) -> Vec<&LogicalPlan> {
@@ -77,21 +77,17 @@ impl UserDefinedLogicalNode for InstantManipulate {
) )
} }
- fn from_template(
- &self,
- _exprs: &[Expr],
- inputs: &[LogicalPlan],
- ) -> Arc<dyn UserDefinedLogicalNode> {
+ fn from_template(&self, _exprs: &[Expr], inputs: &[LogicalPlan]) -> Self {
assert!(!inputs.is_empty());
- Arc::new(Self {
+ Self {
start: self.start,
end: self.end,
lookback_delta: self.lookback_delta,
interval: self.interval,
time_index_column: self.time_index_column.clone(),
input: inputs[0].clone(),
- })
+ }
}
}

View File

@@ -22,7 +22,7 @@ use datafusion::arrow::compute;
use datafusion::common::{DFSchemaRef, Result as DataFusionResult, Statistics}; use datafusion::common::{DFSchemaRef, Result as DataFusionResult, Statistics};
use datafusion::error::DataFusionError; use datafusion::error::DataFusionError;
use datafusion::execution::context::TaskContext; use datafusion::execution::context::TaskContext;
- use datafusion::logical_expr::{LogicalPlan, UserDefinedLogicalNode};
+ use datafusion::logical_expr::{Expr, LogicalPlan, UserDefinedLogicalNodeCore};
use datafusion::physical_expr::PhysicalSortExpr; use datafusion::physical_expr::PhysicalSortExpr;
use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet}; use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet};
use datafusion::physical_plan::{ use datafusion::physical_plan::{
@@ -43,7 +43,7 @@ use crate::extension_plan::Millisecond;
/// - bias sample's timestamp by offset /// - bias sample's timestamp by offset
/// - sort the record batch based on timestamp column /// - sort the record batch based on timestamp column
/// - remove NaN values /// - remove NaN values
- #[derive(Debug)]
+ #[derive(Debug, PartialEq, Eq, Hash)]
pub struct SeriesNormalize { pub struct SeriesNormalize {
offset: Millisecond, offset: Millisecond,
time_index_column_name: String, time_index_column_name: String,
@@ -51,9 +51,9 @@ pub struct SeriesNormalize {
input: LogicalPlan, input: LogicalPlan,
} }
- impl UserDefinedLogicalNode for SeriesNormalize {
- fn as_any(&self) -> &dyn Any {
- self as _
+ impl UserDefinedLogicalNodeCore for SeriesNormalize {
+ fn name(&self) -> &str {
+ "SeriesNormalize"
} }
fn inputs(&self) -> Vec<&LogicalPlan> { fn inputs(&self) -> Vec<&LogicalPlan> {
@@ -76,18 +76,14 @@ impl UserDefinedLogicalNode for SeriesNormalize {
) )
} }
- fn from_template(
- &self,
- _exprs: &[datafusion::logical_expr::Expr],
- inputs: &[LogicalPlan],
- ) -> Arc<dyn UserDefinedLogicalNode> {
+ fn from_template(&self, _exprs: &[Expr], inputs: &[LogicalPlan]) -> Self {
assert!(!inputs.is_empty());
- Arc::new(Self {
+ Self {
offset: self.offset,
time_index_column_name: self.time_index_column_name.clone(),
input: inputs[0].clone(),
- })
+ }
}
}

View File

@@ -26,7 +26,7 @@ use datafusion::arrow::record_batch::RecordBatch;
use datafusion::common::{DFField, DFSchema, DFSchemaRef}; use datafusion::common::{DFField, DFSchema, DFSchemaRef};
use datafusion::error::{DataFusionError, Result as DataFusionResult}; use datafusion::error::{DataFusionError, Result as DataFusionResult};
use datafusion::execution::context::TaskContext; use datafusion::execution::context::TaskContext;
- use datafusion::logical_expr::{Expr, LogicalPlan, UserDefinedLogicalNode};
+ use datafusion::logical_expr::{Expr, LogicalPlan, UserDefinedLogicalNodeCore};
use datafusion::physical_expr::PhysicalSortExpr; use datafusion::physical_expr::PhysicalSortExpr;
use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet}; use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet};
use datafusion::physical_plan::{ use datafusion::physical_plan::{
@@ -42,7 +42,7 @@ use crate::range_array::RangeArray;
/// ///
/// This plan will "fold" time index and value columns into [RangeArray]s, and truncate /// This plan will "fold" time index and value columns into [RangeArray]s, and truncate
/// other columns to the same length with the "folded" [RangeArray] column. /// other columns to the same length with the "folded" [RangeArray] column.
- #[derive(Debug)]
+ #[derive(Debug, PartialEq, Eq, Hash)]
pub struct RangeManipulate { pub struct RangeManipulate {
start: Millisecond, start: Millisecond,
end: Millisecond, end: Millisecond,
@@ -137,9 +137,9 @@ impl RangeManipulate {
} }
} }
- impl UserDefinedLogicalNode for RangeManipulate {
- fn as_any(&self) -> &dyn Any {
- self as _
+ impl UserDefinedLogicalNodeCore for RangeManipulate {
+ fn name(&self) -> &str {
+ "RangeManipulate"
} }
fn inputs(&self) -> Vec<&LogicalPlan> { fn inputs(&self) -> Vec<&LogicalPlan> {
@@ -162,14 +162,10 @@ impl UserDefinedLogicalNode for RangeManipulate {
) )
} }
- fn from_template(
- &self,
- _exprs: &[Expr],
- inputs: &[LogicalPlan],
- ) -> Arc<dyn UserDefinedLogicalNode> {
+ fn from_template(&self, _exprs: &[Expr], inputs: &[LogicalPlan]) -> Self {
assert!(!inputs.is_empty());
- Arc::new(Self {
+ Self {
start: self.start,
end: self.end,
interval: self.interval,
@@ -178,7 +174,7 @@ impl UserDefinedLogicalNode for RangeManipulate {
value_columns: self.value_columns.clone(),
input: inputs[0].clone(),
output_schema: self.output_schema.clone(),
- })
+ }
}
}

View File

@@ -23,7 +23,7 @@ use datafusion::arrow::record_batch::RecordBatch;
use datafusion::common::DFSchemaRef; use datafusion::common::DFSchemaRef;
use datafusion::error::Result as DataFusionResult; use datafusion::error::Result as DataFusionResult;
use datafusion::execution::context::TaskContext; use datafusion::execution::context::TaskContext;
- use datafusion::logical_expr::{Expr, LogicalPlan, UserDefinedLogicalNode};
+ use datafusion::logical_expr::{Expr, LogicalPlan, UserDefinedLogicalNodeCore};
use datafusion::physical_expr::PhysicalSortExpr; use datafusion::physical_expr::PhysicalSortExpr;
use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet}; use datafusion::physical_plan::metrics::{BaselineMetrics, ExecutionPlanMetricsSet, MetricsSet};
use datafusion::physical_plan::{ use datafusion::physical_plan::{
@@ -33,15 +33,15 @@ use datafusion::physical_plan::{
use datatypes::arrow::compute; use datatypes::arrow::compute;
use futures::{ready, Stream, StreamExt}; use futures::{ready, Stream, StreamExt};
- #[derive(Debug)]
+ #[derive(Debug, PartialEq, Eq, Hash)]
pub struct SeriesDivide { pub struct SeriesDivide {
tag_columns: Vec<String>, tag_columns: Vec<String>,
input: LogicalPlan, input: LogicalPlan,
} }
- impl UserDefinedLogicalNode for SeriesDivide {
- fn as_any(&self) -> &dyn Any {
- self as _
+ impl UserDefinedLogicalNodeCore for SeriesDivide {
+ fn name(&self) -> &str {
+ "SeriesDivide"
} }
fn inputs(&self) -> Vec<&LogicalPlan> { fn inputs(&self) -> Vec<&LogicalPlan> {
@@ -60,17 +60,13 @@ impl UserDefinedLogicalNode for SeriesDivide {
write!(f, "PromSeriesDivide: tags={:?}", self.tag_columns) write!(f, "PromSeriesDivide: tags={:?}", self.tag_columns)
} }
- fn from_template(
- &self,
- _exprs: &[Expr],
- inputs: &[LogicalPlan],
- ) -> Arc<dyn UserDefinedLogicalNode> {
+ fn from_template(&self, _exprs: &[Expr], inputs: &[LogicalPlan]) -> Self {
assert!(!inputs.is_empty());
- Arc::new(Self {
+ Self {
tag_columns: self.tag_columns.clone(),
input: inputs[0].clone(),
- })
+ }
}
}

View File

@@ -157,7 +157,7 @@ mod test {
distinct: false, \ distinct: false, \
top: None, \ top: None, \
projection: \ projection: \
- [Wildcard(WildcardAdditionalOptions { opt_exclude: None, opt_except: None, opt_rename: None })], \
+ [Wildcard(WildcardAdditionalOptions { opt_exclude: None, opt_except: None, opt_rename: None, opt_replace: None })], \
into: None, \ into: None, \
from: [TableWithJoins { relation: Table { name: ObjectName([Ident { value: \"t1\", quote_style: None }]\ from: [TableWithJoins { relation: Table { name: ObjectName([Ident { value: \"t1\", quote_style: None }]\
), \ ), \

View File

@@ -70,8 +70,11 @@ impl Table for MemTableWrapper {
self.inner.scan(projection, filters, limit).await self.inner.scan(projection, filters, limit).await
} }
- fn supports_filter_pushdown(&self, _filter: &Expr) -> table::Result<FilterPushDownType> {
- Ok(FilterPushDownType::Exact)
+ fn supports_filters_pushdown(
+ &self,
+ filters: &[&Expr],
+ ) -> table::Result<Vec<FilterPushDownType>> {
+ Ok(vec![FilterPushDownType::Exact; filters.len()])
}
}
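Both implementations shown in this diff answer uniformly, but the new batch signature lets a table answer per filter: the returned vector must be index-aligned with, and the same length as, the input slice. A toy, self-contained illustration of that contract (the enum is a local stand-in for `FilterPushDownType` and the "ts" rule is invented for the example):

#[derive(Clone, Copy, Debug, PartialEq)]
enum PushDown {
    Inexact,
    Exact,
}

// One decision per filter, in the same order as the input slice.
fn decide(filters: &[&str]) -> Vec<PushDown> {
    filters
        .iter()
        .map(|f| if f.starts_with("ts") { PushDown::Exact } else { PushDown::Inexact })
        .collect()
}

fn main() {
    let decisions = decide(&["ts > 0", "host = 'a'"]);
    assert_eq!(decisions, vec![PushDown::Exact, PushDown::Inexact]);
}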

View File

@@ -263,8 +263,12 @@ pub enum Error {
#[snafu(backtrace)] #[snafu(backtrace)]
source: common_mem_prof::error::Error, source: common_mem_prof::error::Error,
}, },
#[snafu(display("Invalid prepare statement: {}", err_msg))] #[snafu(display("Invalid prepare statement: {}", err_msg))]
InvalidPrepareStatement { err_msg: String }, InvalidPrepareStatement { err_msg: String },
#[snafu(display("Invalid flush argument: {}", err_msg))]
InvalidFlushArgument { err_msg: String },
} }
pub type Result<T> = std::result::Result<T, Error>; pub type Result<T> = std::result::Result<T, Error>;
@@ -327,6 +331,7 @@ impl ErrorExt for Error {
DatabaseNotFound { .. } => StatusCode::DatabaseNotFound, DatabaseNotFound { .. } => StatusCode::DatabaseNotFound,
#[cfg(feature = "mem-prof")] #[cfg(feature = "mem-prof")]
DumpProfileData { source, .. } => source.status_code(), DumpProfileData { source, .. } => source.status_code(),
InvalidFlushArgument { .. } => StatusCode::InvalidArguments,
} }
} }

View File

@@ -12,11 +12,14 @@
// See the License for the specific language governing permissions and // See the License for the specific language governing permissions and
// limitations under the License. // limitations under the License.
mod database;
pub mod flight; pub mod flight;
pub mod handler;
use std::net::SocketAddr; use std::net::SocketAddr;
use std::sync::Arc; use std::sync::Arc;
use api::v1::greptime_database_server::{GreptimeDatabase, GreptimeDatabaseServer};
use arrow_flight::flight_service_server::{FlightService, FlightServiceServer}; use arrow_flight::flight_service_server::{FlightService, FlightServiceServer};
use async_trait::async_trait; use async_trait::async_trait;
use common_runtime::Runtime; use common_runtime::Runtime;
@@ -27,18 +30,21 @@ use tokio::net::TcpListener;
use tokio::sync::oneshot::{self, Sender}; use tokio::sync::oneshot::{self, Sender};
use tokio::sync::Mutex; use tokio::sync::Mutex;
use tokio_stream::wrappers::TcpListenerStream; use tokio_stream::wrappers::TcpListenerStream;
use tonic::Status;
use crate::auth::UserProviderRef; use crate::auth::UserProviderRef;
use crate::error::{AlreadyStartedSnafu, Result, StartGrpcSnafu, TcpBindSnafu}; use crate::error::{AlreadyStartedSnafu, Result, StartGrpcSnafu, TcpBindSnafu};
use crate::grpc::database::DatabaseService;
use crate::grpc::flight::FlightHandler; use crate::grpc::flight::FlightHandler;
use crate::grpc::handler::GreptimeRequestHandler;
use crate::query_handler::grpc::ServerGrpcQueryHandlerRef; use crate::query_handler::grpc::ServerGrpcQueryHandlerRef;
use crate::server::Server; use crate::server::Server;
type TonicResult<T> = std::result::Result<T, Status>;
pub struct GrpcServer {
- query_handler: ServerGrpcQueryHandlerRef,
- user_provider: Option<UserProviderRef>,
shutdown_tx: Mutex<Option<Sender<()>>>,
- runtime: Arc<Runtime>,
+ request_handler: Arc<GreptimeRequestHandler>,
}
impl GrpcServer {
@@ -47,21 +53,23 @@ impl GrpcServer {
user_provider: Option<UserProviderRef>,
runtime: Arc<Runtime>,
) -> Self {
- Self {
+ let request_handler = Arc::new(GreptimeRequestHandler::new(
query_handler,
user_provider,
- shutdown_tx: Mutex::new(None),
runtime,
+ ));
+ Self {
+ shutdown_tx: Mutex::new(None),
+ request_handler,
}
}
- pub fn create_service(&self) -> FlightServiceServer<impl FlightService> {
- let service = FlightHandler::new(
- self.query_handler.clone(),
- self.user_provider.clone(),
- self.runtime.clone(),
- );
- FlightServiceServer::new(service)
+ pub fn create_flight_service(&self) -> FlightServiceServer<impl FlightService> {
+ FlightServiceServer::new(FlightHandler::new(self.request_handler.clone()))
+ }
+ pub fn create_database_service(&self) -> GreptimeDatabaseServer<impl GreptimeDatabase> {
+ GreptimeDatabaseServer::new(DatabaseService::new(self.request_handler.clone()))
}
}
} }
@@ -103,7 +111,8 @@ impl Server for GrpcServer {
// Would block to serve requests. // Would block to serve requests.
tonic::transport::Server::builder() tonic::transport::Server::builder()
- .add_service(self.create_service())
+ .add_service(self.create_flight_service())
+ .add_service(self.create_database_service())
.serve_with_incoming_shutdown(TcpListenerStream::new(listener), rx.map(drop)) .serve_with_incoming_shutdown(TcpListenerStream::new(listener), rx.map(drop))
.await .await
.context(StartGrpcSnafu)?; .context(StartGrpcSnafu)?;

View File

@@ -0,0 +1,57 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use api::v1::greptime_database_server::GreptimeDatabase;
use api::v1::{greptime_response, AffectedRows, GreptimeRequest, GreptimeResponse};
use async_trait::async_trait;
use common_query::Output;
use tonic::{Request, Response, Status};
use crate::grpc::handler::GreptimeRequestHandler;
use crate::grpc::TonicResult;
pub(crate) struct DatabaseService {
handler: Arc<GreptimeRequestHandler>,
}
impl DatabaseService {
pub(crate) fn new(handler: Arc<GreptimeRequestHandler>) -> Self {
Self { handler }
}
}
#[async_trait]
impl GreptimeDatabase for DatabaseService {
async fn handle(
&self,
request: Request<GreptimeRequest>,
) -> TonicResult<Response<GreptimeResponse>> {
let request = request.into_inner();
let output = self.handler.handle_request(request).await?;
let response = match output {
Output::AffectedRows(rows) => GreptimeResponse {
header: None,
response: Some(greptime_response::Response::AffectedRows(AffectedRows {
value: rows as _,
})),
},
Output::Stream(_) | Output::RecordBatches(_) => {
return Err(Status::unimplemented("GreptimeDatabase::handle for query"));
}
};
Ok(Response::new(response))
}
}
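With this service registered (see the `create_database_service` change above), clients can reach the unary endpoint without going through Arrow Flight. A hedged sketch of a tonic client call, assuming the usual tonic codegen layout in the api crate (`greptime_database_client::GreptimeDatabaseClient` with a `handle` method) and the default gRPC port; the names and port are assumptions and may differ:

use api::v1::greptime_database_client::GreptimeDatabaseClient;
use api::v1::GreptimeRequest;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let mut client = GreptimeDatabaseClient::connect("http://127.0.0.1:4001").await?;
    // A real call would populate `request` (and optionally its header) with e.g. an
    // insert; an empty GreptimeRequest is rejected with an invalid-query error.
    let request = GreptimeRequest::default();
    let response = client.handle(request).await?.into_inner();
    println!("affected rows response: {response:?}");
    Ok(())
}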

View File

@@ -17,8 +17,7 @@ mod stream;
use std::pin::Pin; use std::pin::Pin;
use std::sync::Arc; use std::sync::Arc;
- use api::v1::auth_header::AuthScheme;
- use api::v1::{Basic, GreptimeRequest, RequestHeader};
+ use api::v1::GreptimeRequest;
use arrow_flight::flight_service_server::FlightService; use arrow_flight::flight_service_server::FlightService;
use arrow_flight::{ use arrow_flight::{
Action, ActionType, Criteria, Empty, FlightData, FlightDescriptor, FlightInfo, Action, ActionType, Criteria, Empty, FlightData, FlightDescriptor, FlightInfo,
@@ -27,40 +26,25 @@ use arrow_flight::{
use async_trait::async_trait; use async_trait::async_trait;
use common_grpc::flight::{FlightEncoder, FlightMessage}; use common_grpc::flight::{FlightEncoder, FlightMessage};
use common_query::Output; use common_query::Output;
use common_runtime::Runtime;
use futures::Stream; use futures::Stream;
use prost::Message; use prost::Message;
use session::context::{QueryContext, QueryContextRef}; use snafu::ResultExt;
use snafu::{OptionExt, ResultExt};
use tonic::{Request, Response, Status, Streaming}; use tonic::{Request, Response, Status, Streaming};
use crate::auth::{Identity, UserProviderRef};
use crate::error; use crate::error;
use crate::error::Error::Auth;
use crate::error::{NotFoundAuthHeaderSnafu, UnsupportedAuthSchemeSnafu};
use crate::grpc::flight::stream::FlightRecordBatchStream; use crate::grpc::flight::stream::FlightRecordBatchStream;
use crate::query_handler::grpc::ServerGrpcQueryHandlerRef; use crate::grpc::handler::GreptimeRequestHandler;
use crate::grpc::TonicResult;
type TonicResult<T> = Result<T, Status>;
type TonicStream<T> = Pin<Box<dyn Stream<Item = TonicResult<T>> + Send + Sync + 'static>>; type TonicStream<T> = Pin<Box<dyn Stream<Item = TonicResult<T>> + Send + Sync + 'static>>;
pub struct FlightHandler { pub struct FlightHandler {
- handler: ServerGrpcQueryHandlerRef,
- user_provider: Option<UserProviderRef>,
- runtime: Arc<Runtime>,
+ handler: Arc<GreptimeRequestHandler>,
} }
impl FlightHandler { impl FlightHandler {
- pub fn new(
- handler: ServerGrpcQueryHandlerRef,
- user_provider: Option<UserProviderRef>,
- runtime: Arc<Runtime>,
- ) -> Self {
- Self {
- handler,
- user_provider,
- runtime,
- }
+ pub fn new(handler: Arc<GreptimeRequestHandler>) -> Self {
+ Self { handler }
}
}
@@ -105,40 +89,8 @@ impl FlightService for FlightHandler {
let request = let request =
GreptimeRequest::decode(ticket.as_ref()).context(error::InvalidFlightTicketSnafu)?; GreptimeRequest::decode(ticket.as_ref()).context(error::InvalidFlightTicketSnafu)?;
let query = request.request.context(error::InvalidQuerySnafu { let output = self.handler.handle_request(request).await?;
reason: "Expecting non-empty GreptimeRequest.",
})?;
let query_ctx = create_query_context(request.header.as_ref());
auth(
self.user_provider.as_ref(),
request.header.as_ref(),
&query_ctx,
)
.await?;
let handler = self.handler.clone();
// Executes requests in another runtime to
// 1. prevent the execution from being cancelled unexpected by Tonic runtime;
// - Refer to our blog for the rational behind it:
// https://www.greptime.com/blogs/2023-01-12-hidden-control-flow.html
// - Obtaining a `JoinHandle` to get the panic message (if there's any).
// From its docs, `JoinHandle` is cancel safe. The task keeps running even it's handle been dropped.
// 2. avoid the handler blocks the gRPC runtime incidentally.
let handle = self
.runtime
.spawn(async move { handler.do_query(query, query_ctx).await });
let output = handle.await.map_err(|e| {
if e.is_cancelled() {
Status::cancelled(e.to_string())
} else if e.is_panic() {
Status::internal(format!("{:?}", e.into_panic()))
} else {
Status::unknown(e.to_string())
}
})??;
let stream = to_flight_data_stream(output); let stream = to_flight_data_stream(output);
Ok(Response::new(stream)) Ok(Response::new(stream))
} }
@@ -195,56 +147,3 @@ fn to_flight_data_stream(output: Output) -> TonicStream<FlightData> {
} }
} }
} }
fn create_query_context(header: Option<&RequestHeader>) -> QueryContextRef {
let ctx = QueryContext::arc();
if let Some(header) = header {
if !header.catalog.is_empty() {
ctx.set_current_catalog(&header.catalog);
}
if !header.schema.is_empty() {
ctx.set_current_schema(&header.schema);
}
};
ctx
}
async fn auth(
user_provider: Option<&UserProviderRef>,
request_header: Option<&RequestHeader>,
query_ctx: &QueryContextRef,
) -> TonicResult<()> {
let Some(user_provider) = user_provider else { return Ok(()) };
let user_info = match request_header
.context(NotFoundAuthHeaderSnafu)?
.clone()
.authorization
.context(NotFoundAuthHeaderSnafu)?
.auth_scheme
.context(NotFoundAuthHeaderSnafu)?
{
AuthScheme::Basic(Basic { username, password }) => user_provider
.authenticate(
Identity::UserId(&username, None),
crate::auth::Password::PlainText(&password),
)
.await
.map_err(|e| Auth { source: e }),
AuthScheme::Token(_) => UnsupportedAuthSchemeSnafu {
name: "Token AuthScheme",
}
.fail(),
}
.map_err(|e| Status::unauthenticated(e.to_string()))?;
user_provider
.authorize(
&query_ctx.current_catalog(),
&query_ctx.current_schema(),
&user_info,
)
.await
.map_err(|e| Status::permission_denied(e.to_string()))
}

View File

@@ -0,0 +1,137 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use api::v1::auth_header::AuthScheme;
use api::v1::{Basic, GreptimeRequest, RequestHeader};
use common_query::Output;
use common_runtime::Runtime;
use session::context::{QueryContext, QueryContextRef};
use snafu::OptionExt;
use tonic::Status;
use crate::auth::{Identity, Password, UserProviderRef};
use crate::error::Error::{Auth, UnsupportedAuthScheme};
use crate::error::{InvalidQuerySnafu, NotFoundAuthHeaderSnafu};
use crate::grpc::TonicResult;
use crate::query_handler::grpc::ServerGrpcQueryHandlerRef;
pub struct GreptimeRequestHandler {
handler: ServerGrpcQueryHandlerRef,
user_provider: Option<UserProviderRef>,
runtime: Arc<Runtime>,
}
impl GreptimeRequestHandler {
pub fn new(
handler: ServerGrpcQueryHandlerRef,
user_provider: Option<UserProviderRef>,
runtime: Arc<Runtime>,
) -> Self {
Self {
handler,
user_provider,
runtime,
}
}
pub(crate) async fn handle_request(&self, request: GreptimeRequest) -> TonicResult<Output> {
let query = request.request.context(InvalidQuerySnafu {
reason: "Expecting non-empty GreptimeRequest.",
})?;
let header = request.header.as_ref();
let query_ctx = create_query_context(header);
self.auth(header, &query_ctx).await?;
let handler = self.handler.clone();
// Executes requests in another runtime to:
// 1. prevent the execution from being cancelled unexpectedly by the Tonic runtime;
//   - Refer to our blog for the rationale behind it:
//     https://www.greptime.com/blogs/2023-01-12-hidden-control-flow.html
//   - Obtaining a `JoinHandle` also lets us retrieve the panic message (if there is any).
//     Per its docs, `JoinHandle` is cancel safe: the task keeps running even if its handle is dropped.
// 2. avoid having the handler block the gRPC runtime accidentally.
let handle = self
.runtime
.spawn(async move { handler.do_query(query, query_ctx).await });
let output = handle.await.map_err(|e| {
if e.is_cancelled() {
Status::cancelled(e.to_string())
} else if e.is_panic() {
Status::internal(format!("{:?}", e.into_panic()))
} else {
Status::unknown(e.to_string())
}
})??;
Ok(output)
}
async fn auth(
&self,
header: Option<&RequestHeader>,
query_ctx: &QueryContextRef,
) -> TonicResult<()> {
let Some(user_provider) = self.user_provider.as_ref() else { return Ok(()) };
let auth_scheme = header
.and_then(|header| {
header
.authorization
.as_ref()
.and_then(|x| x.auth_scheme.clone())
})
.context(NotFoundAuthHeaderSnafu)?;
let user_info = match auth_scheme {
AuthScheme::Basic(Basic { username, password }) => user_provider
.authenticate(
Identity::UserId(&username, None),
Password::PlainText(&password),
)
.await
.map_err(|e| Auth { source: e }),
AuthScheme::Token(_) => Err(UnsupportedAuthScheme {
name: "Token AuthScheme".to_string(),
}),
}
.map_err(|e| Status::unauthenticated(e.to_string()))?;
user_provider
.authorize(
&query_ctx.current_catalog(),
&query_ctx.current_schema(),
&user_info,
)
.await
.map_err(|e| Status::permission_denied(e.to_string()))
}
}
fn create_query_context(header: Option<&RequestHeader>) -> QueryContextRef {
let ctx = QueryContext::arc();
if let Some(header) = header {
if !header.catalog.is_empty() {
ctx.set_current_catalog(&header.catalog);
}
if !header.schema.is_empty() {
ctx.set_current_schema(&header.schema);
}
};
ctx
}

View File

@@ -19,6 +19,7 @@ pub mod opentsdb;
pub mod prometheus; pub mod prometheus;
pub mod script; pub mod script;
mod admin;
#[cfg(feature = "mem-prof")] #[cfg(feature = "mem-prof")]
pub mod mem_prof; pub mod mem_prof;
@@ -56,6 +57,8 @@ use self::authorize::HttpAuth;
use self::influxdb::{influxdb_health, influxdb_ping, influxdb_write}; use self::influxdb::{influxdb_health, influxdb_ping, influxdb_write};
use crate::auth::UserProviderRef; use crate::auth::UserProviderRef;
use crate::error::{AlreadyStartedSnafu, Result, StartHttpSnafu}; use crate::error::{AlreadyStartedSnafu, Result, StartHttpSnafu};
use crate::http::admin::flush;
use crate::query_handler::grpc::ServerGrpcQueryHandlerRef;
use crate::query_handler::sql::ServerSqlQueryHandlerRef; use crate::query_handler::sql::ServerSqlQueryHandlerRef;
use crate::query_handler::{ use crate::query_handler::{
InfluxdbLineProtocolHandlerRef, OpentsdbProtocolHandlerRef, PrometheusProtocolHandlerRef, InfluxdbLineProtocolHandlerRef, OpentsdbProtocolHandlerRef, PrometheusProtocolHandlerRef,
@@ -96,6 +99,7 @@ pub static PUBLIC_APIS: [&str; 2] = ["/v1/influxdb/ping", "/v1/influxdb/health"]
pub struct HttpServer { pub struct HttpServer {
sql_handler: ServerSqlQueryHandlerRef, sql_handler: ServerSqlQueryHandlerRef,
grpc_handler: ServerGrpcQueryHandlerRef,
options: HttpOptions, options: HttpOptions,
influxdb_handler: Option<InfluxdbLineProtocolHandlerRef>, influxdb_handler: Option<InfluxdbLineProtocolHandlerRef>,
opentsdb_handler: Option<OpentsdbProtocolHandlerRef>, opentsdb_handler: Option<OpentsdbProtocolHandlerRef>,
@@ -349,9 +353,14 @@ pub struct ApiState {
} }
impl HttpServer { impl HttpServer {
- pub fn new(sql_handler: ServerSqlQueryHandlerRef, options: HttpOptions) -> Self {
+ pub fn new(
+ sql_handler: ServerSqlQueryHandlerRef,
+ grpc_handler: ServerGrpcQueryHandlerRef,
+ options: HttpOptions,
+ ) -> Self {
Self {
sql_handler,
+ grpc_handler,
options,
opentsdb_handler: None,
influxdb_handler: None,
@@ -426,6 +435,10 @@ impl HttpServer {
.layer(Extension(api)); .layer(Extension(api));
let mut router = Router::new().nest(&format!("/{HTTP_API_VERSION}"), sql_router); let mut router = Router::new().nest(&format!("/{HTTP_API_VERSION}"), sql_router);
router = router.nest(
&format!("/{HTTP_API_VERSION}/admin"),
self.route_admin(self.grpc_handler.clone()),
);
if let Some(opentsdb_handler) = self.opentsdb_handler.clone() { if let Some(opentsdb_handler) = self.opentsdb_handler.clone() {
router = router.nest( router = router.nest(
@@ -517,6 +530,12 @@ impl HttpServer {
.route("/api/put", routing::post(opentsdb::put)) .route("/api/put", routing::post(opentsdb::put))
.with_state(opentsdb_handler) .with_state(opentsdb_handler)
} }
fn route_admin<S>(&self, grpc_handler: ServerGrpcQueryHandlerRef) -> Router<S> {
Router::new()
.route("/flush", routing::post(flush))
.with_state(grpc_handler)
}
} }
pub const HTTP_SERVER: &str = "HTTP_SERVER"; pub const HTTP_SERVER: &str = "HTTP_SERVER";
@@ -578,6 +597,7 @@ mod test {
use std::future::pending; use std::future::pending;
use std::sync::Arc; use std::sync::Arc;
use api::v1::greptime_request::Request;
use axum::handler::Handler; use axum::handler::Handler;
use axum::http::StatusCode; use axum::http::StatusCode;
use axum::routing::get; use axum::routing::get;
@@ -592,12 +612,26 @@ mod test {
use super::*; use super::*;
use crate::error::Error; use crate::error::Error;
use crate::query_handler::grpc::{GrpcQueryHandler, ServerGrpcQueryHandlerAdaptor};
use crate::query_handler::sql::{ServerSqlQueryHandlerAdaptor, SqlQueryHandler}; use crate::query_handler::sql::{ServerSqlQueryHandlerAdaptor, SqlQueryHandler};
struct DummyInstance { struct DummyInstance {
_tx: mpsc::Sender<(String, Vec<u8>)>, _tx: mpsc::Sender<(String, Vec<u8>)>,
} }
#[async_trait]
impl GrpcQueryHandler for DummyInstance {
type Error = Error;
async fn do_query(
&self,
_query: Request,
_ctx: QueryContextRef,
) -> std::result::Result<Output, Self::Error> {
unimplemented!()
}
}
#[async_trait] #[async_trait]
impl SqlQueryHandler for DummyInstance { impl SqlQueryHandler for DummyInstance {
type Error = Error; type Error = Error;
@@ -637,8 +671,10 @@ mod test {
fn make_test_app(tx: mpsc::Sender<(String, Vec<u8>)>) -> Router { fn make_test_app(tx: mpsc::Sender<(String, Vec<u8>)>) -> Router {
let instance = Arc::new(DummyInstance { _tx: tx }); let instance = Arc::new(DummyInstance { _tx: tx });
- let instance = ServerSqlQueryHandlerAdaptor::arc(instance);
- let server = HttpServer::new(instance, HttpOptions::default());
+ let sql_instance = ServerSqlQueryHandlerAdaptor::arc(instance.clone());
+ let grpc_instance = ServerGrpcQueryHandlerAdaptor::arc(instance);
+ let server = HttpServer::new(sql_instance, grpc_instance, HttpOptions::default());
server.make_app().route(
"/test/timeout",
get(forever.layer(

View File

@@ -0,0 +1,69 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use api::v1::ddl_request::Expr;
use api::v1::greptime_request::Request;
use api::v1::{DdlRequest, FlushTableExpr};
use axum::extract::{Query, RawBody, State};
use axum::http::StatusCode as HttpStatusCode;
use axum::Json;
use session::context::QueryContext;
use snafu::OptionExt;
use crate::error;
use crate::error::Result;
use crate::query_handler::grpc::ServerGrpcQueryHandlerRef;
#[axum_macros::debug_handler]
pub async fn flush(
State(grpc_handler): State<ServerGrpcQueryHandlerRef>,
Query(params): Query<HashMap<String, String>>,
RawBody(_): RawBody,
) -> Result<(HttpStatusCode, Json<String>)> {
let catalog_name = params
.get("catalog_name")
.cloned()
.unwrap_or("greptime".to_string());
let schema_name =
params
.get("schema_name")
.cloned()
.context(error::InvalidFlushArgumentSnafu {
err_msg: "schema_name is not present",
})?;
// If the table name is not present, flush all tables in the schema.
let table_name = params.get("table_name").cloned().unwrap_or_default();
let region_id: Option<u32> = params
.get("region")
.map(|v| v.parse())
.transpose()
.ok()
.flatten();
let request = Request::Ddl(DdlRequest {
expr: Some(Expr::FlushTable(FlushTableExpr {
catalog_name: catalog_name.clone(),
schema_name: schema_name.clone(),
table_name: table_name.clone(),
region_id,
})),
});
grpc_handler.do_query(request, QueryContext::arc()).await?;
Ok((HttpStatusCode::OK, Json::from("done".to_string())))
}
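The router change earlier mounts this handler under `/{HTTP_API_VERSION}/admin/flush`, so with the default API version a request looks roughly like `POST /v1/admin/flush?schema_name=public&table_name=demo&region=0`. `catalog_name` defaults to `greptime`, omitting `table_name` flushes every table in the schema, and a `region` value that fails to parse is silently treated as absent. That last point is easy to misread in the `.transpose().ok().flatten()` chain, so here is a small self-contained sketch of just that behaviour:

// Same parsing chain as the handler above, isolated: missing and unparsable
// values both become `None` rather than an error.
fn parse_region(raw: Option<&str>) -> Option<u32> {
    raw.map(|v| v.parse::<u32>()).transpose().ok().flatten()
}

fn main() {
    assert_eq!(parse_region(Some("0")), Some(0));
    assert_eq!(parse_region(None), None);
    assert_eq!(parse_region(Some("not-a-number")), None);
}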

View File

@@ -43,6 +43,7 @@ pub async fn sql(
Form(form_params): Form<SqlQuery>, Form(form_params): Form<SqlQuery>,
) -> Json<JsonResponse> { ) -> Json<JsonResponse> {
let sql_handler = &state.sql_handler; let sql_handler = &state.sql_handler;
let start = Instant::now(); let start = Instant::now();
let sql = query_params.sql.or(form_params.sql); let sql = query_params.sql.or(form_params.sql);
let db = query_params.db.or(form_params.db); let db = query_params.db.or(form_params.db);

View File

@@ -24,6 +24,7 @@ use common_runtime::{Builder as RuntimeBuilder, Runtime};
use servers::auth::UserProviderRef; use servers::auth::UserProviderRef;
use servers::error::{Result, StartGrpcSnafu, TcpBindSnafu}; use servers::error::{Result, StartGrpcSnafu, TcpBindSnafu};
use servers::grpc::flight::FlightHandler; use servers::grpc::flight::FlightHandler;
use servers::grpc::handler::GreptimeRequestHandler;
use servers::query_handler::grpc::ServerGrpcQueryHandlerRef; use servers::query_handler::grpc::ServerGrpcQueryHandlerRef;
use servers::server::Server; use servers::server::Server;
use snafu::ResultExt; use snafu::ResultExt;
@@ -54,11 +55,11 @@ impl MockGrpcServer {
} }
fn create_service(&self) -> FlightServiceServer<impl FlightService> { fn create_service(&self) -> FlightServiceServer<impl FlightService> {
- let service = FlightHandler::new(
+ let service = FlightHandler::new(Arc::new(GreptimeRequestHandler::new(
self.query_handler.clone(),
self.user_provider.clone(),
self.runtime.clone(),
- );
+ )));
FlightServiceServer::new(service)
} }
} }

View File

@@ -17,11 +17,12 @@ use axum_test_helper::TestClient;
use servers::http::{HttpOptions, HttpServer}; use servers::http::{HttpOptions, HttpServer};
use table::test_util::MemTable; use table::test_util::MemTable;
- use crate::create_testing_sql_query_handler;
+ use crate::{create_testing_grpc_query_handler, create_testing_sql_query_handler};
fn make_test_app() -> Router {
let server = HttpServer::new(
create_testing_sql_query_handler(MemTable::default_numbers_table()),
+ create_testing_grpc_query_handler(MemTable::default_numbers_table()),
HttpOptions::default(),
);
server.make_app()

View File

@@ -14,6 +14,7 @@
use std::sync::Arc; use std::sync::Arc;
use api::v1::greptime_request::Request;
use api::v1::InsertRequest; use api::v1::InsertRequest;
use async_trait::async_trait; use async_trait::async_trait;
use axum::{http, Router}; use axum::{http, Router};
@@ -24,6 +25,7 @@ use query::parser::PromQuery;
use servers::error::{Error, Result}; use servers::error::{Error, Result};
use servers::http::{HttpOptions, HttpServer}; use servers::http::{HttpOptions, HttpServer};
use servers::influxdb::InfluxdbRequest; use servers::influxdb::InfluxdbRequest;
use servers::query_handler::grpc::GrpcQueryHandler;
use servers::query_handler::sql::SqlQueryHandler; use servers::query_handler::sql::SqlQueryHandler;
use servers::query_handler::InfluxdbLineProtocolHandler; use servers::query_handler::InfluxdbLineProtocolHandler;
use session::context::QueryContextRef; use session::context::QueryContextRef;
@@ -35,6 +37,19 @@ struct DummyInstance {
tx: Arc<mpsc::Sender<(String, String)>>, tx: Arc<mpsc::Sender<(String, String)>>,
} }
#[async_trait]
impl GrpcQueryHandler for DummyInstance {
type Error = Error;
async fn do_query(
&self,
_query: Request,
_ctx: QueryContextRef,
) -> std::result::Result<Output, Self::Error> {
unimplemented!()
}
}
#[async_trait] #[async_trait]
impl InfluxdbLineProtocolHandler for DummyInstance { impl InfluxdbLineProtocolHandler for DummyInstance {
async fn exec(&self, request: &InfluxdbRequest, ctx: QueryContextRef) -> Result<()> { async fn exec(&self, request: &InfluxdbRequest, ctx: QueryContextRef) -> Result<()> {
@@ -79,7 +94,7 @@ impl SqlQueryHandler for DummyInstance {
fn make_test_app(tx: Arc<mpsc::Sender<(String, String)>>, db_name: Option<&str>) -> Router { fn make_test_app(tx: Arc<mpsc::Sender<(String, String)>>, db_name: Option<&str>) -> Router {
let instance = Arc::new(DummyInstance { tx }); let instance = Arc::new(DummyInstance { tx });
- let mut server = HttpServer::new(instance.clone(), HttpOptions::default());
+ let mut server = HttpServer::new(instance.clone(), instance.clone(), HttpOptions::default());
let mut user_provider = MockUserProvider::default(); let mut user_provider = MockUserProvider::default();
if let Some(name) = db_name { if let Some(name) = db_name {
user_provider.set_authorization_info(DatabaseAuthInfo { user_provider.set_authorization_info(DatabaseAuthInfo {

View File

@@ -14,6 +14,7 @@
use std::sync::Arc; use std::sync::Arc;
use api::v1::greptime_request::Request;
use async_trait::async_trait; use async_trait::async_trait;
use axum::Router; use axum::Router;
use axum_test_helper::TestClient; use axum_test_helper::TestClient;
@@ -23,6 +24,7 @@ use query::parser::PromQuery;
use servers::error::{self, Result}; use servers::error::{self, Result};
use servers::http::{HttpOptions, HttpServer}; use servers::http::{HttpOptions, HttpServer};
use servers::opentsdb::codec::DataPoint; use servers::opentsdb::codec::DataPoint;
use servers::query_handler::grpc::GrpcQueryHandler;
use servers::query_handler::sql::SqlQueryHandler; use servers::query_handler::sql::SqlQueryHandler;
use servers::query_handler::OpentsdbProtocolHandler; use servers::query_handler::OpentsdbProtocolHandler;
use session::context::QueryContextRef; use session::context::QueryContextRef;
@@ -32,6 +34,19 @@ struct DummyInstance {
tx: mpsc::Sender<String>, tx: mpsc::Sender<String>,
} }
#[async_trait]
impl GrpcQueryHandler for DummyInstance {
type Error = crate::Error;
async fn do_query(
&self,
_query: Request,
_ctx: QueryContextRef,
) -> std::result::Result<Output, Self::Error> {
unimplemented!()
}
}
#[async_trait] #[async_trait]
impl OpentsdbProtocolHandler for DummyInstance { impl OpentsdbProtocolHandler for DummyInstance {
async fn exec(&self, data_point: &DataPoint, _ctx: QueryContextRef) -> Result<()> { async fn exec(&self, data_point: &DataPoint, _ctx: QueryContextRef) -> Result<()> {
@@ -77,7 +92,7 @@ impl SqlQueryHandler for DummyInstance {
fn make_test_app(tx: mpsc::Sender<String>) -> Router { fn make_test_app(tx: mpsc::Sender<String>) -> Router {
let instance = Arc::new(DummyInstance { tx }); let instance = Arc::new(DummyInstance { tx });
- let mut server = HttpServer::new(instance.clone(), HttpOptions::default());
+ let mut server = HttpServer::new(instance.clone(), instance.clone(), HttpOptions::default());
server.set_opentsdb_handler(instance); server.set_opentsdb_handler(instance);
server.make_app() server.make_app()
} }

View File

@@ -17,6 +17,7 @@ use std::sync::Arc;
use api::prometheus::remote::{ use api::prometheus::remote::{
LabelMatcher, Query, QueryResult, ReadRequest, ReadResponse, WriteRequest, LabelMatcher, Query, QueryResult, ReadRequest, ReadResponse, WriteRequest,
}; };
use api::v1::greptime_request::Request;
use async_trait::async_trait; use async_trait::async_trait;
use axum::Router; use axum::Router;
use axum_test_helper::TestClient; use axum_test_helper::TestClient;
@@ -28,6 +29,7 @@ use servers::error::{Error, Result};
use servers::http::{HttpOptions, HttpServer}; use servers::http::{HttpOptions, HttpServer};
use servers::prometheus; use servers::prometheus;
use servers::prometheus::{snappy_compress, Metrics}; use servers::prometheus::{snappy_compress, Metrics};
use servers::query_handler::grpc::GrpcQueryHandler;
use servers::query_handler::sql::SqlQueryHandler; use servers::query_handler::sql::SqlQueryHandler;
use servers::query_handler::{PrometheusProtocolHandler, PrometheusResponse}; use servers::query_handler::{PrometheusProtocolHandler, PrometheusResponse};
use session::context::QueryContextRef; use session::context::QueryContextRef;
@@ -37,6 +39,19 @@ struct DummyInstance {
tx: mpsc::Sender<(String, Vec<u8>)>, tx: mpsc::Sender<(String, Vec<u8>)>,
} }
#[async_trait]
impl GrpcQueryHandler for DummyInstance {
type Error = Error;
async fn do_query(
&self,
_query: Request,
_ctx: QueryContextRef,
) -> std::result::Result<Output, Self::Error> {
unimplemented!()
}
}
#[async_trait] #[async_trait]
impl PrometheusProtocolHandler for DummyInstance { impl PrometheusProtocolHandler for DummyInstance {
async fn write(&self, request: WriteRequest, ctx: QueryContextRef) -> Result<()> { async fn write(&self, request: WriteRequest, ctx: QueryContextRef) -> Result<()> {
@@ -102,7 +117,7 @@ impl SqlQueryHandler for DummyInstance {
fn make_test_app(tx: mpsc::Sender<(String, Vec<u8>)>) -> Router { fn make_test_app(tx: mpsc::Sender<(String, Vec<u8>)>) -> Router {
let instance = Arc::new(DummyInstance { tx }); let instance = Arc::new(DummyInstance { tx });
- let mut server = HttpServer::new(instance.clone(), HttpOptions::default());
+ let mut server = HttpServer::new(instance.clone(), instance.clone(), HttpOptions::default());
server.set_prom_handler(instance); server.set_prom_handler(instance);
server.make_app() server.make_app()
} }

View File

@@ -47,7 +47,7 @@ mod py_script;
const LOCALHOST_WITH_0: &str = "127.0.0.1:0"; const LOCALHOST_WITH_0: &str = "127.0.0.1:0";
- struct DummyInstance {
+ pub struct DummyInstance {
query_engine: QueryEngineRef, query_engine: QueryEngineRef,
py_engine: Arc<PyEngine>, py_engine: Arc<PyEngine>,
scripts: RwLock<HashMap<String, Arc<PyScript>>>, scripts: RwLock<HashMap<String, Arc<PyScript>>>,

View File

@@ -135,6 +135,10 @@ impl<S: LogStore> Region for RegionImpl<S> {
.map(|level_ssts| level_ssts.files().map(|sst| sst.file_size()).sum::<u64>()) .map(|level_ssts| level_ssts.files().map(|sst| sst.file_size()).sum::<u64>())
.sum() .sum()
} }
async fn flush(&self) -> Result<()> {
self.inner.flush().await
}
} }
/// Storage related config for region. /// Storage related config for region.
@@ -560,4 +564,18 @@ impl<S: LogStore> RegionInner<S> {
async fn close(&self) -> Result<()> { async fn close(&self) -> Result<()> {
self.writer.close().await self.writer.close().await
} }
async fn flush(&self) -> Result<()> {
let writer_ctx = WriterContext {
shared: &self.shared,
flush_strategy: &self.flush_strategy,
flush_scheduler: &self.flush_scheduler,
compaction_scheduler: &self.compaction_scheduler,
sst_layer: &self.sst_layer,
wal: &self.wal,
writer: &self.writer,
manifest: &self.manifest,
};
self.writer.flush(writer_ctx).await
}
} }

View File

@@ -18,7 +18,7 @@ use std::sync::Arc;
use common_test_util::temp_dir::create_temp_dir; use common_test_util::temp_dir::create_temp_dir;
use log_store::raft_engine::log_store::RaftEngineLogStore; use log_store::raft_engine::log_store::RaftEngineLogStore;
- use store_api::storage::{OpenOptions, WriteResponse};
+ use store_api::storage::{OpenOptions, Region, WriteResponse};
use crate::engine; use crate::engine;
use crate::flush::FlushStrategyRef; use crate::flush::FlushStrategyRef;
@@ -94,6 +94,10 @@ impl FlushTester {
async fn wait_flush_done(&self) { async fn wait_flush_done(&self) {
self.base().region.wait_flush_done().await.unwrap(); self.base().region.wait_flush_done().await.unwrap();
} }
async fn flush(&self) {
self.base().region.flush().await.unwrap();
}
} }
#[tokio::test] #[tokio::test]
@@ -124,6 +128,30 @@ async fn test_flush_and_stall() {
assert!(has_parquet_file(&sst_dir)); assert!(has_parquet_file(&sst_dir));
} }
#[tokio::test]
async fn test_manual_flush() {
common_telemetry::init_default_ut_logging();
let dir = create_temp_dir("manual_flush");
let store_dir = dir.path().to_str().unwrap();
let flush_switch = Arc::new(FlushSwitch::default());
let tester = FlushTester::new(store_dir, flush_switch.clone()).await;
let data = [(1000, Some(100))];
// Put one element so we have content to flush.
tester.put(&data).await;
// No parquet file should be flushed.
let sst_dir = format!("{}/{}", store_dir, engine::region_sst_dir("", REGION_NAME));
assert!(!has_parquet_file(&sst_dir));
tester.flush().await;
tester.wait_flush_done().await;
assert!(has_parquet_file(&sst_dir));
}
#[tokio::test] #[tokio::test]
async fn test_flush_empty() { async fn test_flush_empty() {
let dir = create_temp_dir("flush-empty"); let dir = create_temp_dir("flush-empty");

View File

@@ -260,6 +260,22 @@ impl RegionWriter {
Ok(()) Ok(())
} }
/// Manually trigger a flush and wait for it to complete.
pub async fn flush<S: LogStore>(&self, writer_ctx: WriterContext<'_, S>) -> Result<()> {
let mut inner = self.inner.lock().await;
ensure!(!inner.is_closed(), error::ClosedRegionSnafu);
inner.manual_flush(writer_ctx).await?;
// Wait for the flush task to finish.
if let Some(handle) = inner.flush_handle.take() {
handle.join().await?;
}
Ok(())
}
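The manual flush path deliberately blocks until the background flush job completes: it triggers the flush, then takes and joins the flush handle. The shape of that pattern, with a plain tokio task standing in for the real flush job handle (illustrative only):

// Illustrative only: a tokio JoinHandle stands in for the storage engine's flush
// job handle. Manual flush first schedules the job, then waits on it, so callers
// (the HTTP admin API, Table::flush) return only after the flush has completed.
use tokio::task::JoinHandle;

async fn manual_flush() {
    // "trigger_flush": schedule the background job.
    let handle: JoinHandle<()> = tokio::spawn(async {
        // ... write memtable contents out as SST files ...
    });
    // "wait flush": join the handle so completion (or a panic) is observed here.
    handle.await.expect("flush job failed");
}

#[tokio::main]
async fn main() {
    manual_flush().await;
}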
/// Cancel flush task if any /// Cancel flush task if any
async fn cancel_flush(&self) -> Result<()> { async fn cancel_flush(&self) -> Result<()> {
let mut inner = self.inner.lock().await; let mut inner = self.inner.lock().await;
@@ -375,6 +391,7 @@ impl WriterInner {
let next_sequence = committed_sequence + 1; let next_sequence = committed_sequence + 1;
let version = version_control.current(); let version = version_control.current();
let wal_header = WalHeader::with_last_manifest_version(version.manifest_version()); let wal_header = WalHeader::with_last_manifest_version(version.manifest_version());
writer_ctx writer_ctx
.wal .wal
@@ -680,6 +697,11 @@ impl WriterInner {
Some(schedule_compaction_cb) Some(schedule_compaction_cb)
} }
async fn manual_flush<S: LogStore>(&mut self, writer_ctx: WriterContext<'_, S>) -> Result<()> {
self.trigger_flush(&writer_ctx).await?;
Ok(())
}
#[inline] #[inline]
fn is_closed(&self) -> bool { fn is_closed(&self) -> bool {
self.closed self.closed

View File

@@ -43,7 +43,7 @@ use parquet::basic::{Compression, Encoding};
use parquet::file::metadata::KeyValue; use parquet::file::metadata::KeyValue;
use parquet::file::properties::WriterProperties; use parquet::file::properties::WriterProperties;
use parquet::format::FileMetaData; use parquet::format::FileMetaData;
- use parquet::schema::types::SchemaDescriptor;
+ use parquet::schema::types::{ColumnPath, SchemaDescriptor};
use snafu::{OptionExt, ResultExt}; use snafu::{OptionExt, ResultExt};
use table::predicate::Predicate; use table::predicate::Predicate;
use tokio::io::BufReader; use tokio::io::BufReader;
@@ -71,7 +71,7 @@ impl<'a> ParquetWriter<'a> {
file_path, file_path,
source, source,
object_store, object_store,
- max_row_group_size: 4096, // TODO(hl): make this configurable
+ max_row_group_size: 64 * 1024, // TODO(hl): make this configurable
} }
} }
@@ -88,9 +88,25 @@ impl<'a> ParquetWriter<'a> {
let schema = store_schema.arrow_schema().clone(); let schema = store_schema.arrow_schema().clone();
let object = self.object_store.object(self.file_path); let object = self.object_store.object(self.file_path);
let ts_col_name = store_schema
.schema()
.timestamp_column()
.unwrap()
.name
.clone();
let writer_props = WriterProperties::builder() let writer_props = WriterProperties::builder()
.set_compression(Compression::ZSTD) .set_compression(Compression::ZSTD)
- .set_encoding(Encoding::PLAIN)
+ .set_column_dictionary_enabled(ColumnPath::new(vec![ts_col_name.clone()]), false)
.set_column_encoding(
ColumnPath::new(vec![ts_col_name]),
Encoding::DELTA_BINARY_PACKED,
)
.set_column_dictionary_enabled(ColumnPath::new(vec!["__sequence".to_string()]), false)
.set_column_encoding(
ColumnPath::new(vec!["__sequence".to_string()]),
Encoding::DELTA_BINARY_PACKED,
)
.set_max_row_group_size(self.max_row_group_size) .set_max_row_group_size(self.max_row_group_size)
.set_key_value_metadata(extra_meta.map(|map| { .set_key_value_metadata(extra_meta.map(|map| {
map.iter() map.iter()

View File

@@ -24,7 +24,7 @@
use std::sync::atomic::{AtomicU64, Ordering}; use std::sync::atomic::{AtomicU64, Ordering};
use std::sync::Arc; use std::sync::Arc;
- use common_telemetry::info;
+ use common_telemetry::logging;
use store_api::manifest::ManifestVersion; use store_api::manifest::ManifestVersion;
use store_api::storage::{SchemaRef, SequenceNumber}; use store_api::storage::{SchemaRef, SequenceNumber};
@@ -248,7 +248,7 @@ impl Version {
.ssts .ssts
.merge(handles_to_add, edit.files_to_remove.into_iter()); .merge(handles_to_add, edit.files_to_remove.into_iter());
- info!(
+ logging::debug!(
"After apply edit, region: {}, SST files: {:?}", "After apply edit, region: {}, SST files: {:?}",
self.metadata.id(), self.metadata.id(),
merged_ssts merged_ssts

View File

@@ -106,6 +106,9 @@ impl<S: LogStore> Wal<S> {
mut header: WalHeader, mut header: WalHeader,
payload: Option<&Payload>, payload: Option<&Payload>,
) -> Result<Id> { ) -> Result<Id> {
if !cfg!(test) && (self.region_id >> 32) >= 1024 {
return Ok(seq);
}
if let Some(p) = payload { if let Some(p) = payload {
header.mutation_types = wal::gen_mutation_types(p); header.mutation_types = wal::gen_mutation_types(p);
} }
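The upper 32 bits of a region id are the table id (see the `region_id` helper in the table engine hunk further down), so this guard drops WAL writes entirely for any region belonging to a table with id 1024 or greater, except when running under tests. A small sketch of the same check in isolation:

// Isolated form of the guard above: recover the table id from the region id and
// skip the write-ahead log for table ids >= 1024 (outside of tests).
fn skip_wal(region_id: u64) -> bool {
    let table_id = (region_id >> 32) as u32;
    !cfg!(test) && table_id >= 1024
}

fn main() {
    let region_id = 1024u64 << 32; // table id 1024, region number 0
    assert!(skip_wal(region_id) || cfg!(test));
}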

View File

@@ -76,6 +76,8 @@ pub trait Region: Send + Sync + Clone + std::fmt::Debug + 'static {
async fn close(&self) -> Result<(), Self::Error>; async fn close(&self) -> Result<(), Self::Error>;
fn disk_usage_bytes(&self) -> u64; fn disk_usage_bytes(&self) -> u64;
async fn flush(&self) -> Result<(), Self::Error>;
} }
/// Context for write operations. /// Context for write operations.

View File

@@ -19,6 +19,7 @@ common-time = { path = "../common/time" }
datafusion.workspace = true datafusion.workspace = true
datafusion-common.workspace = true datafusion-common.workspace = true
datafusion-expr.workspace = true datafusion-expr.workspace = true
datafusion-physical-expr.workspace = true
datatypes = { path = "../datatypes" } datatypes = { path = "../datatypes" }
derive_builder = "0.11" derive_builder = "0.11"
futures.workspace = true futures.workspace = true

View File

@@ -16,8 +16,10 @@ use std::fmt::{self, Display};
use std::sync::Arc; use std::sync::Arc;
use common_procedure::BoxedProcedure; use common_procedure::BoxedProcedure;
use store_api::storage::RegionId;
use crate::error::Result; use crate::error::Result;
use crate::metadata::TableId;
use crate::requests::{AlterTableRequest, CreateTableRequest, DropTableRequest, OpenTableRequest}; use crate::requests::{AlterTableRequest, CreateTableRequest, DropTableRequest, OpenTableRequest};
use crate::TableRef; use crate::TableRef;
@@ -123,6 +125,22 @@ pub trait TableEngineProcedure: Send + Sync {
pub type TableEngineProcedureRef = Arc<dyn TableEngineProcedure>; pub type TableEngineProcedureRef = Arc<dyn TableEngineProcedure>;
/// Generate region name in the form of "{TABLE_ID}_{REGION_NUMBER}"
#[inline]
pub fn region_name(table_id: TableId, n: u32) -> String {
format!("{table_id}_{n:010}")
}
#[inline]
pub fn region_id(table_id: TableId, n: u32) -> RegionId {
(u64::from(table_id) << 32) | u64::from(n)
}
#[inline]
pub fn table_dir(catalog_name: &str, schema_name: &str, table_id: TableId) -> String {
format!("{catalog_name}/{schema_name}/{table_id}/")
}
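Worked examples of the three helpers above, with the values following directly from the format strings and the bit layout:

fn main() {
    let table_id: u32 = 42;
    let region_number: u32 = 1;

    // region_name: "{TABLE_ID}_{REGION_NUMBER}", region number zero-padded to 10 digits.
    assert_eq!(format!("{table_id}_{region_number:010}"), "42_0000000001");

    // region_id: table id in the upper 32 bits, region number in the lower 32 bits.
    let region_id = (u64::from(table_id) << 32) | u64::from(region_number);
    assert_eq!(region_id, 0x0000_002A_0000_0001);

    // table_dir: "{catalog}/{schema}/{table_id}/".
    assert_eq!(format!("greptime/public/{table_id}/"), "greptime/public/42/");
}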
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use super::*; use super::*;

View File

@@ -17,6 +17,7 @@ use std::sync::Arc;
use chrono::{DateTime, Utc}; use chrono::{DateTime, Utc};
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME}; use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use datafusion_expr::TableProviderFilterPushDown;
pub use datatypes::error::{Error as ConvertError, Result as ConvertResult}; pub use datatypes::error::{Error as ConvertError, Result as ConvertResult};
use datatypes::schema::{ColumnSchema, RawSchema, Schema, SchemaBuilder, SchemaRef}; use datatypes::schema::{ColumnSchema, RawSchema, Schema, SchemaBuilder, SchemaRef};
use derive_builder::Builder; use derive_builder::Builder;
@@ -47,6 +48,26 @@ pub enum FilterPushDownType {
Exact, Exact,
} }
impl From<TableProviderFilterPushDown> for FilterPushDownType {
fn from(value: TableProviderFilterPushDown) -> Self {
match value {
TableProviderFilterPushDown::Unsupported => FilterPushDownType::Unsupported,
TableProviderFilterPushDown::Inexact => FilterPushDownType::Inexact,
TableProviderFilterPushDown::Exact => FilterPushDownType::Exact,
}
}
}
impl From<FilterPushDownType> for TableProviderFilterPushDown {
fn from(value: FilterPushDownType) -> Self {
match value {
FilterPushDownType::Unsupported => TableProviderFilterPushDown::Unsupported,
FilterPushDownType::Inexact => TableProviderFilterPushDown::Inexact,
FilterPushDownType::Exact => TableProviderFilterPushDown::Exact,
}
}
}
/// Indicates the type of this table for metadata/catalog purposes. /// Indicates the type of this table for metadata/catalog purposes.
#[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq)] #[derive(Serialize, Deserialize, Debug, Clone, Copy, PartialEq, Eq)]
pub enum TableType { pub enum TableType {

View File

@@ -18,7 +18,10 @@ use common_time::range::TimestampRange;
 use common_time::Timestamp;
 use datafusion::parquet::file::metadata::RowGroupMetaData;
 use datafusion::physical_optimizer::pruning::PruningPredicate;
+use datafusion_common::ToDFSchema;
 use datafusion_expr::{Between, BinaryExpr, Operator};
+use datafusion_physical_expr::create_physical_expr;
+use datafusion_physical_expr::execution_props::ExecutionProps;
 use datatypes::schema::SchemaRef;
 use datatypes::value::scalar_value_to_timestamp;
@@ -46,8 +49,26 @@ impl Predicate {
         row_groups: &[RowGroupMetaData],
     ) -> Vec<bool> {
         let mut res = vec![true; row_groups.len()];
+        let arrow_schema = (*schema.arrow_schema()).clone();
+        let df_schema = arrow_schema.clone().to_dfschema_ref();
+        let df_schema = match df_schema {
+            Ok(x) => x,
+            Err(e) => {
+                warn!("Failed to create Datafusion schema when trying to prune row groups, error: {e}");
+                return res;
+            }
+        };
+        let execution_props = &ExecutionProps::new();
+
         for expr in &self.exprs {
-            match PruningPredicate::try_new(expr.df_expr().clone(), schema.arrow_schema().clone()) {
+            match create_physical_expr(
+                expr.df_expr(),
+                df_schema.as_ref(),
+                arrow_schema.as_ref(),
+                execution_props,
+            )
+            .and_then(|expr| PruningPredicate::try_new(expr, arrow_schema.clone()))
+            {
                 Ok(p) => {
                     let stat = RowGroupPruningStatistics::new(row_groups, &schema);
                     match p.prune(&stat) {
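One property of the rewritten pruning path worth calling out: every fallible step, from the schema conversion through create_physical_expr and PruningPredicate::try_new, falls back to keeping the row group, so a predicate that cannot be evaluated never filters data out. A condensed hedged sketch of that keep-on-failure pattern (apply_prune_result is an illustrative helper, not the crate's API):

// Illustrative: fold one predicate's pruning verdict into the running result;
// on error the result is left untouched and all row groups stay selected.
fn apply_prune_result(res: &mut [bool], pruned: Result<Vec<bool>, String>) {
    if let Ok(pruned) = pruned {
        for (keep, p) in res.iter_mut().zip(pruned) {
            *keep = *keep && p;
        }
    }
}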

View File

@@ -209,6 +209,14 @@ pub struct CopyTableFromRequest {
     pub from: String,
 }

+#[derive(Debug, Clone, Default)]
+pub struct FlushTableRequest {
+    pub catalog_name: String,
+    pub schema_name: String,
+    pub table_name: Option<String>,
+    pub region_number: Option<RegionNumber>,
+}
+
 #[cfg(test)]
 mod tests {
     use super::*;
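Because FlushTableRequest derives Default, call sites that only need a few fields can use struct-update syntax. A hedged example of building a request for a single table (the catalog, schema, and table names are illustrative):

// Illustrative: flush one table; region_number stays None (the default).
let request = FlushTableRequest {
    catalog_name: "greptime".to_string(),
    schema_name: "public".to_string(),
    table_name: Some("monitor".to_string()),
    ..Default::default()
};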

View File

@@ -23,6 +23,7 @@ use async_trait::async_trait;
 use common_query::logical_plan::Expr;
 use common_query::physical_plan::PhysicalPlanRef;
 use datatypes::schema::SchemaRef;
+use store_api::storage::RegionNumber;

 use crate::error::{Result, UnsupportedSnafu};
 use crate::metadata::{FilterPushDownType, TableId, TableInfoRef, TableType};
@@ -70,10 +71,10 @@ pub trait Table: Send + Sync {
         limit: Option<usize>,
     ) -> Result<PhysicalPlanRef>;

-    /// Tests whether the table provider can make use of a filter expression
+    /// Tests whether the table provider can make use of any or all filter expressions
     /// to optimise data retrieval.
-    fn supports_filter_pushdown(&self, _filter: &Expr) -> Result<FilterPushDownType> {
-        Ok(FilterPushDownType::Unsupported)
+    fn supports_filters_pushdown(&self, filters: &[&Expr]) -> Result<Vec<FilterPushDownType>> {
+        Ok(vec![FilterPushDownType::Unsupported; filters.len()])
     }

     /// Alter table.
@@ -94,6 +95,12 @@ pub trait Table: Send + Sync {
         .fail()?
     }

+    /// Flush table.
+    async fn flush(&self, region_number: Option<RegionNumber>) -> Result<()> {
+        let _ = region_number;
+        UnsupportedSnafu { operation: "FLUSH" }.fail()?
+    }
+
     /// Close the table.
     async fn close(&self) -> Result<()> {
         Ok(())
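Two additions to the Table trait are visible here: supports_filters_pushdown now answers for a slice of filters at once, returning one FilterPushDownType per filter in the same order, and flush defaults to an Unsupported error so only engines with real flush support need to override it. A hedged usage sketch (table, filter_a and filter_b are assumed to exist; region 0 is an arbitrary example):

// Illustrative: one pushdown verdict per filter, in order.
let verdicts = table.supports_filters_pushdown(&[&filter_a, &filter_b])?;
assert_eq!(verdicts.len(), 2);

// Illustrative: flush a single region; the default impl returns an
// Unsupported error for engines that do not implement flushing.
table.flush(Some(0)).await?;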

View File

@@ -78,15 +78,18 @@ impl TableProvider for DfTableProviderAdapter {
         Ok(Arc::new(DfPhysicalPlanAdapter(inner)))
     }

-    fn supports_filter_pushdown(&self, filter: &DfExpr) -> DfResult<DfTableProviderFilterPushDown> {
-        let p = self
+    fn supports_filters_pushdown(
+        &self,
+        filters: &[&DfExpr],
+    ) -> DfResult<Vec<DfTableProviderFilterPushDown>> {
+        let filters = filters
+            .iter()
+            .map(|&x| x.clone().into())
+            .collect::<Vec<_>>();
+        Ok(self
             .table
-            .supports_filter_pushdown(&filter.clone().into())?;
-        match p {
-            FilterPushDownType::Unsupported => Ok(DfTableProviderFilterPushDown::Unsupported),
-            FilterPushDownType::Inexact => Ok(DfTableProviderFilterPushDown::Inexact),
-            FilterPushDownType::Exact => Ok(DfTableProviderFilterPushDown::Exact),
-        }
+            .supports_filters_pushdown(&filters.iter().collect::<Vec<_>>())
+            .map(|v| v.into_iter().map(Into::into).collect::<Vec<_>>())?)
     }
 }
@@ -155,16 +158,11 @@ impl Table for TableAdapter {
         Ok(Arc::new(PhysicalPlanAdapter::new(schema, execution_plan)))
     }

-    fn supports_filter_pushdown(&self, filter: &Expr) -> Result<FilterPushDownType> {
-        match self
-            .table_provider
-            .supports_filter_pushdown(filter.df_expr())
-            .context(error::DatafusionSnafu)?
-        {
-            DfTableProviderFilterPushDown::Unsupported => Ok(FilterPushDownType::Unsupported),
-            DfTableProviderFilterPushDown::Inexact => Ok(FilterPushDownType::Inexact),
-            DfTableProviderFilterPushDown::Exact => Ok(FilterPushDownType::Exact),
-        }
+    fn supports_filters_pushdown(&self, filters: &[&Expr]) -> Result<Vec<FilterPushDownType>> {
+        self.table_provider
+            .supports_filters_pushdown(&filters.iter().map(|x| x.df_expr()).collect::<Vec<_>>())
+            .context(error::DatafusionSnafu)
+            .map(|v| v.into_iter().map(Into::into).collect::<Vec<_>>())
     }
 }

View File

@@ -275,7 +275,8 @@ pub async fn setup_test_http_app(store_type: StorageType, name: &str) -> (Router
     .await
     .unwrap();
     let http_server = HttpServer::new(
-        ServerSqlQueryHandlerAdaptor::arc(Arc::new(build_frontend_instance(instance))),
+        ServerSqlQueryHandlerAdaptor::arc(Arc::new(build_frontend_instance(instance.clone()))),
+        ServerGrpcQueryHandlerAdaptor::arc(instance.clone()),
         HttpOptions::default(),
     );
     (http_server.make_app(), guard)
@@ -296,8 +297,10 @@ pub async fn setup_test_http_app_with_frontend(
     )
     .await
     .unwrap();
+    let frontend_ref = Arc::new(frontend);
     let mut http_server = HttpServer::new(
-        ServerSqlQueryHandlerAdaptor::arc(Arc::new(frontend)),
+        ServerSqlQueryHandlerAdaptor::arc(frontend_ref.clone()),
+        ServerGrpcQueryHandlerAdaptor::arc(frontend_ref),
         HttpOptions::default(),
     );
     http_server.set_script_handler(instance.clone());

View File

@@ -183,7 +183,7 @@ async fn insert_and_assert(db: &Database) {
         row_count: 4,
     };
     let result = db.insert(request).await;
-    result.unwrap();
+    assert_eq!(result.unwrap(), 4);

     let result = db
         .sql(

View File

@@ -1,49 +0,0 @@
-CREATE TABLE integers(i INTEGER, j BIGINT TIME INDEX);
-
-Affected Rows: 0
-
-INSERT INTO integers VALUES (1, 1), (2, 2), (3, 3), (NULL, 4);
-
-Affected Rows: 4
-
-SELECT i1.i, i2.i FROM integers i1, integers i2 WHERE i1.i=i2.i ORDER BY 1;
-
-+---+---+
-| i | i |
-+---+---+
-| 1 | 1 |
-| 2 | 2 |
-| 3 | 3 |
-+---+---+
-
-SELECT i1.i,i2.i FROM integers i1, integers i2 WHERE i1.i=i2.i AND i1.i>1 ORDER BY 1;
-
-+---+---+
-| i | i |
-+---+---+
-| 2 | 2 |
-| 3 | 3 |
-+---+---+
-
-SELECT i1.i,i2.i,i3.i FROM integers i1, integers i2, integers i3 WHERE i1.i=i2.i AND i1.i=i3.i AND i1.i>1 ORDER BY 1;
-
-+---+---+---+
-| i | i | i |
-+---+---+---+
-| 2 | 2 | 2 |
-| 3 | 3 | 3 |
-+---+---+---+
-
-SELECT i1.i,i2.i FROM integers i1 JOIN integers i2 ON i1.i=i2.i WHERE i1.i>1 ORDER BY 1;
-
-+---+---+
-| i | i |
-+---+---+
-| 2 | 2 |
-| 3 | 3 |
-+---+---+
-
-DROP TABLE integers;
-
-Affected Rows: 1

View File

@@ -1,15 +0,0 @@
-CREATE TABLE integers(i INTEGER, j BIGINT TIME INDEX);
-
-INSERT INTO integers VALUES (1, 1), (2, 2), (3, 3), (NULL, 4);
-
-SELECT i1.i, i2.i FROM integers i1, integers i2 WHERE i1.i=i2.i ORDER BY 1;
-
-SELECT i1.i,i2.i FROM integers i1, integers i2 WHERE i1.i=i2.i AND i1.i>1 ORDER BY 1;
-
-SELECT i1.i,i2.i,i3.i FROM integers i1, integers i2, integers i3 WHERE i1.i=i2.i AND i1.i=i3.i AND i1.i>1 ORDER BY 1;
-
-SELECT i1.i,i2.i FROM integers i1 JOIN integers i2 ON i1.i=i2.i WHERE i1.i>1 ORDER BY 1;
-
--- TODO(LFC): Resolve #790, then port remaining test case from standalone.
-DROP TABLE integers;

View File

@@ -92,15 +92,32 @@ SELECT i1.i,i2.i FROM integers i1 LEFT OUTER JOIN integers i2 ON 1=1 WHERE i1.i=
 SELECT * FROM integers WHERE i IN ((SELECT i FROM integers)) ORDER BY i;

-Error: 3001(EngineExecuteQuery), This feature is not implemented: Physical plan does not support logical expression (<subquery>)
++---+---+
+| i | j |
++---+---+
+| 1 | 1 |
+| 2 | 2 |
+| 3 | 3 |
++---+---+

 SELECT * FROM integers WHERE i NOT IN ((SELECT i FROM integers WHERE i=1)) ORDER BY i;

-Error: 3001(EngineExecuteQuery), This feature is not implemented: Physical plan does not support logical expression (<subquery>)
++---+---+
+| i | j |
++---+---+
+| 2 | 2 |
+| 3 | 3 |
+|   | 4 |
++---+---+

 SELECT * FROM integers WHERE i IN ((SELECT i FROM integers)) AND i<3 ORDER BY i;

-Error: 3001(EngineExecuteQuery), This feature is not implemented: Physical plan does not support logical expression (<subquery>)
++---+---+
+| i | j |
++---+---+
+| 1 | 1 |
+| 2 | 2 |
++---+---+

 SELECT i1.i,i2.i FROM integers i1, integers i2 WHERE i IN ((SELECT i FROM integers)) AND i1.i=i2.i ORDER BY 1;