Compare commits

...

20 Commits

Author SHA1 Message Date
ZonaHe
d2c90b4c59 feat: update dashboard to v0.2.2 (#1426)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2023-04-19 19:09:17 +08:00
Weny Xu
5a05e3107c feat: implement parsing format from hashmap (#1420)
* feat: implement parsing format from hashmap

* chore: apply suggestions from CR
2023-04-19 16:29:31 +08:00
Hao
e4cd08c750 feat: add table id and engine to information_schema.TABLES (#1407)
* feat: add table id and engine to informatin_schema.TABLES

* Update src/catalog/src/information_schema/tables.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: change table_engine to engine

* test: update sqlness for information schema

* test: update information_schema test in frontend::tests::instance_test.rs

* fix: github action sqlness information_schema test fail

* test: ignore table_id in information_schema

* test: support distribute and standalone have different output

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-04-19 10:52:02 +08:00
Ruihang Xia
e8bb00f0be feat: impl instant query interface (#1410)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-18 23:25:14 +08:00
LFC
ff2784da0f test: add SELECT ... LIMIT ... test cases for distributed mode (#1419) 2023-04-18 23:05:43 +08:00
liyang
4652b62481 chore: use alicloud imagehub (#1418)
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2023-04-18 14:35:14 +00:00
Lei, HUANG
0e4d4f0300 chore: release 0.2.0 (#1413)
* chore: bump version to v0.2.0

* chore: bump dashboard to v0.2.1

* chore: remove push uhub step

* fix: static assets path prefix
2023-04-18 22:12:13 +08:00
shuiyisong
145f8eb5a7 refactor: parallelize open table (#1392)
* refactor: change open_table to parallel on datanode startup

* chore: try move out register schema table

* chore: change mito engine to key lock

* chore: minor change

* chore: minor change

* chore: update error definition

* chore: remove rwlock on tables

* chore: try parallel register table on schema provider

* chore: add rt log

* chore: add region open rt log

* chore: add actual open region rt log

* chore: add recover rt log

* chore: divide to three part rt log

* chore: remove debug log

* chore: add replay rt log

* chore: update cargo lock

* chore: remove debug log

* chore: revert unused change

* chore: update err msg

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: fix cr issue

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* chore: fix cr issue

* chore: fix cr issue

---------

Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-04-18 21:36:29 +08:00
discord9
de8b889701 chore: update RustPython depend (#1406)
* chore: update RustPython to newer version

* chore: bump ver

* chore: PR advices
2023-04-18 15:39:57 +08:00
Lei, HUANG
1c65987026 chore: remove Release prefix from release name (#1409) 2023-04-18 06:25:08 +00:00
Near
c6f024a171 feat: Add metrics for cache hit/miss for object store cache (#1405)
* Add the cache hit/miss counter

* Verify the cache metrics are included

* Resolve comments

* Rename the error kind label name to be consistent with other metrics

* Rename the object store metric names

* Avoid using glob imports

* Format the code

* chore: Update src/object-store/src/metrics.rs mod doc

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2023-04-18 14:08:19 +08:00
localhost
0c88bb09e3 chore: add some metrics for grpc client (#1398)
* chore: add some metrics for grpc client

* chore: add grpc preix and change metrics-exporter-ptometheus to add global prefix

---------

Co-authored-by: paomian <qtang@greptime.com>
2023-04-18 13:55:01 +08:00
Ruihang Xia
f4190cfca6 fix: table scan without projection (#1404)
* fix: table scan without projection

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update PR reference

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-17 20:10:36 +08:00
zyy17
b933ffddd0 ci: set whether it is the latest release by using 'ncipollo/release-action and update install.sh (#1400)
* ci: set whether it is the latest release by using 'ncipollo/release-action'

* ci: modify greptimedb install script to use the latest nightly version binary
2023-04-17 18:44:00 +08:00
Lei, HUANG
1214b5b43e docs: fix timestamp rendering in readme (#1399)
doc: fix timestamp rendering in readme
2023-04-17 17:07:25 +08:00
Ruihang Xia
a47134a971 chore: don't render reproduce as shell in issue template (#1397)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2023-04-17 16:42:51 +08:00
Ruihang Xia
dc85a4b5bb feat: migrate substrait to datafusion official implementation (#1238)
* some test cases will fail

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* revert version changes

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update substrait-proto version

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix compile

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update df again

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* Update src/common/substrait/Cargo.toml

Co-authored-by: dennis zhuang <killme2008@gmail.com>

* revert COPY FROM / COPY TO sqlness to standalone only

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: dennis zhuang <killme2008@gmail.com>
2023-04-17 15:54:35 +08:00
Yingwen
0937ccdb61 docs: Add docs about schema structs (#1373)
* docs: Add docs about schema structs

* docs: refine schema struct docs

- Describe SchemaRef and relationship between our schema and arrow's.
- Add more examples

* docs: Add code link to schemas

* docs: Add conversion graph

* docs: Apply suggestions from code review

Co-authored-by: LFC <bayinamine@gmail.com>

---------

Co-authored-by: LFC <bayinamine@gmail.com>
2023-04-17 12:11:24 +08:00
Weny Xu
408de51be8 feat: implement JsonOpener and CsvOpener (#1367)
* feat: introduce JsonOpener and CsvOpener

* refactor: refactor Opener

* docs: add doc
2023-04-17 11:42:16 +08:00
LFC
f7b7a9c801 feat: implement COPY for cluster (#1388) 2023-04-17 11:04:47 +08:00
93 changed files with 3270 additions and 967 deletions


@@ -81,6 +81,5 @@ body:
Please walk us through and provide steps and details on how
to reproduce the issue. If possible, provide scripts that we
can run to trigger the bug.
render: bash
validations:
required: true


@@ -319,35 +319,50 @@ jobs:
SCHEDULED_BUILD_VERSION=${{ env.SCHEDULED_BUILD_VERSION_PREFIX }}-${{ env.SCHEDULED_PERIOD }}-$buildTime
echo "SCHEDULED_BUILD_VERSION=${SCHEDULED_BUILD_VERSION}" >> $GITHUB_ENV
# Only publish release when the release tag is like v1.0.0, v1.0.1, v1.0.2, etc.
- name: Set whether it is the latest release
run: |
if [[ "${{ github.ref_name }}" =~ ^v[0-9]+\.[0-9]+\.[0-9]+$ ]]; then
echo "prerelease=false" >> $GITHUB_ENV
echo "makeLatest=true" >> $GITHUB_ENV
else
echo "prerelease=true" >> $GITHUB_ENV
echo "makeLatest=false" >> $GITHUB_ENV
fi
- name: Create scheduled build git tag
if: github.event_name == 'schedule'
run: |
git tag ${{ env.SCHEDULED_BUILD_VERSION }}
- name: Publish scheduled release # configure the different release title and tags.
uses: softprops/action-gh-release@v1
uses: ncipollo/release-action@v1
if: github.event_name == 'schedule'
with:
name: "Release ${{ env.SCHEDULED_BUILD_VERSION }}"
tag_name: ${{ env.SCHEDULED_BUILD_VERSION }}
generate_release_notes: true
files: |
prerelease: ${{ env.prerelease }}
makeLatest: ${{ env.makeLatest }}
tag: ${{ env.SCHEDULED_BUILD_VERSION }}
generateReleaseNotes: true
artifacts: |
**/greptime-*
- name: Publish release
uses: softprops/action-gh-release@v1
uses: ncipollo/release-action@v1
if: github.event_name != 'schedule'
with:
name: "Release ${{ github.ref_name }}"
files: |
name: "${{ github.ref_name }}"
prerelease: ${{ env.prerelease }}
makeLatest: ${{ env.makeLatest }}
generateReleaseNotes: true
artifacts: |
**/greptime-*
docker-push-uhub:
name: Push docker image to UCloud Container Registry
docker-push-acr:
name: Push docker image to alibaba cloud container registry
needs: [docker]
runs-on: ubuntu-latest
if: github.repository == 'GreptimeTeam/greptimedb' && github.event_name != 'workflow_dispatch'
# Push to uhub may fail(500 error), but we don't want to block the release process. The failed job will be retried manually.
continue-on-error: true
steps:
- name: Checkout sources
@@ -359,12 +374,12 @@ jobs:
- name: Set up Docker Buildx
uses: docker/setup-buildx-action@v2
- name: Login to UCloud Container Registry
- name: Login to alibaba cloud container registry
uses: docker/login-action@v2
with:
registry: uhub.service.ucloud.cn
username: ${{ secrets.UCLOUD_USERNAME }}
password: ${{ secrets.UCLOUD_PASSWORD }}
registry: registry.cn-hangzhou.aliyuncs.com
username: ${{ secrets.ALICLOUD_USERNAME }}
password: ${{ secrets.ALICLOUD_PASSWORD }}
- name: Configure scheduled build image tag # the tag would be ${SCHEDULED_BUILD_VERSION_PREFIX}-YYYYMMDD-${SCHEDULED_PERIOD}
shell: bash
@@ -381,9 +396,9 @@ jobs:
VERSION=${{ github.ref_name }}
echo "IMAGE_TAG=${VERSION:1}" >> $GITHUB_ENV
- name: Push image to uhub # Use 'docker buildx imagetools create' to create a new image base on source image.
- name: Push image to alibaba cloud container registry # Use 'docker buildx imagetools create' to create a new image base on source image.
run: |
docker buildx imagetools create \
--tag uhub.service.ucloud.cn/greptime/greptimedb:latest \
--tag uhub.service.ucloud.cn/greptime/greptimedb:${{ env.IMAGE_TAG }} \
--tag registry.cn-hangzhou.aliyuncs.com/greptime/greptimedb:latest \
--tag registry.cn-hangzhou.aliyuncs.com/greptime/greptimedb:${{ env.IMAGE_TAG }} \
greptime/greptimedb:${{ env.IMAGE_TAG }}

Cargo.lock (generated, 1535 lines): diff suppressed because it is too large.


@@ -47,7 +47,7 @@ members = [
]
[workspace.package]
version = "0.1.1"
version = "0.2.0"
edition = "2021"
license = "Apache-2.0"
@@ -59,12 +59,14 @@ arrow-schema = { version = "37.0", features = ["serde"] }
async-stream = "0.3"
async-trait = "0.1"
chrono = { version = "0.4", features = ["serde"] }
datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "74a778ca6016a853a3c3add3fa8c6f12f4fe4561" }
# TODO(ruihang): use arrow-datafusion when it contains https://github.com/apache/arrow-datafusion/pull/6032
datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "b14f7a9ffe91257fc3d2a5d654f2a1a14a8fc793" }
futures = "0.3"
futures-util = "0.3"
parquet = "37.0"
@@ -80,6 +82,7 @@ tokio = { version = "1.24.2", features = ["full"] }
tokio-util = { version = "0.7", features = ["io-util"] }
tonic = { version = "0.9", features = ["tls"] }
uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
metrics = "0.20"
[profile.release]
debug = true


@@ -131,16 +131,16 @@ about Kubernetes deployment, check our [docs](https://docs.greptime.com/).
SELECT * FROM monitor;
```
```TEXT
+-------+---------------------+------+--------+
| host | ts | cpu | memory |
+-------+---------------------+------+--------+
| host1 | 2022-08-19 08:32:35 | 66.6 | 1024 |
| host2 | 2022-08-19 08:32:36 | 77.7 | 2048 |
| host3 | 2022-08-19 08:32:37 | 88.8 | 4096 |
+-------+---------------------+------+--------+
3 rows in set (0.01 sec)
```
```TEXT
+-------+--------------------------+------+--------+
| host | ts | cpu | memory |
+-------+--------------------------+------+--------+
| host1 | 2022-08-19 16:32:35+0800 | 66.6 | 1024 |
| host2 | 2022-08-19 16:32:36+0800 | 77.7 | 2048 |
| host3 | 2022-08-19 16:32:37+0800 | 88.8 | 4096 |
+-------+--------------------------+------+--------+
3 rows in set (0.03 sec)
```
You can always cleanup test database by removing `/tmp/greptimedb`.

docs/schema-structs.md (new file, 527 lines)

@@ -0,0 +1,527 @@
# Schema Structs
# Common Schemas
The `datatypes` crate defines the elementary schema structs used to describe metadata.
## ColumnSchema
[ColumnSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/datatypes/src/schema/column_schema.rs#L36) represents the metadata of a column. It is equivalent to arrow's [Field](https://docs.rs/arrow/latest/arrow/datatypes/struct.Field.html), with additional metadata such as the default constraint and whether the column is the time index. The time index is the column declared with the `TIME INDEX` constraint of a table. We can convert a `ColumnSchema` into an arrow `Field` and convert the `Field` back into a `ColumnSchema` without losing metadata.
```rust
pub struct ColumnSchema {
pub name: String,
pub data_type: ConcreteDataType,
is_nullable: bool,
is_time_index: bool,
default_constraint: Option<ColumnDefaultConstraint>,
metadata: Metadata,
}
```
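For illustration, here is a hedged sketch of constructing column schemas (`ColumnSchema::new` is used elsewhere in this changeset; the `timestamp_millisecond_datatype` constructor and the `with_time_index` builder are assumptions based on the fields above):
```rust
use datatypes::prelude::ConcreteDataType;
use datatypes::schema::ColumnSchema;

// A nullable string column and a non-nullable timestamp column marked as the time index.
fn example_columns() -> Vec<ColumnSchema> {
    vec![
        ColumnSchema::new("host", ConcreteDataType::string_datatype(), true),
        // `with_time_index` is assumed to be the builder that sets `is_time_index`.
        ColumnSchema::new("ts", ConcreteDataType::timestamp_millisecond_datatype(), false)
            .with_time_index(true),
    ]
}
```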
## Schema
[Schema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/datatypes/src/schema.rs#L38) is an ordered sequence of `ColumnSchema`s. It is equivalent to arrow's [Schema](https://docs.rs/arrow/latest/arrow/datatypes/struct.Schema.html), with additional metadata including the index of the time index column and the version of the schema. As with `ColumnSchema`, we can convert our `Schema` to and from arrow's `Schema`.
```rust
use arrow::datatypes::Schema as ArrowSchema;
pub struct Schema {
column_schemas: Vec<ColumnSchema>,
name_to_index: HashMap<String, usize>,
arrow_schema: Arc<ArrowSchema>,
timestamp_index: Option<usize>,
version: u32,
}
pub type SchemaRef = Arc<Schema>;
```
We alias `Arc<Schema>` as `SchemaRef` since it is used frequently. We mostly use our own `ColumnSchema` and `Schema` structs instead of arrow's `Field` and `Schema`, unless we need to invoke third-party libraries (like DataFusion or ArrowFlight) that rely on arrow.
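As a hedged sketch of how these pieces fit together (assuming `Schema::new` and the `arrow_schema()` accessor shown in the conversion graph at the end of this document; the datatype constructors are assumptions as well):
```rust
use std::sync::Arc;

use datatypes::prelude::ConcreteDataType;
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};

fn build_schema() -> SchemaRef {
    let column_schemas = vec![
        ColumnSchema::new("host", ConcreteDataType::string_datatype(), false),
        ColumnSchema::new("cpu", ConcreteDataType::float64_datatype(), true),
    ];
    // `Schema::new` derives `name_to_index` and caches the arrow schema for us.
    let schema: SchemaRef = Arc::new(Schema::new(column_schemas));
    // The cached arrow schema is exposed via `arrow_schema()`.
    let _arrow = schema.arrow_schema();
    schema
}
```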
## RawSchema
`Schema` contains derived fields, such as the map from column names to their indices in the `ColumnSchema` sequence and a cached arrow `Schema`. We can reconstruct these fields from the `ColumnSchema` sequence, so we don't want to serialize them; this is why we don't derive `Serialize` and `Deserialize` for `Schema`. Instead, we introduce a new struct [RawSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/datatypes/src/schema/raw.rs#L24) that keeps all required fields of a `Schema` and derives the serialization traits. To serialize a `Schema`, we first convert it into a `RawSchema` and then serialize the `RawSchema`.
```rust
pub struct RawSchema {
pub column_schemas: Vec<ColumnSchema>,
pub timestamp_index: Option<usize>,
pub version: u32,
}
```
We want to keep `Schema` simple and avoid putting too much business-related metadata into it, since many different structs and traits rely on it.
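A hedged sketch of the serialization round trip, assuming the `From`/`TryFrom` conversions shown in the conversion graph at the end of this document and using `serde_json` purely for illustration:
```rust
use datatypes::schema::{RawSchema, Schema};

// Serialize a `Schema` by converting it into a `RawSchema` first, then rebuild the
// `Schema` (including its derived fields) from the deserialized `RawSchema`.
fn roundtrip(schema: &Schema) -> Result<Schema, Box<dyn std::error::Error>> {
    let raw = RawSchema::from(schema); // assumed `From<&Schema>` impl
    let json = serde_json::to_string(&raw)?; // `RawSchema` derives `Serialize`
    let raw: RawSchema = serde_json::from_str(&json)?;
    let schema = Schema::try_from(raw)?; // assumed `TryFrom<RawSchema>` impl
    Ok(schema)
}
```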
# Schema of the Table
A table maintains its schema in [TableMeta](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/table/src/metadata.rs#L97).
```rust
pub struct TableMeta {
pub schema: SchemaRef,
pub primary_key_indices: Vec<usize>,
pub value_indices: Vec<usize>,
// ...
}
```
The order of columns in `TableMeta::schema` is the same as the order specified in the `CREATE TABLE` statement used to create the table.
The field `primary_key_indices` stores the indices of the primary key columns. The field `value_indices` records the indices of the value columns, i.e. columns that are neither part of the primary key nor the time index (we sometimes call them field columns).
Suppose we create a table with the following SQL:
```sql
CREATE TABLE cpu (
ts TIMESTAMP,
host STRING,
usage_user DOUBLE,
usage_system DOUBLE,
datacenter STRING,
TIME INDEX (ts),
PRIMARY KEY(datacenter, host)) ENGINE=mito WITH(regions=1);
```
Then the table's `TableMeta` may look like this:
```json
{
"schema":{
"column_schemas":[
"ts",
"host",
"usage_user",
"usage_system",
"datacenter"
],
"time_index":0,
"version":0
},
"primary_key_indices":[
4,
1
],
"value_indices":[
2,
3
]
}
```
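The `value_indices` field can be derived from the other metadata; the following hypothetical helper (not part of `TableMeta`) makes the relationship explicit:
```rust
/// Hypothetical helper: value columns are whatever remains after removing the
/// primary key columns and the time index column.
fn value_indices(num_columns: usize, primary_key_indices: &[usize], time_index: usize) -> Vec<usize> {
    (0..num_columns)
        .filter(|i| *i != time_index && !primary_key_indices.contains(i))
        .collect()
}

fn main() {
    // The `cpu` table above: 5 columns, primary key indices [4, 1], time index 0.
    assert_eq!(value_indices(5, &[4, 1], 0), vec![2, 3]);
}
```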
# Schemas of the storage engine
We split a table into one or more units with the same schema and then store these units in the storage engine. Each unit is a region in the storage engine.
The storage engine maintains the schemas of regions in a more complicated way because it
- adds internal columns that are invisible to users to store additional metadata for each row
- provides a data model similar to the key-value model so it organizes columns in a different order
- maintains additional metadata like column id or column family
So the storage engine defines several schema structs:
- RegionSchema
- StoreSchema
- ProjectedSchema
## RegionSchema
A [RegionSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/storage/src/schema/region.rs#L37) describes the schema of a region.
```rust
pub struct RegionSchema {
user_schema: SchemaRef,
store_schema: StoreSchemaRef,
columns: ColumnsMetadataRef,
}
```
Each region reserves some columns called `internal columns` for internal usage:
- `__sequence`, sequence number of a row
- `__op_type`, operation type of a row, such as `PUT` or `DELETE`
- `__version`, user-specified version of a row, reserved but not used. We might remove this in the future
The table engine can't see the `__sequence` and `__op_type` columns, so the `RegionSchema` itself maintains two internal schemas:
- User schema, a `Schema` struct that doesn't have internal columns
- Store schema, a `StoreSchema` struct that has internal columns
The `ColumnsMetadata` struct keeps metadata about all columns, but most of the time we only need the metadata in the user schema and the store schema, so we just ignore it here. We may remove this struct in the future.
`RegionSchema` organizes columns in the following order:
```
key columns, timestamp, [__version,] value columns, __sequence, __op_type
```
We can ignore the `__version` column because it is disabled now:
```
key columns, timestamp, value columns, __sequence, __op_type
```
Key columns are columns of a table's primary key. Timestamp is the time index column. A region sorts all rows by key columns, timestamp, sequence, and op type.
So the `RegionSchema` of our `cpu` table above looks like this:
```json
{
"user_schema":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system"
],
"store_schema":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system",
"__sequence",
"__op_type"
]
}
```
## StoreSchema
As described above, a [StoreSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/storage/src/schema/store.rs#L36) is a schema that knows all internal columns.
```rust
struct StoreSchema {
columns: Vec<ColumnMetadata>,
schema: SchemaRef,
row_key_end: usize,
user_column_end: usize,
}
```
The columns in the `columns` and `schema` fields have the same order. The `ColumnMetadata` carries metadata like the column id, column family id, and comment. The `StoreSchema` also stores this metadata in `StoreSchema::schema`, so we can convert between a `StoreSchema` and arrow's `Schema`. We use this to persist the `StoreSchema` in SST files, since our SST format is Parquet, which can take arrow's `Schema` as its schema.
The `StoreSchema` of the region above is similar to this:
```json
{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system",
"__sequence",
"__op_type"
],
"time_index":2,
"version":0
},
"row_key_end":3,
"user_column_end":5
}
```
The key columns and the timestamp column form the row key of a row. We put them together so we can use `row_key_end` to get the indices of all row key columns. Similarly, we can use `user_column_end` to get the indices of all user columns (non-internal columns).
```rust
impl StoreSchema {
#[inline]
pub(crate) fn row_key_indices(&self) -> impl Iterator<Item = usize> {
0..self.row_key_end
}
#[inline]
pub(crate) fn value_indices(&self) -> impl Iterator<Item = usize> {
self.row_key_end..self.user_column_end
}
}
```
Another useful property of `StoreSchema` is that it always contains the key columns, the timestamp column, and the internal columns, because we need them to perform merge, deduplication, and deletion. Projection on a `StoreSchema` therefore only projects value columns, as sketched below.
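A hypothetical sketch of that rule (not the actual implementation): projecting a store schema only filters the value columns and always keeps the row key, timestamp, and internal columns.
```rust
/// Hypothetical sketch: `projected_values` holds offsets into the value columns,
/// i.e. indices relative to `row_key_end`.
fn project_store_columns<'a>(
    columns: &[&'a str],
    row_key_end: usize,
    user_column_end: usize,
    projected_values: &[usize],
) -> Vec<&'a str> {
    let mut out = columns[..row_key_end].to_vec(); // key columns and timestamp
    out.extend(projected_values.iter().map(|i| columns[row_key_end + i]));
    out.extend_from_slice(&columns[user_column_end..]); // __sequence, __op_type
    out
}

fn main() {
    let columns = [
        "datacenter", "host", "ts", "usage_user", "usage_system", "__sequence", "__op_type",
    ];
    // Keep only `usage_system` (offset 1 among the value columns).
    assert_eq!(
        project_store_columns(&columns, 3, 5, &[1]),
        vec!["datacenter", "host", "ts", "usage_system", "__sequence", "__op_type"]
    );
}
```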
## ProjectedSchema
To support arbitrary projection, we introduce the [ProjectedSchema](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/storage/src/schema/projected.rs#L106).
```rust
pub struct ProjectedSchema {
projection: Option<Projection>,
schema_to_read: StoreSchemaRef,
projected_user_schema: SchemaRef,
}
```
We need to handle several cases while doing projection:
- The column order of the table and the region is different.
- The projection can be in arbitrary order, e.g. `select usage_user, host from cpu` and `select host, usage_user from cpu` have different projection orders.
- We support `ALTER TABLE`, so data files may have different schemas.
### Projection
Let's take an example to see how projection works. Suppose we want to select `ts`, `usage_system` from the `cpu` table.
```sql
CREATE TABLE cpu (
ts TIMESTAMP,
host STRING,
usage_user DOUBLE,
usage_system DOUBLE,
datacenter STRING,
TIME INDEX (ts),
PRIMARY KEY(datacenter, host)) ENGINE=mito WITH(regions=1);
select ts, usage_system from cpu;
```
The query engine uses the projection `[0, 3]` to scan the table. However, the columns in the region have a different order, so the table engine adjusts the projection to `[2, 4]`. Recall the region's user schema:
```json
{
"user_schema":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system"
],
}
```
As you can see, the output order is still `[ts, usage_system]`. This is the schema users see after projection, so we call it the `projected user schema`.
But the storage engine also needs to read the key columns, the timestamp column, and the internal columns, so the `ProjectedSchema` also maintains a projected `StoreSchema`.
The `Projection` struct is a helper that computes both the projected user schema and the projected store schema.
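A hypothetical sketch of the index remapping described above (the real `Projection` works on the schema structs rather than plain name lists):
```rust
/// Hypothetical sketch: remap a table-level projection to the region's column
/// order by matching column names.
fn adjust_projection(table: &[&str], region: &[&str], projection: &[usize]) -> Vec<usize> {
    projection
        .iter()
        .map(|i| {
            let name = table[*i];
            region
                .iter()
                .position(|c| *c == name)
                .expect("every table column exists in the region")
        })
        .collect()
}

fn main() {
    let table = ["ts", "host", "usage_user", "usage_system", "datacenter"];
    let region = ["datacenter", "host", "ts", "usage_user", "usage_system"];
    // `select ts, usage_system` projects [0, 3] on the table, which is [2, 4] in the region.
    assert_eq!(adjust_projection(&table, &region, &[0, 3]), vec![2, 4]);
}
```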
So we can construct the following `ProjectedSchema`:
```json
{
"schema_to_read":{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_system",
"__sequence",
"__op_type"
],
"time_index":2,
"version":0
},
"row_key_end":3,
"user_column_end":4
},
"projected_user_schema":{
"column_schemas":[
"ts",
"usage_system"
],
"time_index":0
}
}
```
As you can see, `schema_to_read` doesn't contain the column `usage_user`, which is not in the projection and therefore doesn't need to be read.
### ReadAdapter
As mentioned above, we can alter a table so the underlying files (SSTs) and memtables in the storage engine may have different schemas.
To simplify the logic of `ProjectedSchema`, we resolve the differences between schemas before the projection step (i.e. before constructing the `ProjectedSchema`). We introduce a [ReadAdapter](https://github.com/GreptimeTeam/greptimedb/blob/9fa871a3fad07f583dc1863a509414da393747f8/src/storage/src/schema/compat.rs#L90) that adapts rows with different source schemas to the same expected schema.
So we can always use the current `RegionSchema` of the region to construct the `ProjectedSchema`, and then create a `ReadAdapter` for each memtable or SST.
```rust
#[derive(Debug)]
pub struct ReadAdapter {
source_schema: StoreSchemaRef,
dest_schema: ProjectedSchemaRef,
indices_in_result: Vec<Option<usize>>,
is_source_needed: Vec<bool>,
}
```
For each column required by `dest_schema`, `indices_in_result` stores the index of that column in the row read from the source memtable or SST. If the source row doesn't contain that column, the index is `None`.
The field `is_source_needed` stores whether a column in the source memtable or SST is needed.
Suppose we add a new column `usage_idle` to the table `cpu`.
```sql
ALTER TABLE cpu ADD COLUMN usage_idle DOUBLE;
```
The new `StoreSchema` becomes:
```json
{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system",
"usage_idle",
"__sequence",
"__op_type"
],
"time_index":2,
"version":1
},
"row_key_end":3,
"user_column_end":6
}
```
Note that we bump the version of the schema to 1.
Suppose we want to select `ts`, `usage_system`, and `usage_idle`. While reading data written with the old schema (version 0), the storage engine creates a `ReadAdapter` like this:
```json
{
"source_schema":{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_user",
"usage_system",
"__sequence",
"__op_type"
],
"time_index":2,
"version":0
},
"row_key_end":3,
"user_column_end":5
},
"dest_schema":{
"schema_to_read":{
"schema":{
"column_schemas":[
"datacenter",
"host",
"ts",
"usage_system",
"usage_idle",
"__sequence",
"__op_type"
],
"time_index":2,
"version":1
},
"row_key_end":3,
"user_column_end":5
},
"projected_user_schema":{
"column_schemas":[
"ts",
"usage_system",
"usage_idle"
],
"time_index":0
}
},
"indices_in_result":[
0,
1,
2,
3,
null,
4,
5
],
"is_source_needed":[
true,
true,
true,
false,
true,
true,
true
]
}
```
We don't need to read `usage_user`, so `is_source_needed[3]` is false. The old schema doesn't have the column `usage_idle`, so `indices_in_result[4]` is `null` and the `ReadAdapter` inserts a null column into the output so that the output schema still contains `usage_idle`.
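A hypothetical sketch of how `indices_in_result` is applied (columns are represented as plain vectors here purely for illustration):
```rust
/// Hypothetical sketch: assemble the columns expected by the destination schema
/// from the columns actually read from a source memtable or SST.
fn adapt_columns<T: Clone>(
    source_columns: &[Vec<Option<T>>],
    indices_in_result: &[Option<usize>],
    num_rows: usize,
) -> Vec<Vec<Option<T>>> {
    indices_in_result
        .iter()
        .map(|index| match index {
            Some(i) => source_columns[*i].clone(),
            // The source predates the `ALTER TABLE` (e.g. `usage_idle` read from a
            // version-0 SST), so fill the column with nulls.
            None => vec![None; num_rows],
        })
        .collect()
}
```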
The figure below shows the relationship between `RegionSchema`, `StoreSchema`, `ProjectedSchema`, and `ReadAdapter`.
```text
┌──────────────────────────────┐
│ │
│ ┌────────────────────┐ │
│ │ store_schema │ │
│ │ │ │
│ │ StoreSchema │ │
│ │ version 1 │ │
│ └────────────────────┘ │
│ │
│ ┌────────────────────┐ │
│ │ user_schema │ │
│ └────────────────────┘ │
│ │
│ RegionSchema │
│ │
└──────────────┬───────────────┘
┌──────────────▼───────────────┐
│ │
│ ┌──────────────────────────┐ │
│ │ schema_to_read │ │
│ │ │ │
│ │ StoreSchema (projected) │ │
│ │ version 1 │ │
│ └──────────────────────────┘ │
┌───┤ ├───┐
│ │ ┌──────────────────────────┐ │ │
│ │ │ projected_user_schema │ │ │
│ │ └──────────────────────────┘ │ │
│ │ │ │
│ │ ProjectedSchema │ │
dest schema │ └──────────────────────────────┘ │ dest schema
│ │
│ │
┌──────▼───────┐ ┌───────▼──────┐
│ │ │ │
│ ReadAdapter │ │ ReadAdapter │
│ │ │ │
└──────▲───────┘ └───────▲──────┘
│ │
│ │
source schema │ │ source schema
│ │
┌───────┴─────────┐ ┌────────┴────────┐
│ │ │ │
│ ┌─────────────┐ │ │ ┌─────────────┐ │
│ │ │ │ │ │ │ │
│ │ StoreSchema │ │ │ │ StoreSchema │ │
│ │ │ │ │ │ │ │
│ │ version 0 │ │ │ │ version 1 │ │
│ │ │ │ │ │ │ │
│ └─────────────┘ │ │ └─────────────┘ │
│ │ │ │
│ SST 0 │ │ SST 1 │
│ │ │ │
└─────────────────┘ └─────────────────┘
```
# Conversion
This figure shows the conversion between schemas:
```text
┌─────────────┐ schema From ┌─────────────┐
│ ├──────────────────┐ ┌────────────────────────────► │
│ TableMeta │ │ │ │ RawSchema │
│ │ │ │ ┌─────────────────────────┤ │
└─────────────┘ │ │ │ TryFrom └─────────────┘
│ │ │
│ │ │
│ │ │
│ │ │
│ │ │
┌───────────────────┐ ┌─────▼──┴──▼──┐ arrow_schema() ┌─────────────────┐
│ │ │ ├─────────────────────► │
│ ColumnsMetadata │ ┌─────► Schema │ │ ArrowSchema ├──┐
│ │ │ │ ◄─────────────────────┤ │ │
└────┬───────────▲──┘ │ └───▲───▲──────┘ TryFrom └─────────────────┘ │
│ │ │ │ │ │
│ │ │ │ └────────────────────────────────────────┐ │
│ │ │ │ │ │
│ columns │ user_schema() │ │ │
│ │ │ │ projected_user_schema() schema() │
│ │ │ │ │ │
│ ┌───┴─────────────┴─┐ │ ┌────────────────────┐ │ │
columns │ │ │ └─────────────────┤ │ │ │ TryFrom
│ │ RegionSchema │ │ ProjectedSchema │ │ │
│ │ ├─────────────────────────► │ │ │
│ └─────────────────┬─┘ ProjectedSchema::new() └──────────────────┬─┘ │ │
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
│ │ │ │ │
┌────▼────────────────────┐ │ store_schema() ┌────▼───────┴──┐ │
│ │ └─────────────────────────────────────────► │ │
│ Vec<ColumnMetadata> │ │ StoreSchema ◄─────┘
│ ◄──────────────────────────────────────────────┤ │
└─────────────────────────┘ columns └───────────────┘
```


@@ -51,13 +51,17 @@ get_os_type
get_arch_type
if [ -n "${OS_TYPE}" ] && [ -n "${ARCH_TYPE}" ]; then
echo "Downloading ${BIN}, OS: ${OS_TYPE}, Arch: ${ARCH_TYPE}, Version: ${VERSION}"
# Use the latest nightly version.
if [ "${VERSION}" = "latest" ]; then
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/latest/download/${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz"
else
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz"
VERSION=$(curl -s -XGET "https://api.github.com/repos/${GITHUB_ORG}/${GITHUB_REPO}/releases" | grep tag_name | grep nightly | cut -d: -f 2 | sed 's/.*"\(.*\)".*/\1/' | uniq | sort -r | head -n 1)
if [ -z "${VERSION}" ]; then
echo "Failed to get the latest version."
exit 1
fi
fi
echo "Downloading ${BIN}, OS: ${OS_TYPE}, Arch: ${ARCH_TYPE}, Version: ${VERSION}"
wget "https://github.com/${GITHUB_ORG}/${GITHUB_REPO}/releases/download/${VERSION}/${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz"
tar xvf ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && rm ${BIN}-${OS_TYPE}-${ARCH_TYPE}.tgz && echo "Run './${BIN} --help' to get started"
fi


@@ -24,6 +24,7 @@ datafusion.workspace = true
datatypes = { path = "../datatypes" }
futures = "0.3"
futures-util.workspace = true
key-lock = "0.1"
lazy_static = "1.4"
meta-client = { path = "../meta-client" }
parking_lot = "0.12"


@@ -109,10 +109,16 @@ impl CatalogProvider for CatalogProviderAdapter {
}
///Greptime CatalogProvider -> datafusion's CatalogProvider
struct DfCatalogProviderAdapter {
pub struct DfCatalogProviderAdapter {
catalog_provider: CatalogProviderRef,
}
impl DfCatalogProviderAdapter {
pub fn new(catalog_provider: CatalogProviderRef) -> Self {
Self { catalog_provider }
}
}
impl DfCatalogProvider for DfCatalogProviderAdapter {
fn as_any(&self) -> &dyn Any {
self


@@ -20,6 +20,7 @@ use common_error::prelude::{Snafu, StatusCode};
use datafusion::error::DataFusionError;
use datatypes::prelude::ConcreteDataType;
use snafu::Location;
use tokio::task::JoinError;
use crate::DeregisterTableRequest;
@@ -127,6 +128,9 @@ pub enum Error {
source: table::error::Error,
},
#[snafu(display("Failed to open table in parallel, source: {}", source))]
ParallelOpenTable { source: JoinError },
#[snafu(display("Table not found while opening table, table info: {}", table_info))]
TableNotFound {
table_info: String,
@@ -261,7 +265,8 @@ impl ErrorExt for Error {
| Error::IllegalManagerState { .. }
| Error::CatalogNotFound { .. }
| Error::InvalidEntryType { .. }
| Error::InvalidSystemTableDef { .. } => StatusCode::Unexpected,
| Error::InvalidSystemTableDef { .. }
| Error::ParallelOpenTable { .. } => StatusCode::Unexpected,
Error::SystemCatalog { .. }
| Error::EmptyValue { .. }


@@ -23,7 +23,7 @@ use datafusion::physical_plan::stream::RecordBatchStreamAdapter as DfRecordBatch
use datafusion::physical_plan::SendableRecordBatchStream as DfSendableRecordBatchStream;
use datatypes::prelude::{ConcreteDataType, ScalarVectorBuilder, VectorRef};
use datatypes::schema::{ColumnSchema, Schema, SchemaRef};
use datatypes::vectors::StringVectorBuilder;
use datatypes::vectors::{StringVectorBuilder, UInt32VectorBuilder};
use snafu::ResultExt;
use table::metadata::TableType;
@@ -44,6 +44,8 @@ impl InformationSchemaTables {
ColumnSchema::new("table_schema", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_name", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_type", ConcreteDataType::string_datatype(), false),
ColumnSchema::new("table_id", ConcreteDataType::uint32_datatype(), true),
ColumnSchema::new("engine", ConcreteDataType::string_datatype(), true),
]));
Self {
schema,
@@ -73,6 +75,8 @@ struct InformationSchemaTablesBuilder {
schema_names: StringVectorBuilder,
table_names: StringVectorBuilder,
table_types: StringVectorBuilder,
table_ids: UInt32VectorBuilder,
engines: StringVectorBuilder,
}
impl InformationSchemaTablesBuilder {
@@ -85,6 +89,8 @@ impl InformationSchemaTablesBuilder {
schema_names: StringVectorBuilder::with_capacity(42),
table_names: StringVectorBuilder::with_capacity(42),
table_types: StringVectorBuilder::with_capacity(42),
table_ids: UInt32VectorBuilder::with_capacity(42),
engines: StringVectorBuilder::with_capacity(42),
}
}
@@ -100,7 +106,15 @@ impl InformationSchemaTablesBuilder {
let Some(schema) = self.catalog_provider.schema(&schema_name)? else { continue };
for table_name in schema.table_names()? {
let Some(table) = schema.table(&table_name).await? else { continue };
self.add_table(&catalog_name, &schema_name, &table_name, table.table_type());
let table_info = table.table_info();
self.add_table(
&catalog_name,
&schema_name,
&table_name,
table.table_type(),
Some(table_info.ident.table_id),
Some(&table_info.meta.engine),
);
}
}
@@ -110,6 +124,8 @@ impl InformationSchemaTablesBuilder {
INFORMATION_SCHEMA_NAME,
TABLES,
TableType::View,
None,
None,
);
self.finish()
@@ -121,6 +137,8 @@ impl InformationSchemaTablesBuilder {
schema_name: &str,
table_name: &str,
table_type: TableType,
table_id: Option<u32>,
engine: Option<&str>,
) {
self.catalog_names.push(Some(catalog_name));
self.schema_names.push(Some(schema_name));
@@ -130,6 +148,8 @@ impl InformationSchemaTablesBuilder {
TableType::View => "VIEW",
TableType::Temporary => "LOCAL TEMPORARY",
}));
self.table_ids.push(table_id);
self.engines.push(engine);
}
fn finish(&mut self) -> Result<RecordBatch> {
@@ -138,6 +158,8 @@ impl InformationSchemaTablesBuilder {
Arc::new(self.schema_names.finish()),
Arc::new(self.table_names.finish()),
Arc::new(self.table_types.finish()),
Arc::new(self.table_ids.finish()),
Arc::new(self.engines.finish()),
];
RecordBatch::new(self.schema.clone(), columns).context(CreateRecordBatchSnafu)
}


@@ -26,7 +26,8 @@ use common_catalog::consts::{
use common_telemetry::{debug, error, info};
use dashmap::DashMap;
use futures::Stream;
use futures_util::StreamExt;
use futures_util::{StreamExt, TryStreamExt};
use key_lock::KeyLock;
use parking_lot::RwLock;
use snafu::{OptionExt, ResultExt};
use table::engine::manager::TableEngineManagerRef;
@@ -37,8 +38,9 @@ use table::TableRef;
use tokio::sync::Mutex;
use crate::error::{
CatalogNotFoundSnafu, CreateTableSnafu, InvalidCatalogValueSnafu, OpenTableSnafu, Result,
SchemaNotFoundSnafu, TableEngineNotFoundSnafu, TableExistsSnafu, UnimplementedSnafu,
CatalogNotFoundSnafu, CreateTableSnafu, InvalidCatalogValueSnafu, OpenTableSnafu,
ParallelOpenTableSnafu, Result, SchemaNotFoundSnafu, TableEngineNotFoundSnafu,
TableExistsSnafu, UnimplementedSnafu,
};
use crate::helper::{
build_catalog_prefix, build_schema_prefix, build_table_global_prefix, CatalogKey, CatalogValue,
@@ -254,15 +256,31 @@ impl RemoteCatalogManager {
) -> Result<()> {
info!("initializing tables in {}.{}", catalog_name, schema_name);
let mut table_num = 0;
let mut tables = self.iter_remote_tables(catalog_name, schema_name).await;
while let Some(r) = tables.next().await {
let (table_key, table_value) = r?;
let table_ref = self.open_or_create_table(&table_key, &table_value).await?;
schema.register_table(table_key.table_name.to_string(), table_ref)?;
info!("Registered table {}", &table_key.table_name);
max_table_id = max_table_id.max(table_value.table_id());
let tables = self.iter_remote_tables(catalog_name, schema_name).await;
let kvs = tables.try_collect::<Vec<_>>().await?;
let node_id = self.node_id;
let joins = kvs
.into_iter()
.map(|(table_key, table_value)| {
let engine_manager = self.engine_manager.clone();
common_runtime::spawn_bg(async move {
open_or_create_table(node_id, engine_manager, &table_key, &table_value).await
})
})
.collect::<Vec<_>>();
let vec = futures::future::join_all(joins).await;
for res in vec {
let table_ref = res.context(ParallelOpenTableSnafu)??;
let table_info = table_ref.table_info();
let table_name = &table_info.name;
let table_id = table_info.ident.table_id;
schema.register_table(table_name.clone(), table_ref)?;
info!("Registered table {}", table_name);
max_table_id = max_table_id.max(table_id);
table_num += 1;
}
info!(
"initialized tables in {}.{}, total: {}",
catalog_name, schema_name, table_num
@@ -310,87 +328,88 @@ impl RemoteCatalogManager {
info!("Created catalog '{catalog_key}");
Ok(catalog_provider)
}
}
async fn open_or_create_table(
&self,
table_key: &TableGlobalKey,
table_value: &TableGlobalValue,
) -> Result<TableRef> {
let context = EngineContext {};
let TableGlobalKey {
catalog_name,
schema_name,
table_name,
..
} = table_key;
async fn open_or_create_table(
node_id: u64,
engine_manager: TableEngineManagerRef,
table_key: &TableGlobalKey,
table_value: &TableGlobalValue,
) -> Result<TableRef> {
let context = EngineContext {};
let TableGlobalKey {
catalog_name,
schema_name,
table_name,
..
} = table_key;
let table_id = table_value.table_id();
let table_id = table_value.table_id();
let TableGlobalValue {
table_info,
regions_id_map,
..
} = table_value;
let TableGlobalValue {
table_info,
regions_id_map,
..
} = table_value;
// unwrap safety: checked in yielding this table when `iter_remote_tables`
let region_numbers = regions_id_map.get(&self.node_id).unwrap();
// unwrap safety: checked in yielding this table when `iter_remote_tables`
let region_numbers = regions_id_map.get(&node_id).unwrap();
let request = OpenTableRequest {
catalog_name: catalog_name.clone(),
schema_name: schema_name.clone(),
table_name: table_name.clone(),
table_id,
};
let engine = self
.engine_manager
let request = OpenTableRequest {
catalog_name: catalog_name.clone(),
schema_name: schema_name.clone(),
table_name: table_name.clone(),
table_id,
};
let engine =
engine_manager
.engine(&table_info.meta.engine)
.context(TableEngineNotFoundSnafu {
engine_name: &table_info.meta.engine,
})?;
match engine
.open_table(&context, request)
.await
.with_context(|_| OpenTableSnafu {
table_info: format!("{catalog_name}.{schema_name}.{table_name}, id:{table_id}"),
})? {
Some(table) => {
info!(
"Table opened: {}.{}.{}",
catalog_name, schema_name, table_name
);
Ok(table)
}
None => {
info!(
"Try create table: {}.{}.{}",
catalog_name, schema_name, table_name
);
match engine
.open_table(&context, request)
.await
.with_context(|_| OpenTableSnafu {
table_info: format!("{catalog_name}.{schema_name}.{table_name}, id:{table_id}"),
})? {
Some(table) => {
info!(
"Table opened: {}.{}.{}",
catalog_name, schema_name, table_name
);
Ok(table)
}
None => {
info!(
"Try create table: {}.{}.{}",
catalog_name, schema_name, table_name
);
let meta = &table_info.meta;
let req = CreateTableRequest {
id: table_id,
catalog_name: catalog_name.clone(),
schema_name: schema_name.clone(),
table_name: table_name.clone(),
desc: None,
schema: meta.schema.clone(),
region_numbers: region_numbers.clone(),
primary_key_indices: meta.primary_key_indices.clone(),
create_if_not_exists: true,
table_options: meta.options.clone(),
engine: engine.name().to_string(),
};
let meta = &table_info.meta;
let req = CreateTableRequest {
id: table_id,
catalog_name: catalog_name.clone(),
schema_name: schema_name.clone(),
table_name: table_name.clone(),
desc: None,
schema: meta.schema.clone(),
region_numbers: region_numbers.clone(),
primary_key_indices: meta.primary_key_indices.clone(),
create_if_not_exists: true,
table_options: meta.options.clone(),
engine: engine.name().to_string(),
};
engine
.create_table(&context, req)
.await
.context(CreateTableSnafu {
table_info: format!(
"{}.{}.{}, id:{}",
&catalog_name, &schema_name, &table_name, table_id
),
})
}
engine
.create_table(&context, req)
.await
.context(CreateTableSnafu {
table_info: format!(
"{}.{}.{}, id:{}",
&catalog_name, &schema_name, &table_name, table_id
),
})
}
}
}
@@ -737,8 +756,8 @@ pub struct RemoteSchemaProvider {
schema_name: String,
node_id: u64,
backend: KvBackendRef,
tables: Arc<ArcSwap<HashMap<String, TableRef>>>,
mutex: Arc<Mutex<()>>,
tables: Arc<ArcSwap<DashMap<String, TableRef>>>,
mutex: Arc<KeyLock<String>>,
}
impl RemoteSchemaProvider {
@@ -775,11 +794,16 @@ impl SchemaProvider for RemoteSchemaProvider {
}
fn table_names(&self) -> Result<Vec<String>> {
Ok(self.tables.load().keys().cloned().collect::<Vec<_>>())
Ok(self
.tables
.load()
.iter()
.map(|en| en.key().clone())
.collect::<Vec<_>>())
}
async fn table(&self, name: &str) -> Result<Option<TableRef>> {
Ok(self.tables.load().get(name).cloned())
Ok(self.tables.load().get(name).map(|en| en.value().clone()))
}
fn register_table(&self, name: String, table: TableRef) -> Result<Option<TableRef>> {
@@ -796,7 +820,7 @@ impl SchemaProvider for RemoteSchemaProvider {
let prev = std::thread::spawn(move || {
common_runtime::block_on_read(async move {
let _guard = mutex.lock().await;
let _guard = mutex.lock(table_key.clone()).await;
backend
.set(
table_key.as_bytes(),
@@ -808,11 +832,8 @@ impl SchemaProvider for RemoteSchemaProvider {
table_key, table_value
);
let prev_tables = tables.load();
let mut new_tables = HashMap::with_capacity(prev_tables.len() + 1);
new_tables.clone_from(&prev_tables);
let prev = new_tables.insert(name, table);
tables.store(Arc::new(new_tables));
let tables = tables.load();
let prev = tables.insert(name, table);
Ok(prev)
})
})
@@ -836,18 +857,15 @@ impl SchemaProvider for RemoteSchemaProvider {
let tables = self.tables.clone();
let prev = std::thread::spawn(move || {
common_runtime::block_on_read(async move {
let _guard = mutex.lock().await;
let _guard = mutex.lock(table_key.clone()).await;
backend.delete(table_key.as_bytes()).await?;
debug!(
"Successfully deleted catalog table entry, key: {}",
table_key
);
let prev_tables = tables.load();
let mut new_tables = HashMap::with_capacity(prev_tables.len() + 1);
new_tables.clone_from(&prev_tables);
let prev = new_tables.remove(&table_name);
tables.store(Arc::new(new_tables));
let tables = tables.load();
let prev = tables.remove(&table_name).map(|en| en.1);
Ok(prev)
})
})


@@ -25,7 +25,7 @@ use arrow_flight::{FlightData, Ticket};
use common_error::prelude::*;
use common_grpc::flight::{flight_messages_to_recordbatches, FlightDecoder, FlightMessage};
use common_query::Output;
use common_telemetry::logging;
use common_telemetry::{logging, timer};
use futures_util::{TryFutureExt, TryStreamExt};
use prost::Message;
use snafu::{ensure, ResultExt};
@@ -33,7 +33,7 @@ use snafu::{ensure, ResultExt};
use crate::error::{
ConvertFlightDataSnafu, IllegalDatabaseResponseSnafu, IllegalFlightMessagesSnafu,
};
use crate::{error, Client, Result};
use crate::{error, metrics, Client, Result};
#[derive(Clone, Debug, Default)]
pub struct Database {
@@ -107,6 +107,7 @@ impl Database {
}
pub async fn insert(&self, request: InsertRequest) -> Result<u32> {
let _timer = timer!(metrics::METRIC_GRPC_INSERT);
let mut client = self.client.make_database_client()?.inner;
let request = GreptimeRequest {
header: Some(RequestHeader {
@@ -130,6 +131,7 @@ impl Database {
}
pub async fn sql(&self, sql: &str) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_SQL);
self.do_get(Request::Query(QueryRequest {
query: Some(Query::Sql(sql.to_string())),
}))
@@ -137,6 +139,7 @@ impl Database {
}
pub async fn logical_plan(&self, logical_plan: Vec<u8>) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_LOGICAL_PLAN);
self.do_get(Request::Query(QueryRequest {
query: Some(Query::LogicalPlan(logical_plan)),
}))
@@ -150,6 +153,7 @@ impl Database {
end: &str,
step: &str,
) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_PROMQL_RANGE_QUERY);
self.do_get(Request::Query(QueryRequest {
query: Some(Query::PromRangeQuery(PromRangeQuery {
query: promql.to_string(),
@@ -162,6 +166,7 @@ impl Database {
}
pub async fn create(&self, expr: CreateTableExpr) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_CREATE_TABLE);
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::CreateTable(expr)),
}))
@@ -169,6 +174,7 @@ impl Database {
}
pub async fn alter(&self, expr: AlterExpr) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_ALTER);
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::Alter(expr)),
}))
@@ -176,6 +182,7 @@ impl Database {
}
pub async fn drop_table(&self, expr: DropTableExpr) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_DROP_TABLE);
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::DropTable(expr)),
}))
@@ -183,6 +190,7 @@ impl Database {
}
pub async fn flush_table(&self, expr: FlushTableExpr) -> Result<Output> {
let _timer = timer!(metrics::METRIC_GRPC_FLUSH_TABLE);
self.do_get(Request::Ddl(DdlRequest {
expr: Some(DdlExpr::FlushTable(expr)),
}))
@@ -190,6 +198,8 @@ impl Database {
}
async fn do_get(&self, request: Request) -> Result<Output> {
// FIXME(paomian): should be added some labels for metrics
let _timer = timer!(metrics::METRIC_GRPC_DO_GET);
let request = GreptimeRequest {
header: Some(RequestHeader {
catalog: self.catalog.clone(),


@@ -16,6 +16,7 @@ mod client;
mod database;
mod error;
pub mod load_balance;
mod metrics;
pub use api;
pub use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};

src/client/src/metrics.rs (new file, 24 lines)

@@ -0,0 +1,24 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! client metrics
pub const METRIC_GRPC_CREATE_TABLE: &str = "grpc.create_table";
pub const METRIC_GRPC_PROMQL_RANGE_QUERY: &str = "grpc.promql.range_query";
pub const METRIC_GRPC_INSERT: &str = "grpc.insert";
pub const METRIC_GRPC_SQL: &str = "grpc.sql";
pub const METRIC_GRPC_LOGICAL_PLAN: &str = "grpc.logical_plan";
pub const METRIC_GRPC_ALTER: &str = "grpc.alter";
pub const METRIC_GRPC_DROP_TABLE: &str = "grpc.drop_table";
pub const METRIC_GRPC_FLUSH_TABLE: &str = "grpc.flush_table";
pub const METRIC_GRPC_DO_GET: &str = "grpc.do_get";


@@ -16,9 +16,11 @@ async-compression = { version = "0.3", features = [
"tokio",
] }
async-trait.workspace = true
bytes = "1.1"
common-error = { path = "../error" }
common-runtime = { path = "../runtime" }
datafusion.workspace = true
derive_builder = "0.12"
futures.workspace = true
object-store = { path = "../../object-store" }
regex = "1.7"


@@ -13,10 +13,14 @@
// limitations under the License.
use std::fmt::Display;
use std::io;
use std::str::FromStr;
use async_compression::tokio::bufread::{BzDecoder, GzipDecoder, XzDecoder, ZstdDecoder};
use bytes::Bytes;
use futures::Stream;
use tokio::io::{AsyncRead, BufReader};
use tokio_util::io::{ReaderStream, StreamReader};
use crate::error::{self, Error, Result};
#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]
@@ -81,4 +85,25 @@ impl CompressionType {
CompressionType::UNCOMPRESSED => Box::new(s),
}
}
pub fn convert_stream<T: Stream<Item = io::Result<Bytes>> + Unpin + Send + 'static>(
&self,
s: T,
) -> Box<dyn Stream<Item = io::Result<Bytes>> + Send + Unpin> {
match self {
CompressionType::GZIP => {
Box::new(ReaderStream::new(GzipDecoder::new(StreamReader::new(s))))
}
CompressionType::BZIP2 => {
Box::new(ReaderStream::new(BzDecoder::new(StreamReader::new(s))))
}
CompressionType::XZ => {
Box::new(ReaderStream::new(XzDecoder::new(StreamReader::new(s))))
}
CompressionType::ZSTD => {
Box::new(ReaderStream::new(ZstdDecoder::new(StreamReader::new(s))))
}
CompressionType::UNCOMPRESSED => Box::new(s),
}
}
}


@@ -89,6 +89,19 @@ pub enum Error {
location: Location,
source: tokio::task::JoinError,
},
#[snafu(display("Failed to parse format {} with value: {}", key, value))]
ParseFormat {
key: &'static str,
value: String,
location: Location,
},
#[snafu(display("Failed to merge schema: {}", source))]
MergeSchema {
source: arrow_schema::ArrowError,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -109,7 +122,9 @@ impl ErrorExt for Error {
| InvalidPath { .. }
| InferSchema { .. }
| ReadParquetSnafu { .. }
| ParquetToSchema { .. } => StatusCode::InvalidArguments,
| ParquetToSchema { .. }
| ParseFormat { .. }
| MergeSchema { .. } => StatusCode::InvalidArguments,
Decompression { .. } | JoinHandle { .. } => StatusCode::Unexpected,
}
@@ -130,6 +145,8 @@ impl ErrorExt for Error {
ParquetToSchema { location, .. } => Some(*location),
Decompression { location, .. } => Some(*location),
JoinHandle { location, .. } => Some(*location),
ParseFormat { location, .. } => Some(*location),
MergeSchema { location, .. } => Some(*location),
UnsupportedBackendProtocol { .. }
| EmptyHostPath { .. }


@@ -15,16 +15,112 @@
pub mod csv;
pub mod json;
pub mod parquet;
#[cfg(test)]
pub mod tests;
pub const DEFAULT_SCHEMA_INFER_MAX_RECORD: usize = 1000;
use arrow::datatypes::SchemaRef;
use std::result;
use std::sync::Arc;
use std::task::Poll;
use arrow::record_batch::RecordBatch;
use arrow_schema::{ArrowError, Schema};
use async_trait::async_trait;
use bytes::{Buf, Bytes};
use datafusion::error::{DataFusionError, Result as DataFusionResult};
use datafusion::physical_plan::file_format::FileOpenFuture;
use futures::StreamExt;
use object_store::ObjectStore;
use crate::compression::CompressionType;
use crate::error::Result;
pub const FORMAT_COMPRESSION_TYPE: &str = "COMPRESSION_TYPE";
pub const FORMAT_DELIMTERL: &str = "DELIMTERL";
pub const FORMAT_SCHEMA_INFER_MAX_RECORD: &str = "SCHEMA_INFER_MAX_RECORD";
pub const FORMAT_HAS_HEADER: &str = "FORMAT_HAS_HEADER";
#[async_trait]
pub trait FileFormat: Send + Sync + std::fmt::Debug {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<SchemaRef>;
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<Schema>;
}
pub trait ArrowDecoder: Send + 'static {
/// Decode records from `buf` returning the number of bytes read.
///
/// This method returns `Ok(0)` once `batch_size` objects have been parsed since the
/// last call to [`Self::flush`], or `buf` is exhausted.
///
/// Any remaining bytes should be included in the next call to [`Self::decode`].
fn decode(&mut self, buf: &[u8]) -> result::Result<usize, ArrowError>;
/// Flushes the currently buffered data to a [`RecordBatch`].
///
/// This should only be called after [`Self::decode`] has returned `Ok(0)`,
/// otherwise may return an error if part way through decoding a record
///
/// Returns `Ok(None)` if no buffered data.
fn flush(&mut self) -> result::Result<Option<RecordBatch>, ArrowError>;
}
impl ArrowDecoder for arrow::csv::reader::Decoder {
fn decode(&mut self, buf: &[u8]) -> result::Result<usize, ArrowError> {
self.decode(buf)
}
fn flush(&mut self) -> result::Result<Option<RecordBatch>, ArrowError> {
self.flush()
}
}
impl ArrowDecoder for arrow::json::RawDecoder {
fn decode(&mut self, buf: &[u8]) -> result::Result<usize, ArrowError> {
self.decode(buf)
}
fn flush(&mut self) -> result::Result<Option<RecordBatch>, ArrowError> {
self.flush()
}
}
pub fn open_with_decoder<T: ArrowDecoder, F: Fn() -> DataFusionResult<T>>(
object_store: Arc<ObjectStore>,
path: String,
compression_type: CompressionType,
decoder_factory: F,
) -> DataFusionResult<FileOpenFuture> {
let mut decoder = decoder_factory()?;
Ok(Box::pin(async move {
let reader = object_store
.reader(&path)
.await
.map_err(|e| DataFusionError::External(Box::new(e)))?;
let mut upstream = compression_type.convert_stream(reader).fuse();
let mut buffered = Bytes::new();
let stream = futures::stream::poll_fn(move |cx| {
loop {
if buffered.is_empty() {
if let Some(result) = futures::ready!(upstream.poll_next_unpin(cx)) {
buffered = result?;
};
}
let decoded = decoder.decode(buffered.as_ref())?;
if decoded == 0 {
break;
} else {
buffered.advance(decoded);
}
}
Poll::Ready(decoder.flush().transpose())
});
Ok(stream.boxed())
}))
}


@@ -12,21 +12,27 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::str::FromStr;
use std::sync::Arc;
use arrow::csv;
use arrow::csv::reader::infer_reader_schema as infer_csv_schema;
use arrow_schema::SchemaRef;
use arrow_schema::{Schema, SchemaRef};
use async_trait::async_trait;
use common_runtime;
use datafusion::error::Result as DataFusionResult;
use datafusion::physical_plan::file_format::{FileMeta, FileOpenFuture, FileOpener};
use derive_builder::Builder;
use object_store::ObjectStore;
use snafu::ResultExt;
use tokio_util::io::SyncIoBridge;
use crate::compression::CompressionType;
use crate::error::{self, Result};
use crate::file_format::{self, FileFormat};
use crate::file_format::{self, open_with_decoder, FileFormat};
#[derive(Debug)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct CsvFormat {
pub has_header: bool,
pub delimiter: u8,
@@ -34,6 +40,49 @@ pub struct CsvFormat {
pub compression_type: CompressionType,
}
impl TryFrom<&HashMap<String, String>> for CsvFormat {
type Error = error::Error;
fn try_from(value: &HashMap<String, String>) -> Result<Self> {
let mut format = CsvFormat::default();
if let Some(delimiter) = value.get(file_format::FORMAT_DELIMTERL) {
// TODO(weny): considers to support parse like "\t" (not only b'\t')
format.delimiter = u8::from_str(delimiter).map_err(|_| {
error::ParseFormatSnafu {
key: file_format::FORMAT_DELIMTERL,
value: delimiter,
}
.build()
})?;
};
if let Some(compression_type) = value.get(file_format::FORMAT_COMPRESSION_TYPE) {
format.compression_type = CompressionType::from_str(compression_type)?;
};
if let Some(schema_infer_max_record) =
value.get(file_format::FORMAT_SCHEMA_INFER_MAX_RECORD)
{
format.schema_infer_max_record =
Some(schema_infer_max_record.parse::<usize>().map_err(|_| {
error::ParseFormatSnafu {
key: file_format::FORMAT_SCHEMA_INFER_MAX_RECORD,
value: schema_infer_max_record,
}
.build()
})?);
};
if let Some(has_header) = value.get(file_format::FORMAT_HAS_HEADER) {
format.has_header = has_header.parse().map_err(|_| {
error::ParseFormatSnafu {
key: file_format::FORMAT_HAS_HEADER,
value: has_header,
}
.build()
})?;
}
Ok(format)
}
}
impl Default for CsvFormat {
fn default() -> Self {
Self {
@@ -45,9 +94,70 @@ impl Default for CsvFormat {
}
}
#[derive(Debug, Clone, Builder)]
pub struct CsvConfig {
batch_size: usize,
file_schema: SchemaRef,
#[builder(default = "None")]
file_projection: Option<Vec<usize>>,
#[builder(default = "true")]
has_header: bool,
#[builder(default = "b','")]
delimiter: u8,
}
impl CsvConfig {
fn builder(&self) -> csv::ReaderBuilder {
let mut builder = csv::ReaderBuilder::new()
.with_schema(self.file_schema.clone())
.with_delimiter(self.delimiter)
.with_batch_size(self.batch_size)
.has_header(self.has_header);
if let Some(proj) = &self.file_projection {
builder = builder.with_projection(proj.clone());
}
builder
}
}
#[derive(Debug, Clone)]
pub struct CsvOpener {
config: Arc<CsvConfig>,
object_store: Arc<ObjectStore>,
compression_type: CompressionType,
}
impl CsvOpener {
/// Return a new [`CsvOpener`]. The caller must ensure [`CsvConfig`].file_schema must correspond to the opening file.
pub fn new(
config: CsvConfig,
object_store: ObjectStore,
compression_type: CompressionType,
) -> Self {
CsvOpener {
config: Arc::new(config),
object_store: Arc::new(object_store),
compression_type,
}
}
}
impl FileOpener for CsvOpener {
fn open(&self, meta: FileMeta) -> DataFusionResult<FileOpenFuture> {
open_with_decoder(
self.object_store.clone(),
meta.location().to_string(),
self.compression_type,
|| Ok(self.config.builder().build_decoder()),
)
}
}
#[async_trait]
impl FileFormat for CsvFormat {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<SchemaRef> {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<Schema> {
let reader = store
.reader(&path)
.await
@@ -65,8 +175,7 @@ impl FileFormat for CsvFormat {
let (schema, _records_read) =
infer_csv_schema(reader, delimiter, schema_infer_max_record, has_header)
.context(error::InferSchemaSnafu { path: &path })?;
Ok(Arc::new(schema))
Ok(schema)
})
.await
.context(error::JoinHandleSnafu)?
@@ -77,7 +186,10 @@ impl FileFormat for CsvFormat {
mod tests {
use super::*;
use crate::file_format::FileFormat;
use crate::file_format::{
FileFormat, FORMAT_COMPRESSION_TYPE, FORMAT_DELIMTERL, FORMAT_HAS_HEADER,
FORMAT_SCHEMA_INFER_MAX_RECORD,
};
use crate::test_util::{self, format_schema, test_store};
fn test_data_root() -> String {
@@ -155,4 +267,33 @@ mod tests {
formatted
);
}
#[test]
fn test_try_from() {
let mut map = HashMap::new();
let format: CsvFormat = CsvFormat::try_from(&map).unwrap();
assert_eq!(format, CsvFormat::default());
map.insert(
FORMAT_SCHEMA_INFER_MAX_RECORD.to_string(),
"2000".to_string(),
);
map.insert(FORMAT_COMPRESSION_TYPE.to_string(), "zstd".to_string());
map.insert(FORMAT_DELIMTERL.to_string(), b'\t'.to_string());
map.insert(FORMAT_HAS_HEADER.to_string(), "false".to_string());
let format = CsvFormat::try_from(&map).unwrap();
assert_eq!(
format,
CsvFormat {
compression_type: CompressionType::ZSTD,
schema_infer_max_record: Some(2000),
delimiter: b'\t',
has_header: false,
}
);
}
}
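A minimal sketch of how these pieces might be wired together, parsing a CsvFormat from user options and reusing it for the reader configuration; the batch size and the surrounding schema/store/options bindings are illustrative assumptions:
fn build_csv_opener(
    schema: SchemaRef,
    store: ObjectStore,
    options: &HashMap<String, String>,
) -> Result<CsvOpener> {
    // Parse delimiter, header flag, compression, etc. from the option map.
    let format = CsvFormat::try_from(options)?;
    let config = CsvConfigBuilder::default()
        .batch_size(8192) // illustrative batch size
        .file_schema(schema)
        .has_header(format.has_header)
        .delimiter(format.delimiter)
        .build()
        .expect("required CsvConfig fields are set");
    // The opener decompresses (if needed) and decodes CSV bytes into record batches.
    Ok(CsvOpener::new(config, store, format.compression_type))
}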

View File

@@ -12,27 +12,57 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::collections::HashMap;
use std::io::BufReader;
use std::str::FromStr;
use std::sync::Arc;
use arrow::datatypes::SchemaRef;
use arrow::json::reader::{infer_json_schema_from_iterator, ValueIter};
use arrow::json::RawReaderBuilder;
use arrow_schema::Schema;
use async_trait::async_trait;
use common_runtime;
use datafusion::error::{DataFusionError, Result as DataFusionResult};
use datafusion::physical_plan::file_format::{FileMeta, FileOpenFuture, FileOpener};
use object_store::ObjectStore;
use snafu::ResultExt;
use tokio_util::io::SyncIoBridge;
use crate::compression::CompressionType;
use crate::error::{self, Result};
use crate::file_format::{self, FileFormat};
use crate::file_format::{self, open_with_decoder, FileFormat};
#[derive(Debug)]
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
pub struct JsonFormat {
pub schema_infer_max_record: Option<usize>,
pub compression_type: CompressionType,
}
impl TryFrom<&HashMap<String, String>> for JsonFormat {
type Error = error::Error;
fn try_from(value: &HashMap<String, String>) -> Result<Self> {
let mut format = JsonFormat::default();
if let Some(compression_type) = value.get(file_format::FORMAT_COMPRESSION_TYPE) {
format.compression_type = CompressionType::from_str(compression_type)?
};
if let Some(schema_infer_max_record) =
value.get(file_format::FORMAT_SCHEMA_INFER_MAX_RECORD)
{
format.schema_infer_max_record =
Some(schema_infer_max_record.parse::<usize>().map_err(|_| {
error::ParseFormatSnafu {
key: file_format::FORMAT_SCHEMA_INFER_MAX_RECORD,
value: schema_infer_max_record,
}
.build()
})?);
};
Ok(format)
}
}
impl Default for JsonFormat {
fn default() -> Self {
Self {
@@ -44,7 +74,7 @@ impl Default for JsonFormat {
#[async_trait]
impl FileFormat for JsonFormat {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<SchemaRef> {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<Schema> {
let reader = store
.reader(&path)
.await
@@ -62,17 +92,58 @@ impl FileFormat for JsonFormat {
let schema = infer_json_schema_from_iterator(iter)
.context(error::InferSchemaSnafu { path: &path })?;
Ok(Arc::new(schema))
Ok(schema)
})
.await
.context(error::JoinHandleSnafu)?
}
}
#[derive(Debug, Clone)]
pub struct JsonOpener {
batch_size: usize,
projected_schema: SchemaRef,
object_store: Arc<ObjectStore>,
compression_type: CompressionType,
}
impl JsonOpener {
/// Return a new [`JsonOpener`]. Any fields not present in `projected_schema` will be ignored.
pub fn new(
batch_size: usize,
projected_schema: SchemaRef,
object_store: ObjectStore,
compression_type: CompressionType,
) -> Self {
Self {
batch_size,
projected_schema,
object_store: Arc::new(object_store),
compression_type,
}
}
}
impl FileOpener for JsonOpener {
fn open(&self, meta: FileMeta) -> DataFusionResult<FileOpenFuture> {
open_with_decoder(
self.object_store.clone(),
meta.location().to_string(),
self.compression_type,
|| {
RawReaderBuilder::new(self.projected_schema.clone())
.with_batch_size(self.batch_size)
.build_decoder()
.map_err(DataFusionError::from)
},
)
}
}
#[cfg(test)]
mod tests {
use super::*;
use crate::file_format::FileFormat;
use crate::file_format::{FileFormat, FORMAT_COMPRESSION_TYPE, FORMAT_SCHEMA_INFER_MAX_RECORD};
use crate::test_util::{self, format_schema, test_store};
fn test_data_root() -> String {
@@ -118,4 +189,29 @@ mod tests {
formatted
);
}
#[test]
fn test_try_from() {
let mut map = HashMap::new();
let format = JsonFormat::try_from(&map).unwrap();
assert_eq!(format, JsonFormat::default());
map.insert(
FORMAT_SCHEMA_INFER_MAX_RECORD.to_string(),
"2000".to_string(),
);
map.insert(FORMAT_COMPRESSION_TYPE.to_string(), "zstd".to_string());
let format = JsonFormat::try_from(&map).unwrap();
assert_eq!(
format,
JsonFormat {
compression_type: CompressionType::ZSTD,
schema_infer_max_record: Some(2000),
}
);
}
}
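Likewise, a short sketch of feeding a parsed JsonFormat into a JsonOpener; the batch size and the surrounding bindings are illustrative assumptions:
fn build_json_opener(
    schema: SchemaRef,
    store: ObjectStore,
    options: &HashMap<String, String>,
) -> Result<JsonOpener> {
    let format = JsonFormat::try_from(options)?;
    // Only the compression type carries over; `schema` is the caller's projected schema.
    Ok(JsonOpener::new(4096, schema, store, format.compression_type))
}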

View File

@@ -12,9 +12,7 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use arrow_schema::SchemaRef;
use arrow_schema::Schema;
use async_trait::async_trait;
use datafusion::parquet::arrow::async_reader::AsyncFileReader;
use datafusion::parquet::arrow::parquet_to_arrow_schema;
@@ -29,7 +27,7 @@ pub struct ParquetFormat {}
#[async_trait]
impl FileFormat for ParquetFormat {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<SchemaRef> {
async fn infer_schema(&self, store: &ObjectStore, path: String) -> Result<Schema> {
let mut reader = store
.reader(&path)
.await
@@ -47,7 +45,7 @@ impl FileFormat for ParquetFormat {
)
.context(error::ParquetToSchemaSnafu)?;
Ok(Arc::new(schema))
Ok(schema)
}
}

View File

@@ -0,0 +1,161 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::vec;
use arrow_schema::SchemaRef;
use datafusion::assert_batches_eq;
use datafusion::datasource::listing::PartitionedFile;
use datafusion::datasource::object_store::ObjectStoreUrl;
use datafusion::physical_plan::file_format::{FileOpener, FileScanConfig, FileStream};
use datafusion::physical_plan::metrics::ExecutionPlanMetricsSet;
use futures::StreamExt;
use crate::compression::CompressionType;
use crate::file_format::csv::{CsvConfigBuilder, CsvOpener};
use crate::file_format::json::JsonOpener;
use crate::test_util::{self, test_basic_schema, test_store};
fn scan_config(file_schema: SchemaRef, limit: Option<usize>, filename: &str) -> FileScanConfig {
FileScanConfig {
object_store_url: ObjectStoreUrl::parse("empty://").unwrap(), // won't be used
file_schema,
file_groups: vec![vec![PartitionedFile::new(filename.to_string(), 10)]],
statistics: Default::default(),
projection: None,
limit,
table_partition_cols: vec![],
output_ordering: None,
infinite_source: false,
}
}
struct Test<'a, T: FileOpener> {
config: FileScanConfig,
opener: T,
expected: Vec<&'a str>,
}
impl<'a, T: FileOpener> Test<'a, T> {
pub async fn run(self) {
let result = FileStream::new(
&self.config,
0,
self.opener,
&ExecutionPlanMetricsSet::new(),
)
.unwrap()
.map(|b| b.unwrap())
.collect::<Vec<_>>()
.await;
assert_batches_eq!(self.expected, &result);
}
}
#[tokio::test]
async fn test_json_opener() {
let store = test_store("/");
let schema = test_basic_schema();
let json_opener = JsonOpener::new(
100,
schema.clone(),
store.clone(),
CompressionType::UNCOMPRESSED,
);
let path = &test_util::get_data_dir("tests/json/basic.json")
.display()
.to_string();
let tests = [
Test {
config: scan_config(schema.clone(), None, path),
opener: json_opener.clone(),
expected: vec![
"+-----+-------+",
"| num | str |",
"+-----+-------+",
"| 5 | test |",
"| 2 | hello |",
"| 4 | foo |",
"+-----+-------+",
],
},
Test {
config: scan_config(schema.clone(), Some(1), path),
opener: json_opener.clone(),
expected: vec![
"+-----+------+",
"| num | str |",
"+-----+------+",
"| 5 | test |",
"+-----+------+",
],
},
];
for test in tests {
test.run().await;
}
}
#[tokio::test]
async fn test_csv_opener() {
let store = test_store("/");
let schema = test_basic_schema();
let path = &test_util::get_data_dir("tests/csv/basic.csv")
.display()
.to_string();
let csv_conf = CsvConfigBuilder::default()
.batch_size(test_util::TEST_BATCH_SIZE)
.file_schema(schema.clone())
.build()
.unwrap();
let csv_opener = CsvOpener::new(csv_conf, store, CompressionType::UNCOMPRESSED);
let tests = [
Test {
config: scan_config(schema.clone(), None, path),
opener: csv_opener.clone(),
expected: vec![
"+-----+-------+",
"| num | str |",
"+-----+-------+",
"| 5 | test |",
"| 2 | hello |",
"| 4 | foo |",
"+-----+-------+",
],
},
Test {
config: scan_config(schema.clone(), Some(1), path),
opener: csv_opener.clone(),
expected: vec![
"+-----+------+",
"| num | str |",
"+-----+------+",
"| 5 | test |",
"+-----+------+",
],
},
];
for test in tests {
test.run().await;
}
}

View File

@@ -17,5 +17,6 @@ pub mod error;
pub mod file_format;
pub mod lister;
pub mod object_store;
#[cfg(test)]
pub mod test_util;
pub mod util;

View File

@@ -27,7 +27,7 @@ use crate::error::{self, Result};
pub const FS_SCHEMA: &str = "FS";
pub const S3_SCHEMA: &str = "S3";
/// parse url returns (schema,Option<host>,path)
/// Returns (schema, Option<host>, path)
pub fn parse_url(url: &str) -> Result<(String, Option<String>, String)> {
let parsed_url = Url::parse(url);
match parsed_url {
@@ -43,7 +43,7 @@ pub fn parse_url(url: &str) -> Result<(String, Option<String>, String)> {
}
}
pub fn build_backend(url: &str, connection: HashMap<String, String>) -> Result<ObjectStore> {
pub fn build_backend(url: &str, connection: &HashMap<String, String>) -> Result<ObjectStore> {
let (schema, host, _path) = parse_url(url)?;
match schema.to_uppercase().as_str() {

View File

@@ -30,7 +30,7 @@ const ENABLE_VIRTUAL_HOST_STYLE: &str = "ENABLE_VIRTUAL_HOST_STYLE";
pub fn build_s3_backend(
host: &str,
path: &str,
connection: HashMap<String, String>,
connection: &HashMap<String, String>,
) -> Result<ObjectStore> {
let mut builder = S3::default();

View File

@@ -13,11 +13,14 @@
// limitations under the License.
use std::path::PathBuf;
use std::sync::Arc;
use arrow_schema::SchemaRef;
use arrow_schema::{DataType, Field, Schema, SchemaRef};
use object_store::services::Fs;
use object_store::ObjectStore;
pub const TEST_BATCH_SIZE: usize = 100;
pub fn get_data_dir(path: &str) -> PathBuf {
// https://doc.rust-lang.org/cargo/reference/environment-variables.html
let dir = env!("CARGO_MANIFEST_DIR");
@@ -25,7 +28,7 @@ pub fn get_data_dir(path: &str) -> PathBuf {
PathBuf::from(dir).join(path)
}
pub fn format_schema(schema: SchemaRef) -> Vec<String> {
pub fn format_schema(schema: Schema) -> Vec<String> {
schema
.fields()
.iter()
@@ -46,3 +49,11 @@ pub fn test_store(root: &str) -> ObjectStore {
ObjectStore::new(builder).unwrap().finish()
}
pub fn test_basic_schema() -> SchemaRef {
let schema = Schema::new(vec![
Field::new("num", DataType::Int64, false),
Field::new("str", DataType::Utf8, false),
]);
Arc::new(schema)
}

View File

@@ -0,0 +1,3 @@
{"num":5,"str":"test"}
{"num":2,"str":"hello"}
{"num":4,"str":"foo"}

View File

@@ -8,7 +8,7 @@ license.workspace = true
async-trait.workspace = true
common-error = { path = "../error" }
common-telemetry = { path = "../telemetry" }
metrics = "0.20"
metrics.workspace = true
once_cell = "1.12"
paste.workspace = true
snafu.workspace = true

View File

@@ -14,16 +14,18 @@ common-error = { path = "../error" }
common-telemetry = { path = "../telemetry" }
datafusion.workspace = true
datafusion-expr.workspace = true
datafusion-substrait.workspace = true
datatypes = { path = "../../datatypes" }
futures = "0.3"
prost.workspace = true
session = { path = "../../session" }
snafu.workspace = true
table = { path = "../../table" }
query = { path = "../../query" }
[dependencies.substrait_proto]
package = "substrait"
version = "0.4"
version = "0.7"
[dev-dependencies]
datatypes = { path = "../../datatypes" }

View File

@@ -49,10 +49,10 @@ use crate::error::{
use crate::schema::{from_schema, to_schema};
use crate::SubstraitPlan;
pub struct DFLogicalSubstraitConvertor;
pub struct DFLogicalSubstraitConvertorDeprecated;
#[async_trait]
impl SubstraitPlan for DFLogicalSubstraitConvertor {
impl SubstraitPlan for DFLogicalSubstraitConvertorDeprecated {
type Error = Error;
type Plan = LogicalPlan;
@@ -76,7 +76,7 @@ impl SubstraitPlan for DFLogicalSubstraitConvertor {
}
}
impl DFLogicalSubstraitConvertor {
impl DFLogicalSubstraitConvertorDeprecated {
async fn convert_plan(
&self,
mut plan: Plan,
@@ -197,6 +197,14 @@ impl DFLogicalSubstraitConvertor {
name: "Cross Relation",
}
.fail()?,
RelType::HashJoin(_) => UnsupportedPlanSnafu {
name: "Hash Join Relation",
}
.fail()?,
RelType::MergeJoin(_) => UnsupportedPlanSnafu {
name: "Merge Join Relation",
}
.fail()?,
};
Ok(logical_plan)
@@ -311,7 +319,7 @@ impl DFLogicalSubstraitConvertor {
}
}
impl DFLogicalSubstraitConvertor {
impl DFLogicalSubstraitConvertorDeprecated {
fn logical_plan_to_rel(
&self,
ctx: &mut ConvertorContext,
@@ -585,7 +593,7 @@ mod test {
}
async fn logical_plan_round_trip(plan: LogicalPlan, catalog: CatalogManagerRef) {
let convertor = DFLogicalSubstraitConvertor;
let convertor = DFLogicalSubstraitConvertorDeprecated;
let proto = convertor.encode(plan.clone()).unwrap();
let tripped_plan = convertor.decode(proto, catalog).await.unwrap();

View File

@@ -0,0 +1,72 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::sync::Arc;
use async_trait::async_trait;
use bytes::{Buf, Bytes, BytesMut};
use catalog::datafusion::catalog_adapter::DfCatalogProviderAdapter;
use catalog::CatalogManagerRef;
use common_catalog::consts::DEFAULT_CATALOG_NAME;
use datafusion::prelude::SessionContext;
use datafusion_expr::LogicalPlan;
use datafusion_substrait::logical_plan::consumer::from_substrait_plan;
use datafusion_substrait::logical_plan::producer::to_substrait_plan;
use prost::Message;
use snafu::ResultExt;
use substrait_proto::proto::Plan;
use crate::error::{DecodeDfPlanSnafu, DecodeRelSnafu, EncodeDfPlanSnafu, EncodeRelSnafu, Error};
use crate::SubstraitPlan;
pub struct DFLogicalSubstraitConvertor;
#[async_trait]
impl SubstraitPlan for DFLogicalSubstraitConvertor {
type Error = Error;
type Plan = LogicalPlan;
async fn decode<B: Buf + Send>(
&self,
message: B,
catalog_manager: CatalogManagerRef,
) -> Result<Self::Plan, Self::Error> {
let mut context = SessionContext::new();
context.register_catalog(
DEFAULT_CATALOG_NAME,
Arc::new(DfCatalogProviderAdapter::new(
catalog_manager
.catalog(DEFAULT_CATALOG_NAME)
.unwrap()
.unwrap(),
)),
);
let plan = Plan::decode(message).context(DecodeRelSnafu)?;
let df_plan = from_substrait_plan(&mut context, &plan)
.await
.context(DecodeDfPlanSnafu)?;
Ok(df_plan)
}
fn encode(&self, plan: Self::Plan) -> Result<Bytes, Self::Error> {
let mut buf = BytesMut::new();
let substrait_plan = to_substrait_plan(&plan).context(EncodeDfPlanSnafu)?;
substrait_plan.encode(&mut buf).context(EncodeRelSnafu)?;
Ok(buf.freeze())
}
}
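A hedged round-trip sketch of the new convertor, mirroring the logical_plan_round_trip helper in the deprecated module; the plan and catalog manager are assumed to be supplied by the caller:
async fn round_trip(
    plan: LogicalPlan,
    catalog_manager: CatalogManagerRef,
) -> Result<LogicalPlan, Error> {
    let convertor = DFLogicalSubstraitConvertor;
    // Encode via datafusion-substrait's producer, then decode the bytes back into a plan.
    let encoded = convertor.encode(plan)?;
    convertor.decode(encoded, catalog_manager).await
}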

View File

@@ -109,6 +109,18 @@ pub enum Error {
#[snafu(backtrace)]
source: catalog::error::Error,
},
#[snafu(display("Failed to encode DataFusion plan, source: {}", source))]
EncodeDfPlan {
source: datafusion::error::DataFusionError,
location: Location,
},
#[snafu(display("Failed to decode DataFusion plan, source: {}", source))]
DecodeDfPlan {
source: datafusion::error::DataFusionError,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -129,7 +141,10 @@ impl ErrorExt for Error {
| Error::InvalidParameters { .. }
| Error::TableNotFound { .. }
| Error::SchemaNotMatch { .. } => StatusCode::InvalidArguments,
Error::DFInternal { .. } | Error::Internal { .. } => StatusCode::Internal,
Error::DFInternal { .. }
| Error::Internal { .. }
| Error::EncodeDfPlan { .. }
| Error::DecodeDfPlan { .. } => StatusCode::Internal,
Error::ConvertDfSchema { source } => source.status_code(),
Error::ResolveTable { source, .. } => source.status_code(),
}

View File

@@ -18,6 +18,7 @@
mod context;
mod df_expr;
mod df_logical;
mod df_substrait;
pub mod error;
mod schema;
mod types;
@@ -26,7 +27,7 @@ use async_trait::async_trait;
use bytes::{Buf, Bytes};
use catalog::CatalogManagerRef;
pub use crate::df_logical::DFLogicalSubstraitConvertor;
pub use crate::df_substrait::DFLogicalSubstraitConvertor;
#[async_trait]
pub trait SubstraitPlan {

View File

@@ -12,8 +12,8 @@ deadlock_detection = ["parking_lot"]
backtrace = "0.3"
common-error = { path = "../error" }
console-subscriber = { version = "0.1", optional = true }
metrics = "0.20.1"
metrics-exporter-prometheus = { version = "0.11", default-features = false }
metrics-exporter-prometheus = { git = "https://github.com/GreptimeTeam/metrics.git", rev = "174de287e9f7f9f57c0272be56c95df156489476", default-features = false }
metrics.workspace = true
once_cell = "1.10"
opentelemetry = { version = "0.17", default-features = false, features = [
"trace",

View File

@@ -32,7 +32,9 @@ pub fn init_default_metrics_recorder() {
/// Init prometheus recorder.
fn init_prometheus_recorder() {
let recorder = PrometheusBuilder::new().build_recorder();
let recorder = PrometheusBuilder::new()
.add_global_prefix("greptime".to_string())
.build_recorder();
let mut h = PROMETHEUS_HANDLE.as_ref().write().unwrap();
*h = Some(recorder.handle());
// TODO(LFC): separate metrics for testing and metrics for production
@@ -60,6 +62,7 @@ pub fn try_handle() -> Option<PrometheusHandle> {
pub struct Timer {
start: Instant,
name: &'static str,
labels: Vec<(String, String)>,
}
impl Timer {
@@ -67,9 +70,25 @@ impl Timer {
Self {
start: Instant::now(),
name,
labels: Vec::new(),
}
}
pub fn new_with_labels<S: Into<String> + Clone>(name: &'static str, labels: &[(S, S)]) -> Self {
Self {
start: Instant::now(),
name,
labels: labels
.iter()
.map(|(k, v)| (k.clone().into(), v.clone().into()))
.collect::<Vec<_>>(),
}
}
pub fn labels_mut(&mut self) -> &mut Vec<(String, String)> {
self.labels.as_mut()
}
pub fn elapsed(&self) -> Duration {
self.start.elapsed()
}
@@ -77,7 +96,11 @@ impl Timer {
impl Drop for Timer {
fn drop(&mut self) {
histogram!(self.name, self.start.elapsed());
if !self.labels.is_empty() {
histogram!(self.name, self.start.elapsed(), &self.labels);
} else {
histogram!(self.name, self.start.elapsed());
}
}
}
@@ -86,6 +109,9 @@ macro_rules! timer {
($name: expr) => {
$crate::metric::Timer::new($name)
};
($name:expr,$labels:expr) => {
$crate::metric::Timer::new_with_labels($name, $labels)
};
}
#[cfg(test)]
@@ -109,4 +135,48 @@ mod tests {
assert!(text.contains("test_elapsed_timer_a"));
assert!(text.contains("test_elapsed_timer_b"));
}
#[test]
fn test_elapsed_timer_with_label() {
init_default_metrics_recorder();
{
let t = Timer::new("test_elapsed_timer_a");
drop(t);
}
let handle = try_handle().unwrap();
let text = handle.render();
assert!(text.contains("test_elapsed_timer_a"));
assert!(!text.contains("test_elapsed_timer_b"));
let label_a = "label_a";
let label_b = "label_b";
let label_c = "label_c";
let label_d = "label_d";
let label_e = "label_e";
assert!(!text.contains(label_a));
assert!(!text.contains(label_b));
{
let mut timer_b = timer!("test_elapsed_timer_b", &[(label_a, "a"), (label_b, "b")]);
let labels = timer_b.labels_mut();
labels.push((label_c.to_owned(), "d".to_owned()));
}
let text = handle.render();
assert!(text.contains("test_elapsed_timer_a"));
assert!(text.contains("test_elapsed_timer_b"));
assert!(text.contains(label_a));
assert!(text.contains(label_b));
assert!(text.contains(label_c));
{
let mut timer_c = timer!("test_elapsed_timer_c");
let labels = timer_c.labels_mut();
labels.push((label_d.to_owned(), "d".to_owned()));
labels.push((label_e.to_owned(), "e".to_owned()));
}
let text = handle.render();
assert!(text.contains("test_elapsed_timer_c"));
assert!(text.contains(label_d));
assert!(text.contains(label_e));
}
}
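A small usage sketch of the labeled timer; the metric name and label key are illustrative, and the histogram sample (with its labels, under the global "greptime" prefix) is recorded when the timer drops:
fn handle_query(table: &str) {
    // Recorded on drop as a histogram sample carrying the `table` label.
    let _timer = timer!("query_elapsed", &[("table", table)]);
    // ... run the query ...
}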

View File

@@ -38,7 +38,7 @@ log = "0.4"
log-store = { path = "../log-store" }
meta-client = { path = "../meta-client" }
meta-srv = { path = "../meta-srv", features = ["mock"] }
metrics = "0.20"
metrics.workspace = true
mito = { path = "../mito", features = ["test"] }
object-store = { path = "../object-store" }
pin-project = "1.0"

View File

@@ -14,14 +14,11 @@
use std::any::Any;
use common_datasource::error::Error as DataSourceError;
use common_error::prelude::*;
use common_procedure::ProcedureId;
use datafusion::parquet;
use snafu::Location;
use storage::error::Error as StorageError;
use table::error::Error as TableError;
use url::ParseError;
use crate::datanode::ObjectStoreConfig;
@@ -208,18 +205,6 @@ pub enum Error {
location: Location,
},
#[snafu(display("Failed to build backend, source: {}", source))]
BuildBackend {
#[snafu(backtrace)]
source: DataSourceError,
},
#[snafu(display("Failed to parse url, source: {}", source))]
ParseUrl {
source: DataSourceError,
location: Location,
},
#[snafu(display("Runtime resource error, source: {}", source))]
RuntimeResource {
#[snafu(backtrace)]
@@ -229,36 +214,6 @@ pub enum Error {
#[snafu(display("Invalid SQL, error: {}", msg))]
InvalidSql { msg: String },
#[snafu(display("Invalid url: {}, error :{}", url, source))]
InvalidUrl { url: String, source: ParseError },
#[snafu(display("Invalid filepath: {}", path))]
InvalidPath { path: String },
#[snafu(display("Invalid connection: {}", msg))]
InvalidConnection { msg: String },
#[snafu(display("Unsupported backend protocol: {}", protocol))]
UnsupportedBackendProtocol { protocol: String },
#[snafu(display("Failed to regex, source: {}", source))]
BuildRegex {
location: Location,
source: regex::Error,
},
#[snafu(display("Failed to list objects, source: {}", source))]
ListObjects {
#[snafu(backtrace)]
source: DataSourceError,
},
#[snafu(display("Failed to parse the data, source: {}", source))]
ParseDataTypes {
#[snafu(backtrace)]
source: common_recordbatch::error::Error,
},
#[snafu(display("Not support SQL, error: {}", msg))]
NotSupportSql { msg: String },
@@ -409,63 +364,6 @@ pub enum Error {
source: query::error::Error,
},
#[snafu(display("Failed to copy data from table: {}, source: {}", table_name, source))]
CopyTable {
table_name: String,
#[snafu(backtrace)]
source: TableError,
},
#[snafu(display("Failed to execute table scan, source: {}", source))]
TableScanExec {
#[snafu(backtrace)]
source: common_query::error::Error,
},
#[snafu(display(
"File schema mismatch at index {}, expected table schema: {} but found: {}",
index,
table_schema,
file_schema
))]
InvalidSchema {
index: usize,
table_schema: String,
file_schema: String,
},
#[snafu(display("Failed to read parquet file, source: {}", source))]
ReadParquet {
source: parquet::errors::ParquetError,
location: Location,
},
#[snafu(display("Failed to poll stream, source: {}", source))]
PollStream {
source: datafusion_common::DataFusionError,
location: Location,
},
#[snafu(display("Failed to build parquet record batch stream, source: {}", source))]
BuildParquetRecordBatchStream {
location: Location,
source: parquet::errors::ParquetError,
},
#[snafu(display("Failed to read object in path: {}, source: {}", path, source))]
ReadObject {
path: String,
location: Location,
source: object_store::Error,
},
#[snafu(display("Failed to write object into path: {}, source: {}", path, source))]
WriteObject {
path: String,
location: Location,
source: object_store::Error,
},
#[snafu(display("Unrecognized table option: {}", source))]
UnrecognizedTableOption {
#[snafu(backtrace)]
@@ -509,12 +407,6 @@ pub enum Error {
#[snafu(backtrace)]
source: BoxedError,
},
#[snafu(display("Failed to copy table to parquet file, source: {}", source))]
WriteParquet {
#[snafu(backtrace)]
source: storage::error::Error,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -554,11 +446,6 @@ impl ErrorExt for Error {
ColumnValuesNumberMismatch { .. }
| InvalidSql { .. }
| InvalidUrl { .. }
| InvalidPath { .. }
| InvalidConnection { .. }
| UnsupportedBackendProtocol { .. }
| BuildRegex { .. }
| NotSupportSql { .. }
| KeyColumnNotFound { .. }
| IllegalPrimaryKeysDef { .. }
@@ -572,8 +459,7 @@ impl ErrorExt for Error {
| DatabaseNotFound { .. }
| MissingNodeId { .. }
| MissingMetasrvOpts { .. }
| ColumnNoneDefaultValue { .. }
| ParseUrl { .. } => StatusCode::InvalidArguments,
| ColumnNoneDefaultValue { .. } => StatusCode::InvalidArguments,
// TODO(yingwen): Further categorize http error.
StartServer { .. }
@@ -586,22 +472,13 @@ impl ErrorExt for Error {
| RenameTable { .. }
| Catalog { .. }
| MissingRequiredField { .. }
| BuildParquetRecordBatchStream { .. }
| InvalidSchema { .. }
| ParseDataTypes { .. }
| IncorrectInternalState { .. }
| ShutdownServer { .. }
| ShutdownInstance { .. }
| CloseTableEngine { .. } => StatusCode::Internal,
BuildBackend { .. }
| InitBackend { .. }
| ReadParquet { .. }
| WriteParquet { .. }
| PollStream { .. }
| ReadObject { .. }
| WriteObject { .. }
| ListObjects { .. } => StatusCode::StorageUnavailable,
InitBackend { .. } => StatusCode::StorageUnavailable,
OpenLogStore { source } => source.status_code(),
OpenStorageEngine { source } => source.status_code(),
RuntimeResource { .. } => StatusCode::RuntimeResourcesExhausted,
@@ -609,8 +486,6 @@ impl ErrorExt for Error {
TableIdProviderNotFound { .. } => StatusCode::Unsupported,
BumpTableId { source, .. } => source.status_code(),
ColumnDefaultValue { source, .. } => source.status_code(),
CopyTable { source, .. } => source.status_code(),
TableScanExec { source, .. } => source.status_code(),
UnrecognizedTableOption { .. } => StatusCode::InvalidArguments,
RecoverProcedure { source, .. } | SubmitProcedure { source, .. } => {
source.status_code()

View File

@@ -25,10 +25,9 @@ use query::query_engine::SqlStatementExecutor;
use session::context::QueryContextRef;
use snafu::prelude::*;
use sql::ast::ObjectName;
use sql::statements::copy::{CopyTable, CopyTableArgument};
use sql::statements::statement::Statement;
use table::engine::TableReference;
use table::requests::{CopyDirection, CopyTableRequest, CreateDatabaseRequest, DropTableRequest};
use table::requests::{CreateDatabaseRequest, DropTableRequest};
use crate::error::{
self, BumpTableIdSnafu, ExecuteSqlSnafu, ExecuteStatementSnafu, NotSupportSqlSnafu,
@@ -105,54 +104,6 @@ impl Instance {
.execute(SqlRequest::DropTable(req), query_ctx)
.await
}
Statement::Copy(copy_table) => {
let req = match copy_table {
CopyTable::To(copy_table) => {
let CopyTableArgument {
location,
connection,
pattern,
table_name,
..
} = copy_table;
let (catalog_name, schema_name, table_name) =
table_idents_to_full_name(&table_name, query_ctx.clone())?;
CopyTableRequest {
catalog_name,
schema_name,
table_name,
location,
connection,
pattern,
direction: CopyDirection::Export,
}
}
CopyTable::From(copy_table) => {
let CopyTableArgument {
location,
connection,
pattern,
table_name,
..
} = copy_table;
let (catalog_name, schema_name, table_name) =
table_idents_to_full_name(&table_name, query_ctx.clone())?;
CopyTableRequest {
catalog_name,
schema_name,
table_name,
location,
connection,
pattern,
direction: CopyDirection::Import,
}
}
};
self.sql_handler
.execute(SqlRequest::CopyTable(req), query_ctx)
.await
}
_ => NotSupportSqlSnafu {
msg: format!("not supported to execute {stmt:?}"),
}

View File

@@ -32,8 +32,6 @@ use crate::error::{
use crate::instance::sql::table_idents_to_full_name;
mod alter;
mod copy_table_from;
mod copy_table_to;
mod create;
mod drop_table;
mod flush_table;
@@ -46,7 +44,6 @@ pub enum SqlRequest {
Alter(AlterTableRequest),
DropTable(DropTableRequest),
FlushTable(FlushTableRequest),
CopyTable(CopyTableRequest),
}
// Handler to execute SQL except query
@@ -85,10 +82,6 @@ impl SqlHandler {
SqlRequest::CreateDatabase(req) => self.create_database(req, query_ctx.clone()).await,
SqlRequest::Alter(req) => self.alter(req).await,
SqlRequest::DropTable(req) => self.drop_table(req).await,
SqlRequest::CopyTable(req) => match req.direction {
CopyDirection::Export => self.copy_table_to(req).await,
CopyDirection::Import => self.copy_table_from(req).await,
},
SqlRequest::FlushTable(req) => self.flush_table(req).await,
};
if let Err(e) = &result {

View File

@@ -10,6 +10,7 @@ python = ["dep:script"]
[dependencies]
api = { path = "../api" }
async-compat = "0.2"
async-stream.workspace = true
async-trait = "0.1"
catalog = { path = "../catalog" }
@@ -17,6 +18,7 @@ chrono.workspace = true
client = { path = "../client" }
common-base = { path = "../common/base" }
common-catalog = { path = "../common/catalog" }
common-datasource = { path = "../common/datasource" }
common-error = { path = "../common/error" }
common-function = { path = "../common/function" }
common-grpc = { path = "../common/grpc" }
@@ -36,10 +38,12 @@ itertools = "0.10"
meta-client = { path = "../meta-client" }
mito = { path = "../mito", features = ["test"] }
moka = { version = "0.9", features = ["future"] }
object-store = { path = "../object-store" }
openmetrics-parser = "0.4"
partition = { path = "../partition" }
prost.workspace = true
query = { path = "../query" }
regex = "1.6"
script = { path = "../script", features = ["python"], optional = true }
serde = "1.0"
serde_json = "1.0"
@@ -47,6 +51,7 @@ servers = { path = "../servers" }
session = { path = "../session" }
snafu.workspace = true
sql = { path = "../sql" }
storage = { path = "../storage" }
store-api = { path = "../store-api" }
substrait = { path = "../common/substrait" }
table = { path = "../table" }

View File

@@ -15,6 +15,7 @@
use std::any::Any;
use common_error::prelude::*;
use datafusion::parquet;
use snafu::Location;
use store_api::storage::RegionId;
@@ -310,13 +311,8 @@ pub enum Error {
source: common_grpc_expr::error::Error,
},
#[snafu(display(
"Failed to build default value, column: {}, source: {}",
column,
source
))]
ColumnDefaultValue {
column: String,
#[snafu(display("Failed to convert into vectors, source: {}", source))]
IntoVectors {
#[snafu(backtrace)]
source: datatypes::error::Error,
},
@@ -370,6 +366,92 @@ pub enum Error {
#[snafu(backtrace)]
source: script::error::Error,
},
#[snafu(display("Failed to build regex, source: {}", source))]
BuildRegex {
location: Location,
source: regex::Error,
},
#[snafu(display("Failed to copy table: {}, source: {}", table_name, source))]
CopyTable {
table_name: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display(
"Failed to insert value into table: {}, source: {}",
table_name,
source
))]
Insert {
table_name: String,
#[snafu(backtrace)]
source: table::error::Error,
},
#[snafu(display("Failed to execute table scan, source: {}", source))]
TableScanExec {
#[snafu(backtrace)]
source: common_query::error::Error,
},
#[snafu(display("Failed to parse data source url, source: {}", source))]
ParseUrl {
#[snafu(backtrace)]
source: common_datasource::error::Error,
},
#[snafu(display("Failed to build data source backend, source: {}", source))]
BuildBackend {
#[snafu(backtrace)]
source: common_datasource::error::Error,
},
#[snafu(display("Failed to list objects, source: {}", source))]
ListObjects {
#[snafu(backtrace)]
source: common_datasource::error::Error,
},
#[snafu(display("Failed to read object in path: {}, source: {}", path, source))]
ReadObject {
path: String,
location: Location,
source: object_store::Error,
},
#[snafu(display("Failed to read parquet file, source: {}", source))]
ReadParquet {
source: parquet::errors::ParquetError,
location: Location,
},
#[snafu(display("Failed to build parquet record batch stream, source: {}", source))]
BuildParquetRecordBatchStream {
location: Location,
source: parquet::errors::ParquetError,
},
#[snafu(display("Failed to write parquet file, source: {}", source))]
WriteParquet {
#[snafu(backtrace)]
source: storage::error::Error,
},
#[snafu(display(
"Schema datatypes not match at index {}, expected table schema: {}, actual file schema: {}",
index,
table_schema,
file_schema
))]
InvalidSchema {
index: usize,
table_schema: String,
file_schema: String,
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
@@ -388,7 +470,9 @@ impl ErrorExt for Error {
| Error::MissingInsertValues { .. }
| Error::PrimaryKeyNotFound { .. }
| Error::MissingMetasrvOpts { .. }
| Error::ColumnNoneDefaultValue { .. } => StatusCode::InvalidArguments,
| Error::ColumnNoneDefaultValue { .. }
| Error::BuildRegex { .. }
| Error::InvalidSchema { .. } => StatusCode::InvalidArguments,
Error::NotSupported { .. } => StatusCode::Unsupported,
@@ -401,10 +485,13 @@ impl ErrorExt for Error {
Error::ParseSql { source } => source.status_code(),
Error::Table { source } => source.status_code(),
Error::Table { source }
| Error::CopyTable { source, .. }
| Error::Insert { source, .. } => source.status_code(),
Error::ConvertColumnDefaultConstraint { source, .. }
| Error::CreateTableInfo { source } => source.status_code(),
| Error::CreateTableInfo { source }
| Error::IntoVectors { source } => source.status_code(),
Error::RequestDatanode { source } => source.status_code(),
@@ -447,7 +534,6 @@ impl ErrorExt for Error {
Error::TableAlreadyExist { .. } => StatusCode::TableAlreadyExists,
Error::EncodeSubstraitLogicalPlan { source } => source.status_code(),
Error::InvokeDatanode { source } => source.status_code(),
Error::ColumnDefaultValue { source, .. } => source.status_code(),
Error::External { source } => source.status_code(),
Error::DeserializePartition { source, .. } | Error::FindTableRoute { source, .. } => {
@@ -456,6 +542,18 @@ impl ErrorExt for Error {
Error::UnrecognizedTableOption { .. } => StatusCode::InvalidArguments,
Error::StartScriptManager { source } => source.status_code(),
Error::TableScanExec { source, .. } => source.status_code(),
Error::ReadObject { .. }
| Error::ReadParquet { .. }
| Error::BuildParquetRecordBatchStream { .. } => StatusCode::StorageUnavailable,
Error::ListObjects { source }
| Error::ParseUrl { source }
| Error::BuildBackend { source } => source.status_code(),
Error::WriteParquet { source, .. } => source.status_code(),
}
}

View File

@@ -12,23 +12,31 @@
// See the License for the specific language governing permissions and
// limitations under the License.
mod copy_table_from;
mod copy_table_to;
mod describe;
mod show;
mod tql;
use catalog::CatalogManagerRef;
use common_error::prelude::BoxedError;
use common_query::Output;
use common_recordbatch::RecordBatches;
use datanode::instance::sql::table_idents_to_full_name;
use query::parser::QueryStatement;
use query::query_engine::SqlStatementExecutorRef;
use query::QueryEngineRef;
use session::context::QueryContextRef;
use snafu::{ensure, ResultExt};
use snafu::{ensure, OptionExt, ResultExt};
use sql::statements::copy::{CopyTable, CopyTableArgument};
use sql::statements::statement::Statement;
use table::engine::TableReference;
use table::requests::{CopyDirection, CopyTableRequest};
use table::TableRef;
use crate::error::{
CatalogSnafu, ExecLogicalPlanSnafu, ExecuteStatementSnafu, PlanStatementSnafu, Result,
SchemaNotFoundSnafu,
CatalogSnafu, ExecLogicalPlanSnafu, ExecuteStatementSnafu, ExternalSnafu, PlanStatementSnafu,
Result, SchemaNotFoundSnafu, TableNotFoundSnafu,
};
#[derive(Clone)]
@@ -84,13 +92,20 @@ impl StatementExecutor {
Statement::ShowTables(stmt) => self.show_tables(stmt, query_ctx),
Statement::Copy(stmt) => {
let req = to_copy_table_request(stmt, query_ctx)?;
match req.direction {
CopyDirection::Export => self.copy_table_to(req).await,
CopyDirection::Import => self.copy_table_from(req).await,
}
}
Statement::CreateDatabase(_)
| Statement::CreateTable(_)
| Statement::CreateExternalTable(_)
| Statement::Insert(_)
| Statement::Alter(_)
| Statement::DropTable(_)
| Statement::Copy(_)
| Statement::ShowCreateTable(_) => self
.sql_stmt_executor
.execute_sql(stmt, query_ctx)
@@ -99,11 +114,7 @@ impl StatementExecutor {
}
}
pub(crate) async fn plan_exec(
&self,
stmt: QueryStatement,
query_ctx: QueryContextRef,
) -> Result<Output> {
async fn plan_exec(&self, stmt: QueryStatement, query_ctx: QueryContextRef) -> Result<Output> {
let planner = self.query_engine.planner();
let plan = planner
.plan(stmt, query_ctx.clone())
@@ -129,4 +140,50 @@ impl StatementExecutor {
Ok(Output::RecordBatches(RecordBatches::empty()))
}
async fn get_table(&self, table_ref: &TableReference<'_>) -> Result<TableRef> {
let TableReference {
catalog,
schema,
table,
} = table_ref;
self.catalog_manager
.table(catalog, schema, table)
.await
.context(CatalogSnafu)?
.with_context(|| TableNotFoundSnafu {
table_name: table_ref.to_string(),
})
}
}
fn to_copy_table_request(stmt: CopyTable, query_ctx: QueryContextRef) -> Result<CopyTableRequest> {
let direction = match stmt {
CopyTable::To(_) => CopyDirection::Export,
CopyTable::From(_) => CopyDirection::Import,
};
let CopyTableArgument {
location,
connection,
pattern,
table_name,
..
} = match stmt {
CopyTable::To(arg) => arg,
CopyTable::From(arg) => arg,
};
let (catalog_name, schema_name, table_name) = table_idents_to_full_name(&table_name, query_ctx)
.map_err(BoxedError::new)
.context(ExternalSnafu)?;
Ok(CopyTableRequest {
catalog_name,
schema_name,
table_name,
location,
connection,
pattern,
direction,
})
}

View File

@@ -21,7 +21,6 @@ use common_datasource::lister::{Lister, Source};
use common_datasource::object_store::{build_backend, parse_url};
use common_datasource::util::find_dir_and_filename;
use common_query::Output;
use common_recordbatch::error::DataTypesSnafu;
use datafusion::parquet::arrow::ParquetRecordBatchStreamBuilder;
use datatypes::arrow::datatypes::{DataType, SchemaRef};
use datatypes::vectors::Helper;
@@ -32,10 +31,10 @@ use table::engine::TableReference;
use table::requests::{CopyTableRequest, InsertRequest};
use tokio::io::BufReader;
use crate::error::{self, ParseDataTypesSnafu, Result};
use crate::sql::SqlHandler;
use crate::error::{self, IntoVectorsSnafu, Result};
use crate::statement::StatementExecutor;
impl SqlHandler {
impl StatementExecutor {
pub(crate) async fn copy_table_from(&self, req: CopyTableRequest) -> Result<Output> {
let table_ref = TableReference {
catalog: &req.catalog_name,
@@ -47,7 +46,7 @@ impl SqlHandler {
let (_schema, _host, path) = parse_url(&req.location).context(error::ParseUrlSnafu)?;
let object_store =
build_backend(&req.location, req.connection).context(error::BuildBackendSnafu)?;
build_backend(&req.location, &req.connection).context(error::BuildBackendSnafu)?;
let (dir, filename) = find_dir_and_filename(&path);
let regex = req
@@ -102,9 +101,8 @@ impl SqlHandler {
while let Some(r) = stream.next().await {
let record_batch = r.context(error::ReadParquetSnafu)?;
let vectors = Helper::try_into_vectors(record_batch.columns())
.context(DataTypesSnafu)
.context(ParseDataTypesSnafu)?;
let vectors =
Helper::try_into_vectors(record_batch.columns()).context(IntoVectorsSnafu)?;
pending_mem_size += vectors.iter().map(|v| v.memory_size()).sum::<usize>();

View File

@@ -12,7 +12,6 @@
// See the License for the specific language governing permissions and
// limitations under the License.
use common_datasource;
use common_datasource::object_store::{build_backend, parse_url};
use common_query::physical_plan::SessionContext;
use common_query::Output;
@@ -23,9 +22,9 @@ use table::engine::TableReference;
use table::requests::CopyTableRequest;
use crate::error::{self, Result, WriteParquetSnafu};
use crate::sql::SqlHandler;
use crate::statement::StatementExecutor;
impl SqlHandler {
impl StatementExecutor {
pub(crate) async fn copy_table_to(&self, req: CopyTableRequest) -> Result<Output> {
let table_ref = TableReference {
catalog: &req.catalog_name,
@@ -47,7 +46,7 @@ impl SqlHandler {
let (_schema, _host, path) = parse_url(&req.location).context(error::ParseUrlSnafu)?;
let object_store =
build_backend(&req.location, req.connection).context(error::BuildBackendSnafu)?;
build_backend(&req.location, &req.connection).context(error::BuildBackendSnafu)?;
let writer = ParquetWriter::new(&path, Source::Stream(stream), object_store);

View File

@@ -770,7 +770,8 @@ mod test {
options: SortOptions::default(),
}],
Arc::new(merge),
);
)
.with_fetch(None);
assert_eq!(sort.output_partitioning().partition_count(), 1);
let session_ctx = SessionContext::new();

View File

@@ -302,7 +302,7 @@ async fn test_execute_insert_query_with_i64_timestamp(instance: Arc<dyn MockInst
}
}
#[apply(standalone_instance_case)]
#[apply(both_instances_cases)]
async fn test_execute_query(instance: Arc<dyn MockInstance>) {
let instance = instance.frontend();
@@ -765,7 +765,7 @@ async fn test_delete(instance: Arc<dyn MockInstance>) {
check_output_stream(output, expect).await;
}
#[apply(standalone_instance_case)]
#[apply(both_instances_cases)]
async fn test_execute_copy_to_s3(instance: Arc<dyn MockInstance>) {
if let Ok(bucket) = env::var("GT_S3_BUCKET") {
if !bucket.is_empty() {
@@ -803,7 +803,7 @@ async fn test_execute_copy_to_s3(instance: Arc<dyn MockInstance>) {
}
}
#[apply(standalone_instance_case)]
#[apply(both_instances_cases)]
async fn test_execute_copy_from_s3(instance: Arc<dyn MockInstance>) {
logging::init_default_ut_logging();
if let Ok(bucket) = env::var("GT_S3_BUCKET") {
@@ -904,6 +904,7 @@ async fn test_execute_copy_from_s3(instance: Arc<dyn MockInstance>) {
#[apply(both_instances_cases)]
async fn test_information_schema(instance: Arc<dyn MockInstance>) {
let is_distributed_mode = instance.is_distributed_mode();
let instance = instance.frontend();
let sql = "create table another_table(i bigint time index)";
@@ -913,27 +914,55 @@ async fn test_information_schema(instance: Arc<dyn MockInstance>) {
// User can only see information schema under current catalog.
// A necessary requirement to GreptimeCloud.
let sql = "select table_catalog, table_schema, table_name, table_type from information_schema.tables where table_type != 'SYSTEM VIEW' order by table_name";
let sql = "select table_catalog, table_schema, table_name, table_type, table_id, engine from information_schema.tables where table_type != 'SYSTEM VIEW' order by table_name";
let output = execute_sql(&instance, sql).await;
let expected = "\
+---------------+--------------------+------------+------------+
| table_catalog | table_schema | table_name | table_type |
+---------------+--------------------+------------+------------+
| greptime | public | numbers | BASE TABLE |
| greptime | public | scripts | BASE TABLE |
| greptime | information_schema | tables | VIEW |
+---------------+--------------------+------------+------------+";
let expected = match is_distributed_mode {
true => {
"\
+---------------+--------------------+------------+------------+----------+-------------+
| table_catalog | table_schema | table_name | table_type | table_id | engine |
+---------------+--------------------+------------+------------+----------+-------------+
| greptime | public | numbers | BASE TABLE | 1 | test_engine |
| greptime | public | scripts | BASE TABLE | 1024 | mito |
| greptime | information_schema | tables | VIEW | | |
+---------------+--------------------+------------+------------+----------+-------------+"
}
false => {
"\
+---------------+--------------------+------------+------------+----------+-------------+
| table_catalog | table_schema | table_name | table_type | table_id | engine |
+---------------+--------------------+------------+------------+----------+-------------+
| greptime | public | numbers | BASE TABLE | 1 | test_engine |
| greptime | public | scripts | BASE TABLE | 1 | mito |
| greptime | information_schema | tables | VIEW | | |
+---------------+--------------------+------------+------------+----------+-------------+"
}
};
check_output_stream(output, expected).await;
let output = execute_sql_with(&instance, sql, query_ctx).await;
let expected = "\
+-----------------+--------------------+---------------+------------+
| table_catalog | table_schema | table_name | table_type |
+-----------------+--------------------+---------------+------------+
| another_catalog | another_schema | another_table | BASE TABLE |
| another_catalog | information_schema | tables | VIEW |
+-----------------+--------------------+---------------+------------+";
let expected = match is_distributed_mode {
true => {
"\
+-----------------+--------------------+---------------+------------+----------+--------+
| table_catalog | table_schema | table_name | table_type | table_id | engine |
+-----------------+--------------------+---------------+------------+----------+--------+
| another_catalog | another_schema | another_table | BASE TABLE | 1025 | mito |
| another_catalog | information_schema | tables | VIEW | | |
+-----------------+--------------------+---------------+------------+----------+--------+"
}
false => {
"\
+-----------------+--------------------+---------------+------------+----------+--------+
| table_catalog | table_schema | table_name | table_type | table_id | engine |
+-----------------+--------------------+---------------+------------+----------+--------+
| another_catalog | another_schema | another_table | BASE TABLE | 1024 | mito |
| another_catalog | information_schema | tables | VIEW | | |
+-----------------+--------------------+---------------+------------+----------+--------+"
}
};
check_output_stream(output, expected).await;
}

View File

@@ -54,7 +54,7 @@ async fn create_insert_query_assert(
let query_output = instance
.statement_executor()
.plan_exec(QueryStatement::Promql(eval_stmt), QueryContext::arc())
.execute_stmt(QueryStatement::Promql(eval_stmt), QueryContext::arc())
.await
.unwrap();
check_unordered_output_stream(query_output, expected).await;

View File

@@ -26,18 +26,28 @@ use crate::tests::{
pub(crate) trait MockInstance {
fn frontend(&self) -> Arc<Instance>;
fn is_distributed_mode(&self) -> bool;
}
impl MockInstance for MockStandaloneInstance {
fn frontend(&self) -> Arc<Instance> {
self.instance.clone()
}
fn is_distributed_mode(&self) -> bool {
false
}
}
impl MockInstance for MockDistributedInstance {
fn frontend(&self) -> Arc<Instance> {
self.frontend.clone()
}
fn is_distributed_mode(&self) -> bool {
true
}
}
pub(crate) async fn standalone() -> Arc<dyn MockInstance> {

View File

@@ -21,10 +21,12 @@ common-query = { path = "../common/query" }
common-recordbatch = { path = "../common/recordbatch" }
common-telemetry = { path = "../common/telemetry" }
common-time = { path = "../common/time" }
dashmap = "5.4"
datafusion.workspace = true
datafusion-common.workspace = true
datatypes = { path = "../datatypes" }
futures.workspace = true
key-lock = "0.1"
log-store = { path = "../log-store" }
object-store = { path = "../object-store" }
serde = { version = "1.0", features = ["derive"] }

View File

@@ -17,7 +17,7 @@ mod procedure;
mod tests;
use std::collections::HashMap;
use std::sync::{Arc, RwLock};
use std::sync::Arc;
use async_trait::async_trait;
pub use common_catalog::consts::MITO_ENGINE;
@@ -26,7 +26,9 @@ use common_error::ext::BoxedError;
use common_procedure::{BoxedProcedure, ProcedureManager};
use common_telemetry::tracing::log::info;
use common_telemetry::{debug, logging};
use dashmap::DashMap;
use datatypes::schema::Schema;
use key_lock::KeyLock;
use object_store::ObjectStore;
use snafu::{ensure, OptionExt, ResultExt};
use store_api::storage::{
@@ -38,14 +40,12 @@ use table::engine::{
region_id, region_name, table_dir, EngineContext, TableEngine, TableEngineProcedure,
TableReference,
};
use table::error::TableOperationSnafu;
use table::metadata::{TableInfo, TableInfoBuilder, TableMetaBuilder, TableType, TableVersion};
use table::requests::{
AlterKind, AlterTableRequest, CreateTableRequest, DropTableRequest, OpenTableRequest,
};
use table::table::{AlterContext, TableRef};
use table::{error as table_error, Result as TableResult, Table};
use tokio::sync::Mutex;
use crate::config::EngineConfig;
use crate::engine::procedure::{AlterMitoTable, CreateMitoTable, DropMitoTable};
@@ -208,12 +208,12 @@ pub(crate) struct MitoEngineInner<S: StorageEngine> {
/// All tables opened by the engine. Map key is formatted [TableReference].
///
/// Writing to `tables` should also hold the `table_mutex`.
tables: RwLock<HashMap<String, Arc<MitoTable<S::Region>>>>,
tables: DashMap<String, Arc<MitoTable<S::Region>>>,
object_store: ObjectStore,
storage_engine: S,
/// Table mutex is used to protect the operations such as creating/opening/closing
/// a table, to avoid things like opening the same table simultaneously.
table_mutex: Mutex<()>,
table_mutex: Arc<KeyLock<String>>,
}
fn build_row_key_desc(
@@ -392,7 +392,7 @@ impl<S: StorageEngine> MitoEngineInner<S> {
let table_dir = table_dir(catalog_name, schema_name, table_id);
let mut regions = HashMap::with_capacity(request.region_numbers.len());
let _lock = self.table_mutex.lock().await;
let _lock = self.table_mutex.lock(table_ref.to_string()).await;
// Checks again, read lock should be enough since we are guarded by the mutex.
if let Some(table) = self.get_table(&table_ref) {
return if request.create_if_not_exists {
@@ -481,10 +481,8 @@ impl<S: StorageEngine> MitoEngineInner<S> {
table_id
);
self.tables
.write()
.unwrap()
.insert(table_ref.to_string(), table.clone());
// already locked
self.tables.insert(table_ref.to_string(), table.clone());
Ok(table)
}
@@ -510,7 +508,9 @@ impl<S: StorageEngine> MitoEngineInner<S> {
// Acquires the mutex before opening a new table.
let table = {
let _lock = self.table_mutex.lock().await;
let table_name_key = table_ref.to_string();
let _lock = self.table_mutex.lock(table_name_key.clone()).await;
// Checks again, read lock should be enough since we are guarded by the mutex.
if let Some(table) = self.get_table(&table_ref) {
return Ok(Some(table));
@@ -523,7 +523,7 @@ impl<S: StorageEngine> MitoEngineInner<S> {
let Some((manifest, table_info)) = self
.recover_table_manifest_and_info(table_name, &table_dir)
.await.map_err(BoxedError::new)
.context(TableOperationSnafu)? else { return Ok(None) };
.context(table_error::TableOperationSnafu)? else { return Ok(None) };
let opts = OpenOptions {
parent_dir: table_dir.to_string(),
@@ -564,10 +564,8 @@ impl<S: StorageEngine> MitoEngineInner<S> {
let table = Arc::new(MitoTable::new(table_info, regions, manifest));
self.tables
.write()
.unwrap()
.insert(table_ref.to_string(), table.clone());
// already locked
self.tables.insert(table_ref.to_string(), table.clone());
Some(table as _)
};
@@ -598,20 +596,15 @@ impl<S: StorageEngine> MitoEngineInner<S> {
fn get_table(&self, table_ref: &TableReference) -> Option<TableRef> {
self.tables
.read()
.unwrap()
.get(&table_ref.to_string())
.cloned()
.map(|table| table as _)
.map(|en| en.value().clone() as _)
}
/// Returns the [MitoTable].
fn get_mito_table(&self, table_ref: &TableReference) -> Option<Arc<MitoTable<S::Region>>> {
self.tables
.read()
.unwrap()
.get(&table_ref.to_string())
.cloned()
.map(|en| en.value().clone())
}
async fn alter_table(&self, _ctx: &EngineContext, req: AlterTableRequest) -> Result<TableRef> {
@@ -650,10 +643,14 @@ impl<S: StorageEngine> MitoEngineInner<S> {
.context(error::AlterTableSnafu { table_name })?;
if let AlterKind::RenameTable { new_table_name } = &req.alter_kind {
let mut tables = self.tables.write().unwrap();
tables.remove(&table_ref.to_string());
let removed = {
let _lock = self.table_mutex.lock(table_ref.to_string()).await;
self.tables.remove(&table_ref.to_string())
};
ensure!(removed.is_some(), error::TableNotFoundSnafu { table_name });
table_ref.table = new_table_name.as_str();
tables.insert(table_ref.to_string(), table.clone());
let _lock = self.table_mutex.lock(table_ref.to_string()).await;
self.tables.insert(table_ref.to_string(), table.clone());
}
Ok(table)
}
@@ -666,35 +663,42 @@ impl<S: StorageEngine> MitoEngineInner<S> {
table: &req.table_name,
};
// todo(ruihang): reclaim persisted data
Ok(self
.tables
.write()
.unwrap()
.remove(&table_reference.to_string())
.is_some())
let _lock = self.table_mutex.lock(table_reference.to_string()).await;
Ok(self.tables.remove(&table_reference.to_string()).is_some())
}
async fn close(&self) -> TableResult<()> {
let _lock = self.table_mutex.lock().await;
let tables = self.tables.write().unwrap().clone();
futures::future::try_join_all(tables.values().map(|t| t.close()))
.await
.map_err(BoxedError::new)
.context(table_error::TableOperationSnafu)?;
futures::future::try_join_all(
self.tables
.iter()
.map(|item| close_table(self.table_mutex.clone(), item.value().clone())),
)
.await
.map_err(BoxedError::new)
.context(table_error::TableOperationSnafu)?;
Ok(())
}
}
async fn close_table(lock: Arc<KeyLock<String>>, table: TableRef) -> TableResult<()> {
let info = table.table_info();
let table_ref = TableReference {
catalog: &info.catalog_name,
schema: &info.schema_name,
table: &info.name,
};
let _lock = lock.lock(table_ref.to_string()).await;
table.close().await
}
impl<S: StorageEngine> MitoEngineInner<S> {
fn new(_config: EngineConfig, storage_engine: S, object_store: ObjectStore) -> Self {
Self {
tables: RwLock::new(HashMap::default()),
tables: DashMap::new(),
storage_engine,
object_store,
table_mutex: Mutex::new(()),
table_mutex: Arc::new(KeyLock::new()),
}
}
}
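The per-table locking pattern that replaces the single engine-wide mutex, sketched using only the KeyLock::lock(key) call shown above; operations on the same table name serialize while different tables proceed concurrently:
async fn with_table_lock<F, R>(lock: Arc<KeyLock<String>>, key: String, op: F) -> R
where
    F: std::future::Future<Output = R>,
{
    // The guard blocks only callers contending on the same key.
    let _guard = lock.lock(key).await;
    op.await
}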

View File

@@ -265,10 +265,17 @@ impl<S: StorageEngine> AlterMitoTable<S> {
// Rename key in tables map.
if let AlterKind::RenameTable { new_table_name } = &self.data.request.alter_kind {
let mut table_ref = self.data.table_ref();
let mut tables = self.engine_inner.tables.write().unwrap();
tables.remove(&table_ref.to_string());
let removed = {
let _lock = self.engine_inner.table_mutex.lock(table_ref.to_string());
self.engine_inner.tables.remove(&table_ref.to_string())
};
ensure!(removed.is_some(), TableNotFoundSnafu { table_name });
table_ref.table = new_table_name.as_str();
tables.insert(table_ref.to_string(), self.table.clone());
let _lock = self.engine_inner.table_mutex.lock(table_ref.to_string());
self.engine_inner
.tables
.insert(table_ref.to_string(), self.table.clone());
}
Ok(Status::Done)

View File

@@ -258,10 +258,9 @@ impl<S: StorageEngine> CreateMitoTable<S> {
{
let table = Arc::new(MitoTable::new(table_info, self.regions.clone(), manifest));
let _lock = self.engine_inner.table_mutex.lock(table_ref.to_string());
self.engine_inner
.tables
.write()
.unwrap()
.insert(table_ref.to_string(), table);
return Ok(Status::Done);
}
@@ -269,10 +268,10 @@ impl<S: StorageEngine> CreateMitoTable<S> {
// We need to persist the table manifest and create the table.
let table = self.write_manifest_and_create_table(&table_dir).await?;
let table = Arc::new(table);
let _lock = self.engine_inner.table_mutex.lock(table_ref.to_string());
self.engine_inner
.tables
.write()
.unwrap()
.insert(table_ref.to_string(), table);
Ok(Status::Done)

View File

@@ -139,11 +139,13 @@ impl<S: StorageEngine> DropMitoTable<S> {
async fn on_close_regions(&mut self) -> Result<Status> {
// Remove the table from the engine to avoid further access from users.
let table_ref = self.data.table_ref();
self.engine_inner
.tables
.write()
.unwrap()
.remove(&table_ref.to_string());
let _lock = self
.engine_inner
.table_mutex
.lock(table_ref.to_string())
.await;
self.engine_inner.tables.remove(&table_ref.to_string());
// Close the table to close all regions. Closing a region is idempotent.
self.table.close().await.map_err(Error::from_error_ext)?;

View File

@@ -9,6 +9,7 @@ lru = "0.9"
async-trait = "0.1"
bytes = "1.4"
futures = { version = "0.3" }
metrics = "0.20"
opendal = { version = "0.30", features = ["layers-tracing", "layers-metrics"] }
pin-project = "1.0"
tokio.workspace = true

View File

@@ -19,12 +19,18 @@ use std::sync::Arc;
use async_trait::async_trait;
use bytes::Bytes;
use lru::LruCache;
use metrics::increment_counter;
use opendal::ops::{OpDelete, OpList, OpRead, OpScan, OpWrite};
use opendal::raw::oio::{Read, Reader, Write};
use opendal::raw::{Accessor, Layer, LayeredAccessor, RpDelete, RpList, RpRead, RpScan, RpWrite};
use opendal::{ErrorKind, Result};
use tokio::sync::Mutex;
use crate::metrics::{
OBJECT_STORE_LRU_CACHE_ERROR, OBJECT_STORE_LRU_CACHE_ERROR_KIND, OBJECT_STORE_LRU_CACHE_HIT,
OBJECT_STORE_LRU_CACHE_MISS,
};
pub struct LruCacheLayer<C> {
cache: Arc<C>,
lru_cache: Arc<Mutex<LruCache<String, ()>>>,
@@ -89,12 +95,16 @@ impl<I: Accessor, C: Accessor> LayeredAccessor for LruCacheAccessor<I, C> {
match self.cache.read(&cache_path, OpRead::default()).await {
Ok((rp, r)) => {
increment_counter!(OBJECT_STORE_LRU_CACHE_HIT);
// update lru when cache hit
let mut lru_cache = lru_cache.lock().await;
lru_cache.get_or_insert(cache_path.clone(), || ());
Ok(to_output_reader((rp, r)))
}
Err(err) if err.kind() == ErrorKind::NotFound => {
increment_counter!(OBJECT_STORE_LRU_CACHE_MISS);
let (rp, mut reader) = self.inner.read(&path, args.clone()).await?;
let size = rp.clone().into_metadata().content_length();
let (_, mut writer) = self.cache.write(&cache_path, OpWrite::new()).await?;
@@ -122,7 +132,10 @@ impl<I: Accessor, C: Accessor> LayeredAccessor for LruCacheAccessor<I, C> {
Err(_) => return self.inner.read(&path, args).await.map(to_output_reader),
}
}
Err(_) => return self.inner.read(&path, args).await.map(to_output_reader),
Err(err) => {
increment_counter!(OBJECT_STORE_LRU_CACHE_ERROR, OBJECT_STORE_LRU_CACHE_ERROR_KIND => format!("{}", err.kind()));
return self.inner.read(&path, args).await.map(to_output_reader);
}
}
}
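
The cache layer now classifies every read as a hit, a miss, or an unexpected error, and bumps the matching counter, labelling the error counter with the ErrorKind. Stripped of the OpenDAL plumbing, the counting pattern looks roughly like this (the reader closures and the error enum are hypothetical stand-ins, not the real accessor types):

use metrics::increment_counter;

// Metric names mirroring the new src/object-store/src/metrics.rs module.
const OBJECT_STORE_LRU_CACHE_HIT: &str = "object_store.lru_cache.hit";
const OBJECT_STORE_LRU_CACHE_MISS: &str = "object_store.lru_cache.miss";
const OBJECT_STORE_LRU_CACHE_ERROR: &str = "object_store.lru_cache.error";

#[derive(Debug)]
enum ReadError {
    NotFound,
    Other,
}

/// Read through a cache, counting hits, misses, and unexpected errors.
/// `read_cache` / `read_inner` stand in for the cache and inner accessors.
fn read_through(
    read_cache: impl Fn(&str) -> Result<Vec<u8>, ReadError>,
    read_inner: impl Fn(&str) -> Result<Vec<u8>, ReadError>,
    path: &str,
) -> Result<Vec<u8>, ReadError> {
    match read_cache(path) {
        Ok(bytes) => {
            increment_counter!(OBJECT_STORE_LRU_CACHE_HIT);
            Ok(bytes)
        }
        Err(ReadError::NotFound) => {
            increment_counter!(OBJECT_STORE_LRU_CACHE_MISS);
            // The real layer also writes the fetched object into the cache here.
            read_inner(path)
        }
        Err(err) => {
            // Label the error counter with the error kind, as the diff does.
            increment_counter!(OBJECT_STORE_LRU_CACHE_ERROR, "error" => format!("{err:?}"));
            read_inner(path)
        }
    }
}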


@@ -20,5 +20,6 @@ pub use opendal::{
};
pub mod cache_policy;
mod metrics;
pub mod test_util;
pub mod util;


@@ -0,0 +1,20 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
//! object-store metrics
pub const OBJECT_STORE_LRU_CACHE_HIT: &str = "object_store.lru_cache.hit";
pub const OBJECT_STORE_LRU_CACHE_MISS: &str = "object_store.lru_cache.miss";
pub const OBJECT_STORE_LRU_CACHE_ERROR: &str = "object_store.lru_cache.error";
pub const OBJECT_STORE_LRU_CACHE_ERROR_KIND: &str = "error";


@@ -16,7 +16,7 @@ use std::env;
use std::sync::Arc;
use anyhow::Result;
use common_telemetry::logging;
use common_telemetry::{logging, metric};
use common_test_util::temp_dir::create_temp_dir;
use object_store::cache_policy::LruCacheLayer;
use object_store::services::{Fs, S3};
@@ -185,6 +185,7 @@ async fn assert_cache_files(
#[tokio::test]
async fn test_object_store_cache_policy() -> Result<()> {
common_telemetry::init_default_ut_logging();
common_telemetry::init_default_metrics_recorder();
// create file storage
let root_dir = create_temp_dir("test_fs_backend");
let store = OperatorBuilder::new(
@@ -258,5 +259,11 @@ async fn test_object_store_cache_policy() -> Result<()> {
)
.await?;
let handle = metric::try_handle().unwrap();
let metric_text = handle.render();
assert!(metric_text.contains("object_store_lru_cache_hit"));
assert!(metric_text.contains("object_store_lru_cache_miss"));
Ok(())
}
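
The updated test installs the default metrics recorder via common_telemetry and asserts on the text rendered from metric::try_handle(). Outside of common_telemetry, the same end-to-end check can be reproduced with the metrics macros plus a Prometheus exporter handle; the crate pairing below (metrics 0.20-era macros with a compatible metrics-exporter-prometheus) is an assumption, not taken from this diff:

use metrics::increment_counter;
use metrics_exporter_prometheus::PrometheusBuilder;

fn main() {
    // Install an in-process Prometheus recorder; the handle renders scrape text.
    let handle = PrometheusBuilder::new()
        .install_recorder()
        .expect("failed to install recorder");

    increment_counter!("object_store.lru_cache.hit");
    increment_counter!("object_store.lru_cache.miss");

    // Dots in metric names are exported as underscores, which is why the test
    // greps for `object_store_lru_cache_hit` rather than the dotted name.
    let rendered = handle.render();
    assert!(rendered.contains("object_store_lru_cache_hit"));
    println!("{rendered}");
}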


@@ -27,7 +27,7 @@ datafusion-sql.workspace = true
datatypes = { path = "../datatypes" }
futures = "0.3"
futures-util.workspace = true
metrics = "0.20"
metrics.workspace = true
once_cell = "1.10"
promql = { path = "../promql" }
promql-parser = "0.1.0"


@@ -46,17 +46,17 @@ futures-util.workspace = true
once_cell = "1.17.0"
paste = { workspace = true, optional = true }
query = { path = "../query" }
# TODO(discord9): This is a forked and tweaked version of RustPython, please update it to newest original RustPython After Update toolchain to 1.65
rustpython-ast = { git = "https://github.com/discord9/RustPython", optional = true, rev = "2e126345" }
rustpython-codegen = { git = "https://github.com/discord9/RustPython", optional = true, rev = "2e126345" }
rustpython-compiler = { git = "https://github.com/discord9/RustPython", optional = true, rev = "2e126345" }
rustpython-compiler-core = { git = "https://github.com/discord9/RustPython", optional = true, rev = "2e126345" }
rustpython-parser = { git = "https://github.com/discord9/RustPython", optional = true, rev = "2e126345" }
rustpython-pylib = { git = "https://github.com/discord9/RustPython", optional = true, rev = "2e126345", features = [
# TODO(discord9): This is a forked and tweaked version of RustPython, please update it to newest original RustPython After RustPython support GC
rustpython-ast = { git = "https://github.com/discord9/RustPython", optional = true, rev = "9ed5137412" }
rustpython-codegen = { git = "https://github.com/discord9/RustPython", optional = true, rev = "9ed5137412" }
rustpython-compiler = { git = "https://github.com/discord9/RustPython", optional = true, rev = "9ed5137412" }
rustpython-compiler-core = { git = "https://github.com/discord9/RustPython", optional = true, rev = "9ed5137412" }
rustpython-parser = { git = "https://github.com/discord9/RustPython", optional = true, rev = "9ed5137412" }
rustpython-pylib = { git = "https://github.com/discord9/RustPython", optional = true, rev = "9ed5137412", features = [
"freeze-stdlib",
] }
rustpython-stdlib = { git = "https://github.com/discord9/RustPython", optional = true, rev = "2e126345" }
rustpython-vm = { git = "https://github.com/discord9/RustPython", optional = true, rev = "2e126345", features = [
rustpython-stdlib = { git = "https://github.com/discord9/RustPython", optional = true, rev = "9ed5137412" }
rustpython-vm = { git = "https://github.com/discord9/RustPython", optional = true, rev = "9ed5137412", features = [
"default",
"codegen",
] }


@@ -20,7 +20,7 @@ use datatypes::error::Error as DataTypeError;
use query::error::Error as QueryError;
use rustpython_codegen::error::CodegenError;
use rustpython_parser::ast::Location;
use rustpython_parser::error::ParseError;
use rustpython_parser::ParseError;
pub use snafu::ensure;
use snafu::prelude::Snafu;
use snafu::Location as SnafuLocation;


@@ -17,7 +17,7 @@ use rustpython_codegen::compile::compile_top;
use rustpython_compiler::{CompileOpts, Mode};
use rustpython_compiler_core::CodeObject;
use rustpython_parser::ast::{ArgData, Located, Location};
use rustpython_parser::{ast, parser};
use rustpython_parser::{ast, parse, Mode as ParseMode};
use snafu::ResultExt;
use crate::fail_parse_error;
@@ -95,8 +95,7 @@ pub fn compile_script(
script: &str,
) -> Result<CodeObject> {
// note that it's important to use `parser::Mode::Interactive` so the ast can be compile to return a result instead of return None in eval mode
let mut top =
parser::parse(script, parser::Mode::Interactive, "<embedded>").context(PyParseSnafu)?;
let mut top = parse(script, ParseMode::Interactive, "<embedded>").context(PyParseSnafu)?;
// erase decorator
if let ast::Mod::Interactive { body } = &mut top {
let stmts = body;


@@ -18,7 +18,7 @@ use std::sync::Arc;
use datatypes::prelude::ConcreteDataType;
use query::QueryEngineRef;
use rustpython_parser::ast::{Arguments, Location};
use rustpython_parser::{ast, parser};
use rustpython_parser::{ast, parse_program};
#[cfg(test)]
use serde::Deserialize;
use snafu::{OptionExt, ResultExt};
@@ -443,7 +443,7 @@ pub fn parse_and_compile_copr(
script: &str,
query_engine: Option<QueryEngineRef>,
) -> Result<Coprocessor> {
let python_ast = parser::parse_program(script, "<embedded>").context(PyParseSnafu)?;
let python_ast = parse_program(script, "<embedded>").context(PyParseSnafu)?;
let mut coprocessor = None;


@@ -38,7 +38,7 @@ use rustpython_vm::{
VirtualMachine,
};
use crate::python::utils::is_instance;
use crate::python::rspython::utils::is_instance;
/// The Main FFI type `PyVector` that is used both in RustPython and PyO3
#[cfg_attr(feature = "pyo3_backend", pyo3class(name = "vector"))]


@@ -155,8 +155,10 @@ fn eval_pyo3(testcase: TestCase, locals: HashMap<String, PyVector>) {
}
fn eval_rspy(testcase: TestCase, locals: HashMap<String, PyVector>) {
vm::Interpreter::without_stdlib(Default::default()).enter(|vm| {
vm::Interpreter::with_init(Default::default(), |vm| {
PyVector::make_class(&vm.ctx);
})
.enter(|vm| {
let scope = vm.new_scope_with_builtins();
locals.into_iter().for_each(|(k, v)| {
scope
@@ -166,10 +168,17 @@ fn eval_rspy(testcase: TestCase, locals: HashMap<String, PyVector>) {
.unwrap();
});
let code_obj = vm
.compile(&testcase.eval, Mode::Eval, "<embedded>".to_owned())
.compile(&testcase.eval, Mode::Eval, "<embedded>".to_string())
.map_err(|err| vm.new_syntax_error(&err))
.unwrap();
let obj = vm.run_code_obj(code_obj, scope).unwrap();
let obj = vm
.run_code_obj(code_obj, scope)
.map_err(|e| {
let mut output = String::new();
vm.write_exception(&mut output, &e).unwrap();
(e, output)
})
.unwrap();
let v = obj.downcast::<PyVector>().unwrap();
let result_arr = v.to_arrow_array();
let expect_arr = testcase.result.to_arrow_array();


@@ -19,7 +19,7 @@ pub(crate) mod vector_impl;
pub(crate) mod builtins;
mod dataframe_impl;
mod utils;
pub(crate) mod utils;
#[cfg(test)]
pub(crate) use copr_impl::init_interpreter;


@@ -30,7 +30,7 @@ use rustpython_vm::builtins::{PyBaseExceptionRef, PyBool, PyFloat, PyInt, PyList
use rustpython_vm::{pymodule, AsObject, PyObjectRef, PyPayload, PyResult, VirtualMachine};
use crate::python::ffi_types::PyVector;
use crate::python::utils::is_instance;
use crate::python::rspython::utils::is_instance;
pub fn init_greptime_builtins(module_name: &str, vm: &mut VirtualMachine) {
vm.add_native_module(
@@ -356,7 +356,7 @@ pub(crate) mod greptime_builtin {
#[pyfunction]
fn col(name: String, vm: &VirtualMachine) -> PyExprRef {
let expr: PyExpr = DfExpr::Column(datafusion_common::Column::from_name(name)).into();
expr.into_ref(vm)
expr.into_ref(&vm.ctx)
}
#[pyfunction]
@@ -366,7 +366,7 @@ pub(crate) mod greptime_builtin {
.try_to_scalar_value(&val.data_type())
.map_err(|e| vm.new_runtime_error(format!("{e}")))?;
let expr: PyExpr = DfExpr::Literal(scalar_val).into();
Ok(expr.into_ref(vm))
Ok(expr.into_ref(&vm.ctx))
}
// the main binding code, due to proc macro things, can't directly use a simpler macro


@@ -33,8 +33,8 @@ use serde::{Deserialize, Serialize};
use super::*;
use crate::python::ffi_types::PyVector;
use crate::python::utils::{format_py_error, is_instance};
use crate::python::rspython::utils::is_instance;
use crate::python::utils::format_py_error;
#[test]
fn convert_scalar_to_py_obj_and_back() {
rustpython_vm::Interpreter::with_init(Default::default(), |vm| {


@@ -213,7 +213,7 @@ pub(crate) fn init_interpreter() -> Arc<Interpreter> {
// so according to this issue:
// https://github.com/RustPython/RustPython/issues/4292
// add this line for stdlib, so rustpython can found stdlib's python part in bytecode format
vm.add_frozen(rustpython_pylib::frozen_stdlib());
vm.add_frozen(rustpython_pylib::FROZEN_STDLIB);
// add our own custom datatype and module
PyVector::make_class(&vm.ctx);
PyQueryEngine::make_class(&vm.ctx);


@@ -28,8 +28,10 @@ pub(crate) mod data_frame {
use datafusion::dataframe::DataFrame as DfDataFrame;
use datafusion::execution::context::SessionContext;
use datafusion_expr::Expr as DfExpr;
use rustpython_vm::convert::ToPyResult;
use rustpython_vm::function::PyComparisonValue;
use rustpython_vm::types::{Comparable, PyComparisonOp};
use rustpython_vm::protocol::PyNumberMethods;
use rustpython_vm::types::{AsNumber, Comparable, PyComparisonOp};
use rustpython_vm::{
pyclass as rspyclass, PyObject, PyObjectRef, PyPayload, PyRef, PyResult, VirtualMachine,
};
@@ -40,6 +42,7 @@ pub(crate) mod data_frame {
use crate::python::rspython::builtins::greptime_builtin::{
lit, query as get_query_engine, PyDataFrame,
};
use crate::python::rspython::utils::obj_cast_to;
use crate::python::utils::block_on_async;
impl From<DfDataFrame> for PyDataFrame {
@@ -313,7 +316,20 @@ pub(crate) mod data_frame {
}
}
#[rspyclass(with(Comparable))]
impl AsNumber for PyExpr {
fn as_number() -> &'static PyNumberMethods {
static AS_NUMBER: PyNumberMethods = PyNumberMethods {
and: Some(|a, b, vm| PyExpr::and(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)),
or: Some(|a, b, vm| PyExpr::or(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)),
invert: Some(|a, vm| PyExpr::invert((*a).to_owned(), vm).to_pyresult(vm)),
..PyNumberMethods::NOT_IMPLEMENTED
};
&AS_NUMBER
}
}
#[rspyclass(with(Comparable, AsNumber))]
impl PyExpr {
fn richcompare(
&self,
@@ -342,18 +358,23 @@ pub(crate) mod data_frame {
}
#[pymethod(magic)]
fn and(&self, other: PyExprRef) -> PyResult<PyExpr> {
Ok(self.inner.clone().and(other.inner.clone()).into())
fn and(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyExpr> {
let zelf = obj_cast_to::<Self>(zelf, vm)?;
let other = obj_cast_to::<Self>(other, vm)?;
Ok(zelf.inner.clone().and(other.inner.clone()).into())
}
#[pymethod(magic)]
fn or(&self, other: PyExprRef) -> PyResult<PyExpr> {
Ok(self.inner.clone().or(other.inner.clone()).into())
fn or(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyExpr> {
let zelf = obj_cast_to::<Self>(zelf, vm)?;
let other = obj_cast_to::<Self>(other, vm)?;
Ok(zelf.inner.clone().or(other.inner.clone()).into())
}
/// `~` operator, return `!self`
#[pymethod(magic)]
fn invert(&self) -> PyResult<PyExpr> {
Ok(self.inner.clone().not().into())
fn invert(zelf: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyExpr> {
let zelf = obj_cast_to::<Self>(zelf, vm)?;
Ok(zelf.inner.clone().not().into())
}
/// sort ascending&nulls_first


@@ -25,7 +25,7 @@ use datatypes::data_type::{ConcreteDataType, DataType};
use datatypes::schema::{ColumnSchema, Schema};
use datatypes::vectors::{Float32Vector, Float64Vector, Int64Vector, VectorRef};
use ron::from_str as from_ron_string;
use rustpython_parser::parser;
use rustpython_parser::{parse, Mode};
use serde::{Deserialize, Serialize};
use crate::python::error::{get_error_reason_loc, pretty_print_error_in_src, visualize_loc, Error};
@@ -181,7 +181,7 @@ fn test_type_anno() {
def a(cpu, mem: vector[f64])->(vector[f64|None], vector[f64], vector[_], vector[ _ | None]):
return cpu + mem, cpu - mem, cpu * mem, cpu / mem
"#;
let pyast = parser::parse(python_source, parser::Mode::Interactive, "<embedded>").unwrap();
let pyast = parse(python_source, Mode::Interactive, "<embedded>").unwrap();
let copr = parse_and_compile_copr(python_source, None);
dbg!(copr);
}


@@ -22,7 +22,8 @@ use datatypes::vectors::{
BooleanVector, Float64Vector, Helper, Int64Vector, NullVector, StringVector, VectorRef,
};
use rustpython_vm::builtins::{PyBaseExceptionRef, PyBool, PyFloat, PyInt, PyList, PyStr};
use rustpython_vm::{PyObjectRef, PyPayload, PyResult, VirtualMachine};
use rustpython_vm::object::PyObjectPayload;
use rustpython_vm::{PyObjectRef, PyPayload, PyRef, PyResult, VirtualMachine};
use snafu::{OptionExt, ResultExt};
use crate::python::error;
@@ -33,7 +34,21 @@ use crate::python::rspython::builtins::try_into_columnar_value;
/// use `rustpython`'s `is_instance` method to check if a PyObject is a instance of class.
/// if `PyResult` is Err, then this function return `false`
pub fn is_instance<T: PyPayload>(obj: &PyObjectRef, vm: &VirtualMachine) -> bool {
obj.is_instance(T::class(vm).into(), vm).unwrap_or(false)
obj.is_instance(T::class(&vm.ctx).into(), vm)
.unwrap_or(false)
}
pub fn obj_cast_to<T: PyObjectPayload>(
obj: PyObjectRef,
vm: &VirtualMachine,
) -> PyResult<PyRef<T>> {
obj.downcast::<T>().map_err(|e| {
vm.new_type_error(format!(
"Can't cast object into {}, actual type: {}",
std::any::type_name::<T>(),
e.class().name()
))
})
}
pub fn format_py_error(excep: PyBaseExceptionRef, vm: &VirtualMachine) -> error::Error {
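
The new obj_cast_to helper is a downcast with a readable type error instead of a panic, which the AsNumber slots rely on because slot functions receive untyped object references rather than typed self. The same shape in plain std terms, using Any as an analogy for RustPython's object model (the names here are illustrative only):

use std::any::Any;
use std::sync::Arc;

/// Try to recover a concrete type from a dynamically typed, shared object,
/// producing a descriptive error when the cast fails.
fn cast_to<T: Any + Send + Sync>(
    obj: Arc<dyn Any + Send + Sync>,
) -> Result<Arc<T>, String> {
    obj.downcast::<T>()
        .map_err(|_| format!("Can't cast object into {}", std::any::type_name::<T>()))
}

fn main() {
    let obj: Arc<dyn Any + Send + Sync> = Arc::new(42_i64);
    assert!(cast_to::<i64>(obj.clone()).is_ok());
    assert!(cast_to::<String>(obj).is_err());
}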


@@ -25,18 +25,21 @@ use datatypes::value::{self, OrderedFloat};
use datatypes::vectors::Helper;
use once_cell::sync::Lazy;
use rustpython_vm::builtins::{PyBaseExceptionRef, PyBool, PyBytes, PyFloat, PyInt, PyNone, PyStr};
use rustpython_vm::convert::ToPyResult;
use rustpython_vm::function::{Either, OptionalArg, PyComparisonValue};
use rustpython_vm::protocol::{PyMappingMethods, PySequenceMethods};
use rustpython_vm::types::{AsMapping, AsSequence, Comparable, PyComparisonOp};
use rustpython_vm::protocol::{PyMappingMethods, PyNumberMethods, PySequenceMethods};
use rustpython_vm::types::{
AsMapping, AsNumber, AsSequence, Comparable, PyComparisonOp, Representable,
};
use rustpython_vm::{
atomic_func, pyclass as rspyclass, PyObject, PyObjectRef, PyPayload, PyRef, PyResult,
atomic_func, pyclass as rspyclass, Py, PyObject, PyObjectRef, PyPayload, PyRef, PyResult,
VirtualMachine,
};
use crate::python::ffi_types::vector::{
arrow_rfloordiv, arrow_rsub, arrow_rtruediv, rspy_is_pyobj_scalar, wrap_result, PyVector,
};
use crate::python::utils::is_instance;
use crate::python::rspython::utils::{is_instance, obj_cast_to};
/// PyVectors' rustpython specify methods
fn to_type_error(vm: &'_ VirtualMachine) -> impl FnOnce(String) -> PyBaseExceptionRef + '_ {
@@ -45,7 +48,7 @@ fn to_type_error(vm: &'_ VirtualMachine) -> impl FnOnce(String) -> PyBaseExcepti
pub(crate) type PyVectorRef = PyRef<PyVector>;
/// PyVector type wraps a greptime vector, impl multiply/div/add/sub opeerators etc.
#[rspyclass(with(AsMapping, AsSequence, Comparable))]
#[rspyclass(with(AsMapping, AsSequence, Comparable, AsNumber, Representable))]
impl PyVector {
#[pymethod]
pub(crate) fn new(
@@ -86,29 +89,32 @@ impl PyVector {
#[pymethod(name = "__radd__")]
#[pymethod(magic)]
fn add(&self, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
fn add(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = obj_cast_to::<PyVector>(zelf, vm)?;
if rspy_is_pyobj_scalar(&other, vm) {
self.rspy_scalar_arith_op(other, None, wrap_result(arithmetic::add_dyn), vm)
zelf.rspy_scalar_arith_op(other, None, wrap_result(arithmetic::add_dyn), vm)
} else {
self.rspy_vector_arith_op(other, None, wrap_result(arithmetic::add_dyn), vm)
zelf.rspy_vector_arith_op(other, None, wrap_result(arithmetic::add_dyn), vm)
}
}
#[pymethod(magic)]
fn sub(&self, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
fn sub(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = obj_cast_to::<PyVector>(zelf, vm)?;
if rspy_is_pyobj_scalar(&other, vm) {
self.rspy_scalar_arith_op(other, None, wrap_result(arithmetic::subtract_dyn), vm)
zelf.rspy_scalar_arith_op(other, None, wrap_result(arithmetic::subtract_dyn), vm)
} else {
self.rspy_vector_arith_op(other, None, wrap_result(arithmetic::subtract_dyn), vm)
zelf.rspy_vector_arith_op(other, None, wrap_result(arithmetic::subtract_dyn), vm)
}
}
#[pymethod(magic)]
fn rsub(&self, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
fn rsub(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = obj_cast_to::<PyVector>(zelf, vm)?;
if rspy_is_pyobj_scalar(&other, vm) {
self.rspy_scalar_arith_op(other, None, arrow_rsub, vm)
zelf.rspy_scalar_arith_op(other, None, arrow_rsub, vm)
} else {
self.rspy_vector_arith_op(
zelf.rspy_vector_arith_op(
other,
None,
wrap_result(|a, b| arithmetic::subtract_dyn(b, a)),
@@ -119,25 +125,27 @@ impl PyVector {
#[pymethod(name = "__rmul__")]
#[pymethod(magic)]
fn mul(&self, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
fn mul(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = obj_cast_to::<PyVector>(zelf, vm)?;
if rspy_is_pyobj_scalar(&other, vm) {
self.rspy_scalar_arith_op(other, None, wrap_result(arithmetic::multiply_dyn), vm)
zelf.rspy_scalar_arith_op(other, None, wrap_result(arithmetic::multiply_dyn), vm)
} else {
self.rspy_vector_arith_op(other, None, wrap_result(arithmetic::multiply_dyn), vm)
zelf.rspy_vector_arith_op(other, None, wrap_result(arithmetic::multiply_dyn), vm)
}
}
#[pymethod(magic)]
fn truediv(&self, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
fn truediv(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = obj_cast_to::<PyVector>(zelf, vm)?;
if rspy_is_pyobj_scalar(&other, vm) {
self.rspy_scalar_arith_op(
zelf.rspy_scalar_arith_op(
other,
Some(ArrowDataType::Float64),
wrap_result(arithmetic::divide_dyn),
vm,
)
} else {
self.rspy_vector_arith_op(
zelf.rspy_vector_arith_op(
other,
Some(ArrowDataType::Float64),
wrap_result(arithmetic::divide_dyn),
@@ -161,16 +169,17 @@ impl PyVector {
}
#[pymethod(magic)]
fn floordiv(&self, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
fn floordiv(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = obj_cast_to::<PyVector>(zelf, vm)?;
if rspy_is_pyobj_scalar(&other, vm) {
self.rspy_scalar_arith_op(
zelf.rspy_scalar_arith_op(
other,
Some(ArrowDataType::Int64),
wrap_result(arithmetic::divide_dyn),
vm,
)
} else {
self.rspy_vector_arith_op(
zelf.rspy_vector_arith_op(
other,
Some(ArrowDataType::Int64),
wrap_result(arithmetic::divide_dyn),
@@ -194,19 +203,33 @@ impl PyVector {
}
}
#[pymethod(magic)]
fn and(&self, other: PyVectorRef, vm: &VirtualMachine) -> PyResult<PyVector> {
Self::vector_and(self, &other).map_err(to_type_error(vm))
fn obj_to_vector(obj: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyRef<PyVector>> {
obj.downcast::<PyVector>().map_err(|e| {
vm.new_type_error(format!(
"Can't cast right operand into PyVector, actual type: {}",
e.class().name()
))
})
}
#[pymethod(magic)]
fn or(&self, other: PyVectorRef, vm: &VirtualMachine) -> PyResult<PyVector> {
Self::vector_or(self, &other).map_err(to_type_error(vm))
fn and(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = Self::obj_to_vector(zelf, vm)?;
let other = Self::obj_to_vector(other, vm)?;
Self::vector_and(&zelf, &other).map_err(to_type_error(vm))
}
#[pymethod(magic)]
fn invert(&self, vm: &VirtualMachine) -> PyResult<PyVector> {
Self::vector_invert(self).map_err(to_type_error(vm))
fn or(zelf: PyObjectRef, other: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = Self::obj_to_vector(zelf, vm)?;
let other = Self::obj_to_vector(other, vm)?;
Self::vector_or(&zelf, &other).map_err(to_type_error(vm))
}
#[pymethod(magic)]
fn invert(zelf: PyObjectRef, vm: &VirtualMachine) -> PyResult<PyVector> {
let zelf = Self::obj_to_vector(zelf, vm)?;
Self::vector_invert(&zelf).map_err(to_type_error(vm))
}
#[pymethod(name = "__len__")]
@@ -262,6 +285,40 @@ impl PyVector {
}
}
impl Representable for PyVector {
#[inline]
fn repr_str(zelf: &Py<Self>, _vm: &VirtualMachine) -> PyResult<String> {
Ok(format!("{:#?}", *zelf))
}
}
impl AsNumber for PyVector {
fn as_number() -> &'static PyNumberMethods {
// FIXME(discord9): have to use `&PyObject.to_owned()` here
// because it seems to be the only way to convert a `&PyObject` to `PyObjectRef`.
static AS_NUMBER: PyNumberMethods = PyNumberMethods {
and: Some(|a, b, vm| PyVector::and(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)),
or: Some(|a, b, vm| PyVector::or(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)),
invert: Some(|a, vm| PyVector::invert((*a).to_owned(), vm).to_pyresult(vm)),
add: Some(|a, b, vm| PyVector::add(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)),
subtract: Some(|a, b, vm| {
PyVector::sub(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)
}),
multiply: Some(|a, b, vm| {
PyVector::mul(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)
}),
true_divide: Some(|a, b, vm| {
PyVector::truediv(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)
}),
floor_divide: Some(|a, b, vm| {
PyVector::floordiv(a.to_owned(), b.to_owned(), vm).to_pyresult(vm)
}),
..PyNumberMethods::NOT_IMPLEMENTED
};
&AS_NUMBER
}
}
impl AsMapping for PyVector {
fn as_mapping() -> &'static PyMappingMethods {
static AS_MAPPING: PyMappingMethods = PyMappingMethods {
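
AsNumber registers a static table of operator hooks that the VM consults for &, |, ~, +, and the other arithmetic operators, which is why each hook first downcasts its untyped arguments and then delegates to the #[pymethod] implementations above. The slot-table idea itself, detached from RustPython's types (all names below are illustrative, not RustPython's API):

/// A static table of optional operator hooks, looked up by the "VM".
struct NumberMethods {
    and: Option<fn(i64, i64) -> i64>,
    or: Option<fn(i64, i64) -> i64>,
    invert: Option<fn(i64) -> i64>,
}

const NOT_IMPLEMENTED: NumberMethods = NumberMethods {
    and: None,
    or: None,
    invert: None,
};

fn bitand(a: i64, b: i64) -> i64 { a & b }
fn bitor(a: i64, b: i64) -> i64 { a | b }

// Only the supported slots are filled in; the rest fall back to NOT_IMPLEMENTED.
static AS_NUMBER: NumberMethods = NumberMethods {
    and: Some(bitand),
    or: Some(bitor),
    ..NOT_IMPLEMENTED
};

fn eval_and(a: i64, b: i64) -> Option<i64> {
    AS_NUMBER.and.map(|hook| hook(a, b))
}

fn main() {
    assert_eq!(eval_and(0b1100, 0b1010), Some(0b1000));
    assert!(AS_NUMBER.invert.is_none());
}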


@@ -15,17 +15,11 @@
use futures::Future;
use once_cell::sync::OnceCell;
use rustpython_vm::builtins::PyBaseExceptionRef;
use rustpython_vm::{PyObjectRef, PyPayload, VirtualMachine};
use rustpython_vm::VirtualMachine;
use tokio::runtime::Runtime;
use crate::python::error;
/// use `rustpython`'s `is_instance` method to check if a PyObject is a instance of class.
/// if `PyResult` is Err, then this function return `false`
pub fn is_instance<T: PyPayload>(obj: &PyObjectRef, vm: &VirtualMachine) -> bool {
obj.is_instance(T::class(vm).into(), vm).unwrap_or(false)
}
pub fn format_py_error(excep: PyBaseExceptionRef, vm: &VirtualMachine) -> error::Error {
let mut msg = String::new();
if let Err(e) = vm.write_exception(&mut msg, &excep) {


@@ -39,7 +39,7 @@ http-body = "0.4"
humantime-serde = "1.1"
hyper = { version = "0.14", features = ["full"] }
influxdb_line_protocol = { git = "https://github.com/evenyag/influxdb_iox", branch = "feat/line-protocol" }
metrics = "0.20"
metrics.workspace = true
mime_guess = "2.0"
num_cpus = "1.13"
once_cell = "1.16"


@@ -1 +1 @@
v0.1.0
v0.2.2


@@ -23,7 +23,7 @@ use snafu::ResultExt;
use crate::error::{BuildHttpResponseSnafu, Result};
#[derive(RustEmbed)]
#[folder = "dashboard/"]
#[folder = "dashboard/dist/"]
pub struct Assets;
pub(crate) fn dashboard() -> Router {


@@ -351,20 +351,34 @@ impl PromJsonResponse {
#[derive(Debug, Default, Serialize, Deserialize, JsonSchema)]
pub struct InstantQuery {
query: String,
query: Option<String>,
time: Option<String>,
timeout: Option<String>,
db: Option<String>,
}
#[axum_macros::debug_handler]
pub async fn instant_query(
State(_handler): State<PromHandlerRef>,
Query(_params): Query<InstantQuery>,
State(handler): State<PromHandlerRef>,
Query(params): Query<InstantQuery>,
Form(form_params): Form<InstantQuery>,
) -> Json<PromJsonResponse> {
PromJsonResponse::error(
"not implemented",
"instant query api `/query` is not implemented. Use `/query_range` instead.",
)
let time = params.time.or(form_params.time).unwrap_or_default();
let prom_query = PromQuery {
query: params.query.or(form_params.query).unwrap_or_default(),
start: time.clone(),
end: time,
step: "1s".to_string(),
};
let db = &params.db.unwrap_or(DEFAULT_SCHEMA_NAME.to_string());
let (catalog, schema) = super::parse_catalog_and_schema_from_client_database_name(db);
let query_ctx = QueryContext::with(catalog, schema);
let result = handler.do_query(&prom_query, Arc::new(query_ctx)).await;
let metric_name = retrieve_metric_name(&prom_query.query).unwrap_or_default();
PromJsonResponse::from_query_result(result, metric_name).await
}
#[derive(Debug, Default, Serialize, Deserialize, JsonSchema)]
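
The handler maps an instant query onto the existing range-query path with start == end == time and a nominal 1s step, and resolves the target catalog and schema from the db parameter. A hypothetical version of that name-splitting step (the '-' separator and default catalog are assumptions about parse_catalog_and_schema_from_client_database_name, not shown in this diff):

const DEFAULT_CATALOG: &str = "greptime";

/// Split a client "database" name into (catalog, schema).
/// "my_catalog-my_schema" -> ("my_catalog", "my_schema"); "public" -> (default, "public").
fn parse_catalog_and_schema(db: &str) -> (&str, &str) {
    match db.split_once('-') {
        Some((catalog, schema)) => (catalog, schema),
        None => (DEFAULT_CATALOG, db),
    }
}

fn main() {
    assert_eq!(parse_catalog_and_schema("public"), (DEFAULT_CATALOG, "public"));
    assert_eq!(parse_catalog_and_schema("greptime-my_db"), ("greptime", "my_db"));
}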


@@ -75,18 +75,22 @@ impl QueryContext {
pub fn set_current_schema(&self, schema: &str) {
let last = self.current_schema.swap(Arc::new(schema.to_string()));
debug!(
"set new session default schema: {:?}, swap old: {:?}",
schema, last
)
if schema != last.as_str() {
debug!(
"set new session default schema: {:?}, swap old: {:?}",
schema, last
)
}
}
pub fn set_current_catalog(&self, catalog: &str) {
let last = self.current_catalog.swap(Arc::new(catalog.to_string()));
debug!(
"set new session default catalog: {:?}, swap old: {:?}",
catalog, last
)
if catalog != last.as_str() {
debug!(
"set new session default catalog: {:?}, swap old: {:?}",
catalog, last
)
}
}
}
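
The guard keeps the debug log quiet whenever the schema or catalog is set to the value it already holds. Because swap returns the previous value, the comparison costs nothing extra; arc_swap's ArcSwap offers exactly that behaviour (whether QueryContext actually uses that crate is an assumption in this sketch):

use std::sync::Arc;
use arc_swap::ArcSwap;

struct Session {
    current_schema: ArcSwap<String>,
}

impl Session {
    fn set_current_schema(&self, schema: &str) {
        // swap() atomically installs the new value and hands back the old one.
        let last = self.current_schema.swap(Arc::new(schema.to_string()));
        if schema != last.as_str() {
            println!("schema changed: {:?} -> {:?}", last, schema);
        }
    }
}

fn main() {
    let session = Session {
        current_schema: ArcSwap::from_pointee("public".to_string()),
    };
    session.set_current_schema("public"); // same value, nothing logged
    session.set_current_schema("my_db"); // logs the transition
}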


@@ -290,11 +290,16 @@ pub async fn test_prom_http_api(store_type: StorageType) {
let client = TestClient::new(app);
// instant query
let res = client.get("/api/v1/query?query=up").send().await;
let res = client.get("/api/v1/query?query=up&time=1").send().await;
assert_eq!(res.status(), StatusCode::OK);
let res = client.post("/api/v1/query?query=up").send().await;
let res = client
.post("/api/v1/query?query=up&time=1")
.header("Content-Type", "application/x-www-form-urlencoded")
.send()
.await;
assert_eq!(res.status(), StatusCode::OK);
// range query
let res = client
.get("/api/v1/query_range?query=up&start=1&end=100&step=5")
.send()


@@ -27,18 +27,18 @@ order by table_name;
| foo |
+------------+
select table_catalog, table_schema, table_name, table_type
select table_catalog, table_schema, table_name, table_type, engine
from information_schema.tables
where table_catalog = 'greptime'
and table_schema != 'public'
order by table_schema, table_name;
+---------------+--------------------+------------+------------+
| table_catalog | table_schema | table_name | table_type |
+---------------+--------------------+------------+------------+
| greptime | information_schema | tables | VIEW |
| greptime | my_db | foo | BASE TABLE |
+---------------+--------------------+------------+------------+
+---------------+--------------------+------------+------------+--------+
| table_catalog | table_schema | table_name | table_type | engine |
+---------------+--------------------+------------+------------+--------+
| greptime | information_schema | tables | VIEW | |
| greptime | my_db | foo | BASE TABLE | mito |
+---------------+--------------------+------------+------------+--------+
use
public;


@@ -14,7 +14,7 @@ from information_schema.tables
where table_schema = 'my_db'
order by table_name;
select table_catalog, table_schema, table_name, table_type
select table_catalog, table_schema, table_name, table_type, engine
from information_schema.tables
where table_catalog = 'greptime'
and table_schema != 'public'