Compare commits

...

75 Commits

Author SHA1 Message Date
Ruihang Xia
d4aa4159d4 feat: support windowed sort with where condition
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-11-04 19:34:03 +08:00
evenyag
960f6d821b feat: spawn block write wal 2024-11-04 17:35:12 +08:00
Ruihang Xia
9c5d044238 Merge branch 'main' into transform-count-min-max
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-11-01 17:45:28 +08:00
Ruihang Xia
be72d3bedb feat: simple limit impl in PartSort (#4922)
* feat: simple limit impl in PartSort

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: update time_index method to return a non-optional String

Co-authored-by: Yingwen <realevenyag@gmail.com>
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* use builtin limit

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more info to analyze display

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* update sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-11-01 09:25:03 +00:00
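To make the PartSort limit idea above concrete, here is a minimal sketch of sorting rows independently within each partition range and truncating each range to a limit. The `PartitionRange`/`Row` types and the function are simplified stand-ins for illustration, not the actual GreptimeDB `PartSort` operator:

```rust
/// Minimal sketch of per-partition sorting with an optional limit.
/// `PartitionRange` and `Row` are simplified stand-ins, not GreptimeDB types.
#[derive(Debug, Clone, Copy)]
struct PartitionRange {
    start: i64, // inclusive timestamp
    end: i64,   // exclusive timestamp
}

#[derive(Debug, Clone, Copy)]
struct Row {
    ts: i64,
    value: f64,
}

/// Sort rows by timestamp inside each partition range independently,
/// keeping at most `limit` rows per range when a limit is given.
fn part_sort(rows: &[Row], ranges: &[PartitionRange], limit: Option<usize>) -> Vec<Row> {
    let mut out = Vec::new();
    for range in ranges {
        // Collect only the rows that fall into this half-open range.
        let mut part: Vec<Row> = rows
            .iter()
            .copied()
            .filter(|r| r.ts >= range.start && r.ts < range.end)
            .collect();
        if part.is_empty() {
            continue; // empty parts are skipped, mirroring "part sort should skip empty part"
        }
        part.sort_by_key(|r| r.ts);
        if let Some(n) = limit {
            part.truncate(n); // the built-in limit: keep at most n rows per range
        }
        out.extend(part);
    }
    out
}

fn main() {
    let rows = [
        Row { ts: 15, value: 1.0 },
        Row { ts: 3, value: 2.0 },
        Row { ts: 12, value: 3.0 },
        Row { ts: 7, value: 4.0 },
    ];
    let ranges = [
        PartitionRange { start: 0, end: 10 },
        PartitionRange { start: 10, end: 20 },
    ];
    let sorted = part_sort(&rows, &ranges, Some(1));
    println!("{sorted:?}"); // ts 3 from the first range, ts 12 from the second
}
```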
discord9
1ff29d8fde chore: short desc markdown about change log level (#4921)
* chore: tiny doc about change log level

* chore: per review

* chore
2024-11-01 07:10:57 +00:00
Yingwen
39ab1a6415 feat: get row group time range from cached metadata (#4869)
* feat: get part range min-max from cache for unordered scan

* feat: seq scan push row groups if num_row_groups > 0

* test: test split

* feat: update comment

* test: fix split test

* refactor: rename get meta data method
2024-11-01 06:35:03 +00:00
Ruihang Xia
70c354eed6 fix: the way to retrieve time index column
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-11-01 12:10:12 +08:00
Ruihang Xia
23bf663d58 feat: handle sort that wont preserving partition
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-10-31 22:13:36 +08:00
Ruihang Xia
817648eac5 Merge branch 'main' into transform-count-min-max
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-10-31 15:38:12 +08:00
Weny Xu
758ad0a8c5 refactor: simplify WeightedChoose (#4916)
* refactor: simplify WeightedChoose

* chore: remove unused errors
2024-10-31 06:22:30 +00:00
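As background on the refactored component, here is a minimal sketch of what a weighted chooser does: cumulative-weight random selection, written against the `rand` crate. This is a generic illustration, not GreptimeDB's `WeightedChoose` implementation:

```rust
use rand::Rng;

/// Pick an item with probability proportional to its weight.
/// Generic illustration only; assumes `rand = "0.8"` is available.
fn weighted_choose<'a, T>(items: &'a [(T, u64)], rng: &mut impl Rng) -> Option<&'a T> {
    let total: u64 = items.iter().map(|(_, w)| *w).sum();
    if total == 0 {
        return None;
    }
    let mut target = rng.gen_range(0..total);
    for (item, weight) in items {
        if target < *weight {
            return Some(item);
        }
        target -= *weight;
    }
    None // unreachable when the weights sum to `total`
}

fn main() {
    let peers = [("datanode-1", 3u64), ("datanode-2", 1u64)];
    let mut rng = rand::thread_rng();
    // datanode-1 is expected to be chosen roughly three times as often.
    for _ in 0..5 {
        println!("{:?}", weighted_choose(&peers, &mut rng));
    }
}
```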
Ruihang Xia
8b60c27c2e feat: enhance windowed-sort optimizer rule (#4910)
* add RegionScanner::metadata

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* skip PartSort when there is no tag column

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add more sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* handle desc

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: should keep part sort on DESC

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix clippy

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-10-31 06:15:45 +00:00
Yingwen
ea6df9ba49 fix: prune batches from memtable by time range (#4913)
* feat: add an iter to prune by time range

* feat: filter rows from mem range
2024-10-31 05:13:35 +00:00
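A rough sketch of the pruning idea in this change: skip whole batches whose cached min/max timestamps cannot overlap the query range, then filter the remaining rows. The types and the half-open range convention are simplified assumptions for illustration, not the mito2 code:

```rust
/// Simplified stand-in for a memtable batch: rows plus a cached min/max timestamp.
struct Batch {
    timestamps: Vec<i64>,
    min_ts: i64,
    max_ts: i64,
}

/// Keep only rows whose timestamp lies in the half-open query range [start, end).
/// Batches that cannot overlap the range are skipped without touching their rows.
fn prune_by_time_range(batches: &[Batch], start: i64, end: i64) -> Vec<i64> {
    let mut kept = Vec::new();
    for batch in batches {
        // Cheap pruning first: the batch can only overlap [start, end)
        // if min_ts < end and max_ts >= start.
        if batch.min_ts >= end || batch.max_ts < start {
            continue;
        }
        // Only surviving batches pay the per-row filtering cost.
        kept.extend(
            batch
                .timestamps
                .iter()
                .copied()
                .filter(|ts| *ts >= start && *ts < end),
        );
    }
    kept
}

fn main() {
    let batches = vec![
        Batch { timestamps: vec![1, 2, 3], min_ts: 1, max_ts: 3 },
        Batch { timestamps: vec![10, 11, 12], min_ts: 10, max_ts: 12 },
    ];
    // Only the first batch overlaps [0, 5); the second is skipped entirely.
    assert_eq!(prune_by_time_range(&batches, 0, 5), vec![1, 2, 3]);
}
```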
Ning Sun
69420793e2 feat: implement parse_query api (#4860)
* feat: implement parse_query api

* chore: switch to upstream

* fix: add post method for parse_query

* chore: bump promql-parser

* test: use latest promql ast serialization
2024-10-30 12:16:22 +00:00
Yingwen
0da112b335 chore: provide more info in check batch message (#4906)
* chore: provide more info in check message

* chore: set timeout to 240s

---------

Co-authored-by: WenyXu <wenymedia@gmail.com>
2024-10-30 11:56:10 +00:00
dennis zhuang
dcc08f6b3e feat: adds the number of rows and index files size to region_statistics table (#4909)
* feat: adds index size to region statistics

* feat: adds the number of rows for region statistics

* test: adds sqlness test for region_statistics

* fix: test
2024-10-30 11:12:58 +00:00
dennis zhuang
a34035a1f2 fix: set transaction variables not working in mysql protocol (#4912) 2024-10-30 10:59:13 +00:00
LFC
fd8eba36a8 refactor: make use of the "pre_execute" in sql execution interceptor (#4875)
* feat: dynamic definition of plugin options

* rebase

* revert

* fix ci
2024-10-30 09:16:46 +00:00
Ruihang Xia
9712295177 fix(config): update tracing section headers in example TOML files (#4898)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-10-30 08:31:31 +00:00
Lei, HUANG
d275cdd570 feat: Support altering table TTL (#4848)
* feat/alter-ttl:
 Update greptime-proto source and add ChangeTableOptions handling

 - Change greptime-proto source repository and revision in Cargo.lock and Cargo.toml
 - Implement handling for ChangeTableOptions in grpc-expr and meta modules
 - Add support for parsing and applying region option changes in mito2
 - Introduce new error type for invalid change table option requests
 - Add humantime dependency to store-api
 - Fix SQL syntax in tests for changing column types

* chore: remove write buffer size option handling since we don't support specifying write_buffer_size for single table or region

* persist ttl to manifest

* chore: add sqlness

* fix: sqlness

* fix: typo and toml format

* fix: tests

* update: change alter syntax

* feat/alter-ttl: Add Clone trait to RegionFlushRequest and remove redundant Default derive in region_request.rs.

* feat/alter-ttl: Refactor code to replace 'ChangeTableOption' with 'ChangeRegionOption' and handle TTL as a region option

 • Rename ChangeTableOption to ChangeRegionOption across various files.
 • Update AlterKind::ChangeTableOptions to AlterKind::ChangeRegionOptions.
 • Modify TTL handling to treat '0d' as None for TTL in table options.
 • Adjust related function names and comments to reflect the change from table to region options.
 • Include test case updates to verify the new TTL handling behavior.

* chore: update format

* refactor: update region options in DatanodeTableValue

* feat/alter-ttl:
 Remove TTL handling from RegionManifest and related structures

 - Eliminate TTL fields from `RegionManifest`, `RegionChange`, and associated handling logic.
 - Update tests and checksums to reflect removal of TTL.
 - Refactor `RegionOpener` and `handle_alter` to adjust to TTL removal.
 - Simplify `RegionChangeResult` by replacing `change` with `new_meta`.

* chore: fmt

* remove useless delete op

* feat/alter-ttl: Updated Cargo.lock and gRPC expression Cargo.toml to include store-api dependency. Refactored alter.rs to use ChangeOption from store-api instead of ChangeTableOptionRequest.
Adjusted error handling in error.rs to use MetadataError. Modified handle_alter.rs to handle TTL changes with ChangeOption. Simplified region_request.rs by replacing
ChangeRegionOption with ChangeOption and removing redundant code. Removed UnsupportedTableOptionChange error in table/src/error.rs. Updated metadata.rs to use ChangeOption for table
options. Removed ChangeTableOptionRequest enum and related conversion code from requests.rs.

* feat/alter-ttl: Update greptime-proto dependency to revision 53ab9a9553

* chore: format code

* chore: update greptime-proto
2024-10-30 04:39:48 +00:00
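One detail worth highlighting from the notes above is treating a TTL of `0d` as "no TTL". A hedged sketch of that rule using the `humantime` crate this PR adds as a dependency (the function name and signature are illustrative, not the actual GreptimeDB API):

```rust
use std::time::Duration;

/// Parse a TTL option string, treating a zero duration such as "0d" as "no TTL".
/// Illustrative only; the real option plumbing lives in store-api / mito2.
/// Requires `humantime = "2"` in Cargo.toml (already a workspace dependency here).
fn parse_ttl(value: &str) -> Result<Option<Duration>, Box<dyn std::error::Error>> {
    let ttl = humantime::parse_duration(value)?;
    Ok(if ttl.is_zero() { None } else { Some(ttl) })
}

fn main() {
    assert_eq!(parse_ttl("0d").unwrap(), None);
    assert_eq!(parse_ttl("7d").unwrap(), Some(Duration::from_secs(7 * 24 * 3600)));
}
```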
Weny Xu
83eb777d21 test: add fuzz test for metric region migration (#4862)
* test: add fuzz tests for migrate metric regions

* test: insert values before migrating metric region

* feat: correct table num

* chore: apply suggestions from CR
2024-10-29 15:47:48 +00:00
Yohan Wal
8ed5bc5305 refactor: json conversion (#4893)
* refactor: json type update

* test: update test

* fix: convert when needed

* revert: leave sqlness tests unchanged

* fix: fmt

* refactor: just refactor

* Apply suggestions from code review

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* refactor: parse jsonb first

* test: add bad cases

* Update src/datatypes/src/vectors/binary.rs

Co-authored-by: Weny Xu <wenymedia@gmail.com>

* fix: fmt

* fix: fix clippy/check

---------

Co-authored-by: Weny Xu <wenymedia@gmail.com>
2024-10-29 15:46:24 +00:00
Weny Xu
9ded314905 feat: add json datatype for grpc protocol (#4897)
* chore: update greptime-proto

* feat: add json datatype for grpc protocol
2024-10-29 12:37:53 +00:00
discord9
702a55a235 chore: update proto depend (#4899) 2024-10-29 09:32:28 +00:00
discord9
f3e5a5a7aa ci: install numpy in CI (#4895)
chore: install numpy in CI
2024-10-29 07:57:40 +00:00
Zhenchi
9c79baca4b feat(index): support building inverted index for the field column on Mito (#4887)
feat(index): support building inverted index for the field column

Signed-off-by: Zhenchi <zhongzc_arch@outlook.com>
2024-10-29 07:57:17 +00:00
Ruihang Xia
03f2fa219d feat: optimizer rule for windowed sort (#4874)
* basic impl

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* implement physical rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: install windowed sort physical rule and optimize partition ranges

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* add logs and sqlness test

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: introduce PartSortExec for partitioned sorting

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tune exec nodes' properties and metrics

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix typo

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* debug: add more info on very wrong

* debug: also print overlap ranges

* feat: add check when emit PartSort Stream

* dbg: info on overlap working range

* feat: check batch range is inside part range

* set distinguish partition range param

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: more logs

* update sqlness

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* tune optimizer

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* clean up

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix lints

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix windowed sort rule

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: early terminate sort stream

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: remove min/max check

* chore: remove unused windowed_sort module, uuid feature and refactor region_scanner to synchronous

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* chore: print more fuzz log

* chore: more log

* fix: part sort should skip empty part

* chore: remove insert logs

* tests: empty PartitionRange

* refactor: testcase

* docs: update comment&tests: all empty

* ci: enlarge etcd cpu limit

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: discord9 <discord9@163.com>
Co-authored-by: evenyag <realevenyag@gmail.com>
2024-10-29 07:46:05 +00:00
Lei, HUANG
0ee455a980 fix: pyo3 ut (#4894) 2024-10-29 04:47:57 +00:00
Lei, HUANG
eab9e3a48d chore: remove struct size assertion (#4885)
chore/remove-struct-size-assertion: Remove unit tests for parquet_meta_size function in cache_size.rs
2024-10-28 08:50:10 +00:00
Yingwen
1008af5324 feat!: Divide flush and compaction job pool (#4871)
* feat: divide flush/compact job pool

* feat!: divide bg jobs config

* docs: update config examples

* test: fix tests
2024-10-25 23:36:16 +00:00
discord9
2485f66077 chore: graceful exit on bind fail (#4882) 2024-10-25 09:29:39 +00:00
Weny Xu
4f3afb13b6 fix: fix broken import (#4880) 2024-10-25 07:09:51 +00:00
shuiyisong
32a0023010 chore: add schema urls to otlp logs (#4876)
* chore: add schema urls to otlp logs table

* chore: update meter-macros version to remove anymap warning

* chore: change span id and trace id to field
2024-10-25 03:45:24 +00:00
Kaifeng Zheng
4e9c251041 feat: add json_path_match udf (#4864)
* add json_path_match udf

* sql tests for json_path_match

* fix clippy & comment

* fix null value behavior

* added null tests

* adjust function's behavior on nulls

* update test cases

* fix null check of json
2024-10-25 03:13:34 +00:00
Lei, HUANG
e328c7067c chore: udapte Rust toolchain to 2024-10-19 (#4857)
* update rust toolchain

* change toolchain to 2024-10-17

* fix: clippy

* fix: ut

* bump shadow-rs

* fix: use nightly-2024-10-19

* fix: clippy

* chore/udapte-toolchain-2024-10-17: Update DEV_BUILDER_IMAGE_TAG to 2024-10-19-a5c00e85-20241024184445 in Makefile
2024-10-25 00:23:32 +00:00
Weny Xu
8b307e4548 feat: introduce the PluginOptions (#4835)
* feat: introduce the `PluginOptions`

* chore: apply suggestions from CR
2024-10-24 12:02:10 +00:00
discord9
ff38abde2e chore: better column schema check for flow (#4855)
* chore: better column schema check for flow

* chore: better msg

* tests: clean up after tests

* chore: better msg

* chore: per review

* tests: sqlness
2024-10-24 09:43:32 +00:00
jeremyhi
aa9a265984 chore: make pusher log easy to understand (#4841)
* chore: make pusher log easy to understand

* Update src/meta-srv/src/service/heartbeat.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* Update src/meta-srv/src/service/heartbeat.rs

Co-authored-by: Yingwen <realevenyag@gmail.com>

* chore: by comment

---------

Co-authored-by: Yingwen <realevenyag@gmail.com>
2024-10-24 07:44:16 +00:00
pa
9d3ee6384a feat: Limit CPU in runtime (#3685) (#4782)
feat: add throttle runtime (#3685)
2024-10-24 07:30:24 +00:00
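As a rough picture of what a throttled runtime buys: tasks are admitted at a bounded pace so one workload cannot monopolize the CPU. The sketch below caps concurrency with a `tokio` semaphore purely to stay self-contained; the PR itself appears to take a rate-limiting approach (a `ratelimit` dependency is added to common-runtime in the diff below), and none of these names are GreptimeDB APIs:

```rust
use std::sync::Arc;
use tokio::sync::Semaphore;

/// Simplified illustration of a "throttled" runtime: cap how many tasks may run
/// concurrently so a single tenant cannot saturate all worker threads.
struct ThrottledSpawner {
    permits: Arc<Semaphore>,
}

impl ThrottledSpawner {
    fn new(max_concurrency: usize) -> Self {
        Self { permits: Arc::new(Semaphore::new(max_concurrency)) }
    }

    fn spawn<F>(&self, fut: F) -> tokio::task::JoinHandle<F::Output>
    where
        F: std::future::Future + Send + 'static,
        F::Output: Send + 'static,
    {
        let permits = self.permits.clone();
        tokio::spawn(async move {
            // Wait for a permit before doing any work; the permit is released
            // when `_permit` is dropped at the end of the task.
            let _permit = permits.acquire_owned().await.expect("semaphore closed");
            fut.await
        })
    }
}

#[tokio::main]
async fn main() {
    let spawner = ThrottledSpawner::new(2); // at most 2 tasks make progress at once
    let handles: Vec<_> = (0..4).map(|i| spawner.spawn(async move { i * 10 })).collect();
    for h in handles {
        println!("{}", h.await.unwrap());
    }
}
```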
localhost
fcde0a4874 feat: Add functionality to the Opentelemetry write interface to extract fields from attr to top-level data. (#4859)
* chore: add otlp select

* chore: change otlp select

* chore: remove json path

* chore: format toml

* chore: change opentelemetry extract keys header name

* chore: add some doc and remove useless code and lib

* chore: make clippy happy

* chore: fix by pr comment

* chore: fix by pr comment

* chore: opentelemetry logs select key change some type default semantic type
2024-10-24 05:55:57 +00:00
Weny Xu
5d42e63ab0 fix!: replace timeout_millis and connect_timeout_millis with Duration in DatanodeClientOptions (#4867)
* fix: correct options struct

* fix: fix unit test
2024-10-23 08:20:34 +00:00
discord9
0c01532a37 feat: Sort within each PartitionRange (#4847)
* feat: PartSort

* chore: rm unused

* chore: typo

* chore: mem pool df

* chore: add location to arrow error

* refactor: test_util

* refactor: per review

* chore: rm unused

* chore: more cases

* chore: test&buffer clear

* fix: remove fetch

* chore: fmt

* chore: per review

* chore: rm unused
2024-10-23 07:01:55 +00:00
ZonaHe
6d503b047a feat: update dashboard to v0.6.0 (#4861)
Co-authored-by: ZonaHex <ZonaHex@users.noreply.github.com>
2024-10-22 02:34:09 +00:00
Yingwen
5d28f7a912 feat: yields empty batch after reading a range (#4845)
* feat: add empty batch to end of range stream

* feat: add batch validation

* fix: validate batch order

* fix: not yield empty batch in compaction

* fix: empty record batch

* feat: add a flag to enable empty batch
2024-10-21 13:52:47 +00:00
Lei, HUANG
a50eea76a6 chore: bump greptime-meter (#4858)
chore/bump-greptime-meter: Add meter-core package and update meter-core dependency across various packages to
new git revision.
2024-10-21 08:18:30 +00:00
Yingwen
2ee1ce2ba1 docs: change cpu/mem panel to time-series (#4844)
* docs: change cpu/mem panel to time-series

* docs: update version
2024-10-18 08:42:01 +00:00
Weny Xu
c02b5dae93 chore: bump version to 0.9.5 (#4853) 2024-10-18 08:07:13 +00:00
Weny Xu
081c6d9e74 fix: flush metric metadata region (#4852)
* fix: flush metric metadata region

* chore: apply suggestions from CR
2024-10-18 07:21:35 +00:00
Weny Xu
ca6e02980e fix: overwrite entry_id if entry id is less than start_offset (#4842)
* fix: overwrite entry_id if entry id is less than start_offset

* feat: add `overwrite_entry_start_id` to options

* chore: update config.md
2024-10-18 06:31:02 +00:00
Weny Xu
74bdba4613 fix: fix metadata forward compatibility issue (#4846) 2024-10-18 06:26:41 +00:00
Weny Xu
2e0e82ddc8 chore: update greptime-proto to b4d3011 (#4850) 2024-10-18 04:10:22 +00:00
Yingwen
e0c4157ad8 feat: Seq scanner scans data by time range (#4809)
* feat: seq scan by partition

* feat: part metrics

* chore: remove unused codes

* chore: fmt stream

* feat: build ranges returns smallvec

* feat: move scan mem/file ranges to util and reuse

* feat: log metrics

* chore: correct some metrics

* feat: get explain info from ranges

* test: group test and remove unused codes

* chore: fix clippy

* feat: change PartitionRange end to exclusive

* test: add tests
2024-10-17 11:05:12 +00:00
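The "end to exclusive" change above matters when a scan span is split into per-window ranges. Below is a small sketch of such a split into aligned half-open windows; the window alignment and types are assumptions for illustration, not the scanner's actual logic:

```rust
/// A half-open time range [start, end), as used after the "end to exclusive" change.
#[derive(Debug, PartialEq)]
struct TimeRange {
    start: i64,
    end: i64, // exclusive
}

/// Split the span covering [min_ts, max_ts] into windows of `window` width,
/// aligned to multiples of `window`. Every timestamp lands in exactly one range.
fn split_by_window(min_ts: i64, max_ts: i64, window: i64) -> Vec<TimeRange> {
    assert!(window > 0 && min_ts <= max_ts);
    let mut ranges = Vec::new();
    // Align the first window start downwards (euclidean division handles negatives).
    let mut start = min_ts.div_euclid(window) * window;
    while start <= max_ts {
        ranges.push(TimeRange { start, end: start + window });
        start += window;
    }
    ranges
}

fn main() {
    // Timestamps 3..=25 with a window of 10 fall into three half-open ranges.
    assert_eq!(
        split_by_window(3, 25, 10),
        vec![
            TimeRange { start: 0, end: 10 },
            TimeRange { start: 10, end: 20 },
            TimeRange { start: 20, end: 30 },
        ]
    );
}
```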
discord9
613e07afb4 feat: window sort physical plan (#4814)
* WIP

* feat: range split& tests

* WIP: split range

* add sort exprs

* chore: typo

* WIP

* feat: find successive runs

* WIP

* READY FOR REVIEW PART ONE: more tests

* refactor: break into smaller functions

* feat: precompute working range(need testing)

* tests: on working range

* tests: on working range

* feat: support rev working range

* feat(to be tested): core logic of merge sort

* fix: poll results

* fix: find_slice_from_range&test

* chore: remove some unused util func&fields

* chore: typos

* chore: impl exec plan for WindowedSortExec

* test(WIP): window sort stream

* test: window sort stream

* chore: remove unused

* fix: fetch

* fix: WIP intersection remaining

* test: fix and test!

* chore: remove outdated comments

* chore: rename test

* chore: remove dbg line

* chore: sorted runs

* feat: handling unexpected data

* chore: unused

* chore: remove a print in test

* chore: per review

* docs: wrong comment

* chore: more test cases
2024-10-16 11:50:25 +00:00
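A minimal sketch of what "finding successive runs" means in this context: locate maximal already-sorted stretches so they can be merged rather than fully re-sorted, which is the core trick behind a windowed merge sort. Illustrative only, not the actual plan node:

```rust
/// Find maximal non-decreasing "runs" in a sequence of timestamps and return
/// them as index ranges. Already-sorted stretches can then be merged instead
/// of re-sorted from scratch.
fn find_successive_runs(ts: &[i64]) -> Vec<std::ops::Range<usize>> {
    let mut runs = Vec::new();
    if ts.is_empty() {
        return runs;
    }
    let mut run_start = 0;
    for i in 1..ts.len() {
        if ts[i] < ts[i - 1] {
            // The run breaks here; close it and start a new one.
            runs.push(run_start..i);
            run_start = i;
        }
    }
    runs.push(run_start..ts.len());
    runs
}

fn main() {
    // Two sorted runs: [1, 3, 5] and [2, 4].
    assert_eq!(find_successive_runs(&[1, 3, 5, 2, 4]), vec![0..3, 3..5]);
}
```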
Weny Xu
0ce93f0b88 chore: add more metrics for region migration (#4838) 2024-10-16 09:36:57 +00:00
Ning Sun
c231eee7c1 fix: respect feature flags for geo function (#4836) 2024-10-16 07:46:31 +00:00
Yiran
176f2df5b3 fix: dead links (#4837) 2024-10-16 07:43:14 +00:00
localhost
4622412dfe feat: add API to write OpenTelemetry logs to GreptimeDB (#4755)
* chore: otlp logs api

* feat: add API to write OpenTelemetry logs to GreptimeDB

* chore: fix test data schema error

* chore: modify the underlying data structure of the pipeline value map type from hashmap to btremap to keep key order

* chore: fix by pr comment

* chore: resolve conflicts and add some test

* chore: remove useless error

* chore: change otlp header name

* chore: fmt code

* chore: fix integration test for otlp log write api

* chore: fix by pr comment

* chore: set otlp body with fulltext default
2024-10-16 04:36:08 +00:00
jeremyhi
59ec90299b refactor: metasrv cannot be cloned (#4834)
* refactor: metasrv cannot be cloned

* chore: remove MetasrvInstance's clone
2024-10-15 11:36:48 +00:00
discord9
16b8cdc3d5 chore: bump version v0.9.4 (#4833) 2024-10-15 10:48:03 +00:00
Weny Xu
3197b8b535 feat: introduce default customizers (#4831)
* feat: introduce `DefaultHeartbeatHandlerGroupBuilderCustomizer` and `DefaultLeadershipChangeNotifierCustomizer`

* chore: code styling
2024-10-15 09:48:13 +00:00
zyy17
972c2441af chore: bump promql-parser to v0.4.1 and use to_string() for EvalStmt (#4832)
chore: bump promql-parser to v0.4.1 and use to_string() for EvalStmt
2024-10-15 08:50:37 +00:00
Ning Sun
bb8b54b5d3 feat: add some s2 geo functions (#4823)
* feat: add first batch of s2 functions

* refactor: update reusable code from main

* test: add sqlness tests for s2

* feat: add tostring function for s2

* Update src/common/function/src/scalars/geo/s2.rs

Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>

* Apply suggestions from code review

* one more change

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
2024-10-15 06:47:29 +00:00
Weny Xu
b5233e500b feat: defer HeartbeatHandlerGroup construction and enhance LeadershipChangeNotifier (#4826)
* feat: enhance `HeartbeatHandlerGroup`

* chore: apply suggestions from CR

* chore: minor refactoring

* chore: code styling

* chore: apply suggestions from CR
2024-10-15 03:35:31 +00:00
Ruihang Xia
b61a388d04 refactor: replace info logs with debug logs in region server (#4829)
* refactor: replace info logs with debug logs in region server

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: update error handling for closing and opening nonexistent regions

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-10-14 12:46:07 +00:00
Ruihang Xia
06e565d25a feat: cache logical region's metadata (#4827)
* feat: cache logical region's metadata

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: implement logical region locking for metadata operations

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* fix: correct typo in comment for MetadataRegion struct

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-10-14 08:44:13 +00:00
Yingwen
3b2ce31a19 feat: enable prof features by default (#4815)
* feat: enable prof by default

* docs: don't need to build with features

* feat: add common-pprof as optional dep for pprof feature

* build: remove optional

* feat: use dump_text
2024-10-14 03:32:47 +00:00
Ruihang Xia
a889ea88ca fix: case sensitive for __field__ matcher (#4822)
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-10-14 03:18:59 +00:00
Yingwen
2f2b4b306c feat!: implement interval type by multiple structs (#4772)
* define structs and methods

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>

* feat: re-implement interval types in time crate

* feat: use new

* feat: interval value

* feat: query crate interval

* feat: pg and mysql interval

* chore: remove unused imports

* chore: remove commented codes

* feat: make flow compile but may not work

* feat: flow datetime

* test: fix some tests

* test: fix some flow tests(WIP)

* chore: some fix test&docs

* fix: change interval order

* chore: remove unused codes

* chore: fix cilppy

* chore: now signature change

* chore: remove todo

* feat: update error message

---------

Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
Co-authored-by: discord9 <discord9@163.com>
2024-10-14 03:09:03 +00:00
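The "multiple structs" above presumably mirror Arrow's three interval flavors (year-month, day-time, month-day-nano). A sketch of that shape follows; the exact struct names and field layout in GreptimeDB may differ, so treat this purely as an assumed illustration:

```rust
/// Interval split into three concrete structs rather than one catch-all value,
/// presumably mirroring Arrow's three interval types. Assumed shape only.
#[derive(Debug, Clone, Copy, PartialEq)]
struct IntervalYearMonth {
    months: i32,
}

#[derive(Debug, Clone, Copy, PartialEq)]
struct IntervalDayTime {
    days: i32,
    milliseconds: i32,
}

#[derive(Debug, Clone, Copy, PartialEq)]
struct IntervalMonthDayNano {
    months: i32,
    days: i32,
    nanoseconds: i64,
}

impl IntervalYearMonth {
    /// A year-month interval is just a month count; years fold into months.
    fn new(years: i32, months: i32) -> Self {
        Self { months: years * 12 + months }
    }
}

fn main() {
    assert_eq!(IntervalYearMonth::new(1, 2), IntervalYearMonth { months: 14 });
    let _dt = IntervalDayTime { days: 3, milliseconds: 500 };
    let _mdn = IntervalMonthDayNano { months: 1, days: 2, nanoseconds: 3_000 };
}
```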
jeremyhi
856c0280f5 feat: remove the distributed lock (#4825)
* feat: remove the distributed lock as we do not need it any more

* chore: delete todo comment

* chore: remove unused error
2024-10-12 09:04:22 +00:00
Ruihang Xia
03b29439e2 Merge branch 'main' into transform-count-min-max
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-09-11 11:09:07 +08:00
Ruihang Xia
712f4ca0ef try sort partial commutative
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-09-09 21:08:59 +08:00
Ruihang Xia
60bacff57e ignore unmatched left and right greater
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-09-08 11:12:21 +08:00
Ruihang Xia
6208772ba4 Merge branch 'main' into transform-count-min-max
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-09-08 11:02:04 +08:00
Ruihang Xia
67184c0498 Merge branch 'main' into transform-count-min-max
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-09-05 14:30:47 +08:00
Ruihang Xia
1dd908fdf7 handle group by
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-09-05 12:50:13 +08:00
Ruihang Xia
8179b4798e feat: support transforming min/max/count aggr fn
Signed-off-by: Ruihang Xia <waynestxia@gmail.com>
2024-09-04 22:17:31 +08:00
376 changed files with 14584 additions and 4383 deletions

View File

@@ -40,7 +40,7 @@ runs:
     - name: Install PyArrow Package
       shell: pwsh
-      run: pip install pyarrow
+      run: pip install pyarrow numpy
     - name: Install WSL distribution
       uses: Vampire/setup-wsl@v2

View File

@@ -18,7 +18,7 @@ runs:
         --set replicaCount=${{ inputs.etcd-replicas }} \
         --set resources.requests.cpu=50m \
         --set resources.requests.memory=128Mi \
-        --set resources.limits.cpu=1000m \
+        --set resources.limits.cpu=1500m \
         --set resources.limits.memory=2Gi \
         --set auth.rbac.create=false \
         --set auth.rbac.token.enabled=false \

View File

@@ -436,7 +436,7 @@ jobs:
     timeout-minutes: 60
     strategy:
       matrix:
-        target: ["fuzz_migrate_mito_regions", "fuzz_failover_mito_regions", "fuzz_failover_metric_regions"]
+        target: ["fuzz_migrate_mito_regions", "fuzz_migrate_metric_regions", "fuzz_failover_mito_regions", "fuzz_failover_metric_regions"]
         mode:
           - name: "Remote WAL"
             minio: true
@@ -449,6 +449,12 @@
             minio: true
             kafka: false
             values: "with-minio.yaml"
+          - target: "fuzz_migrate_metric_regions"
+            mode:
+              name: "Local WAL"
+              minio: true
+              kafka: false
+              values: "with-minio.yaml"
     steps:
       - name: Remove unused software
         run: |
@@ -688,7 +694,7 @@
       with:
         python-version: '3.10'
       - name: Install PyArrow Package
-        run: pip install pyarrow
+        run: pip install pyarrow numpy
       - name: Setup etcd server
         working-directory: tests-integration/fixtures/etcd
         run: docker compose -f docker-compose-standalone.yml up -d --wait

View File

@@ -92,7 +92,7 @@ jobs:
       with:
         python-version: "3.10"
       - name: Install PyArrow Package
-        run: pip install pyarrow
+        run: pip install pyarrow numpy
       - name: Install WSL distribution
         uses: Vampire/setup-wsl@v2
         with:

Cargo.lock (generated, 273 changes)
View File

@@ -1,6 +1,6 @@
# This file is automatically @generated by Cargo. # This file is automatically @generated by Cargo.
# It is not intended for manual editing. # It is not intended for manual editing.
version = 3 version = 4
[[package]] [[package]]
name = "Inflector" name = "Inflector"
@@ -200,12 +200,6 @@ version = "1.0.89"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "86fdf8605db99b54d3cd748a44c6d04df638eb5dafb219b135d0149bd0db01f6" checksum = "86fdf8605db99b54d3cd748a44c6d04df638eb5dafb219b135d0149bd0db01f6"
[[package]]
name = "anymap"
version = "1.0.0-beta.2"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "8f1f8f5a6f3d50d89e3797d7593a50f96bb2aaa20ca0cc7be1fb673232c91d72"
[[package]] [[package]]
name = "anymap2" name = "anymap2"
version = "0.13.0" version = "0.13.0"
@@ -214,7 +208,7 @@ checksum = "d301b3b94cb4b2f23d7917810addbbaff90738e0ca2be692bd027e70d7e0330c"
[[package]] [[package]]
name = "api" name = "api"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"common-base", "common-base",
"common-decimal", "common-decimal",
@@ -230,6 +224,15 @@ dependencies = [
"tonic-build", "tonic-build",
] ]
[[package]]
name = "approx"
version = "0.4.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "3f2a05fd1bd10b2527e20a2cd32d8873d115b8b39fe219ee25f42a8aca6ba278"
dependencies = [
"num-traits",
]
[[package]] [[package]]
name = "approx" name = "approx"
version = "0.5.1" version = "0.5.1"
@@ -766,7 +769,7 @@ dependencies = [
[[package]] [[package]]
name = "auth" name = "auth"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -985,6 +988,7 @@ dependencies = [
"num-bigint", "num-bigint",
"num-integer", "num-integer",
"num-traits", "num-traits",
"serde",
] ]
[[package]] [[package]]
@@ -1375,7 +1379,7 @@ dependencies = [
[[package]] [[package]]
name = "cache" name = "cache"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"catalog", "catalog",
"common-error", "common-error",
@@ -1383,7 +1387,7 @@ dependencies = [
"common-meta", "common-meta",
"moka", "moka",
"snafu 0.8.5", "snafu 0.8.5",
"substrait 0.9.3", "substrait 0.9.5",
] ]
[[package]] [[package]]
@@ -1410,7 +1414,7 @@ checksum = "37b2a672a2cb129a2e41c10b1224bb368f9f37a2b16b612598138befd7b37eb5"
[[package]] [[package]]
name = "catalog" name = "catalog"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arrow", "arrow",
@@ -1548,6 +1552,16 @@ dependencies = [
"vob", "vob",
] ]
[[package]]
name = "cgmath"
version = "0.18.0"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "1a98d30140e3296250832bbaaff83b27dcd6fa3cc70fb6f1f3e5c9c0023b5317"
dependencies = [
"approx 0.4.0",
"num-traits",
]
[[package]] [[package]]
name = "chrono" name = "chrono"
version = "0.4.38" version = "0.4.38"
@@ -1739,7 +1753,7 @@ checksum = "1462739cb27611015575c0c11df5df7601141071f07518d56fcc1be504cbec97"
[[package]] [[package]]
name = "client" name = "client"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arc-swap", "arc-swap",
@@ -1769,12 +1783,11 @@ dependencies = [
"serde_json", "serde_json",
"snafu 0.8.5", "snafu 0.8.5",
"substrait 0.37.3", "substrait 0.37.3",
"substrait 0.9.3", "substrait 0.9.5",
"tokio", "tokio",
"tokio-stream", "tokio-stream",
"tonic 0.11.0", "tonic 0.11.0",
"tracing", "tracing",
"tracing-subscriber",
] ]
[[package]] [[package]]
@@ -1788,6 +1801,17 @@ dependencies = [
"winapi", "winapi",
] ]
[[package]]
name = "clocksource"
version = "0.8.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "129026dd5a8a9592d96916258f3a5379589e513ea5e86aeb0bd2530286e44e9e"
dependencies = [
"libc",
"time",
"winapi",
]
[[package]] [[package]]
name = "cmake" name = "cmake"
version = "0.1.51" version = "0.1.51"
@@ -1799,7 +1823,7 @@ dependencies = [
[[package]] [[package]]
name = "cmd" name = "cmd"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"auth", "auth",
@@ -1856,7 +1880,7 @@ dependencies = [
"similar-asserts", "similar-asserts",
"snafu 0.8.5", "snafu 0.8.5",
"store-api", "store-api",
"substrait 0.9.3", "substrait 0.9.5",
"table", "table",
"temp-env", "temp-env",
"tempfile", "tempfile",
@@ -1902,7 +1926,7 @@ checksum = "55b672471b4e9f9e95499ea597ff64941a309b2cdbffcc46f2cc5e2d971fd335"
[[package]] [[package]]
name = "common-base" name = "common-base"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"anymap2", "anymap2",
"async-trait", "async-trait",
@@ -1920,7 +1944,7 @@ dependencies = [
[[package]] [[package]]
name = "common-catalog" name = "common-catalog"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"chrono", "chrono",
"common-error", "common-error",
@@ -1931,7 +1955,7 @@ dependencies = [
[[package]] [[package]]
name = "common-config" name = "common-config"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"common-base", "common-base",
"common-error", "common-error",
@@ -1954,7 +1978,7 @@ dependencies = [
[[package]] [[package]]
name = "common-datasource" name = "common-datasource"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"arrow", "arrow",
"arrow-schema", "arrow-schema",
@@ -1991,7 +2015,7 @@ dependencies = [
[[package]] [[package]]
name = "common-decimal" name = "common-decimal"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"bigdecimal 0.4.5", "bigdecimal 0.4.5",
"common-error", "common-error",
@@ -2004,7 +2028,7 @@ dependencies = [
[[package]] [[package]]
name = "common-error" name = "common-error"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"snafu 0.8.5", "snafu 0.8.5",
"strum 0.25.0", "strum 0.25.0",
@@ -2013,7 +2037,7 @@ dependencies = [
[[package]] [[package]]
name = "common-frontend" name = "common-frontend"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -2028,7 +2052,7 @@ dependencies = [
[[package]] [[package]]
name = "common-function" name = "common-function"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arc-swap", "arc-swap",
@@ -2054,6 +2078,7 @@ dependencies = [
"once_cell", "once_cell",
"paste", "paste",
"ron", "ron",
"s2",
"serde", "serde",
"serde_json", "serde_json",
"session", "session",
@@ -2067,7 +2092,7 @@ dependencies = [
[[package]] [[package]]
name = "common-greptimedb-telemetry" name = "common-greptimedb-telemetry"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"common-runtime", "common-runtime",
@@ -2084,7 +2109,7 @@ dependencies = [
[[package]] [[package]]
name = "common-grpc" name = "common-grpc"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arrow-flight", "arrow-flight",
@@ -2110,7 +2135,7 @@ dependencies = [
[[package]] [[package]]
name = "common-grpc-expr" name = "common-grpc-expr"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"common-base", "common-base",
@@ -2123,12 +2148,13 @@ dependencies = [
"paste", "paste",
"prost 0.12.6", "prost 0.12.6",
"snafu 0.8.5", "snafu 0.8.5",
"store-api",
"table", "table",
] ]
[[package]] [[package]]
name = "common-macro" name = "common-macro"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"arc-swap", "arc-swap",
"common-query", "common-query",
@@ -2142,7 +2168,7 @@ dependencies = [
[[package]] [[package]]
name = "common-mem-prof" name = "common-mem-prof"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"common-error", "common-error",
"common-macro", "common-macro",
@@ -2155,7 +2181,7 @@ dependencies = [
[[package]] [[package]]
name = "common-meta" name = "common-meta"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"anymap2", "anymap2",
"api", "api",
@@ -2212,11 +2238,23 @@ dependencies = [
[[package]] [[package]]
name = "common-plugins" name = "common-plugins"
version = "0.9.3" version = "0.9.5"
[[package]]
name = "common-pprof"
version = "0.9.5"
dependencies = [
"common-error",
"common-macro",
"pprof",
"prost 0.12.6",
"snafu 0.8.5",
"tokio",
]
[[package]] [[package]]
name = "common-procedure" name = "common-procedure"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-stream", "async-stream",
"async-trait", "async-trait",
@@ -2243,7 +2281,7 @@ dependencies = [
[[package]] [[package]]
name = "common-procedure-test" name = "common-procedure-test"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"common-procedure", "common-procedure",
@@ -2251,7 +2289,7 @@ dependencies = [
[[package]] [[package]]
name = "common-query" name = "common-query"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -2277,7 +2315,7 @@ dependencies = [
[[package]] [[package]]
name = "common-recordbatch" name = "common-recordbatch"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"arc-swap", "arc-swap",
"common-error", "common-error",
@@ -2296,19 +2334,27 @@ dependencies = [
[[package]] [[package]]
name = "common-runtime" name = "common-runtime"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"clap 4.5.19",
"common-error", "common-error",
"common-macro", "common-macro",
"common-telemetry", "common-telemetry",
"futures",
"lazy_static", "lazy_static",
"num_cpus", "num_cpus",
"once_cell", "once_cell",
"parking_lot 0.12.3",
"paste", "paste",
"pin-project",
"prometheus", "prometheus",
"rand",
"ratelimit",
"serde", "serde",
"serde_json",
"snafu 0.8.5", "snafu 0.8.5",
"tempfile",
"tokio", "tokio",
"tokio-metrics", "tokio-metrics",
"tokio-metrics-collector", "tokio-metrics-collector",
@@ -2318,7 +2364,7 @@ dependencies = [
[[package]] [[package]]
name = "common-telemetry" name = "common-telemetry"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"atty", "atty",
"backtrace", "backtrace",
@@ -2346,7 +2392,7 @@ dependencies = [
[[package]] [[package]]
name = "common-test-util" name = "common-test-util"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"client", "client",
"common-query", "common-query",
@@ -2358,7 +2404,7 @@ dependencies = [
[[package]] [[package]]
name = "common-time" name = "common-time"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"arrow", "arrow",
"chrono", "chrono",
@@ -2374,7 +2420,7 @@ dependencies = [
[[package]] [[package]]
name = "common-version" name = "common-version"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"build-data", "build-data",
"const_format", "const_format",
@@ -2385,7 +2431,7 @@ dependencies = [
[[package]] [[package]]
name = "common-wal" name = "common-wal"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"common-base", "common-base",
"common-error", "common-error",
@@ -3194,7 +3240,7 @@ dependencies = [
[[package]] [[package]]
name = "datanode" name = "datanode"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arrow-flight", "arrow-flight",
@@ -3244,7 +3290,7 @@ dependencies = [
"session", "session",
"snafu 0.8.5", "snafu 0.8.5",
"store-api", "store-api",
"substrait 0.9.3", "substrait 0.9.5",
"table", "table",
"tokio", "tokio",
"toml 0.8.19", "toml 0.8.19",
@@ -3253,7 +3299,7 @@ dependencies = [
[[package]] [[package]]
name = "datatypes" name = "datatypes"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"arrow", "arrow",
"arrow-array", "arrow-array",
@@ -3859,7 +3905,7 @@ dependencies = [
[[package]] [[package]]
name = "file-engine" name = "file-engine"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -3959,9 +4005,18 @@ version = "1.0.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "28a80e3145d8ad11ba0995949bbcf48b9df2be62772b3d351ef017dff6ecb853" checksum = "28a80e3145d8ad11ba0995949bbcf48b9df2be62772b3d351ef017dff6ecb853"
[[package]]
name = "float_extras"
version = "0.1.6"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b22b70f8649ea2315955f1a36d964b0e4da482dfaa5f0d04df0d1fb7c338ab7a"
dependencies = [
"libc",
]
[[package]] [[package]]
name = "flow" name = "flow"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arrow", "arrow",
@@ -4018,7 +4073,7 @@ dependencies = [
"snafu 0.8.5", "snafu 0.8.5",
"store-api", "store-api",
"strum 0.25.0", "strum 0.25.0",
"substrait 0.9.3", "substrait 0.9.5",
"table", "table",
"tokio", "tokio",
"tonic 0.11.0", "tonic 0.11.0",
@@ -4080,7 +4135,7 @@ checksum = "6c2141d6d6c8512188a7891b4b01590a45f6dac67afb4f255c4124dbb86d4eaa"
[[package]] [[package]]
name = "frontend" name = "frontend"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arc-swap", "arc-swap",
@@ -4389,7 +4444,7 @@ version = "0.7.13"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "9ff16065e5720f376fbced200a5ae0f47ace85fd70b7e54269790281353b6d61" checksum = "9ff16065e5720f376fbced200a5ae0f47ace85fd70b7e54269790281353b6d61"
dependencies = [ dependencies = [
"approx", "approx 0.5.1",
"num-traits", "num-traits",
"serde", "serde",
] ]
@@ -4476,7 +4531,7 @@ dependencies = [
[[package]] [[package]]
name = "greptime-proto" name = "greptime-proto"
version = "0.1.0" version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=0b4f7c8ab06399f6b90e1626e8d5b9697cb33bb9#0b4f7c8ab06399f6b90e1626e8d5b9697cb33bb9" source = "git+https://github.com/GreptimeTeam/greptime-proto.git?rev=255f87a3318ace3f88a67f76995a0e14910983f4#255f87a3318ace3f88a67f76995a0e14910983f4"
dependencies = [ dependencies = [
"prost 0.12.6", "prost 0.12.6",
"serde", "serde",
@@ -5128,7 +5183,7 @@ dependencies = [
[[package]] [[package]]
name = "index" name = "index"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"asynchronous-codec", "asynchronous-codec",
@@ -5208,7 +5263,7 @@ dependencies = [
[[package]] [[package]]
name = "influxdb_line_protocol" name = "influxdb_line_protocol"
version = "0.1.0" version = "0.1.0"
source = "git+https://github.com/evenyag/influxdb_iox?branch=feat/line-protocol#10ef0d0b02705ac7518717390939fa3a9bcfcacc" source = "git+https://github.com/evenyag/influxdb_iox?branch=feat%2Fline-protocol#10ef0d0b02705ac7518717390939fa3a9bcfcacc"
dependencies = [ dependencies = [
"bytes", "bytes",
"nom", "nom",
@@ -5469,7 +5524,7 @@ dependencies = [
[[package]] [[package]]
name = "jsonb" name = "jsonb"
version = "0.4.1" version = "0.4.1"
source = "git+https://github.com/datafuselabs/jsonb.git?rev=46ad50fc71cf75afbf98eec455f7892a6387c1fc#46ad50fc71cf75afbf98eec455f7892a6387c1fc" source = "git+https://github.com/databendlabs/jsonb.git?rev=46ad50fc71cf75afbf98eec455f7892a6387c1fc#46ad50fc71cf75afbf98eec455f7892a6387c1fc"
dependencies = [ dependencies = [
"byteorder", "byteorder",
"fast-float", "fast-float",
@@ -5959,7 +6014,7 @@ checksum = "a7a70ba024b9dc04c27ea2f0c0548feb474ec5c54bba33a7f72f873a39d07b24"
[[package]] [[package]]
name = "log-store" name = "log-store"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-stream", "async-stream",
"async-trait", "async-trait",
@@ -6279,7 +6334,7 @@ dependencies = [
[[package]] [[package]]
name = "meta-client" name = "meta-client"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -6305,7 +6360,7 @@ dependencies = [
[[package]] [[package]]
name = "meta-srv" name = "meta-srv"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -6366,9 +6421,9 @@ dependencies = [
[[package]] [[package]]
name = "meter-core" name = "meter-core"
version = "0.1.0" version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-meter.git?rev=80eb97c24c88af4dd9a86f8bbaf50e741d4eb8cd#80eb97c24c88af4dd9a86f8bbaf50e741d4eb8cd" source = "git+https://github.com/GreptimeTeam/greptime-meter.git?rev=a10facb353b41460eeb98578868ebf19c2084fac#a10facb353b41460eeb98578868ebf19c2084fac"
dependencies = [ dependencies = [
"anymap", "anymap2",
"once_cell", "once_cell",
"parking_lot 0.12.3", "parking_lot 0.12.3",
] ]
@@ -6376,14 +6431,14 @@ dependencies = [
[[package]] [[package]]
name = "meter-macros" name = "meter-macros"
version = "0.1.0" version = "0.1.0"
source = "git+https://github.com/GreptimeTeam/greptime-meter.git?rev=80eb97c24c88af4dd9a86f8bbaf50e741d4eb8cd#80eb97c24c88af4dd9a86f8bbaf50e741d4eb8cd" source = "git+https://github.com/GreptimeTeam/greptime-meter.git?rev=a10facb353b41460eeb98578868ebf19c2084fac#a10facb353b41460eeb98578868ebf19c2084fac"
dependencies = [ dependencies = [
"meter-core", "meter-core",
] ]
[[package]] [[package]]
name = "metric-engine" name = "metric-engine"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"aquamarine", "aquamarine",
@@ -6486,7 +6541,7 @@ dependencies = [
[[package]] [[package]]
name = "mito2" name = "mito2"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"aquamarine", "aquamarine",
@@ -6871,7 +6926,7 @@ version = "0.29.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "d506eb7e08d6329505faa8a3a00a5dcc6de9f76e0c77e4b75763ae3c770831ff" checksum = "d506eb7e08d6329505faa8a3a00a5dcc6de9f76e0c77e4b75763ae3c770831ff"
dependencies = [ dependencies = [
"approx", "approx 0.5.1",
"matrixmultiply", "matrixmultiply",
"nalgebra-macros", "nalgebra-macros",
"num-complex", "num-complex",
@@ -7222,7 +7277,7 @@ dependencies = [
[[package]] [[package]]
name = "object-store" name = "object-store"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"anyhow", "anyhow",
"bytes", "bytes",
@@ -7507,12 +7562,13 @@ dependencies = [
"ordered-float 4.3.0", "ordered-float 4.3.0",
"percent-encoding", "percent-encoding",
"rand", "rand",
"serde_json",
"thiserror", "thiserror",
] ]
[[package]] [[package]]
name = "operator" name = "operator"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -7557,7 +7613,7 @@ dependencies = [
"sql", "sql",
"sqlparser 0.45.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=54a267ac89c09b11c0c88934690530807185d3e7)", "sqlparser 0.45.0 (git+https://github.com/GreptimeTeam/sqlparser-rs.git?rev=54a267ac89c09b11c0c88934690530807185d3e7)",
"store-api", "store-api",
"substrait 0.9.3", "substrait 0.9.5",
"table", "table",
"tokio", "tokio",
"tokio-util", "tokio-util",
@@ -7807,7 +7863,7 @@ dependencies = [
[[package]] [[package]]
name = "partition" name = "partition"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -8108,7 +8164,7 @@ checksum = "8b870d8c151b6f2fb93e84a13146138f05d02ed11c7e7c54f8826aaaf7c9f184"
[[package]] [[package]]
name = "pipeline" name = "pipeline"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"ahash 0.8.11", "ahash 0.8.11",
"api", "api",
@@ -8270,13 +8326,14 @@ dependencies = [
[[package]] [[package]]
name = "plugins" name = "plugins"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"auth", "auth",
"common-base", "common-base",
"datanode", "datanode",
"frontend", "frontend",
"meta-srv", "meta-srv",
"serde",
"snafu 0.8.5", "snafu 0.8.5",
] ]
@@ -8544,7 +8601,7 @@ dependencies = [
[[package]] [[package]]
name = "promql" name = "promql"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"ahash 0.8.11", "ahash 0.8.11",
"async-trait", "async-trait",
@@ -8570,15 +8627,18 @@ dependencies = [
[[package]] [[package]]
name = "promql-parser" name = "promql-parser"
version = "0.4.0" version = "0.4.3"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "007a331efb31f6ddb644590ef22359c9469784931162aad92599e34bcfa66583" checksum = "7fe99e6f80a79abccf1e8fb48dd63473a36057e600cc6ea36147c8318698ae6f"
dependencies = [ dependencies = [
"cfgrammar", "cfgrammar",
"chrono",
"lazy_static", "lazy_static",
"lrlex", "lrlex",
"lrpar", "lrpar",
"regex", "regex",
"serde",
"serde_json",
] ]
[[package]] [[package]]
@@ -8779,7 +8839,7 @@ dependencies = [
[[package]] [[package]]
name = "puffin" name = "puffin"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-compression 0.4.13", "async-compression 0.4.13",
"async-trait", "async-trait",
@@ -8901,7 +8961,7 @@ dependencies = [
[[package]] [[package]]
name = "query" name = "query"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"ahash 0.8.11", "ahash 0.8.11",
"api", "api",
@@ -8936,6 +8996,7 @@ dependencies = [
"datafusion-physical-expr", "datafusion-physical-expr",
"datafusion-sql", "datafusion-sql",
"datatypes", "datatypes",
"fastrand",
"format_num", "format_num",
"futures", "futures",
"futures-util", "futures-util",
@@ -8950,12 +9011,15 @@ dependencies = [
"object-store", "object-store",
"once_cell", "once_cell",
"paste", "paste",
"pretty_assertions",
"prometheus", "prometheus",
"promql", "promql",
"promql-parser", "promql-parser",
"prost 0.12.6", "prost 0.12.6",
"rand", "rand",
"regex", "regex",
"serde",
"serde_json",
"session", "session",
"snafu 0.8.5", "snafu 0.8.5",
"sql", "sql",
@@ -8964,10 +9028,11 @@ dependencies = [
"stats-cli", "stats-cli",
"store-api", "store-api",
"streaming-stats", "streaming-stats",
"substrait 0.9.3", "substrait 0.9.5",
"table", "table",
"tokio", "tokio",
"tokio-stream", "tokio-stream",
"uuid",
] ]
[[package]] [[package]]
@@ -9147,6 +9212,17 @@ dependencies = [
"rand", "rand",
] ]
[[package]]
name = "ratelimit"
version = "0.9.1"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "6c1bb13e2dcfa2232ac6887157aad8d9b3fe4ca57f7c8d4938ff5ea9be742300"
dependencies = [
"clocksource",
"parking_lot 0.12.3",
"thiserror",
]
[[package]] [[package]]
name = "raw-cpuid" name = "raw-cpuid"
version = "11.2.0" version = "11.2.0"
@@ -10262,6 +10338,20 @@ version = "1.0.18"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f" checksum = "f3cb5ba0dc43242ce17de99c180e96db90b235b8a9fdc9543c96d2209116bd9f"
[[package]]
name = "s2"
version = "0.0.12"
source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "cc7fbc04bb52c40b5f48c9bb2d2961375301916e0c25d9f373750654d588cd5c"
dependencies = [
"bigdecimal 0.3.1",
"cgmath",
"float_extras",
"lazy_static",
"libm",
"serde",
]
[[package]] [[package]]
name = "safe-proc-macro2" name = "safe-proc-macro2"
version = "1.0.67" version = "1.0.67"
@@ -10384,7 +10474,7 @@ checksum = "94143f37725109f92c262ed2cf5e59bce7498c01bcc1502d7b9afe439a4e9f49"
[[package]] [[package]]
name = "script" name = "script"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arc-swap", "arc-swap",
@@ -10678,8 +10768,9 @@ dependencies = [
[[package]] [[package]]
name = "servers" name = "servers"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"ahash 0.8.11",
"aide", "aide",
"api", "api",
"arrow", "arrow",
@@ -10705,6 +10796,7 @@ dependencies = [
"common-mem-prof", "common-mem-prof",
"common-meta", "common-meta",
"common-plugins", "common-plugins",
"common-pprof",
"common-query", "common-query",
"common-recordbatch", "common-recordbatch",
"common-runtime", "common-runtime",
@@ -10787,7 +10879,7 @@ dependencies = [
[[package]] [[package]]
name = "session" name = "session"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arc-swap", "arc-swap",
@@ -10848,9 +10940,9 @@ dependencies = [
[[package]] [[package]]
name = "shadow-rs" name = "shadow-rs"
version = "0.31.1" version = "0.35.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "02c282402d25101f9c893e9cd7e4cae535fe7db18b81291de973026c219ddf1e" checksum = "2311e39772c00391875f40e34d43efef247b23930143a70ca5fbec9505937420"
dependencies = [ dependencies = [
"const_format", "const_format",
"git2", "git2",
@@ -10899,7 +10991,7 @@ version = "0.6.0"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "f0b7840f121a46d63066ee7a99fc81dcabbc6105e437cae43528cea199b5a05f" checksum = "f0b7840f121a46d63066ee7a99fc81dcabbc6105e437cae43528cea199b5a05f"
dependencies = [ dependencies = [
"approx", "approx 0.5.1",
"num-complex", "num-complex",
"num-traits", "num-traits",
"paste", "paste",
@@ -11108,7 +11200,7 @@ dependencies = [
[[package]] [[package]]
name = "sql" name = "sql"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"chrono", "chrono",
@@ -11169,7 +11261,7 @@ dependencies = [
[[package]] [[package]]
name = "sqlness-runner" name = "sqlness-runner"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"clap 4.5.19", "clap 4.5.19",
@@ -11370,7 +11462,7 @@ version = "0.16.1"
source = "registry+https://github.com/rust-lang/crates.io-index" source = "registry+https://github.com/rust-lang/crates.io-index"
checksum = "b35a062dbadac17a42e0fc64c27f419b25d6fae98572eb43c8814c9e873d7721" checksum = "b35a062dbadac17a42e0fc64c27f419b25d6fae98572eb43c8814c9e873d7721"
dependencies = [ dependencies = [
"approx", "approx 0.5.1",
"lazy_static", "lazy_static",
"nalgebra", "nalgebra",
"num-traits", "num-traits",
@@ -11389,7 +11481,7 @@ dependencies = [
[[package]] [[package]]
name = "store-api" name = "store-api"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"aquamarine", "aquamarine",
@@ -11407,6 +11499,7 @@ dependencies = [
"datatypes", "datatypes",
"derive_builder 0.12.0", "derive_builder 0.12.0",
"futures", "futures",
"humantime",
"serde", "serde",
"serde_json", "serde_json",
"snafu 0.8.5", "snafu 0.8.5",
@@ -11558,7 +11651,7 @@ dependencies = [
[[package]] [[package]]
name = "substrait" name = "substrait"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"async-trait", "async-trait",
"bytes", "bytes",
@@ -11757,7 +11850,7 @@ dependencies = [
[[package]] [[package]]
name = "table" name = "table"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"async-trait", "async-trait",
@@ -12023,7 +12116,7 @@ checksum = "3369f5ac52d5eb6ab48c6b4ffdc8efbcad6b89c765749064ba298f2c68a16a76"
[[package]] [[package]]
name = "tests-fuzz" name = "tests-fuzz"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"arbitrary", "arbitrary",
"async-trait", "async-trait",
@@ -12065,7 +12158,7 @@ dependencies = [
[[package]] [[package]]
name = "tests-integration" name = "tests-integration"
version = "0.9.3" version = "0.9.5"
dependencies = [ dependencies = [
"api", "api",
"arrow-flight", "arrow-flight",
@@ -12127,7 +12220,7 @@ dependencies = [
"sql", "sql",
"sqlx", "sqlx",
"store-api", "store-api",
"substrait 0.9.3", "substrait 0.9.5",
"table", "table",
"tempfile", "tempfile",
"time", "time",

View File

@@ -20,6 +20,7 @@ members = [
     "src/common/mem-prof",
     "src/common/meta",
     "src/common/plugins",
+    "src/common/pprof",
     "src/common/procedure",
     "src/common/procedure-test",
     "src/common/query",
@@ -64,7 +65,7 @@
 resolver = "2"
 [workspace.package]
-version = "0.9.3"
+version = "0.9.5"
 edition = "2021"
 license = "Apache-2.0"
@@ -120,13 +121,13 @@ etcd-client = { version = "0.13" }
 fst = "0.4.7"
 futures = "0.3"
 futures-util = "0.3"
-greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "0b4f7c8ab06399f6b90e1626e8d5b9697cb33bb9" }
+greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "255f87a3318ace3f88a67f76995a0e14910983f4" }
 humantime = "2.1"
 humantime-serde = "1.1"
 itertools = "0.10"
-jsonb = { git = "https://github.com/datafuselabs/jsonb.git", rev = "46ad50fc71cf75afbf98eec455f7892a6387c1fc", default-features = false }
+jsonb = { git = "https://github.com/databendlabs/jsonb.git", rev = "46ad50fc71cf75afbf98eec455f7892a6387c1fc", default-features = false }
 lazy_static = "1.4"
-meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "80eb97c24c88af4dd9a86f8bbaf50e741d4eb8cd" }
+meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "a10facb353b41460eeb98578868ebf19c2084fac" }
 mockall = "0.11.4"
 moka = "0.12"
 notify = "6.1"
@@ -137,15 +138,18 @@ opentelemetry-proto = { version = "0.5", features = [
   "metrics",
   "trace",
   "with-serde",
+  "logs",
 ] }
+parking_lot = "0.12"
 parquet = { version = "51.0.0", default-features = false, features = ["arrow", "async", "object_store"] }
 paste = "1.0"
 pin-project = "1.0"
 prometheus = { version = "0.13.3", features = ["process"] }
-promql-parser = { version = "0.4" }
+promql-parser = { version = "0.4.3", features = ["ser"] }
 prost = "0.12"
 raft-engine = { version = "0.4.1", default-features = false }
 rand = "0.8"
+ratelimit = "0.9"
 regex = "1.8"
 regex-automata = { version = "0.4" }
 reqwest = { version = "0.12", default-features = false, features = [
@@ -165,7 +169,7 @@ schemars = "0.8"
 serde = { version = "1.0", features = ["derive"] }
 serde_json = { version = "1.0", features = ["float_roundtrip"] }
 serde_with = "3"
-shadow-rs = "0.31"
+shadow-rs = "0.35"
 similar-asserts = "1.6.0"
 smallvec = { version = "1", features = ["serde"] }
 snafu = "0.8"
@@ -176,13 +180,16 @@ sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "5
 ] }
 strum = { version = "0.25", features = ["derive"] }
 tempfile = "3"
-tokio = { version = "1.36", features = ["full"] }
+tokio = { version = "1.40", features = ["full"] }
 tokio-postgres = "0.7"
 tokio-stream = { version = "0.1" }
 tokio-util = { version = "0.7", features = ["io-util", "compat"] }
 toml = "0.8.8"
 tonic = { version = "0.11", features = ["tls", "gzip", "zstd"] }
 tower = { version = "0.4" }
+tracing-appender = "0.2"
+tracing-subscriber = { version = "0.3", features = ["env-filter", "json", "fmt"] }
+typetag = "0.2"
 uuid = { version = "1.7", features = ["serde", "v4", "fast-rng"] }
 zstd = "0.13"
@@ -208,6 +215,7 @@ common-macro = { path = "src/common/macro" }
 common-mem-prof = { path = "src/common/mem-prof" }
 common-meta = { path = "src/common/meta" }
 common-plugins = { path = "src/common/plugins" }
+common-pprof = { path = "src/common/pprof" }
 common-procedure = { path = "src/common/procedure" }
 common-procedure-test = { path = "src/common/procedure-test" }
 common-query = { path = "src/common/query" }
@@ -256,7 +264,7 @@ tokio-rustls = { git = "https://github.com/GreptimeTeam/tokio-rustls" }
 [workspace.dependencies.meter-macros]
 git = "https://github.com/GreptimeTeam/greptime-meter.git"
-rev = "80eb97c24c88af4dd9a86f8bbaf50e741d4eb8cd"
+rev = "a10facb353b41460eeb98578868ebf19c2084fac"
 [profile.release]
 debug = 1

View File

@@ -8,7 +8,7 @@ CARGO_BUILD_OPTS := --locked
 IMAGE_REGISTRY ?= docker.io
 IMAGE_NAMESPACE ?= greptime
 IMAGE_TAG ?= latest
-DEV_BUILDER_IMAGE_TAG ?= 2024-06-06-5674c14f-20240920110415
+DEV_BUILDER_IMAGE_TAG ?= 2024-10-19-a5c00e85-20241024184445
 BUILDX_MULTI_PLATFORM_BUILD ?= false
 BUILDX_BUILDER_NAME ?= gtbuilder
 BASE_IMAGE ?= ubuntu

View File

@@ -83,6 +83,7 @@
 | `wal.backoff_max` | String | `10s` | The maximum backoff delay.<br/>**It's only used when the provider is `kafka`**. |
 | `wal.backoff_base` | Integer | `2` | The exponential backoff rate, i.e. next backoff = base * current backoff.<br/>**It's only used when the provider is `kafka`**. |
 | `wal.backoff_deadline` | String | `5mins` | The deadline of retries.<br/>**It's only used when the provider is `kafka`**. |
+| `wal.overwrite_entry_start_id` | Bool | `false` | Ignore missing entries during read WAL.<br/>**It's only used when the provider is `kafka`**.<br/><br/>This option ensures that when Kafka messages are deleted, the system<br/>can still successfully replay memtable data without throwing an<br/>out-of-range error.<br/>However, enabling this option might lead to unexpected data loss,<br/>as the system will skip over missing entries instead of treating<br/>them as critical errors. |
 | `metadata_store` | -- | -- | Metadata storage options. |
 | `metadata_store.file_size` | String | `256MB` | Kv file size in bytes. |
 | `metadata_store.purge_threshold` | String | `4GB` | Kv purge threshold. |
@@ -115,7 +116,9 @@
 | `region_engine.mito.worker_request_batch_size` | Integer | `64` | Max batch size for a worker to handle requests. |
 | `region_engine.mito.manifest_checkpoint_distance` | Integer | `10` | Number of meta action updated to trigger a new checkpoint for the manifest. |
 | `region_engine.mito.compress_manifest` | Bool | `false` | Whether to compress manifest and checkpoint file by gzip (default false). |
-| `region_engine.mito.max_background_jobs` | Integer | `4` | Max number of running background jobs |
+| `region_engine.mito.max_background_flushes` | Integer | Auto | Max number of running background flush jobs (default: 1/2 of cpu cores). |
+| `region_engine.mito.max_background_compactions` | Integer | Auto | Max number of running background compaction jobs (default: 1/4 of cpu cores). |
+| `region_engine.mito.max_background_purges` | Integer | Auto | Max number of running background purge jobs (default: number of cpu cores). |
 | `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
 | `region_engine.mito.global_write_buffer_size` | String | Auto | Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB. |
 | `region_engine.mito.global_write_buffer_reject_size` | String | Auto | Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size`. |
@@ -409,6 +412,7 @@
 | `wal.backoff_deadline` | String | `5mins` | The deadline of retries.<br/>**It's only used when the provider is `kafka`**. |
 | `wal.create_index` | Bool | `true` | Whether to enable WAL index creation.<br/>**It's only used when the provider is `kafka`**. |
 | `wal.dump_index_interval` | String | `60s` | The interval for dumping WAL indexes.<br/>**It's only used when the provider is `kafka`**. |
+| `wal.overwrite_entry_start_id` | Bool | `false` | Ignore missing entries during read WAL.<br/>**It's only used when the provider is `kafka`**.<br/><br/>This option ensures that when Kafka messages are deleted, the system<br/>can still successfully replay memtable data without throwing an<br/>out-of-range error.<br/>However, enabling this option might lead to unexpected data loss,<br/>as the system will skip over missing entries instead of treating<br/>them as critical errors. |
 | `storage` | -- | -- | The data storage options. |
 | `storage.data_home` | String | `/tmp/greptimedb/` | The working home directory. |
 | `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
@@ -435,7 +439,9 @@
 | `region_engine.mito.worker_request_batch_size` | Integer | `64` | Max batch size for a worker to handle requests. |
 | `region_engine.mito.manifest_checkpoint_distance` | Integer | `10` | Number of meta action updated to trigger a new checkpoint for the manifest. |
 | `region_engine.mito.compress_manifest` | Bool | `false` | Whether to compress manifest and checkpoint file by gzip (default false). |
-| `region_engine.mito.max_background_jobs` | Integer | `4` | Max number of running background jobs |
+| `region_engine.mito.max_background_flushes` | Integer | Auto | Max number of running background flush jobs (default: 1/2 of cpu cores). |
+| `region_engine.mito.max_background_compactions` | Integer | Auto | Max number of running background compaction jobs (default: 1/4 of cpu cores). |
+| `region_engine.mito.max_background_purges` | Integer | Auto | Max number of running background purge jobs (default: number of cpu cores). |
 | `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
 | `region_engine.mito.global_write_buffer_size` | String | Auto | Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB. |
 | `region_engine.mito.global_write_buffer_reject_size` | String | Auto | Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size` |

View File

@@ -213,6 +213,17 @@ create_index = true
 ## **It's only used when the provider is `kafka`**.
 dump_index_interval = "60s"
+## Ignore missing entries during read WAL.
+## **It's only used when the provider is `kafka`**.
+##
+## This option ensures that when Kafka messages are deleted, the system
+## can still successfully replay memtable data without throwing an
+## out-of-range error.
+## However, enabling this option might lead to unexpected data loss,
+## as the system will skip over missing entries instead of treating
+## them as critical errors.
+overwrite_entry_start_id = false
 # The Kafka SASL configuration.
 # **It's only used when the provider is `kafka`**.
 # Available SASL mechanisms:
@@ -405,8 +416,17 @@ manifest_checkpoint_distance = 10
 ## Whether to compress manifest and checkpoint file by gzip (default false).
 compress_manifest = false
-## Max number of running background jobs
-max_background_jobs = 4
+## Max number of running background flush jobs (default: 1/2 of cpu cores).
+## @toml2docs:none-default="Auto"
+#+ max_background_flushes = 4
+## Max number of running background compaction jobs (default: 1/4 of cpu cores).
+## @toml2docs:none-default="Auto"
+#+ max_background_compactions = 2
+## Max number of running background purge jobs (default: number of cpu cores).
+## @toml2docs:none-default="Auto"
+#+ max_background_purges = 8
 ## Interval to auto flush a region if it has not flushed yet.
 auto_flush_interval = "1h"
@@ -626,7 +646,7 @@ url = ""
 headers = { }
 ## The tracing options. Only effect when compiled with `tokio-console` feature.
-[tracing]
+#+ [tracing]
 ## The tokio console address.
 ## @toml2docs:none-default
-tokio_console_addr = "127.0.0.1"
+#+ tokio_console_addr = "127.0.0.1"

View File

@@ -101,8 +101,8 @@ threshold = "10s"
 sample_ratio = 1.0
 ## The tracing options. Only effect when compiled with `tokio-console` feature.
-[tracing]
+#+ [tracing]
 ## The tokio console address.
 ## @toml2docs:none-default
-tokio_console_addr = "127.0.0.1"
+#+ tokio_console_addr = "127.0.0.1"

View File

@@ -231,7 +231,7 @@ url = ""
 headers = { }
 ## The tracing options. Only effect when compiled with `tokio-console` feature.
-[tracing]
+#+ [tracing]
 ## The tokio console address.
 ## @toml2docs:none-default
-tokio_console_addr = "127.0.0.1"
+#+ tokio_console_addr = "127.0.0.1"

View File

@@ -218,7 +218,7 @@ url = ""
 headers = { }
 ## The tracing options. Only effect when compiled with `tokio-console` feature.
-[tracing]
+#+ [tracing]
 ## The tokio console address.
 ## @toml2docs:none-default
-tokio_console_addr = "127.0.0.1"
+#+ tokio_console_addr = "127.0.0.1"

View File

@@ -237,6 +237,17 @@ backoff_base = 2
 ## **It's only used when the provider is `kafka`**.
 backoff_deadline = "5mins"
+## Ignore missing entries during read WAL.
+## **It's only used when the provider is `kafka`**.
+##
+## This option ensures that when Kafka messages are deleted, the system
+## can still successfully replay memtable data without throwing an
+## out-of-range error.
+## However, enabling this option might lead to unexpected data loss,
+## as the system will skip over missing entries instead of treating
+## them as critical errors.
+overwrite_entry_start_id = false
 # The Kafka SASL configuration.
 # **It's only used when the provider is `kafka`**.
 # Available SASL mechanisms:
@@ -443,8 +454,17 @@ manifest_checkpoint_distance = 10
 ## Whether to compress manifest and checkpoint file by gzip (default false).
 compress_manifest = false
-## Max number of running background jobs
-max_background_jobs = 4
+## Max number of running background flush jobs (default: 1/2 of cpu cores).
+## @toml2docs:none-default="Auto"
+#+ max_background_flushes = 4
+## Max number of running background compaction jobs (default: 1/4 of cpu cores).
+## @toml2docs:none-default="Auto"
+#+ max_background_compactions = 2
+## Max number of running background purge jobs (default: number of cpu cores).
+## @toml2docs:none-default="Auto"
+#+ max_background_purges = 8
 ## Interval to auto flush a region if it has not flushed yet.
 auto_flush_interval = "1h"
@@ -670,7 +690,7 @@ url = ""
 headers = { }
 ## The tracing options. Only effect when compiled with `tokio-console` feature.
-[tracing]
+#+ [tracing]
 ## The tokio console address.
 ## @toml2docs:none-default
-tokio_console_addr = "127.0.0.1"
+#+ tokio_console_addr = "127.0.0.1"

View File

@@ -48,4 +48,4 @@ Please refer to [SQL query](./query.sql) for GreptimeDB and Clickhouse, and [que
 ## Addition
 - You can tune GreptimeDB's configuration to get better performance.
-- You can setup GreptimeDB to use S3 as storage, see [here](https://docs.greptime.com/user-guide/operations/configuration/#storage-options).
+- You can setup GreptimeDB to use S3 as storage, see [here](https://docs.greptime.com/user-guide/deployments/configuration#storage-options).

View File

@@ -0,0 +1,16 @@
+# Change Log Level on the Fly
+## HTTP API
+Example:
+```bash
+curl --data "trace;flow=debug" 127.0.0.1:4000/debug/log_level
+```
+And the database will reply with something like:
+```bash
+Log Level changed from Some("info") to "trace;flow=debug"%
+```
+The data is a string in the format of `global_level;module1=level1;module2=level2;...`, following the same rules as `RUST_LOG`.
+The module is the module name of the log, and the level is the log level. The log level can be one of the following: `trace`, `debug`, `info`, `warn`, `error`, `off` (case insensitive).
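The same endpoint can also be driven programmatically; the following is a minimal, hypothetical sketch (not part of the change set), assuming a locally running instance on the default HTTP port 4000 and the `reqwest` crate with its `blocking` feature enabled:

```rust
// Hypothetical client-side sketch: set the global level to `info` and the
// `mito2` module to `debug` via the /debug/log_level endpoint documented above.
// Assumes GreptimeDB listens on 127.0.0.1:4000 and that
// `reqwest = { version = "0.12", features = ["blocking"] }` is a dependency.
fn main() -> Result<(), Box<dyn std::error::Error>> {
    let resp = reqwest::blocking::Client::new()
        .post("http://127.0.0.1:4000/debug/log_level")
        .body("info;mito2=debug")
        .send()?;
    // The server echoes the old and new filter, e.g.
    // `Log Level changed from Some("info") to "info;mito2=debug"`.
    println!("{}", resp.text()?);
    Ok(())
}
```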

View File

@@ -1,11 +1,5 @@
 # Profiling CPU
-## Build GreptimeDB with `pprof` feature
-```bash
-cargo build --features=pprof
-```
 ## HTTP API
 Sample at 99 Hertz, for 5 seconds, output report in [protobuf format](https://github.com/google/pprof/blob/master/proto/profile.proto).
 ```bash

View File

@@ -18,12 +18,6 @@ sudo apt install libjemalloc-dev
 curl https://raw.githubusercontent.com/brendangregg/FlameGraph/master/flamegraph.pl > ./flamegraph.pl
 ```
-### Build GreptimeDB with `mem-prof` feature.
-```bash
-cargo build --features=mem-prof
-```
 ## Profiling
 Start GreptimeDB instance with environment variables:

View File

@@ -409,7 +409,39 @@
 "fieldConfig": {
 "defaults": {
 "color": {
-"mode": "thresholds"
+"mode": "palette-classic"
+},
+"custom": {
+"axisBorderShow": false,
+"axisCenteredZero": false,
+"axisColorMode": "text",
+"axisLabel": "",
+"axisPlacement": "auto",
+"barAlignment": 0,
+"drawStyle": "line",
+"fillOpacity": 0,
+"gradientMode": "none",
+"hideFrom": {
+"legend": false,
+"tooltip": false,
+"viz": false
+},
+"insertNulls": false,
+"lineInterpolation": "linear",
+"lineWidth": 1,
+"pointSize": 5,
+"scaleDistribution": {
+"type": "linear"
+},
+"showPoints": "auto",
+"spanNulls": false,
+"stacking": {
+"group": "A",
+"mode": "none"
+},
+"thresholdsStyle": {
+"mode": "off"
+}
 },
 "fieldMinMax": false,
 "mappings": [],
@@ -438,18 +470,16 @@
 },
 "id": 27,
 "options": {
-"colorMode": "value",
-"graphMode": "area",
-"justifyMode": "auto",
-"orientation": "auto",
-"reduceOptions": {
-"calcs": ["lastNotNull"],
-"fields": "",
-"values": false
+"legend": {
+"calcs": [],
+"displayMode": "list",
+"placement": "bottom",
+"showLegend": true
 },
-"text": {},
-"textMode": "auto",
-"wideLayout": true
+"tooltip": {
+"mode": "single",
+"sort": "none"
+}
 },
 "pluginVersion": "10.2.3",
 "targets": [
@@ -467,7 +497,7 @@
 }
 ],
 "title": "CPU",
-"type": "stat"
+"type": "timeseries"
 },
 {
 "datasource": {
@@ -477,7 +507,39 @@
 "fieldConfig": {
 "defaults": {
 "color": {
-"mode": "thresholds"
+"mode": "palette-classic"
+},
+"custom": {
+"axisBorderShow": false,
+"axisCenteredZero": false,
+"axisColorMode": "text",
+"axisLabel": "",
+"axisPlacement": "auto",
+"barAlignment": 0,
+"drawStyle": "line",
+"fillOpacity": 0,
+"gradientMode": "none",
+"hideFrom": {
+"legend": false,
+"tooltip": false,
+"viz": false
+},
+"insertNulls": false,
+"lineInterpolation": "linear",
+"lineWidth": 1,
+"pointSize": 5,
+"scaleDistribution": {
+"type": "linear"
+},
+"showPoints": "auto",
+"spanNulls": false,
+"stacking": {
+"group": "A",
+"mode": "none"
+},
+"thresholdsStyle": {
+"mode": "off"
+}
 },
 "decimals": 0,
 "fieldMinMax": false,
@@ -503,18 +565,16 @@
 },
 "id": 28,
 "options": {
-"colorMode": "value",
-"graphMode": "area",
-"justifyMode": "auto",
-"orientation": "auto",
-"reduceOptions": {
-"calcs": ["lastNotNull"],
-"fields": "",
-"values": false
+"legend": {
+"calcs": [],
+"displayMode": "list",
+"placement": "bottom",
+"showLegend": true
 },
-"text": {},
-"textMode": "auto",
-"wideLayout": true
+"tooltip": {
+"mode": "single",
+"sort": "none"
+}
 },
 "pluginVersion": "10.2.3",
 "targets": [
@@ -532,7 +592,7 @@
 }
 ],
 "title": "Memory",
-"type": "stat"
+"type": "timeseries"
 },
 {
 "collapsed": false,
@@ -3335,6 +3395,6 @@
 "timezone": "",
 "title": "GreptimeDB",
 "uid": "e7097237-669b-4f8d-b751-13067afbfb68",
-"version": 15,
+"version": 16,
 "weekStart": ""
 }

View File

@@ -1,3 +1,2 @@
[toolchain] [toolchain]
channel = "nightly-2024-06-06" channel = "nightly-2024-10-19"

View File

@@ -17,10 +17,11 @@ use std::sync::Arc;
 use common_base::BitVec;
 use common_decimal::decimal128::{DECIMAL128_DEFAULT_SCALE, DECIMAL128_MAX_PRECISION};
 use common_decimal::Decimal128;
-use common_time::interval::IntervalUnit;
 use common_time::time::Time;
 use common_time::timestamp::TimeUnit;
-use common_time::{Date, DateTime, Interval, Timestamp};
+use common_time::{
+Date, DateTime, IntervalDayTime, IntervalMonthDayNano, IntervalYearMonth, Timestamp,
+};
 use datatypes::prelude::{ConcreteDataType, ValueRef};
 use datatypes::scalars::ScalarVector;
 use datatypes::types::{
@@ -115,6 +116,7 @@ impl From<ColumnDataTypeWrapper> for ConcreteDataType {
 ConcreteDataType::binary_datatype()
 }
 }
+ColumnDataType::Json => ConcreteDataType::json_datatype(),
 ColumnDataType::String => ConcreteDataType::string_datatype(),
 ColumnDataType::Date => ConcreteDataType::date_datatype(),
 ColumnDataType::Datetime => ConcreteDataType::datetime_datatype(),
@@ -416,6 +418,10 @@ pub fn values_with_capacity(datatype: ColumnDataType, capacity: usize) -> Values
 decimal128_values: Vec::with_capacity(capacity),
 ..Default::default()
 },
+ColumnDataType::Json => Values {
+string_values: Vec::with_capacity(capacity),
+..Default::default()
+},
 }
 }
@@ -456,13 +462,11 @@ pub fn push_vals(column: &mut Column, origin_count: usize, vector: VectorRef) {
 TimeUnit::Microsecond => values.time_microsecond_values.push(val.value()),
 TimeUnit::Nanosecond => values.time_nanosecond_values.push(val.value()),
 },
-Value::Interval(val) => match val.unit() {
-IntervalUnit::YearMonth => values.interval_year_month_values.push(val.to_i32()),
-IntervalUnit::DayTime => values.interval_day_time_values.push(val.to_i64()),
-IntervalUnit::MonthDayNano => values
+Value::IntervalYearMonth(val) => values.interval_year_month_values.push(val.to_i32()),
+Value::IntervalDayTime(val) => values.interval_day_time_values.push(val.to_i64()),
+Value::IntervalMonthDayNano(val) => values
 .interval_month_day_nano_values
-.push(convert_i128_to_interval(val.to_i128())),
-},
+.push(convert_month_day_nano_to_pb(val)),
 Value::Decimal128(val) => values.decimal128_values.push(convert_to_pb_decimal128(val)),
 Value::List(_) | Value::Duration(_) => unreachable!(),
 });
@@ -507,14 +511,12 @@ fn ddl_request_type(request: &DdlRequest) -> &'static str {
 }
 }
-/// Converts an i128 value to google protobuf type [IntervalMonthDayNano].
-pub fn convert_i128_to_interval(v: i128) -> v1::IntervalMonthDayNano {
-let interval = Interval::from_i128(v);
-let (months, days, nanoseconds) = interval.to_month_day_nano();
+/// Converts an interval to google protobuf type [IntervalMonthDayNano].
+pub fn convert_month_day_nano_to_pb(v: IntervalMonthDayNano) -> v1::IntervalMonthDayNano {
 v1::IntervalMonthDayNano {
-months,
-days,
-nanoseconds,
+months: v.months,
+days: v.days,
+nanoseconds: v.nanoseconds,
 }
 }
@@ -562,11 +564,15 @@ pub fn pb_value_to_value_ref<'a>(
 ValueData::TimeMillisecondValue(t) => ValueRef::Time(Time::new_millisecond(*t)),
 ValueData::TimeMicrosecondValue(t) => ValueRef::Time(Time::new_microsecond(*t)),
 ValueData::TimeNanosecondValue(t) => ValueRef::Time(Time::new_nanosecond(*t)),
-ValueData::IntervalYearMonthValue(v) => ValueRef::Interval(Interval::from_i32(*v)),
-ValueData::IntervalDayTimeValue(v) => ValueRef::Interval(Interval::from_i64(*v)),
+ValueData::IntervalYearMonthValue(v) => {
+ValueRef::IntervalYearMonth(IntervalYearMonth::from_i32(*v))
+}
+ValueData::IntervalDayTimeValue(v) => {
+ValueRef::IntervalDayTime(IntervalDayTime::from_i64(*v))
+}
 ValueData::IntervalMonthDayNanoValue(v) => {
-let interval = Interval::from_month_day_nano(v.months, v.days, v.nanoseconds);
-ValueRef::Interval(interval)
+let interval = IntervalMonthDayNano::new(v.months, v.days, v.nanoseconds);
+ValueRef::IntervalMonthDayNano(interval)
 }
 ValueData::Decimal128Value(v) => {
 // get precision and scale from datatype_extension
@@ -657,7 +663,7 @@ pub fn pb_values_to_vector_ref(data_type: &ConcreteDataType, values: Values) ->
 IntervalType::MonthDayNano(_) => {
 Arc::new(IntervalMonthDayNanoVector::from_iter_values(
 values.interval_month_day_nano_values.iter().map(|x| {
-Interval::from_month_day_nano(x.months, x.days, x.nanoseconds).to_i128()
+IntervalMonthDayNano::new(x.months, x.days, x.nanoseconds).to_i128()
 }),
 ))
 }
@@ -802,18 +808,18 @@ pub fn pb_values_to_values(data_type: &ConcreteDataType, values: Values) -> Vec<
 ConcreteDataType::Interval(IntervalType::YearMonth(_)) => values
 .interval_year_month_values
 .into_iter()
-.map(|v| Value::Interval(Interval::from_i32(v)))
+.map(|v| Value::IntervalYearMonth(IntervalYearMonth::from_i32(v)))
 .collect(),
 ConcreteDataType::Interval(IntervalType::DayTime(_)) => values
 .interval_day_time_values
 .into_iter()
-.map(|v| Value::Interval(Interval::from_i64(v)))
+.map(|v| Value::IntervalDayTime(IntervalDayTime::from_i64(v)))
 .collect(),
 ConcreteDataType::Interval(IntervalType::MonthDayNano(_)) => values
 .interval_month_day_nano_values
 .into_iter()
 .map(|v| {
-Value::Interval(Interval::from_month_day_nano(
+Value::IntervalMonthDayNano(IntervalMonthDayNano::new(
 v.months,
 v.days,
 v.nanoseconds,
@@ -941,19 +947,17 @@ pub fn to_proto_value(value: Value) -> Option<v1::Value> {
 value_data: Some(ValueData::TimeNanosecondValue(v.value())),
 },
 },
-Value::Interval(v) => match v.unit() {
-IntervalUnit::YearMonth => v1::Value {
+Value::IntervalYearMonth(v) => v1::Value {
 value_data: Some(ValueData::IntervalYearMonthValue(v.to_i32())),
 },
-IntervalUnit::DayTime => v1::Value {
+Value::IntervalDayTime(v) => v1::Value {
 value_data: Some(ValueData::IntervalDayTimeValue(v.to_i64())),
 },
-IntervalUnit::MonthDayNano => v1::Value {
+Value::IntervalMonthDayNano(v) => v1::Value {
 value_data: Some(ValueData::IntervalMonthDayNanoValue(
-convert_i128_to_interval(v.to_i128()),
+convert_month_day_nano_to_pb(v),
 )),
 },
-},
 Value::Decimal128(v) => v1::Value {
 value_data: Some(ValueData::Decimal128Value(convert_to_pb_decimal128(v))),
 },
@@ -1044,13 +1048,11 @@ pub fn value_to_grpc_value(value: Value) -> GrpcValue {
 TimeUnit::Microsecond => ValueData::TimeMicrosecondValue(v.value()),
 TimeUnit::Nanosecond => ValueData::TimeNanosecondValue(v.value()),
 }),
-Value::Interval(v) => Some(match v.unit() {
-IntervalUnit::YearMonth => ValueData::IntervalYearMonthValue(v.to_i32()),
-IntervalUnit::DayTime => ValueData::IntervalDayTimeValue(v.to_i64()),
-IntervalUnit::MonthDayNano => {
-ValueData::IntervalMonthDayNanoValue(convert_i128_to_interval(v.to_i128()))
-}
-}),
+Value::IntervalYearMonth(v) => Some(ValueData::IntervalYearMonthValue(v.to_i32())),
+Value::IntervalDayTime(v) => Some(ValueData::IntervalDayTimeValue(v.to_i64())),
+Value::IntervalMonthDayNano(v) => Some(ValueData::IntervalMonthDayNanoValue(
+convert_month_day_nano_to_pb(v),
+)),
 Value::Decimal128(v) => Some(ValueData::Decimal128Value(convert_to_pb_decimal128(v))),
 Value::List(_) | Value::Duration(_) => unreachable!(),
 },
@@ -1061,6 +1063,7 @@ pub fn value_to_grpc_value(value: Value) -> GrpcValue {
 mod tests {
 use std::sync::Arc;
+use common_time::interval::IntervalUnit;
 use datatypes::types::{
 Int32Type, IntervalDayTimeType, IntervalMonthDayNanoType, IntervalYearMonthType,
 TimeMillisecondType, TimeSecondType, TimestampMillisecondType, TimestampSecondType,
@@ -1506,11 +1509,11 @@ mod tests {
 #[test]
 fn test_convert_i128_to_interval() {
-let i128_val = 3000;
-let interval = convert_i128_to_interval(i128_val);
+let i128_val = 3;
+let interval = convert_month_day_nano_to_pb(IntervalMonthDayNano::from_i128(i128_val));
 assert_eq!(interval.months, 0);
 assert_eq!(interval.days, 0);
-assert_eq!(interval.nanoseconds, 3000);
+assert_eq!(interval.nanoseconds, 3);
 }
 #[test]
@@ -1590,9 +1593,9 @@ mod tests {
 },
 );
 let expect = vec![
-Value::Interval(Interval::from_year_month(1_i32)),
-Value::Interval(Interval::from_year_month(2_i32)),
-Value::Interval(Interval::from_year_month(3_i32)),
+Value::IntervalYearMonth(IntervalYearMonth::new(1_i32)),
+Value::IntervalYearMonth(IntervalYearMonth::new(2_i32)),
+Value::IntervalYearMonth(IntervalYearMonth::new(3_i32)),
 ];
 assert_eq!(expect, actual);
@@ -1605,9 +1608,9 @@ mod tests {
 },
 );
 let expect = vec![
-Value::Interval(Interval::from_i64(1_i64)),
-Value::Interval(Interval::from_i64(2_i64)),
-Value::Interval(Interval::from_i64(3_i64)),
+Value::IntervalDayTime(IntervalDayTime::from_i64(1_i64)),
+Value::IntervalDayTime(IntervalDayTime::from_i64(2_i64)),
+Value::IntervalDayTime(IntervalDayTime::from_i64(3_i64)),
 ];
 assert_eq!(expect, actual);
@@ -1636,9 +1639,9 @@ mod tests {
 },
 );
 let expect = vec![
-Value::Interval(Interval::from_month_day_nano(1, 2, 3)),
-Value::Interval(Interval::from_month_day_nano(5, 6, 7)),
-Value::Interval(Interval::from_month_day_nano(9, 10, 11)),
+Value::IntervalMonthDayNano(IntervalMonthDayNano::new(1, 2, 3)),
+Value::IntervalMonthDayNano(IntervalMonthDayNano::new(5, 6, 7)),
+Value::IntervalMonthDayNano(IntervalMonthDayNano::new(9, 10, 11)),
 ];
 assert_eq!(expect, actual);
 }
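A quick sanity check of the new conversion path, written against the signatures visible in this diff, might look like the sketch below. The `api::helper` module path is an assumption (that is where these helpers live in the current source tree), not something stated by the diff itself.

```rust
// Sketch only: exercises the strongly-typed interval -> protobuf conversion
// introduced above. Assumes the workspace's `api` and `common_time` crates;
// the `api::helper` import path is an assumption.
use api::helper::convert_month_day_nano_to_pb;
use common_time::IntervalMonthDayNano;

fn main() {
    // 1 month, 2 days, 3 nanoseconds, mirroring the updated unit test.
    let interval = IntervalMonthDayNano::new(1, 2, 3);
    let pb = convert_month_day_nano_to_pb(interval);
    assert_eq!((pb.months, pb.days, pb.nanoseconds), (1, 2, 3));
}
```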

View File

@@ -33,7 +33,7 @@ impl StaticUserProvider {
 value: value.to_string(),
 msg: "StaticUserProviderOption must be in format `<option>:<value>`",
 })?;
-return match mode {
+match mode {
 "file" => {
 let users = load_credential_from_file(content)?
 .context(InvalidConfigSnafu {
@@ -58,7 +58,7 @@
 msg: "StaticUserProviderOption must be in format `file:<path>` or `cmd:<values>`",
 }
 .fail(),
-};
+}
 }
} }

View File

@@ -39,9 +39,12 @@ use crate::CatalogManager;
 const REGION_ID: &str = "region_id";
 const TABLE_ID: &str = "table_id";
 const REGION_NUMBER: &str = "region_number";
+const REGION_ROWS: &str = "region_rows";
+const DISK_SIZE: &str = "disk_size";
 const MEMTABLE_SIZE: &str = "memtable_size";
 const MANIFEST_SIZE: &str = "manifest_size";
 const SST_SIZE: &str = "sst_size";
+const INDEX_SIZE: &str = "index_size";
 const ENGINE: &str = "engine";
 const REGION_ROLE: &str = "region_role";
@@ -52,9 +55,12 @@ const INIT_CAPACITY: usize = 42;
 /// - `region_id`: The region id.
 /// - `table_id`: The table id.
 /// - `region_number`: The region number.
+/// - `region_rows`: The number of rows in region.
 /// - `memtable_size`: The memtable size in bytes.
+/// - `disk_size`: The approximate disk size in bytes.
 /// - `manifest_size`: The manifest size in bytes.
-/// - `sst_size`: The sst size in bytes.
+/// - `sst_size`: The sst data files size in bytes.
+/// - `index_size`: The sst index files size in bytes.
 /// - `engine`: The engine type.
 /// - `region_role`: The region role.
 ///
@@ -76,9 +82,12 @@ impl InformationSchemaRegionStatistics {
 ColumnSchema::new(REGION_ID, ConcreteDataType::uint64_datatype(), false),
 ColumnSchema::new(TABLE_ID, ConcreteDataType::uint32_datatype(), false),
 ColumnSchema::new(REGION_NUMBER, ConcreteDataType::uint32_datatype(), false),
+ColumnSchema::new(REGION_ROWS, ConcreteDataType::uint64_datatype(), true),
+ColumnSchema::new(DISK_SIZE, ConcreteDataType::uint64_datatype(), true),
 ColumnSchema::new(MEMTABLE_SIZE, ConcreteDataType::uint64_datatype(), true),
 ColumnSchema::new(MANIFEST_SIZE, ConcreteDataType::uint64_datatype(), true),
 ColumnSchema::new(SST_SIZE, ConcreteDataType::uint64_datatype(), true),
+ColumnSchema::new(INDEX_SIZE, ConcreteDataType::uint64_datatype(), true),
 ColumnSchema::new(ENGINE, ConcreteDataType::string_datatype(), true),
 ColumnSchema::new(REGION_ROLE, ConcreteDataType::string_datatype(), true),
 ]))
@@ -135,9 +144,12 @@ struct InformationSchemaRegionStatisticsBuilder {
 region_ids: UInt64VectorBuilder,
 table_ids: UInt32VectorBuilder,
 region_numbers: UInt32VectorBuilder,
+region_rows: UInt64VectorBuilder,
+disk_sizes: UInt64VectorBuilder,
 memtable_sizes: UInt64VectorBuilder,
 manifest_sizes: UInt64VectorBuilder,
 sst_sizes: UInt64VectorBuilder,
+index_sizes: UInt64VectorBuilder,
 engines: StringVectorBuilder,
 region_roles: StringVectorBuilder,
 }
@@ -150,9 +162,12 @@ impl InformationSchemaRegionStatisticsBuilder {
 region_ids: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
 table_ids: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
 region_numbers: UInt32VectorBuilder::with_capacity(INIT_CAPACITY),
+region_rows: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
+disk_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
 memtable_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
 manifest_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
 sst_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
+index_sizes: UInt64VectorBuilder::with_capacity(INIT_CAPACITY),
 engines: StringVectorBuilder::with_capacity(INIT_CAPACITY),
 region_roles: StringVectorBuilder::with_capacity(INIT_CAPACITY),
 }
@@ -177,9 +192,12 @@ impl InformationSchemaRegionStatisticsBuilder {
 (REGION_ID, &Value::from(region_stat.id.as_u64())),
 (TABLE_ID, &Value::from(region_stat.id.table_id())),
 (REGION_NUMBER, &Value::from(region_stat.id.region_number())),
+(REGION_ROWS, &Value::from(region_stat.num_rows)),
+(DISK_SIZE, &Value::from(region_stat.approximate_bytes)),
 (MEMTABLE_SIZE, &Value::from(region_stat.memtable_size)),
 (MANIFEST_SIZE, &Value::from(region_stat.manifest_size)),
 (SST_SIZE, &Value::from(region_stat.sst_size)),
+(INDEX_SIZE, &Value::from(region_stat.index_size)),
 (ENGINE, &Value::from(region_stat.engine.as_str())),
 (REGION_ROLE, &Value::from(region_stat.role.to_string())),
 ];
@@ -192,9 +210,12 @@ impl InformationSchemaRegionStatisticsBuilder {
 self.table_ids.push(Some(region_stat.id.table_id()));
 self.region_numbers
 .push(Some(region_stat.id.region_number()));
+self.region_rows.push(Some(region_stat.num_rows));
+self.disk_sizes.push(Some(region_stat.approximate_bytes));
 self.memtable_sizes.push(Some(region_stat.memtable_size));
 self.manifest_sizes.push(Some(region_stat.manifest_size));
 self.sst_sizes.push(Some(region_stat.sst_size));
+self.index_sizes.push(Some(region_stat.index_size));
 self.engines.push(Some(&region_stat.engine));
 self.region_roles.push(Some(&region_stat.role.to_string()));
 }
@@ -204,9 +225,12 @@ impl InformationSchemaRegionStatisticsBuilder {
 Arc::new(self.region_ids.finish()),
 Arc::new(self.table_ids.finish()),
 Arc::new(self.region_numbers.finish()),
+Arc::new(self.region_rows.finish()),
+Arc::new(self.disk_sizes.finish()),
 Arc::new(self.memtable_sizes.finish()),
 Arc::new(self.manifest_sizes.finish()),
 Arc::new(self.sst_sizes.finish()),
+Arc::new(self.index_sizes.finish()),
 Arc::new(self.engines.finish()),
 Arc::new(self.region_roles.finish()),
 ];

View File

@@ -12,7 +12,7 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
-/// All table names in `information_schema`.
+//! All table names in `information_schema`.
 pub const TABLES: &str = "tables";
 pub const COLUMNS: &str = "columns";

View File

@@ -74,7 +74,7 @@ impl MemoryTableBuilder {
 /// Construct the `information_schema.{table_name}` virtual table
 pub async fn memory_records(&mut self) -> Result<RecordBatch> {
 if self.columns.is_empty() {
-RecordBatch::new_empty(self.schema.clone()).context(CreateRecordBatchSnafu)
+Ok(RecordBatch::new_empty(self.schema.clone()))
 } else {
 RecordBatch::new(self.schema.clone(), std::mem::take(&mut self.columns))
 .context(CreateRecordBatchSnafu)

View File

@@ -12,6 +12,9 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
+//! The `pg_catalog.pg_namespace` table implementation.
+//! namespace is a schema in greptime
 pub(super) mod oid_map;
 use std::sync::{Arc, Weak};
@@ -40,9 +43,6 @@ use crate::system_schema::utils::tables::{string_column, u32_column};
 use crate::system_schema::SystemTable;
 use crate::CatalogManager;
-/// The `pg_catalog.pg_namespace` table implementation.
-/// namespace is a schema in greptime
 const NSPNAME: &str = "nspname";
 const INIT_CAPACITY: usize = 42;

View File

@@ -28,7 +28,7 @@ enum_dispatch = "0.3"
 futures-util.workspace = true
 lazy_static.workspace = true
 moka = { workspace = true, features = ["future"] }
-parking_lot = "0.12"
+parking_lot.workspace = true
 prometheus.workspace = true
 prost.workspace = true
 query.workspace = true
@@ -45,7 +45,6 @@ common-grpc-expr.workspace = true
 datanode.workspace = true
 derive-new = "0.5"
 tracing = "0.1"
-tracing-subscriber = { version = "0.3", features = ["env-filter"] }
 [dev-dependencies.substrait_proto]
 package = "substrait"

View File

@@ -10,7 +10,7 @@ name = "greptime"
 path = "src/bin/greptime.rs"
 [features]
-default = ["python"]
+default = ["python", "servers/pprof", "servers/mem-prof"]
 tokio-console = ["common-telemetry/tokio-console"]
 python = ["frontend/python"]
@@ -78,7 +78,7 @@ table.workspace = true
 tokio.workspace = true
 toml.workspace = true
 tonic.workspace = true
-tracing-appender = "0.2"
+tracing-appender.workspace = true
 [target.'cfg(not(windows))'.dependencies]
 tikv-jemallocator = "0.6"

View File

@@ -174,7 +174,7 @@ impl Repl {
 let plan = query_engine
 .planner()
-.plan(stmt, query_ctx.clone())
+.plan(&stmt, query_ctx.clone())
 .await
 .context(PlanStatementSnafu)?;

View File

@@ -272,9 +272,10 @@ impl StartCommand {
 info!("Datanode start command: {:#?}", self);
 info!("Datanode options: {:#?}", opts);
+let plugin_opts = opts.plugins;
 let opts = opts.component;
 let mut plugins = Plugins::new();
-plugins::setup_datanode_plugins(&mut plugins, &opts)
+plugins::setup_datanode_plugins(&mut plugins, &plugin_opts, &opts)
 .await
 .context(StartDatanodeSnafu)?;

View File

@@ -266,9 +266,10 @@ impl StartCommand {
 info!("Frontend start command: {:#?}", self);
 info!("Frontend options: {:#?}", opts);
+let plugin_opts = opts.plugins;
 let opts = opts.component;
 let mut plugins = Plugins::new();
-plugins::setup_frontend_plugins(&mut plugins, &opts)
+plugins::setup_frontend_plugins(&mut plugins, &plugin_opts, &opts)
 .await
 .context(StartFrontendSnafu)?;
@@ -342,6 +343,8 @@ impl StartCommand {
 // Some queries are expected to take long time.
 let channel_config = ChannelConfig {
 timeout: None,
+tcp_nodelay: opts.datanode.client.tcp_nodelay,
+connect_timeout: Some(opts.datanode.client.connect_timeout),
 ..Default::default()
 };
 let client = NodeClients::new(channel_config);
@@ -472,7 +475,7 @@ mod tests {
 };
 let mut plugins = Plugins::new();
-plugins::setup_frontend_plugins(&mut plugins, &fe_opts)
+plugins::setup_frontend_plugins(&mut plugins, &[], &fe_opts)
 .await
 .unwrap();

View File

@@ -84,6 +84,7 @@ pub trait App: Send {
 }
 /// Log the versions of the application, and the arguments passed to the cli.
+///
 /// `version` should be the same as the output of cli "--version";
 /// and the `short_version` is the short version of the codes, often consist of git branch and commit.
 pub fn log_versions(version: &str, short_version: &str, app: &str) {

View File

@@ -48,6 +48,10 @@ impl Instance {
 _guard: guard,
 }
 }
+pub fn get_inner(&self) -> &MetasrvInstance {
+&self.instance
+}
 }
 #[async_trait]
@@ -86,6 +90,14 @@ impl Command {
 pub fn load_options(&self, global_options: &GlobalOptions) -> Result<MetasrvOptions> {
 self.subcmd.load_options(global_options)
 }
+pub fn config_file(&self) -> &Option<String> {
+self.subcmd.config_file()
+}
+pub fn env_prefix(&self) -> &String {
+self.subcmd.env_prefix()
+}
 }
 #[derive(Parser)]
@@ -105,6 +117,18 @@ impl SubCommand {
 SubCommand::Start(cmd) => cmd.load_options(global_options),
 }
 }
+fn config_file(&self) -> &Option<String> {
+match self {
+SubCommand::Start(cmd) => &cmd.config_file,
+}
+}
+fn env_prefix(&self) -> &String {
+match self {
+SubCommand::Start(cmd) => &cmd.env_prefix,
+}
+}
 }
 #[derive(Debug, Default, Parser)]
@@ -249,9 +273,10 @@ impl StartCommand {
 info!("Metasrv start command: {:#?}", self);
 info!("Metasrv options: {:#?}", opts);
+let plugin_opts = opts.plugins;
 let opts = opts.component;
 let mut plugins = Plugins::new();
-plugins::setup_metasrv_plugins(&mut plugins, &opts)
+plugins::setup_metasrv_plugins(&mut plugins, &plugin_opts, &opts)
 .await
 .context(StartMetaServerSnafu)?;

View File

@@ -15,6 +15,7 @@
 use clap::Parser;
 use common_config::Configurable;
 use common_runtime::global::RuntimeOptions;
+use plugins::PluginOptions;
 use serde::{Deserialize, Serialize};
 #[derive(Parser, Default, Debug, Clone)]
@@ -40,6 +41,8 @@ pub struct GlobalOptions {
 pub struct GreptimeOptions<T> {
 /// The runtime options.
 pub runtime: RuntimeOptions,
+/// The plugin options.
+pub plugins: Vec<PluginOptions>,
 /// The options of each component (like Datanode or Standalone) of GreptimeDB.
 #[serde(flatten)]

View File

@@ -12,6 +12,7 @@
 // See the License for the specific language governing permissions and
 // limitations under the License.
+use std::net::SocketAddr;
 use std::sync::Arc;
 use std::{fs, path};
@@ -250,6 +251,13 @@ pub struct Instance {
 _guard: Vec<WorkerGuard>,
 }
+impl Instance {
+/// Find the socket addr of a server by its `name`.
+pub async fn server_addr(&self, name: &str) -> Option<SocketAddr> {
+self.frontend.server_handlers().addr(name).await
+}
+}
 #[async_trait]
 impl App for Instance {
 fn name(&self) -> &str {
@@ -340,7 +348,8 @@ pub struct StartCommand {
 }
 impl StartCommand {
-fn load_options(
+/// Load the GreptimeDB options from various sources (command line, config file or env).
+pub fn load_options(
 &self,
 global_options: &GlobalOptions,
 ) -> Result<GreptimeOptions<StandaloneOptions>> {
@@ -430,7 +439,8 @@ impl StartCommand {
 #[allow(unreachable_code)]
 #[allow(unused_variables)]
 #[allow(clippy::diverging_sub_expression)]
-async fn build(&self, opts: GreptimeOptions<StandaloneOptions>) -> Result<Instance> {
+/// Build GreptimeDB instance with the loaded options.
+pub async fn build(&self, opts: GreptimeOptions<StandaloneOptions>) -> Result<Instance> {
 common_runtime::init_global_runtimes(&opts.runtime);
 let guard = common_telemetry::init_global_logging(
@@ -445,15 +455,16 @@
 info!("Standalone options: {opts:#?}");
 let mut plugins = Plugins::new();
+let plugin_opts = opts.plugins;
 let opts = opts.component;
 let fe_opts = opts.frontend_options();
 let dn_opts = opts.datanode_options();
-plugins::setup_frontend_plugins(&mut plugins, &fe_opts)
+plugins::setup_frontend_plugins(&mut plugins, &plugin_opts, &fe_opts)
 .await
 .context(StartFrontendSnafu)?;
-plugins::setup_datanode_plugins(&mut plugins, &dn_opts)
+plugins::setup_datanode_plugins(&mut plugins, &plugin_opts, &dn_opts)
 .await
 .context(StartDatanodeSnafu)?;
@@ -653,7 +664,7 @@
 }
 }
-struct StandaloneInformationExtension {
+pub struct StandaloneInformationExtension {
 region_server: RegionServer,
 procedure_manager: ProcedureManagerRef,
 start_time_ms: u64,
@@ -725,12 +736,14 @@ impl InformationExtension for StandaloneInformationExtension {
 id: stat.region_id,
 rcus: 0,
 wcus: 0,
-approximate_bytes: region_stat.estimated_disk_size() as i64,
+approximate_bytes: region_stat.estimated_disk_size(),
 engine: stat.engine,
 role: RegionRole::from(stat.role).into(),
+num_rows: region_stat.num_rows,
 memtable_size: region_stat.memtable_size,
 manifest_size: region_stat.manifest_size,
 sst_size: region_stat.sst_size,
+index_size: region_stat.index_size,
 }
 })
 .collect::<Vec<_>>();
@@ -762,7 +775,7 @@ mod tests {
 };
 let mut plugins = Plugins::new();
-plugins::setup_frontend_plugins(&mut plugins, &fe_opts)
+plugins::setup_frontend_plugins(&mut plugins, &[], &fe_opts)
 .await
 .unwrap();

View File

@@ -20,7 +20,7 @@ use common_config::Configurable;
 use common_grpc::channel_manager::{
 DEFAULT_MAX_GRPC_RECV_MESSAGE_SIZE, DEFAULT_MAX_GRPC_SEND_MESSAGE_SIZE,
 };
-use common_telemetry::logging::{LoggingOptions, DEFAULT_OTLP_ENDPOINT};
+use common_telemetry::logging::{LoggingOptions, SlowQueryOptions, DEFAULT_OTLP_ENDPOINT};
 use common_wal::config::raft_engine::RaftEngineConfig;
 use common_wal::config::DatanodeWalConfig;
 use datanode::config::{DatanodeOptions, RegionEngineConfig, StorageConfig};
@@ -159,8 +159,20 @@ fn test_load_metasrv_example_config() {
 level: Some("info".to_string()),
 otlp_endpoint: Some(DEFAULT_OTLP_ENDPOINT.to_string()),
 tracing_sample_ratio: Some(Default::default()),
+slow_query: SlowQueryOptions {
+enable: false,
+threshold: Some(Duration::from_secs(10)),
+sample_ratio: Some(1.0),
+},
 ..Default::default()
 },
+datanode: meta_srv::metasrv::DatanodeOptions {
+client: meta_srv::metasrv::DatanodeClientOptions {
+timeout: Duration::from_secs(10),
+connect_timeout: Duration::from_secs(10),
+tcp_nodelay: true,
+},
+},
 export_metrics: ExportMetricsOption {
 self_import: Some(Default::default()),
 remote_write: Some(Default::default()),

View File

@@ -38,6 +38,18 @@ impl Plugins {
self.read().get::<T>().cloned() self.read().get::<T>().cloned()
} }
pub fn get_or_insert<T, F>(&self, f: F) -> T
where
T: 'static + Send + Sync + Clone,
F: FnOnce() -> T,
{
let mut binding = self.write();
if !binding.contains::<T>() {
binding.insert(f());
}
binding.get::<T>().cloned().unwrap()
}
pub fn map_mut<T: 'static + Send + Sync, F, R>(&self, mapper: F) -> R pub fn map_mut<T: 'static + Send + Sync, F, R>(&self, mapper: F) -> R
where where
F: FnOnce(Option<&mut T>) -> R, F: FnOnce(Option<&mut T>) -> R,
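A minimal usage sketch for the new `get_or_insert`: the closure only runs when no value of the requested type is registered yet, so later calls return the already-stored instance. The plugin type below is hypothetical.

#[derive(Clone)]
struct MaxConcurrency(usize);

fn get_or_insert_sketch(plugins: &Plugins) {
    // First call inserts the value produced by the closure.
    let first = plugins.get_or_insert(|| MaxConcurrency(8));
    // Second call ignores its closure and returns the stored value.
    let second = plugins.get_or_insert(|| MaxConcurrency(64));
    assert_eq!(first.0, second.0);
}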

View File

@@ -46,8 +46,9 @@ impl From<String> for SecretString {
} }
} }
/// Wrapper type for values that contain secrets, which attempts to limit /// Wrapper type for values that contain secrets.
/// accidental exposure and ensure secrets are wiped from memory when dropped. ///
/// It attempts to limit accidental exposure and ensure secrets are wiped from memory when dropped.
/// (e.g. passwords, cryptographic keys, access tokens or other credentials) /// (e.g. passwords, cryptographic keys, access tokens or other credentials)
/// ///
/// Access to the secret inner value occurs through the [`ExposeSecret`] /// Access to the secret inner value occurs through the [`ExposeSecret`]
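For context, a hedged sketch of how the wrapper is used; it assumes the `ExposeSecret` trait mentioned above is in scope, and the password value is illustrative.

fn secret_sketch() {
    let password = SecretString::from("s3cr3t".to_string());
    // Debug/Display never print the inner value; access must go through ExposeSecret.
    let raw: &String = password.expose_secret();
    assert_eq!(raw.as_str(), "s3cr3t");
}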

View File

@@ -103,14 +103,15 @@ pub const INFORMATION_SCHEMA_PROCEDURE_INFO_TABLE_ID: u32 = 34;
/// id for information_schema.region_statistics /// id for information_schema.region_statistics
pub const INFORMATION_SCHEMA_REGION_STATISTICS_TABLE_ID: u32 = 35; pub const INFORMATION_SCHEMA_REGION_STATISTICS_TABLE_ID: u32 = 35;
/// ----- End of information_schema tables ----- // ----- End of information_schema tables -----
/// ----- Begin of pg_catalog tables ----- /// ----- Begin of pg_catalog tables -----
pub const PG_CATALOG_PG_CLASS_TABLE_ID: u32 = 256; pub const PG_CATALOG_PG_CLASS_TABLE_ID: u32 = 256;
pub const PG_CATALOG_PG_TYPE_TABLE_ID: u32 = 257; pub const PG_CATALOG_PG_TYPE_TABLE_ID: u32 = 257;
pub const PG_CATALOG_PG_NAMESPACE_TABLE_ID: u32 = 258; pub const PG_CATALOG_PG_NAMESPACE_TABLE_ID: u32 = 258;
/// ----- End of pg_catalog tables ----- // ----- End of pg_catalog tables -----
pub const MITO_ENGINE: &str = "mito"; pub const MITO_ENGINE: &str = "mito";
pub const MITO2_ENGINE: &str = "mito2"; pub const MITO2_ENGINE: &str = "mito2";
pub const METRIC_ENGINE: &str = "metric"; pub const METRIC_ENGINE: &str = "metric";

View File

@@ -9,7 +9,7 @@ workspace = true
[features] [features]
default = ["geo"] default = ["geo"]
geo = ["geohash", "h3o"] geo = ["geohash", "h3o", "s2"]
[dependencies] [dependencies]
api.workspace = true api.workspace = true
@@ -35,6 +35,7 @@ num = "0.4"
num-traits = "0.2" num-traits = "0.2"
once_cell.workspace = true once_cell.workspace = true
paste = "1.0" paste = "1.0"
s2 = { version = "0.0.12", optional = true }
serde.workspace = true serde.workspace = true
serde_json.workspace = true serde_json.workspace = true
session.workspace = true session.workspace = true

View File

@@ -31,7 +31,6 @@ pub use polyval::PolyvalAccumulatorCreator;
pub use scipy_stats_norm_cdf::ScipyStatsNormCdfAccumulatorCreator; pub use scipy_stats_norm_cdf::ScipyStatsNormCdfAccumulatorCreator;
pub use scipy_stats_norm_pdf::ScipyStatsNormPdfAccumulatorCreator; pub use scipy_stats_norm_pdf::ScipyStatsNormPdfAccumulatorCreator;
use super::geo::encoding::JsonPathEncodeFunctionCreator;
use crate::function_registry::FunctionRegistry; use crate::function_registry::FunctionRegistry;
/// A function creates `AggregateFunctionCreator`. /// A function creates `AggregateFunctionCreator`.
@@ -93,6 +92,11 @@ impl AggregateFunctions {
register_aggr_func!("scipystatsnormcdf", 2, ScipyStatsNormCdfAccumulatorCreator); register_aggr_func!("scipystatsnormcdf", 2, ScipyStatsNormCdfAccumulatorCreator);
register_aggr_func!("scipystatsnormpdf", 2, ScipyStatsNormPdfAccumulatorCreator); register_aggr_func!("scipystatsnormpdf", 2, ScipyStatsNormPdfAccumulatorCreator);
register_aggr_func!("json_encode_path", 3, JsonPathEncodeFunctionCreator); #[cfg(feature = "geo")]
register_aggr_func!(
"json_encode_path",
3,
super::geo::encoding::JsonPathEncodeFunctionCreator
);
} }
} }

View File

@@ -14,18 +14,19 @@
use std::fmt; use std::fmt;
use common_query::error::{InvalidFuncArgsSnafu, Result, UnsupportedInputDataTypeSnafu}; use common_query::error::{ArrowComputeSnafu, IntoVectorSnafu, InvalidFuncArgsSnafu, Result};
use common_query::prelude::Signature; use common_query::prelude::Signature;
use datatypes::data_type::DataType; use datatypes::arrow::compute::kernels::numeric;
use datatypes::prelude::ConcreteDataType; use datatypes::prelude::ConcreteDataType;
use datatypes::value::ValueRef; use datatypes::vectors::{Helper, VectorRef};
use datatypes::vectors::VectorRef; use snafu::{ensure, ResultExt};
use snafu::ensure;
use crate::function::{Function, FunctionContext}; use crate::function::{Function, FunctionContext};
use crate::helper; use crate::helper;
/// A function adds an interval value to Timestamp, Date or DateTime, and returns the result. /// A function adds an interval value to Timestamp or Date, and returns the result.
/// The implementation of datetime type is based on Date64 which is incorrect so this function
/// doesn't support the datetime type.
#[derive(Clone, Debug, Default)] #[derive(Clone, Debug, Default)]
pub struct DateAddFunction; pub struct DateAddFunction;
@@ -44,7 +45,6 @@ impl Function for DateAddFunction {
helper::one_of_sigs2( helper::one_of_sigs2(
vec![ vec![
ConcreteDataType::date_datatype(), ConcreteDataType::date_datatype(),
ConcreteDataType::datetime_datatype(),
ConcreteDataType::timestamp_second_datatype(), ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::timestamp_millisecond_datatype(), ConcreteDataType::timestamp_millisecond_datatype(),
ConcreteDataType::timestamp_microsecond_datatype(), ConcreteDataType::timestamp_microsecond_datatype(),
@@ -69,64 +69,14 @@ impl Function for DateAddFunction {
} }
); );
let left = &columns[0]; let left = columns[0].to_arrow_array();
let right = &columns[1]; let right = columns[1].to_arrow_array();
let size = left.len(); let result = numeric::add(&left, &right).context(ArrowComputeSnafu)?;
let left_datatype = columns[0].data_type(); let arrow_type = result.data_type().clone();
match left_datatype { Helper::try_into_vector(result).context(IntoVectorSnafu {
ConcreteDataType::Timestamp(_) => { data_type: arrow_type,
let mut result = left_datatype.create_mutable_vector(size); })
for i in 0..size {
let ts = left.get(i).as_timestamp();
let interval = right.get(i).as_interval();
let new_ts = match (ts, interval) {
(Some(ts), Some(interval)) => ts.add_interval(interval),
_ => ts,
};
result.push_value_ref(ValueRef::from(new_ts));
}
Ok(result.to_vector())
}
ConcreteDataType::Date(_) => {
let mut result = left_datatype.create_mutable_vector(size);
for i in 0..size {
let date = left.get(i).as_date();
let interval = right.get(i).as_interval();
let new_date = match (date, interval) {
(Some(date), Some(interval)) => date.add_interval(interval),
_ => date,
};
result.push_value_ref(ValueRef::from(new_date));
}
Ok(result.to_vector())
}
ConcreteDataType::DateTime(_) => {
let mut result = left_datatype.create_mutable_vector(size);
for i in 0..size {
let datetime = left.get(i).as_datetime();
let interval = right.get(i).as_interval();
let new_datetime = match (datetime, interval) {
(Some(datetime), Some(interval)) => datetime.add_interval(interval),
_ => datetime,
};
result.push_value_ref(ValueRef::from(new_datetime));
}
Ok(result.to_vector())
}
_ => UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
}
.fail(),
}
} }
} }
@@ -144,8 +94,7 @@ mod tests {
use datatypes::prelude::ConcreteDataType; use datatypes::prelude::ConcreteDataType;
use datatypes::value::Value; use datatypes::value::Value;
use datatypes::vectors::{ use datatypes::vectors::{
DateTimeVector, DateVector, IntervalDayTimeVector, IntervalYearMonthVector, DateVector, IntervalDayTimeVector, IntervalYearMonthVector, TimestampSecondVector,
TimestampSecondVector,
}; };
use super::{DateAddFunction, *}; use super::{DateAddFunction, *};
@@ -168,16 +117,15 @@ mod tests {
ConcreteDataType::date_datatype(), ConcreteDataType::date_datatype(),
f.return_type(&[ConcreteDataType::date_datatype()]).unwrap() f.return_type(&[ConcreteDataType::date_datatype()]).unwrap()
); );
assert_eq!( assert!(
ConcreteDataType::datetime_datatype(), matches!(f.signature(),
f.return_type(&[ConcreteDataType::datetime_datatype()])
.unwrap()
);
assert!(matches!(f.signature(),
Signature { Signature {
type_signature: TypeSignature::OneOf(sigs), type_signature: TypeSignature::OneOf(sigs),
volatility: Volatility::Immutable volatility: Volatility::Immutable
} if sigs.len() == 18)); } if sigs.len() == 15),
"{:?}",
f.signature()
);
} }
#[test] #[test]
@@ -243,36 +191,4 @@ mod tests {
} }
} }
} }
#[test]
fn test_datetime_date_add() {
let f = DateAddFunction;
let dates = vec![Some(123), None, Some(42), None];
// Intervals in months
let intervals = vec![1, 2, 3, 1];
let results = [Some(2678400123), None, Some(7776000042), None];
let date_vector = DateTimeVector::from(dates.clone());
let interval_vector = IntervalYearMonthVector::from_vec(intervals);
let args: Vec<VectorRef> = vec![Arc::new(date_vector), Arc::new(interval_vector)];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
for (i, _t) in dates.iter().enumerate() {
let v = vector.get(i);
let result = results.get(i).unwrap();
if result.is_none() {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::DateTime(date) => {
assert_eq!(date.val(), result.unwrap());
}
_ => unreachable!(),
}
}
}
} }
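A short sketch of the new kernel-based path shown above, written in the same eval style as the remaining tests: both inputs become Arrow arrays and the addition is delegated to Arrow's numeric kernel. The vector values are illustrative, and `DateAddFunction`/`FunctionContext` are assumed to be in scope as in the tests.

use std::sync::Arc;
use datatypes::vectors::{IntervalYearMonthVector, TimestampSecondVector, VectorRef};

fn add_one_month_sketch() {
    let ts: VectorRef = Arc::new(TimestampSecondVector::from_vec(vec![0_i64, 1, 2]));
    let one_month: VectorRef = Arc::new(IntervalYearMonthVector::from_vec(vec![1, 1, 1]));
    let out = DateAddFunction
        .eval(FunctionContext::default(), &[ts, one_month])
        .unwrap();
    assert_eq!(3, out.len());
}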

View File

@@ -14,18 +14,19 @@
use std::fmt; use std::fmt;
use common_query::error::{InvalidFuncArgsSnafu, Result, UnsupportedInputDataTypeSnafu}; use common_query::error::{ArrowComputeSnafu, IntoVectorSnafu, InvalidFuncArgsSnafu, Result};
use common_query::prelude::Signature; use common_query::prelude::Signature;
use datatypes::data_type::DataType; use datatypes::arrow::compute::kernels::numeric;
use datatypes::prelude::ConcreteDataType; use datatypes::prelude::ConcreteDataType;
use datatypes::value::ValueRef; use datatypes::vectors::{Helper, VectorRef};
use datatypes::vectors::VectorRef; use snafu::{ensure, ResultExt};
use snafu::ensure;
use crate::function::{Function, FunctionContext}; use crate::function::{Function, FunctionContext};
use crate::helper; use crate::helper;
/// A function subtracts an interval value from Timestamp, Date or DateTime, and returns the result. /// A function subtracts an interval value from Timestamp or Date, and returns the result.
/// The implementation of datetime type is based on Date64 which is incorrect so this function
/// doesn't support the datetime type.
#[derive(Clone, Debug, Default)] #[derive(Clone, Debug, Default)]
pub struct DateSubFunction; pub struct DateSubFunction;
@@ -44,7 +45,6 @@ impl Function for DateSubFunction {
helper::one_of_sigs2( helper::one_of_sigs2(
vec![ vec![
ConcreteDataType::date_datatype(), ConcreteDataType::date_datatype(),
ConcreteDataType::datetime_datatype(),
ConcreteDataType::timestamp_second_datatype(), ConcreteDataType::timestamp_second_datatype(),
ConcreteDataType::timestamp_millisecond_datatype(), ConcreteDataType::timestamp_millisecond_datatype(),
ConcreteDataType::timestamp_microsecond_datatype(), ConcreteDataType::timestamp_microsecond_datatype(),
@@ -69,65 +69,14 @@ impl Function for DateSubFunction {
} }
); );
let left = &columns[0]; let left = columns[0].to_arrow_array();
let right = &columns[1]; let right = columns[1].to_arrow_array();
let size = left.len(); let result = numeric::sub(&left, &right).context(ArrowComputeSnafu)?;
let left_datatype = columns[0].data_type(); let arrow_type = result.data_type().clone();
Helper::try_into_vector(result).context(IntoVectorSnafu {
match left_datatype { data_type: arrow_type,
ConcreteDataType::Timestamp(_) => { })
let mut result = left_datatype.create_mutable_vector(size);
for i in 0..size {
let ts = left.get(i).as_timestamp();
let interval = right.get(i).as_interval();
let new_ts = match (ts, interval) {
(Some(ts), Some(interval)) => ts.sub_interval(interval),
_ => ts,
};
result.push_value_ref(ValueRef::from(new_ts));
}
Ok(result.to_vector())
}
ConcreteDataType::Date(_) => {
let mut result = left_datatype.create_mutable_vector(size);
for i in 0..size {
let date = left.get(i).as_date();
let interval = right.get(i).as_interval();
let new_date = match (date, interval) {
(Some(date), Some(interval)) => date.sub_interval(interval),
_ => date,
};
result.push_value_ref(ValueRef::from(new_date));
}
Ok(result.to_vector())
}
ConcreteDataType::DateTime(_) => {
let mut result = left_datatype.create_mutable_vector(size);
for i in 0..size {
let datetime = left.get(i).as_datetime();
let interval = right.get(i).as_interval();
let new_datetime = match (datetime, interval) {
(Some(datetime), Some(interval)) => datetime.sub_interval(interval),
_ => datetime,
};
result.push_value_ref(ValueRef::from(new_datetime));
}
Ok(result.to_vector())
}
_ => UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
}
.fail(),
}
} }
} }
@@ -145,8 +94,7 @@ mod tests {
use datatypes::prelude::ConcreteDataType; use datatypes::prelude::ConcreteDataType;
use datatypes::value::Value; use datatypes::value::Value;
use datatypes::vectors::{ use datatypes::vectors::{
DateTimeVector, DateVector, IntervalDayTimeVector, IntervalYearMonthVector, DateVector, IntervalDayTimeVector, IntervalYearMonthVector, TimestampSecondVector,
TimestampSecondVector,
}; };
use super::{DateSubFunction, *}; use super::{DateSubFunction, *};
@@ -174,11 +122,15 @@ mod tests {
f.return_type(&[ConcreteDataType::datetime_datatype()]) f.return_type(&[ConcreteDataType::datetime_datatype()])
.unwrap() .unwrap()
); );
assert!(matches!(f.signature(), assert!(
matches!(f.signature(),
Signature { Signature {
type_signature: TypeSignature::OneOf(sigs), type_signature: TypeSignature::OneOf(sigs),
volatility: Volatility::Immutable volatility: Volatility::Immutable
} if sigs.len() == 18)); } if sigs.len() == 15),
"{:?}",
f.signature()
);
} }
#[test] #[test]
@@ -250,42 +202,4 @@ mod tests {
} }
} }
} }
#[test]
fn test_datetime_date_sub() {
let f = DateSubFunction;
let millis_per_month = 3600 * 24 * 30 * 1000;
let dates = vec![
Some(123 * millis_per_month),
None,
Some(42 * millis_per_month),
None,
];
// Intervals in months
let intervals = vec![1, 2, 3, 1];
let results = [Some(316137600000), None, Some(100915200000), None];
let date_vector = DateTimeVector::from(dates.clone());
let interval_vector = IntervalYearMonthVector::from_vec(intervals);
let args: Vec<VectorRef> = vec![Arc::new(date_vector), Arc::new(interval_vector)];
let vector = f.eval(FunctionContext::default(), &args).unwrap();
assert_eq!(4, vector.len());
for (i, _t) in dates.iter().enumerate() {
let v = vector.get(i);
let result = results.get(i).unwrap();
if result.is_none() {
assert_eq!(Value::Null, v);
continue;
}
match v {
Value::DateTime(date) => {
assert_eq!(date.val(), result.unwrap());
}
_ => unreachable!(),
}
}
}
} }

View File

@@ -17,8 +17,7 @@ pub(crate) mod encoding;
mod geohash; mod geohash;
mod h3; mod h3;
mod helpers; mod helpers;
mod s2;
use geohash::{GeohashFunction, GeohashNeighboursFunction};
use crate::function_registry::FunctionRegistry; use crate::function_registry::FunctionRegistry;
@@ -27,8 +26,8 @@ pub(crate) struct GeoFunctions;
impl GeoFunctions { impl GeoFunctions {
pub fn register(registry: &FunctionRegistry) { pub fn register(registry: &FunctionRegistry) {
// geohash // geohash
registry.register(Arc::new(GeohashFunction)); registry.register(Arc::new(geohash::GeohashFunction));
registry.register(Arc::new(GeohashNeighboursFunction)); registry.register(Arc::new(geohash::GeohashNeighboursFunction));
// h3 index // h3 index
registry.register(Arc::new(h3::H3LatLngToCell)); registry.register(Arc::new(h3::H3LatLngToCell));
@@ -55,5 +54,11 @@ impl GeoFunctions {
registry.register(Arc::new(h3::H3GridDiskDistances)); registry.register(Arc::new(h3::H3GridDiskDistances));
registry.register(Arc::new(h3::H3GridDistance)); registry.register(Arc::new(h3::H3GridDistance));
registry.register(Arc::new(h3::H3GridPathCells)); registry.register(Arc::new(h3::H3GridPathCells));
// s2
registry.register(Arc::new(s2::S2LatLngToCell));
registry.register(Arc::new(s2::S2CellLevel));
registry.register(Arc::new(s2::S2CellToToken));
registry.register(Arc::new(s2::S2CellParent));
} }
} }

View File

@@ -17,7 +17,7 @@ use std::sync::Arc;
use common_error::ext::{BoxedError, PlainError}; use common_error::ext::{BoxedError, PlainError};
use common_error::status_code::StatusCode; use common_error::status_code::StatusCode;
use common_macro::{as_aggr_func_creator, AggrFuncTypeStore}; use common_macro::{as_aggr_func_creator, AggrFuncTypeStore};
use common_query::error::{self, InvalidFuncArgsSnafu, InvalidInputStateSnafu, Result}; use common_query::error::{self, InvalidInputStateSnafu, Result};
use common_query::logical_plan::accumulator::AggrFuncTypeStore; use common_query::logical_plan::accumulator::AggrFuncTypeStore;
use common_query::logical_plan::{Accumulator, AggregateFunctionCreator}; use common_query::logical_plan::{Accumulator, AggregateFunctionCreator};
use common_query::prelude::AccumulatorCreatorFunction; use common_query::prelude::AccumulatorCreatorFunction;

View File

@@ -16,7 +16,7 @@ use std::str::FromStr;
use common_error::ext::{BoxedError, PlainError}; use common_error::ext::{BoxedError, PlainError};
use common_error::status_code::StatusCode; use common_error::status_code::StatusCode;
use common_query::error::{self, InvalidFuncArgsSnafu, Result}; use common_query::error::{self, Result};
use common_query::prelude::{Signature, TypeSignature}; use common_query::prelude::{Signature, TypeSignature};
use datafusion::logical_expr::Volatility; use datafusion::logical_expr::Volatility;
use datatypes::prelude::ConcreteDataType; use datatypes::prelude::ConcreteDataType;
@@ -29,9 +29,9 @@ use datatypes::vectors::{
use derive_more::Display; use derive_more::Display;
use h3o::{CellIndex, LatLng, Resolution}; use h3o::{CellIndex, LatLng, Resolution};
use once_cell::sync::Lazy; use once_cell::sync::Lazy;
use snafu::{ensure, ResultExt}; use snafu::ResultExt;
use super::helpers::{ensure_columns_len, ensure_columns_n}; use super::helpers::{ensure_and_coerce, ensure_columns_len, ensure_columns_n};
use crate::function::{Function, FunctionContext}; use crate::function::{Function, FunctionContext};
static CELL_TYPES: Lazy<Vec<ConcreteDataType>> = Lazy::new(|| { static CELL_TYPES: Lazy<Vec<ConcreteDataType>> = Lazy::new(|| {
@@ -382,15 +382,7 @@ impl Function for H3CellResolution {
} }
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> { fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!( ensure_columns_n!(columns, 1);
columns.len() == 1,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect 1, provided : {}",
columns.len()
),
}
);
let cell_vec = &columns[0]; let cell_vec = &columns[0];
let size = cell_vec.len(); let size = cell_vec.len();
@@ -982,18 +974,6 @@ fn value_to_resolution(v: Value) -> Result<Resolution> {
.context(error::ExecuteSnafu) .context(error::ExecuteSnafu)
} }
macro_rules! ensure_and_coerce {
($compare:expr, $coerce:expr) => {{
ensure!(
$compare,
InvalidFuncArgsSnafu {
err_msg: "Argument was outside of acceptable range "
}
);
Ok($coerce)
}};
}
fn value_to_position(v: Value) -> Result<u64> { fn value_to_position(v: Value) -> Result<u64> {
match v { match v {
Value::Int8(v) => ensure_and_coerce!(v >= 0, v as u64), Value::Int8(v) => ensure_and_coerce!(v >= 0, v as u64),

View File

@@ -14,15 +14,15 @@
macro_rules! ensure_columns_len { macro_rules! ensure_columns_len {
($columns:ident) => { ($columns:ident) => {
ensure!( snafu::ensure!(
$columns.windows(2).all(|c| c[0].len() == c[1].len()), $columns.windows(2).all(|c| c[0].len() == c[1].len()),
InvalidFuncArgsSnafu { common_query::error::InvalidFuncArgsSnafu {
err_msg: "The length of input columns are in different size" err_msg: "The length of input columns are in different size"
} }
) )
}; };
($column_a:ident, $column_b:ident, $($column_n:ident),*) => { ($column_a:ident, $column_b:ident, $($column_n:ident),*) => {
ensure!( snafu::ensure!(
{ {
let mut result = $column_a.len() == $column_b.len(); let mut result = $column_a.len() == $column_b.len();
$( $(
@@ -30,7 +30,7 @@ macro_rules! ensure_columns_len {
)* )*
result result
} }
InvalidFuncArgsSnafu { common_query::error::InvalidFuncArgsSnafu {
err_msg: "The length of input columns are in different size" err_msg: "The length of input columns are in different size"
} }
) )
@@ -41,9 +41,9 @@ pub(super) use ensure_columns_len;
macro_rules! ensure_columns_n { macro_rules! ensure_columns_n {
($columns:ident, $n:literal) => { ($columns:ident, $n:literal) => {
ensure!( snafu::ensure!(
$columns.len() == $n, $columns.len() == $n,
InvalidFuncArgsSnafu { common_query::error::InvalidFuncArgsSnafu {
err_msg: format!( err_msg: format!(
"The length of arguments is not correct, expect {}, provided : {}", "The length of arguments is not correct, expect {}, provided : {}",
stringify!($n), stringify!($n),
@@ -59,3 +59,17 @@ macro_rules! ensure_columns_n {
} }
pub(super) use ensure_columns_n; pub(super) use ensure_columns_n;
macro_rules! ensure_and_coerce {
($compare:expr, $coerce:expr) => {{
snafu::ensure!(
$compare,
common_query::error::InvalidFuncArgsSnafu {
err_msg: "Argument was outside of acceptable range "
}
);
Ok($coerce)
}};
}
pub(super) use ensure_and_coerce;

View File

@@ -0,0 +1,275 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use common_query::error::{InvalidFuncArgsSnafu, Result};
use common_query::prelude::{Signature, TypeSignature};
use datafusion::logical_expr::Volatility;
use datatypes::prelude::ConcreteDataType;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::value::Value;
use datatypes::vectors::{MutableVector, StringVectorBuilder, UInt64VectorBuilder, VectorRef};
use derive_more::Display;
use once_cell::sync::Lazy;
use s2::cellid::{CellID, MAX_LEVEL};
use s2::latlng::LatLng;
use snafu::ensure;
use crate::function::{Function, FunctionContext};
use crate::scalars::geo::helpers::{ensure_and_coerce, ensure_columns_len, ensure_columns_n};
static CELL_TYPES: Lazy<Vec<ConcreteDataType>> = Lazy::new(|| {
vec![
ConcreteDataType::int64_datatype(),
ConcreteDataType::uint64_datatype(),
]
});
static COORDINATE_TYPES: Lazy<Vec<ConcreteDataType>> = Lazy::new(|| {
vec![
ConcreteDataType::float32_datatype(),
ConcreteDataType::float64_datatype(),
]
});
static LEVEL_TYPES: Lazy<Vec<ConcreteDataType>> = Lazy::new(|| {
vec![
ConcreteDataType::int8_datatype(),
ConcreteDataType::int16_datatype(),
ConcreteDataType::int32_datatype(),
ConcreteDataType::int64_datatype(),
ConcreteDataType::uint8_datatype(),
ConcreteDataType::uint16_datatype(),
ConcreteDataType::uint32_datatype(),
ConcreteDataType::uint64_datatype(),
]
});
/// Function that returns [s2] encoding cellid for a given geospatial coordinate.
///
/// [s2]: http://s2geometry.io
#[derive(Clone, Debug, Default, Display)]
#[display("{}", self.name())]
pub struct S2LatLngToCell;
impl Function for S2LatLngToCell {
fn name(&self) -> &str {
"s2_latlng_to_cell"
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::uint64_datatype())
}
fn signature(&self) -> Signature {
let mut signatures = Vec::with_capacity(COORDINATE_TYPES.len());
for coord_type in COORDINATE_TYPES.as_slice() {
signatures.push(TypeSignature::Exact(vec![
// latitude
coord_type.clone(),
// longitude
coord_type.clone(),
]));
}
Signature::one_of(signatures, Volatility::Stable)
}
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
let lat_vec = &columns[0];
let lon_vec = &columns[1];
let size = lat_vec.len();
let mut results = UInt64VectorBuilder::with_capacity(size);
for i in 0..size {
let lat = lat_vec.get(i).as_f64_lossy();
let lon = lon_vec.get(i).as_f64_lossy();
let result = match (lat, lon) {
(Some(lat), Some(lon)) => {
let coord = LatLng::from_degrees(lat, lon);
ensure!(
coord.is_valid(),
InvalidFuncArgsSnafu {
err_msg: "The input coordinates are invalid",
}
);
let cellid = CellID::from(coord);
let encoded: u64 = cellid.0;
Some(encoded)
}
_ => None,
};
results.push(result);
}
Ok(results.to_vector())
}
}
/// Return the level of current s2 cell
#[derive(Clone, Debug, Default, Display)]
#[display("{}", self.name())]
pub struct S2CellLevel;
impl Function for S2CellLevel {
fn name(&self) -> &str {
"s2_cell_level"
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::uint64_datatype())
}
fn signature(&self) -> Signature {
signature_of_cell()
}
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 1);
let cell_vec = &columns[0];
let size = cell_vec.len();
let mut results = UInt64VectorBuilder::with_capacity(size);
for i in 0..size {
let cell = cell_from_value(cell_vec.get(i));
let res = cell.map(|cell| cell.level());
results.push(res);
}
Ok(results.to_vector())
}
}
/// Return the string presentation of the cell
#[derive(Clone, Debug, Default, Display)]
#[display("{}", self.name())]
pub struct S2CellToToken;
impl Function for S2CellToToken {
fn name(&self) -> &str {
"s2_cell_to_token"
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::string_datatype())
}
fn signature(&self) -> Signature {
signature_of_cell()
}
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 1);
let cell_vec = &columns[0];
let size = cell_vec.len();
let mut results = StringVectorBuilder::with_capacity(size);
for i in 0..size {
let cell = cell_from_value(cell_vec.get(i));
let res = cell.map(|cell| cell.to_token());
results.push(res.as_deref());
}
Ok(results.to_vector())
}
}
/// Return parent at given level of current s2 cell
#[derive(Clone, Debug, Default, Display)]
#[display("{}", self.name())]
pub struct S2CellParent;
impl Function for S2CellParent {
fn name(&self) -> &str {
"s2_cell_parent"
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::uint64_datatype())
}
fn signature(&self) -> Signature {
signature_of_cell_and_level()
}
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure_columns_n!(columns, 2);
let cell_vec = &columns[0];
let level_vec = &columns[1];
let size = cell_vec.len();
let mut results = UInt64VectorBuilder::with_capacity(size);
for i in 0..size {
let cell = cell_from_value(cell_vec.get(i));
let level = value_to_level(level_vec.get(i))?;
let result = cell.map(|cell| cell.parent(level).0);
results.push(result);
}
Ok(results.to_vector())
}
}
fn signature_of_cell() -> Signature {
let mut signatures = Vec::with_capacity(CELL_TYPES.len());
for cell_type in CELL_TYPES.as_slice() {
signatures.push(TypeSignature::Exact(vec![cell_type.clone()]));
}
Signature::one_of(signatures, Volatility::Stable)
}
fn signature_of_cell_and_level() -> Signature {
let mut signatures = Vec::with_capacity(CELL_TYPES.len() * LEVEL_TYPES.len());
for cell_type in CELL_TYPES.as_slice() {
for level_type in LEVEL_TYPES.as_slice() {
signatures.push(TypeSignature::Exact(vec![
cell_type.clone(),
level_type.clone(),
]));
}
}
Signature::one_of(signatures, Volatility::Stable)
}
fn cell_from_value(v: Value) -> Option<CellID> {
match v {
Value::Int64(v) => Some(CellID(v as u64)),
Value::UInt64(v) => Some(CellID(v)),
_ => None,
}
}
fn value_to_level(v: Value) -> Result<u64> {
match v {
Value::Int8(v) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i8, v as u64),
Value::Int16(v) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i16, v as u64),
Value::Int32(v) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i32, v as u64),
Value::Int64(v) => ensure_and_coerce!(v >= 0 && v <= MAX_LEVEL as i64, v as u64),
Value::UInt8(v) => ensure_and_coerce!(v <= MAX_LEVEL as u8, v as u64),
Value::UInt16(v) => ensure_and_coerce!(v <= MAX_LEVEL as u16, v as u64),
Value::UInt32(v) => ensure_and_coerce!(v <= MAX_LEVEL as u32, v as u64),
Value::UInt64(v) => ensure_and_coerce!(v <= MAX_LEVEL, v),
_ => unreachable!(),
}
}
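A hedged sketch of how the new cell functions compose, in the same eval-based style as the tests elsewhere in this change set; the coordinate and the parent level are illustrative.

use std::sync::Arc;
use datatypes::vectors::{Float64Vector, UInt64Vector, VectorRef};

fn s2_sketch() {
    let lat: VectorRef = Arc::new(Float64Vector::from_vec(vec![37.76938]));
    let lon: VectorRef = Arc::new(Float64Vector::from_vec(vec![-122.3889]));
    // Encode the coordinate into a cell id, then take its level-8 parent.
    let cell = S2LatLngToCell
        .eval(FunctionContext::default(), &[lat, lon])
        .unwrap();
    let level: VectorRef = Arc::new(UInt64Vector::from_vec(vec![8]));
    let parent = S2CellParent
        .eval(FunctionContext::default(), &[cell, level])
        .unwrap();
    assert_eq!(1, parent.len());
}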

View File

@@ -16,6 +16,7 @@ use std::sync::Arc;
mod json_get; mod json_get;
mod json_is; mod json_is;
mod json_path_exists; mod json_path_exists;
mod json_path_match;
mod json_to_string; mod json_to_string;
mod parse_json; mod parse_json;
@@ -49,5 +50,6 @@ impl JsonFunction {
registry.register(Arc::new(JsonIsObject)); registry.register(Arc::new(JsonIsObject));
registry.register(Arc::new(json_path_exists::JsonPathExistsFunction)); registry.register(Arc::new(json_path_exists::JsonPathExistsFunction));
registry.register(Arc::new(json_path_match::JsonPathMatchFunction));
} }
} }

View File

@@ -0,0 +1,202 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::{self, Display};
use common_query::error::{InvalidFuncArgsSnafu, Result, UnsupportedInputDataTypeSnafu};
use common_query::prelude::Signature;
use datafusion::logical_expr::Volatility;
use datatypes::data_type::ConcreteDataType;
use datatypes::prelude::VectorRef;
use datatypes::scalars::ScalarVectorBuilder;
use datatypes::vectors::{BooleanVectorBuilder, MutableVector};
use snafu::ensure;
use crate::function::{Function, FunctionContext};
/// Check if the given JSON data match the given JSON path's predicate.
#[derive(Clone, Debug, Default)]
pub struct JsonPathMatchFunction;
const NAME: &str = "json_path_match";
impl Function for JsonPathMatchFunction {
fn name(&self) -> &str {
NAME
}
fn return_type(&self, _input_types: &[ConcreteDataType]) -> Result<ConcreteDataType> {
Ok(ConcreteDataType::boolean_datatype())
}
fn signature(&self) -> Signature {
Signature::exact(
vec![
ConcreteDataType::json_datatype(),
ConcreteDataType::string_datatype(),
],
Volatility::Immutable,
)
}
fn eval(&self, _func_ctx: FunctionContext, columns: &[VectorRef]) -> Result<VectorRef> {
ensure!(
columns.len() == 2,
InvalidFuncArgsSnafu {
err_msg: format!(
"The length of the args is not correct, expect exactly two, have: {}",
columns.len()
),
}
);
let jsons = &columns[0];
let paths = &columns[1];
let size = jsons.len();
let mut results = BooleanVectorBuilder::with_capacity(size);
for i in 0..size {
let json = jsons.get_ref(i);
let path = paths.get_ref(i);
match json.data_type() {
// JSON data type uses binary vector
ConcreteDataType::Binary(_) => {
let json = json.as_binary();
let path = path.as_string();
let result = match (json, path) {
(Ok(Some(json)), Ok(Some(path))) => {
if !jsonb::is_null(json) {
let json_path = jsonb::jsonpath::parse_json_path(path.as_bytes());
match json_path {
Ok(json_path) => jsonb::path_match(json, json_path).ok(),
Err(_) => None,
}
} else {
None
}
}
_ => None,
};
results.push(result);
}
_ => {
return UnsupportedInputDataTypeSnafu {
function: NAME,
datatypes: columns.iter().map(|c| c.data_type()).collect::<Vec<_>>(),
}
.fail();
}
}
}
Ok(results.to_vector())
}
}
impl Display for JsonPathMatchFunction {
fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
write!(f, "JSON_PATH_MATCH")
}
}
#[cfg(test)]
mod tests {
use std::sync::Arc;
use common_query::prelude::TypeSignature;
use datatypes::vectors::{BinaryVector, StringVector};
use super::*;
#[test]
fn test_json_path_match_function() {
let json_path_match = JsonPathMatchFunction;
assert_eq!("json_path_match", json_path_match.name());
assert_eq!(
ConcreteDataType::boolean_datatype(),
json_path_match
.return_type(&[ConcreteDataType::json_datatype()])
.unwrap()
);
assert!(matches!(json_path_match.signature(),
Signature {
type_signature: TypeSignature::Exact(valid_types),
volatility: Volatility::Immutable
} if valid_types == vec![ConcreteDataType::json_datatype(), ConcreteDataType::string_datatype()],
));
let json_strings = [
Some(r#"{"a": {"b": 2}, "b": 2, "c": 3}"#.to_string()),
Some(r#"{"a": 1, "b": [1,2,3]}"#.to_string()),
Some(r#"{"a": 1 ,"b": [1,2,3]}"#.to_string()),
Some(r#"[1,2,3]"#.to_string()),
Some(r#"{"a":1,"b":[1,2,3]}"#.to_string()),
Some(r#"null"#.to_string()),
Some(r#"null"#.to_string()),
];
let paths = vec![
Some("$.a.b == 2".to_string()),
Some("$.b[1 to last] >= 2".to_string()),
Some("$.c > 0".to_string()),
Some("$[0 to last] > 0".to_string()),
Some(r#"null"#.to_string()),
Some("$.c > 0".to_string()),
Some(r#"null"#.to_string()),
];
let results = [
Some(true),
Some(true),
Some(false),
Some(true),
None,
None,
None,
];
let jsonbs = json_strings
.into_iter()
.map(|s| s.map(|json| jsonb::parse_value(json.as_bytes()).unwrap().to_vec()))
.collect::<Vec<_>>();
let json_vector = BinaryVector::from(jsonbs);
let path_vector = StringVector::from(paths);
let args: Vec<VectorRef> = vec![Arc::new(json_vector), Arc::new(path_vector)];
let vector = json_path_match
.eval(FunctionContext::default(), &args)
.unwrap();
assert_eq!(7, vector.len());
for (i, expected) in results.iter().enumerate() {
let result = vector.get_ref(i);
match expected {
Some(expected_value) => {
assert!(!result.is_null());
let result_value = result.as_boolean().unwrap().unwrap();
assert_eq!(*expected_value, result_value);
}
None => {
assert!(result.is_null());
}
}
}
}
}
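For reference, a condensed sketch of the jsonb calls the function wraps, using the same crate API as the implementation above; the input values are illustrative.

fn path_match_sketch() {
    let json = jsonb::parse_value(br#"{"a": {"b": 2}}"#).unwrap().to_vec();
    let path = jsonb::jsonpath::parse_json_path(b"$.a.b == 2").unwrap();
    assert_eq!(jsonb::path_match(&json, path).ok(), Some(true));
}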

View File

@@ -199,6 +199,7 @@ pub fn default_get_uuid(working_home: &Option<String>) -> Option<String> {
} }
/// Report version info to GreptimeDB. /// Report version info to GreptimeDB.
///
/// We do not collect any identity-sensitive information. /// We do not collect any identity-sensitive information.
/// This task is scheduled to run every 30 minutes. /// This task is scheduled to run every 30 minutes.
/// The task will be disabled by default. It can be enabled by setting the build feature `greptimedb-telemetry` /// The task will be disabled by default. It can be enabled by setting the build feature `greptimedb-telemetry`
@@ -324,7 +325,7 @@ mod tests {
}); });
let addr = ([127, 0, 0, 1], port).into(); let addr = ([127, 0, 0, 1], port).into();
let server = Server::bind(&addr).serve(make_svc); let server = Server::try_bind(&addr).unwrap().serve(make_svc);
let graceful = server.with_graceful_shutdown(async { let graceful = server.with_graceful_shutdown(async {
rx.await.ok(); rx.await.ok();
}); });

View File

@@ -18,6 +18,7 @@ common-time.workspace = true
datatypes.workspace = true datatypes.workspace = true
prost.workspace = true prost.workspace = true
snafu.workspace = true snafu.workspace = true
store-api.workspace = true
table.workspace = true table.workspace = true
[dev-dependencies] [dev-dependencies]

View File

@@ -22,12 +22,13 @@ use api::v1::{
use common_query::AddColumnLocation; use common_query::AddColumnLocation;
use datatypes::schema::{ColumnSchema, RawSchema}; use datatypes::schema::{ColumnSchema, RawSchema};
use snafu::{ensure, OptionExt, ResultExt}; use snafu::{ensure, OptionExt, ResultExt};
use store_api::region_request::ChangeOption;
use table::metadata::TableId; use table::metadata::TableId;
use table::requests::{AddColumnRequest, AlterKind, AlterTableRequest, ChangeColumnTypeRequest}; use table::requests::{AddColumnRequest, AlterKind, AlterTableRequest, ChangeColumnTypeRequest};
use crate::error::{ use crate::error::{
InvalidColumnDefSnafu, MissingFieldSnafu, MissingTimestampColumnSnafu, Result, InvalidChangeTableOptionRequestSnafu, InvalidColumnDefSnafu, MissingFieldSnafu,
UnknownLocationTypeSnafu, MissingTimestampColumnSnafu, Result, UnknownLocationTypeSnafu,
}; };
const LOCATION_TYPE_FIRST: i32 = LocationType::First as i32; const LOCATION_TYPE_FIRST: i32 = LocationType::First as i32;
@@ -92,6 +93,15 @@ pub fn alter_expr_to_request(table_id: TableId, expr: AlterExpr) -> Result<Alter
Kind::RenameTable(RenameTable { new_table_name }) => { Kind::RenameTable(RenameTable { new_table_name }) => {
AlterKind::RenameTable { new_table_name } AlterKind::RenameTable { new_table_name }
} }
Kind::ChangeTableOptions(api::v1::ChangeTableOptions {
change_table_options,
}) => AlterKind::ChangeTableOptions {
options: change_table_options
.iter()
.map(ChangeOption::try_from)
.collect::<std::result::Result<Vec<_>, _>>()
.context(InvalidChangeTableOptionRequestSnafu)?,
},
}; };
let request = AlterTableRequest { let request = AlterTableRequest {

View File

@@ -19,6 +19,7 @@ use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode; use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug; use common_macro::stack_trace_debug;
use snafu::{Location, Snafu}; use snafu::{Location, Snafu};
use store_api::metadata::MetadataError;
#[derive(Snafu)] #[derive(Snafu)]
#[snafu(visibility(pub))] #[snafu(visibility(pub))]
@@ -118,6 +119,12 @@ pub enum Error {
#[snafu(implicit)] #[snafu(implicit)]
location: Location, location: Location,
}, },
#[snafu(display("Invalid change table option request"))]
InvalidChangeTableOptionRequest {
#[snafu(source)]
error: MetadataError,
},
} }
pub type Result<T> = std::result::Result<T, Error>; pub type Result<T> = std::result::Result<T, Error>;
@@ -141,6 +148,7 @@ impl ErrorExt for Error {
Error::UnknownColumnDataType { .. } | Error::InvalidFulltextColumnType { .. } => { Error::UnknownColumnDataType { .. } | Error::InvalidFulltextColumnType { .. } => {
StatusCode::InvalidArguments StatusCode::InvalidArguments
} }
Error::InvalidChangeTableOptionRequest { .. } => StatusCode::InvalidArguments,
} }
} }

View File

@@ -12,7 +12,7 @@
// See the License for the specific language governing permissions and // See the License for the specific language governing permissions and
// limitations under the License. // limitations under the License.
use api::helper::{convert_i128_to_interval, convert_to_pb_decimal128}; use api::helper::{convert_month_day_nano_to_pb, convert_to_pb_decimal128};
use api::v1::column::Values; use api::v1::column::Values;
use common_base::BitVec; use common_base::BitVec;
use datatypes::types::{IntervalType, TimeType, TimestampType, WrapperType}; use datatypes::types::{IntervalType, TimeType, TimestampType, WrapperType};
@@ -211,7 +211,7 @@ pub fn values(arrays: &[VectorRef]) -> Result<Values> {
ConcreteDataType::Interval(IntervalType::MonthDayNano(_)), ConcreteDataType::Interval(IntervalType::MonthDayNano(_)),
IntervalMonthDayNanoVector, IntervalMonthDayNanoVector,
interval_month_day_nano_values, interval_month_day_nano_values,
|x| { convert_i128_to_interval(x.into_native()) } |x| { convert_month_day_nano_to_pb(x) }
), ),
( (
ConcreteDataType::Decimal128(_), ConcreteDataType::Decimal128(_),

View File

@@ -35,7 +35,9 @@ pub fn aggr_func_type_store_derive(input: TokenStream) -> TokenStream {
} }
/// A struct can be used as a creator for aggregate function if it has been annotated with this /// A struct can be used as a creator for aggregate function if it has been annotated with this
/// attribute first. This attribute adds a necessary field which is intended to store the input /// attribute first.
///
/// This attribute adds a necessary field which is intended to store the input
/// data's types to the struct. /// data's types to the struct.
/// This attribute is expected to be used along with derive macro [AggrFuncTypeStore]. /// This attribute is expected to be used along with derive macro [AggrFuncTypeStore].
#[proc_macro_attribute] #[proc_macro_attribute]
@@ -44,9 +46,10 @@ pub fn as_aggr_func_creator(args: TokenStream, input: TokenStream) -> TokenStrea
} }
/// Attribute macro to convert an arithmetic function to a range function. The annotated function /// Attribute macro to convert an arithmetic function to a range function. The annotated function
/// should accept several arrays as input and return a single value as output. This procedure /// should accept several arrays as input and return a single value as output.
/// macro can work on any number of input parameters. Return type can be either primitive type ///
/// or wrapped in `Option`. /// This procedure macro can work on any number of input parameters. Return type can be either
/// primitive type or wrapped in `Option`.
/// ///
/// # Example /// # Example
/// Take `count_over_time()` in PromQL as an example: /// Take `count_over_time()` in PromQL as an example:

View File

@@ -60,7 +60,7 @@ table.workspace = true
tokio.workspace = true tokio.workspace = true
tokio-postgres = { workspace = true, optional = true } tokio-postgres = { workspace = true, optional = true }
tonic.workspace = true tonic.workspace = true
typetag = "0.2" typetag.workspace = true
[dev-dependencies] [dev-dependencies]
chrono.workspace = true chrono.workspace = true

View File

@@ -55,6 +55,7 @@ pub trait ClusterInfo {
} }
/// The key of [NodeInfo] in the storage. The format is `__meta_cluster_node_info-{cluster_id}-{role}-{node_id}`. /// The key of [NodeInfo] in the storage. The format is `__meta_cluster_node_info-{cluster_id}-{role}-{node_id}`.
///
/// This key cannot be used to describe the `Metasrv` because the `Metasrv` does not have /// This key cannot be used to describe the `Metasrv` because the `Metasrv` does not have
/// a `cluster_id`, it serves multiple clusters. /// a `cluster_id`, it serves multiple clusters.
#[derive(Debug, Clone, Eq, Hash, PartialEq, Serialize, Deserialize)] #[derive(Debug, Clone, Eq, Hash, PartialEq, Serialize, Deserialize)]

View File

@@ -78,17 +78,21 @@ pub struct RegionStat {
/// The write capacity units during this period /// The write capacity units during this period
pub wcus: i64, pub wcus: i64,
/// Approximate bytes of this region /// Approximate bytes of this region
pub approximate_bytes: i64, pub approximate_bytes: u64,
/// The engine name. /// The engine name.
pub engine: String, pub engine: String,
/// The region role. /// The region role.
pub role: RegionRole, pub role: RegionRole,
/// The number of rows
pub num_rows: u64,
/// The size of the memtable in bytes. /// The size of the memtable in bytes.
pub memtable_size: u64, pub memtable_size: u64,
/// The size of the manifest in bytes. /// The size of the manifest in bytes.
pub manifest_size: u64, pub manifest_size: u64,
/// The size of the SST files in bytes. /// The size of the SST data files in bytes.
pub sst_size: u64, pub sst_size: u64,
/// The size of the SST index files in bytes.
pub index_size: u64,
} }
impl Stat { impl Stat {
@@ -178,12 +182,14 @@ impl From<&api::v1::meta::RegionStat> for RegionStat {
id: RegionId::from_u64(value.region_id), id: RegionId::from_u64(value.region_id),
rcus: value.rcus, rcus: value.rcus,
wcus: value.wcus, wcus: value.wcus,
approximate_bytes: value.approximate_bytes, approximate_bytes: value.approximate_bytes as u64,
engine: value.engine.to_string(), engine: value.engine.to_string(),
role: RegionRole::from(value.role()), role: RegionRole::from(value.role()),
num_rows: region_stat.num_rows,
memtable_size: region_stat.memtable_size, memtable_size: region_stat.memtable_size,
manifest_size: region_stat.manifest_size, manifest_size: region_stat.manifest_size,
sst_size: region_stat.sst_size, sst_size: region_stat.sst_size,
index_size: region_stat.index_size,
} }
} }
} }
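Hedged sketch: with index_size now reported separately, a per-region disk roll-up over the new fields could look like the following; whether `estimated_disk_size()` uses exactly this formula is an assumption, not something the diff states.

fn region_disk_bytes(stat: &RegionStat) -> u64 {
    // Manifest + SST data + SST index; the in-memory memtable size is excluded.
    stat.manifest_size + stat.sst_size + stat.index_size
}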

View File

@@ -43,10 +43,10 @@ impl AlterLogicalTablesProcedure {
&self.data.physical_columns, &self.data.physical_columns,
); );
// Updates physical table's metadata // Updates physical table's metadata, and we don't need to touch per-region settings.
self.context self.context
.table_metadata_manager .table_metadata_manager
.update_table_info(physical_table_info, new_raw_table_info) .update_table_info(physical_table_info, None, new_raw_table_info)
.await?; .await?;
Ok(()) Ok(())

View File

@@ -43,10 +43,10 @@ use crate::ddl::DdlContext;
use crate::error::{Error, Result}; use crate::error::{Error, Result};
use crate::instruction::CacheIdent; use crate::instruction::CacheIdent;
use crate::key::table_info::TableInfoValue; use crate::key::table_info::TableInfoValue;
use crate::key::DeserializedValueWithBytes; use crate::key::{DeserializedValueWithBytes, RegionDistribution};
use crate::lock_key::{CatalogLock, SchemaLock, TableLock, TableNameLock}; use crate::lock_key::{CatalogLock, SchemaLock, TableLock, TableNameLock};
use crate::rpc::ddl::AlterTableTask; use crate::rpc::ddl::AlterTableTask;
use crate::rpc::router::{find_leader_regions, find_leaders}; use crate::rpc::router::{find_leader_regions, find_leaders, region_distribution};
use crate::{metrics, ClusterId}; use crate::{metrics, ClusterId};
/// The alter table procedure /// The alter table procedure
@@ -101,6 +101,9 @@ impl AlterTableProcedure {
.get_physical_table_route(table_id) .get_physical_table_route(table_id)
.await?; .await?;
self.data.region_distribution =
Some(region_distribution(&physical_table_route.region_routes));
let leaders = find_leaders(&physical_table_route.region_routes); let leaders = find_leaders(&physical_table_route.region_routes);
let mut alter_region_tasks = Vec::with_capacity(leaders.len()); let mut alter_region_tasks = Vec::with_capacity(leaders.len());
@@ -161,7 +164,13 @@ impl AlterTableProcedure {
self.on_update_metadata_for_rename(new_table_name.to_string(), table_info_value) self.on_update_metadata_for_rename(new_table_name.to_string(), table_info_value)
.await?; .await?;
} else { } else {
self.on_update_metadata_for_alter(new_info.into(), table_info_value) // region distribution is set in submit_alter_region_requests
let region_distribution = self.data.region_distribution.as_ref().unwrap().clone();
self.on_update_metadata_for_alter(
new_info.into(),
region_distribution,
table_info_value,
)
.await?; .await?;
} }
@@ -271,6 +280,8 @@ pub struct AlterTableData {
table_id: TableId, table_id: TableId,
/// Table info value before alteration. /// Table info value before alteration.
table_info_value: Option<DeserializedValueWithBytes<TableInfoValue>>, table_info_value: Option<DeserializedValueWithBytes<TableInfoValue>>,
/// Region distribution for table in case we need to update region options.
region_distribution: Option<RegionDistribution>,
} }
impl AlterTableData { impl AlterTableData {
@@ -281,6 +292,7 @@ impl AlterTableData {
table_id, table_id,
cluster_id, cluster_id,
table_info_value: None, table_info_value: None,
region_distribution: None,
} }
} }

View File

@@ -106,6 +106,7 @@ fn create_proto_alter_kind(
}))) })))
} }
Kind::RenameTable(_) => Ok(None), Kind::RenameTable(_) => Ok(None),
Kind::ChangeTableOptions(v) => Ok(Some(alter_request::Kind::ChangeTableOptions(v.clone()))),
} }
} }

View File

@@ -20,7 +20,7 @@ use table::requests::AlterKind;
use crate::ddl::alter_table::AlterTableProcedure; use crate::ddl::alter_table::AlterTableProcedure;
use crate::error::{self, Result}; use crate::error::{self, Result};
use crate::key::table_info::TableInfoValue; use crate::key::table_info::TableInfoValue;
use crate::key::DeserializedValueWithBytes; use crate::key::{DeserializedValueWithBytes, RegionDistribution};
impl AlterTableProcedure { impl AlterTableProcedure {
/// Builds new_meta /// Builds new_meta
@@ -51,7 +51,9 @@ impl AlterTableProcedure {
AlterKind::RenameTable { new_table_name } => { AlterKind::RenameTable { new_table_name } => {
new_info.name = new_table_name.to_string(); new_info.name = new_table_name.to_string();
} }
AlterKind::DropColumns { .. } | AlterKind::ChangeColumnTypes { .. } => {} AlterKind::DropColumns { .. }
| AlterKind::ChangeColumnTypes { .. }
| AlterKind::ChangeTableOptions { .. } => {}
} }
Ok(new_info) Ok(new_info)
@@ -75,11 +77,16 @@ impl AlterTableProcedure {
pub(crate) async fn on_update_metadata_for_alter( pub(crate) async fn on_update_metadata_for_alter(
&self, &self,
new_table_info: RawTableInfo, new_table_info: RawTableInfo,
region_distribution: RegionDistribution,
current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>, current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
) -> Result<()> { ) -> Result<()> {
let table_metadata_manager = &self.context.table_metadata_manager; let table_metadata_manager = &self.context.table_metadata_manager;
table_metadata_manager table_metadata_manager
.update_table_info(current_table_info_value, new_table_info) .update_table_info(
current_table_info_value,
Some(region_distribution),
new_table_info,
)
.await?; .await?;
Ok(()) Ok(())

View File

@@ -58,10 +58,10 @@ impl CreateLogicalTablesProcedure {
&new_table_info.name, &new_table_info.name,
); );
// Update physical table's metadata // Update physical table's metadata and we don't need to touch per-region settings.
self.context self.context
.table_metadata_manager .table_metadata_manager
.update_table_info(&physical_table_info, new_table_info) .update_table_info(&physical_table_info, None, new_table_info)
.await?; .await?;
// Invalid physical table cache // Invalid physical table cache

View File

@@ -29,7 +29,10 @@ use crate::test_util::MockDatanodeHandler;
#[async_trait::async_trait] #[async_trait::async_trait]
impl MockDatanodeHandler for () { impl MockDatanodeHandler for () {
async fn handle(&self, _peer: &Peer, _request: RegionRequest) -> Result<RegionResponse> { async fn handle(&self, _peer: &Peer, _request: RegionRequest) -> Result<RegionResponse> {
unreachable!() Ok(RegionResponse {
affected_rows: 0,
extensions: Default::default(),
})
} }
async fn handle_query( async fn handle_query(

View File

@@ -19,13 +19,14 @@ use std::sync::Arc;
use api::v1::alter_expr::Kind; use api::v1::alter_expr::Kind;
use api::v1::region::{region_request, RegionRequest}; use api::v1::region::{region_request, RegionRequest};
use api::v1::{ use api::v1::{
AddColumn, AddColumns, AlterExpr, ColumnDataType, ColumnDef as PbColumnDef, DropColumn, AddColumn, AddColumns, AlterExpr, ChangeTableOption, ChangeTableOptions, ColumnDataType,
DropColumns, SemanticType, ColumnDef as PbColumnDef, DropColumn, DropColumns, SemanticType,
}; };
use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME}; use common_catalog::consts::{DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
use common_error::ext::ErrorExt; use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode; use common_error::status_code::StatusCode;
use store_api::storage::RegionId; use store_api::storage::RegionId;
use table::requests::TTL_KEY;
use tokio::sync::mpsc::{self}; use tokio::sync::mpsc::{self};
use crate::ddl::alter_table::AlterTableProcedure; use crate::ddl::alter_table::AlterTableProcedure;
@@ -34,6 +35,7 @@ use crate::ddl::test_util::create_table::test_create_table_task;
use crate::ddl::test_util::datanode_handler::{ use crate::ddl::test_util::datanode_handler::{
DatanodeWatcher, RequestOutdatedErrorDatanodeHandler, DatanodeWatcher, RequestOutdatedErrorDatanodeHandler,
}; };
use crate::key::datanode_table::DatanodeTableKey;
use crate::key::table_name::TableNameKey; use crate::key::table_name::TableNameKey;
use crate::key::table_route::TableRouteValue; use crate::key::table_route::TableRouteValue;
use crate::peer::Peer; use crate::peer::Peer;
@@ -293,12 +295,21 @@ async fn test_on_update_metadata_add_columns() {
let table_name = "foo"; let table_name = "foo";
let table_id = 1024; let table_id = 1024;
let task = test_create_table_task(table_name, table_id); let task = test_create_table_task(table_name, table_id);
let region_id = RegionId::new(table_id, 0);
let mock_table_routes = vec![RegionRoute {
region: Region::new_test(region_id),
leader_peer: Some(Peer::default()),
follower_peers: vec![],
leader_state: None,
leader_down_since: None,
}];
// Puts a value to table name key. // Puts a value to table name key.
ddl_context ddl_context
.table_metadata_manager .table_metadata_manager
.create_table_metadata( .create_table_metadata(
task.table_info.clone(), task.table_info.clone(),
TableRouteValue::physical(vec![]), TableRouteValue::physical(mock_table_routes),
HashMap::new(), HashMap::new(),
) )
.await .await
@@ -326,6 +337,7 @@ async fn test_on_update_metadata_add_columns() {
let mut procedure = let mut procedure =
AlterTableProcedure::new(cluster_id, table_id, task, ddl_context.clone()).unwrap(); AlterTableProcedure::new(cluster_id, table_id, task, ddl_context.clone()).unwrap();
procedure.on_prepare().await.unwrap(); procedure.on_prepare().await.unwrap();
procedure.submit_alter_region_requests().await.unwrap();
procedure.on_update_metadata().await.unwrap(); procedure.on_update_metadata().await.unwrap();
let table_info = ddl_context let table_info = ddl_context
@@ -343,3 +355,76 @@ async fn test_on_update_metadata_add_columns() {
table_info.meta.next_column_id table_info.meta.next_column_id
); );
} }
#[tokio::test]
async fn test_on_update_table_options() {
let node_manager = Arc::new(MockDatanodeManager::new(()));
let ddl_context = new_ddl_context(node_manager);
let cluster_id = 1;
let table_name = "foo";
let table_id = 1024;
let task = test_create_table_task(table_name, table_id);
let region_id = RegionId::new(table_id, 0);
let mock_table_routes = vec![RegionRoute {
region: Region::new_test(region_id),
leader_peer: Some(Peer::default()),
follower_peers: vec![],
leader_state: None,
leader_down_since: None,
}];
// Puts a value to table name key.
ddl_context
.table_metadata_manager
.create_table_metadata(
task.table_info.clone(),
TableRouteValue::physical(mock_table_routes),
HashMap::new(),
)
.await
.unwrap();
let task = AlterTableTask {
alter_table: AlterExpr {
catalog_name: DEFAULT_CATALOG_NAME.to_string(),
schema_name: DEFAULT_SCHEMA_NAME.to_string(),
table_name: table_name.to_string(),
kind: Some(Kind::ChangeTableOptions(ChangeTableOptions {
change_table_options: vec![ChangeTableOption {
key: TTL_KEY.to_string(),
value: "1d".to_string(),
}],
})),
},
};
let mut procedure =
AlterTableProcedure::new(cluster_id, table_id, task, ddl_context.clone()).unwrap();
procedure.on_prepare().await.unwrap();
procedure.submit_alter_region_requests().await.unwrap();
procedure.on_update_metadata().await.unwrap();
let table_info = ddl_context
.table_metadata_manager
.table_info_manager()
.get(table_id)
.await
.unwrap()
.unwrap()
.into_inner()
.table_info;
let datanode_key = DatanodeTableKey::new(0, table_id);
let region_info = ddl_context
.table_metadata_manager
.datanode_table_manager()
.get(&datanode_key)
.await
.unwrap()
.unwrap()
.region_info;
assert_eq!(
region_info.region_options,
HashMap::from(&table_info.meta.options)
);
}

View File

@@ -652,6 +652,18 @@ pub enum Error {
#[snafu(implicit)] #[snafu(implicit)]
location: Location, location: Location,
}, },
#[snafu(display(
"Datanode table info not found, table id: {}, datanode id: {}",
table_id,
datanode_id
))]
DatanodeTableInfoNotFound {
datanode_id: DatanodeId,
table_id: TableId,
#[snafu(implicit)]
location: Location,
},
} }
pub type Result<T> = std::result::Result<T, Error>; pub type Result<T> = std::result::Result<T, Error>;
@@ -752,6 +764,7 @@ impl ErrorExt for Error {
PostgresExecution { .. } => StatusCode::Internal, PostgresExecution { .. } => StatusCode::Internal,
#[cfg(feature = "pg_kvbackend")] #[cfg(feature = "pg_kvbackend")]
ConnectPostgres { .. } => StatusCode::Internal, ConnectPostgres { .. } => StatusCode::Internal,
Error::DatanodeTableInfoNotFound { .. } => StatusCode::Internal,
} }
} }


@@ -133,7 +133,6 @@ use self::flow::flow_name::FlowNameValue;
use self::schema_name::{SchemaManager, SchemaNameKey, SchemaNameValue}; use self::schema_name::{SchemaManager, SchemaNameKey, SchemaNameValue};
use self::table_route::{TableRouteManager, TableRouteValue}; use self::table_route::{TableRouteManager, TableRouteValue};
use self::tombstone::TombstoneManager; use self::tombstone::TombstoneManager;
use crate::ddl::utils::region_storage_path;
use crate::error::{self, Result, SerdeJsonSnafu}; use crate::error::{self, Result, SerdeJsonSnafu};
use crate::key::node_address::NodeAddressValue; use crate::key::node_address::NodeAddressValue;
use crate::key::table_route::TableRouteKey; use crate::key::table_route::TableRouteKey;
@@ -593,8 +592,6 @@ impl TableMetadataManager {
table_info.meta.region_numbers = region_numbers; table_info.meta.region_numbers = region_numbers;
let table_id = table_info.ident.table_id; let table_id = table_info.ident.table_id;
let engine = table_info.meta.engine.clone(); let engine = table_info.meta.engine.clone();
let region_storage_path =
region_storage_path(&table_info.catalog_name, &table_info.schema_name);
// Creates table name. // Creates table name.
let table_name = TableNameKey::new( let table_name = TableNameKey::new(
@@ -606,7 +603,7 @@ impl TableMetadataManager {
.table_name_manager() .table_name_manager()
.build_create_txn(&table_name, table_id)?; .build_create_txn(&table_name, table_id)?;
let region_options = (&table_info.meta.options).into(); let region_options = table_info.to_region_options();
// Creates table info. // Creates table info.
let table_info_value = TableInfoValue::new(table_info); let table_info_value = TableInfoValue::new(table_info);
let (create_table_info_txn, on_create_table_info_failure) = self let (create_table_info_txn, on_create_table_info_failure) = self
@@ -625,6 +622,7 @@ impl TableMetadataManager {
]); ]);
if let TableRouteValue::Physical(x) = &table_route_value { if let TableRouteValue::Physical(x) = &table_route_value {
let region_storage_path = table_info_value.region_storage_path();
let create_datanode_table_txn = self.datanode_table_manager().build_create_txn( let create_datanode_table_txn = self.datanode_table_manager().build_create_txn(
table_id, table_id,
&engine, &engine,
@@ -926,13 +924,15 @@ impl TableMetadataManager {
} }
/// Updates table info and returns an error if different metadata exists. /// Updates table info and returns an error if different metadata exists.
/// Also cascadingly updates all redundant table options for each region
/// if `region_distribution` is present.
pub async fn update_table_info( pub async fn update_table_info(
&self, &self,
current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>, current_table_info_value: &DeserializedValueWithBytes<TableInfoValue>,
region_distribution: Option<RegionDistribution>,
new_table_info: RawTableInfo, new_table_info: RawTableInfo,
) -> Result<()> { ) -> Result<()> {
let table_id = current_table_info_value.table_info.ident.table_id; let table_id = current_table_info_value.table_info.ident.table_id;
let new_table_info_value = current_table_info_value.update(new_table_info); let new_table_info_value = current_table_info_value.update(new_table_info);
// Updates table info. // Updates table info.
@@ -940,8 +940,19 @@ impl TableMetadataManager {
.table_info_manager() .table_info_manager()
.build_update_txn(table_id, current_table_info_value, &new_table_info_value)?; .build_update_txn(table_id, current_table_info_value, &new_table_info_value)?;
let mut r = self.kv_backend.txn(update_table_info_txn).await?; let txn = if let Some(region_distribution) = region_distribution {
// Region options derived from the table info.
let new_region_options = new_table_info_value.table_info.to_region_options();
let update_datanode_table_options_txn = self
.datanode_table_manager
.build_update_table_options_txn(table_id, region_distribution, new_region_options)
.await?;
Txn::merge_all([update_table_info_txn, update_datanode_table_options_txn])
} else {
update_table_info_txn
};
let mut r = self.kv_backend.txn(txn).await?;
// Checks whether metadata was already updated. // Checks whether metadata was already updated.
if !r.succeeded { if !r.succeeded {
let mut set = TxnOpGetResponseSet::from(&mut r.responses); let mut set = TxnOpGetResponseSet::from(&mut r.responses);
@@ -1669,12 +1680,12 @@ mod tests {
DeserializedValueWithBytes::from_inner(TableInfoValue::new(table_info.clone())); DeserializedValueWithBytes::from_inner(TableInfoValue::new(table_info.clone()));
// should be ok. // should be ok.
table_metadata_manager table_metadata_manager
.update_table_info(&current_table_info_value, new_table_info.clone()) .update_table_info(&current_table_info_value, None, new_table_info.clone())
.await .await
.unwrap(); .unwrap();
// if table info was updated, it should be ok. // if table info was updated, it should be ok.
table_metadata_manager table_metadata_manager
.update_table_info(&current_table_info_value, new_table_info.clone()) .update_table_info(&current_table_info_value, None, new_table_info.clone())
.await .await
.unwrap(); .unwrap();
@@ -1696,7 +1707,7 @@ mod tests {
// if the current_table_info_value is wrong, it should return an error. // if the current_table_info_value is wrong, it should return an error.
// The ABA problem. // The ABA problem.
assert!(table_metadata_manager assert!(table_metadata_manager
.update_table_info(&wrong_table_info_value, new_table_info) .update_table_info(&wrong_table_info_value, None, new_table_info)
.await .await
.is_err()) .is_err())
} }


@@ -35,7 +35,7 @@ pub struct CatalogNameKey<'a> {
pub catalog: &'a str, pub catalog: &'a str,
} }
impl<'a> Default for CatalogNameKey<'a> { impl Default for CatalogNameKey<'_> {
fn default() -> Self { fn default() -> Self {
Self { Self {
catalog: DEFAULT_CATALOG_NAME, catalog: DEFAULT_CATALOG_NAME,


@@ -23,7 +23,7 @@ use store_api::storage::RegionNumber;
use table::metadata::TableId; use table::metadata::TableId;
use super::MetadataKey; use super::MetadataKey;
use crate::error::{InvalidMetadataSnafu, Result}; use crate::error::{DatanodeTableInfoNotFoundSnafu, InvalidMetadataSnafu, Result};
use crate::key::{ use crate::key::{
MetadataValue, RegionDistribution, DATANODE_TABLE_KEY_PATTERN, DATANODE_TABLE_KEY_PREFIX, MetadataValue, RegionDistribution, DATANODE_TABLE_KEY_PATTERN, DATANODE_TABLE_KEY_PREFIX,
}; };
@@ -77,7 +77,7 @@ impl DatanodeTableKey {
} }
} }
impl<'a> MetadataKey<'a, DatanodeTableKey> for DatanodeTableKey { impl MetadataKey<'_, DatanodeTableKey> for DatanodeTableKey {
fn to_bytes(&self) -> Vec<u8> { fn to_bytes(&self) -> Vec<u8> {
self.to_string().into_bytes() self.to_string().into_bytes()
} }
@@ -209,6 +209,49 @@ impl DatanodeTableManager {
Ok(txn) Ok(txn)
} }
/// Builds a transaction to update the redundant table options (including WAL options)
/// for the given table id.
///
/// Note that the provided `new_region_options` must be a
/// complete set of all options rather than incremental changes.
pub(crate) async fn build_update_table_options_txn(
&self,
table_id: TableId,
region_distribution: RegionDistribution,
new_region_options: HashMap<String, String>,
) -> Result<Txn> {
assert!(!region_distribution.is_empty());
// safety: region_distribution must not be empty
let (any_datanode, _) = region_distribution.first_key_value().unwrap();
let mut region_info = self
.kv_backend
.get(&DatanodeTableKey::new(*any_datanode, table_id).to_bytes())
.await
.transpose()
.context(DatanodeTableInfoNotFoundSnafu {
datanode_id: *any_datanode,
table_id,
})?
.and_then(|r| DatanodeTableValue::try_from_raw_value(&r.value))?
.region_info;
// substitute region options only.
region_info.region_options = new_region_options;
let mut txns = Vec::with_capacity(region_distribution.len());
for (datanode, regions) in region_distribution.into_iter() {
let key = DatanodeTableKey::new(datanode, table_id);
let key_bytes = key.to_bytes();
let value_bytes = DatanodeTableValue::new(table_id, regions, region_info.clone())
.try_as_raw_value()?;
txns.push(TxnOp::Put(key_bytes, value_bytes));
}
let txn = Txn::new().and_then(txns);
Ok(txn)
}
/// Builds the update datanode table transactions. It only executes while the primary keys comparing successes. /// Builds the update datanode table transactions. It only executes while the primary keys comparing successes.
pub(crate) fn build_update_txn( pub(crate) fn build_update_txn(
&self, &self,


@@ -42,6 +42,8 @@ lazy_static! {
/// The layout: `__flow/info/{flow_id}`. /// The layout: `__flow/info/{flow_id}`.
pub struct FlowInfoKey(FlowScoped<FlowInfoKeyInner>); pub struct FlowInfoKey(FlowScoped<FlowInfoKeyInner>);
pub type FlowInfoDecodeResult = Result<Option<DeserializedValueWithBytes<FlowInfoValue>>>;
impl<'a> MetadataKey<'a, FlowInfoKey> for FlowInfoKey { impl<'a> MetadataKey<'a, FlowInfoKey> for FlowInfoKey {
fn to_bytes(&self) -> Vec<u8> { fn to_bytes(&self) -> Vec<u8> {
self.0.to_bytes() self.0.to_bytes()
@@ -203,9 +205,7 @@ impl FlowInfoManager {
flow_value: &FlowInfoValue, flow_value: &FlowInfoValue,
) -> Result<( ) -> Result<(
Txn, Txn,
impl FnOnce( impl FnOnce(&mut TxnOpGetResponseSet) -> FlowInfoDecodeResult,
&mut TxnOpGetResponseSet,
) -> Result<Option<DeserializedValueWithBytes<FlowInfoValue>>>,
)> { )> {
let key = FlowInfoKey::new(flow_id).to_bytes(); let key = FlowInfoKey::new(flow_id).to_bytes();
let txn = Txn::put_if_not_exists(key.clone(), flow_value.try_as_raw_value()?); let txn = Txn::put_if_not_exists(key.clone(), flow_value.try_as_raw_value()?);


@@ -46,6 +46,8 @@ lazy_static! {
/// The layout: `__flow/name/{catalog_name}/{flow_name}`. /// The layout: `__flow/name/{catalog_name}/{flow_name}`.
pub struct FlowNameKey<'a>(FlowScoped<FlowNameKeyInner<'a>>); pub struct FlowNameKey<'a>(FlowScoped<FlowNameKeyInner<'a>>);
pub type FlowNameDecodeResult = Result<Option<DeserializedValueWithBytes<FlowNameValue>>>;
#[allow(dead_code)] #[allow(dead_code)]
impl<'a> FlowNameKey<'a> { impl<'a> FlowNameKey<'a> {
/// Returns the [FlowNameKey] /// Returns the [FlowNameKey]
@@ -104,7 +106,7 @@ impl<'a> MetadataKey<'a, FlowNameKeyInner<'a>> for FlowNameKeyInner<'_> {
.into_bytes() .into_bytes()
} }
fn from_bytes(bytes: &'a [u8]) -> Result<FlowNameKeyInner> { fn from_bytes(bytes: &'a [u8]) -> Result<FlowNameKeyInner<'a>> {
let key = std::str::from_utf8(bytes).map_err(|e| { let key = std::str::from_utf8(bytes).map_err(|e| {
error::InvalidMetadataSnafu { error::InvalidMetadataSnafu {
err_msg: format!( err_msg: format!(
@@ -223,9 +225,7 @@ impl FlowNameManager {
flow_id: FlowId, flow_id: FlowId,
) -> Result<( ) -> Result<(
Txn, Txn,
impl FnOnce( impl FnOnce(&mut TxnOpGetResponseSet) -> FlowNameDecodeResult,
&mut TxnOpGetResponseSet,
) -> Result<Option<DeserializedValueWithBytes<FlowNameValue>>>,
)> { )> {
let key = FlowNameKey::new(catalog_name, flow_name); let key = FlowNameKey::new(catalog_name, flow_name);
let raw_key = key.to_bytes(); let raw_key = key.to_bytes();


@@ -52,7 +52,7 @@ impl NodeAddressValue {
} }
} }
impl<'a> MetadataKey<'a, NodeAddressKey> for NodeAddressKey { impl MetadataKey<'_, NodeAddressKey> for NodeAddressKey {
fn to_bytes(&self) -> Vec<u8> { fn to_bytes(&self) -> Vec<u8> {
self.to_string().into_bytes() self.to_string().into_bytes()
} }


@@ -41,7 +41,7 @@ pub struct SchemaNameKey<'a> {
pub schema: &'a str, pub schema: &'a str,
} }
impl<'a> Default for SchemaNameKey<'a> { impl Default for SchemaNameKey<'_> {
fn default() -> Self { fn default() -> Self {
Self { Self {
catalog: DEFAULT_CATALOG_NAME, catalog: DEFAULT_CATALOG_NAME,


@@ -23,6 +23,7 @@ use table::table_name::TableName;
use table::table_reference::TableReference; use table::table_reference::TableReference;
use super::TABLE_INFO_KEY_PATTERN; use super::TABLE_INFO_KEY_PATTERN;
use crate::ddl::utils::region_storage_path;
use crate::error::{InvalidMetadataSnafu, Result}; use crate::error::{InvalidMetadataSnafu, Result};
use crate::key::txn_helper::TxnOpGetResponseSet; use crate::key::txn_helper::TxnOpGetResponseSet;
use crate::key::{DeserializedValueWithBytes, MetadataKey, MetadataValue, TABLE_INFO_KEY_PREFIX}; use crate::key::{DeserializedValueWithBytes, MetadataKey, MetadataValue, TABLE_INFO_KEY_PREFIX};
@@ -51,7 +52,7 @@ impl Display for TableInfoKey {
} }
} }
impl<'a> MetadataKey<'a, TableInfoKey> for TableInfoKey { impl MetadataKey<'_, TableInfoKey> for TableInfoKey {
fn to_bytes(&self) -> Vec<u8> { fn to_bytes(&self) -> Vec<u8> {
self.to_string().into_bytes() self.to_string().into_bytes()
} }
@@ -125,6 +126,11 @@ impl TableInfoValue {
table_name: self.table_info.name.to_string(), table_name: self.table_info.name.to_string(),
} }
} }
/// Builds storage path for all regions in table.
pub fn region_storage_path(&self) -> String {
region_storage_path(&self.table_info.catalog_name, &self.table_info.schema_name)
}
} }
pub type TableInfoManagerRef = Arc<TableInfoManager>; pub type TableInfoManagerRef = Arc<TableInfoManager>;
@@ -132,6 +138,7 @@ pub type TableInfoManagerRef = Arc<TableInfoManager>;
pub struct TableInfoManager { pub struct TableInfoManager {
kv_backend: KvBackendRef, kv_backend: KvBackendRef,
} }
pub type TableInfoDecodeResult = Result<Option<DeserializedValueWithBytes<TableInfoValue>>>;
impl TableInfoManager { impl TableInfoManager {
pub fn new(kv_backend: KvBackendRef) -> Self { pub fn new(kv_backend: KvBackendRef) -> Self {
@@ -145,9 +152,7 @@ impl TableInfoManager {
table_info_value: &TableInfoValue, table_info_value: &TableInfoValue,
) -> Result<( ) -> Result<(
Txn, Txn,
impl FnOnce( impl FnOnce(&mut TxnOpGetResponseSet) -> TableInfoDecodeResult,
&mut TxnOpGetResponseSet,
) -> Result<Option<DeserializedValueWithBytes<TableInfoValue>>>,
)> { )> {
let key = TableInfoKey::new(table_id); let key = TableInfoKey::new(table_id);
let raw_key = key.to_bytes(); let raw_key = key.to_bytes();
@@ -169,9 +174,7 @@ impl TableInfoManager {
new_table_info_value: &TableInfoValue, new_table_info_value: &TableInfoValue,
) -> Result<( ) -> Result<(
Txn, Txn,
impl FnOnce( impl FnOnce(&mut TxnOpGetResponseSet) -> TableInfoDecodeResult,
&mut TxnOpGetResponseSet,
) -> Result<Option<DeserializedValueWithBytes<TableInfoValue>>>,
)> { )> {
let key = TableInfoKey::new(table_id); let key = TableInfoKey::new(table_id);
let raw_key = key.to_bytes(); let raw_key = key.to_bytes();


@@ -245,7 +245,7 @@ impl LogicalTableRouteValue {
} }
} }
impl<'a> MetadataKey<'a, TableRouteKey> for TableRouteKey { impl MetadataKey<'_, TableRouteKey> for TableRouteKey {
fn to_bytes(&self) -> Vec<u8> { fn to_bytes(&self) -> Vec<u8> {
self.to_string().into_bytes() self.to_string().into_bytes()
} }
@@ -472,6 +472,8 @@ pub struct TableRouteStorage {
kv_backend: KvBackendRef, kv_backend: KvBackendRef,
} }
pub type TableRouteValueDecodeResult = Result<Option<DeserializedValueWithBytes<TableRouteValue>>>;
impl TableRouteStorage { impl TableRouteStorage {
pub fn new(kv_backend: KvBackendRef) -> Self { pub fn new(kv_backend: KvBackendRef) -> Self {
Self { kv_backend } Self { kv_backend }
@@ -485,9 +487,7 @@ impl TableRouteStorage {
table_route_value: &TableRouteValue, table_route_value: &TableRouteValue,
) -> Result<( ) -> Result<(
Txn, Txn,
impl FnOnce( impl FnOnce(&mut TxnOpGetResponseSet) -> TableRouteValueDecodeResult,
&mut TxnOpGetResponseSet,
) -> Result<Option<DeserializedValueWithBytes<TableRouteValue>>>,
)> { )> {
let key = TableRouteKey::new(table_id); let key = TableRouteKey::new(table_id);
let raw_key = key.to_bytes(); let raw_key = key.to_bytes();
@@ -510,9 +510,7 @@ impl TableRouteStorage {
new_table_route_value: &TableRouteValue, new_table_route_value: &TableRouteValue,
) -> Result<( ) -> Result<(
Txn, Txn,
impl FnOnce( impl FnOnce(&mut TxnOpGetResponseSet) -> TableRouteValueDecodeResult,
&mut TxnOpGetResponseSet,
) -> Result<Option<DeserializedValueWithBytes<TableRouteValue>>>,
)> { )> {
let key = TableRouteKey::new(table_id); let key = TableRouteKey::new(table_id);
let raw_key = key.to_bytes(); let raw_key = key.to_bytes();


@@ -53,7 +53,7 @@ impl Display for ViewInfoKey {
} }
} }
impl<'a> MetadataKey<'a, ViewInfoKey> for ViewInfoKey { impl MetadataKey<'_, ViewInfoKey> for ViewInfoKey {
fn to_bytes(&self) -> Vec<u8> { fn to_bytes(&self) -> Vec<u8> {
self.to_string().into_bytes() self.to_string().into_bytes()
} }
@@ -139,6 +139,8 @@ pub struct ViewInfoManager {
pub type ViewInfoManagerRef = Arc<ViewInfoManager>; pub type ViewInfoManagerRef = Arc<ViewInfoManager>;
pub type ViewInfoValueDecodeResult = Result<Option<DeserializedValueWithBytes<ViewInfoValue>>>;
impl ViewInfoManager { impl ViewInfoManager {
pub fn new(kv_backend: KvBackendRef) -> Self { pub fn new(kv_backend: KvBackendRef) -> Self {
Self { kv_backend } Self { kv_backend }
@@ -151,9 +153,7 @@ impl ViewInfoManager {
view_info_value: &ViewInfoValue, view_info_value: &ViewInfoValue,
) -> Result<( ) -> Result<(
Txn, Txn,
impl FnOnce( impl FnOnce(&mut TxnOpGetResponseSet) -> ViewInfoValueDecodeResult,
&mut TxnOpGetResponseSet,
) -> Result<Option<DeserializedValueWithBytes<ViewInfoValue>>>,
)> { )> {
let key = ViewInfoKey::new(view_id); let key = ViewInfoKey::new(view_id);
let raw_key = key.to_bytes(); let raw_key = key.to_bytes();
@@ -175,9 +175,7 @@ impl ViewInfoManager {
new_view_info_value: &ViewInfoValue, new_view_info_value: &ViewInfoValue,
) -> Result<( ) -> Result<(
Txn, Txn,
impl FnOnce( impl FnOnce(&mut TxnOpGetResponseSet) -> ViewInfoValueDecodeResult,
&mut TxnOpGetResponseSet,
) -> Result<Option<DeserializedValueWithBytes<ViewInfoValue>>>,
)> { )> {
let key = ViewInfoKey::new(view_id); let key = ViewInfoKey::new(view_id);
let raw_key = key.to_bytes(); let raw_key = key.to_bytes();


@@ -12,10 +12,10 @@
// See the License for the specific language governing permissions and // See the License for the specific language governing permissions and
// limitations under the License. // limitations under the License.
use std::sync::Arc; use std::sync::{Arc, Mutex};
use async_trait::async_trait; use async_trait::async_trait;
use common_telemetry::error; use common_telemetry::{error, info};
use crate::error::Result; use crate::error::Result;
@@ -24,6 +24,8 @@ pub type LeadershipChangeNotifierCustomizerRef = Arc<dyn LeadershipChangeNotifie
/// A trait for customizing the leadership change notifier. /// A trait for customizing the leadership change notifier.
pub trait LeadershipChangeNotifierCustomizer: Send + Sync { pub trait LeadershipChangeNotifierCustomizer: Send + Sync {
fn customize(&self, notifier: &mut LeadershipChangeNotifier); fn customize(&self, notifier: &mut LeadershipChangeNotifier);
fn add_listener(&self, listener: Arc<dyn LeadershipChangeListener>);
} }
/// A trait for handling leadership change events in a distributed system. /// A trait for handling leadership change events in a distributed system.
@@ -45,6 +47,31 @@ pub struct LeadershipChangeNotifier {
listeners: Vec<Arc<dyn LeadershipChangeListener>>, listeners: Vec<Arc<dyn LeadershipChangeListener>>,
} }
#[derive(Default)]
pub struct DefaultLeadershipChangeNotifierCustomizer {
listeners: Mutex<Vec<Arc<dyn LeadershipChangeListener>>>,
}
impl DefaultLeadershipChangeNotifierCustomizer {
pub fn new() -> Self {
Self {
listeners: Mutex::new(Vec::new()),
}
}
}
impl LeadershipChangeNotifierCustomizer for DefaultLeadershipChangeNotifierCustomizer {
fn customize(&self, notifier: &mut LeadershipChangeNotifier) {
info!("Customizing leadership change notifier");
let listeners = self.listeners.lock().unwrap().clone();
notifier.listeners.extend(listeners);
}
fn add_listener(&self, listener: Arc<dyn LeadershipChangeListener>) {
self.listeners.lock().unwrap().push(listener);
}
}
impl LeadershipChangeNotifier { impl LeadershipChangeNotifier {
/// Adds a listener to the notifier. /// Adds a listener to the notifier.
pub fn add_listener(&mut self, listener: Arc<dyn LeadershipChangeListener>) { pub fn add_listener(&mut self, listener: Arc<dyn LeadershipChangeListener>) {


@@ -34,7 +34,7 @@ pub enum CatalogLock<'a> {
Write(&'a str), Write(&'a str),
} }
impl<'a> Display for CatalogLock<'a> { impl Display for CatalogLock<'_> {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result { fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
let key = match self { let key = match self {
CatalogLock::Read(s) => s, CatalogLock::Read(s) => s,
@@ -44,7 +44,7 @@ impl<'a> Display for CatalogLock<'a> {
} }
} }
impl<'a> From<CatalogLock<'a>> for StringKey { impl From<CatalogLock<'_>> for StringKey {
fn from(value: CatalogLock) -> Self { fn from(value: CatalogLock) -> Self {
match value { match value {
CatalogLock::Write(_) => StringKey::Exclusive(value.to_string()), CatalogLock::Write(_) => StringKey::Exclusive(value.to_string()),


@@ -289,6 +289,7 @@ pub enum LeaderState {
/// ///
/// - The [`Region`] may be unavailable (e.g., Crashed, Network disconnected). /// - The [`Region`] may be unavailable (e.g., Crashed, Network disconnected).
/// - The [`Region`] was planned to migrate to another [`Peer`]. /// - The [`Region`] was planned to migrate to another [`Peer`].
#[serde(alias = "Downgraded")]
Downgrading, Downgrading,
} }
@@ -516,6 +517,73 @@ mod tests {
assert_eq!(decoded, region_route); assert_eq!(decoded, region_route);
} }
#[test]
fn test_region_route_compatibility() {
let region_route = RegionRoute {
region: Region {
id: 2.into(),
name: "r2".to_string(),
partition: None,
attrs: BTreeMap::new(),
},
leader_peer: Some(Peer::new(1, "a1")),
follower_peers: vec![Peer::new(2, "a2"), Peer::new(3, "a3")],
leader_state: Some(LeaderState::Downgrading),
leader_down_since: None,
};
let input = r#"{"region":{"id":2,"name":"r2","partition":null,"attrs":{}},"leader_peer":{"id":1,"addr":"a1"},"follower_peers":[{"id":2,"addr":"a2"},{"id":3,"addr":"a3"}],"leader_state":"Downgraded","leader_down_since":null}"#;
let decoded: RegionRoute = serde_json::from_str(input).unwrap();
assert_eq!(decoded, region_route);
let region_route = RegionRoute {
region: Region {
id: 2.into(),
name: "r2".to_string(),
partition: None,
attrs: BTreeMap::new(),
},
leader_peer: Some(Peer::new(1, "a1")),
follower_peers: vec![Peer::new(2, "a2"), Peer::new(3, "a3")],
leader_state: Some(LeaderState::Downgrading),
leader_down_since: None,
};
let input = r#"{"region":{"id":2,"name":"r2","partition":null,"attrs":{}},"leader_peer":{"id":1,"addr":"a1"},"follower_peers":[{"id":2,"addr":"a2"},{"id":3,"addr":"a3"}],"leader_status":"Downgraded","leader_down_since":null}"#;
let decoded: RegionRoute = serde_json::from_str(input).unwrap();
assert_eq!(decoded, region_route);
let region_route = RegionRoute {
region: Region {
id: 2.into(),
name: "r2".to_string(),
partition: None,
attrs: BTreeMap::new(),
},
leader_peer: Some(Peer::new(1, "a1")),
follower_peers: vec![Peer::new(2, "a2"), Peer::new(3, "a3")],
leader_state: Some(LeaderState::Downgrading),
leader_down_since: None,
};
let input = r#"{"region":{"id":2,"name":"r2","partition":null,"attrs":{}},"leader_peer":{"id":1,"addr":"a1"},"follower_peers":[{"id":2,"addr":"a2"},{"id":3,"addr":"a3"}],"leader_state":"Downgrading","leader_down_since":null}"#;
let decoded: RegionRoute = serde_json::from_str(input).unwrap();
assert_eq!(decoded, region_route);
let region_route = RegionRoute {
region: Region {
id: 2.into(),
name: "r2".to_string(),
partition: None,
attrs: BTreeMap::new(),
},
leader_peer: Some(Peer::new(1, "a1")),
follower_peers: vec![Peer::new(2, "a2"), Peer::new(3, "a3")],
leader_state: Some(LeaderState::Downgrading),
leader_down_since: None,
};
let input = r#"{"region":{"id":2,"name":"r2","partition":null,"attrs":{}},"leader_peer":{"id":1,"addr":"a1"},"follower_peers":[{"id":2,"addr":"a2"},{"id":3,"addr":"a3"}],"leader_status":"Downgrading","leader_down_since":null}"#;
let decoded: RegionRoute = serde_json::from_str(input).unwrap();
assert_eq!(decoded, region_route);
}
#[test] #[test]
fn test_de_serialize_partition() { fn test_de_serialize_partition() {
let p = Partition { let p = Partition {


@@ -0,0 +1,22 @@
[package]
name = "common-pprof"
version.workspace = true
edition.workspace = true
license.workspace = true
[dependencies]
common-error.workspace = true
common-macro.workspace = true
prost.workspace = true
snafu.workspace = true
tokio.workspace = true
[target.'cfg(unix)'.dependencies]
pprof = { version = "0.13", features = [
"flamegraph",
"prost-codec",
"protobuf",
] }
[lints]
workspace = true


@@ -0,0 +1,99 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
#[cfg(unix)]
pub mod nix;
pub mod error {
use std::any::Any;
use common_error::ext::ErrorExt;
use common_error::status_code::StatusCode;
use common_macro::stack_trace_debug;
use snafu::{Location, Snafu};
#[derive(Snafu)]
#[stack_trace_debug]
#[snafu(visibility(pub(crate)))]
pub enum Error {
#[cfg(unix)]
#[snafu(display("Pprof error"))]
Pprof {
#[snafu(source)]
error: pprof::Error,
#[snafu(implicit)]
location: Location,
},
#[snafu(display("Pprof is unsupported on this platform"))]
Unsupported {
#[snafu(implicit)]
location: Location,
},
}
pub type Result<T> = std::result::Result<T, Error>;
impl ErrorExt for Error {
fn status_code(&self) -> StatusCode {
match self {
#[cfg(unix)]
Error::Pprof { .. } => StatusCode::Unexpected,
Error::Unsupported { .. } => StatusCode::Unsupported,
}
}
fn as_any(&self) -> &dyn Any {
self
}
}
}
#[cfg(not(unix))]
pub mod dummy {
use std::time::Duration;
use crate::error::{Result, UnsupportedSnafu};
/// Dummy CPU profiler utility.
#[derive(Debug)]
pub struct Profiling {}
impl Profiling {
/// Creates a new profiler.
pub fn new(_duration: Duration, _frequency: i32) -> Profiling {
Profiling {}
}
/// Profiles and returns a generated text.
pub async fn dump_text(&self) -> Result<String> {
UnsupportedSnafu {}.fail()
}
/// Profiles and returns a generated flamegraph.
pub async fn dump_flamegraph(&self) -> Result<Vec<u8>> {
UnsupportedSnafu {}.fail()
}
/// Profiles and returns a generated proto.
pub async fn dump_proto(&self) -> Result<Vec<u8>> {
UnsupportedSnafu {}.fail()
}
}
}
#[cfg(not(unix))]
pub use dummy::Profiling;
#[cfg(unix)]
pub use nix::Profiling;


@@ -0,0 +1,78 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::time::Duration;
use pprof::protos::Message;
use snafu::ResultExt;
use crate::error::{PprofSnafu, Result};
/// CPU profiler utility.
// Inspired by https://github.com/datafuselabs/databend/blob/67f445e83cd4eceda98f6c1c114858929d564029/src/common/base/src/base/profiling.rs
#[derive(Debug)]
pub struct Profiling {
/// Sample duration.
duration: Duration,
/// Sample frequency.
frequency: i32,
}
impl Profiling {
/// Creates a new profiler.
pub fn new(duration: Duration, frequency: i32) -> Profiling {
Profiling {
duration,
frequency,
}
}
/// Profiles and returns a generated pprof report.
pub async fn report(&self) -> Result<pprof::Report> {
let guard = pprof::ProfilerGuardBuilder::default()
.frequency(self.frequency)
.blocklist(&["libc", "libgcc", "pthread", "vdso"])
.build()
.context(PprofSnafu)?;
tokio::time::sleep(self.duration).await;
guard.report().build().context(PprofSnafu)
}
/// Profiles and returns a generated text.
pub async fn dump_text(&self) -> Result<String> {
let report = self.report().await?;
let text = format!("{report:?}");
Ok(text)
}
/// Profiles and returns a generated flamegraph.
pub async fn dump_flamegraph(&self) -> Result<Vec<u8>> {
let mut body: Vec<u8> = Vec::new();
let report = self.report().await?;
report.flamegraph(&mut body).context(PprofSnafu)?;
Ok(body)
}
/// Profiles and returns a generated proto.
pub async fn dump_proto(&self) -> Result<Vec<u8>> {
let report = self.report().await?;
// Generate Google's pprof-format report.
let profile = report.pprof().context(PprofSnafu)?;
let body = profile.encode_to_vec();
Ok(body)
}
}


@@ -297,7 +297,7 @@ struct ParsedKey<'a> {
key_type: KeyType, key_type: KeyType,
} }
impl<'a> fmt::Display for ParsedKey<'a> { impl fmt::Display for ParsedKey<'_> {
fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result { fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
write!( write!(
f, f,


@@ -17,6 +17,7 @@ use std::slice;
use std::sync::Arc; use std::sync::Arc;
use datafusion::arrow::util::pretty::pretty_format_batches; use datafusion::arrow::util::pretty::pretty_format_batches;
use datatypes::prelude::DataType;
use datatypes::schema::SchemaRef; use datatypes::schema::SchemaRef;
use datatypes::value::Value; use datatypes::value::Value;
use datatypes::vectors::{Helper, VectorRef}; use datatypes::vectors::{Helper, VectorRef};
@@ -58,13 +59,18 @@ impl RecordBatch {
} }
/// Create an empty [`RecordBatch`] from `schema`. /// Create an empty [`RecordBatch`] from `schema`.
pub fn new_empty(schema: SchemaRef) -> Result<RecordBatch> { pub fn new_empty(schema: SchemaRef) -> RecordBatch {
let df_record_batch = DfRecordBatch::new_empty(schema.arrow_schema().clone()); let df_record_batch = DfRecordBatch::new_empty(schema.arrow_schema().clone());
Ok(RecordBatch { let columns = schema
.column_schemas()
.iter()
.map(|col| col.data_type.create_mutable_vector(0).to_vector())
.collect();
RecordBatch {
schema, schema,
columns: vec![], columns,
df_record_batch, df_record_batch,
}) }
} }
pub fn try_project(&self, indices: &[usize]) -> Result<Self> { pub fn try_project(&self, indices: &[usize]) -> Result<Self> {
@@ -220,7 +226,7 @@ pub struct RecordBatchRowIterator<'a> {
} }
impl<'a> RecordBatchRowIterator<'a> { impl<'a> RecordBatchRowIterator<'a> {
fn new(record_batch: &'a RecordBatch) -> RecordBatchRowIterator { fn new(record_batch: &'a RecordBatch) -> RecordBatchRowIterator<'a> {
RecordBatchRowIterator { RecordBatchRowIterator {
record_batch, record_batch,
rows: record_batch.df_record_batch.num_rows(), rows: record_batch.df_record_batch.num_rows(),
@@ -230,7 +236,7 @@ impl<'a> RecordBatchRowIterator<'a> {
} }
} }
impl<'a> Iterator for RecordBatchRowIterator<'a> { impl Iterator for RecordBatchRowIterator<'_> {
type Item = Vec<Value>; type Item = Vec<Value>;
fn next(&mut self) -> Option<Self::Item> { fn next(&mut self) -> Option<Self::Item> {


@@ -4,21 +4,36 @@ version.workspace = true
edition.workspace = true edition.workspace = true
license.workspace = true license.workspace = true
[lib]
path = "src/lib.rs"
[[bin]]
name = "common-runtime-bin"
path = "src/bin.rs"
[lints] [lints]
workspace = true workspace = true
[dependencies] [dependencies]
async-trait.workspace = true async-trait.workspace = true
clap.workspace = true
common-error.workspace = true common-error.workspace = true
common-macro.workspace = true common-macro.workspace = true
common-telemetry.workspace = true common-telemetry.workspace = true
futures.workspace = true
lazy_static.workspace = true lazy_static.workspace = true
num_cpus.workspace = true num_cpus.workspace = true
once_cell.workspace = true once_cell.workspace = true
parking_lot.workspace = true
paste.workspace = true paste.workspace = true
pin-project.workspace = true
prometheus.workspace = true prometheus.workspace = true
rand.workspace = true
ratelimit.workspace = true
serde.workspace = true serde.workspace = true
serde_json.workspace = true
snafu.workspace = true snafu.workspace = true
tempfile.workspace = true
tokio.workspace = true tokio.workspace = true
tokio-metrics = "0.3" tokio-metrics = "0.3"
tokio-metrics-collector = { git = "https://github.com/MichaelScofield/tokio-metrics-collector.git", rev = "89d692d5753d28564a7aac73c6ac5aba22243ba0" } tokio-metrics-collector = { git = "https://github.com/MichaelScofield/tokio-metrics-collector.git", rev = "89d692d5753d28564a7aac73c6ac5aba22243ba0" }


@@ -0,0 +1,60 @@
# Greptime Runtime
## Run performance tests for different priorities & workload types
```
# run from within this subcrate
cargo run --release -- --loop-cnt 500
```
## Related PRs & issues
- Preliminary support for CPU limitation
ISSUE: https://github.com/GreptimeTeam/greptimedb/issues/3685
PR: https://github.com/GreptimeTeam/greptimedb/pull/4782
## CPU resource constraints (ThrottleableRuntime)
To achieve CPU resource constraints, we adopt the concept of rate limiting. When creating a future, we first wrap it in another future that intercepts the poll operation at runtime. Using the ratelimit library, we can implement a mechanism that allows only a limited number of polls within a specific time frame for a batch of tasks under a given priority (the current token generation interval is set to 10ms).
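The shape of that wrapper can be illustrated with a minimal sketch (simplified from the actual `ThrottleFuture` in this change: the inner future is boxed to avoid `pin-project`, and the explicit state enum is dropped):

``` rust
use std::future::Future;
use std::pin::Pin;
use std::task::{Context, Poll};

use ratelimit::Ratelimiter;

/// One rate-limiter token is consumed per poll; when no token is available,
/// the wrapper parks on a timer instead of polling the inner future.
struct Throttled<F> {
    inner: Pin<Box<F>>,
    limiter: Ratelimiter,
    backoff: Option<Pin<Box<tokio::time::Sleep>>>,
}

impl<F: Future> Future for Throttled<F> {
    type Output = F::Output;

    fn poll(self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
        // All fields are `Unpin`, so it is safe to take a mutable reference.
        let this = self.get_mut();

        // Finish any pending backoff before trying to acquire a new token.
        if let Some(sleep) = this.backoff.as_mut() {
            if sleep.as_mut().poll(cx).is_pending() {
                return Poll::Pending;
            }
            this.backoff = None;
        }

        // Ask the rate limiter for a token; back off for the suggested duration on failure.
        if let Err(wait) = this.limiter.try_wait() {
            let mut sleep = Box::pin(tokio::time::sleep(wait));
            if sleep.as_mut().poll(cx).is_pending() {
                this.backoff = Some(sleep);
                return Poll::Pending;
            }
        }

        this.inner.as_mut().poll(cx)
    }
}
```

In the actual implementation the rate limiter is shared across all futures spawned on the runtime (behind an `Arc`), so an entire priority class is throttled together rather than each future individually.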
The runtime type used by default can be switched via
``` rust
pub type Runtime = DefaultRuntime;
```
in `runtime.rs`.
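As a usage sketch (mirroring the builder calls used by the benchmark binary in this change; the runtime and thread names below are made up), a runtime with a given priority is built like this:

``` rust
use common_runtime::runtime::{BuilderBuild, Priority, RuntimeTrait};
use common_runtime::{Builder, Runtime};

fn main() {
    // A lower-priority runtime for background work. With the `ThrottleableRuntime`
    // alias this limits how often spawned futures are polled; with `DefaultRuntime`
    // the priority is stored but not enforced.
    let runtime: Runtime = Builder::default()
        .runtime_name("background")
        .thread_name("background-worker")
        .worker_threads(4)
        .priority(Priority::Low)
        .build()
        .expect("failed to create runtime");

    let handle = runtime.spawn(async { 40 + 2 });
    let answer = runtime.block_on(handle).expect("task panicked");
    assert_eq!(answer, 42);
}
```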
We tested four types of workloads with five priorities, set up as follows:
``` rust
impl Priority {
fn ratelimiter_count(&self) -> Result<Option<Ratelimiter>> {
let max = 8000;
let gen_per_10ms = match self {
Priority::VeryLow => Some(2000),
Priority::Low => Some(4000),
Priority::Middle => Some(6000),
Priority::High => Some(8000),
Priority::VeryHigh => None,
};
if let Some(gen_per_10ms) = gen_per_10ms {
Ratelimiter::builder(gen_per_10ms, Duration::from_millis(10)) // generate poll count per 10ms
.max_tokens(max) // reserved token for batch request
.build()
.context(BuildRuntimeRateLimiterSnafu)
.map(Some)
} else {
Ok(None)
}
}
}
```
These are the preliminary experimental results so far:
![](resources/rdme-exp.png)
## TODO
- Introduce a PID controller to achieve more accurate limiting.

(Binary image added, 226 KiB — presumably `resources/rdme-exp.png`, the experiment result referenced in the README above; not shown.)


@@ -0,0 +1,205 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use clap::Parser;
#[derive(Debug, Default, Parser)]
pub struct Command {
#[clap(long)]
loop_cnt: usize,
}
fn main() {
common_telemetry::init_default_ut_logging();
let cmd = Command::parse();
test_diff_priority_cpu::test_diff_workload_priority(cmd.loop_cnt);
}
mod test_diff_priority_cpu {
use std::path::PathBuf;
use common_runtime::runtime::{BuilderBuild, Priority, RuntimeTrait};
use common_runtime::{Builder, Runtime};
use common_telemetry::debug;
use tempfile::TempDir;
fn compute_pi_str(precision: usize) -> String {
let mut pi = 0.0;
let mut sign = 1.0;
for i in 0..precision {
pi += sign / (2 * i + 1) as f64;
sign *= -1.0;
}
pi *= 4.0;
format!("{:.prec$}", pi, prec = precision)
}
macro_rules! def_workload_enum {
($($variant:ident),+) => {
#[derive(Debug)]
enum WorkloadType {
$($variant),+
}
/// array of workloads for iteration
const WORKLOADS: &'static [WorkloadType] = &[
$( WorkloadType::$variant ),+
];
};
}
def_workload_enum!(
ComputeHeavily,
ComputeHeavily2,
WriteFile,
SpawnBlockingWriteFile
);
async fn workload_compute_heavily() {
let prefix = 10;
for _ in 0..3000 {
let _ = compute_pi_str(prefix);
tokio::task::yield_now().await;
}
}
async fn workload_compute_heavily2() {
let prefix = 30;
for _ in 0..2000 {
let _ = compute_pi_str(prefix);
tokio::task::yield_now().await;
}
}
async fn workload_write_file(_idx: u64, tempdir: PathBuf) {
use tokio::io::AsyncWriteExt;
let prefix = 50;
let mut file = tokio::fs::OpenOptions::new()
.write(true)
.append(true)
.create(true)
.open(tempdir.join(format!("pi_{}", prefix)))
.await
.unwrap();
for i in 0..200 {
let pi = compute_pi_str(prefix);
if i % 2 == 0 {
file.write_all(pi.as_bytes()).await.unwrap();
}
}
}
async fn workload_spawn_blocking_write_file(tempdir: PathBuf) {
use std::io::Write;
let prefix = 100;
let mut file = Some(
std::fs::OpenOptions::new()
.append(true)
.create(true)
.open(tempdir.join(format!("pi_{}", prefix)))
.unwrap(),
);
for i in 0..100 {
let pi = compute_pi_str(prefix);
if i % 2 == 0 {
let mut file1 = file.take().unwrap();
file = Some(
tokio::task::spawn_blocking(move || {
file1.write_all(pi.as_bytes()).unwrap();
file1
})
.await
.unwrap(),
);
}
}
}
pub fn test_diff_workload_priority(loop_cnt: usize) {
let tempdir = tempfile::tempdir().unwrap();
let priorities = [
Priority::VeryLow,
Priority::Low,
Priority::Middle,
Priority::High,
Priority::VeryHigh,
];
for wl in WORKLOADS {
for p in priorities.iter() {
let runtime: Runtime = Builder::default()
.runtime_name("test")
.thread_name("test")
.worker_threads(8)
.priority(*p)
.build()
.expect("Fail to create runtime");
let runtime2 = runtime.clone();
runtime.block_on(test_spec_priority_and_workload(
*p, runtime2, wl, &tempdir, loop_cnt,
));
}
}
}
async fn test_spec_priority_and_workload(
priority: Priority,
runtime: Runtime,
workload_id: &WorkloadType,
tempdir: &TempDir,
loop_cnt: usize,
) {
tokio::time::sleep(tokio::time::Duration::from_millis(1000)).await;
debug!(
"testing cpu usage for priority {:?} workload_id {:?}",
priority, workload_id,
);
// start monitor thread
let mut tasks = vec![];
let start = std::time::Instant::now();
for i in 0..loop_cnt {
// persist cpu usage in json: {priority}.{workload_id}
match *workload_id {
WorkloadType::ComputeHeavily => {
tasks.push(runtime.spawn(workload_compute_heavily()));
}
WorkloadType::ComputeHeavily2 => {
tasks.push(runtime.spawn(workload_compute_heavily2()));
}
WorkloadType::SpawnBlockingWriteFile => {
tasks.push(runtime.spawn(workload_spawn_blocking_write_file(
tempdir.path().to_path_buf(),
)));
}
WorkloadType::WriteFile => {
tasks.push(
runtime.spawn(workload_write_file(i as u64, tempdir.path().to_path_buf())),
);
}
}
}
for task in tasks {
task.await.unwrap();
}
let elapsed = start.elapsed();
debug!(
"test cpu usage for priority {:?} workload_id {:?} elapsed {}ms",
priority,
workload_id,
elapsed.as_millis()
);
}
}


@@ -33,6 +33,14 @@ pub enum Error {
location: Location, location: Location,
}, },
#[snafu(display("Failed to build runtime rate limiter"))]
BuildRuntimeRateLimiter {
#[snafu(implicit)]
location: Location,
#[snafu(source)]
error: ratelimit::Error,
},
#[snafu(display("Repeated task {} is already started", name))] #[snafu(display("Repeated task {} is already started", name))]
IllegalState { IllegalState {
name: String, name: String,


@@ -21,6 +21,7 @@ use once_cell::sync::Lazy;
use paste::paste; use paste::paste;
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
use crate::runtime::{BuilderBuild, RuntimeTrait};
use crate::{Builder, JoinHandle, Runtime}; use crate::{Builder, JoinHandle, Runtime};
const GLOBAL_WORKERS: usize = 8; const GLOBAL_WORKERS: usize = 8;


@@ -17,6 +17,8 @@ pub mod global;
mod metrics; mod metrics;
mod repeated_task; mod repeated_task;
pub mod runtime; pub mod runtime;
pub mod runtime_default;
pub mod runtime_throttleable;
pub use global::{ pub use global::{
block_on_compact, block_on_global, compact_runtime, create_runtime, global_runtime, block_on_compact, block_on_global, compact_runtime, create_runtime, global_runtime,


@@ -23,6 +23,7 @@ use tokio::task::JoinHandle;
use tokio_util::sync::CancellationToken; use tokio_util::sync::CancellationToken;
use crate::error::{IllegalStateSnafu, Result, WaitGcTaskStopSnafu}; use crate::error::{IllegalStateSnafu, Result, WaitGcTaskStopSnafu};
use crate::runtime::RuntimeTrait;
use crate::Runtime; use crate::Runtime;
/// Task to execute repeatedly. /// Task to execute repeatedly.


@@ -19,24 +19,20 @@ use std::thread;
use std::time::Duration; use std::time::Duration;
use snafu::ResultExt; use snafu::ResultExt;
use tokio::runtime::{Builder as RuntimeBuilder, Handle}; use tokio::runtime::Builder as RuntimeBuilder;
use tokio::sync::oneshot; use tokio::sync::oneshot;
pub use tokio::task::{JoinError, JoinHandle}; pub use tokio::task::{JoinError, JoinHandle};
use crate::error::*; use crate::error::*;
use crate::metrics::*; use crate::metrics::*;
use crate::runtime_default::DefaultRuntime;
use crate::runtime_throttleable::ThrottleableRuntime;
// configurations
pub type Runtime = DefaultRuntime;
static RUNTIME_ID: AtomicUsize = AtomicUsize::new(0); static RUNTIME_ID: AtomicUsize = AtomicUsize::new(0);
/// A runtime to run future tasks
#[derive(Clone, Debug)]
pub struct Runtime {
name: String,
handle: Handle,
// Used to receive a drop signal when dropper is dropped, inspired by databend
_dropper: Arc<Dropper>,
}
/// Dropping the dropper will cause runtime to shutdown. /// Dropping the dropper will cause runtime to shutdown.
#[derive(Debug)] #[derive(Debug)]
pub struct Dropper { pub struct Dropper {
@@ -50,45 +46,42 @@ impl Drop for Dropper {
} }
} }
impl Runtime { pub trait RuntimeTrait {
pub fn builder() -> Builder { /// Get a runtime builder
fn builder() -> Builder {
Builder::default() Builder::default()
} }
/// Spawn a future and execute it in this thread pool /// Spawn a future and execute it in this thread pool
/// ///
/// Similar to tokio::runtime::Runtime::spawn() /// Similar to tokio::runtime::Runtime::spawn()
pub fn spawn<F>(&self, future: F) -> JoinHandle<F::Output> fn spawn<F>(&self, future: F) -> JoinHandle<F::Output>
where where
F: Future + Send + 'static, F: Future + Send + 'static,
F::Output: Send + 'static, F::Output: Send + 'static;
{
self.handle.spawn(future)
}
/// Run the provided function on an executor dedicated to blocking /// Run the provided function on an executor dedicated to blocking
/// operations. /// operations.
pub fn spawn_blocking<F, R>(&self, func: F) -> JoinHandle<R> fn spawn_blocking<F, R>(&self, func: F) -> JoinHandle<R>
where where
F: FnOnce() -> R + Send + 'static, F: FnOnce() -> R + Send + 'static,
R: Send + 'static, R: Send + 'static;
{
self.handle.spawn_blocking(func)
}
/// Run a future to complete, this is the runtime's entry point /// Run a future to complete, this is the runtime's entry point
pub fn block_on<F: Future>(&self, future: F) -> F::Output { fn block_on<F: Future>(&self, future: F) -> F::Output;
self.handle.block_on(future)
}
pub fn name(&self) -> &str { /// Get the name of the runtime
&self.name fn name(&self) -> &str;
} }
pub trait BuilderBuild<R: RuntimeTrait> {
fn build(&mut self) -> Result<R>;
} }
pub struct Builder { pub struct Builder {
runtime_name: String, runtime_name: String,
thread_name: String, thread_name: String,
priority: Priority,
builder: RuntimeBuilder, builder: RuntimeBuilder,
} }
@@ -98,11 +91,17 @@ impl Default for Builder {
runtime_name: format!("runtime-{}", RUNTIME_ID.fetch_add(1, Ordering::Relaxed)), runtime_name: format!("runtime-{}", RUNTIME_ID.fetch_add(1, Ordering::Relaxed)),
thread_name: "default-worker".to_string(), thread_name: "default-worker".to_string(),
builder: RuntimeBuilder::new_multi_thread(), builder: RuntimeBuilder::new_multi_thread(),
priority: Priority::VeryHigh,
} }
} }
} }
impl Builder { impl Builder {
pub fn priority(&mut self, priority: Priority) -> &mut Self {
self.priority = priority;
self
}
/// Sets the number of worker threads the Runtime will use. /// Sets the number of worker threads the Runtime will use.
/// ///
/// This can be any number above 0. The default value is the number of cores available to the system. /// This can be any number above 0. The default value is the number of cores available to the system.
@@ -139,8 +138,10 @@ impl Builder {
self.thread_name = val.into(); self.thread_name = val.into();
self self
} }
}
pub fn build(&mut self) -> Result<Runtime> { impl BuilderBuild<DefaultRuntime> for Builder {
fn build(&mut self) -> Result<DefaultRuntime> {
let runtime = self let runtime = self
.builder .builder
.enable_all() .enable_all()
@@ -163,18 +164,53 @@ impl Builder {
#[cfg(tokio_unstable)] #[cfg(tokio_unstable)]
register_collector(name.clone(), &handle); register_collector(name.clone(), &handle);
Ok(Runtime { Ok(DefaultRuntime::new(
name, &name,
handle, handle,
_dropper: Arc::new(Dropper { Arc::new(Dropper {
close: Some(send_stop), close: Some(send_stop),
}), }),
}) ))
}
}
impl BuilderBuild<ThrottleableRuntime> for Builder {
fn build(&mut self) -> Result<ThrottleableRuntime> {
let runtime = self
.builder
.enable_all()
.thread_name(self.thread_name.clone())
.on_thread_start(on_thread_start(self.thread_name.clone()))
.on_thread_stop(on_thread_stop(self.thread_name.clone()))
.on_thread_park(on_thread_park(self.thread_name.clone()))
.on_thread_unpark(on_thread_unpark(self.thread_name.clone()))
.build()
.context(BuildRuntimeSnafu)?;
let name = self.runtime_name.clone();
let handle = runtime.handle().clone();
let (send_stop, recv_stop) = oneshot::channel();
// Block the runtime to shutdown.
let _ = thread::Builder::new()
.name(format!("{}-blocker", self.thread_name))
.spawn(move || runtime.block_on(recv_stop));
#[cfg(tokio_unstable)]
register_collector(name.clone(), &handle);
ThrottleableRuntime::new(
&name,
self.priority,
handle,
Arc::new(Dropper {
close: Some(send_stop),
}),
)
} }
} }
#[cfg(tokio_unstable)] #[cfg(tokio_unstable)]
pub fn register_collector(name: String, handle: &Handle) { pub fn register_collector(name: String, handle: &tokio::runtime::Handle) {
let name = name.replace("-", "_"); let name = name.replace("-", "_");
let monitor = tokio_metrics::RuntimeMonitor::new(handle); let monitor = tokio_metrics::RuntimeMonitor::new(handle);
let collector = tokio_metrics_collector::RuntimeCollector::new(monitor, name); let collector = tokio_metrics_collector::RuntimeCollector::new(monitor, name);
@@ -213,8 +249,18 @@ fn on_thread_unpark(thread_name: String) -> impl Fn() + 'static {
} }
} }
#[derive(Clone, Copy, Debug, Hash, PartialEq, Eq)]
pub enum Priority {
VeryLow = 0,
Low = 1,
Middle = 2,
High = 3,
VeryHigh = 4,
}
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use std::sync::Arc; use std::sync::Arc;
use std::thread; use std::thread;
use std::time::Duration; use std::time::Duration;
@@ -235,12 +281,12 @@ mod tests {
#[test] #[test]
fn test_metric() { fn test_metric() {
let runtime = Builder::default() let runtime: Runtime = Builder::default()
.worker_threads(5) .worker_threads(5)
.thread_name("test_runtime_metric") .thread_name("test_runtime_metric")
.build() .build()
.unwrap(); .unwrap();
// wait threads created // wait threads create
thread::sleep(Duration::from_millis(50)); thread::sleep(Duration::from_millis(50));
let _handle = runtime.spawn(async { let _handle = runtime.spawn(async {


@@ -0,0 +1,77 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::future::Future;
use std::sync::Arc;
use tokio::runtime::Handle;
pub use tokio::task::JoinHandle;
use crate::runtime::{Dropper, RuntimeTrait};
use crate::Builder;
/// A runtime to run future tasks
#[derive(Clone, Debug)]
pub struct DefaultRuntime {
name: String,
handle: Handle,
// Used to receive a drop signal when dropper is dropped, inspired by databend
_dropper: Arc<Dropper>,
}
impl DefaultRuntime {
pub(crate) fn new(name: &str, handle: Handle, dropper: Arc<Dropper>) -> Self {
Self {
name: name.to_string(),
handle,
_dropper: dropper,
}
}
}
impl RuntimeTrait for DefaultRuntime {
fn builder() -> Builder {
Builder::default()
}
/// Spawn a future and execute it in this thread pool
///
/// Similar to tokio::runtime::Runtime::spawn()
fn spawn<F>(&self, future: F) -> JoinHandle<F::Output>
where
F: Future + Send + 'static,
F::Output: Send + 'static,
{
self.handle.spawn(future)
}
/// Run the provided function on an executor dedicated to blocking
/// operations.
fn spawn_blocking<F, R>(&self, func: F) -> JoinHandle<R>
where
F: FnOnce() -> R + Send + 'static,
R: Send + 'static,
{
self.handle.spawn_blocking(func)
}
/// Run a future to complete, this is the runtime's entry point
fn block_on<F: Future>(&self, future: F) -> F::Output {
self.handle.block_on(future)
}
fn name(&self) -> &str {
&self.name
}
}


@@ -0,0 +1,285 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
use std::fmt::Debug;
use std::future::Future;
use std::pin::Pin;
use std::sync::Arc;
use std::task::{Context, Poll};
use std::time::Duration;
use futures::FutureExt;
use ratelimit::Ratelimiter;
use snafu::ResultExt;
use tokio::runtime::Handle;
pub use tokio::task::JoinHandle;
use tokio::time::Sleep;
use crate::error::{BuildRuntimeRateLimiterSnafu, Result};
use crate::runtime::{Dropper, Priority, RuntimeTrait};
use crate::Builder;
struct RuntimeRateLimiter {
pub ratelimiter: Option<Ratelimiter>,
}
impl Debug for RuntimeRateLimiter {
fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {
f.debug_struct("RuntimeThrottleShareWithFuture")
.field(
"ratelimiter_max_tokens",
&self.ratelimiter.as_ref().map(|v| v.max_tokens()),
)
.field(
"ratelimiter_refill_amount",
&self.ratelimiter.as_ref().map(|v| v.refill_amount()),
)
.finish()
}
}
/// A runtime to run future tasks
#[derive(Clone, Debug)]
pub struct ThrottleableRuntime {
name: String,
handle: Handle,
shared_with_future: Arc<RuntimeRateLimiter>,
// Used to receive a drop signal when dropper is dropped, inspired by databend
_dropper: Arc<Dropper>,
}
impl ThrottleableRuntime {
pub(crate) fn new(
name: &str,
priority: Priority,
handle: Handle,
dropper: Arc<Dropper>,
) -> Result<Self> {
Ok(Self {
name: name.to_string(),
handle,
shared_with_future: Arc::new(RuntimeRateLimiter {
ratelimiter: priority.ratelimiter_count()?,
}),
_dropper: dropper,
})
}
}
impl RuntimeTrait for ThrottleableRuntime {
fn builder() -> Builder {
Builder::default()
}
/// Spawn a future and execute it in this thread pool
///
/// Similar to tokio::runtime::Runtime::spawn()
fn spawn<F>(&self, future: F) -> JoinHandle<F::Output>
where
F: Future + Send + 'static,
F::Output: Send + 'static,
{
self.handle
.spawn(ThrottleFuture::new(self.shared_with_future.clone(), future))
}
/// Run the provided function on an executor dedicated to blocking
/// operations.
fn spawn_blocking<F, R>(&self, func: F) -> JoinHandle<R>
where
F: FnOnce() -> R + Send + 'static,
R: Send + 'static,
{
self.handle.spawn_blocking(func)
}
/// Run a future to complete, this is the runtime's entry point
fn block_on<F: Future>(&self, future: F) -> F::Output {
self.handle.block_on(future)
}
fn name(&self) -> &str {
&self.name
}
}
enum State {
Pollable,
Throttled(Pin<Box<Sleep>>),
}
impl State {
fn unwrap_backoff(&mut self) -> &mut Pin<Box<Sleep>> {
match self {
State::Throttled(sleep) => sleep,
_ => panic!("unwrap_backoff failed"),
}
}
}
#[pin_project::pin_project]
pub struct ThrottleFuture<F: Future + Send + 'static> {
#[pin]
future: F,
/// RateLimiter of this future
handle: Arc<RuntimeRateLimiter>,
state: State,
}
impl<F> ThrottleFuture<F>
where
F: Future + Send + 'static,
F::Output: Send + 'static,
{
fn new(handle: Arc<RuntimeRateLimiter>, future: F) -> Self {
Self {
future,
handle,
state: State::Pollable,
}
}
}
impl<F> Future for ThrottleFuture<F>
where
F: Future + Send + 'static,
F::Output: Send + 'static,
{
type Output = F::Output;
fn poll(self: std::pin::Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Self::Output> {
let this = self.project();
match this.state {
State::Pollable => {}
State::Throttled(ref mut sleep) => match sleep.poll_unpin(cx) {
Poll::Ready(_) => {
*this.state = State::Pollable;
}
Poll::Pending => return Poll::Pending,
},
};
if let Some(ratelimiter) = &this.handle.ratelimiter {
if let Err(wait) = ratelimiter.try_wait() {
*this.state = State::Throttled(Box::pin(tokio::time::sleep(wait)));
match this.state.unwrap_backoff().poll_unpin(cx) {
Poll::Ready(_) => {
*this.state = State::Pollable;
}
Poll::Pending => {
return Poll::Pending;
}
}
}
}
let poll_res = this.future.poll(cx);
match poll_res {
Poll::Ready(r) => Poll::Ready(r),
Poll::Pending => Poll::Pending,
}
}
}
impl Priority {
fn ratelimiter_count(&self) -> Result<Option<Ratelimiter>> {
let max = 8000;
let gen_per_10ms = match self {
Priority::VeryLow => Some(2000),
Priority::Low => Some(4000),
Priority::Middle => Some(6000),
Priority::High => Some(8000),
Priority::VeryHigh => None,
};
if let Some(gen_per_10ms) = gen_per_10ms {
Ratelimiter::builder(gen_per_10ms, Duration::from_millis(10)) // generate poll count per 10ms
.max_tokens(max) // reserved token for batch request
.build()
.context(BuildRuntimeRateLimiterSnafu)
.map(Some)
} else {
Ok(None)
}
}
}
#[cfg(test)]
mod tests {
use tokio::fs::File;
use tokio::io::AsyncWriteExt;
use tokio::time::Duration;
use super::*;
use crate::runtime::BuilderBuild;
#[tokio::test]
async fn test_throttleable_runtime_spawn_simple() {
for p in [
Priority::VeryLow,
Priority::Low,
Priority::Middle,
Priority::High,
Priority::VeryHigh,
] {
let runtime: ThrottleableRuntime = Builder::default()
.runtime_name("test")
.thread_name("test")
.worker_threads(8)
.priority(p)
.build()
.expect("Fail to create runtime");
// Spawn a simple future that returns 42
let handle = runtime.spawn(async {
tokio::time::sleep(Duration::from_millis(10)).await;
42
});
let result = handle.await.expect("Task panicked");
assert_eq!(result, 42);
}
}
#[tokio::test]
async fn test_throttleable_runtime_spawn_complex() {
let tempdir = tempfile::tempdir().unwrap();
for p in [
Priority::VeryLow,
Priority::Low,
Priority::Middle,
Priority::High,
Priority::VeryHigh,
] {
let runtime: ThrottleableRuntime = Builder::default()
.runtime_name("test")
.thread_name("test")
.worker_threads(8)
.priority(p)
.build()
.expect("Fail to create runtime");
let tempdirpath = tempdir.path().to_path_buf();
let handle = runtime.spawn(async move {
let mut file = File::create(tempdirpath.join("test.txt")).await.unwrap();
file.write_all(b"Hello, world!").await.unwrap();
42
});
let result = handle.await.expect("Task panicked");
assert_eq!(result, 42);
}
}
}


@@ -26,13 +26,13 @@ opentelemetry = { version = "0.21.0", default-features = false, features = [
opentelemetry-otlp = { version = "0.14.0", features = ["tokio"] }
opentelemetry-semantic-conventions = "0.13.0"
opentelemetry_sdk = { version = "0.21.0", features = ["rt-tokio"] }
parking_lot = { version = "0.12" }
parking_lot.workspace = true
prometheus.workspace = true
serde.workspace = true
serde_json.workspace = true
tokio.workspace = true
tracing = "0.1"
tracing-appender = "0.2"
tracing-appender.workspace = true
tracing-log = "0.1"
tracing-opentelemetry = "0.22.0"
tracing-subscriber = { version = "0.3", features = ["env-filter", "json", "fmt"] }
tracing-subscriber.workspace = true


@@ -14,13 +14,13 @@
use std::fmt::{Display, Formatter, Write};
use chrono::{Datelike, Days, LocalResult, Months, NaiveDate, NaiveTime, TimeZone};
use chrono::{Datelike, Days, LocalResult, Months, NaiveDate, NaiveTime, TimeDelta, TimeZone};
use serde::{Deserialize, Serialize};
use serde_json::Value;
use snafu::ResultExt;
use crate::error::{InvalidDateStrSnafu, ParseDateStrSnafu, Result};
use crate::interval::Interval;
use crate::interval::{IntervalDayTime, IntervalMonthDayNano, IntervalYearMonth};
use crate::timezone::get_timezone;
use crate::util::datetime_to_utc;
use crate::Timezone;
@@ -134,29 +134,64 @@ impl Date {
(self.0 as i64) * 24 * 3600
}
/// Adds given Interval to the current date.
/// Returns None if the resulting date would be out of range.
pub fn add_interval(&self, interval: Interval) -> Option<Date> {
let naive_date = self.to_chrono_date()?;
let (months, days, _) = interval.to_month_day_nano();
naive_date
.checked_add_months(Months::new(months as u32))?
.checked_add_days(Days::new(days as u64))
.map(Into::into)
}
/// Subtracts given Interval to the current date.
/// Returns None if the resulting date would be out of range.
pub fn sub_interval(&self, interval: Interval) -> Option<Date> {
let naive_date = self.to_chrono_date()?;
let (months, days, _) = interval.to_month_day_nano();
// FIXME(yingwen): remove add/sub intervals later
/// Adds given [IntervalYearMonth] to the current date.
pub fn add_year_month(&self, interval: IntervalYearMonth) -> Option<Date> {
let naive_date = self.to_chrono_date()?;
naive_date
.checked_add_months(Months::new(interval.months as u32))
.map(Into::into)
}
/// Adds given [IntervalDayTime] to the current date.
pub fn add_day_time(&self, interval: IntervalDayTime) -> Option<Date> {
let naive_date = self.to_chrono_date()?;
naive_date
.checked_add_days(Days::new(interval.days as u64))?
.checked_add_signed(TimeDelta::milliseconds(interval.milliseconds as i64))
.map(Into::into)
}
/// Adds given [IntervalMonthDayNano] to the current date.
pub fn add_month_day_nano(&self, interval: IntervalMonthDayNano) -> Option<Date> {
let naive_date = self.to_chrono_date()?;
.checked_sub_months(Months::new(months as u32))?
.checked_sub_days(Days::new(days as u64))
naive_date
.checked_add_months(Months::new(interval.months as u32))?
.checked_add_days(Days::new(interval.days as u64))?
.checked_add_signed(TimeDelta::nanoseconds(interval.nanoseconds))
.map(Into::into)
}
/// Subtracts given [IntervalYearMonth] to the current date.
pub fn sub_year_month(&self, interval: IntervalYearMonth) -> Option<Date> {
let naive_date = self.to_chrono_date()?;
naive_date
.checked_sub_months(Months::new(interval.months as u32))
.map(Into::into)
}
/// Subtracts given [IntervalDayTime] to the current date.
pub fn sub_day_time(&self, interval: IntervalDayTime) -> Option<Date> {
let naive_date = self.to_chrono_date()?;
naive_date
.checked_sub_days(Days::new(interval.days as u64))?
.checked_sub_signed(TimeDelta::milliseconds(interval.milliseconds as i64))
.map(Into::into)
}
/// Subtracts given [IntervalMonthDayNano] to the current date.
pub fn sub_month_day_nano(&self, interval: IntervalMonthDayNano) -> Option<Date> {
let naive_date = self.to_chrono_date()?;
naive_date
.checked_sub_months(Months::new(interval.months as u32))?
.checked_sub_days(Days::new(interval.days as u64))?
.checked_sub_signed(TimeDelta::nanoseconds(interval.nanoseconds))
.map(Into::into)
}
@@ -246,12 +281,12 @@ mod tests {
fn test_add_sub_interval() {
let date = Date::new(1000);
let interval = Interval::from_year_month(3);
let interval = IntervalYearMonth::new(3);
let new_date = date.add_interval(interval).unwrap();
let new_date = date.add_year_month(interval).unwrap();
assert_eq!(new_date.val(), 1091);
assert_eq!(date, new_date.sub_interval(interval).unwrap());
assert_eq!(date, new_date.sub_year_month(interval).unwrap());
}
#[test]

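For callers, the effect of this hunk is that date arithmetic now takes a concrete interval type instead of the removed catch-all Interval. Below is a small sketch based on the signatures and the updated test above; the common_time import path is an assumption taken from the crate's re-exports further down.

use common_time::{Date, IntervalDayTime, IntervalYearMonth};

fn main() {
    // Day 1000 since the Unix epoch, i.e. 1972-09-27.
    let date = Date::new(1000);

    // Adding three months lands 91 days later, matching test_add_sub_interval.
    let later = date.add_year_month(IntervalYearMonth::new(3)).unwrap();
    assert_eq!(later.val(), 1091);
    // The operations are symmetric.
    assert_eq!(later.sub_year_month(IntervalYearMonth::new(3)).unwrap(), date);

    // Day-time intervals carry days plus milliseconds; on a whole-day Date
    // only the day component can change the stored value.
    let next_day = date.add_day_time(IntervalDayTime::new(1, 0)).unwrap();
    assert_eq!(next_day.val(), 1001);
}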

@@ -13,16 +13,18 @@
// limitations under the License.
use std::fmt::{Display, Formatter, Write};
use std::time::Duration;
use chrono::{Days, LocalResult, Months, NaiveDateTime, TimeZone as ChronoTimeZone, Utc};
use chrono::{
Days, LocalResult, Months, NaiveDateTime, TimeDelta, TimeZone as ChronoTimeZone, Utc,
};
use serde::{Deserialize, Serialize};
use snafu::ResultExt;
use crate::error::{InvalidDateStrSnafu, Result};
use crate::interval::{IntervalDayTime, IntervalMonthDayNano, IntervalYearMonth};
use crate::timezone::{get_timezone, Timezone};
use crate::util::{datetime_to_utc, format_utc_datetime};
use crate::{Date, Interval};
use crate::Date;
const DATETIME_FORMAT: &str = "%F %H:%M:%S%.f";
const DATETIME_FORMAT_WITH_TZ: &str = "%F %H:%M:%S%.f%z";
@@ -160,32 +162,66 @@ impl DateTime {
None => Utc.from_utc_datetime(&v).naive_local(),
})
}
/// Adds given Interval to the current datetime.
/// Returns None if the resulting datetime would be out of range.
pub fn add_interval(&self, interval: Interval) -> Option<Self> {
let naive_datetime = self.to_chrono_datetime()?;
let (months, days, nsecs) = interval.to_month_day_nano();
let naive_datetime = naive_datetime
.checked_add_months(Months::new(months as u32))?
.checked_add_days(Days::new(days as u64))?
+ Duration::from_nanos(nsecs as u64);
Some(naive_datetime.into())
}
/// Subtracts given Interval to the current datetime.
/// Returns None if the resulting datetime would be out of range.
pub fn sub_interval(&self, interval: Interval) -> Option<Self> {
let naive_datetime = self.to_chrono_datetime()?;
let (months, days, nsecs) = interval.to_month_day_nano();
let naive_datetime = naive_datetime
.checked_sub_months(Months::new(months as u32))?
.checked_sub_days(Days::new(days as u64))?
- Duration::from_nanos(nsecs as u64);
Some(naive_datetime.into())
// FIXME(yingwen): remove add/sub intervals later
/// Adds given [IntervalYearMonth] to the current datetime.
pub fn add_year_month(&self, interval: IntervalYearMonth) -> Option<Self> {
let naive_datetime = self.to_chrono_datetime()?;
naive_datetime
.checked_add_months(Months::new(interval.months as u32))
.map(Into::into)
}
/// Adds given [IntervalDayTime] to the current datetime.
pub fn add_day_time(&self, interval: IntervalDayTime) -> Option<Self> {
let naive_datetime = self.to_chrono_datetime()?;
naive_datetime
.checked_add_days(Days::new(interval.days as u64))?
.checked_add_signed(TimeDelta::milliseconds(interval.milliseconds as i64))
.map(Into::into)
}
/// Adds given [IntervalMonthDayNano] to the current datetime.
pub fn add_month_day_nano(&self, interval: IntervalMonthDayNano) -> Option<Self> {
let naive_datetime = self.to_chrono_datetime()?;
naive_datetime
.checked_add_months(Months::new(interval.months as u32))?
.checked_add_days(Days::new(interval.days as u64))?
.checked_add_signed(TimeDelta::nanoseconds(interval.nanoseconds))
.map(Into::into)
}
/// Subtracts given [IntervalYearMonth] to the current datetime.
pub fn sub_year_month(&self, interval: IntervalYearMonth) -> Option<Self> {
let naive_datetime = self.to_chrono_datetime()?;
naive_datetime
.checked_sub_months(Months::new(interval.months as u32))
.map(Into::into)
}
/// Subtracts given [IntervalDayTime] to the current datetime.
pub fn sub_day_time(&self, interval: IntervalDayTime) -> Option<Self> {
let naive_datetime = self.to_chrono_datetime()?;
naive_datetime
.checked_sub_days(Days::new(interval.days as u64))?
.checked_sub_signed(TimeDelta::milliseconds(interval.milliseconds as i64))
.map(Into::into)
}
/// Subtracts given [IntervalMonthDayNano] to the current datetime.
pub fn sub_month_day_nano(&self, interval: IntervalMonthDayNano) -> Option<Self> {
let naive_datetime = self.to_chrono_datetime()?;
naive_datetime
.checked_sub_months(Months::new(interval.months as u32))?
.checked_sub_days(Days::new(interval.days as u64))?
.checked_sub_signed(TimeDelta::nanoseconds(interval.nanoseconds))
.map(Into::into)
}
/// Convert to [common_time::date].
@@ -231,12 +267,12 @@ mod tests {
fn test_add_sub_interval() {
let datetime = DateTime::new(1000);
let interval = Interval::from_day_time(1, 200);
let interval = IntervalDayTime::new(1, 200);
let new_datetime = datetime.add_interval(interval).unwrap();
let new_datetime = datetime.add_day_time(interval).unwrap();
assert_eq!(new_datetime.val(), 1000 + 3600 * 24 * 1000 + 200);
assert_eq!(datetime, new_datetime.sub_interval(interval).unwrap());
assert_eq!(datetime, new_datetime.sub_day_time(interval).unwrap());
}
#[test]

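DateTime stores milliseconds, so the day-time arithmetic can be checked directly. A sketch mirroring the updated test above, under the same assumed common_time imports:

use common_time::{DateTime, IntervalDayTime, IntervalMonthDayNano};

fn main() {
    // 1000 ms after the Unix epoch.
    let datetime = DateTime::new(1000);

    // One day plus 200 ms: 1000 + 86_400_000 + 200, as asserted in the test.
    let shifted = datetime.add_day_time(IntervalDayTime::new(1, 200)).unwrap();
    assert_eq!(shifted.val(), 1000 + 24 * 3600 * 1000 + 200);
    assert_eq!(shifted.sub_day_time(IntervalDayTime::new(1, 200)).unwrap(), datetime);

    // Month-day-nano intervals move months, days and nanoseconds in one value;
    // 2_000_000 ns is exactly 2 ms at DateTime's millisecond resolution.
    let fine = datetime
        .add_month_day_nano(IntervalMonthDayNano::new(0, 1, 2_000_000))
        .unwrap();
    assert_eq!(fine.val(), 1000 + 24 * 3600 * 1000 + 2);
}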

@@ -12,18 +12,10 @@
// See the License for the specific language governing permissions and // See the License for the specific language governing permissions and
// limitations under the License. // limitations under the License.
use std::cmp::Ordering;
use std::default::Default;
use std::fmt::{self, Display, Formatter, Write};
use std::hash::{Hash, Hasher};
use std::hash::Hash;
use arrow::datatypes::IntervalUnit as ArrowIntervalUnit;
use serde::{Deserialize, Serialize};
use serde_json::Value;
use snafu::ResultExt;
use crate::duration::Duration;
use crate::error::{Result, TimestampOverflowSnafu};
#[derive(
Debug, Default, Copy, Clone, PartialEq, Eq, Hash, PartialOrd, Ord, Serialize, Deserialize,
@@ -61,268 +53,269 @@ impl From<ArrowIntervalUnit> for IntervalUnit {
}
}
/// Interval Type represents a period of time.
/// It is composed of months, days and nanoseconds.
/// 3 kinds of interval are supported: year-month, day-time and
/// month-day-nano, which will be stored in the following format.
/// Interval data format:
/// | months | days | nsecs |
/// | 4bytes | 4bytes | 8bytes |
#[derive(Debug, Clone, Default, Copy, Serialize, Deserialize)]
pub struct Interval {
months: i32,
days: i32,
nsecs: i64,
unit: IntervalUnit,
}
// The `Value` type requires Serialize, Deserialize.
#[derive(
Debug, Default, Copy, Clone, Eq, PartialEq, Hash, Ord, PartialOrd, Serialize, Deserialize,
)]
#[repr(C)]
pub struct IntervalYearMonth {
/// Number of months
pub months: i32,
}
impl IntervalYearMonth {
pub fn new(months: i32) -> Self {
Self { months }
// Nanosecond convert to other time unit
pub const NANOS_PER_SEC: i64 = 1_000_000_000;
pub const NANOS_PER_MILLI: i64 = 1_000_000;
pub const NANOS_PER_MICRO: i64 = 1_000;
pub const NANOS_PER_HOUR: i64 = 60 * 60 * NANOS_PER_SEC;
pub const NANOS_PER_DAY: i64 = 24 * NANOS_PER_HOUR;
pub const NANOS_PER_MONTH: i64 = 30 * NANOS_PER_DAY;
pub const DAYS_PER_MONTH: i64 = 30;
impl Interval {
/// Creates a new interval from months, days and nanoseconds.
/// Precision is nanosecond.
pub fn from_month_day_nano(months: i32, days: i32, nsecs: i64) -> Self {
Interval {
months,
days,
nsecs,
unit: IntervalUnit::MonthDayNano,
}
}
/// Creates a new interval from months.
pub fn from_year_month(months: i32) -> Self {
Interval {
months,
days: 0,
nsecs: 0,
unit: IntervalUnit::YearMonth,
}
}
/// Creates a new interval from days and milliseconds.
pub fn from_day_time(days: i32, millis: i32) -> Self {
Interval {
months: 0,
days,
nsecs: (millis as i64) * NANOS_PER_MILLI,
unit: IntervalUnit::DayTime,
}
}
pub fn to_duration(&self) -> Result<Duration> {
Ok(Duration::new_nanosecond(
self.to_nanosecond()
.try_into()
.context(TimestampOverflowSnafu)?,
))
}
/// Return a tuple(months, days, nanoseconds) from the interval.
pub fn to_month_day_nano(&self) -> (i32, i32, i64) {
(self.months, self.days, self.nsecs)
}
/// Converts the interval to nanoseconds.
pub fn to_nanosecond(&self) -> i128 {
let days = (self.days as i64) + DAYS_PER_MONTH * (self.months as i64);
(self.nsecs as i128) + (NANOS_PER_DAY as i128) * (days as i128)
}
/// Smallest interval value.
pub const MIN: Self = Self {
months: i32::MIN,
days: i32::MIN,
nsecs: i64::MIN,
unit: IntervalUnit::MonthDayNano,
};
/// Largest interval value.
pub const MAX: Self = Self {
months: i32::MAX,
days: i32::MAX,
nsecs: i64::MAX,
unit: IntervalUnit::MonthDayNano,
};
/// Returns the justified interval.
/// allows you to adjust the interval of 30-day as one month and the interval of 24-hour as one day
pub fn justified_interval(&self) -> Self {
let mut result = *self;
let extra_months_d = self.days as i64 / DAYS_PER_MONTH;
let extra_months_nsecs = self.nsecs / NANOS_PER_MONTH;
result.days -= (extra_months_d * DAYS_PER_MONTH) as i32;
result.nsecs -= extra_months_nsecs * NANOS_PER_MONTH;
let extra_days = self.nsecs / NANOS_PER_DAY;
result.nsecs -= extra_days * NANOS_PER_DAY;
result.months += extra_months_d as i32 + extra_months_nsecs as i32;
result.days += extra_days as i32;
result
}
/// Convert Interval to nanoseconds,
/// to check whether Interval is positive
pub fn is_positive(&self) -> bool {
self.to_nanosecond() > 0
}
/// is_zero
pub fn is_zero(&self) -> bool {
self.months == 0 && self.days == 0 && self.nsecs == 0
}
/// get unit
pub fn unit(&self) -> IntervalUnit {
self.unit
}
/// Multiple Interval by an integer with overflow check.
/// Returns justified Interval, or `None` if overflow occurred.
pub fn checked_mul_int<I>(&self, rhs: I) -> Option<Self>
where
I: TryInto<i32>,
{
let rhs = rhs.try_into().ok()?;
let months = self.months.checked_mul(rhs)?;
let days = self.days.checked_mul(rhs)?;
let nsecs = self.nsecs.checked_mul(rhs as i64)?;
Some(
Self {
months,
days,
nsecs,
unit: self.unit,
}
.justified_interval(),
)
}
/// Convert Interval to ISO 8601 string
pub fn to_iso8601_string(self) -> String {
IntervalFormat::from(self).to_iso8601_string()
}
/// Convert Interval to postgres verbose string
pub fn to_postgres_string(self) -> String {
IntervalFormat::from(self).to_postgres_string()
}
/// Convert Interval to sql_standard string
pub fn to_sql_standard_string(self) -> String {
IntervalFormat::from(self).to_sql_standard_string()
}
/// Interval Type and i128 [IntervalUnit::MonthDayNano] Convert
/// v consists of months(i32) | days(i32) | nsecs(i64)
pub fn from_i128(v: i128) -> Self {
Interval {
nsecs: v as i64,
days: (v >> 64) as i32,
months: (v >> 96) as i32,
unit: IntervalUnit::MonthDayNano,
}
}
/// `Interval` Type and i64 [IntervalUnit::DayTime] Convert
/// v consists of days(i32) | milliseconds(i32)
pub fn from_i64(v: i64) -> Self {
Interval {
nsecs: ((v as i32) as i64) * NANOS_PER_MILLI,
days: (v >> 32) as i32,
months: 0,
unit: IntervalUnit::DayTime,
}
}
/// `Interval` Type and i32 [IntervalUnit::YearMonth] Convert
/// v consists of months(i32)
pub fn from_i32(v: i32) -> Self {
Interval {
nsecs: 0,
days: 0,
months: v,
unit: IntervalUnit::YearMonth,
}
}
pub fn to_i128(&self) -> i128 {
// 128 96 64 0
// +-------+-------+-------+-------+-------+-------+-------+-------+
// | months | days | nanoseconds |
// +-------+-------+-------+-------+-------+-------+-------+-------+
let months = (self.months as u128 & u32::MAX as u128) << 96;
let days = (self.days as u128 & u32::MAX as u128) << 64;
let nsecs = self.nsecs as u128 & u64::MAX as u128;
(months | days | nsecs) as i128
}
pub fn to_i64(&self) -> i64 {
// 64 32 0
// +-------+-------+-------+-------+-------+-------+-------+-------+
// | days | milliseconds |
// +-------+-------+-------+-------+-------+-------+-------+-------+
let days = (self.days as u64 & u32::MAX as u64) << 32;
let milliseconds = (self.nsecs / NANOS_PER_MILLI) as u64 & u32::MAX as u64;
(days | milliseconds) as i64
} }
pub fn to_i32(&self) -> i32 {
self.months
}
pub fn from_i32(months: i32) -> Self {
Self { months }
}
pub fn negative(&self) -> Self {
Self {
months: -self.months,
days: -self.days,
nsecs: -self.nsecs,
unit: self.unit,
Self::new(-self.months)
}
pub fn to_iso8601_string(&self) -> String {
IntervalFormat::from(*self).to_iso8601_string()
}
}
impl From<IntervalYearMonth> for IntervalFormat {
fn from(interval: IntervalYearMonth) -> Self {
IntervalFormat {
years: interval.months / 12,
months: interval.months % 12,
..Default::default()
} }
} }
} }
impl From<i128> for Interval {
impl From<i32> for IntervalYearMonth {
fn from(v: i32) -> Self {
Self::from_i32(v)
}
}
impl From<IntervalYearMonth> for i32 {
fn from(v: IntervalYearMonth) -> Self {
v.to_i32()
}
}
impl From<IntervalYearMonth> for serde_json::Value {
fn from(v: IntervalYearMonth) -> Self {
serde_json::Value::from(v.to_i32())
}
}
#[derive(
Debug, Default, Copy, Clone, Eq, PartialEq, Hash, Ord, PartialOrd, Serialize, Deserialize,
)]
#[repr(C)]
pub struct IntervalDayTime {
/// Number of days
pub days: i32,
/// Number of milliseconds
pub milliseconds: i32,
}
impl IntervalDayTime {
/// The additive identity i.e. `0`.
pub const ZERO: Self = Self::new(0, 0);
/// The multiplicative inverse, i.e. `-1`.
pub const MINUS_ONE: Self = Self::new(-1, -1);
/// The maximum value that can be represented
pub const MAX: Self = Self::new(i32::MAX, i32::MAX);
/// The minimum value that can be represented
pub const MIN: Self = Self::new(i32::MIN, i32::MIN);
pub const fn new(days: i32, milliseconds: i32) -> Self {
Self { days, milliseconds }
}
pub fn to_i64(&self) -> i64 {
let d = (self.days as u64 & u32::MAX as u64) << 32;
let m = self.milliseconds as u64 & u32::MAX as u64;
(d | m) as i64
}
pub fn from_i64(value: i64) -> Self {
let days = (value >> 32) as i32;
let milliseconds = value as i32;
Self { days, milliseconds }
}
pub fn negative(&self) -> Self {
Self::new(-self.days, -self.milliseconds)
}
pub fn to_iso8601_string(&self) -> String {
IntervalFormat::from(*self).to_iso8601_string()
}
pub fn as_millis(&self) -> i64 {
self.days as i64 * MS_PER_DAY + self.milliseconds as i64
}
}
impl From<i64> for IntervalDayTime {
fn from(v: i64) -> Self {
Self::from_i64(v)
}
}
impl From<IntervalDayTime> for i64 {
fn from(v: IntervalDayTime) -> Self {
v.to_i64()
}
}
impl From<IntervalDayTime> for serde_json::Value {
fn from(v: IntervalDayTime) -> Self {
serde_json::Value::from(v.to_i64())
}
}
// Millisecond convert to other time unit
pub const MS_PER_SEC: i64 = 1_000;
pub const MS_PER_MINUTE: i64 = 60 * MS_PER_SEC;
pub const MS_PER_HOUR: i64 = 60 * MS_PER_MINUTE;
pub const MS_PER_DAY: i64 = 24 * MS_PER_HOUR;
pub const NANOS_PER_MILLI: i64 = 1_000_000;
impl From<IntervalDayTime> for IntervalFormat {
fn from(interval: IntervalDayTime) -> Self {
IntervalFormat {
days: interval.days,
hours: interval.milliseconds as i64 / MS_PER_HOUR,
minutes: (interval.milliseconds as i64 % MS_PER_HOUR) / MS_PER_MINUTE,
seconds: (interval.milliseconds as i64 % MS_PER_MINUTE) / MS_PER_SEC,
microseconds: (interval.milliseconds as i64 % MS_PER_SEC) * MS_PER_SEC,
..Default::default()
}
}
}
#[derive(
Debug, Default, Copy, Clone, Eq, PartialEq, Hash, Ord, PartialOrd, Serialize, Deserialize,
)]
#[repr(C)]
pub struct IntervalMonthDayNano {
/// Number of months
pub months: i32,
/// Number of days
pub days: i32,
/// Number of nanoseconds
pub nanoseconds: i64,
}
impl IntervalMonthDayNano {
/// The additive identity i.e. `0`.
pub const ZERO: Self = Self::new(0, 0, 0);
/// The multiplicative inverse, i.e. `-1`.
pub const MINUS_ONE: Self = Self::new(-1, -1, -1);
/// The maximum value that can be represented
pub const MAX: Self = Self::new(i32::MAX, i32::MAX, i64::MAX);
/// The minimum value that can be represented
pub const MIN: Self = Self::new(i32::MIN, i32::MIN, i64::MIN);
pub const fn new(months: i32, days: i32, nanoseconds: i64) -> Self {
Self {
months,
days,
nanoseconds,
}
}
pub fn to_i128(&self) -> i128 {
let m = (self.months as u128 & u32::MAX as u128) << 96;
let d = (self.days as u128 & u32::MAX as u128) << 64;
let n = self.nanoseconds as u128 & u64::MAX as u128;
(m | d | n) as i128
}
pub fn from_i128(value: i128) -> Self {
let months = (value >> 96) as i32;
let days = (value >> 64) as i32;
let nanoseconds = value as i64;
Self {
months,
days,
nanoseconds,
}
}
pub fn negative(&self) -> Self {
Self::new(-self.months, -self.days, -self.nanoseconds)
}
pub fn to_iso8601_string(&self) -> String {
IntervalFormat::from(*self).to_iso8601_string()
}
}
impl From<i128> for IntervalMonthDayNano {
fn from(v: i128) -> Self { fn from(v: i128) -> Self {
Self::from_i128(v) Self::from_i128(v)
} }
} }
impl From<Interval> for i128 {
impl From<IntervalMonthDayNano> for i128 {
fn from(v: Interval) -> Self {
fn from(v: IntervalMonthDayNano) -> Self {
v.to_i128()
}
}
impl From<Interval> for serde_json::Value {
impl From<IntervalMonthDayNano> for serde_json::Value {
fn from(v: Interval) -> Self {
fn from(v: IntervalMonthDayNano) -> Self {
Value::String(v.to_string())
serde_json::Value::from(v.to_i128().to_string())
}
}
impl Display for Interval {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
let mut s = String::new();
if self.months != 0 {
write!(s, "{} months ", self.months)?;
// Nanosecond convert to other time unit
pub const NS_PER_SEC: i64 = 1_000_000_000;
pub const NS_PER_MINUTE: i64 = 60 * NS_PER_SEC;
pub const NS_PER_HOUR: i64 = 60 * NS_PER_MINUTE;
pub const NS_PER_DAY: i64 = 24 * NS_PER_HOUR;
impl From<IntervalMonthDayNano> for IntervalFormat {
fn from(interval: IntervalMonthDayNano) -> Self {
IntervalFormat {
years: interval.months / 12,
months: interval.months % 12,
days: interval.days,
hours: interval.nanoseconds / NS_PER_HOUR,
minutes: (interval.nanoseconds % NS_PER_HOUR) / NS_PER_MINUTE,
seconds: (interval.nanoseconds % NS_PER_MINUTE) / NS_PER_SEC,
microseconds: (interval.nanoseconds % NS_PER_SEC) / 1_000,
} }
if self.days != 0 {
write!(s, "{} days ", self.days)?;
} }
if self.nsecs != 0 { }
write!(s, "{} nsecs", self.nsecs)?;
pub fn interval_year_month_to_month_day_nano(interval: IntervalYearMonth) -> IntervalMonthDayNano {
IntervalMonthDayNano {
months: interval.months,
days: 0,
nanoseconds: 0,
} }
write!(f, "{}", s.trim()) }
pub fn interval_day_time_to_month_day_nano(interval: IntervalDayTime) -> IntervalMonthDayNano {
IntervalMonthDayNano {
months: 0,
days: interval.days,
nanoseconds: interval.milliseconds as i64 * NANOS_PER_MILLI,
} }
} }
@@ -339,31 +332,6 @@ pub struct IntervalFormat {
pub microseconds: i64, pub microseconds: i64,
} }
impl From<Interval> for IntervalFormat {
fn from(val: Interval) -> IntervalFormat {
let months = val.months;
let days = val.days;
let microseconds = val.nsecs / NANOS_PER_MICRO;
let years = (months - (months % 12)) / 12;
let months = months - years * 12;
let hours = (microseconds - (microseconds % 3_600_000_000)) / 3_600_000_000;
let microseconds = microseconds - hours * 3_600_000_000;
let minutes = (microseconds - (microseconds % 60_000_000)) / 60_000_000;
let microseconds = microseconds - minutes * 60_000_000;
let seconds = (microseconds - (microseconds % 1_000_000)) / 1_000_000;
let microseconds = microseconds - seconds * 1_000_000;
IntervalFormat {
years,
months,
days,
hours,
minutes,
seconds,
microseconds,
}
}
}
impl IntervalFormat { impl IntervalFormat {
/// All the field in the interval is 0 /// All the field in the interval is 0
pub fn is_zero(&self) -> bool { pub fn is_zero(&self) -> bool {
@@ -540,117 +508,37 @@ fn get_time_part(
interval interval
} }
/// IntervalCompare is used to compare two intervals
/// It makes interval into nanoseconds style.
#[derive(PartialEq, Eq, Hash, PartialOrd, Ord)]
struct IntervalCompare(i128);
impl From<Interval> for IntervalCompare {
fn from(interval: Interval) -> Self {
Self(interval.to_nanosecond())
}
}
impl Ord for Interval {
fn cmp(&self, other: &Self) -> Ordering {
IntervalCompare::from(*self).cmp(&IntervalCompare::from(*other))
}
}
impl PartialOrd for Interval {
fn partial_cmp(&self, other: &Self) -> Option<Ordering> {
Some(self.cmp(other))
}
}
impl Eq for Interval {}
impl PartialEq for Interval {
fn eq(&self, other: &Self) -> bool {
self.cmp(other).is_eq()
}
}
impl Hash for Interval {
fn hash<H: Hasher>(&self, state: &mut H) {
IntervalCompare::from(*self).hash(state)
}
}
#[cfg(test)] #[cfg(test)]
mod tests { mod tests {
use std::collections::HashMap;
use super::*; use super::*;
use crate::timestamp::TimeUnit;
#[test] #[test]
fn test_from_year_month() {
let interval = Interval::from_year_month(1);
let interval = IntervalYearMonth::new(1);
assert_eq!(interval.months, 1);
}
#[test]
fn test_from_date_time() {
let interval = Interval::from_day_time(1, 2);
let interval = IntervalDayTime::new(1, 2);
assert_eq!(interval.days, 1);
assert_eq!(interval.nsecs, 2_000_000);
assert_eq!(interval.milliseconds, 2);
}
#[test]
fn test_to_duration() {
fn test_from_month_day_nano() {
let interval = Interval::from_day_time(1, 2);
let interval = IntervalMonthDayNano::new(1, 2, 3);
assert_eq!(interval.months, 1);
let duration = interval.to_duration().unwrap();
assert_eq!(interval.days, 2);
assert_eq!(86400002000000, duration.value());
assert_eq!(interval.nanoseconds, 3);
assert_eq!(TimeUnit::Nanosecond, duration.unit());
let interval = Interval::from_year_month(12);
let duration = interval.to_duration().unwrap();
assert_eq!(31104000000000000, duration.value());
assert_eq!(TimeUnit::Nanosecond, duration.unit());
}
#[test]
fn test_interval_is_positive() {
let interval = Interval::from_year_month(1);
assert!(interval.is_positive());
let interval = Interval::from_year_month(-1);
assert!(!interval.is_positive());
let interval = Interval::from_day_time(1, i32::MIN);
assert!(!interval.is_positive());
}
#[test]
fn test_to_nanosecond() {
let interval = Interval::from_year_month(1);
assert_eq!(interval.to_nanosecond(), 2592000000000000);
let interval = Interval::from_day_time(1, 2);
assert_eq!(interval.to_nanosecond(), 86400002000000);
let max_interval = Interval::from_month_day_nano(i32::MAX, i32::MAX, i64::MAX);
assert_eq!(max_interval.to_nanosecond(), 5751829423496836854775807);
let min_interval = Interval::from_month_day_nano(i32::MIN, i32::MIN, i64::MIN);
assert_eq!(min_interval.to_nanosecond(), -5751829426175236854775808);
}
#[test]
fn test_interval_is_zero() {
let interval = Interval::from_month_day_nano(1, 1, 1);
assert!(!interval.is_zero());
let interval = Interval::from_month_day_nano(0, 0, 0);
assert!(interval.is_zero());
} }
#[test] #[test]
fn test_interval_i128_convert() { fn test_interval_i128_convert() {
let test_interval_eq = |month, day, nano| { let test_interval_eq = |month, day, nano| {
let interval = Interval::from_month_day_nano(month, day, nano);
let interval = IntervalMonthDayNano::new(month, day, nano);
let interval_i128 = interval.to_i128();
let interval2 = Interval::from_i128(interval_i128);
let interval2 = IntervalMonthDayNano::from_i128(interval_i128);
assert_eq!(interval, interval2); assert_eq!(interval, interval2);
}; };
@@ -666,11 +554,26 @@ mod tests {
test_interval_eq(i32::MAX, i32::MIN, i64::MIN); test_interval_eq(i32::MAX, i32::MIN, i64::MIN);
test_interval_eq(i32::MIN, i32::MAX, i64::MIN); test_interval_eq(i32::MIN, i32::MAX, i64::MIN);
test_interval_eq(i32::MIN, i32::MIN, i64::MIN); test_interval_eq(i32::MIN, i32::MIN, i64::MIN);
let interval = IntervalMonthDayNano::from_i128(1);
assert_eq!(interval, IntervalMonthDayNano::new(0, 0, 1));
assert_eq!(1, IntervalMonthDayNano::new(0, 0, 1).to_i128());
}
#[test]
fn test_interval_i64_convert() {
let interval = IntervalDayTime::from_i64(1);
assert_eq!(interval, IntervalDayTime::new(0, 1));
assert_eq!(1, IntervalDayTime::new(0, 1).to_i64());
} }
#[test] #[test]
fn test_convert_interval_format() { fn test_convert_interval_format() {
let interval = Interval::from_month_day_nano(14, 160, 1000000);
let interval = IntervalMonthDayNano {
months: 14,
days: 160,
nanoseconds: 1000000,
};
let interval_format = IntervalFormat::from(interval); let interval_format = IntervalFormat::from(interval);
assert_eq!(interval_format.years, 1); assert_eq!(interval_format.years, 1);
assert_eq!(interval_format.months, 2); assert_eq!(interval_format.months, 2);
@@ -681,94 +584,34 @@ mod tests {
assert_eq!(interval_format.microseconds, 1000); assert_eq!(interval_format.microseconds, 1000);
} }
#[test]
fn test_interval_hash() {
let interval = Interval::from_month_day_nano(1, 31, 1);
let interval2 = Interval::from_month_day_nano(2, 1, 1);
let mut map = HashMap::new();
map.insert(interval, 1);
assert_eq!(map.get(&interval2), Some(&1));
}
#[test]
fn test_interval_mul_int() {
let interval = Interval::from_month_day_nano(1, 1, 1);
let interval2 = interval.checked_mul_int(2).unwrap();
assert_eq!(interval2.months, 2);
assert_eq!(interval2.days, 2);
assert_eq!(interval2.nsecs, 2);
// test justified interval
let interval = Interval::from_month_day_nano(1, 31, 1);
let interval2 = interval.checked_mul_int(2).unwrap();
assert_eq!(interval2.months, 4);
assert_eq!(interval2.days, 2);
assert_eq!(interval2.nsecs, 2);
// test overflow situation
let interval = Interval::from_month_day_nano(i32::MAX, 1, 1);
let interval2 = interval.checked_mul_int(2);
assert!(interval2.is_none());
}
#[test]
fn test_display() {
let interval = Interval::from_month_day_nano(1, 1, 1);
assert_eq!(interval.to_string(), "1 months 1 days 1 nsecs");
let interval = Interval::from_month_day_nano(14, 31, 10000000000);
assert_eq!(interval.to_string(), "14 months 31 days 10000000000 nsecs");
}
#[test]
fn test_interval_justified() {
let interval = Interval::from_month_day_nano(1, 131, 1).justified_interval();
let interval2 = Interval::from_month_day_nano(5, 11, 1);
assert_eq!(interval, interval2);
let interval = Interval::from_month_day_nano(1, 1, NANOS_PER_MONTH + 2 * NANOS_PER_DAY)
.justified_interval();
let interval2 = Interval::from_month_day_nano(2, 3, 0);
assert_eq!(interval, interval2);
}
#[test]
fn test_serde_json() {
let interval = Interval::from_month_day_nano(1, 1, 1);
let json = serde_json::to_string(&interval).unwrap();
assert_eq!(
json,
"{\"months\":1,\"days\":1,\"nsecs\":1,\"unit\":\"MonthDayNano\"}"
);
let interval2: Interval = serde_json::from_str(&json).unwrap();
assert_eq!(interval, interval2);
}
#[test] #[test]
fn test_to_iso8601_string() { fn test_to_iso8601_string() {
// Test interval zero // Test interval zero
let interval = Interval::from_month_day_nano(0, 0, 0); let interval = IntervalMonthDayNano::new(0, 0, 0);
assert_eq!(interval.to_iso8601_string(), "PT0S"); assert_eq!(interval.to_iso8601_string(), "PT0S");
let interval = Interval::from_month_day_nano(1, 1, 1); let interval = IntervalMonthDayNano::new(1, 1, 1);
assert_eq!(interval.to_iso8601_string(), "P0Y1M1DT0H0M0S"); assert_eq!(interval.to_iso8601_string(), "P0Y1M1DT0H0M0S");
let interval = Interval::from_month_day_nano(14, 31, 10000000000); let interval = IntervalMonthDayNano::new(14, 31, 10000000000);
assert_eq!(interval.to_iso8601_string(), "P1Y2M31DT0H0M10S"); assert_eq!(interval.to_iso8601_string(), "P1Y2M31DT0H0M10S");
let interval = Interval::from_month_day_nano(14, 31, 23210200000000); let interval = IntervalMonthDayNano::new(14, 31, 23210200000000);
assert_eq!(interval.to_iso8601_string(), "P1Y2M31DT6H26M50.2S"); assert_eq!(interval.to_iso8601_string(), "P1Y2M31DT6H26M50.2S");
} }
#[test] #[test]
fn test_to_postgres_string() { fn test_to_postgres_string() {
// Test interval zero // Test interval zero
let interval = Interval::from_month_day_nano(0, 0, 0); let interval = IntervalMonthDayNano::new(0, 0, 0);
assert_eq!(interval.to_postgres_string(), "00:00:00");
let interval = Interval::from_month_day_nano(23, 100, 23210200000000);
assert_eq!( assert_eq!(
interval.to_postgres_string(), IntervalFormat::from(interval).to_postgres_string(),
"00:00:00"
);
let interval = IntervalMonthDayNano::new(23, 100, 23210200000000);
assert_eq!(
IntervalFormat::from(interval).to_postgres_string(),
"1 year 11 mons 100 days 06:26:50.200000" "1 year 11 mons 100 days 06:26:50.200000"
); );
} }
@@ -776,18 +619,21 @@ mod tests {
#[test] #[test]
fn test_to_sql_standard_string() { fn test_to_sql_standard_string() {
// Test zero interval // Test zero interval
let interval = Interval::from_month_day_nano(0, 0, 0); let interval = IntervalMonthDayNano::new(0, 0, 0);
assert_eq!(interval.to_sql_standard_string(), "0"); assert_eq!(IntervalFormat::from(interval).to_sql_standard_string(), "0");
let interval = Interval::from_month_day_nano(23, 100, 23210200000000); let interval = IntervalMonthDayNano::new(23, 100, 23210200000000);
assert_eq!( assert_eq!(
interval.to_sql_standard_string(), IntervalFormat::from(interval).to_sql_standard_string(),
"+1-11 +100 +6:26:50.200000" "+1-11 +100 +6:26:50.200000"
); );
// Test interval without year, month, day // Test interval without year, month, day
let interval = Interval::from_month_day_nano(0, 0, 23210200000000); let interval = IntervalMonthDayNano::new(0, 0, 23210200000000);
assert_eq!(interval.to_sql_standard_string(), "6:26:50.200000"); assert_eq!(
IntervalFormat::from(interval).to_sql_standard_string(),
"6:26:50.200000"
);
} }
#[test] #[test]

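The replacement types keep the packed integer encodings used on the wire and by Arrow: year-month is a plain i32 of months, day-time packs days and milliseconds into an i64, and month-day-nano packs months, days and nanoseconds into an i128. A sketch of the round trips and of the widening helper introduced above; the interval module path is assumed from the crate layout.

use common_time::interval::{
    interval_day_time_to_month_day_nano, IntervalDayTime, IntervalMonthDayNano, IntervalYearMonth,
};

fn main() {
    // Year-month intervals are just a month count.
    assert_eq!(IntervalYearMonth::new(14).to_i32(), 14);

    // Day-time: high 32 bits are days, low 32 bits are milliseconds.
    let dt = IntervalDayTime::new(1, 500);
    assert_eq!(dt.to_i64(), (1i64 << 32) | 500);
    assert_eq!(IntervalDayTime::from_i64(dt.to_i64()), dt);

    // Month-day-nano: months | days | nanoseconds across 128 bits.
    let mdn = IntervalMonthDayNano::new(1, 2, 3);
    assert_eq!(IntervalMonthDayNano::from_i128(mdn.to_i128()), mdn);

    // Widening a day-time interval preserves the duration while switching the
    // sub-day unit to nanoseconds (500 ms -> 500_000_000 ns).
    let widened = interval_day_time_to_month_day_nano(dt);
    assert_eq!(widened, IntervalMonthDayNano::new(0, 1, 500_000_000));
}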

@@ -27,7 +27,7 @@ pub mod util;
pub use date::Date;
pub use datetime::DateTime;
pub use duration::Duration;
pub use interval::Interval;
pub use interval::{IntervalDayTime, IntervalMonthDayNano, IntervalYearMonth};
pub use range::RangeMillis;
pub use timestamp::Timestamp;
pub use timestamp_millis::TimestampMillis;
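For downstream crates the visible change is in these re-exports: the single Interval type disappears and the three concrete types take its place. A short sketch of migrated imports, assuming the crate is consumed as common_time (as the doc links above suggest):

use common_time::{IntervalDayTime, IntervalMonthDayNano, IntervalYearMonth};

fn main() {
    // Each concrete type keeps the formatting helpers the old Interval had.
    let mdn = IntervalMonthDayNano::new(14, 31, 10_000_000_000);
    assert_eq!(mdn.to_iso8601_string(), "P1Y2M31DT0H0M10S");
    assert_eq!(IntervalYearMonth::new(3).negative().to_i32(), -3);
    assert_eq!(IntervalDayTime::new(1, 200).as_millis(), 86_400_200);
}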

Some files were not shown because too many files have changed in this diff.