* feat: add backward compatibility test for persistent ctx
* refactor: refactor State of region migration
* feat: add test utils for region migration tests
* test: add simple region migration tests
* chore: apply suggestions from CR
* feat: adds date_add and date_sub function
* test: add date function
* fix: adds interval to date returns wrong result
* fix: header
* fix: typo
* fix: timestamp resolution
* fix: capacity
* chore: apply suggestion
* fix: wrong behavior when adding intervals to timestamp, date and datetime
* chore: remove unused error
* test: refactor and add some tests
* chore: add logs and metrics
* feat: add the timer to track heartbeat interval
* feat: add the gauge to track region leases
* refactor: use gauge instead of the timer
* chore: apply suggestions from CR
* feat: add hit rate and etcd txn metrics
* feat: add random weighted choice in load_based selector
* fix: meta cannot save heartbeats when cluster has no region
* chore: print some log
* chore: remove unused code
* cr
* add some logs when filter result is empty
* feat: add page cache
* docs: update mito config toml
* feat: impl CachedPageReader
* feat: use cache reader to read row group
* feat: do not fetch data if we have pages in cache
* chore: return if nothing to fetch
* feat: enlarge page cache size
* test: test write read parquet
* test: test cache
* docs: update comments
* test: fix config api test
* feat: cache metrics
* feat: change default page cache size
* test: fix config api test
* feat: bump prost and fix pprof feature compiler errors
* feat: fix compiler errors on tokio-console
* chore: fix compiler errors
* ci: add all features check to ci
* feat: decrease the `page size` if the response message size exceeds the limit
* chore: apply suggestions from CR
* feat: prefer to use adaptive_page_size
* chore: apply suggestions from CR
* feat: Control merge reader by batch size
* test: test heap with large range
* fix: merge one batch
* test: merge many duplicates
* test: test reheap hot
* feat: don't handle empty batch in merge reader
* feat: use fixed error message for unknown error
* feat: return fixed message for internal error as well
* chore: include status code in error message
* test: update tests for asserts of error message
* feat: change status code of some datafusion error
* fix: make CollectRecordbatch an query error
* test: update sqlness results
* feat: RepeatedTask adds execute-first-wait-later behavior.
* feat: add interval generator for repeated task component
* feat: impl debug for dyn IntervalGenerator trait
* chore: change some words
* chore: instead of a complicated way, we add an initial_delay to control task interval
* chore: some improve by pr comment
* feat: opentsdb row protocol
* fix: added comments for num of rows and failure if output is not of affected rows
* fix: added extra 1 to number of columns
* fix: avoided cloning datapoints, took ownership instead
* fix: changed vector slice to vector
* fix: remove clone
* fix: combined datapoints and requests with zip instead of enumerating
---------
Co-authored-by: Ubuntu <ubuntu@ip-172-31-43-183.us-east-2.compute.internal>
* feat: enable range expr nesting
* fix: change range expr rewrite format
* chore: organize range query tests
* chore: change range expr name(e.g. MAX(v) RANGE 5s FILL 6)
* chore: add range query test
* chore: fix code advice
* chore: fix ca
* ci: add copy-image.sh and upload-artifacts-to-s3.sh
* ci: remove unused options in dev build
* ci: use 'upload-artifacts-to-s3.sh' and 'copy-image.sh' in release-cn-artifacts action
* refactor: refine copy-image.sh
* fix: invalid requests created by nyc-taxi
* feat: add timestamp to table name
* style: fix clippy
* chore: re-export deps for client
* fix: wait result
* chore: no need to define a prefix constant
* feat: memtable support filter pushdown to prune primary keys
* fix: switch to next time series when pk not selected
* fix: allow predicate evaluation failure
* fix: some clippy warnings
* fix: panic when no primary key in schema
* feat: cache decoded record batch for primary key
* refactor: use arcswap instead of rwlock
* fix: format toml
* test: test different order
* test: add tests for missing and invalid columns
* fix: do not skip schema validation while missing columns
* chore: use field_columns()
* test: add tests for different column order
* chore: add dbname in region request header for tracking purpose
* chore: fix handle read
* chore: add write meter
* chore: add meter-core to dep
* chore: add converter between RegionRequestHeader and QueryContext & update proto version
* feat: add vector_cache to CacheManager
* feat: cache repeated vectors
* feat: skip decoding pk if output doesn't contain tags
* test: add TestRegionMetadataBuilder
* test: test ProjectionMapper
* test: test vector cache
* test: test projection mapper convert
* style: fix clippy
* feat: do not cache vector if it is too large
* docs: update comment
* feat: merge by heap
* fix: fix heap order
* feat: avoid pop/push next and refactor some functions
* feat: replace merge_batches and fix tests
* test: add test that a key is deleted
* fix: skip empty batch
* style: clippy
* chore: fix typos
* feat: support greatest function
* feat: make greatest take date_type as input
* fix: move sqlness test into common/function/time.sql
* fix: avoid using unwrap
* fix: use downcast
* refactor: simplify arrow cast
* feat: implement new histogram data model
* feat: use prometheus table format for histogram
* refactor: remove duplicated code
* fix: histogram tag column
* fix: use accumulated count in buckets
* refactor: using row based protocol for otlp WIP
* refactor: use row based writer for otlp.
Also updated row writer for owned keys
* refactor: use row writers for otlp
* test: add integration tests for histogram
* refactor: change le column name
* test: test on_compaction_finished
* fix: avoid submit same region to compact
* feat: persist and recover compaction time window
* test: fix test
* test: sort like result
* feat: added show tables command
* fix(tests): fixed parser and statement unit tests
* chore: implemented display trait for table type
* fix: handled no table type and error for unsupported command in show database
* chore: removed full as a show kind, instead as a show option
* chore(tests): fixed failing test and added more tests for show full
* chore: refactored table types to use filters
* fix: changed table_type to tables
* feat: RegionMetadataBuilder allow adding/dropping columns multiple times
* test: test add if not exists/drop if exists
* feat: change validator and add need_alter
* test: fix tests and test need_alter
* test: test alter retry
* feat: open before create
* style: fix clippy
* refactor: move RegionOptions to options mod
* refactor: define compaction strategy in region/options.rs
* feat: use duration for time window
* refactor: rename CompactionStrategy to CompactionOptions
* feat: use serde to parse options
* feat: parse options
* feat: set options on creation/opening
* test: test create/open with options
* chore: remove todo
* feat: get compaction ttl and options from RegionOptions
* style: fix clippy
* chore: Remove unused engine_options
* style: fix clippy
* chore: remove todo
* fix: check version before alter region
* chore: apply suggestions from CR
* Update src/mito2/src/worker/handle_alter.rs
Co-authored-by: dennis zhuang <killme2008@gmail.com>
---------
Co-authored-by: dennis zhuang <killme2008@gmail.com>
* feat: allow multiple waiters in compaction request
* feat: compaction status wip
* feat: track region status in compaction scheduler
* feat: impl compaction scheduler
* feat: call compaction scheduler
* feat: remove status if nothing to compact
* feat: schedule compaction after flush
* feat: set compacting to false after compaction finished
* refactor: flush status only needs region id and version control
* refactor: schedule_compaction don't need region as argument
* test: test flush/scheduler for empty requests
* test: trigger compaction in test
* feat: notify scheduler on truncated
* chore: Apply suggestions from code review
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
---------
Co-authored-by: JeremyHi <jiachun_feng@proton.me>
* refactor:
1. remove TableIdent, use TableId directly
2. use the latest greptime-proto
3. independently invalidate table id cache and table name cache
* rebase
* fix: resolve PR comments
* fix: resolve PR comments
* chore: set mutable limit to half of the global write buffer size
* refactor: put handle_flush_finished after handle_flush_request
* refactor: rename tests.rs to basic_test.rs
* style: fmt code
* feat: add writable flag to region.
* refactor: rename MitoEngine to MitoEngine::scanner
* feat: add set_writable() to RegionEngine
* feat: check whether region is writable
* feat: make set_writable sync
* test: test set_writable
* docs: update comments
* feat: send result on compaction failure
* refactor: wrap output sender in new type
* feat: on failure
* refactor: use get_region_or/writable_region_or
* refactor: remove send_result
* feat: notify waiters on flush scheduler drop
* test: fix tests
* fix: only alter writable region
* feat: add more info to error messages
* feat: store next column id in procedure
* fix: update next column id for table info
* test: fix add col test
* chore: remove location from invalid request error
* test: update test
* test: fix test
* test: add test for reopen
* feat: last entry id starts from flushed entry id
* fix: store flushed sequence and recover it from manifest
* test: check sequence in alter test
* test: more tests for alter
* feat: impl handle_alter wip
* refactor: move send_result to worker.rs
* feat: skeleton for handle_alter_request
* feat: write requests should wait for alteration
* feat: define alter request
* chore: no warnings
* fix: remove memtables after flush
* chore: update comments and impl add_write_request_to_pending
* feat: add schema version to RegionMetadata
* feat: impl alter_schema/can_alter_directly
* chore: use send_result
* test: pull next_batch again
* feat: convert pb AlterRequest to RegionAlterRequest
* feat: validate alter request
* feat: validate request and alter metadata
* feat: allow none location
* test: test alter
* fix: recover files and flushed entry id from manifest
* test: test alter
* chore: change comments and variables
* chore: fix compiler errors
* feat: add is_empty() to MemtableVersion
* test: fix metadata alter test
* fix: Compaction picker doesn't notify waiters if it returns None
* chore: address CR comments
* test: add tests for alter request
* refactor: use send_result
* feat: compaction component
* feat: mito2 compaction
* Avoid building time range predicates when merging SST files, since in TWCS we don't enforce a strict time window.
* fix: some CR comments
* minor: change CompactionRequest::senders to an option
* chore: handle compaction finish error
* feat: integrate compaction into region worker
* chore: rebase upstream
* fix: Some CR comments
* chore: Apply suggestions from code review
* style: fix clippy
---------
Co-authored-by: Yingwen <realevenyag@gmail.com>
* refactor:
1. remove method `register_system_table` from CatalogManager
2. the creation of ScriptTable (as a system table) is removed from CatalogManager. Instead, the ScriptTable is created when the Frontend instance is starting, by calling the Frontend instance's gRPC handler.
* rebase
* fix: filter out outdated heartbeat, #1707
* feat: reorder handlers
* refactor: disableXXX to enableXXX
* feat: make full use of region leases to facilitate failover
* chore: minor refactor
* chore: by comment
* feat: logging on inactive/active
* chore: call handle_flush_request
* feat: alias SchedulerRef and clean scheduler on drop
* feat: add scheduler to workers
* feat: remove RegionMemtableStats
* feat: pick regions to flush
* feat: add more fields to region flush task
* feat: smallvec workspace dep
* feat: Use list to hold immutable memtables
* feat: flush job wip
* feat: use access layer to read write sst
* feat: flush memtables to l0
* feat: write manifest
* feat: schedule next flush on success
* feat: schedule flush on success and failure
* feat: add purger to region
* feat: apply edit after flush
* feat: collect stats for SSTs
* feat: manual flush
* test: test flush and fix manifest test
* feat: remove flush scheduler job limit
* fix: typo
* style: clippy
* feat: clean flushed files on failure
* chore: address CR comment
* refactor: Use put_rows
* feat: Clean flush scheduler on drop
* feat: remove region flush status on drop and close
* chore: address CR comment
* feat: alias SchedulerRef and clean scheduler on drop
* feat: add scheduler to workers
* feat: use access layer to read write sst
* feat: add purger to region
* refactor: allow getting region_dir from AccessLayer
* feat: add scheduler to FlushScheduler
* feat: getter for object store
* chore: fix typo
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
---------
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
* feat: only allow timestamp data type as time index
* test: update sqltest cases, todo: need some fixes
* fix: sqlness tests
* fix: forgot adding back cte test
* chore: style
* fix: existing null value for schema name value
* chore: fix null check
* fix: change catalognamevalue and schemanamevalue to option
* fix: fix null case
* chore: update proto
* chore: add try from for schema name value
* chore: merge schema opts to table opts while creating table
* chore: use table ttl opts first
* chore: add unit test
* chore: update proto version
* feat: replay memtable when opening table
* test: region replay
* refactor: save logstore in TestEnv
* fix: some cr comments
* chore: rebase develop
* chore: update last entry id during replay
* chore: change userinfo to query_ctx in http handler
* chore: minor change
* chore: move prometheus http to http mod
* chore: fix unit test
* chore: add back schema check
* chore: minor change
* chore: remove clone
* refactor: use arrow::compute::concat instead of push values to vector builders
* feat: support projection
* refactor: remove sequence
* refactor: concatenate
* fix: series must not be empty
* refactor: projection
* feat: timestamp types sqlness tests
* feat: adds timestamp tests
* test: add string tests
* test: comment a case in timestamp
* test: add float type tests
* chore: adds TODO
* feat: set TZ=UTC for sqlness test
* feat: Implement slice and first/last timestamp for Batch
* feat(mito): implements sort/concat for Batch
* chore: fix typo
* chore: remove comments
* feat: sort and dedup
* test: test batch operations
* chore: cast enum to test op type
* test: test filter related api
* style: fix clippy
* docs: comment for slice
* chore: address CR comment
Don't return Option in get_timestamp()/get_sequence()
* feat: time series memtable
* feat: add some test
* fix: some clippy warnings
* chore: some rustdoc
* refactor: test
* fix: remove useless functions
* feat: add config for TimeSeriesMemtable
* chore: some optimize
* refactor: remove bucketing
* refactor: avoid cloning RegionMetadataRef across all Series; make initial_builder_capacity a const; sort batch only by timestamp and sequence
* feat: rewrite do_get for streaming get flight data
* feat: rewrite do_get call stack but leave the async stream adapter not modified yet
* feat: rewrite the async stream adapter to accept greptime record batch stream
* fix: resolve some PR comments
* feat: rewrite tests to adapt to the streaming do_get
* feat: add unit tests for streaming do_get
* feat: rewrite timer metric of merge scan
* remove unhelpful unit tests for streaming do_get
* add a new metric timer for merge scan and fix some test errors
* rewrite mysql writer to write query results in a streaming manner
* fix: fix fmt errors
* fix: rewrite sqlness runner to take into account the streaming do_get
* fix: fix toml format errors
* fix: resolve some PR comments
* fix: resolve some PR comments
* fix: refactor do_get to increase readability
* fix: refactor mysql try_write_one to increase readability
* fix: fix ddl client can not update leader addr
* chore: apply suggestions from CR
* feat: add message to context
* fix: only retry if unavailable or deadline exceeded
* chore: apply suggestions from CR
* feat: datanode's row inserter
* refactor: ExprFactory
* feat: row inserter in standalone mode
* chore: minor refactor
* feat: influxdb line protocol's row protocol
* chore: minor refactor
* improve: avoid using too many strings
* no longer async
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
* chore: do not check empty data
* chore: by review comment
* chore: by comment
* chore: by review comment
* chore: by review comment
---------
Co-authored-by: Ruihang Xia <waynestxia@gmail.com>
* move prometheus routes to default http server
Signed-off-by: sh2 <shawnhxh@outlook.com>
* fix ci test and remove the server logic of prometheus
* remove unused import and prometheus relevant code
* fix ci: rustfmt and test
* fix ci: silly fmt
* fix ci: silly silly fmt
* change `/prom_store` back to `/prometheus`
* remove unused variable
---------
Signed-off-by: sh2 <shawnhxh@outlook.com>
* refactor: KeyValues return ValueRef
* 1. Change KeyValues returned value from pb value to ValueRef
2. Replace OpType/SemanticType with pb's OpType and SemanticType to avoid duplicated conversions.
* feat: define min value of OpType as a const
* fix: toml format
* feat: remove greptimedb-telemetry feature
* feat: adds enable_telemetry option to metasrv and datanode
* refactor: move data_home from file config to storage config
* feat: store the installation uuid into datanode and metasrv working home
* fix: cargo toml fmt
* test: ignore region failover test when using local file storage
* test: ignore telemetry reporter in test mode
* feat: print warning log when enabling telemetry
* chore: the telemetry doc link
* chore: remove enable_telemetry from datanode example config file
* refactor: rename GREPTIMEDB_TELEMETRY_CLIENT_REQUEST_TIMEOUT
* chore: rename print_warn_log to print_anonymous_usage_data_disclaimer
* ci: add context argument in build-greptime-binary action
* refactor: add 'working-dir' in upload-artifacts action and rename 'context' to 'working-dir'
* refactor: use timestamp as part of image tag when triggered manually
* fix(timestamp): add trim for the input date string
* fix(timestamp): add analyzer rule to trim strings before conversion
* fix: adjust according to CR
* chore: add version reporter
* chore: add uuid for version report
* chore: add file license
* chore: format code
* chore: fix by pr comment
* chore: change version report api url
* chore: change greptimedb opentelemetry crate name
* chore: minor code beautification
* chore: add keys only option when range etcd
* chore: fix by pr comment
* chore: fix by pr comment
* chore: change uuid file location
* chore: only run telemetry in meta leader
* chore: add more test and some minor fix
* chore: make clippy happy
* chore: fix by pr comment
* chore: fix by pr comment
* chore: add debug log for greptimedb telemetry
* feat: add exists api into KvBackend
* refactor: region lease
* feat: filter out inactive node in keep-lease
* feat: register&deregister inactive node
* chore: doc
* chore: ut
* chore: minor refactor
* feat: use memory_kv to store inactive node
* fix: use real error in
* chore: make inactive_node_manager's func compact
* chore: more efficiently
* feat: clear inactive status on candidate node
* refactor: unify the make targets of building images
* refactor: make Dockerfile more clean
1. Add dev-builder image to build greptime binary easily;
2. Add 'docker/ci/Dockerfile-centos' to release centos image;
3. Delete Dockerfile of aarch64 and just need to use one Dockerfile;
Signed-off-by: zyy17 <zyylsxm@gmail.com>
---------
Signed-off-by: zyy17 <zyylsxm@gmail.com>
* feat: define structs for version
* feat: Build region from metadata and memtable builder
* feat: impl validate for metadata
* feat: add more fields to RegionMetadata
* test: more tests
* test: more check and test
* feat: allow overwriting version
* style: fix clippy
* chore: remove useless Option type in plugins (#1544)
Co-authored-by: paomian <qtang@greptime.com>
* feat: first commit for time type
* feat: impl time type
* fix: arrow vectors type conversion
* test: add time test
* test: adds more tests for time type
* chore: style
* fix: sqlness result
* Update src/common/time/src/time.rs
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
* chore: CR comments
---------
Co-authored-by: localhost <xpaomian@gmail.com>
Co-authored-by: paomian <qtang@greptime.com>
Co-authored-by: Lei, HUANG <6406592+v0y4g3r@users.noreply.github.com>
refactor: move heartbeat configuration into an independent section in config file
* refactor: move heartbeat configuration into an independent section in config file
* feat: add HeartbeatOptions struct
* test: modify corresponding test case
* chore: modify corresponding example file
* feat: support append entries from multiple regions at a time
* chore: add some tests
* fix: false positive mutable_key warning
* fix: append_batch api
* fix: remove unused clippy allows
* chore(prom): rename prometheus(remote storage) to prom-store and promql(HTTP server) to prometheus
* chore: apply clippy suggestions
* chore: adjust format according to rustfmt
* feat: meta procedure options
* chore: tune meta procedure options in tests
* Update src/common/procedure/Cargo.toml
Co-authored-by: dennis zhuang <killme2008@gmail.com>
---------
Co-authored-by: dennis zhuang <killme2008@gmail.com>
* feat(config-endpoint): add initial implementation
* feat: add initial handler implementation
* fix: apply clippy suggestions, use axum response instead of string
* feat: address CR suggestions
* fix: minor adjustments in formatting
* fix: add a test
* feat: add to_toml_string method to options
* fix: adjust the assertion for the integration test
* fix: adjust expected indents
* fix: adjust assertion for the integration test
* fix: improve according to clippy
* feat(http_body_limit): add initial support for DefaultBodyLimit
* fix: address CR suggestions
* fix: adjust the const for default http body limit
* fix: adjust the toml_str for the test
* fix: address CR suggestions
* fix: body_limit units in example config toml files
* fix: address clippy suggestions
* feat: initial twcs impl
* chore: rename SimplePicker to LeveledPicker
* rename some structs
* Remove Compaction strategy
* make compaction picker a trait object
* make compaction picker configurable for every region
* chore: add some test for ttl
* add some tests
* fix: some style issues in cr
* feat: enable twcs when creating tables
* feat: allow config time window when creating tables
* fix: some cr comments
* feat: add log
* feat: print more info
* feat: use chain reader
* fix: panic on getting first range
* fix: prev not updated
* fix: reverse readers and iter backward
* chore: don't print windows in log
* feat: consider memtable range
Also fix the issue of using an incorrect comparison method to sort time ranges.
* fix: merge memtable window with sst's
* feat: add use_chain_reader option
* feat: skip empty memtables
* chore: change log level
* fix: memtable range not ordered
* style: fix clippy
* chore: address review comments
* chore: print region id in log
* feat: txn for meta kvstore
* feat: txn
* chore: add unit test
* chore: more test
* chore: more test
* Update src/meta-srv/src/service/store/memory.rs
Co-authored-by: LFC <bayinamine@gmail.com>
* chore: by cr
---------
Co-authored-by: LFC <bayinamine@gmail.com>
* refactor: add table_id to get_table()/table_exists()
* refactor: Add table_id to alter table request
* refactor: Add table id to DropTableRequest
* refactor: add table id to DropTableRequest
* refactor: Use table id as key for the tables map
* refactor: use table id as file engine's map key
* refactor: Remove table reference from engine's get_table/table_exists
* style: remove unused imports
* feat!: Add table id to TableRegionalValue
* style: fix clippy
* chore: add comments and logs
* feat: support to copy from orc format
* test: add copy from orc test
* chore: add license header
* refactor: remove unimplemented macro
* chore: apply suggestions from CR
* chore: bump orc-rust to 0.2.3
* feat: add initial implementation for status endpoint
* feat(status_endpoint): add more data to response
* feat(status_endpoint): use build data env vars
* feat(status_endpoint): add simple test
* fix(status_endpoint): adjust the toml indentation
* fix: set max_files_in_l0 in unit tests to avoid compaction
* refactor: pass the whole EngineConfig
* fix: comment out unstable sqlness test
* revert commented sqlness
* add some debug log
* fix: use lazy parquet reader in MitoTable::scan_to_stream to avoid IO in plan stage
* fix: unit tests
* fix: order-by optimization
* add some tests
* fix: move metric names to metrics.rs
* fix: some cr comments
* refactor: Remove MySQL related options from Datanode
remove mysql_addr and mysql_runtime_size in datanode.rs, remove command line argument mysql_addr in cmd/src/datanode.rs
#1739
* feat: remove --mysql-addr from command line
in pre-commit, sqlness cannot find --mysql-addr, because we removed it
issue #1739
* refactor: remove --mysql-addr from command line
in pre-commit, sqlness cannot find --mysql-addr, because we removed it
issue #1739
# - If it's a tag push release, the version is the tag name(${{ github.ref_name }});
# - If it's a scheduled release, the version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-$buildTime', like 'v0.2.0-nightly-20230313';
# - If it's a manual release, the version is '${{ env.NEXT_RELEASE_VERSION }}-$(git rev-parse --short HEAD)-YYYYMMDDSS', like 'v0.2.0-e5b243c-2023071245';
# - If it's a nightly build, the version is 'nightly-YYYYMMDD-$(git rev-parse --short HEAD)', like 'nightly-20230712-e5b243c'.
release-dev-builder-images-cn: # Note: Be careful of issue https://github.com/containers/skopeo/issues/1874; we decided to use the latest stable skopeo container.
# 1. The tag('v*.*.*') push release: the release workflow will be triggered by the tag push event.
# 2. The scheduled release(the version will be '${{ env.NEXT_RELEASE_VERSION }}-nightly-YYYYMMDD'): the release workflow will be triggered by the schedule event.
name: Release
on:
  push:
    tags:
  schedule:
    # At 00:00 on Monday.
    - cron: '0 0 * * 1'
  # Manually trigger only builds binaries.
  workflow_dispatch: # Allows you to run this workflow manually.
    # Note: GitHub Actions ONLY supports 10 inputs, and they're already used up.
    inputs:
      linux_amd64_runner:
        type: choice
        description: The runner used to build linux-amd64 artifacts
        default: ec2-c6i.4xlarge-amd64
        options:
          - ubuntu-20.04
          - ubuntu-20.04-8-cores
          - ubuntu-20.04-16-cores
          - ubuntu-20.04-32-cores
          - ubuntu-20.04-64-cores
          - ec2-c6i.xlarge-amd64 # 4C8G
          - ec2-c6i.2xlarge-amd64 # 8C16G
          - ec2-c6i.4xlarge-amd64 # 16C32G
          - ec2-c6i.8xlarge-amd64 # 32C64G
          - ec2-c6i.16xlarge-amd64 # 64C128G
      linux_arm64_runner:
        type: choice
        description: The runner used to build linux-arm64 artifacts
        default: ec2-c6g.4xlarge-arm64
        options:
          - ec2-c6g.xlarge-arm64 # 4C8G
          - ec2-c6g.2xlarge-arm64 # 8C16G
          - ec2-c6g.4xlarge-arm64 # 16C32G
          - ec2-c6g.8xlarge-arm64 # 32C64G
          - ec2-c6g.16xlarge-arm64 # 64C128G
      macos_runner:
        type: choice
        description: The runner used to build macOS artifacts
        default: macos-latest
        options:
          - macos-latest
      skip_test:
        description: Do not run integration tests during the build
        type: boolean
        default: true
      build_linux_amd64_artifacts:
        type: boolean
        description: Build linux-amd64 artifacts
        required: false
        default: false
      build_linux_arm64_artifacts:
        type: boolean
        description: Build linux-arm64 artifacts
        required: false
        default: false
      build_macos_artifacts:
        type: boolean
        description: Build macOS artifacts
        required: false
        default: false
      build_windows_artifacts:
        type: boolean
        description: Build Windows artifacts
        required: false
        default: false
      publish_github_release:
        type: boolean
        description: Create GitHub release and upload artifacts
        required: false
        default: false
      release_images:
        type: boolean
        description: Build and push images to DockerHub and ACR
        required: false
        default: false
# Use env variables to control all the release process.
env:
  RUST_TOOLCHAIN: nightly-2023-05-03
  SCHEDULED_BUILD_VERSION_PREFIX: v0.3.0
  SCHEDULED_PERIOD: nightly
  # The arguments of building greptime.
  RUST_TOOLCHAIN: nightly-2023-10-21
  CARGO_PROFILE: nightly
# Controls whether to run tests, including unit tests, integration tests and sqlness.
- name: Configure scheduled build version # the version would be ${SCHEDULED_BUILD_VERSION_PREFIX}-${SCHEDULED_PERIOD}-YYYYMMDD, like v0.2.0-nightly-20230313.
Thanks a lot for considering contributing to GreptimeDB. We believe people like you would make GreptimeDB a great product. We intend to build a community where individuals can have open talks, show respect for one another, and speak with true ❤️. Meanwhile, we aim to keep the process transparent and make your effort count here.
Please read the guidelines, and they can help you get started. Communicate with respect to developers maintaining and developing the project. In return, they should reciprocate that respect by addressing your issue, reviewing changes, as well as helping finalize and merge your pull requests.
Follow our [README](https://github.com/GreptimeTeam/greptimedb#readme) to get the whole picture of the project. To learn about the design of GreptimeDB, please refer to the [design docs](https://github.com/GrepTimeTeam/docs).
Pull requests are great, but we accept all kinds of other help if you like. Such as:
- Write tutorials or blog posts. Blog, speak about, or create tutorials about one of GreptimeDB's many features. Mention [@greptime](https://twitter.com/greptime) on Twitter and email info@greptime.com so we can give pointers and tips and help you spread the word by promoting your content on Greptime communication channels.
- Improve the documentation. [Submit documentation](http://github.com/greptimeTeam/docs/) updates, enhancements, designs, or bug fixes, and fixing any spelling or grammar errors will be very much appreciated.
- Present at meetups and conferences about your GreptimeDB projects. Your unique challenges and successes in building things with GreptimeDB can provide great speaking material. We'd love to review your talk abstract, so get in touch with us if you'd like some help!
- Submitting bug reports. To report a bug or a security issue, you can [open a new GitHub issue](https://github.com/GrepTimeTeam/greptimedb/issues/new).
- Speak up about feature requests. Sending feedback is a great way for us to better understand your different use cases of GreptimeDB. If you want to share your experience with GreptimeDB, or if you want to discuss any ideas, you can start a discussion on [GitHub discussions](https://github.com/GreptimeTeam/greptimedb/discussions), chat with the Greptime team on [Slack](https://greptime.com/slack), or tweet [@greptime](https://twitter.com/greptime) on Twitter.
## Code of Conduct
GreptimeDB uses the Apache 2.0 license.
### Before PR
- To ensure that the community is free and confident in its ability to use your contributions, please sign the Contributor License Agreement (CLA), which will be incorporated in the pull request process.
- Make sure all files have a proper license header (run `docker run --rm -v $(pwd):/github/workspace ghcr.io/korandoru/hawkeye-native:v3 format` from the project root).
- Make sure all your code is formatted and follows the [coding style](https://pingcap.github.io/style-guide/rust/).
- Make sure all unit tests pass (using `cargo test --workspace` or [nextest](https://nexte.st/index.html) `cargo nextest run`).
- Make sure all clippy warnings are fixed (you can check locally by running `cargo clippy --workspace --all-targets -- -D warnings`).
Now, `pre-commit` will run automatically on `git commit`.
### Title
The titles of pull requests should be prefixed with category names listed in [Conventional Commits specification](https://www.conventionalcommits.org/en/v1.0.0)
like `feat`/`fix`/`docs`, with a concise summary of code change following. AVOID using the last commit message as pull request title.
### Description
## Community
The core team will be thrilled if you would like to participate in any way you like. When you are stuck, try to ask for help by filing an issue, with a detailed description of what you were trying to do and what went wrong. If you have any questions or if you would like to get involved in our community, please check out:
- [GreptimeDB Community Slack](https://greptime.com/slack)
docker run -p 4002:4002 -v "$(pwd):/tmp/greptimedb" greptime/greptimedb standalone start
```
Please see the online document site for more installation options and [operations info](https://docs.greptime.com/user-guide/operations/overview).
### Get started
Read the [complete getting started guide](https://docs.greptime.com/getting-started/overview) on our [official document site](https://docs.greptime.com/).
To write and query data, GreptimeDB is compatible with multiple [protocols and clients](https://docs.greptime.com/user-guide/clients/overview).
For Linux and macOS, you can easily download pre-built binaries including official releases and nightly builds that are ready to use.
In most cases, downloading the version without PyO3 is sufficient. However, if you plan to run scripts in CPython (and use Python packages like NumPy and Pandas), you will need to download the version with PyO3 and install a Python interpreter whose version matches the one the PyO3 build was compiled against.
We recommend using virtualenv for the installation process to manage multiple Python versions.
Please refer to [contribution guidelines](CONTRIBUTING.md) for more information.
## Acknowledgement
- GreptimeDB uses [Apache Arrow](https://arrow.apache.org/) as the memory model and [Apache Parquet](https://parquet.apache.org/) as the persistent file format.
- GreptimeDB's query engine is powered by [Apache Arrow DataFusion](https://github.com/apache/arrow-datafusion).
- [Apache OpenDAL (incubating)](https://opendal.apache.org) gives GreptimeDB a very general and elegant data access abstraction layer.
- GreptimeDB's meta service is based on [etcd](https://etcd.io/).
- GreptimeDB uses [RustPython](https://github.com/RustPython/RustPython) for experimental embedded python scripting.
# Tracing exporter endpoint with format `ip:port`; we use gRPC OTLP as the exporter, and the default endpoint is `localhost:4317`.
# otlp_endpoint = "localhost:4317"
# The percentage of traces that will be sampled and exported. Valid range `[0, 1]`: 1 means all traces are sampled, 0 means none are, and the default value is 1. Ratios > 1 are treated as 1; fractions < 0 are treated as 0.
FROM --platform=linux/amd64 saschpe/android-ndk:34-jdk17.0.8_7-ndk25.2.9519653-cmake3.22.1
ENV LANG en_US.utf8
WORKDIR /greptimedb
# Rename libunwind to libgcc
RUN cp ${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.7/lib/linux/aarch64/libunwind.a ${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/lib64/clang/14.0.7/lib/linux/aarch64/libgcc.a
# Install dependencies.
RUN apt-get update && apt-get install -y \
libssl-dev \
protobuf-compiler \
curl \
git \
build-essential \
pkg-config \
python3 \
python3-dev \
python3-pip \
&& pip3 install --upgrade pip \
&& pip3 install pyarrow
# Trust workdir
RUN git config --global --add safe.directory /greptimedb
# Install Rust.
SHELL ["/bin/bash", "-c"]
RUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- --no-modify-path --default-toolchain none -y
Refactor table engines to address several historical tech debts.
# Motivation
Both `Frontend` and `Datanode` have to deal with multiple regions in a table. This results in code duplication and an additional burden on the `Datanode`.
Before:
```mermaid
graph TB
subgraph Frontend["Frontend"]
subgraph MyTable
A("region 0, 2 -> Datanode0")
B("region 1, 3 -> Datanode1")
end
end
MyTable --> MetaSrv
MetaSrv --> ETCD
MyTable-->TableEngine0
MyTable-->TableEngine1
subgraph Datanode0
Procedure0("procedure")
TableEngine0("table engine")
region0
region2
mytable0("my_table")
Procedure0-->mytable0
TableEngine0-->mytable0
mytable0-->region0
mytable0-->region2
end
subgraph Datanode1
Procedure1("procedure")
TableEngine1("table engine")
region1
region3
mytable1("my_table")
Procedure1-->mytable1
TableEngine1-->mytable1
mytable1-->region1
mytable1-->region3
end
subgraph manifest["table manifest"]
M0("my_table")
M1("regions: [0, 1, 2, 3]")
end
mytable1-->manifest
mytable0-->manifest
RegionManifest0("region manifest 0")
RegionManifest1("region manifest 1")
RegionManifest2("region manifest 2")
RegionManifest3("region manifest 3")
region0-->RegionManifest0
region1-->RegionManifest1
region2-->RegionManifest2
region3-->RegionManifest3
```
`Datanodes` can update the same manifest file for a table, as regions are assigned to different nodes in the cluster. We also have to run procedures on `Datanode` to ensure the table manifest is consistent with the region manifests. A "table" in a `Datanode` is a subset of the table's regions, so the `Datanode` is much closer to `RegionServer` in `HBase`, which only deals with regions.
In cluster mode, we store table metadata in both etcd and the table manifest, so the table manifest becomes redundant. We can remove the table manifest if we refactor the table engines into region engines that only care about regions. What's more, we then don't need to run those procedures on `Datanode`.
After:
```mermaid
graph TB
subgraph Frontend["Frontend"]
direction LR
subgraph MyTable
A("region 0, 2 -> Datanode0")
B("region 1, 3 -> Datanode1")
end
end
MyTable --> MetaSrv
MetaSrv --> ETCD
MyTable-->RegionEngine
MyTable-->RegionEngine1
subgraph Datanode0
RegionEngine("region engine")
region0
region2
RegionEngine-->region0
RegionEngine-->region2
end
subgraph Datanode1
RegionEngine1("region engine")
region1
region3
RegionEngine1-->region1
RegionEngine1-->region3
end
RegionManifest0("region manifest 0")
RegionManifest1("region manifest 1")
RegionManifest2("region manifest 2")
RegionManifest3("region manifest 3")
region0-->RegionManifest0
region1-->RegionManifest1
region2-->RegionManifest2
region3-->RegionManifest3
```
This RFC proposes to refactor table engines into region engines as a first step to make the `Datanode` act like a `RegionServer`.
# Details
## Overview
We plan to gradually refactor the `TableEngine` trait into a `RegionEngine`. This RFC focuses on the `mito` engine, as it is the default table engine and the most complicated one.
Currently, `MitoEngine` is built upon a `StorageEngine` that manages regions of the `mito` engine. Since `MitoEngine` becomes a region engine, we can combine `StorageEngine` with `MitoEngine` to simplify our code structure.
The chart below shows the overall architecture of the `MitoEngine`.
A new metric engine that can significantly enhance our ability to handle the tremendous number of small tables in scenarios like Prometheus metrics, by leveraging a synthetic wide table that offers storage and metadata multiplexing capabilities over the existing engine.
# Motivation
The concept of "Table" in GreptimeDB is a bit "heavy" compared to other time-series storage like Prometheus or VictoriaMetrics. This brings many disadvantages in performance, footprint, storage, and cost.
# Details
## Top level description
- User Interface
This feature will add a new type of storage engine. It might be available as an option like `with ENGINE=mito`, or via an internal interface like auto-creating tables on Prometheus remote write. From the user side, there is no difference from tables in the mito engine. All DDL like `CREATE`, `ALTER` and DML like `SELECT` should be supported.
- Implementation Overview
This new engine doesn't re-implement low-level components like file R/W etc. It's a wrapper layer over the existing mito engine, with extra storage and metadata multiplexing capabilities. I.e., it exposes multiple tables based on one mito engine table like this:
The following parts will describe these implementation details:
- How to route these metric region tables and how those tables are distributed
- How to maintain the schema and other metadata of the underlying mito engine table
- How to maintain the schema of metric engine table
- How the query goes
## Routing
Before this change, the region route rule was based on a group of partition keys. Relation of physical table to region is one-to-many.
``` rust
pub struct PartitionDef {
partition_columns: Vec<String>,
partition_bounds: Vec<PartitionBound>,
}
```
And for metric engine tables, the key difference is that we split the concepts of "physical table" and "logical table". Like the previous ASCII chart, multiple logical tables are based on one physical table. The relationship of logical table to region thus becomes many-to-many, so we must include the (logical) table name in the partition rules.
Considering the partition/route interface is a generic map from string array to region id, all we need to do is insert the logical table name into the request:
``` rust
fn route(request: Vec<String>) -> RegionId;
```
The next question is where to do this conversion. The basic idea is to dispatch different routing behaviors based on the engine type. Since we have all the necessary information in the frontend, it's a good place to do that, and it leaves the meta server untouched. The essential change is to associate the engine type with the route rule.
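A minimal sketch of that conversion, assuming the generic string-array routing interface above (`routing_request` is a hypothetical helper, not the real API):

``` rust
// Frontend-side sketch: prepend the logical table name to the partition
// key values before handing them to the generic string-array router.
fn routing_request(logical_table: &str, partition_values: &[String]) -> Vec<String> {
    let mut request = Vec::with_capacity(partition_values.len() + 1);
    // The logical table name becomes part of the partition rule input.
    request.push(logical_table.to_string());
    request.extend_from_slice(partition_values);
    request
}

fn main() {
    let req = routing_request("http_requests_total", &["host-1".to_string()]);
    assert_eq!(req, vec!["http_requests_total".to_string(), "host-1".to_string()]);
}
```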
## Physical Region Schema
The idea "physical wide table" is to perform column-level multiplexing. I.e., map all logical columns to physical columns by their names.
This approach is very straightforward but has one problem: it breaks when two columns share the same name but have different semantic types (time index, tag or field) or data types. E.g., `CREATE TABLE t1 (c1 timestamp(3) TIME INDEX)` and `CREATE TABLE t2 (c1 STRING PRIMARY KEY)`.
One possible workaround is to prefix each column name with its data type and semantic type, like `_STRING_PK_c1`. However, considering the primary goal at present is to support data from monitoring metrics like Prometheus remote write, it's acceptable not to support this at first, because data types there are often simple and limited.
The next point is changing the physical table's schema. This is only needed when creating a new logical table or altering an existing one. Typically, table creation and altering are explicit; we only need to emit an add-column request to the underlying physical table when processing the logical table's DDL. GreptimeDB can create or alter tables automatically on some protocols, but the internal logic is the same.
Also for simplicity, we don't support shrinking the underlying table at first. This can be achieved later by introducing a mechanism on the physical columns.
The frontend doesn't need to keep the physical table's schema.
## Metadata of physical regions
Metric engine regions need to store extra metadata, like the schema of each logical table or the names of all logical tables. That information is relatively simple and can be stored as key-value pairs. For now, we have to use another physical mito region for metadata. This involves an issue with region scheduling: since we don't have the ability to perform affinity scheduling, the initial version will simply assume the data region and the metadata region are on the same instance. See the alternative "other storage for physical region's metadata" for a possible future improvement.
Here is the schema of the metadata region and how we would use it. The `CREATE TABLE` clause of the metadata region looks like the following. Notice that it wouldn't actually be created via SQL.
``` sql
CREATE TABLE metadata(
ts timestamp time index,
key string primary key,
value string
);
```
The `ts` field is just a placeholder for the constraint that a mito region must contain a time index field; it will always be `0`. The other two fields, `key` and `value`, will be used as a k-v storage. It contains two groups of keys:
- `__table_<TABLE_NAME>` is used for marking table existence. It doesn't have a value.
- `__column_<TABLE_NAME>_<COLUMN_NAME>` is used for marking column existence; the value is the column's semantic type.
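A minimal sketch of building these keys, assuming plain string keys (`table_key` and `column_key` are hypothetical helpers):

``` rust
// Hypothetical helpers that follow the key scheme above.
fn table_key(table_name: &str) -> String {
    // Marks the existence of a logical table; no value is stored.
    format!("__table_{table_name}")
}

fn column_key(table_name: &str, column_name: &str) -> String {
    // The stored value is the column's semantic type.
    format!("__column_{table_name}_{column_name}")
}

fn main() {
    assert_eq!(table_key("cpu"), "__table_cpu");
    assert_eq!(column_key("cpu", "host"), "__column_cpu_host");
}
```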
## Physical region implementation
This RFC proposes to add a new region implementation named "MetricRegion". As shown in the first chart, it's wrapped over the existing mito region. This section will describe the implementation details. Firstly, here is a chart showing how the region hierarchy looks:
```plaintext
┌───────────────────────┐
│ Metric Region │
│ │
│ ┌────────┬──────────┤
│ │ Mito │ Mito │
│ │ Region │ Region │
│ │ for │ for │
│ │ Data │ Metadata │
└───┴────────┴──────────┘
```
All upper levels only see the Metric Region. E.g., the Meta Server schedules on this region, and the Frontend routes requests to this Metric Region's id. To be scheduled (opened or closed, etc.), Metric Region needs to implement its own procedures. Most of those procedures can simply be assembled from the underlying Mito Regions', but those related to data, like alter or drop, will have their own new logic.
Another point is the region id. Since the region id is used widely, from the meta server to persisted state, it's better to keep it unchanged. This means we can't use the same id for two regions; we need one for each. To achieve this, this RFC proposes a concept named "region id group": a group of region ids that are bound for different purposes, like the two underlying regions here.
This preserves the first 8 bits of the `u32` region number for grouping. Each group has one main id (the first one) and other sub ids (the rest, non-zero, ids). All components other than the region implementation itself are not aware of the existence of the region id group; they only see the main id. The region implementation is responsible for managing and using the region id group.
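A sketch of the id arithmetic, assuming the high 8 bits of the `u32` region number hold the group member index (the exact bit layout here is an assumption):

``` rust
// Assumption: the top 8 bits of the u32 region number index the group member.
const GROUP_SHIFT: u32 = 24;

/// The main id is member 0 of the group: clear the group bits.
fn main_region_number(region_number: u32) -> u32 {
    region_number & !(0xFF << GROUP_SHIFT)
}

/// Derive a sub id for another purpose, e.g. member 1 for the metadata region.
fn sub_region_number(main: u32, member: u8) -> u32 {
    (u32::from(member) << GROUP_SHIFT) | main
}

fn main() {
    let data_region = 42;
    let metadata_region = sub_region_number(data_region, 1);
    // Components outside the region implementation only ever see the main id.
    assert_eq!(main_region_number(metadata_region), data_region);
}
```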
From previous sections, we can conclude the following points about routing:
- Each "logical table" has its own, universe unique table id.
- Logical table doesn't have physical region, they share the same physical region with other logical tables.
- Route rule of logical table's is a strict subset of physical table's.
To associate the logical table with physical region, we need to specify necessary information in the create table request. Specifically, the table type and its parent table. This require to change our gRPC proto's definition. And once meta recognize the table to create is a logical table, it will use the parent table's region to create route entry.
And to reduce the cost of region failover (which needs to update the physical table route info), we'd better split the current route table structure into two parts:
```rust
region_route: Map<TableName, [RegionId]>,
node_route: Map<RegionId, NodeId>,
```
By doing this, on each failover the meta server only needs to update the second `node_route` map and can leave the first one untouched.
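A self-contained sketch of why the split helps, with simplified map types; a failover only rewrites entries in `node_route`:

``` rust
use std::collections::HashMap;

type TableName = String;
type RegionId = u64;
type NodeId = u64;

/// Resolve the datanodes serving a table. `region_route` stays untouched
/// across failovers; only `node_route` entries get rewritten.
fn nodes_for_table(
    table: &str,
    region_route: &HashMap<TableName, Vec<RegionId>>,
    node_route: &HashMap<RegionId, NodeId>,
) -> Vec<NodeId> {
    region_route
        .get(table)
        .map(|regions| {
            regions
                .iter()
                .filter_map(|region| node_route.get(region).copied())
                .collect()
        })
        .unwrap_or_default()
}

fn main() {
    let region_route = HashMap::from([("my_table".to_string(), vec![0, 1])]);
    let node_route = HashMap::from([(0, 100), (1, 101)]);
    assert_eq!(nodes_for_table("my_table", &region_route, &node_route), vec![100, 101]);
}
```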
## Query
Like other existing components, a user query always starts in the frontend. In the planning phase, frontend needs to fetch related schemas of the queried table. This part is the same. I.e., changes in this RFC don't affect components above the `Table` abstraction.
# Alternatives
## Other routing method
We can also apply this "special" route rule in the meta server, but there is no difference from the proposed method.
## Other storage for physical region's metadata
Once we have implemented the "region family" that allows multiple physical schemas to exist in one region, we can store the metadata and the table data in one region.
Before that, we can also let the `MetricRegion` hold a `KvBackend` to access the storage layer directly. But this breaks the abstraction in some way.
# Drawbacks
Since the physical storage is mixed together, it's hard to do fine-grained operations at the table level, like configuring TTL, memtable size, or compaction strategy per table, or defining different partition rules for different tables. For scenarios like these, it's better to move the table out of the metric engine and "upgrade" it to a normal mito engine table. This requires a low-cost migration process, and we have to ensure data consistency during the migration, which may require an out-of-service period.
Refactor `Table` trait to adapt the new region server architecture and make code more straightforward.
# Motivation
The `Table` trait was designed in a setting where both frontend and datanode keep the same concepts, and all operations are served by a `Table`. However, in practice, we found that not all operations are suitable to be served by a `Table`. For example, the `Table` doesn't hold actual physical data itself, so operations like write or alter are simply a proxy over the underlying regions. And in the recent refactor of the datanode ([rfc table-engine-refactor](./2023-07-06-table-engine-refactor.md)), we are changing the datanode into a region server that is only aware of `Region` things. This also calls for a refactor of the `Table` trait.
# Details
## Definitions
The current `Table` trait contains the following methods:
```rust
pub trait Table {
    /// Get a reference to the schema for this table
    fn schema(&self) -> SchemaRef;

    /// Get a reference to the table info.
    fn table_info(&self) -> TableInfoRef;

    /// Get the type of this table for metadata/catalog purposes.
    fn table_type(&self) -> TableType;
}
```
Considering that most metadata access happens in the frontend (like route or query), all persisted data are stored in regions, and only the query engine needs to read data, we can divide the `Table` trait into three concepts:
- struct `Table` provides metadata:
```rust
impl Table {
/// Get a reference to the schema for this table
fn schema(&self) -> SchemaRef;
/// Get a reference to the table info.
fn table_info(&self) -> TableInfoRef;
/// Get the type of this table for metadata/catalog purposes.
fn table_type(&self) -> TableType;
/// Get statistics for this table, if available
fn statistics(&self) -> Option<TableStatistics>;
fn to_data_source(&self) -> DataSourceRef;
}
```
- Requests to region server
- `InsertRequest`
- `AlterRequest`
- `DeleteRequest`
- `FlushRequest`
- `CompactRequest`
- `CloseRequest`
- trait `DataSource` provides data (`RecordBatch`)
`Table` will only be used in frontend. It's constructed from the `OpenTableRequest` or `CreateTableRequest`.
`Table` also provides a method `to_data_source` to generate a `DataSource` from itself. But this method is only for non-`TableType::Base` tables (i.e., `TableType::View` and `TableType::Temporary`) because `TableType::Base` table doesn't hold actual data itself. Its `DataSource` should be constructed from the `Region` directly (in other words, it's a remote query).
And it requires some extra information to construct a `DataSource`, named `TableSourceProvider`:
```rust
type TableFactory = Arc<dyn Fn() -> DataSourceRef>;
pub enum TableSourceProvider {
Base,
View(LogicalPlan),
Temporary(TableFactory),
}
```
## Use `DataSource`
`DataSource` will be adapted to DataFusion's `TableProvider`, which can be `scan()`ed in a `TableScan` plan.
In the frontend, this is done in the planning phase. The datanode will have one implementation for `Region` to generate a record batch stream.
## Interact with RegionServer
Previously, persisted state-change operations went through the old `Table` trait, as said before. Now they will come from the action source, such as the procedure or protocol handler, directly to the region server. E.g., on alter table, the corresponding procedure will generate its `AlterRequest` and send it to the regions; a write request will be split in the frontend handler and sent to the regions. `Table` only provides necessary metadata, like route information if needed, but is no longer a required part of the path.
## Implement temporary table
A temporary table is a special table that doesn't resolve to any persistent physical region. Examples are:
- the `Numbers` table for testing, which produces a record batch that contains 0-100 integers.
- tables in the information schema, an interface for querying the catalog's metadata. The contents are generated on the fly with information from the `CatalogManager`, which can be held in the `TableFactory`.
- function tables that produce data generated by a formula or a function, like something that always returns `sin(current_timestamp())`.
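A minimal, self-contained sketch of the factory idea, using simplified stand-ins for the real `DataSource` trait and aliases (only the `TableFactory` shape comes from the definition above):

``` rust
use std::sync::Arc;

// Simplified stand-ins for illustration; the real `DataSource` trait and the
// `DataSourceRef`/`TableFactory` aliases live in the table crate.
trait DataSource {
    fn name(&self) -> &str;
}
type DataSourceRef = Arc<dyn DataSource>;
type TableFactory = Arc<dyn Fn() -> DataSourceRef>;

/// Stand-in for the `Numbers` testing table's data source.
struct NumbersDataSource;

impl DataSource for NumbersDataSource {
    fn name(&self) -> &str {
        "numbers"
    }
}

/// A temporary table builds its data source on the fly via its factory.
fn numbers_factory() -> TableFactory {
    Arc::new(|| Arc::new(NumbersDataSource) as DataSourceRef)
}

fn main() {
    let factory = numbers_factory();
    // Call through `&dyn Fn` (an `Arc` itself is not directly callable).
    let source: DataSourceRef = (&*factory)();
    assert_eq!(source.name(), "numbers");
}
```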
## Relationship among those components
Here is a diagram to show the relationship among those components, and how they interact with each other.
Currently, multiple transactions are involved during the procedure. This implementation is inefficient, and it's hard to keep data consistent. Therefore, we can update multiple metadata keys in a single transaction.
These table metadata keys are only updated in the following operations.
## Region Failover
It needs to update the `TableRoute` key and the `DatanodeTable` keys. If the `TableRoute` equals the snapshot of `TableRoute` taken when submitting the failover task, we can safely update these keys.
Between submitting a failover task and acquiring the locks for execution, the `TableRoute` may be updated by another task. After acquiring the lock, we get the latest `TableRoute` again and then execute if still needed.
## Create Table DDL
Creates all of the above keys. `TableRoute` and `TableInfo` should be empty beforehand.
The **TableNameKey**'s lock will be held by the procedure framework.
## Drop Table DDL
`TableInfoKey` and `NextTableRouteKey` will be re-written with a `__removed-` prefix, and the other above keys will be deleted. The transaction will not compare any keys.
## Alter Table DDL
1. Rename table: updates `TableInfo` and `TableName`. The transaction compares `TableInfo`; the new `TableNameKey` should be empty, and `TableInfo` should equal the snapshot taken when submitting the DDL.
The old and new **TableNameKey**'s lock will be held by the procedure framework.
2. Alter table: updates `TableInfo`. `TableInfo` should equal the snapshot taken when submitting the DDL.
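A minimal sketch of such a compare-and-put transaction for the alter-table case; `Txn`, `Compare`, and `Op` below are simplified stand-ins for the meta kv-store's transaction types, not the real API:

``` rust
// Simplified transaction model: all comparisons must hold, or the
// transaction aborts; on success the operations apply atomically.
#[derive(Debug)]
enum Compare {
    ValueEquals(Vec<u8>, Vec<u8>),
}

#[derive(Debug)]
enum Op {
    Put(Vec<u8>, Vec<u8>),
}

#[derive(Debug)]
struct Txn {
    compares: Vec<Compare>,
    success: Vec<Op>,
}

/// Alter table: update `TableInfo` only if it still equals the snapshot
/// captured when the DDL was submitted.
fn alter_table_txn(table_info_key: Vec<u8>, snapshot: Vec<u8>, new_info: Vec<u8>) -> Txn {
    Txn {
        compares: vec![Compare::ValueEquals(table_info_key.clone(), snapshot)],
        success: vec![Op::Put(table_info_key, new_info)],
    }
}

fn main() {
    let txn = alter_table_txn(b"table_info".to_vec(), b"v1".to_vec(), b"v2".to_vec());
    println!("{txn:?}");
}
```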
This RFC proposes a storage engine optimization: an inverted index aimed at speeding up label-selection queries on metrics, with tag columns as the optimization target.
# Introduction
In the current system, the first column of the primary key in the Mito Engine has a Min-Max index, which significantly improves query performance. However, there are limitations when it comes to other columns, primarily tags. This RFC suggests implementing an inverted index to provide enhanced filtering, bridge these limitations, and improve overall system performance.
# Design Detail
## Inverted Index
The primary aim of the proposed inverted index is to optimize queries on tag columns in the SST Parquet files within the Mito Engine. Building an inverted index that maps tag values to row groups provides an efficient logical structure that enables faster and more flexible queries.
When scanning SST files, pushed-down filters are applied to the respective tag's inverted index to determine the final row groups to be scanned, further bolstering the speed and efficiency of data retrieval.
## Index Format
The inverted index of each SST file ends with a `footer_payload`, which is the protobuf encoding of `InvertedIndexFooter`.
The complete format is packaged in [Puffin](https://iceberg.apache.org/puffin-spec/) with the blob type defined as `greptime-inverted-index-v1`.
## Protobuf Details
The `InvertedIndexFooter` is defined in the following protobuf structure:
```protobuf
message InvertedIndexFooter {
  repeated InvertedIndexMeta metas = 1;
}

message InvertedIndexMeta {
  string name = 1;
  uint64 row_count_in_group = 2;
  uint64 fst_offset = 3;
  uint64 fst_size = 4;
  uint64 null_bitmap_offset = 5;
  uint64 null_bitmap_size = 6;
  InvertedIndexStats stats = 7;
}

message InvertedIndexStats {
  uint64 null_count = 1;
  uint64 distinct_count = 2;
  bytes min_value = 3;
  bytes max_value = 4;
}
```
## Bitmap
Bitmaps are used to represent indices of fixed-size groups. Rows are divided into groups of a fixed size, defined in the `InvertedIndexMeta` as `row_count_in_group`.
For example, when `row_count_in_group` is `4096`, each group has `4096` rows. If there are `10000` rows in total, there are `3` groups: the first two have `4096` rows each and the last has `1808` rows. If the indexed values are found in rows `200` and `9000`, they correspond to groups `0` and `2`, so the bitmap sets bits `0` and `2` (see the worked example below).
Bitmap is implemented using [BitVec](https://docs.rs/bitvec/latest/bitvec/), selected due to its efficient representation of dense data arrays typical of indices of groups.
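A worked version of the example above, using the `bitvec` crate the RFC selects (the helper function itself is illustrative):

```rust
use bitvec::prelude::*;

/// Build the group bitmap: one bit per fixed-size row group, set when any
/// indexed value falls into that group.
fn group_bitmap(total_rows: u64, row_count_in_group: u64, hit_rows: &[u64]) -> BitVec {
    // Ceiling division: 10000 rows / 4096 rows per group -> 3 groups.
    let group_count = ((total_rows + row_count_in_group - 1) / row_count_in_group) as usize;
    let mut bitmap = bitvec![0; group_count];
    for &row in hit_rows {
        bitmap.set((row / row_count_in_group) as usize, true);
    }
    bitmap
}

fn main() {
    // Hits in rows 200 and 9000 land in groups 0 and 2.
    assert_eq!(group_bitmap(10_000, 4096, &[200, 9000]), bitvec![1, 0, 1]);
}
```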
## Finite State Transducer (FST)
[FST](https://docs.rs/fst/latest/fst/) is a highly efficient data structure ideal for in-memory indexing. It represents ordered sets or maps where the keys are bytes. The choice of the FST effectively balances the need for performance, space efficiency, and the ability to perform complex analyses such as regular expression matching.
A conventional FST maps byte keys to `u64` values; here that value is adapted for indirect indexing into the row groups. Since the row groups are represented as bitmaps, the `u64` is split into the bitmap's offset (higher 32 bits) and size (lower 32 bits) to locate each bitmap within the blob.
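A small sketch of this packing with the `fst` crate; the key and value contents are illustrative:

```rust
use fst::{Map, MapBuilder};

/// Pack a bitmap's offset (high 32 bits) and size (low 32 bits) into the
/// `u64` value stored in the FST.
fn pack(offset: u32, size: u32) -> u64 {
    ((offset as u64) << 32) | size as u64
}

fn unpack(value: u64) -> (u32, u32) {
    ((value >> 32) as u32, value as u32)
}

fn main() -> Result<(), fst::Error> {
    // FST keys must be inserted in lexicographic order.
    let mut builder = MapBuilder::memory();
    builder.insert("tag-value-a", pack(0, 16))?;
    builder.insert("tag-value-b", pack(16, 8))?;
    let map: Map<Vec<u8>> = builder.into_map();

    if let Some(value) = map.get("tag-value-b") {
        // Recover where the bitmap lives inside the index blob.
        assert_eq!(unpack(value), (16, 8));
    }
    Ok(())
}
```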
## API Design
Two APIs are designed: `InvertedIndexBuilder` for building indexes and `InvertedIndexSearcher` for querying them.
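The original listing is not reproduced here; the following is a hypothetical sketch of how the two APIs might be shaped, inferred from the surrounding sections:

```rust
// Hypothetical signatures only, inferred from the design above.
use std::collections::BTreeSet;

/// Builds the inverted index for one tag column while an SST file is written.
trait InvertedIndexBuilder {
    /// Record that `value` occurs in the group containing `row`.
    fn push(&mut self, value: &[u8], row: u64);
    /// Serialize the FST and bitmaps into the Puffin blob and return its bytes.
    fn finish(self) -> Vec<u8>;
}

/// Queries the inverted index during an SST scan.
trait InvertedIndexSearcher {
    /// Return the row groups whose bitmap matches any of `values`.
    fn search(&self, column: &str, values: &[&[u8]]) -> BTreeSet<u32>;
}
```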
**Only the red nodes persist state after they succeed**; the other nodes (excluding the Start and End nodes) do not persist state.
## Steps
**The persistent context:** shared across steps and available after recovery. It is only updated/stored after a red node has succeeded.
Values:
- `region_id`: the target leader region.
- `peer`: the target datanode.
- `close_old_leader`: indicates whether to close the old leader region.
- `leader_may_unreachable`: used to support the failover procedure.
**The volatile context:** shared across steps and available during execution (including retries). It is dropped if the procedure runner crashes. A sketch of both contexts follows.
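A plausible shape for the two contexts; the field names come from the list above, while the types and serde derives are assumptions:

```rust
use serde::{Deserialize, Serialize};

/// Persisted after each red node succeeds; survives a runner crash.
#[derive(Debug, Serialize, Deserialize)]
struct PersistentContext {
    /// The target leader region.
    region_id: u64,
    /// The target datanode.
    peer: String,
    /// Whether to close the old leader region.
    close_old_leader: bool,
    /// Supports the failover procedure.
    leader_may_unreachable: bool,
}

/// Shared across steps while executing; dropped if the runner crashes.
#[derive(Debug, Default)]
struct VolatileContext {
    // Step-local values live here, e.g. cached metadata and handles.
}
```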
### Select Candidate
The Persistent state: Selected Candidate Region.
### Update Metadata(Down)
**The Persistent context:**
- The (latest/updated) `version` of `TableRouteValue`; it will be used in the `Update Metadata(Up)` step.
### Downgrade Leader
This step sends an instruction via heartbeat and performs:
1. Downgrades leader region.
2. Retrieves the `last_entry_id` (if available).
If the target leader region is not found:
- Sets `close_old_leader` to true.
- Sets `leader_may_unreachable` to true.
If the target Datanode is unreachable:
- Waits for the region lease to expire.
- Sets `close_old_leader` to true.
- Sets `leader_may_unreachable` to true.
**The Persistent context:**
None
**The Persistent state:**
- `last_entry_id` (passed to the next step).
### Upgrade Candidate
This step sends an instruction via heartbeat and performs:
1. Replays the WAL up to the latest entry (`last_entry_id`).
2. Upgrades the candidate region.
If the target region is not found:
- Rolls back.
- Notifies the failover detector if `leader_may_unreachable` == true.
- Exits procedure.
If the target Datanode is unreachable:
- Rolls back.
- Notifies the failover detector if `leader_may_unreachable` == true.
- Exits procedure.
**The Persistent context:**
None
### Update Metadata(Up)
This step performs:
1. Switches the leader.
2. Removes the old leader (optional).
3. Moves the old leader to a follower (optional).
The current `TableRouteValue` version must equal the `version` recorded in the persistent context; otherwise, this step verifies whether the `TableRouteValue` has already been updated.
**The Persistent context:**
None
### Close Old Leader(Opt.)
This step sends a close region instruction via heartbeat.
If the target leader region is not found:
- Ignore.
If the target Datanode is unreachable:
- Ignore.
### Open Candidate(Opt.)
This step sends an open region instruction via heartbeat and waits for the condition to be met (typically, that the `last_entry_id` of the Candidate Region is very close to, or has caught up with, that of the Leader Region). A sketch of this check follows.
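A sketch of that readiness check; `threshold` is an assumed tuning knob, not a documented parameter:

```rust
/// The candidate is ready once its replayed WAL position is within
/// `threshold` entries of the leader's latest entry.
fn candidate_caught_up(
    candidate_last_entry_id: u64,
    leader_last_entry_id: u64,
    threshold: u64,
) -> bool {
    leader_last_entry_id.saturating_sub(candidate_last_entry_id) <= threshold
}
```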