* chore: refactor dir for local catalog manager
* refactor: CatalogProvider returns Result
* refactor: SchemaProvider returns Result
* feat: add kv operations to remote catalog
* chore: refactor some code
* feat: impl catalog initialization
* feat: add register table and register system table function
* refactor: add table_info method for Table trait
* chore: add some tests
* chore: add register schema test
* chore: fix build issue after rebase onto develop
* refactor: move mocks to a separate file
* build: fix compilation failure
* fix: use a container struct to bridge KvBackend and Accessor trait
* feat: upgrade opendal to 0.17
* test: add more tests
* chore: add catalog name and schema name to table info
* chore: rebase onto develop
* refactor: common-catalog crate
* refactor: remove remote catalog related files
* fix: compilation
* feat: add table version to TableKey
* feat: add node id to TableValue
* fix: some CR comments
* chore: change async fn create_expr_to_request to sync
* fix: add backtrace to errors
* fix: code style
* fix: CatalogManager::table also requires both catalog_name and schema_name
* chore: merge develop
* refactor: return PhysicalPlan in Table trait's scan method, to support partitioned execution in Frontend's distribute read
* refactor: pub use necessary DataFusion types
* refactor: replace old "PhysicalPlan" and its adapters
Co-authored-by: luofucong <luofucong@greptime.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
* refactor: add table_info method for Table trait
* feat: add table_info method to Table trait
* test: add more unit tests
* fix: impl table_info for SystemTable
* test: fix failing test
* feat: Adds ColumnDefaultConstraint::create_default_vector
ColumnDefaultConstraint::create_default_vector is ported from
MitoTable::try_get_column_default_constraint_vector.
* refactor: Replace try_get_column_default_constraint_vector by create_default_vector
* style: Remove unnecessary map_err in MitoTable::insert
* feat: Adds compat_write
For columns in `dest_schema` but not in `write_batch`, this method inserts a
vector with default values into the `write_batch`. If there are columns not in
`dest_schema`, an error is returned.
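A minimal sketch of that padding rule, with stand-in types (the real implementation operates on schema and vector types, not strings and `Vec<i64>`):

```rust
// Hypothetical sketch: pad columns present in `dest_schema` but missing from
// the batch with default-valued vectors; reject columns unknown to the schema.
fn compat_write(
    dest_schema: &[&str],
    columns: &mut Vec<(String, Vec<i64>)>,
    num_rows: usize,
) -> Result<(), String> {
    if let Some((unknown, _)) = columns.iter().find(|(n, _)| !dest_schema.contains(&n.as_str())) {
        return Err(format!("column {unknown} is not in dest_schema"));
    }
    for name in dest_schema {
        if !columns.iter().any(|(n, _)| n == name) {
            // pad with the column's default value; 0 is a stand-in for illustration
            columns.push((name.to_string(), vec![0; num_rows]));
        }
    }
    Ok(())
}
```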
* chore: Add info log to RegionInner::alter
* feat(storage): RegionImpl::write support request with old version
* feat: Add nullable check when creating default value
* feat: Validate nullable and default value
* chore: Modify PutOperation comments
* chore: Make ColumnDescriptor::is_nullable readonly and validate name
* feat: Use CompatWrite trait to replace compat::compat_write method
Adds a CompatWrite trait to support padding columns to WriteBatch:
- Both WriteBatch and PutData implement this trait
- Fix the issue that WriteBatch::schema is not updated to the
schema after compat
- Also validate the created column when adding it to PutData
The WriteBatch also pads default values into missing columns in
PutData, so the memtable inserter doesn't need to manually check whether
a column is nullable and then insert a NullVector. Every WriteBatch is
guaranteed to have all columns defined by the schema in its PutData.
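A sketch of the trait shape this describes, with stand-in types; note the batch re-syncs its schema after padding, which is the bug called out above:

```rust
type Schema = Vec<String>; // stand-in for the real SchemaRef
type Error = String;       // stand-in for the real error type

struct PutData { columns: Vec<String> }
struct WriteBatch { schema: Schema, payload: Vec<PutData> }

trait CompatWrite {
    /// Pad missing columns with default values so `self` matches `dest_schema`.
    fn compat_write(&mut self, dest_schema: &Schema) -> Result<(), Error>;
}

impl CompatWrite for WriteBatch {
    fn compat_write(&mut self, dest_schema: &Schema) -> Result<(), Error> {
        for put in &mut self.payload {
            put.compat_write(dest_schema)?;
        }
        // keep the batch's own schema in sync with the padded columns
        self.schema = dest_schema.clone();
        Ok(())
    }
}

impl CompatWrite for PutData {
    fn compat_write(&mut self, dest_schema: &Schema) -> Result<(), Error> {
        for col in dest_schema {
            if !self.columns.contains(col) {
                self.columns.push(col.clone()); // padded with a default-valued column
            }
        }
        Ok(())
    }
}
```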
* feat: Validate constraint by ColumnDefaultConstraint::validate()
ColumnDefaultConstraint::validate() also ensures that the default
value has the same data type as the column.
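A sketch of the two checks combined (the nullable check from above, plus the type check), with stand-in types:

```rust
#[derive(PartialEq)]
enum DataType { Int64, Float64 }

enum Value { Null, Int64(i64), Float64(f64) }

// Reject a null default on a non-nullable column, and a default whose data
// type differs from the column's.
fn validate(column_type: &DataType, nullable: bool, default: &Value) -> Result<(), String> {
    match default {
        Value::Null if !nullable => Err("null default for a non-nullable column".into()),
        Value::Null => Ok(()),
        Value::Int64(_) if *column_type == DataType::Int64 => Ok(()),
        Value::Float64(_) if *column_type == DataType::Float64 => Ok(()),
        _ => Err("default value type mismatch".into()),
    }
}
```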
* feat: Use NullVector for null columns
* fix: Fix BinaryType returns wrong logical_type_id
* fix: Fix tests and revert NullVector for null columns
NullVector doesn't support a custom logical type, which makes it hard to
encode/decode and causes the arrow/protobuf codec of the write batch
to fail.
* fix: create_default_vector uses replicate to create a vector with the default value
This fixes the test_codec_with_none_column_protobuf test, as we need
to downcast the vector to construct the protobuf values.
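A sketch of the replicate approach under these assumptions: build a length-1 vector of the column's concrete type and repeat it, so the result keeps that concrete type and can still be downcast by the protobuf codec:

```rust
#[derive(Clone, Debug, PartialEq)]
struct Int64Vector(Vec<i64>); // stand-in for a concrete vector type

impl Int64Vector {
    // repeat the single element num_rows times, preserving the vector type
    fn replicate(&self, num_rows: usize) -> Int64Vector {
        Int64Vector(self.0.iter().cycle().take(num_rows).copied().collect())
    }
}

fn create_default_vector(default: i64, num_rows: usize) -> Int64Vector {
    Int64Vector(vec![default]).replicate(num_rows)
}
```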
* test: add tests for column default constraints
* test: Add tests for CompatWrite trait impl
* test: Test write region with old schema
* fix(storage): Fix replay() applies metadata too early
The committed sequence of the RegionChange action is the sequence of the
last entry that uses the old metadata (schema). During replay, we should
apply the new metadata only after we see an entry whose sequence is greater
than (not equal to) `RegionChange::committed_sequence`.
Also remove the duplicate `set_committed_sequence()` call in
persist_manifest_version()
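A sketch of the corrected replay rule (hypothetical names): the pending metadata only takes effect for entries strictly past the committed sequence:

```rust
struct Metadata { version: u32 }

// Apply WAL entries in order; switch to `new_meta` only once an entry's
// sequence is strictly greater than the RegionChange's committed sequence.
fn replay(entry_sequences: &[u64], committed_sequence: u64, new_meta: Metadata) -> Metadata {
    let mut current = Metadata { version: 0 };
    let mut pending = Some(new_meta);
    for &seq in entry_sequences {
        if seq > committed_sequence {
            if let Some(meta) = pending.take() {
                // the old metadata covered entries up to committed_sequence
                current = meta;
            }
        }
        // ... decode and apply the entry under `current` ...
    }
    current
}
```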
* chore: Removes some unreachable code
Also add more comments to document the code in these files
* refactor: Refactor MitoTable::insert
Return an error if we cannot create a default vector for a given column,
instead of ignoring the error
* chore: Fix incorrect comments
* chore: Fix typo in error message
* refactor: switch to another axum-test-helper branch
* refactor: upgrade opendal version
* refactor: use cursor for file buffer
* refactor: remove native-tls from mysql_async
* refactor: use async block and pipeline for newer opendal api
* chore: update Cargo.lock
* chore: update dependencies
* docs: remove openssl from build requirements
* fix: call close on pipe writer to flush reader for parquet streamer
* refactor: remove redundant return
* chore: use pinned revision for our forked mysql_async
* style: avoid wild-card import in test code
* Apply suggestions from code review
Co-authored-by: Yingwen <realevenyag@gmail.com>
* style: use chained call for builder
Co-authored-by: liangxingjian <965662709@qq.com>
Co-authored-by: Yingwen <realevenyag@gmail.com>
* feat: adds committed_sequence to RegionChange action, #281
* refactor: save protocol action when writer version is changed
* feat: recover all region metadata in manifest and replay them when replaying WAL, #282
* refactor: minor change and test recovering metadata after altering table schema
* fix: wrong min_reader_version written into manifest for region
* refactor: move up DataRow
* refactor: by CR comments
* test: assert recovered metadata
* refactor: by CR comments
* fix: comment
* feat: Change signature of the Region::alter method
* refactor: Add builders for ColumnsMetadata and ColumnFamiliesMetadata
* feat: Support altering the region metadata
Altering the region metadata is done in a copy-on-write fashion (see the
sketch after this list):
1. Convert the `RegionMetadata` into a `RegionDescriptor`, which is more
convenient to mutate
2. Apply the `AlterOperation` to the `RegionDescriptor`. This mutates the
descriptor in place
3. Create a `RegionMetadataBuilder` from the descriptor, bump the
version and then build the new metadata
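A compilable sketch of those three steps with stand-in types (the real builder API differs):

```rust
struct RegionMetadata { columns: Vec<String>, version: u32 }
struct RegionDescriptor { columns: Vec<String>, version: u32 }

enum AlterOperation {
    AddColumn(String),
    DropColumn(String),
}

impl AlterOperation {
    fn apply(&self, desc: &mut RegionDescriptor) {
        match self {
            AlterOperation::AddColumn(name) => desc.columns.push(name.clone()),
            AlterOperation::DropColumn(name) => desc.columns.retain(|c| c != name),
        }
    }
}

fn alter(meta: &RegionMetadata, op: &AlterOperation) -> RegionMetadata {
    // 1. convert the metadata into the easier-to-mutate descriptor form
    let mut desc = RegionDescriptor { columns: meta.columns.clone(), version: meta.version };
    // 2. apply the alter operation to the descriptor in place
    op.apply(&mut desc);
    // 3. rebuild the metadata from the descriptor, bumping the version
    RegionMetadata { columns: desc.columns, version: desc.version + 1 }
}
```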
* feat: Implement altering table using the new Region::alter api
* refactor: Replace WAL name with region id
A region id is cheaper to clone than a name
* chore: Remove pub(crate) of build_xxxx in engine mod
* style: fix clippy
* test: Add tests for AlterOperation and RegionMetadata::alter
* chore: ColumnsMetadataBuilder methods return &mut Self
* wip: add predicate definition
* fix value move
* implement predicate and prune
* impl filter push down in chunk reader
* add more expr tests
* chore: rebase develop
* fix: unit test
* fix: field name/index lookup when building pruning stats
* chore: add some meaningless test
* fix: remove unnecessary extern crate
* fix: use datatypes::schema::SchemaRef
Converting LogicalTypeId to ConcreteDataType is only allowed in tests, since some
additional info is not stored in LogicalTypeId now. It is just an id, or
kind, and does not contain full type info.
* wip: impl timestamp data type
* add timestamp vectors
* adapt to recent changes to vector module
* fix all unit tests
* rebase develop
* fix slice
* change default time unit to millisecond
* add more tests
* fix some CR comments
* fix some CR comments
* fix clippy
* fix some cr comments
* fix some CR comments
* fix some CR comments
* remove time unit in LogicalTypeId::Timestamp
* feat: Add tests for different batch
Add region scan test with different batch size
* fix: Fix table scan only returns one batch
* style: Fix clippy
* test: Add tests to scan table with rows more than batch size
* fix: Fix MockChunkReader never stop
* feat: implement alter table
* Currently we have no plans to support altering the primary keys (maybe never), so the related code has been removed.
* make `alter` a trait function in table
* address other CR comments
* cleanup
* rebase develop
* resolve code review comments
Co-authored-by: luofucong <luofucong@greptime.com>
* feat: upgrade rust to nightly-2022-07-14
* style: Fix some clippy warnings
* style: clippy fix
* style: fix clippy
* style: Fix clippy
Some PartialEq warnings have been worked around using cfg_attr(test)
* feat: Implement Eq and PartialEq for PrimitiveType
* chore: Remove unnecessary allow
* chore: Remove usage of cfg_attr for PartialEq
* feat: save create table schema and respect user-defined column order when querying, closes #179
* fix: address CR problems
* refactor: use with_context with ProjectedColumnNotFoundSnafu
* feat: Add projected schema
* feat: Use projected schema to read sst
* feat: Use vector of column to implement Batch
* feat: Use projected schema to convert batch to chunk
* feat: Add no_projection() to build ProjectedSchema
* feat: Memtable supports projection
The btree memtable uses `is_needed()` to filter unneeded value columns,
then uses `ProjectedSchema::batch_from_parts()` to construct the
batch, so it doesn't need to know the layout of internal columns.
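A sketch of that filtering with stand-in types: the memtable only asks `is_needed()` per value column and never inspects the internal layout:

```rust
struct ProjectedSchema { needed: Vec<bool> } // indexed by value-column position

impl ProjectedSchema {
    fn is_needed(&self, value_column_index: usize) -> bool {
        self.needed.get(value_column_index).copied().unwrap_or(false)
    }
}

// Keep only the value columns the projection asks for.
fn project_values(schema: &ProjectedSchema, values: Vec<Vec<i64>>) -> Vec<Vec<i64>> {
    values
        .into_iter()
        .enumerate()
        .filter(|(i, _)| schema.is_needed(*i))
        .map(|(_, v)| v)
        .collect()
}
```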
* test: Add tests for ProjectedSchema
* test: Add tests for ProjectedSchema
Also return an error if the `projected_columns` used to build the
`ProjectedSchema` is empty.
* test: Add test for memtable projection
* feat: Table pass projection to storage engine
* fix: Use timestamp column name as schema metadata
This fixes the issue that the metadata refers to the wrong timestamp column
if DataFusion reorders the fields of the arrow schema.
* fix: Fix projected schema not passed to memtable
* feat: Add tests for region projection
* chore: fix clippy
* test: Add test for unordered projection
* chore: Move projected_schema to ReadOptions
Also fix some typos
* refactor: Merge RowKeyMetadata into ColumnsMetadata
RowKeyMetadata and ColumnsMetadata were almost always used together, so there
is no need to separate them into two structs; they are now combined into the
single ColumnsMetadata struct.
chore: Make some fields of metadata private
feat: Replace schema in RegionMetadata by RegionSchema
The internal schema of a region should have knowledge of all the
internal columns that are reserved and used by the storage engine, such as
sequence and value type. So we introduce the `RegionSchema`, which
holds a `SchemaRef` that only contains the columns the user can see.
feat: Value derives Serialize and supports converting into json value
feat: Add version to schema
The schema version starts at 0 and is bumped each time the schema is altered.
feat: Adds internal columns to region metadata
Introduce the concepts of reserved columns and internal columns.
Reserved columns are columns whose names and ids are reserved by the storage
engine and cannot be used by the user. Reserved columns usually have
special usage. Reserved columns except the version column are also
called internal columns (though the version could also be thought of as a
special kind of internal column); they are not visible to the user, such as
our internal sequence and value_type columns.
The RegionMetadataBuilder always pushes the internal columns used by the
engine to the columns in metadata. Internal columns are all stored
behind all user columns in the columns vector.
To avoid column id collisions, the ids reserved for columns have the most
significant bit set to 1, and the RegionMetadataBuilder checks the
uniqueness of the column ids.
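A sketch of that id scheme (the function names are assumptions, though ReservedColumnId appears below): reserved ids set the most significant bit, so they can never collide with user column ids:

```rust
const RESERVED_BIT: u32 = 1 << 31;

// Bitwise-or keeps this usable as a const fn (see the clippy fix below).
const fn reserved_column_id(offset: u32) -> u32 {
    RESERVED_BIT | offset
}

fn is_reserved(column_id: u32) -> bool {
    column_id & RESERVED_BIT != 0
}
```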
chore: Rebase develop and fix compile error
feat: add internal schema to region schema
feat: Add SchemaBuilder to build Schema
feat: Store row key end in region schema metadata
Also move the arrow schema construction to region::schema mod
feat: Add SstSchema
refactor: Replace MemtableSchema by RegionSchema
Now when writing sst files, we can use the arrow schema from our sst
schema, which contains the internal columns.
feat: Use SstSchema to read parquet
Adds user_column_end to metadata. When reading a parquet file, convert the
arrow schema into SstSchema, then use the row_key_end and user_column_end
to find the row key parts, value parts and internal columns, instead of
using the timestamp index, which may yield an incorrect index if we don't
put the timestamp at the end of the row key.
Move the conversion from Batch to arrow Chunk into SstSchema, so the SST mod
doesn't need to care about the order of key, value and internal columns.
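A sketch of splitting columns by those offsets, assuming the [row key | value | internal] layout described above:

```rust
struct SstSchema { row_key_end: usize, user_column_end: usize }

impl SstSchema {
    // Split a column list into (row key, value, internal) parts.
    fn split<'a, T>(&self, columns: &'a [T]) -> (&'a [T], &'a [T], &'a [T]) {
        let (keys, rest) = columns.split_at(self.row_key_end);
        let (values, internal) = rest.split_at(self.user_column_end - self.row_key_end);
        (keys, values, internal)
    }
}
```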
test: Add test for Value to serde_json::Value
feat: Add RawRegionMetadata to persist RegionMetadata
test: Add test to RegionSchema
fix: Fix clippy
To fix clippy::enum_clike_unportable_variant lint, define the column id
offset in ReservedColumnType and compute the final column id in
ReservedColumnId's const method
refactor: Move batch/chunk conversion to SstSchema
The parquet ChunkStream now holds the SstSchema and uses its methods to
convert a Chunk into a Batch.
chore: Address CR comment
Also add a test for pushing internal column to RegionMetadataBuilder
chore: Address CR comment
chore: Use bitwise or to compute column id
* chore: Address CR comment
* catalog manager allocates table id
* rebase develop
* add some tests
* add some more test
* fix some cr comments
* insert into system catalog
* use slice pattern to simplify code
* add optional dependencies
* add sql-to-request test
* successfully recover
* fix unit tests
* rebase develop
* add some tests
* fix some cr comments
* fix some cr comments
* add a lock to CatalogManager
* feat: add gmt_created and gmt_modified columns to system catalog table
* feat: impl TableManifest and refactor table engine, object store etc.
* feat: persist table metadata when creating it
* fix: remove unused file src/storage/src/manifest/impl.rs
* feat: impl recover table info from manifest
* test: add open table test and table manifest test
* fix: resolve CR problems
* fix: compile error and remove region id
* doc: describe parent_dir
* fix: address CR problems
* fix: typo
* Revert "fix: compile error and remove region id"
This reverts commit c14c250f8a.
* fix: compile error and generate region id by table_id and region number
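One plausible way to pack such a region id; the exact bit layout here is an assumption, not necessarily what the commit implements:

```rust
// Pack a 32-bit table id and a 32-bit region number into one u64 region id.
fn region_id(table_id: u32, region_number: u32) -> u64 {
    ((table_id as u64) << 32) | region_number as u64
}
```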
Implement a catalog manager that provides a view of all existing tables when the instance starts. The current implementation is based on the local table engine; all catalog info is stored in a system catalog table.
* feat: Add `open_table()` method to `TableEngine`
* feat: Implements MitoEngine::open_table()
For simplicity, this implementation just uses the table name as the region
name, and uses that name to open a region for the table. It also
introduces a mutex to avoid opening the same table simultaneously.
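A sketch of that guard with stand-in types: a lock serializes concurrent opens, and the table name doubles as the region name:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};

type TableRef = Arc<String>; // stand-in for the real table handle

struct MitoEngine {
    tables: Mutex<HashMap<String, TableRef>>,
}

impl MitoEngine {
    fn open_table(&self, table_name: &str) -> TableRef {
        // holding the lock prevents two callers opening the same table at once
        let mut tables = self.tables.lock().unwrap();
        if let Some(table) = tables.get(table_name) {
            return table.clone();
        }
        // open the region named after the table, then register the table
        let table = Arc::new(table_name.to_string());
        tables.insert(table_name.to_string(), table.clone());
        table
    }
}
```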
* refactor: Shorten generic param name
Use `S` instead of `Store` for `MitoEngine`.
* test: Mock storage engine for table engine test
Add a `MockEngine` to mock the storage engine, so that mito table engine
tests can use the mocked storage when needed.
* test: Add open table test
Also remove the `storage::gen_region_name` method, and always use the table
name as the default region name, so the table engine can open the table
created by `create_table()`.
* chore: Add open table log
* feat: impl scanning data from storage for MitoTable
* adds test mod to setup table engine test
* fix: comment error
* fix: boyan -> dennis in todo comments
* fix: remove unnecessary Send in BatchIteratorPtr
* Impl TableEngine, bridge to storage
* Impl sql handler to process insert sql
* fix: minor changes and typo
* test: add datanode test
* test: add table-engine test
* fix: code style
* refactor: split out insert mod from sql and minor changes by CR
* refactor: replace with_context with context