LogicalTypeId to ConcreteDataType conversion is only allowed in tests, since some
additional info is not stored in LogicalTypeId now. It is just an id, or
kind, and does not contain full type info.
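A rough sketch of the distinction (illustrative variants only, not the real definitions): mapping a concrete type down to its id is always well defined, while going the other way would have to invent the missing info, such as the timestamp time unit, which is why it is test-only.

```rust
/// Illustrative shapes: `LogicalTypeId` is a bare discriminant, while
/// `ConcreteDataType` carries full type info such as the time unit.
#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum LogicalTypeId {
    Int64,
    Timestamp, // no time unit here; it only identifies the kind of type
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum TimeUnit {
    Second,
    Millisecond,
}

#[derive(Debug, Clone, Copy, PartialEq, Eq)]
enum ConcreteDataType {
    Int64,
    Timestamp { unit: TimeUnit },
}

impl ConcreteDataType {
    /// Concrete type -> id drops info but is always well defined.
    fn logical_type_id(&self) -> LogicalTypeId {
        match self {
            ConcreteDataType::Int64 => LogicalTypeId::Int64,
            ConcreteDataType::Timestamp { .. } => LogicalTypeId::Timestamp,
        }
    }
}
```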
* wip: impl timestamp data type
* add timestamp vectors
* adapt to recent changes to vector module
* fix all unit tests
* rebase develop
* fix slice
* change default time unit to millisecond
* add more tests
* fix some CR comments
* fix some CR comments
* fix clippy
* fix some cr comments
* fix some CR comments
* fix some CR comments
* remove time unit in LogicalTypeId::Timestamp
* feat: Add tests for different batch sizes
Add region scan test with different batch sizes
* fix: Fix table scan only returns one batch
* style: Fix clippy
* test: Add tests to scan table with rows more than batch size
* fix: Fix MockChunkReader never stopping
* feat: implement alter table
* Currently we have no plans to support altering the primary keys (maybe never), so the related code has been removed.
* make `alter` a trait function in table
* address other CR comments
* cleanup
* rebase develop
* resolve code review comments
Co-authored-by: luofucong <luofucong@greptime.com>
* feat: upgrade rust to nightly-2022-07-14
* style: Fix some clippy warnings
* style: clippy fix
* style: fix clippy
* style: Fix clippy
Some PartialEq warnings have been worked around using a test-only `cfg_attr`
* feat: Implement Eq and PartialEq for PrimitiveType
* chore: Remove unnecessary allow
* chore: Remove usage of cfg_attr for PartialEq
* feat: save create table schema and respect user-defined column order when querying, close #179
* fix: address CR problems
* refactor: use with_context with ProjectedColumnNotFoundSnafu
* feat: Add projected schema
* feat: Use projected schema to read sst
* feat: Use vector of columns to implement Batch
* feat: Use projected schema to convert batch to chunk
* feat: Add no_projection() to build ProjectedSchema
* feat: Memtable supports projection
The btree memtable uses `is_needed()` to filter out unneeded value columns,
then uses `ProjectedSchema::batch_from_parts()` to construct the
batch, so it doesn't need to know the layout of internal columns (see the sketch below).
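A hypothetical sketch of that flow; the real `ProjectedSchema` lives in the storage crate and its signatures differ:

```rust
/// Hypothetical shapes around the real `is_needed()` and
/// `batch_from_parts()` names.
struct ProjectedSchema {
    /// Indices of the value columns the query actually needs.
    needed_value_columns: Vec<usize>,
}

/// Simplified stand-in for the real batch type.
struct Batch {
    keys: Vec<Vec<u8>>,
    values: Vec<Vec<u8>>,
    sequences: Vec<u64>,
}

impl ProjectedSchema {
    /// The memtable asks this before copying a value column out of the btree.
    fn is_needed(&self, value_column_index: usize) -> bool {
        self.needed_value_columns.contains(&value_column_index)
    }

    /// Assemble a batch from the separately collected parts, so the
    /// memtable never has to know where internal columns live.
    fn batch_from_parts(
        &self,
        keys: Vec<Vec<u8>>,
        values: Vec<Vec<u8>>,
        sequences: Vec<u64>,
    ) -> Batch {
        Batch { keys, values, sequences }
    }
}
```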
* test: Add tests for ProjectedSchema
* test: Add tests for ProjectedSchema
Also return an error if the `projected_columns` used to build the
`ProjectedSchema` is empty.
* test: Add test for memtable projection
* feat: Table pass projection to storage engine
* fix: Use timestamp column name as schema metadata
This fixes the issue that the metadata refers to the wrong timestamp column
if datafusion reorders the fields of the arrow schema.
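In sketch form (hypothetical helper; the arrow types are elided): persist the column name and resolve the index on demand, so a field reorder cannot invalidate it.

```rust
/// Resolve the timestamp column index by name instead of trusting a
/// stored numeric index that a field reorder would invalidate.
fn timestamp_index(field_names: &[&str], timestamp_name: &str) -> Option<usize> {
    field_names.iter().position(|name| *name == timestamp_name)
}

fn main() {
    // Field order after a hypothetical reorder by the query engine.
    let fields = ["v0", "ts", "k0"];
    assert_eq!(timestamp_index(&fields, "ts"), Some(1));
}
```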
* fix: Fix projected schema not passed to memtable
* feat: Add tests for region projection
* chore: fix clippy
* test: Add test for unordered projection
* chore: Move projected_schema to ReadOptions
Also fix some typos
* refactor: Merge RowKeyMetadata into ColumnsMetadata
RowKeyMetadata and ColumnsMetadata are almost always used together, so there
is no need to separate them into two structs. Now they are combined into the
single ColumnsMetadata struct.
chore: Make some fields of metadata private
feat: Replace schema in RegionMetadata by RegionSchema
The internal schema of a region should have knowledge about all
internal columns that are reserved and used by the storage engine, such as
sequence and value type. So we introduce the `RegionSchema`, which
holds a `SchemaRef` that only contains the columns the user could see.
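Roughly, with stand-in types (the field names here are assumptions):

```rust
use std::sync::Arc;

// Stand-ins for the real schema types, just to show the shape.
type SchemaRef = Arc<Vec<String>>;          // user-visible columns only
type ColumnsMetadataRef = Arc<Vec<String>>; // every column, internal ones included

/// Sketch: the region schema knows about all columns, but only hands the
/// user-visible `SchemaRef` back to callers.
struct RegionSchema {
    user_schema: SchemaRef,
    columns: ColumnsMetadataRef,
}

impl RegionSchema {
    fn user_schema(&self) -> &SchemaRef {
        &self.user_schema
    }
}
```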
feat: Value derives Serialize and supports converting into json value
feat: Add version to schema
The schema version has an initial value of 0 and is bumped each time the
schema is altered.
feat: Adds internal columns to region metadata
Introduce the concept of reserved columns and internal columns.
Reserved columns are columns whose names and ids are reserved by the storage
engine and could not be used by the user. Reserved columns usually have
special usage. Reserved columns except the version column are also
called internal columns (though the version column could also be thought of as a
special kind of internal column); they are not visible to the user, such as our
internal sequence and value_type columns.
The RegionMetadataBuilder always pushes the internal columns used by the
engine to the columns in metadata. Internal columns are all stored
after all user columns in the columns vector.
To avoid column id collisions, the ids reserved for internal columns have the
most significant bit set to 1, and the RegionMetadataBuilder checks the
uniqueness of the column ids.
chore: Rebase develop and fix compile error
feat: add internal schema to region schema
feat: Add SchemaBuilder to build Schema
feat: Store row key end in region schema metadata
Also move the arrow schema construction to region::schema mod
feat: Add SstSchema
refactor: Replace MemtableSchema by RegionSchema
Now when writing sst files, we could use the arrow schema from our sst
schema, which contains the internal columns.
feat: Use SstSchema to read parquet
Adds user_column_end to metadata. When reading a parquet file,
converts the arrow schema into SstSchema, then uses the row_key_end
and user_column_end to find out the row key part, value part and internal
columns, instead of using the timestamp index, which may yield an
incorrect index if we don't put the timestamp at the end of the row key.
Move the conversion from Batch to arrow Chunk into SstSchema, so the SST mod
doesn't need to care about the order of key, value and internal columns (see
the sketch below).
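A minimal sketch of the splitting, assuming the fixed column order [row key | value | internal]:

```rust
/// Two indices recover every part of the column list without consulting
/// a timestamp index (assumes row_key_end <= user_column_end <= len).
struct SstSchema {
    row_key_end: usize,
    user_column_end: usize,
}

impl SstSchema {
    /// Split any column list into (row key, value, internal) parts.
    fn split<'a, T>(&self, columns: &'a [T]) -> (&'a [T], &'a [T], &'a [T]) {
        let (row_key, rest) = columns.split_at(self.row_key_end);
        let (values, internal) = rest.split_at(self.user_column_end - self.row_key_end);
        (row_key, values, internal)
    }
}

fn main() {
    let schema = SstSchema { row_key_end: 2, user_column_end: 3 };
    let columns = ["k0", "ts", "v0", "sequence", "value_type"];
    let (row_key, values, internal) = schema.split(&columns);
    assert_eq!((row_key.len(), values.len(), internal.len()), (2, 1, 2));
}
```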
test: Add test for Value to serde_json::Value
feat: Add RawRegionMetadata to persist RegionMetadata
test: Add test to RegionSchema
fix: Fix clippy
To fix the clippy::enum_clike_unportable_variant lint, define the column id
offset in ReservedColumnType and compute the final column id in
ReservedColumnId's const method
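In sketch form, the scheme reads like this (the variant offsets here are assumptions):

```rust
type ColumnId = u32;

/// Offsets stay small, so the enum remains a portable C-like enum,
/// which is what the clippy lint demands.
enum ReservedColumnType {
    Version = 0,
    Sequence = 1,
    ValueType = 2,
}

struct ReservedColumnId;

impl ReservedColumnId {
    /// Reserved ids set the most significant bit; user column ids never do.
    const BASE: ColumnId = 1 << 31;

    /// Final id = MSB flag | offset, combined with bitwise or in a const fn.
    const fn id(ty: ReservedColumnType) -> ColumnId {
        Self::BASE | ty as ColumnId
    }
}

fn main() {
    assert_eq!(ReservedColumnId::id(ReservedColumnType::Sequence), (1 << 31) + 1);
}
```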
refactor: Move batch/chunk conversion to SstSchema
The parquet ChunkStream now holds the SstSchema and uses its method to
convert Chunk into Batch.
chore: Address CR comment
Also add a test for pushing internal column to RegionMetadataBuilder
chore: Address CR comment
chore: Use bitwise or to compute column id
* chore: Address CR comment
* catalog manager allocates table id
* rebase develop
* add some tests
* add some more tests
* fix some cr comments
* insert into system catalog
* use slice pattern to simplify code
* add optional dependencies
* add sql-to-request test
* successfully recover
* fix unit tests
* rebase develop
* add some tests
* fix some cr comments
* fix some cr comments
* add a lock to CatalogManager
* feat: add gmt_created and gmt_modified columns to system catalog table
* feat: impl TableManifest and refactor table engine, object store etc.
* feat: persist table metadata when creating it
* fix: remove unused file src/storage/src/manifest/impl.rs
* feat: impl recover table info from manifest
* test: add open table test and table manifest test
* fix: resolve CR problems
* fix: compile error and remove region id
* doc: describe parent_dir
* fix: address CR problems
* fix: typo
* Revert "fix: compile error and remove region id"
This reverts commit c14c250f8a.
* fix: compile error and generate region id by table_id and region number
Implement a catalog manager that provides a view of all existing tables when the instance starts. The current implementation is based on the local table engine; all catalog info is stored in a system catalog table.
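One way to sketch the region id scheme (the exact bit layout is an assumption):

```rust
/// Pack the table id into the high 32 bits and the region number into
/// the low 32 bits, so region ids stay unique across tables.
fn region_id(table_id: u32, region_number: u32) -> u64 {
    ((table_id as u64) << 32) | region_number as u64
}

fn main() {
    assert_eq!(region_id(1, 0), 1 << 32);
    assert_eq!(region_id(1, 2), (1 << 32) | 2);
}
```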
* feat: Add `open_table()` method to `TableEngine`
* feat: Implements MitoEngine::open_table()
For simplicity, this implementation just uses the table name as the region
name, and uses that name to open a region for that table. It also
introduces a mutex to avoid opening the same table simultaneously (see the
sketch below).
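A coarse sketch of the guard (names hypothetical):

```rust
use std::sync::Mutex;

/// Hold one lock across the whole open path so two concurrent opens of
/// the same table cannot race to open its region twice.
struct MitoEngineInner {
    table_mutex: Mutex<()>,
}

impl MitoEngineInner {
    fn open_table(&self, table_name: &str) {
        let _lock = self.table_mutex.lock().unwrap();
        // While holding the lock: check whether `table_name` is already in
        // the table map, open its region if not, then register the table.
        let _ = table_name;
    }
}
```

A single engine-wide mutex is coarse, but opening is rare and the critical section is short, so the simplicity is usually worth it.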
* refactor: Shorten generic param name
Use `S` instead of `Store` for `MitoEngine`.
* test: Mock storage engine for table engine test
Add a `MockEngine` to mock the storage engine, so that tests of the mito
table engine can use the mocked storage when needed.
* test: Add open table test
Also remove the `storage::gen_region_name` method and always use the table
name as the default region name, so the table engine can open the table
created by `create_table()`.
* chore: Add open table log
* feat: impl scanning data from storage for MitoTable
* add a test mod to set up table engine tests
* fix: comment error
* fix: boyan -> dennis in todo comments
* fix: remove unnecessary Send in BatchIteratorPtr
* Impl TableEngine, bridge to storage
* Impl sql handler to process insert sql
* fix: minor changes and typo
* test: add datanode test
* test: add table-engine test
* fix: code style
* refactor: split out insert mod from sql and minor changes by CR
* refactor: replace with_context with context