greptimedb/tests/cases/standalone/insert/insert.sql
dennis zhuang 9428e70971 feat: integration test (#770)
* feat: add insert test cases
* fix: update results after rebase develop
* feat: supports unsigned integer types and big_insert test (see the sketch after the file content below)
* test: add insert_invalid test
* feat: supports time index constraint for bigint type
* chore: time index column at last
* test: adds more order, limit test
* fix: style
* feat: adds numbers table in standalone memory catalog mode
* feat: enable fail_fast and test_filter in sqlness
* feat: add more tests
* fix: test_filter
* test: add alter tests
* feat: supports if_not_exists when create database
* test: filter_push_down and catalog test
* fix: compile error
* fix: delete output file
* chore: ignore integration test output in git
* test: update all integration test results
* fix: by code review
* chore: revert .gitignore
* feat: sort the show tables/databases results
* chore: remove issue link
* fix: compile error and code format after rebase
* test: update all integration test results
2023-01-10 18:15:50 +08:00

24 lines · 1.8 KiB · SQL

CREATE TABLE integers (
    ts TIMESTAMP,
    TIME INDEX(ts)
);
INSERT INTO integers VALUES (1), (2), (3), (4), (5);
SELECT * FROM integers;
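-- Note: the integer literals above go into the TIMESTAMP time index column;
-- per the commit message, a bigint value is accepted for the time index
-- constraint and is presumably interpreted as a timestamp.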
-- Test insert with long string constant
CREATE TABLE IF NOT EXISTS presentations (
    presentation_date TIMESTAMP,
    author VARCHAR NOT NULL,
    title STRING NOT NULL,
    bio VARCHAR,
    abstract VARCHAR,
    zoom_link VARCHAR,
    TIME INDEX(presentation_date)
);
insert into presentations values (1, 'Patrick Damme', 'Analytical Query Processing Based on Continuous Compression of Intermediates', NULL, 'Modern in-memory column-stores are widely accepted as the adequate database architecture for the efficient processing of complex analytical queries over large relational data volumes. These systems keep their entire data in main memory and typically employ lightweight compression to address the bottleneck between main memory and CPU. Numerous lightweight compression algorithms have been proposed in the past years, but none of them is suitable in all cases. While lightweight compression is already well established for base data, the efficient representation of intermediate results generated during query processing has attracted insufficient attention so far, although in in-memory systems, accessing intermediates is as expensive as accessing base data. Thus, our vision is a continuous use of lightweight compression for all intermediates in a query execution plan, whereby a suitable compression algorithm should be selected for each intermediate. In this talk, I will provide an overview of our research in the context of this vision, including an experimental survey of lightweight compression algorithms, our compression-enabled processing model, and our compression-aware query optimization strategies.', 'https://zoom.us/j/7845983526');
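-- A follow-up query (not part of the original file) could check that the long
-- string constant round-trips, e.g.:
-- SELECT author, title FROM presentations;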
DROP TABLE integers;
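
The commit message above also mentions support for unsigned integer column types and for IF NOT EXISTS on CREATE DATABASE, neither of which this file exercises. A minimal sketch of those statements, assuming the dialect accepts UINT-style type names; the database, table, and column names here are illustrative and not taken from the repository:

-- Sketch only; the unsigned type spellings (UINT32, UINT64) are assumptions.
CREATE DATABASE IF NOT EXISTS sketch_db;

CREATE TABLE IF NOT EXISTS unsigned_numbers (
    ts TIMESTAMP,
    small_value UINT32,
    big_value UINT64,
    TIME INDEX(ts)
);

INSERT INTO unsigned_numbers VALUES (1, 10, 20);

DROP TABLE unsigned_numbers;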