Mirror of https://github.com/GreptimeTeam/greptimedb.git
Synced 2025-12-23 06:30:05 +00:00

Compare commits: v0.7.2 ... transform- (865 commits)
(The compare view's commit table is omitted here: it listed 865 commit rows with SHA1 values only; the Author and Date columns were empty in this capture.)
.coderabbit.yaml (new file, +15 lines)
@@ -0,0 +1,15 @@
+# yaml-language-server: $schema=https://coderabbit.ai/integrations/schema.v2.json
+language: "en-US"
+early_access: false
+reviews:
+  profile: "chill"
+  request_changes_workflow: false
+  high_level_summary: true
+  poem: true
+  review_status: true
+  collapse_walkthrough: false
+  auto_review:
+    enabled: false
+    drafts: false
+chat:
+  auto_reply: true
.env.example (16 changed lines)
@@ -14,13 +14,23 @@ GT_AZBLOB_CONTAINER=AZBLOB container
 GT_AZBLOB_ACCOUNT_NAME=AZBLOB account name
 GT_AZBLOB_ACCOUNT_KEY=AZBLOB account key
 GT_AZBLOB_ENDPOINT=AZBLOB endpoint
 # Settings for gcs test
 GT_GCS_BUCKET = GCS bucket
 GT_GCS_SCOPE = GCS scope
 GT_GCS_CREDENTIAL_PATH = GCS credential path
+GT_GCS_CREDENTIAL = GCS credential
 GT_GCS_ENDPOINT = GCS end point
 # Settings for kafka wal test
 GT_KAFKA_ENDPOINTS = localhost:9092

 # Setting for fuzz tests
 GT_MYSQL_ADDR = localhost:4002
+
+# Setting for unstable fuzz tests
+GT_FUZZ_BINARY_PATH=/path/to/
+GT_FUZZ_INSTANCE_ROOT_DIR=/tmp/unstable_greptime
+GT_FUZZ_INPUT_MAX_ROWS=2048
+GT_FUZZ_INPUT_MAX_TABLES=32
+GT_FUZZ_INPUT_MAX_COLUMNS=32
+GT_FUZZ_INPUT_MAX_ALTER_ACTIONS=256
+GT_FUZZ_INPUT_MAX_INSERT_ACTIONS=8
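The new GT_FUZZ_* settings drive the unstable fuzz tests, and the file as a whole is a template for a local test environment. A minimal sketch of using it locally (the private .env copy, the use of `set -a`, and `make test` as the follow-up command are assumptions; the placeholder values and the spaces around `=` must be replaced before a shell can source the file):

    # copy the template and fill in real values, removing the spaces around '='
    cp .env.example .env
    set -a          # export every variable assigned while sourcing
    source .env
    set +a
    make test       # or any other target that reads GT_* settings from the environment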
.github/CODEOWNERS (new file, +27 lines)
@@ -0,0 +1,27 @@
+# GreptimeDB CODEOWNERS
+
+# These owners will be the default owners for everything in the repo.
+
+* @GreptimeTeam/db-approver
+
+## [Module] Databse Engine
+/src/index @zhongzc
+/src/mito2 @evenyag @v0y4g3r @waynexia
+/src/query @evenyag
+
+## [Module] Distributed
+/src/common/meta @MichaelScofield
+/src/common/procedure @MichaelScofield
+/src/meta-client @MichaelScofield
+/src/meta-srv @MichaelScofield
+
+## [Module] Write Ahead Log
+/src/log-store @v0y4g3r
+/src/store-api @v0y4g3r
+
+## [Module] Metrics Engine
+/src/metric-engine @waynexia
+/src/promql @waynexia
+
+## [Module] Flow
+/src/flow @zhongzc @waynexia
Bug report issue template under .github/ISSUE_TEMPLATE (file name not captured)
@@ -1,7 +1,7 @@
 ---
 name: Bug report
 description: Is something not working? Help us fix it!
-labels: [ "bug" ]
+labels: [ "C-bug" ]
 body:
   - type: markdown
     attributes:
@@ -39,7 +39,7 @@ body:
         - Query Engine
         - Table Engine
         - Write Protocols
-        - MetaSrv
+        - Metasrv
         - Frontend
         - Datanode
         - Other
.github/ISSUE_TEMPLATE/config.yml (2 changed lines)
@@ -4,5 +4,5 @@ contact_links:
     url: https://greptime.com/slack
     about: Get free help from the Greptime community
   - name: Greptime Community Discussion
-    url: https://github.com/greptimeTeam/greptimedb/discussions
+    url: https://github.com/greptimeTeam/discussions
     about: Get free help from the Greptime community
.github/ISSUE_TEMPLATE/enhancement.yml (2 changed lines)
@@ -1,7 +1,7 @@
 ---
 name: Enhancement
 description: Suggest an enhancement to existing functionality
-labels: [ "enhancement" ]
+labels: [ "C-enhancement" ]
 body:
   - type: dropdown
     id: type
Feature request issue template under .github/ISSUE_TEMPLATE (file name not captured)
@@ -1,7 +1,7 @@
 ---
-name: Feature request
+name: New Feature
 description: Suggest a new feature for GreptimeDB
-labels: [ "feature request" ]
+labels: [ "C-feature" ]
 body:
   - type: markdown
     id: info
.github/actions/build-and-push-ci-image/action.yml (new file, +18 lines)
@@ -0,0 +1,18 @@
+name: Build and push CI Docker image
+description: Build and push CI Docker image to local registry
+inputs:
+  binary_path:
+    default: "./bin"
+    description: "Binary path"
+runs:
+  using: composite
+  steps:
+    - name: Build and push to local registry
+      uses: docker/build-push-action@v5
+      with:
+        context: .
+        file: ./docker/ci/ubuntu/Dockerfile.fuzztests
+        push: true
+        tags: localhost:5001/greptime/greptimedb:latest
+        build-args: |
+          BINARY_PATH=${{ inputs.binary_path }}
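The tag localhost:5001/greptime/greptimedb:latest implies a registry listening on port 5001; in this repository that is the kind-registry container started by .github/scripts/kind-with-registry.sh (shown at the end of this diff). A hedged stand-alone equivalent for local experiments:

    # start a throwaway local registry compatible with the tag used above
    docker run -d --restart=always -p "127.0.0.1:5001:5000" --name kind-registry registry:2
    # after the action has pushed, the image is reachable through that registry
    docker pull localhost:5001/greptime/greptimedb:latest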
Dev-builder images action under .github/actions (file name not captured)
@@ -22,15 +22,15 @@ inputs:
   build-dev-builder-ubuntu:
     description: Build dev-builder-ubuntu image
     required: false
-    default: 'true'
+    default: "true"
   build-dev-builder-centos:
     description: Build dev-builder-centos image
     required: false
-    default: 'true'
+    default: "true"
   build-dev-builder-android:
     description: Build dev-builder-android image
     required: false
-    default: 'true'
+    default: "true"
 runs:
   using: composite
   steps:
@@ -47,10 +47,10 @@ runs:
       run: |
         make dev-builder \
           BASE_IMAGE=ubuntu \
-          BUILDX_MULTI_PLATFORM_BUILD=true \
+          BUILDX_MULTI_PLATFORM_BUILD=all \
           IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
          IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
-          IMAGE_TAG=${{ inputs.version }}
+          DEV_BUILDER_IMAGE_TAG=${{ inputs.version }}

     - name: Build and push dev-builder-centos image
       shell: bash
@@ -58,10 +58,10 @@ runs:
       run: |
         make dev-builder \
           BASE_IMAGE=centos \
-          BUILDX_MULTI_PLATFORM_BUILD=true \
+          BUILDX_MULTI_PLATFORM_BUILD=amd64 \
           IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
           IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
-          IMAGE_TAG=${{ inputs.version }}
+          DEV_BUILDER_IMAGE_TAG=${{ inputs.version }}

     - name: Build and push dev-builder-android image # Only build image for amd64 platform.
       shell: bash
@@ -71,6 +71,6 @@ runs:
           BASE_IMAGE=android \
           IMAGE_REGISTRY=${{ inputs.dockerhub-image-registry }} \
           IMAGE_NAMESPACE=${{ inputs.dockerhub-image-namespace }} \
-          IMAGE_TAG=${{ inputs.version }} && \
+          DEV_BUILDER_IMAGE_TAG=${{ inputs.version }} && \

         docker push ${{ inputs.dockerhub-image-registry }}/${{ inputs.dockerhub-image-namespace }}/dev-builder-android:${{ inputs.version }}
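The renamed DEV_BUILDER_IMAGE_TAG variable and the per-base-image BUILDX_MULTI_PLATFORM_BUILD values can also be exercised locally. A hedged sketch of the ubuntu step above (the concrete registry, namespace, and tag values here are placeholders, not defaults taken from the Makefile):

    make dev-builder \
      BASE_IMAGE=ubuntu \
      BUILDX_MULTI_PLATFORM_BUILD=all \
      IMAGE_REGISTRY=docker.io \
      IMAGE_NAMESPACE=greptime \
      DEV_BUILDER_IMAGE_TAG=latest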
.github/actions/build-greptime-binary/action.yml (16 changed lines)
@@ -24,6 +24,14 @@ inputs:
     description: Build android artifacts
     required: false
     default: 'false'
+  image-namespace:
+    description: Image Namespace
+    required: false
+    default: 'greptime'
+  image-registry:
+    description: Image Registry
+    required: false
+    default: 'docker.io'
 runs:
   using: composite
   steps:
@@ -35,7 +43,9 @@ runs:
         make build-by-dev-builder \
           CARGO_PROFILE=${{ inputs.cargo-profile }} \
           FEATURES=${{ inputs.features }} \
-          BASE_IMAGE=${{ inputs.base-image }}
+          BASE_IMAGE=${{ inputs.base-image }} \
+          IMAGE_NAMESPACE=${{ inputs.image-namespace }} \
+          IMAGE_REGISTRY=${{ inputs.image-registry }}

     - name: Upload artifacts
       uses: ./.github/actions/upload-artifacts
@@ -53,7 +63,9 @@ runs:
       shell: bash
       if: ${{ inputs.build-android-artifacts == 'true' }}
       run: |
-        cd ${{ inputs.working-dir }} && make strip-android-bin
+        cd ${{ inputs.working-dir }} && make strip-android-bin \
+          IMAGE_NAMESPACE=${{ inputs.image-namespace }} \
+          IMAGE_REGISTRY=${{ inputs.image-registry }}

     - name: Upload android artifacts
       uses: ./.github/actions/upload-artifacts
.github/actions/build-linux-artifacts/action.yml (24 changed lines)
@@ -16,7 +16,13 @@ inputs:
   dev-mode:
     description: Enable dev mode, only build standard greptime
     required: false
-    default: 'false'
+    default: "false"
+  image-namespace:
+    description: Image Namespace
+    required: true
+  image-registry:
+    description: Image Registry
+    required: true
   working-dir:
     description: Working directory to build the artifacts
     required: false
@@ -30,7 +36,9 @@ runs:
       # NOTE: If the BUILD_JOBS > 4, it's always OOM in EC2 instance.
       run: |
         cd ${{ inputs.working-dir }} && \
-        make run-it-in-container BUILD_JOBS=4
+        make run-it-in-container BUILD_JOBS=4 \
+          IMAGE_NAMESPACE=${{ inputs.image-namespace }} \
+          IMAGE_REGISTRY=${{ inputs.image-registry }}

     - name: Upload sqlness logs
       if: ${{ failure() && inputs.disable-run-tests == 'false' }} # Only upload logs when the integration tests failed.
@@ -49,6 +57,8 @@ runs:
       artifacts-dir: greptime-linux-${{ inputs.arch }}-pyo3-${{ inputs.version }}
       version: ${{ inputs.version }}
       working-dir: ${{ inputs.working-dir }}
+      image-registry: ${{ inputs.image-registry }}
+      image-namespace: ${{ inputs.image-namespace }}

   - name: Build greptime without pyo3
     if: ${{ inputs.dev-mode == 'false' }}
@@ -60,6 +70,8 @@ runs:
       artifacts-dir: greptime-linux-${{ inputs.arch }}-${{ inputs.version }}
       version: ${{ inputs.version }}
       working-dir: ${{ inputs.working-dir }}
+      image-registry: ${{ inputs.image-registry }}
+      image-namespace: ${{ inputs.image-namespace }}

   - name: Clean up the target directory # Clean up the target directory for the centos7 base image, or it will still use the objects of last build.
     shell: bash
@@ -68,7 +80,7 @@ runs:

   - name: Build greptime on centos base image
     uses: ./.github/actions/build-greptime-binary
-    if: ${{ inputs.arch == 'amd64' && inputs.dev-mode == 'false' }} # Only build centos7 base image for amd64.
+    if: ${{ inputs.arch == 'amd64' && inputs.dev-mode == 'false' }} # Builds greptime for centos if the host machine is amd64.
     with:
       base-image: centos
       features: servers/dashboard
@@ -76,13 +88,17 @@ runs:
       artifacts-dir: greptime-linux-${{ inputs.arch }}-centos-${{ inputs.version }}
       version: ${{ inputs.version }}
       working-dir: ${{ inputs.working-dir }}
+      image-registry: ${{ inputs.image-registry }}
+      image-namespace: ${{ inputs.image-namespace }}

   - name: Build greptime on android base image
     uses: ./.github/actions/build-greptime-binary
-    if: ${{ inputs.arch == 'amd64' && inputs.dev-mode == 'false' }} # Only build android base image on amd64.
+    if: ${{ inputs.arch == 'amd64' && inputs.dev-mode == 'false' }} # Builds arm64 greptime binary for android if the host machine amd64.
     with:
       base-image: android
       artifacts-dir: greptime-android-arm64-${{ inputs.version }}
       version: ${{ inputs.version }}
       working-dir: ${{ inputs.working-dir }}
       build-android-artifacts: true
+      image-registry: ${{ inputs.image-registry }}
+      image-namespace: ${{ inputs.image-namespace }}
.github/actions/build-macos-artifacts/action.yml (17 changed lines)
@@ -4,9 +4,6 @@ inputs:
   arch:
     description: Architecture to build
     required: true
-  rust-toolchain:
-    description: Rust toolchain to use
-    required: true
   cargo-profile:
     description: Cargo profile to build
     required: true
@@ -43,10 +40,9 @@ runs:
         brew install protobuf

     - name: Install rust toolchain
-      uses: dtolnay/rust-toolchain@master
+      uses: actions-rust-lang/setup-rust-toolchain@v1
       with:
-        toolchain: ${{ inputs.rust-toolchain }}
-        targets: ${{ inputs.arch }}
+        target: ${{ inputs.arch }}

     - name: Start etcd # For integration tests.
       if: ${{ inputs.disable-run-tests == 'false' }}
@@ -59,9 +55,16 @@ runs:
       if: ${{ inputs.disable-run-tests == 'false' }}
       uses: taiki-e/install-action@nextest

+    # Get proper backtraces in mac Sonoma. Currently there's an issue with the new
+    # linker that prevents backtraces from getting printed correctly.
+    #
+    # <https://github.com/rust-lang/rust/issues/113783>
     - name: Run integration tests
       if: ${{ inputs.disable-run-tests == 'false' }}
       shell: bash
+      env:
+        CARGO_BUILD_RUSTFLAGS: "-Clink-arg=-Wl,-ld_classic"
+        SQLNESS_OPTS: "--preserve-state"
       run: |
         make test sqlness-test

@@ -75,6 +78,8 @@ runs:

     - name: Build greptime binary
       shell: bash
+      env:
+        CARGO_BUILD_RUSTFLAGS: "-Clink-arg=-Wl,-ld_classic"
       run: |
         make build \
           CARGO_PROFILE=${{ inputs.cargo-profile }} \
Windows artifacts build action under .github/actions (file name not captured)
@@ -4,9 +4,6 @@ inputs:
   arch:
     description: Architecture to build
     required: true
-  rust-toolchain:
-    description: Rust toolchain to use
-    required: true
   cargo-profile:
     description: Cargo profile to build
     required: true
@@ -28,10 +25,9 @@ runs:
     - uses: arduino/setup-protoc@v3

     - name: Install rust toolchain
-      uses: dtolnay/rust-toolchain@master
+      uses: actions-rust-lang/setup-rust-toolchain@v1
       with:
-        toolchain: ${{ inputs.rust-toolchain }}
-        targets: ${{ inputs.arch }}
+        target: ${{ inputs.arch }}
         components: llvm-tools-preview

     - name: Rust Cache
@@ -40,11 +36,11 @@ runs:
     - name: Install Python
       uses: actions/setup-python@v5
       with:
-        python-version: '3.10'
+        python-version: "3.10"

     - name: Install PyArrow Package
       shell: pwsh
-      run: pip install pyarrow
+      run: pip install pyarrow numpy

     - name: Install WSL distribution
       uses: Vampire/setup-wsl@v2
@@ -59,13 +55,17 @@ runs:
       if: ${{ inputs.disable-run-tests == 'false' }}
       shell: pwsh
       run: make test sqlness-test
+      env:
+        RUSTUP_WINDOWS_PATH_ADD_BIN: 1 # Workaround for https://github.com/nextest-rs/nextest/issues/1493
+        RUST_BACKTRACE: 1
+        SQLNESS_OPTS: "--preserve-state"

     - name: Upload sqlness logs
       if: ${{ failure() }} # Only upload logs when the integration tests failed.
       uses: actions/upload-artifact@v4
       with:
         name: sqlness-logs
-        path: /tmp/greptime-*.log
+        path: C:\Users\RUNNER~1\AppData\Local\Temp\sqlness*
         retention-days: 3

     - name: Build greptime binary
.github/actions/fuzz-test/action.yaml (12 changed lines)
@@ -3,11 +3,17 @@ description: 'Fuzz test given setup and service'
 inputs:
   target:
     description: "The fuzz target to test"
+    required: true
+  max-total-time:
+    description: "Max total time(secs)"
+    required: true
+  unstable:
+    default: 'false'
+    description: "Enable unstable feature"
 runs:
   using: composite
   steps:
     - name: Run Fuzz Test
       shell: bash
-      run: cargo fuzz run ${{ inputs.target }} --fuzz-dir tests-fuzz -D -s none -- -max_total_time=120
-      env:
-        GT_MYSQL_ADDR: 127.0.0.1:4002
+      run: cargo fuzz run ${{ inputs.target }} --fuzz-dir tests-fuzz -D -s none ${{ inputs.unstable == 'true' && '--features=unstable' || '' }} -- -max_total_time=${{ inputs.max-total-time }}
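The env block pinning GT_MYSQL_ADDR was removed from the action, so a local run has to supply it itself (for example from .env, as noted earlier). A hedged sketch of what the updated step executes (the target name fuzz_insert and a GreptimeDB instance already listening on 127.0.0.1:4002 are assumptions):

    export GT_MYSQL_ADDR=127.0.0.1:4002
    # stable target, 120 seconds, mirroring max-total-time=120
    cargo fuzz run fuzz_insert --fuzz-dir tests-fuzz -D -s none -- -max_total_time=120
    # same target with the unstable feature, as selected by the `unstable` input
    cargo fuzz run fuzz_insert --fuzz-dir tests-fuzz -D -s none --features=unstable -- -max_total_time=120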
Image copy/release action under .github/actions (file name not captured)
@@ -123,10 +123,10 @@ runs:
       DST_REGISTRY_PASSWORD: ${{ inputs.dst-image-registry-password }}
     run: |
       ./.github/scripts/copy-image.sh \
-        ${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}-centos:latest \
+        ${{ inputs.src-image-registry }}/${{ inputs.src-image-namespace }}/${{ inputs.src-image-name }}-centos:${{ inputs.version }} \
         ${{ inputs.dst-image-registry }}/${{ inputs.dst-image-namespace }}

-  - name: Push greptimedb-centos image from DockerHub to ACR
+  - name: Push latest greptimedb-centos image from DockerHub to ACR
     shell: bash
     if: ${{ inputs.dev-mode == 'false' && inputs.push-latest-tag == 'true' }}
     env:
.github/actions/setup-chaos/action.yml (new file, +17 lines)
@@ -0,0 +1,17 @@
+name: Setup Kind
+description: Deploy Kind
+runs:
+  using: composite
+  steps:
+    - uses: actions/checkout@v4
+    - name: Create kind cluster
+      shell: bash
+      run: |
+        helm repo add chaos-mesh https://charts.chaos-mesh.org
+        kubectl create ns chaos-mesh
+        helm install chaos-mesh chaos-mesh/chaos-mesh -n=chaos-mesh --version 2.6.3
+    - name: Print Chaos-mesh
+      if: always()
+      shell: bash
+      run: |
+        kubectl get po -n chaos-mesh
.github/actions/setup-cyborg/action.yml (new file, +16 lines)
@@ -0,0 +1,16 @@
+name: Setup cyborg environment
+description: Setup cyborg environment
+runs:
+  using: composite
+  steps:
+    - uses: actions/setup-node@v4
+      with:
+        node-version: 22
+    - uses: pnpm/action-setup@v3
+      with:
+        package_json_file: 'cyborg/package.json'
+        run_install: true
+    - name: Describe the Environment
+      working-directory: cyborg
+      shell: bash
+      run: pnpm tsx -v
.github/actions/setup-etcd-cluster/action.yml (new file, +27 lines)
@@ -0,0 +1,27 @@
+name: Setup Etcd cluster
+description: Deploy Etcd cluster on Kubernetes
+inputs:
+  etcd-replicas:
+    default: 1
+    description: "Etcd replicas"
+  namespace:
+    default: "etcd-cluster"
+
+runs:
+  using: composite
+  steps:
+    - name: Install Etcd cluster
+      shell: bash
+      run: |
+        helm upgrade \
+          --install etcd oci://registry-1.docker.io/bitnamicharts/etcd \
+          --set replicaCount=${{ inputs.etcd-replicas }} \
+          --set resources.requests.cpu=50m \
+          --set resources.requests.memory=128Mi \
+          --set resources.limits.cpu=1500m \
+          --set resources.limits.memory=2Gi \
+          --set auth.rbac.create=false \
+          --set auth.rbac.token.enabled=false \
+          --set persistence.size=2Gi \
+          --create-namespace \
+          -n ${{ inputs.namespace }}
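A hedged health check once the chart above settles (the pod name etcd-0 follows the Bitnami StatefulSet naming convention and is an assumption; the namespace matches the action's default):

    kubectl -n etcd-cluster get pods
    kubectl -n etcd-cluster exec etcd-0 -- etcdctl endpoint health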
.github/actions/setup-greptimedb-cluster/action.yml (new file, +95 lines)
@@ -0,0 +1,95 @@
+name: Setup GreptimeDB cluster
+description: Deploy GreptimeDB cluster on Kubernetes
+inputs:
+  frontend-replicas:
+    default: 2
+    description: "Number of Frontend replicas"
+  datanode-replicas:
+    default: 2
+    description: "Number of Datanode replicas"
+  meta-replicas:
+    default: 3
+    description: "Number of Metasrv replicas"
+  image-registry:
+    default: "docker.io"
+    description: "Image registry"
+  image-repository:
+    default: "greptime/greptimedb"
+    description: "Image repository"
+  image-tag:
+    default: "latest"
+    description: 'Image tag'
+  etcd-endpoints:
+    default: "etcd.etcd-cluster.svc.cluster.local:2379"
+    description: "Etcd endpoints"
+  values-filename:
+    default: "with-minio.yaml"
+  enable-region-failover:
+    default: false
+
+runs:
+  using: composite
+  steps:
+    - name: Install GreptimeDB operator
+      uses: nick-fields/retry@v3
+      with:
+        timeout_minutes: 3
+        max_attempts: 3
+        shell: bash
+        command: |
+          helm repo add greptime https://greptimeteam.github.io/helm-charts/
+          helm repo update
+          helm upgrade \
+            --install \
+            --create-namespace \
+            greptimedb-operator greptime/greptimedb-operator \
+            -n greptimedb-admin \
+            --wait \
+            --wait-for-jobs
+    - name: Install GreptimeDB cluster
+      shell: bash
+      run: |
+        helm upgrade \
+          --install my-greptimedb \
+          --set meta.etcdEndpoints=${{ inputs.etcd-endpoints }} \
+          --set meta.enableRegionFailover=${{ inputs.enable-region-failover }} \
+          --set image.registry=${{ inputs.image-registry }} \
+          --set image.repository=${{ inputs.image-repository }} \
+          --set image.tag=${{ inputs.image-tag }} \
+          --set base.podTemplate.main.resources.requests.cpu=50m \
+          --set base.podTemplate.main.resources.requests.memory=256Mi \
+          --set base.podTemplate.main.resources.limits.cpu=1000m \
+          --set base.podTemplate.main.resources.limits.memory=2Gi \
+          --set frontend.replicas=${{ inputs.frontend-replicas }} \
+          --set datanode.replicas=${{ inputs.datanode-replicas }} \
+          --set meta.replicas=${{ inputs.meta-replicas }} \
+          greptime/greptimedb-cluster \
+          --create-namespace \
+          -n my-greptimedb \
+          --values ./.github/actions/setup-greptimedb-cluster/${{ inputs.values-filename }} \
+          --wait \
+          --wait-for-jobs
+    - name: Wait for GreptimeDB
+      shell: bash
+      run: |
+        while true; do
+          PHASE=$(kubectl -n my-greptimedb get gtc my-greptimedb -o jsonpath='{.status.clusterPhase}')
+          if [ "$PHASE" == "Running" ]; then
+            echo "Cluster is ready"
+            break
+          else
+            echo "Cluster is not ready yet: Current phase: $PHASE"
+            kubectl get pods -n my-greptimedb
+            sleep 5 # wait for 5 seconds before check again.
+          fi
+        done
+    - name: Print GreptimeDB info
+      if: always()
+      shell: bash
+      run: |
+        kubectl get all --show-labels -n my-greptimedb
+    - name: Describe Nodes
+      if: always()
+      shell: bash
+      run: |
+        kubectl describe nodes
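Once the wait loop above reports the clusterPhase as Running, the cluster can be reached through the frontend. A hedged sketch (the Service name my-greptimedb-frontend and the MySQL port 4002 are assumptions based on the operator's usual naming and GreptimeDB's default ports):

    kubectl -n my-greptimedb get gtc my-greptimedb
    kubectl -n my-greptimedb port-forward svc/my-greptimedb-frontend 4002:4002 &
    mysql -h 127.0.0.1 -P 4002 -e "SHOW DATABASES;"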
.github/actions/setup-greptimedb-cluster/with-disk.yaml (new file, +13 lines)
@@ -0,0 +1,13 @@
+meta:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+datanode:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+    compact_rt_size = 2
+frontend:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
.github/actions/setup-greptimedb-cluster/with-minio-and-cache.yaml (new file, +33 lines)
@@ -0,0 +1,33 @@
+meta:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+
+    [datanode]
+    [datanode.client]
+    timeout = "60s"
+datanode:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+    compact_rt_size = 2
+
+    [storage]
+    cache_path = "/data/greptimedb/s3cache"
+    cache_capacity = "256MB"
+frontend:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+
+    [meta_client]
+    ddl_timeout = "60s"
+objectStorage:
+  s3:
+    bucket: default
+    region: us-west-2
+    root: test-root
+    endpoint: http://minio.minio.svc.cluster.local
+    credentials:
+      accessKeyId: rootuser
+      secretAccessKey: rootpass123
.github/actions/setup-greptimedb-cluster/with-minio.yaml (new file, +29 lines)
@@ -0,0 +1,29 @@
+meta:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+
+    [datanode]
+    [datanode.client]
+    timeout = "60s"
+datanode:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+    compact_rt_size = 2
+frontend:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+
+    [meta_client]
+    ddl_timeout = "60s"
+objectStorage:
+  s3:
+    bucket: default
+    region: us-west-2
+    root: test-root
+    endpoint: http://minio.minio.svc.cluster.local
+    credentials:
+      accessKeyId: rootuser
+      secretAccessKey: rootpass123
.github/actions/setup-greptimedb-cluster/with-remote-wal.yaml (new file, +45 lines)
@@ -0,0 +1,45 @@
+meta:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+
+    [wal]
+    provider = "kafka"
+    broker_endpoints = ["kafka.kafka-cluster.svc.cluster.local:9092"]
+    num_topics = 3
+
+
+    [datanode]
+    [datanode.client]
+    timeout = "60s"
+datanode:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+    compact_rt_size = 2
+
+    [wal]
+    provider = "kafka"
+    broker_endpoints = ["kafka.kafka-cluster.svc.cluster.local:9092"]
+    linger = "2ms"
+frontend:
+  configData: |-
+    [runtime]
+    global_rt_size = 4
+
+    [meta_client]
+    ddl_timeout = "60s"
+objectStorage:
+  s3:
+    bucket: default
+    region: us-west-2
+    root: test-root
+    endpoint: http://minio.minio.svc.cluster.local
+    credentials:
+      accessKeyId: rootuser
+      secretAccessKey: rootpass123
+remoteWal:
+  enabled: true
+  kafka:
+    brokerEndpoints:
+      - "kafka.kafka-cluster.svc.cluster.local:9092"
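With this values file the WAL topics live in the Kafka cluster deployed by the setup-kafka-cluster action below. A hedged verification step (the controller pod name and the presence of kafka-topics.sh on its PATH are assumptions about the Bitnami image; the broker address comes from the values above):

    kubectl -n kafka-cluster exec kafka-controller-0 -- \
      kafka-topics.sh --bootstrap-server kafka.kafka-cluster.svc.cluster.local:9092 --list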
.github/actions/setup-kafka-cluster/action.yml (new file, +24 lines)
@@ -0,0 +1,24 @@
+name: Setup Kafka cluster
+description: Deploy Kafka cluster on Kubernetes
+inputs:
+  controller-replicas:
+    default: 3
+    description: "Kafka controller replicas"
+  namespace:
+    default: "kafka-cluster"
+
+runs:
+  using: composite
+  steps:
+    - name: Install Kafka cluster
+      shell: bash
+      run: |
+        helm upgrade \
+          --install kafka oci://registry-1.docker.io/bitnamicharts/kafka \
+          --set controller.replicaCount=${{ inputs.controller-replicas }} \
+          --set controller.resources.requests.cpu=50m \
+          --set controller.resources.requests.memory=128Mi \
+          --set listeners.controller.protocol=PLAINTEXT \
+          --set listeners.client.protocol=PLAINTEXT \
+          --create-namespace \
+          -n ${{ inputs.namespace }}
.github/actions/setup-kind/action.yml (new file, +10 lines)
@@ -0,0 +1,10 @@
+name: Setup Kind
+description: Deploy Kind
+runs:
+  using: composite
+  steps:
+    - uses: actions/checkout@v4
+    - name: Create kind cluster
+      shell: bash
+      run: |
+        ./.github/scripts/kind-with-registry.sh
.github/actions/setup-minio/action.yml (new file, +24 lines)
@@ -0,0 +1,24 @@
+name: Setup Minio cluster
+description: Deploy Minio cluster on Kubernetes
+inputs:
+  replicas:
+    default: 1
+    description: "replicas"
+
+runs:
+  using: composite
+  steps:
+    - name: Install Etcd cluster
+      shell: bash
+      run: |
+        helm repo add minio https://charts.min.io/
+        helm upgrade --install minio \
+          --set resources.requests.memory=128Mi \
+          --set replicas=${{ inputs.replicas }} \
+          --set mode=standalone \
+          --set rootUser=rootuser,rootPassword=rootpass123 \
+          --set buckets[0].name=default \
+          --set service.port=80,service.targetPort=9000 \
+          minio/minio \
+          --create-namespace \
+          -n minio
.github/actions/setup-postgres-cluster/action.yml (new file, +30 lines)
@@ -0,0 +1,30 @@
+name: Setup PostgreSQL
+description: Deploy PostgreSQL on Kubernetes
+inputs:
+  postgres-replicas:
+    default: 1
+    description: "Number of PostgreSQL replicas"
+  namespace:
+    default: "postgres-namespace"
+  postgres-version:
+    default: "14.2"
+    description: "PostgreSQL version"
+  storage-size:
+    default: "1Gi"
+    description: "Storage size for PostgreSQL"
+
+runs:
+  using: composite
+  steps:
+    - name: Install PostgreSQL
+      shell: bash
+      run: |
+        helm upgrade \
+          --install postgresql oci://registry-1.docker.io/bitnamicharts/postgresql \
+          --set replicaCount=${{ inputs.postgres-replicas }} \
+          --set image.tag=${{ inputs.postgres-version }} \
+          --set persistence.size=${{ inputs.storage-size }} \
+          --set postgresql.username=greptimedb \
+          --set postgresql.password=admin \
+          --create-namespace \
+          -n ${{ inputs.namespace }}
.github/actions/sqlness-test/action.yml (11 changed lines)
@@ -57,3 +57,14 @@ runs:
         AWS_SECRET_ACCESS_KEY: ${{ inputs.aws-secret-access-key }}
       run: |
         aws s3 rm s3://${{ inputs.aws-ci-test-bucket }}/${{ inputs.data-root }} --recursive
+    - name: Export kind logs
+      if: failure()
+      shell: bash
+      run: kind export logs -n greptimedb-operator-e2e /tmp/kind
+    - name: Upload logs
+      if: failure()
+      uses: actions/upload-artifact@v4
+      with:
+        name: kind-logs
+        path: /tmp/kind
+        retention-days: 3
.github/actions/start-runner/action.yml (2 changed lines)
@@ -38,7 +38,7 @@ runs:
   steps:
     - name: Configure AWS credentials
       if: startsWith(inputs.runner, 'ec2')
-      uses: aws-actions/configure-aws-credentials@v2
+      uses: aws-actions/configure-aws-credentials@v4
       with:
         aws-access-key-id: ${{ inputs.aws-access-key-id }}
         aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
.github/actions/stop-runner/action.yml (2 changed lines)
@@ -25,7 +25,7 @@ runs:
   steps:
     - name: Configure AWS credentials
       if: ${{ inputs.label && inputs.ec2-instance-id }}
-      uses: aws-actions/configure-aws-credentials@v2
+      uses: aws-actions/configure-aws-credentials@v4
       with:
         aws-access-key-id: ${{ inputs.aws-access-key-id }}
         aws-secret-access-key: ${{ inputs.aws-secret-access-key }}
.github/doc-label-config.yml (vendored, deleted, 4 lines)
@@ -1,4 +0,0 @@
-Doc not needed:
-  - '- \[x\] This PR does not require documentation updates.'
-Doc update required:
-  - '- \[ \] This PR does not require documentation updates.'
(deleted file, 13 lines)
@@ -1,13 +0,0 @@
-{
-  "LABEL": {
-    "name": "breaking change",
-    "color": "D93F0B"
-  },
-  "CHECKS": {
-    "regexp": "^(?:(?!!:).)*$",
-    "ignoreLabels": [
-      "ignore-title"
-    ],
-    "alwaysPassCI": true
-  }
-}
.github/pr-title-checker-config.json (vendored, deleted, 12 lines)
@@ -1,12 +0,0 @@
-{
-  "LABEL": {
-    "name": "Invalid PR Title",
-    "color": "B60205"
-  },
-  "CHECKS": {
-    "regexp": "^(feat|fix|test|refactor|chore|style|docs|perf|build|ci|revert)(\\(.*\\))?\\!?:.*",
-    "ignoreLabels": [
-      "ignore-title"
-    ]
-  }
-}
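The deleted checker enforced Conventional-Commit-style PR titles. A rough shell re-statement of that rule, for illustration only (the pattern is copied from the JSON above; the sample titles are hypothetical):

  regexp='^(feat|fix|test|refactor|chore|style|docs|perf|build|ci|revert)(\(.*\))?!?:.*'
  echo "feat(mito): add region migration" | grep -Eq "$regexp" && echo "accepted"
  echo "add region migration" | grep -Eq "$regexp" || echo "rejected: missing type prefix"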
.github/pull_request_template.md (vendored, 6 lines changed)
@@ -15,6 +15,6 @@ Please explain IN DETAIL what the changes are in this PR and why they are needed.
 
 ## Checklist
 
 - [ ] I have written the necessary rustdoc comments.
 - [ ] I have added the necessary unit tests and integration tests.
-- [x] This PR does not require documentation updates.
+- [ ] This PR requires documentation updates.
.github/scripts/kind-with-registry.sh (new executable file, vendored, 66 lines)
@@ -0,0 +1,66 @@
+#!/usr/bin/env bash
+
+set -e
+set -o pipefail
+
+# 1. Create registry container unless it already exists
+reg_name='kind-registry'
+reg_port='5001'
+if [ "$(docker inspect -f '{{.State.Running}}' "${reg_name}" 2>/dev/null || true)" != 'true' ]; then
+  docker run \
+    -d --restart=always -p "127.0.0.1:${reg_port}:5000" --network bridge --name "${reg_name}" \
+    registry:2
+fi
+
+# 2. Create kind cluster with containerd registry config dir enabled
+# TODO: kind will eventually enable this by default and this patch will
+# be unnecessary.
+#
+# See:
+# https://github.com/kubernetes-sigs/kind/issues/2875
+# https://github.com/containerd/containerd/blob/main/docs/cri/config.md#registry-configuration
+# See: https://github.com/containerd/containerd/blob/main/docs/hosts.md
+cat <<EOF | kind create cluster --wait 2m --config=-
+kind: Cluster
+apiVersion: kind.x-k8s.io/v1alpha4
+containerdConfigPatches:
+- |-
+  [plugins."io.containerd.grpc.v1.cri".registry]
+    config_path = "/etc/containerd/certs.d"
+EOF
+
+# 3. Add the registry config to the nodes
+#
+# This is necessary because localhost resolves to loopback addresses that are
+# network-namespace local.
+# In other words: localhost in the container is not localhost on the host.
+#
+# We want a consistent name that works from both ends, so we tell containerd to
+# alias localhost:${reg_port} to the registry container when pulling images
+REGISTRY_DIR="/etc/containerd/certs.d/localhost:${reg_port}"
+for node in $(kind get nodes); do
+  docker exec "${node}" mkdir -p "${REGISTRY_DIR}"
+  cat <<EOF | docker exec -i "${node}" cp /dev/stdin "${REGISTRY_DIR}/hosts.toml"
+[host."http://${reg_name}:5000"]
+EOF
+done
+
+# 4. Connect the registry to the cluster network if not already connected
+# This allows kind to bootstrap the network but ensures they're on the same network
+if [ "$(docker inspect -f='{{json .NetworkSettings.Networks.kind}}' "${reg_name}")" = 'null' ]; then
+  docker network connect "kind" "${reg_name}"
+fi
+
+# 5. Document the local registry
+# https://github.com/kubernetes/enhancements/tree/master/keps/sig-cluster-lifecycle/generic/1755-communicating-a-local-registry
+cat <<EOF | kubectl apply -f -
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: local-registry-hosting
+  namespace: kube-public
+data:
+  localRegistryHosting.v1: |
+    host: "localhost:${reg_port}"
+    help: "https://kind.sigs.k8s.io/docs/user/local-registry/"
+EOF
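The point of this script is that images pushed to localhost:5001 on the host become pullable inside the kind cluster under the same name. A minimal sketch of how a CI step could rely on that (the image name and tag are hypothetical; only the endpoint comes from the script):

  docker build -t localhost:5001/greptimedb:ci .
  docker push localhost:5001/greptimedb:ci
  # Pods in the kind cluster can now pull localhost:5001/greptimedb:ci, which is why
  # the later fuzz-test jobs pass image-registry: localhost:5001 to setup-greptimedb-cluster.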
.github/workflows/apidoc.yml (vendored, 7 lines changed)
@@ -12,9 +12,6 @@ on:

name: Build API docs

-env:
-  RUST_TOOLCHAIN: nightly-2023-12-19
-
jobs:
  apidoc:
    runs-on: ubuntu-20.04
@@ -23,9 +20,7 @@ jobs:
      - uses: arduino/setup-protoc@v3
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
-      - uses: dtolnay/rust-toolchain@master
-        with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
      - run: cargo doc --workspace --no-deps --document-private-items
      - run: |
          cat <<EOF > target/doc/index.html
.github/workflows/dev-build.yml (vendored, 24 lines changed)
@@ -82,6 +82,9 @@ env:
  # The source code will check out in the following path: '${WORKING_DIR}/dev/greptime'.
  CHECKOUT_GREPTIMEDB_PATH: dev/greptimedb

+permissions:
+  issues: write
+
jobs:
  allocate-runners:
    name: Allocate runners
@@ -174,6 +177,8 @@ jobs:
      disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
      dev-mode: true # Only build the standard greptime binary.
      working-dir: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
+      image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
+      image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}

  build-linux-arm64-artifacts:
    name: Build linux-arm64 artifacts
@@ -203,6 +208,8 @@ jobs:
      disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
      dev-mode: true # Only build the standard greptime binary.
      working-dir: ${{ env.CHECKOUT_GREPTIMEDB_PATH }}
+      image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
+      image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}

  release-images-to-dockerhub:
    name: Build and push images to DockerHub
@@ -321,7 +328,7 @@ jobs:
      github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}

  notification:
-    if: ${{ always() }} # Not requiring successful dependent jobs, always run.
+    if: ${{ github.repository == 'GreptimeTeam/greptimedb' && always() }} # Not requiring successful dependent jobs, always run.
    name: Send notification to Greptime team
    needs: [
      release-images-to-dockerhub
@@ -330,16 +337,25 @@ jobs:
    env:
      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
    steps:
-      - name: Notifiy dev build successful result
+      - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
+      - name: Report CI status
+        id: report-ci-status
+        working-directory: cyborg
+        run: pnpm tsx bin/report-ci-failure.ts
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          CI_REPORT_STATUS: ${{ needs.release-images-to-dockerhub.outputs.build-result == 'success' }}
+      - name: Notify dev build successful result
        uses: slackapi/slack-github-action@v1.23.0
        if: ${{ needs.release-images-to-dockerhub.outputs.build-result == 'success' }}
        with:
          payload: |
            {"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has completed successfully."}

-      - name: Notifiy dev build failed result
+      - name: Notify dev build failed result
        uses: slackapi/slack-github-action@v1.23.0
        if: ${{ needs.release-images-to-dockerhub.outputs.build-result != 'success' }}
        with:
          payload: |
-            {"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/${{ env.NEXT_RELEASE_VERSION }}-build.yml'."}
+            {"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}
.github/workflows/develop.yml (vendored, 609 lines changed)
@@ -29,32 +29,39 @@ concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

-env:
-  RUST_TOOLCHAIN: nightly-2023-12-19
-
jobs:
-  typos:
-    name: Spell Check with Typos
+  check-typos-and-docs:
+    name: Check typos and docs
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v4
-      - uses: crate-ci/typos@v1.13.10
+      - uses: crate-ci/typos@master
+      - name: Check the config docs
+        run: |
+          make config-docs && \
+          git diff --name-only --exit-code ./config/config.md \
+          || (echo "'config/config.md' is not up-to-date, please run 'make config-docs'." && exit 1)
+
+  license-header-check:
+    runs-on: ubuntu-20.04
+    name: Check License Header
+    steps:
+      - uses: actions/checkout@v4
+      - uses: korandoru/hawkeye@v5
+
  check:
    name: Check
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
-        os: [ windows-latest, ubuntu-20.04 ]
+        os: [ windows-2022, ubuntu-20.04 ]
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
      - uses: arduino/setup-protoc@v3
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
-      - uses: dtolnay/rust-toolchain@master
-        with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
        with:
@@ -70,9 +77,7 @@ jobs:
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
-      - uses: dtolnay/rust-toolchain@master
-        with:
-          toolchain: stable
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
        with:
@@ -93,16 +98,20 @@ jobs:
    steps:
      - uses: actions/checkout@v4
      - uses: arduino/setup-protoc@v3
-      - uses: dtolnay/rust-toolchain@master
        with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
      - uses: Swatinem/rust-cache@v2
        with:
          # Shares across multiple jobs
          shared-key: "build-binaries"
+      - name: Install cargo-gc-bin
+        shell: bash
+        run: cargo install cargo-gc-bin
      - name: Build greptime binaries
        shell: bash
-        run: cargo build --bin greptime --bin sqlness-runner
+        # `cargo gc` will invoke `cargo build` with specified args
+        run: cargo gc -- --bin greptime --bin sqlness-runner
      - name: Pack greptime binaries
        shell: bash
        run: |
@@ -121,15 +130,87 @@ jobs:
    name: Fuzz Test
    needs: build
    runs-on: ubuntu-latest
+    timeout-minutes: 60
    strategy:
+      fail-fast: false
      matrix:
-        target: [ "fuzz_create_table", "fuzz_alter_table" ]
+        target: [ "fuzz_create_table", "fuzz_alter_table", "fuzz_create_database", "fuzz_create_logical_table", "fuzz_alter_logical_table", "fuzz_insert", "fuzz_insert_logical_table" ]
    steps:
+      - name: Remove unused software
+        run: |
+          echo "Disk space before:"
+          df -h
+          [[ -d /usr/share/dotnet ]] && sudo rm -rf /usr/share/dotnet
+          [[ -d /usr/local/lib/android ]] && sudo rm -rf /usr/local/lib/android
+          [[ -d /opt/ghc ]] && sudo rm -rf /opt/ghc
+          [[ -d /opt/hostedtoolcache/CodeQL ]] && sudo rm -rf /opt/hostedtoolcache/CodeQL
+          sudo docker image prune --all --force
+          sudo docker builder prune -a
+          echo "Disk space after:"
+          df -h
      - uses: actions/checkout@v4
      - uses: arduino/setup-protoc@v3
-      - uses: dtolnay/rust-toolchain@master
        with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
+      - name: Rust Cache
+        uses: Swatinem/rust-cache@v2
+        with:
+          # Shares across multiple jobs
+          shared-key: "fuzz-test-targets"
+      - name: Set Rust Fuzz
+        shell: bash
+        run: |
+          sudo apt-get install -y libfuzzer-14-dev
+          rustup install nightly
+          cargo +nightly install cargo-fuzz cargo-gc-bin
+      - name: Download pre-built binaries
+        uses: actions/download-artifact@v4
+        with:
+          name: bins
+          path: .
+      - name: Unzip binaries
+        run: |
+          tar -xvf ./bins.tar.gz
+          rm ./bins.tar.gz
+      - name: Run GreptimeDB
+        run: |
+          ./bins/greptime standalone start&
+      - name: Fuzz Test
+        uses: ./.github/actions/fuzz-test
+        env:
+          CUSTOM_LIBFUZZER_PATH: /usr/lib/llvm-14/lib/libFuzzer.a
+          GT_MYSQL_ADDR: 127.0.0.1:4002
+        with:
+          target: ${{ matrix.target }}
+          max-total-time: 120
+
+  unstable-fuzztest:
+    name: Unstable Fuzz Test
+    needs: build-greptime-ci
+    runs-on: ubuntu-latest
+    timeout-minutes: 60
+    strategy:
+      matrix:
+        target: [ "unstable_fuzz_create_table_standalone" ]
+    steps:
+      - name: Remove unused software
+        run: |
+          echo "Disk space before:"
+          df -h
+          [[ -d /usr/share/dotnet ]] && sudo rm -rf /usr/share/dotnet
+          [[ -d /usr/local/lib/android ]] && sudo rm -rf /usr/local/lib/android
+          [[ -d /opt/ghc ]] && sudo rm -rf /opt/ghc
+          [[ -d /opt/hostedtoolcache/CodeQL ]] && sudo rm -rf /opt/hostedtoolcache/CodeQL
+          sudo docker image prune --all --force
+          sudo docker builder prune -a
+          echo "Disk space after:"
+          df -h
+      - uses: actions/checkout@v4
+      - uses: arduino/setup-protoc@v3
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
        with:
@@ -139,79 +220,403 @@ jobs:
        shell: bash
        run: |
          sudo apt update && sudo apt install -y libfuzzer-14-dev
-          cargo install cargo-fuzz
-      - name: Download pre-built binaries
+          cargo install cargo-fuzz cargo-gc-bin
+      - name: Download pre-built binary
        uses: actions/download-artifact@v4
        with:
-          name: bins
+          name: bin
          path: .
-      - name: Unzip binaries
-        run: tar -xvf ./bins.tar.gz
-      - name: Run GreptimeDB
+      - name: Unzip binary
        run: |
-          ./bins/greptime standalone start&
+          tar -xvf ./bin.tar.gz
+          rm ./bin.tar.gz
+      - name: Run Fuzz Test
+        uses: ./.github/actions/fuzz-test
+        env:
+          CUSTOM_LIBFUZZER_PATH: /usr/lib/llvm-14/lib/libFuzzer.a
+          GT_MYSQL_ADDR: 127.0.0.1:4002
+          GT_FUZZ_BINARY_PATH: ./bin/greptime
+          GT_FUZZ_INSTANCE_ROOT_DIR: /tmp/unstable-greptime/
+        with:
+          target: ${{ matrix.target }}
+          max-total-time: 120
+          unstable: 'true'
+      - name: Upload unstable fuzz test logs
+        if: failure()
+        uses: actions/upload-artifact@v4
+        with:
+          name: unstable-fuzz-logs
+          path: /tmp/unstable-greptime/
+          retention-days: 3
+
+  build-greptime-ci:
+    name: Build GreptimeDB binary (profile-CI)
+    runs-on: ${{ matrix.os }}
+    strategy:
+      matrix:
+        os: [ ubuntu-20.04 ]
+    timeout-minutes: 60
+    steps:
+      - uses: actions/checkout@v4
+      - uses: arduino/setup-protoc@v3
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
+      - uses: Swatinem/rust-cache@v2
+        with:
+          # Shares across multiple jobs
+          shared-key: "build-greptime-ci"
+      - name: Install cargo-gc-bin
+        shell: bash
+        run: cargo install cargo-gc-bin
+      - name: Check aws-lc-sys will not build
+        shell: bash
+        run: |
+          if cargo tree -i aws-lc-sys -e features | grep -q aws-lc-sys; then
+            echo "Found aws-lc-sys, which has compilation problems on older gcc versions. Please replace it with ring until its building experience improves."
+            exit 1
+          fi
+      - name: Build greptime binary
+        shell: bash
+        # `cargo gc` will invoke `cargo build` with specified args
+        run: cargo gc --profile ci -- --bin greptime
+      - name: Pack greptime binary
+        shell: bash
+        run: |
+          mkdir bin && \
+          mv ./target/ci/greptime bin
+      - name: Print greptime binaries info
+        run: ls -lh bin
+      - name: Upload artifacts
+        uses: ./.github/actions/upload-artifacts
+        with:
+          artifacts-dir: bin
+          version: current
+
+  distributed-fuzztest:
+    name: Fuzz Test (Distributed, ${{ matrix.mode.name }}, ${{ matrix.target }})
+    runs-on: ubuntu-latest
+    needs: build-greptime-ci
+    timeout-minutes: 60
+    strategy:
+      matrix:
+        target: [ "fuzz_create_table", "fuzz_alter_table", "fuzz_create_database", "fuzz_create_logical_table", "fuzz_alter_logical_table", "fuzz_insert", "fuzz_insert_logical_table" ]
+        mode:
+          - name: "Remote WAL"
+            minio: true
+            kafka: true
+            values: "with-remote-wal.yaml"
+    steps:
+      - name: Remove unused software
+        run: |
+          echo "Disk space before:"
+          df -h
+          [[ -d /usr/share/dotnet ]] && sudo rm -rf /usr/share/dotnet
+          [[ -d /usr/local/lib/android ]] && sudo rm -rf /usr/local/lib/android
+          [[ -d /opt/ghc ]] && sudo rm -rf /opt/ghc
+          [[ -d /opt/hostedtoolcache/CodeQL ]] && sudo rm -rf /opt/hostedtoolcache/CodeQL
+          sudo docker image prune --all --force
+          sudo docker builder prune -a
+          echo "Disk space after:"
+          df -h
+      - uses: actions/checkout@v4
+      - name: Setup Kind
+        uses: ./.github/actions/setup-kind
+      - if: matrix.mode.minio
+        name: Setup Minio
+        uses: ./.github/actions/setup-minio
+      - if: matrix.mode.kafka
+        name: Setup Kafka cluster
+        uses: ./.github/actions/setup-kafka-cluster
+      - name: Setup Etcd cluster
+        uses: ./.github/actions/setup-etcd-cluster
+      - name: Setup Postgres cluster
+        uses: ./.github/actions/setup-postgres-cluster
+      # Prepares for fuzz tests
+      - uses: arduino/setup-protoc@v3
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
+      - name: Rust Cache
+        uses: Swatinem/rust-cache@v2
+        with:
+          # Shares across multiple jobs
+          shared-key: "fuzz-test-targets"
+      - name: Set Rust Fuzz
+        shell: bash
+        run: |
+          sudo apt-get install -y libfuzzer-14-dev
+          rustup install nightly
+          cargo +nightly install cargo-fuzz cargo-gc-bin
+      # Downloads ci image
+      - name: Download pre-built binary
+        uses: actions/download-artifact@v4
+        with:
+          name: bin
+          path: .
+      - name: Unzip binary
+        run: |
+          tar -xvf ./bin.tar.gz
+          rm ./bin.tar.gz
+      - name: Build and push GreptimeDB image
+        uses: ./.github/actions/build-and-push-ci-image
+      - name: Wait for etcd
+        run: |
+          kubectl wait \
+            --for=condition=Ready \
+            pod -l app.kubernetes.io/instance=etcd \
+            --timeout=120s \
+            -n etcd-cluster
+      - if: matrix.mode.minio
+        name: Wait for minio
+        run: |
+          kubectl wait \
+            --for=condition=Ready \
+            pod -l app=minio \
+            --timeout=120s \
+            -n minio
+      - if: matrix.mode.kafka
+        name: Wait for kafka
+        run: |
+          kubectl wait \
+            --for=condition=Ready \
+            pod -l app.kubernetes.io/instance=kafka \
+            --timeout=120s \
+            -n kafka-cluster
+      - name: Print etcd info
+        shell: bash
+        run: kubectl get all --show-labels -n etcd-cluster
+      # Setup cluster for test
+      - name: Setup GreptimeDB cluster
+        uses: ./.github/actions/setup-greptimedb-cluster
+        with:
+          image-registry: localhost:5001
+          values-filename: ${{ matrix.mode.values }}
+      - name: Port forward (mysql)
+        run: |
+          kubectl port-forward service/my-greptimedb-frontend 4002:4002 -n my-greptimedb&
      - name: Fuzz Test
        uses: ./.github/actions/fuzz-test
        env:
          CUSTOM_LIBFUZZER_PATH: /usr/lib/llvm-14/lib/libFuzzer.a
+          GT_MYSQL_ADDR: 127.0.0.1:4002
        with:
          target: ${{ matrix.target }}
+          max-total-time: 120
+      - name: Describe Nodes
+        if: failure()
+        shell: bash
+        run: |
+          kubectl describe nodes
+      - name: Export kind logs
+        if: failure()
+        shell: bash
+        run: |
+          kind export logs /tmp/kind
+      - name: Upload logs
+        if: failure()
+        uses: actions/upload-artifact@v4
+        with:
+          name: fuzz-tests-kind-logs-${{ matrix.mode.name }}-${{ matrix.target }}
+          path: /tmp/kind
+          retention-days: 3
+      - name: Delete cluster
+        if: success()
+        shell: bash
+        run: |
+          kind delete cluster
+          docker stop $(docker ps -a -q)
+          docker rm $(docker ps -a -q)
+          docker system prune -f
+
+  distributed-fuzztest-with-chaos:
+    name: Fuzz Test with Chaos (Distributed, ${{ matrix.mode.name }}, ${{ matrix.target }})
+    runs-on: ubuntu-latest
+    needs: build-greptime-ci
+    timeout-minutes: 60
+    strategy:
+      matrix:
+        target: ["fuzz_migrate_mito_regions", "fuzz_migrate_metric_regions", "fuzz_failover_mito_regions", "fuzz_failover_metric_regions"]
+        mode:
+          - name: "Remote WAL"
+            minio: true
+            kafka: true
+            values: "with-remote-wal.yaml"
+        include:
+          - target: "fuzz_migrate_mito_regions"
+            mode:
+              name: "Local WAL"
+              minio: true
+              kafka: false
+              values: "with-minio.yaml"
+          - target: "fuzz_migrate_metric_regions"
+            mode:
+              name: "Local WAL"
+              minio: true
+              kafka: false
+              values: "with-minio.yaml"
+    steps:
+      - name: Remove unused software
+        run: |
+          echo "Disk space before:"
+          df -h
+          [[ -d /usr/share/dotnet ]] && sudo rm -rf /usr/share/dotnet
+          [[ -d /usr/local/lib/android ]] && sudo rm -rf /usr/local/lib/android
+          [[ -d /opt/ghc ]] && sudo rm -rf /opt/ghc
+          [[ -d /opt/hostedtoolcache/CodeQL ]] && sudo rm -rf /opt/hostedtoolcache/CodeQL
+          sudo docker image prune --all --force
+          sudo docker builder prune -a
+          echo "Disk space after:"
+          df -h
+      - uses: actions/checkout@v4
+      - name: Setup Kind
+        uses: ./.github/actions/setup-kind
+      - name: Setup Chaos Mesh
+        uses: ./.github/actions/setup-chaos
+      - if: matrix.mode.minio
+        name: Setup Minio
+        uses: ./.github/actions/setup-minio
+      - if: matrix.mode.kafka
+        name: Setup Kafka cluster
+        uses: ./.github/actions/setup-kafka-cluster
+      - name: Setup Etcd cluster
+        uses: ./.github/actions/setup-etcd-cluster
+      - name: Setup Postgres cluster
+        uses: ./.github/actions/setup-postgres-cluster
+      # Prepares for fuzz tests
+      - uses: arduino/setup-protoc@v3
+        with:
+          repo-token: ${{ secrets.GITHUB_TOKEN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
+      - name: Rust Cache
+        uses: Swatinem/rust-cache@v2
+        with:
+          # Shares across multiple jobs
+          shared-key: "fuzz-test-targets"
+      - name: Set Rust Fuzz
+        shell: bash
+        run: |
+          sudo apt-get install -y libfuzzer-14-dev
+          rustup install nightly
+          cargo +nightly install cargo-fuzz cargo-gc-bin
+      # Downloads ci image
+      - name: Download pre-built binary
+        uses: actions/download-artifact@v4
+        with:
+          name: bin
+          path: .
+      - name: Unzip binary
+        run: |
+          tar -xvf ./bin.tar.gz
+          rm ./bin.tar.gz
+      - name: Build and push GreptimeDB image
+        uses: ./.github/actions/build-and-push-ci-image
+      - name: Wait for etcd
+        run: |
+          kubectl wait \
+            --for=condition=Ready \
+            pod -l app.kubernetes.io/instance=etcd \
+            --timeout=120s \
+            -n etcd-cluster
+      - if: matrix.mode.minio
+        name: Wait for minio
+        run: |
+          kubectl wait \
+            --for=condition=Ready \
+            pod -l app=minio \
+            --timeout=120s \
+            -n minio
+      - if: matrix.mode.kafka
+        name: Wait for kafka
+        run: |
+          kubectl wait \
+            --for=condition=Ready \
+            pod -l app.kubernetes.io/instance=kafka \
+            --timeout=120s \
+            -n kafka-cluster
+      - name: Print etcd info
+        shell: bash
+        run: kubectl get all --show-labels -n etcd-cluster
+      # Setup cluster for test
+      - name: Setup GreptimeDB cluster
+        uses: ./.github/actions/setup-greptimedb-cluster
+        with:
+          image-registry: localhost:5001
+          values-filename: ${{ matrix.mode.values }}
+          enable-region-failover: ${{ matrix.mode.kafka }}
+      - name: Port forward (mysql)
+        run: |
+          kubectl port-forward service/my-greptimedb-frontend 4002:4002 -n my-greptimedb&
+      - name: Fuzz Test
+        uses: ./.github/actions/fuzz-test
+        env:
+          CUSTOM_LIBFUZZER_PATH: /usr/lib/llvm-14/lib/libFuzzer.a
+          GT_MYSQL_ADDR: 127.0.0.1:4002
+        with:
+          target: ${{ matrix.target }}
+          max-total-time: 120
+      - name: Describe Nodes
+        if: failure()
+        shell: bash
+        run: |
+          kubectl describe nodes
+      - name: Export kind logs
+        if: failure()
+        shell: bash
+        run: |
+          kind export logs /tmp/kind
+      - name: Upload logs
+        if: failure()
+        uses: actions/upload-artifact@v4
+        with:
+          name: fuzz-tests-kind-logs-${{ matrix.mode.name }}-${{ matrix.target }}
+          path: /tmp/kind
+          retention-days: 3
+      - name: Delete cluster
+        if: success()
+        shell: bash
+        run: |
+          kind delete cluster
+          docker stop $(docker ps -a -q)
+          docker rm $(docker ps -a -q)
+          docker system prune -f

  sqlness:
-    name: Sqlness Test
+    name: Sqlness Test (${{ matrix.mode.name }})
    needs: build
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ ubuntu-20.04 ]
+        mode:
+          - name: "Basic"
+            opts: ""
+            kafka: false
+          - name: "Remote WAL"
+            opts: "-w kafka -k 127.0.0.1:9092"
+            kafka: true
    timeout-minutes: 60
    steps:
      - uses: actions/checkout@v4
-      - name: Download pre-built binaries
-        uses: actions/download-artifact@v4
-        with:
-          name: bins
-          path: .
-      - name: Unzip binaries
-        run: tar -xvf ./bins.tar.gz
-      - name: Run sqlness
-        run: RUST_BACKTRACE=1 ./bins/sqlness-runner -c ./tests/cases --bins-dir ./bins
-      - name: Upload sqlness logs
-        if: always()
-        uses: actions/upload-artifact@v4
-        with:
-          name: sqlness-logs
-          path: /tmp/greptime-*.log
-          retention-days: 3
-
-  sqlness-kafka-wal:
-    name: Sqlness Test with Kafka Wal
-    needs: build
-    runs-on: ${{ matrix.os }}
-    strategy:
-      matrix:
-        os: [ ubuntu-20.04 ]
-    timeout-minutes: 60
-    steps:
-      - uses: actions/checkout@v4
-      - name: Download pre-built binaries
-        uses: actions/download-artifact@v4
-        with:
-          name: bins
-          path: .
-      - name: Unzip binaries
-        run: tar -xvf ./bins.tar.gz
-      - name: Setup kafka server
+      - if: matrix.mode.kafka
+        name: Setup kafka server
        working-directory: tests-integration/fixtures/kafka
        run: docker compose -f docker-compose-standalone.yml up -d --wait
+      - name: Download pre-built binaries
+        uses: actions/download-artifact@v4
+        with:
+          name: bins
+          path: .
+      - name: Unzip binaries
+        run: tar -xvf ./bins.tar.gz
      - name: Run sqlness
-        run: RUST_BACKTRACE=1 ./bins/sqlness-runner -w kafka -k 127.0.0.1:9092 -c ./tests/cases --bins-dir ./bins
+        run: RUST_BACKTRACE=1 ./bins/sqlness-runner ${{ matrix.mode.opts }} -c ./tests/cases --bins-dir ./bins --preserve-state
      - name: Upload sqlness logs
-        if: always()
+        if: failure()
        uses: actions/upload-artifact@v4
        with:
-          name: sqlness-logs-with-kafka-wal
+          name: sqlness-logs-${{ matrix.mode.name }}
-          path: /tmp/greptime-*.log
+          path: /tmp/sqlness*
          retention-days: 3

  fmt:
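For context, the port-forward plus GT_MYSQL_ADDR above is what lets the fuzz targets reach the cluster's frontend over the MySQL protocol; an equivalent manual check would look roughly like this (illustrative, assumes a mysql client is installed):

  kubectl port-forward service/my-greptimedb-frontend 4002:4002 -n my-greptimedb &
  mysql --host 127.0.0.1 --port 4002 -e "SHOW DATABASES;"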
@@ -223,17 +628,16 @@ jobs:
      - uses: arduino/setup-protoc@v3
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
-      - uses: dtolnay/rust-toolchain@master
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
        with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
          components: rustfmt
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
        with:
          # Shares across multiple jobs
          shared-key: "check-rust-fmt"
-      - name: Run cargo fmt
-        run: cargo fmt --all -- --check
+      - name: Check format
+        run: make fmt-check

  clippy:
    name: Clippy
@@ -244,9 +648,8 @@ jobs:
      - uses: arduino/setup-protoc@v3
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
-      - uses: dtolnay/rust-toolchain@master
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
        with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
          components: clippy
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
@@ -255,7 +658,7 @@ jobs:
          # Shares with `Check` job
          shared-key: "check-lint"
      - name: Run cargo clippy
-        run: cargo clippy --workspace --all-targets -- -D warnings
+        run: make clippy

  coverage:
    if: github.event.pull_request.draft == false
@@ -270,9 +673,8 @@ jobs:
        with:
          version: "14.0"
      - name: Install toolchain
-        uses: dtolnay/rust-toolchain@master
+        uses: actions-rust-lang/setup-rust-toolchain@v1
        with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
          components: llvm-tools-preview
      - name: Rust Cache
        uses: Swatinem/rust-cache@v2
@@ -292,25 +694,38 @@ jobs:
        with:
          python-version: '3.10'
      - name: Install PyArrow Package
-        run: pip install pyarrow
+        run: pip install pyarrow numpy
      - name: Setup etcd server
        working-directory: tests-integration/fixtures/etcd
        run: docker compose -f docker-compose-standalone.yml up -d --wait
      - name: Setup kafka server
        working-directory: tests-integration/fixtures/kafka
        run: docker compose -f docker-compose-standalone.yml up -d --wait
+      - name: Setup minio
+        working-directory: tests-integration/fixtures/minio
+        run: docker compose -f docker-compose-standalone.yml up -d --wait
+      - name: Setup postgres server
+        working-directory: tests-integration/fixtures/postgres
+        run: docker compose -f docker-compose-standalone.yml up -d --wait
      - name: Run nextest cases
        run: cargo llvm-cov nextest --workspace --lcov --output-path lcov.info -F pyo3_backend -F dashboard
        env:
          CARGO_BUILD_RUSTFLAGS: "-C link-arg=-fuse-ld=lld"
          RUST_BACKTRACE: 1
          CARGO_INCREMENTAL: 0
-          GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
+          GT_S3_BUCKET: ${{ vars.AWS_CI_TEST_BUCKET }}
-          GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
+          GT_S3_ACCESS_KEY_ID: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
-          GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
+          GT_S3_ACCESS_KEY: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}
-          GT_S3_REGION: ${{ secrets.S3_REGION }}
+          GT_S3_REGION: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
+          GT_MINIO_BUCKET: greptime
+          GT_MINIO_ACCESS_KEY_ID: superpower_ci_user
+          GT_MINIO_ACCESS_KEY: superpower_password
+          GT_MINIO_REGION: us-west-2
+          GT_MINIO_ENDPOINT_URL: http://127.0.0.1:9000
          GT_ETCD_ENDPOINTS: http://127.0.0.1:2379
+          GT_POSTGRES_ENDPOINTS: postgres://greptimedb:admin@127.0.0.1:5432/postgres
          GT_KAFKA_ENDPOINTS: 127.0.0.1:9092
+          GT_KAFKA_SASL_ENDPOINTS: 127.0.0.1:9093
          UNITTEST_LOG_DIR: "__unittest_logs"
      - name: Codecov upload
        uses: codecov/codecov-action@v4
@@ -321,20 +736,20 @@ jobs:
          fail_ci_if_error: false
          verbose: true

-  compat:
-    name: Compatibility Test
-    needs: build
-    runs-on: ubuntu-20.04
-    timeout-minutes: 60
-    steps:
-      - uses: actions/checkout@v4
-      - name: Download pre-built binaries
-        uses: actions/download-artifact@v4
-        with:
-          name: bins
-          path: .
-      - name: Unzip binaries
-        run: |
-          mkdir -p ./bins/current
-          tar -xvf ./bins.tar.gz --strip-components=1 -C ./bins/current
-      - run: ./tests/compat/test-compat.sh 0.6.0
+  # compat:
+  #   name: Compatibility Test
+  #   needs: build
+  #   runs-on: ubuntu-20.04
+  #   timeout-minutes: 60
+  #   steps:
+  #     - uses: actions/checkout@v4
+  #     - name: Download pre-built binaries
+  #       uses: actions/download-artifact@v4
+  #       with:
+  #         name: bins
+  #         path: .
+  #     - name: Unzip binaries
+  #       run: |
+  #         mkdir -p ./bins/current
+  #         tar -xvf ./bins.tar.gz --strip-components=1 -C ./bins/current
+  #     - run: ./tests/compat/test-compat.sh 0.6.0
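As a worked example of the new sqlness matrix: for the "Remote WAL" entry the `${{ matrix.mode.opts }}` placeholder expands so the runner invocation becomes (values taken from the matrix above):

  RUST_BACKTRACE=1 ./bins/sqlness-runner -w kafka -k 127.0.0.1:9092 -c ./tests/cases --bins-dir ./bins --preserve-state

For the "Basic" entry `opts` is empty, so the Kafka flags simply drop out.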
.github/workflows/doc-issue.yml (vendored, deleted, 39 lines)
@@ -1,39 +0,0 @@
-name: Create Issue in downstream repos
-
-on:
-  issues:
-    types:
-      - labeled
-  pull_request_target:
-    types:
-      - labeled
-
-jobs:
-  doc_issue:
-    if: github.event.label.name == 'doc update required'
-    runs-on: ubuntu-20.04
-    steps:
-      - name: create an issue in doc repo
-        uses: dacbd/create-issue-action@v1.2.1
-        with:
-          owner: GreptimeTeam
-          repo: docs
-          token: ${{ secrets.DOCS_REPO_TOKEN }}
-          title: Update docs for ${{ github.event.issue.title || github.event.pull_request.title }}
-          body: |
-            A document change request is generated from
-            ${{ github.event.issue.html_url || github.event.pull_request.html_url }}
-  cloud_issue:
-    if: github.event.label.name == 'cloud followup required'
-    runs-on: ubuntu-20.04
-    steps:
-      - name: create an issue in cloud repo
-        uses: dacbd/create-issue-action@v1.2.1
-        with:
-          owner: GreptimeTeam
-          repo: greptimedb-cloud
-          token: ${{ secrets.DOCS_REPO_TOKEN }}
-          title: Followup changes in ${{ github.event.issue.title || github.event.pull_request.title }}
-          body: |
-            A followup request is generated from
-            ${{ github.event.issue.html_url || github.event.pull_request.html_url }}
.github/workflows/doc-label.yml (vendored, deleted, 36 lines)
@@ -1,36 +0,0 @@
-name: "PR Doc Labeler"
-on:
-  pull_request_target:
-    types: [opened, edited, synchronize, ready_for_review, auto_merge_enabled, labeled, unlabeled]
-
-permissions:
-  pull-requests: write
-  contents: read
-
-jobs:
-  triage:
-    if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
-    runs-on: ubuntu-latest
-    steps:
-      - uses: github/issue-labeler@v3.4
-        with:
-          configuration-path: .github/doc-label-config.yml
-          enable-versioned-regex: false
-          repo-token: ${{ secrets.GITHUB_TOKEN }}
-          sync-labels: 1
-      - name: create an issue in doc repo
-        uses: dacbd/create-issue-action@v1.2.1
-        if: ${{ github.event.action == 'opened' && contains(github.event.pull_request.body, '- [ ] This PR does not require documentation updates.') }}
-        with:
-          owner: GreptimeTeam
-          repo: docs
-          token: ${{ secrets.DOCS_REPO_TOKEN }}
-          title: Update docs for ${{ github.event.issue.title || github.event.pull_request.title }}
-          body: |
-            A document change request is generated from
-            ${{ github.event.issue.html_url || github.event.pull_request.html_url }}
-      - name: Check doc labels
-        uses: docker://agilepathway/pull-request-label-checker:latest
-        with:
-          one_of: Doc update required,Doc not needed
-          repo_token: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/docbot.yml (new file, vendored, 22 lines)
@@ -0,0 +1,22 @@
+name: Follow Up Docs
+on:
+  pull_request_target:
+    types: [opened, edited]
+
+permissions:
+  pull-requests: write
+  contents: read
+
+jobs:
+  docbot:
+    runs-on: ubuntu-20.04
+    timeout-minutes: 10
+    steps:
+      - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
+      - name: Maybe Follow Up Docs Issue
+        working-directory: cyborg
+        run: pnpm tsx bin/follow-up-docs-issue.ts
+        env:
+          DOCS_REPO_TOKEN: ${{ secrets.DOCS_REPO_TOKEN }}
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
.github/workflows/docs.yml (vendored, 23 lines changed)
@@ -34,7 +34,14 @@ jobs:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v4
-      - uses: crate-ci/typos@v1.13.10
+      - uses: crate-ci/typos@master

+  license-header-check:
+    runs-on: ubuntu-20.04
+    name: Check License Header
+    steps:
+      - uses: actions/checkout@v4
+      - uses: korandoru/hawkeye@v5
+
  check:
    name: Check
@@ -60,19 +67,13 @@ jobs:
      - run: 'echo "No action required"'

  sqlness:
-    name: Sqlness Test
+    name: Sqlness Test (${{ matrix.mode.name }})
-    runs-on: ${{ matrix.os }}
-    strategy:
-      matrix:
-        os: [ ubuntu-20.04 ]
-    steps:
-      - run: 'echo "No action required"'
-
-  sqlness-kafka-wal:
-    name: Sqlness Test with Kafka Wal
    runs-on: ${{ matrix.os }}
    strategy:
      matrix:
        os: [ ubuntu-20.04 ]
+        mode:
+          - name: "Basic"
+          - name: "Remote WAL"
    steps:
      - run: 'echo "No action required"'
.github/workflows/license.yaml (vendored, deleted, 16 lines)
@@ -1,16 +0,0 @@
-name: License checker
-
-on:
-  push:
-    branches:
-      - main
-  pull_request:
-    types: [opened, synchronize, reopened, ready_for_review]
-jobs:
-  license-header-check:
-    runs-on: ubuntu-20.04
-    name: license-header-check
-    steps:
-      - uses: actions/checkout@v4
-      - name: Check License Header
-        uses: korandoru/hawkeye@v5
.github/workflows/nightly-build.yml (vendored, 35 lines changed)
@@ -66,6 +66,13 @@ env:

  NIGHTLY_RELEASE_PREFIX: nightly

+  # Use the different image name to avoid conflict with the release images.
+  # The DockerHub image will be greptime/greptimedb-nightly.
+  IMAGE_NAME: greptimedb-nightly
+
+permissions:
+  issues: write
+
jobs:
  allocate-runners:
    name: Allocate runners
@@ -147,6 +154,8 @@ jobs:
      cargo-profile: ${{ env.CARGO_PROFILE }}
      version: ${{ needs.allocate-runners.outputs.version }}
      disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
+      image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
+      image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}

  build-linux-arm64-artifacts:
    name: Build linux-arm64 artifacts
@@ -166,6 +175,8 @@ jobs:
      cargo-profile: ${{ env.CARGO_PROFILE }}
      version: ${{ needs.allocate-runners.outputs.version }}
      disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
+      image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
+      image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}

  release-images-to-dockerhub:
    name: Build and push images to DockerHub
@@ -188,10 +199,11 @@ jobs:
    with:
      image-registry: docker.io
      image-namespace: ${{ vars.IMAGE_NAMESPACE }}
+      image-name: ${{ env.IMAGE_NAME }}
      image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
      image-registry-password: ${{ secrets.DOCKERHUB_TOKEN }}
      version: ${{ needs.allocate-runners.outputs.version }}
-      push-latest-tag: false # Don't push the latest tag to registry.
+      push-latest-tag: true

    - name: Set nightly build result
      id: set-nightly-build-result
@@ -220,7 +232,7 @@ jobs:
    with:
      src-image-registry: docker.io
      src-image-namespace: ${{ vars.IMAGE_NAMESPACE }}
-      src-image-name: greptimedb
+      src-image-name: ${{ env.IMAGE_NAME }}
      dst-image-registry-username: ${{ secrets.ALICLOUD_USERNAME }}
      dst-image-registry-password: ${{ secrets.ALICLOUD_PASSWORD }}
      dst-image-registry: ${{ vars.ACR_IMAGE_REGISTRY }}
@@ -232,7 +244,7 @@ jobs:
      aws-cn-region: ${{ vars.AWS_RELEASE_BUCKET_REGION }}
      dev-mode: false
      update-version-info: false # Don't update version info in S3.
-      push-latest-tag: false # Don't push the latest tag to registry.
+      push-latest-tag: true

  stop-linux-amd64-runner: # It's always run as the last job in the workflow to make sure that the runner is released.
    name: Stop linux-amd64 runner
@@ -285,7 +297,7 @@ jobs:
      github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}

  notification:
-    if: ${{ always() }} # Not requiring successful dependent jobs, always run.
+    if: ${{ github.repository == 'GreptimeTeam/greptimedb' && always() }} # Not requiring successful dependent jobs, always run.
    name: Send notification to Greptime team
    needs: [
      release-images-to-dockerhub
@@ -294,16 +306,25 @@ jobs:
    env:
      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
    steps:
-      - name: Notifiy nightly build successful result
+      - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
+      - name: Report CI status
+        id: report-ci-status
+        working-directory: cyborg
+        run: pnpm tsx bin/report-ci-failure.ts
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          CI_REPORT_STATUS: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' }}
+      - name: Notify nightly build successful result
        uses: slackapi/slack-github-action@v1.23.0
        if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result == 'success' }}
        with:
          payload: |
            {"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has completed successfully."}

-      - name: Notifiy nightly build failed result
+      - name: Notify nightly build failed result
        uses: slackapi/slack-github-action@v1.23.0
        if: ${{ needs.release-images-to-dockerhub.outputs.nightly-build-result != 'success' }}
        with:
          payload: |
-            {"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/${{ env.NEXT_RELEASE_VERSION }}-build.yml'."}
+            {"text": "GreptimeDB's ${{ env.NEXT_RELEASE_VERSION }} build has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}
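Assuming `vars.IMAGE_NAMESPACE` is the usual `greptime` DockerHub namespace (an assumption; the variable's value is not shown in this diff), the nightly images therefore land in a separate repository with a latest tag, roughly:

  # hypothetical pull command; only the greptimedb-nightly name and the latest tag follow from the diff
  docker pull docker.io/greptime/greptimedb-nightly:latest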
133
.github/workflows/nightly-ci.yml
vendored
133
.github/workflows/nightly-ci.yml
vendored
@@ -1,8 +1,6 @@
-# Nightly CI: runs tests every night for our second tier plaforms (Windows)
-
 on:
   schedule:
-    - cron: '0 23 * * 1-5'
+    - cron: "0 23 * * 1-5"
   workflow_dispatch:
 
 name: Nightly CI
@@ -11,60 +9,79 @@ concurrency:
   group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
   cancel-in-progress: true
 
-env:
-  RUST_TOOLCHAIN: nightly-2023-12-19
+permissions:
+  issues: write
 
 jobs:
-  sqlness:
-    name: Sqlness Test
+  sqlness-test:
+    name: Run sqlness test
     if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
-    runs-on: ${{ matrix.os }}
-    strategy:
-      matrix:
-        os: [ windows-latest-8-cores ]
+    runs-on: ubuntu-22.04
+    steps:
+      - name: Checkout
+        uses: actions/checkout@v4
+        with:
+          fetch-depth: 0
+      - name: Run sqlness test
+        uses: ./.github/actions/sqlness-test
+        with:
+          data-root: sqlness-test
+          aws-ci-test-bucket: ${{ vars.AWS_CI_TEST_BUCKET }}
+          aws-region: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
+          aws-access-key-id: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
+          aws-secret-access-key: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}
+      - name: Upload sqlness logs
+        if: failure()
+        uses: actions/upload-artifact@v4
+        with:
+          name: sqlness-logs-kind
+          path: /tmp/kind/
+          retention-days: 3
+
+  sqlness-windows:
+    name: Sqlness tests on Windows
+    if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
+    runs-on: windows-2022-8-cores
     timeout-minutes: 60
     steps:
       - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
      - uses: arduino/setup-protoc@v3
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}
-      - uses: dtolnay/rust-toolchain@master
-        with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
+      - uses: actions-rust-lang/setup-rust-toolchain@v1
       - name: Rust Cache
         uses: Swatinem/rust-cache@v2
       - name: Run sqlness
-        run: cargo sqlness
-      - name: Notify slack if failed
-        if: failure()
-        uses: slackapi/slack-github-action@v1.23.0
+        run: make sqlness-test
         env:
-          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
-        with:
-          payload: |
-            {"text": "Nightly CI failed for sqlness tests"}
+          SQLNESS_OPTS: "--preserve-state"
       - name: Upload sqlness logs
-        if: always()
+        if: failure()
        uses: actions/upload-artifact@v4
        with:
          name: sqlness-logs
-          path: /tmp/greptime-*.log
+          path: C:\Users\RUNNER~1\AppData\Local\Temp\sqlness*
          retention-days: 3
 
   test-on-windows:
+    name: Run tests on Windows
     if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
-    runs-on: windows-latest-8-cores
+    runs-on: windows-2022-8-cores
     timeout-minutes: 60
     steps:
       - run: git config --global core.autocrlf false
       - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
       - uses: arduino/setup-protoc@v3
         with:
           repo-token: ${{ secrets.GITHUB_TOKEN }}
-      - name: Install Rust toolchain
-        uses: dtolnay/rust-toolchain@master
+      - uses: KyleMayes/install-llvm-action@v1
+        with:
+          version: "14.0"
+      - name: Install Rust toolchain
+        uses: actions-rust-lang/setup-rust-toolchain@v1
         with:
-          toolchain: ${{ env.RUST_TOOLCHAIN }}
           components: llvm-tools-preview
       - name: Rust Cache
         uses: Swatinem/rust-cache@v2
@@ -73,9 +90,9 @@ jobs:
       - name: Install Python
         uses: actions/setup-python@v5
         with:
-          python-version: '3.10'
+          python-version: "3.10"
       - name: Install PyArrow Package
-        run: pip install pyarrow
+        run: pip install pyarrow numpy
       - name: Install WSL distribution
         uses: Vampire/setup-wsl@v2
         with:
@@ -83,18 +100,56 @@ jobs:
       - name: Running tests
         run: cargo nextest run -F pyo3_backend,dashboard
         env:
+          CARGO_BUILD_RUSTFLAGS: "-C linker=lld-link"
           RUST_BACKTRACE: 1
           CARGO_INCREMENTAL: 0
-          GT_S3_BUCKET: ${{ secrets.S3_BUCKET }}
-          GT_S3_ACCESS_KEY_ID: ${{ secrets.S3_ACCESS_KEY_ID }}
-          GT_S3_ACCESS_KEY: ${{ secrets.S3_ACCESS_KEY }}
-          GT_S3_REGION: ${{ secrets.S3_REGION }}
+          RUSTUP_WINDOWS_PATH_ADD_BIN: 1 # Workaround for https://github.com/nextest-rs/nextest/issues/1493
+          GT_S3_BUCKET: ${{ vars.AWS_CI_TEST_BUCKET }}
+          GT_S3_ACCESS_KEY_ID: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
+          GT_S3_ACCESS_KEY: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}
+          GT_S3_REGION: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
           UNITTEST_LOG_DIR: "__unittest_logs"
-      - name: Notify slack if failed
-        if: failure()
-        uses: slackapi/slack-github-action@v1.23.0
+
+  check-status:
+    name: Check status
+    needs: [sqlness-test, sqlness-windows, test-on-windows]
+    if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
+    runs-on: ubuntu-20.04
+    outputs:
+      check-result: ${{ steps.set-check-result.outputs.check-result }}
+    steps:
+      - name: Set check result
+        id: set-check-result
+        run: |
+          echo "check-result=success" >> $GITHUB_OUTPUT
+
+  notification:
+    if: ${{ github.repository == 'GreptimeTeam/greptimedb' && always() }} # Not requiring successful dependent jobs, always run.
+    name: Send notification to Greptime team
+    needs: [check-status]
+    runs-on: ubuntu-20.04
+    env:
+      SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
+    steps:
+      - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
+      - name: Report CI status
+        id: report-ci-status
+        working-directory: cyborg
+        run: pnpm tsx bin/report-ci-failure.ts
         env:
-          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          CI_REPORT_STATUS: ${{ needs.check-status.outputs.check-result == 'success' }}
+      - name: Notify dev build successful result
+        uses: slackapi/slack-github-action@v1.23.0
+        if: ${{ needs.check-status.outputs.check-result == 'success' }}
         with:
           payload: |
-            {"text": "Nightly CI failed for cargo test"}
+            {"text": "Nightly CI has completed successfully."}
 
+      - name: Notify dev build failed result
+        uses: slackapi/slack-github-action@v1.23.0
+        if: ${{ needs.check-status.outputs.check-result != 'success' }}
+        with:
+          payload: |
+            {"text": "Nightly CI failed has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}
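The Windows sqlness job now runs through the repository Makefile with state preservation turned on. A local run would look roughly like the following; the `sqlness-test` target and the `SQLNESS_OPTS` variable are taken from the diff above, and the exact effect of `--preserve-state` is assumed to be keeping the per-run temporary directories that the follow-up "Upload sqlness logs" step collects on failure.

```bash
# Run the sqlness suite the way the nightly Windows job does (sketch).
# `--preserve-state` presumably keeps the per-run temp directories around,
# which is what the workflow later uploads as the sqlness-logs artifact.
SQLNESS_OPTS="--preserve-state" make sqlness-test
```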

27  .github/workflows/nightly-funtional-tests.yml
@@ -1,27 +0,0 @@
-name: Nightly functional tests
-
-on:
-  schedule:
-    # At 00:00 on Tuesday.
-    - cron: '0 0 * * 2'
-  workflow_dispatch:
-
-jobs:
-  sqlness-test:
-    name: Run sqlness test
-    if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
-    runs-on: ubuntu-22.04
-    steps:
-      - name: Checkout
-        uses: actions/checkout@v4
-        with:
-          fetch-depth: 0
-
-      - name: Run sqlness test
-        uses: ./.github/actions/sqlness-test
-        with:
-          data-root: sqlness-test
-          aws-ci-test-bucket: ${{ vars.AWS_CI_TEST_BUCKET }}
-          aws-region: ${{ vars.AWS_CI_TEST_BUCKET_REGION }}
-          aws-access-key-id: ${{ secrets.AWS_CI_TEST_ACCESS_KEY_ID }}
-          aws-secret-access-key: ${{ secrets.AWS_CI_TEST_SECRET_ACCESS_KEY }}

29  .github/workflows/pr-title-checker.yml
@@ -1,29 +0,0 @@
-name: "PR Title Checker"
-on:
-  pull_request_target:
-    types:
-      - opened
-      - edited
-      - synchronize
-      - labeled
-      - unlabeled
-
-jobs:
-  check:
-    runs-on: ubuntu-20.04
-    timeout-minutes: 10
-    steps:
-      - uses: thehanimo/pr-title-checker@v1.4.2
-        with:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          pass_on_octokit_error: false
-          configuration_path: ".github/pr-title-checker-config.json"
-  breaking:
-    runs-on: ubuntu-20.04
-    timeout-minutes: 10
-    steps:
-      - uses: thehanimo/pr-title-checker@v1.4.2
-        with:
-          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
-          pass_on_octokit_error: false
-          configuration_path: ".github/pr-title-breaking-change-label-config.json"

133  .github/workflows/release-dev-builder-images.yaml
@@ -1,12 +1,14 @@
 name: Release dev-builder images
 
 on:
+  push:
+    branches:
+      - main
+    paths:
+      - rust-toolchain.toml
+      - 'docker/dev-builder/**'
   workflow_dispatch: # Allows you to run this workflow manually.
     inputs:
-      version:
-        description: Version of the dev-builder
-        required: false
-        default: latest
       release_dev_builder_ubuntu_image:
         type: boolean
         description: Release dev-builder-ubuntu image
@@ -28,22 +30,103 @@ jobs:
     name: Release dev builder images
     if: ${{ inputs.release_dev_builder_ubuntu_image || inputs.release_dev_builder_centos_image || inputs.release_dev_builder_android_image }} # Only manually trigger this job.
     runs-on: ubuntu-20.04-16-cores
+    outputs:
+      version: ${{ steps.set-version.outputs.version }}
     steps:
       - name: Checkout
         uses: actions/checkout@v4
         with:
           fetch-depth: 0
 
+      - name: Configure build image version
+        id: set-version
+        shell: bash
+        run: |
+          commitShortSHA=`echo ${{ github.sha }} | cut -c1-8`
+          buildTime=`date +%Y%m%d%H%M%S`
+          BUILD_VERSION="$commitShortSHA-$buildTime"
+          RUST_TOOLCHAIN_VERSION=$(cat rust-toolchain.toml | grep -Eo '[0-9]{4}-[0-9]{2}-[0-9]{2}')
+          IMAGE_VERSION="${RUST_TOOLCHAIN_VERSION}-${BUILD_VERSION}"
+          echo "VERSION=${IMAGE_VERSION}" >> $GITHUB_ENV
+          echo "version=$IMAGE_VERSION" >> $GITHUB_OUTPUT
+
       - name: Build and push dev builder images
         uses: ./.github/actions/build-dev-builder-images
         with:
-          version: ${{ inputs.version }}
+          version: ${{ env.VERSION }}
           dockerhub-image-registry-username: ${{ secrets.DOCKERHUB_USERNAME }}
           dockerhub-image-registry-token: ${{ secrets.DOCKERHUB_TOKEN }}
           build-dev-builder-ubuntu: ${{ inputs.release_dev_builder_ubuntu_image }}
           build-dev-builder-centos: ${{ inputs.release_dev_builder_centos_image }}
           build-dev-builder-android: ${{ inputs.release_dev_builder_android_image }}
 
+  release-dev-builder-images-ecr:
+    name: Release dev builder images to AWS ECR
+    runs-on: ubuntu-20.04
+    needs: [
+      release-dev-builder-images
+    ]
+    steps:
+      - name: Configure AWS credentials
+        uses: aws-actions/configure-aws-credentials@v4
+        with:
+          aws-access-key-id: ${{ secrets.AWS_ECR_ACCESS_KEY_ID }}
+          aws-secret-access-key: ${{ secrets.AWS_ECR_SECRET_ACCESS_KEY }}
+          aws-region: ${{ vars.ECR_REGION }}
+
+      - name: Login to Amazon ECR
+        id: login-ecr-public
+        uses: aws-actions/amazon-ecr-login@v2
+        env:
+          AWS_REGION: ${{ vars.ECR_REGION }}
+        with:
+          registry-type: public
+
+      - name: Push dev-builder-ubuntu image
+        shell: bash
+        if: ${{ inputs.release_dev_builder_ubuntu_image }}
+        run: |
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ needs.release-dev-builder-images.outputs.version }} \
+            docker://${{ vars.ECR_IMAGE_REGISTRY }}/${{ vars.ECR_IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ needs.release-dev-builder-images.outputs.version }}
+
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:latest \
+            docker://${{ vars.ECR_IMAGE_REGISTRY }}/${{ vars.ECR_IMAGE_NAMESPACE }}/dev-builder-ubuntu:latest
+      - name: Push dev-builder-centos image
+        shell: bash
+        if: ${{ inputs.release_dev_builder_centos_image }}
+        run: |
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ needs.release-dev-builder-images.outputs.version }} \
+            docker://${{ vars.ECR_IMAGE_REGISTRY }}/${{ vars.ECR_IMAGE_NAMESPACE }}/dev-builder-centos:${{ needs.release-dev-builder-images.outputs.version }}
+
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:latest \
+            docker://${{ vars.ECR_IMAGE_REGISTRY }}/${{ vars.ECR_IMAGE_NAMESPACE }}/dev-builder-centos:latest
+      - name: Push dev-builder-android image
+        shell: bash
+        if: ${{ inputs.release_dev_builder_android_image }}
+        run: |
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ needs.release-dev-builder-images.outputs.version }} \
+            docker://${{ vars.ECR_IMAGE_REGISTRY }}/${{ vars.ECR_IMAGE_NAMESPACE }}/dev-builder-android:${{ needs.release-dev-builder-images.outputs.version }}
+
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:latest \
+            docker://${{ vars.ECR_IMAGE_REGISTRY }}/${{ vars.ECR_IMAGE_NAMESPACE }}/dev-builder-android:latest
   release-dev-builder-images-cn: # Note: Be careful issue: https://github.com/containers/skopeo/issues/1874 and we decide to use the latest stable skopeo container.
     name: Release dev builder images to CN region
     runs-on: ubuntu-20.04
@@ -51,35 +134,39 @@ jobs:
       release-dev-builder-images
     ]
     steps:
+      - name: Login to AliCloud Container Registry
+        uses: docker/login-action@v3
+        with:
+          registry: ${{ vars.ACR_IMAGE_REGISTRY }}
+          username: ${{ secrets.ALICLOUD_USERNAME }}
+          password: ${{ secrets.ALICLOUD_PASSWORD }}
+
       - name: Push dev-builder-ubuntu image
         shell: bash
         if: ${{ inputs.release_dev_builder_ubuntu_image }}
-        env:
-          DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
-          DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
         run: |
-          docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ inputs.version }} \
-            --dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
-            docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ inputs.version }}
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ needs.release-dev-builder-images.outputs.version }} \
+            docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-ubuntu:${{ needs.release-dev-builder-images.outputs.version }}
 
       - name: Push dev-builder-centos image
         shell: bash
         if: ${{ inputs.release_dev_builder_centos_image }}
-        env:
-          DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
-          DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
         run: |
-          docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ inputs.version }} \
-            --dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
-            docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ inputs.version }}
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ needs.release-dev-builder-images.outputs.version }} \
+            docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-centos:${{ needs.release-dev-builder-images.outputs.version }}
 
       - name: Push dev-builder-android image
         shell: bash
         if: ${{ inputs.release_dev_builder_android_image }}
-        env:
-          DST_REGISTRY_USERNAME: ${{ secrets.ALICLOUD_USERNAME }}
-          DST_REGISTRY_PASSWORD: ${{ secrets.ALICLOUD_PASSWORD }}
         run: |
-          docker run quay.io/skopeo/stable:latest copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ inputs.version }} \
-            --dest-creds "$DST_REGISTRY_USERNAME":"$DST_REGISTRY_PASSWORD" \
-            docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ inputs.version }}
+          docker run -v "${DOCKER_CONFIG:-$HOME/.docker}:/root/.docker:ro" \
+            -e "REGISTRY_AUTH_FILE=/root/.docker/config.json" \
+            quay.io/skopeo/stable:latest \
+            copy -a docker://docker.io/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ needs.release-dev-builder-images.outputs.version }} \
+            docker://${{ vars.ACR_IMAGE_REGISTRY }}/${{ vars.IMAGE_NAMESPACE }}/dev-builder-android:${{ needs.release-dev-builder-images.outputs.version }}
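The new "Configure build image version" step derives the dev-builder tag from the date in rust-toolchain.toml plus the short commit SHA and a build timestamp, and downstream jobs consume it via `needs.release-dev-builder-images.outputs.version`. A local sketch of the same computation, plus an optional check of a mirrored tag (registry, namespace, and tag placeholders below are illustrative):

```bash
# Reproduce the dev-builder image tag format locally (mirrors the workflow step above).
commitShortSHA=$(git rev-parse --short=8 HEAD)
buildTime=$(date +%Y%m%d%H%M%S)
rustToolchainDate=$(grep -Eo '[0-9]{4}-[0-9]{2}-[0-9]{2}' rust-toolchain.toml)
echo "${rustToolchainDate}-${commitShortSHA}-${buildTime}"   # <toolchain-date>-<sha8>-<timestamp>

# Optionally inspect a mirrored tag without pulling it, using the same skopeo container
# the workflow runs (replace <namespace> and <tag> with real values).
docker run --rm quay.io/skopeo/stable:latest \
  inspect docker://docker.io/<namespace>/dev-builder-ubuntu:<tag>
```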

46  .github/workflows/release.yml
@@ -33,6 +33,7 @@ on:
         description: The runner uses to build linux-arm64 artifacts
         default: ec2-c6g.4xlarge-arm64
         options:
+          - ubuntu-2204-32-cores-arm
           - ec2-c6g.xlarge-arm64 # 4C8G
           - ec2-c6g.2xlarge-arm64 # 8C16G
           - ec2-c6g.4xlarge-arm64 # 16C32G
@@ -82,7 +83,6 @@ on:
 # Use env variables to control all the release process.
 env:
   # The arguments of building greptime.
-  RUST_TOOLCHAIN: nightly-2023-12-19
   CARGO_PROFILE: nightly
 
   # Controls whether to run tests, include unit-test, integration-test and sqlness.
@@ -91,7 +91,12 @@ env:
   # The scheduled version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-YYYYMMDD', like v0.2.0-nigthly-20230313;
   NIGHTLY_RELEASE_PREFIX: nightly
   # Note: The NEXT_RELEASE_VERSION should be modified manually by every formal release.
-  NEXT_RELEASE_VERSION: v0.8.0
+  NEXT_RELEASE_VERSION: v0.10.0
 
+# Permission reference: https://docs.github.com/en/actions/using-jobs/assigning-permissions-to-jobs
+permissions:
+  issues: write # Allows the action to create issues for cyborg.
+  contents: write # Allows the action to create a release.
+
 jobs:
   allocate-runners:
@@ -102,7 +107,7 @@ jobs:
       linux-amd64-runner: ${{ steps.start-linux-amd64-runner.outputs.label }}
       linux-arm64-runner: ${{ steps.start-linux-arm64-runner.outputs.label }}
       macos-runner: ${{ inputs.macos_runner || vars.DEFAULT_MACOS_RUNNER }}
-      windows-runner: windows-latest-8-cores
+      windows-runner: windows-2022-8-cores
 
       # The following EC2 resource id will be used for resource releasing.
       linux-amd64-ec2-runner-label: ${{ steps.start-linux-amd64-runner.outputs.label }}
@@ -118,6 +123,11 @@ jobs:
         with:
           fetch-depth: 0
 
+      - name: Check Rust toolchain version
+        shell: bash
+        run: |
+          ./scripts/check-builder-rust-version.sh
+
       # The create-version will create a global variable named 'version' in the global workflows.
       # - If it's a tag push release, the version is the tag name(${{ github.ref_name }});
       # - If it's a scheduled release, the version is '${{ env.NEXT_RELEASE_VERSION }}-nightly-$buildTime', like v0.2.0-nigthly-20230313;
@@ -178,6 +188,8 @@ jobs:
       cargo-profile: ${{ env.CARGO_PROFILE }}
       version: ${{ needs.allocate-runners.outputs.version }}
       disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
+      image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
+      image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}
 
   build-linux-arm64-artifacts:
     name: Build linux-arm64 artifacts
@@ -197,6 +209,8 @@ jobs:
       cargo-profile: ${{ env.CARGO_PROFILE }}
       version: ${{ needs.allocate-runners.outputs.version }}
       disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
+      image-registry: ${{ vars.ECR_IMAGE_REGISTRY }}
+      image-namespace: ${{ vars.ECR_IMAGE_NAMESPACE }}
 
   build-macos-artifacts:
     name: Build macOS artifacts
@@ -235,17 +249,17 @@ jobs:
       - uses: ./.github/actions/build-macos-artifacts
         with:
           arch: ${{ matrix.arch }}
-          rust-toolchain: ${{ env.RUST_TOOLCHAIN }}
           cargo-profile: ${{ env.CARGO_PROFILE }}
           features: ${{ matrix.features }}
           version: ${{ needs.allocate-runners.outputs.version }}
-          disable-run-tests: ${{ env.DISABLE_RUN_TESTS }}
+          # We decide to disable the integration tests on macOS because it's unnecessary and time-consuming.
+          disable-run-tests: true
           artifacts-dir: ${{ matrix.artifacts-dir-prefix }}-${{ needs.allocate-runners.outputs.version }}
 
       - name: Set build macos result
         id: set-build-macos-result
         run: |
           echo "build-macos-result=success" >> $GITHUB_OUTPUT
 
   build-windows-artifacts:
     name: Build Windows artifacts
@@ -278,7 +292,6 @@ jobs:
       - uses: ./.github/actions/build-windows-artifacts
         with:
           arch: ${{ matrix.arch }}
-          rust-toolchain: ${{ env.RUST_TOOLCHAIN }}
           cargo-profile: ${{ env.CARGO_PROFILE }}
           features: ${{ matrix.features }}
           version: ${{ needs.allocate-runners.outputs.version }}
@@ -318,7 +331,7 @@ jobs:
       - name: Set build image result
         id: set-build-image-result
         run: |
           echo "build-image-result=success" >> $GITHUB_OUTPUT
 
   release-cn-artifacts:
     name: Release artifacts to CN region
@@ -436,7 +449,7 @@ jobs:
           github-token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
 
   notification:
-    if: ${{ always() }} # Not requiring successful dependent jobs, always run.
+    if: ${{ github.repository == 'GreptimeTeam/greptimedb' && (github.event_name == 'push' || github.event_name == 'schedule') && always() }}
     name: Send notification to Greptime team
     needs: [
       release-images-to-dockerhub,
@@ -447,16 +460,25 @@ jobs:
     env:
       SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL_DEVELOP_CHANNEL }}
     steps:
-      - name: Notifiy release successful result
+      - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
+      - name: Report CI status
+        id: report-ci-status
+        working-directory: cyborg
+        run: pnpm tsx bin/report-ci-failure.ts
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
+          CI_REPORT_STATUS: ${{ needs.release-images-to-dockerhub.outputs.build-image-result == 'success' && needs.build-windows-artifacts.outputs.build-windows-result == 'success' && needs.build-macos-artifacts.outputs.build-macos-result == 'success' }}
+      - name: Notify release successful result
         uses: slackapi/slack-github-action@v1.25.0
         if: ${{ needs.release-images-to-dockerhub.outputs.build-image-result == 'success' && needs.build-windows-artifacts.outputs.build-windows-result == 'success' && needs.build-macos-artifacts.outputs.build-macos-result == 'success' }}
         with:
           payload: |
             {"text": "GreptimeDB's release version has completed successfully."}
 
-      - name: Notifiy release failed result
+      - name: Notify release failed result
         uses: slackapi/slack-github-action@v1.25.0
         if: ${{ needs.release-images-to-dockerhub.outputs.build-image-result != 'success' || needs.build-windows-artifacts.outputs.build-windows-result != 'success' || needs.build-macos-artifacts.outputs.build-macos-result != 'success' }}
         with:
           payload: |
-            {"text": "GreptimeDB's release version has failed, please check 'https://github.com/GreptimeTeam/greptimedb/actions/workflows/release.yml'."}
+            {"text": "GreptimeDB's release version has failed, please check ${{ steps.report-ci-status.outputs.html_url }}."}

24  .github/workflows/schedule.yml (new file)
@@ -0,0 +1,24 @@
+name: Schedule Management
+on:
+  schedule:
+    - cron: '4 2 * * *'
+  workflow_dispatch:
+
+permissions:
+  contents: read
+  issues: write
+  pull-requests: write
+
+jobs:
+  maintenance:
+    name: Periodic Maintenance
+    runs-on: ubuntu-latest
+    if: ${{ github.repository == 'GreptimeTeam/greptimedb' }}
+    steps:
+      - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
+      - name: Do Maintenance
+        working-directory: cyborg
+        run: pnpm tsx bin/schedule.ts
+        env:
+          GITHUB_TOKEN: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}

21  .github/workflows/semantic-pull-request.yml (new file)
@@ -0,0 +1,21 @@
+name: "Semantic Pull Request"
+
+on:
+  pull_request_target:
+    types:
+      - opened
+      - reopened
+      - edited
+
+jobs:
+  check:
+    runs-on: ubuntu-20.04
+    timeout-minutes: 10
+    steps:
+      - uses: actions/checkout@v4
+      - uses: ./.github/actions/setup-cyborg
+      - name: Check Pull Request
+        working-directory: cyborg
+        run: pnpm tsx bin/check-pull-request.ts
+        env:
+          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}

21  .github/workflows/unassign.yml
@@ -1,21 +0,0 @@
-name: Auto Unassign
-on:
-  schedule:
-    - cron: '4 2 * * *'
-  workflow_dispatch:
-
-permissions:
-  contents: read
-  issues: write
-  pull-requests: write
-
-jobs:
-  auto-unassign:
-    name: Auto Unassign
-    runs-on: ubuntu-latest
-    steps:
-      - name: Auto Unassign
-        uses: tisonspieces/auto-unassign@main
-        with:
-          token: ${{ secrets.GH_PERSONAL_ACCESS_TOKEN }}
-          repository: ${{ github.repository }}
@@ -16,6 +16,7 @@ repos:
     hooks:
       - id: fmt
       - id: clippy
-        args: ["--workspace", "--all-targets", "--", "-D", "warnings", "-D", "clippy::print_stdout", "-D", "clippy::print_stderr"]
+        args: ["--workspace", "--all-targets", "--all-features", "--", "-D", "warnings"]
         stages: [push]
      - id: cargo-check
+        args: ["--workspace", "--all-targets", "--all-features"]
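The clippy hook above now runs with `--all-features` (the stdout/stderr lints moved into the workspace lint table instead), and the `cargo-check` hook gains the same arguments. Assuming these hooks live in the repository's pre-commit configuration, they can be exercised locally like so:

```bash
# Install the hooks once; the clippy/cargo-check hooks above are bound to the push stage.
pip install pre-commit
pre-commit install --hook-type pre-push

# Run every configured hook against the whole tree, without waiting for a push.
pre-commit run --all-files
```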

43  AUTHOR.md (new file)
@@ -0,0 +1,43 @@
+# GreptimeDB Authors
+
+## Individual Committers (in alphabetical order)
+
+* [CookiePieWw](https://github.com/CookiePieWw)
+* [KKould](https://github.com/KKould)
+* [NiwakaDev](https://github.com/NiwakaDev)
+* [etolbakov](https://github.com/etolbakov)
+* [irenjj](https://github.com/irenjj)
+
+## Team Members (in alphabetical order)
+
+* [Breeze-P](https://github.com/Breeze-P)
+* [GrepTime](https://github.com/GrepTime)
+* [MichaelScofield](https://github.com/MichaelScofield)
+* [Wenjie0329](https://github.com/Wenjie0329)
+* [WenyXu](https://github.com/WenyXu)
+* [ZonaHex](https://github.com/ZonaHex)
+* [apdong2022](https://github.com/apdong2022)
+* [beryl678](https://github.com/beryl678)
+* [daviderli614](https://github.com/daviderli614)
+* [discord9](https://github.com/discord9)
+* [evenyag](https://github.com/evenyag)
+* [fengjiachun](https://github.com/fengjiachun)
+* [fengys1996](https://github.com/fengys1996)
+* [holalengyu](https://github.com/holalengyu)
+* [killme2008](https://github.com/killme2008)
+* [nicecui](https://github.com/nicecui)
+* [paomian](https://github.com/paomian)
+* [shuiyisong](https://github.com/shuiyisong)
+* [sunchanglong](https://github.com/sunchanglong)
+* [sunng87](https://github.com/sunng87)
+* [tisonkun](https://github.com/tisonkun)
+* [v0y4g3r](https://github.com/v0y4g3r)
+* [waynexia](https://github.com/waynexia)
+* [xtang](https://github.com/xtang)
+* [zhaoyingnan01](https://github.com/zhaoyingnan01)
+* [zhongzc](https://github.com/zhongzc)
+* [zyy17](https://github.com/zyy17)
+
+## All Contributors
+
+[](https://github.com/GreptimeTeam/greptimedb/graphs/contributors)
@@ -2,7 +2,11 @@
 
 Thanks a lot for considering contributing to GreptimeDB. We believe people like you would make GreptimeDB a great product. We intend to build a community where individuals can have open talks, show respect for one another, and speak with true ❤️. Meanwhile, we are to keep transparency and make your effort count here.
 
-Please read the guidelines, and they can help you get started. Communicate with respect to developers maintaining and developing the project. In return, they should reciprocate that respect by addressing your issue, reviewing changes, as well as helping finalize and merge your pull requests.
+You can find our contributors at https://github.com/GreptimeTeam/greptimedb/graphs/contributors. When you dedicate to GreptimeDB for a few months and keep bringing high-quality contributions (code, docs, advocate, etc.), you will be a candidate of a committer.
+
+A committer will be granted both read & write access to GreptimeDB repos. Check the [AUTHOR.md](AUTHOR.md) file for all current individual committers.
+
+Please read the guidelines, and they can help you get started. Communicate respectfully with the developers maintaining and developing the project. In return, they should reciprocate that respect by addressing your issue, reviewing changes, as well as helping finalize and merge your pull requests.
 
 Follow our [README](https://github.com/GreptimeTeam/greptimedb#readme) to get the whole picture of the project. To learn about the design of GreptimeDB, please refer to the [design docs](https://github.com/GrepTimeTeam/docs).
 
@@ -10,7 +14,7 @@ Follow our [README](https://github.com/GreptimeTeam/greptimedb#readme) to get th
 
 It can feel intimidating to contribute to a complex project, but it can also be exciting and fun. These general notes will help everyone participate in this communal activity.
 
-- Follow the [Code of Conduct](https://github.com/GreptimeTeam/greptimedb/blob/main/CODE_OF_CONDUCT.md)
+- Follow the [Code of Conduct](https://github.com/GreptimeTeam/.github/blob/main/.github/CODE_OF_CONDUCT.md)
 - Small changes make huge differences. We will happily accept a PR making a single character change if it helps move forward. Don't wait to have everything working.
 - Check the closed issues before opening your issue.
 - Try to follow the existing style of the code.
@@ -26,7 +30,7 @@ Pull requests are great, but we accept all kinds of other help if you like. Such
 
 ## Code of Conduct
 
-Also, there are things that we are not looking for because they don't match the goals of the product or benefit the community. Please read [Code of Conduct](https://github.com/GreptimeTeam/greptimedb/blob/main/CODE_OF_CONDUCT.md); we hope everyone can keep good manners and become an honored member.
+Also, there are things that we are not looking for because they don't match the goals of the product or benefit the community. Please read [Code of Conduct](https://github.com/GreptimeTeam/.github/blob/main/.github/CODE_OF_CONDUCT.md); we hope everyone can keep good manners and become an honored member.
 
 ## License
 
@@ -50,8 +54,8 @@ GreptimeDB uses the [Apache 2.0 license](https://github.com/GreptimeTeam/greptim
 
 - To ensure that community is free and confident in its ability to use your contributions, please sign the Contributor License Agreement (CLA) which will be incorporated in the pull request process.
 - Make sure all files have proper license header (running `docker run --rm -v $(pwd):/github/workspace ghcr.io/korandoru/hawkeye-native:v3 format` from the project root).
-- Make sure all your codes are formatted and follow the [coding style](https://pingcap.github.io/style-guide/rust/).
-- Make sure all unit tests are passed (using `cargo test --workspace` or [nextest](https://nexte.st/index.html) `cargo nextest run`).
+- Make sure all your codes are formatted and follow the [coding style](https://pingcap.github.io/style-guide/rust/) and [style guide](docs/style-guide.md).
+- Make sure all unit tests are passed using [nextest](https://nexte.st/index.html) `cargo nextest run`.
 - Make sure all clippy warnings are fixed (you can check it locally by running `cargo clippy --workspace --all-targets -- -D warnings`).
 
 #### `pre-commit` Hooks
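Put together, the pre-submission checklist from the section above boils down to three commands, all quoted from the guide:

```bash
# License headers, run from the project root.
docker run --rm -v $(pwd):/github/workspace ghcr.io/korandoru/hawkeye-native:v3 format

# Unit tests with nextest, as the updated guideline requires.
cargo nextest run

# Clippy with the flags the guide lists.
cargo clippy --workspace --all-targets -- -D warnings
```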

6197  Cargo.lock
(File diff suppressed because it is too large.)

126  Cargo.toml
@@ -1,9 +1,9 @@
 [workspace]
 members = [
-    "benchmarks",
     "src/api",
     "src/auth",
     "src/catalog",
+    "src/cache",
     "src/client",
     "src/cmd",
     "src/common/base",
@@ -11,6 +11,7 @@ members = [
     "src/common/config",
     "src/common/datasource",
     "src/common/error",
+    "src/common/frontend",
     "src/common/function",
     "src/common/macro",
     "src/common/greptimedb-telemetry",
@@ -19,6 +20,7 @@ members = [
     "src/common/mem-prof",
     "src/common/meta",
     "src/common/plugins",
+    "src/common/pprof",
     "src/common/procedure",
     "src/common/procedure-test",
     "src/common/query",
@@ -44,6 +46,7 @@ members = [
     "src/object-store",
     "src/operator",
     "src/partition",
+    "src/pipeline",
     "src/plugins",
     "src/promql",
     "src/puffin",
@@ -62,24 +65,34 @@ members = [
 resolver = "2"
 
 [workspace.package]
-version = "0.7.2"
+version = "0.9.5"
 edition = "2021"
 license = "Apache-2.0"
 
 [workspace.lints]
 clippy.print_stdout = "warn"
 clippy.print_stderr = "warn"
+clippy.dbg_macro = "warn"
 clippy.implicit_clone = "warn"
+clippy.readonly_write_lock = "allow"
 rust.unknown_lints = "deny"
+# Remove this after https://github.com/PyO3/pyo3/issues/4094
+rust.non_local_definitions = "allow"
+rust.unexpected_cfgs = { level = "warn", check-cfg = ['cfg(tokio_unstable)'] }
 
 [workspace.dependencies]
+# We turn off default-features for some dependencies here so the workspaces which inherit them can
+# selectively turn them on if needed, since we can override default-features = true (from false)
+# for the inherited dependency but cannot do the reverse (override from true to false).
+#
+# See for more detaiils: https://github.com/rust-lang/cargo/issues/11329
 ahash = { version = "0.8", features = ["compile-time-rng"] }
 aquamarine = "0.3"
-arrow = { version = "47.0" }
-arrow-array = "47.0"
-arrow-flight = "47.0"
-arrow-ipc = { version = "47.0", features = ["lz4"] }
-arrow-schema = { version = "47.0", features = ["serde"] }
+arrow = { version = "51.0.0", features = ["prettyprint"] }
+arrow-array = { version = "51.0.0", default-features = false, features = ["chrono-tz"] }
+arrow-flight = "51.0"
+arrow-ipc = { version = "51.0.0", default-features = false, features = ["lz4", "zstd"] }
+arrow-schema = { version = "51.0", features = ["serde"] }
 async-stream = "0.3"
 async-trait = "0.1"
 axum = { version = "0.6", features = ["headers"] }
@@ -87,88 +100,113 @@ base64 = "0.21"
 bigdecimal = "0.4.2"
 bitflags = "2.4.1"
 bytemuck = "1.12"
-bytes = { version = "1.5", features = ["serde"] }
+bytes = { version = "1.7", features = ["serde"] }
 chrono = { version = "0.4", features = ["serde"] }
 clap = { version = "4.4", features = ["derive"] }
+config = "0.13.0"
+crossbeam-utils = "0.8"
 dashmap = "5.4"
-datafusion = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
-datafusion-common = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
-datafusion-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
-datafusion-optimizer = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
-datafusion-physical-expr = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
-datafusion-sql = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
-datafusion-substrait = { git = "https://github.com/apache/arrow-datafusion.git", rev = "26e43acac3a96cec8dd4c8365f22dfb1a84306e9" }
+datafusion = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
+datafusion-common = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
+datafusion-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
+datafusion-functions = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
+datafusion-optimizer = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
+datafusion-physical-expr = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
+datafusion-physical-plan = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
+datafusion-sql = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
+datafusion-substrait = { git = "https://github.com/waynexia/arrow-datafusion.git", rev = "7823ef2f63663907edab46af0d51359900f608d6" }
 derive_builder = "0.12"
 dotenv = "0.15"
-etcd-client = "0.12"
+etcd-client = { version = "0.13" }
 fst = "0.4.7"
 futures = "0.3"
 futures-util = "0.3"
-greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "1bd2398b686e5ac6c1eef6daf615867ce27f75c1" }
+greptime-proto = { git = "https://github.com/GreptimeTeam/greptime-proto.git", rev = "255f87a3318ace3f88a67f76995a0e14910983f4" }
 humantime = "2.1"
 humantime-serde = "1.1"
 itertools = "0.10"
+jsonb = { git = "https://github.com/databendlabs/jsonb.git", rev = "46ad50fc71cf75afbf98eec455f7892a6387c1fc", default-features = false }
 lazy_static = "1.4"
-meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "80b72716dcde47ec4161478416a5c6c21343364d" }
+meter-core = { git = "https://github.com/GreptimeTeam/greptime-meter.git", rev = "a10facb353b41460eeb98578868ebf19c2084fac" }
 mockall = "0.11.4"
 moka = "0.12"
 notify = "6.1"
 num_cpus = "1.16"
 once_cell = "1.18"
-opentelemetry-proto = { git = "https://github.com/waynexia/opentelemetry-rust.git", rev = "33841b38dda79b15f2024952be5f32533325ca02", features = [
+opentelemetry-proto = { version = "0.5", features = [
   "gen-tonic",
   "metrics",
   "trace",
+  "with-serde",
+  "logs",
 ] }
-parquet = "47.0"
+parking_lot = "0.12"
+parquet = { version = "51.0.0", default-features = false, features = ["arrow", "async", "object_store"] }
 paste = "1.0"
 pin-project = "1.0"
 prometheus = { version = "0.13.3", features = ["process"] }
+promql-parser = { version = "0.4.3", features = ["ser"] }
 prost = "0.12"
 raft-engine = { version = "0.4.1", default-features = false }
 rand = "0.8"
+ratelimit = "0.9"
 regex = "1.8"
 regex-automata = { version = "0.4" }
-reqwest = { version = "0.11", default-features = false, features = [
+reqwest = { version = "0.12", default-features = false, features = [
   "json",
   "rustls-tls-native-roots",
   "stream",
+  "multipart",
 ] }
-rskafka = "0.5"
+rskafka = { git = "https://github.com/influxdata/rskafka.git", rev = "75535b5ad9bae4a5dbb582c82e44dfd81ec10105", features = [
+  "transport-tls",
+] }
+rstest = "0.21"
+rstest_reuse = "0.7"
 rust_decimal = "1.33"
+rustc-hash = "2.0"
 schemars = "0.8"
 serde = { version = "1.0", features = ["derive"] }
 serde_json = { version = "1.0", features = ["float_roundtrip"] }
 serde_with = "3"
+shadow-rs = "0.35"
+similar-asserts = "1.6.0"
 smallvec = { version = "1", features = ["serde"] }
-snafu = "0.7"
+snafu = "0.8"
 sysinfo = "0.30"
-# on branch v0.38.x
-sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "6a93567ae38d42be5c8d08b13c8ff4dde26502ef", features = [
+# on branch v0.44.x
+sqlparser = { git = "https://github.com/GreptimeTeam/sqlparser-rs.git", rev = "54a267ac89c09b11c0c88934690530807185d3e7", features = [
   "visitor",
 ] }
 strum = { version = "0.25", features = ["derive"] }
 tempfile = "3"
-tokio = { version = "1.28", features = ["full"] }
+tokio = { version = "1.40", features = ["full"] }
+tokio-postgres = "0.7"
 tokio-stream = { version = "0.1" }
 tokio-util = { version = "0.7", features = ["io-util", "compat"] }
 toml = "0.8.8"
-tonic = { version = "0.10", features = ["tls"] }
-uuid = { version = "1", features = ["serde", "v4", "fast-rng"] }
+tonic = { version = "0.11", features = ["tls", "gzip", "zstd"] }
+tower = { version = "0.4" }
+tracing-appender = "0.2"
+tracing-subscriber = { version = "0.3", features = ["env-filter", "json", "fmt"] }
+typetag = "0.2"
+uuid = { version = "1.7", features = ["serde", "v4", "fast-rng"] }
 zstd = "0.13"
 
 ## workspaces members
 api = { path = "src/api" }
 auth = { path = "src/auth" }
+cache = { path = "src/cache" }
 catalog = { path = "src/catalog" }
 client = { path = "src/client" }
-cmd = { path = "src/cmd" }
+cmd = { path = "src/cmd", default-features = false }
 common-base = { path = "src/common/base" }
 common-catalog = { path = "src/common/catalog" }
 common-config = { path = "src/common/config" }
 common-datasource = { path = "src/common/datasource" }
 common-decimal = { path = "src/common/decimal" }
 common-error = { path = "src/common/error" }
+common-frontend = { path = "src/common/frontend" }
 common-function = { path = "src/common/function" }
 common-greptimedb-telemetry = { path = "src/common/greptimedb-telemetry" }
 common-grpc = { path = "src/common/grpc" }
@@ -177,6 +215,7 @@ common-macro = { path = "src/common/macro" }
 common-mem-prof = { path = "src/common/mem-prof" }
 common-meta = { path = "src/common/meta" }
 common-plugins = { path = "src/common/plugins" }
+common-pprof = { path = "src/common/pprof" }
 common-procedure = { path = "src/common/procedure" }
 common-procedure-test = { path = "src/common/procedure-test" }
 common-query = { path = "src/common/query" }
@@ -190,7 +229,8 @@ common-wal = { path = "src/common/wal" }
 datanode = { path = "src/datanode" }
 datatypes = { path = "src/datatypes" }
 file-engine = { path = "src/file-engine" }
-frontend = { path = "src/frontend" }
+flow = { path = "src/flow" }
+frontend = { path = "src/frontend", default-features = false }
 index = { path = "src/index" }
 log-store = { path = "src/log-store" }
 meta-client = { path = "src/meta-client" }
@@ -200,6 +240,7 @@ mito2 = { path = "src/mito2" }
 object-store = { path = "src/object-store" }
 operator = { path = "src/operator" }
 partition = { path = "src/partition" }
+pipeline = { path = "src/pipeline" }
 plugins = { path = "src/plugins" }
 promql = { path = "src/promql" }
 puffin = { path = "src/puffin" }
@@ -212,16 +253,37 @@ store-api = { path = "src/store-api" }
 substrait = { path = "src/common/substrait" }
 table = { path = "src/table" }
 
+[patch.crates-io]
+# change all rustls dependencies to use our fork to default to `ring` to make it "just work"
+hyper-rustls = { git = "https://github.com/GreptimeTeam/hyper-rustls" }
+rustls = { git = "https://github.com/GreptimeTeam/rustls" }
+tokio-rustls = { git = "https://github.com/GreptimeTeam/tokio-rustls" }
+# This is commented, since we are not using aws-lc-sys, if we need to use it, we need to uncomment this line or use a release after this commit, or it wouldn't compile with gcc < 8.1
+# see https://github.com/aws/aws-lc-rs/pull/526
+# aws-lc-sys = { git ="https://github.com/aws/aws-lc-rs", rev = "556558441e3494af4b156ae95ebc07ebc2fd38aa" }
+
 [workspace.dependencies.meter-macros]
 git = "https://github.com/GreptimeTeam/greptime-meter.git"
-rev = "80b72716dcde47ec4161478416a5c6c21343364d"
+rev = "a10facb353b41460eeb98578868ebf19c2084fac"
 
 [profile.release]
 debug = 1
 
 [profile.nightly]
 inherits = "release"
-strip = true
+strip = "debuginfo"
 lto = "thin"
 debug = false
 incremental = false
 
+[profile.ci]
+inherits = "dev"
||||||
|
strip = true
|
||||||
|
|
||||||
|
[profile.dev.package.sqlness-runner]
|
||||||
|
debug = false
|
||||||
|
strip = true
|
||||||
|
|
||||||
|
[profile.dev.package.tests-fuzz]
|
||||||
|
debug = false
|
||||||
|
strip = true
|
||||||
|
|||||||
Makefile (54 changed lines)
@@ -8,6 +8,7 @@ CARGO_BUILD_OPTS := --locked
 IMAGE_REGISTRY ?= docker.io
 IMAGE_NAMESPACE ?= greptime
 IMAGE_TAG ?= latest
+DEV_BUILDER_IMAGE_TAG ?= 2024-10-19-a5c00e85-20241024184445
 BUILDX_MULTI_PLATFORM_BUILD ?= false
 BUILDX_BUILDER_NAME ?= gtbuilder
 BASE_IMAGE ?= ubuntu
@@ -15,6 +16,7 @@ RUST_TOOLCHAIN ?= $(shell cat rust-toolchain.toml | grep channel | cut -d'"' -f2
 CARGO_REGISTRY_CACHE ?= ${HOME}/.cargo/registry
 ARCH := $(shell uname -m | sed 's/x86_64/amd64/' | sed 's/aarch64/arm64/')
 OUTPUT_DIR := $(shell if [ "$(RELEASE)" = "true" ]; then echo "release"; elif [ ! -z "$(CARGO_PROFILE)" ]; then echo "$(CARGO_PROFILE)" ; else echo "debug"; fi)
+SQLNESS_OPTS ?=

 # The arguments for running integration tests.
 ETCD_VERSION ?= v3.5.9
@@ -54,8 +56,10 @@ ifneq ($(strip $(RELEASE)),)
 CARGO_BUILD_OPTS += --release
 endif

-ifeq ($(BUILDX_MULTI_PLATFORM_BUILD), true)
+ifeq ($(BUILDX_MULTI_PLATFORM_BUILD), all)
 BUILDX_MULTI_PLATFORM_BUILD_OPTS := --platform linux/amd64,linux/arm64 --push
+else ifeq ($(BUILDX_MULTI_PLATFORM_BUILD), amd64)
+BUILDX_MULTI_PLATFORM_BUILD_OPTS := --platform linux/amd64 --push
 else
 BUILDX_MULTI_PLATFORM_BUILD_OPTS := -o type=docker
 endif
@@ -74,7 +78,7 @@ build: ## Build debug version greptime.
 build-by-dev-builder: ## Build greptime by dev-builder.
 docker run --network=host \
 -v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry \
--w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:latest \
+-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:${DEV_BUILDER_IMAGE_TAG} \
 make build \
 CARGO_EXTENSION="${CARGO_EXTENSION}" \
 CARGO_PROFILE=${CARGO_PROFILE} \
@@ -88,7 +92,7 @@ build-by-dev-builder: ## Build greptime by dev-builder.
 build-android-bin: ## Build greptime binary for android.
 docker run --network=host \
 -v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry \
--w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-android:latest \
+-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-android:${DEV_BUILDER_IMAGE_TAG} \
 make build \
 CARGO_EXTENSION="ndk --platform 23 -t aarch64-linux-android" \
 CARGO_PROFILE=release \
@@ -102,8 +106,8 @@ build-android-bin: ## Build greptime binary for android.
 strip-android-bin: build-android-bin ## Strip greptime binary for android.
 docker run --network=host \
 -v ${PWD}:/greptimedb \
--w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-android:latest \
+-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-android:${DEV_BUILDER_IMAGE_TAG} \
-bash -c '$${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip /greptimedb/target/aarch64-linux-android/release/greptime'
+bash -c '$${NDK_ROOT}/toolchains/llvm/prebuilt/linux-x86_64/bin/llvm-strip --strip-debug /greptimedb/target/aarch64-linux-android/release/greptime'

 .PHONY: clean
 clean: ## Clean the project.
@@ -142,7 +146,7 @@ dev-builder: multi-platform-buildx ## Build dev-builder image.
 docker buildx build --builder ${BUILDX_BUILDER_NAME} \
 --build-arg="RUST_TOOLCHAIN=${RUST_TOOLCHAIN}" \
 -f docker/dev-builder/${BASE_IMAGE}/Dockerfile \
--t ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:${IMAGE_TAG} ${BUILDX_MULTI_PLATFORM_BUILD_OPTS} .
+-t ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:${DEV_BUILDER_IMAGE_TAG} ${BUILDX_MULTI_PLATFORM_BUILD_OPTS} .

 .PHONY: multi-platform-buildx
 multi-platform-buildx: ## Create buildx multi-platform builder.
@@ -159,7 +163,18 @@ nextest: ## Install nextest tools.

 .PHONY: sqlness-test
 sqlness-test: ## Run sqlness test.
-cargo sqlness
+cargo sqlness ${SQLNESS_OPTS}

+# Run fuzz test ${FUZZ_TARGET}.
+RUNS ?= 1
+FUZZ_TARGET ?= fuzz_alter_table
+.PHONY: fuzz
+fuzz:
+cargo fuzz run ${FUZZ_TARGET} --fuzz-dir tests-fuzz -D -s none -- -runs=${RUNS}
+
+.PHONY: fuzz-ls
+fuzz-ls:
+cargo fuzz list --fuzz-dir tests-fuzz
+
 .PHONY: check
 check: ## Cargo check all the targets.
@@ -169,9 +184,14 @@ check: ## Cargo check all the targets.
 clippy: ## Check clippy rules.
 cargo clippy --workspace --all-targets --all-features -- -D warnings

+.PHONY: fix-clippy
+fix-clippy: ## Fix clippy violations.
+cargo clippy --workspace --all-targets --all-features --fix
+
 .PHONY: fmt-check
 fmt-check: ## Check code format.
 cargo fmt --all -- --check
+python3 scripts/check-snafu.py

 .PHONY: start-etcd
 start-etcd: ## Start single node etcd for testing purpose.
@@ -185,9 +205,27 @@ stop-etcd: ## Stop single node etcd for testing purpose.
 run-it-in-container: start-etcd ## Run integration tests in dev-builder.
 docker run --network=host \
 -v ${PWD}:/greptimedb -v ${CARGO_REGISTRY_CACHE}:/root/.cargo/registry -v /tmp:/tmp \
--w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:latest \
+-w /greptimedb ${IMAGE_REGISTRY}/${IMAGE_NAMESPACE}/dev-builder-${BASE_IMAGE}:${DEV_BUILDER_IMAGE_TAG} \
 make test sqlness-test BUILD_JOBS=${BUILD_JOBS}

+.PHONY: start-cluster
+start-cluster: ## Start the greptimedb cluster with etcd by using docker compose.
+docker compose -f ./docker/docker-compose/cluster-with-etcd.yaml up
+
+.PHONY: stop-cluster
+stop-cluster: ## Stop the greptimedb cluster that created by docker compose.
+docker compose -f ./docker/docker-compose/cluster-with-etcd.yaml stop
+
+##@ Docs
+config-docs: ## Generate configuration documentation from toml files.
+docker run --rm \
+-v ${PWD}:/greptimedb \
+-w /greptimedb/config \
+toml2docs/toml2docs:v0.1.3 \
+-p '##' \
+-t ./config-docs-template.md \
+-o ./config.md
+
 ##@ General

 # The help target prints out all targets with their descriptions organized
README.md (45 changed lines)
@@ -6,12 +6,12 @@
 </picture>
 </p>

-<h1 align="center">Cloud-scale, Fast and Efficient Time Series Database</h1>
+<h2 align="center">Unified Time Series Database for Metrics, Logs, and Events</h2>

 <div align="center">
 <h3 align="center">
 <a href="https://greptime.com/product/cloud">GreptimeCloud</a> |
-<a href="https://docs.greptime.com/">User guide</a> |
+<a href="https://docs.greptime.com/">User Guide</a> |
 <a href="https://greptimedb.rs/">API Docs</a> |
 <a href="https://github.com/GreptimeTeam/greptimedb/issues/3412">Roadmap 2024</a>
 </h4>
@@ -50,24 +50,23 @@

 ## Introduction

-**GreptimeDB** is an open-source time-series database focusing on efficiency, scalability, and analytical capabilities.
-Designed to work on infrastructure of the cloud era, GreptimeDB benefits users with its elasticity and commodity storage, offering a fast and cost-effective **alternative to InfluxDB** and a **long-term storage for Prometheus**.
+**GreptimeDB** is an open-source unified time-series database for **Metrics**, **Logs**, and **Events** (also **Traces** in plan). You can gain real-time insights from Edge to Cloud at any scale.

 ## Why GreptimeDB

 Our core developers have been building time-series data platforms for years. Based on our best-practices, GreptimeDB is born to give you:

-* **Easy horizontal scaling**
+* **Unified all kinds of time series**

-Seamless scalability from a standalone binary at edge to a robust, highly available distributed cluster in cloud, with a transparent experience for both developers and administrators.
+GreptimeDB treats all time series as contextual events with timestamp, and thus unifies the processing of metrics, logs, and events. It supports analyzing metrics, logs, and events with SQL and PromQL, and doing streaming with continuous aggregation.

-* **Analyzing time-series data**
+* **Cloud-Edge collaboration**

-Query your time-series data with SQL and PromQL. Use Python scripts to facilitate complex analytical tasks.
+GreptimeDB can be deployed on ARM architecture-compatible Android/Linux systems as well as cloud environments from various vendors. Both sides run the same software, providing identical APIs and control planes, so your application can run at the edge or on the cloud without modification, and data synchronization also becomes extremely easy and efficient.

 * **Cloud-native distributed database**

-Fully open-source distributed cluster architecture that harnesses the power of cloud-native elastic computing resources.
+By leveraging object storage (S3 and others), separating compute and storage, scaling stateless compute nodes arbitrarily, GreptimeDB implements seamless scalability. It also supports cross-cloud deployment with a built-in unified data access layer over different object storages.

 * **Performance and Cost-effective**

@@ -75,7 +74,7 @@ Our core developers have been building time-series data platforms for years. Bas

 * **Compatible with InfluxDB, Prometheus and more protocols**

-Widely adopted database protocols and APIs, including MySQL, PostgreSQL, and Prometheus Remote Storage, etc. [Read more](https://docs.greptime.com/user-guide/clients/overview).
+Widely adopted database protocols and APIs, including MySQL, PostgreSQL, and Prometheus Remote Storage, etc. [Read more](https://docs.greptime.com/user-guide/protocols/overview).

 ## Try GreptimeDB

@@ -105,10 +104,10 @@ Read more about [Installation](https://docs.greptime.com/getting-started/install

 ## Getting Started

-* [Quickstart](https://docs.greptime.com/getting-started/quick-start/overview)
-* [Write Data](https://docs.greptime.com/user-guide/clients/overview)
-* [Query Data](https://docs.greptime.com/user-guide/query-data/overview)
-* [Operations](https://docs.greptime.com/user-guide/operations/overview)
+* [Quickstart](https://docs.greptime.com/getting-started/quick-start)
+* [User Guide](https://docs.greptime.com/user-guide/overview)
+* [Demos](https://github.com/GreptimeTeam/demo-scene)
+* [FAQ](https://docs.greptime.com/faq-and-others/faq)

 ## Build

@@ -143,7 +142,7 @@ cargo run -- standalone start
 - [GreptimeDB C++ Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-cpp)
 - [GreptimeDB Erlang Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-erl)
 - [GreptimeDB Rust Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-rust)
-- [GreptimeDB JavaScript Ingester](https://github.com/GreptimeTeam/greptime-ingester-js)
+- [GreptimeDB JavaScript Ingester](https://github.com/GreptimeTeam/greptimedb-ingester-js)

 ### Grafana Dashboard

@@ -151,9 +150,10 @@ Our official Grafana dashboard is available at [grafana](grafana/README.md) dire

 ## Project Status

-The current version has not yet reached General Availability version standards.
-In line with our Greptime 2024 Roadmap, we plan to achieve a production-level
-version with the update to v1.0 in August. [[Join Force]](https://github.com/GreptimeTeam/greptimedb/issues/3412)
+The current version has not yet reached the standards for General Availability.
+According to our Greptime 2024 Roadmap, we aim to achieve a production-level version with the release of v1.0 by the end of 2024. [Join Us](https://github.com/GreptimeTeam/greptimedb/issues/3412)
+
+We welcome you to test and use GreptimeDB. Some users have already adopted it in their production environments. If you're interested in trying it out, please use the latest stable release available.

 ## Community

@@ -172,6 +172,13 @@ In addition, you may:
 - Connect us with [Linkedin](https://www.linkedin.com/company/greptime/)
 - Follow us on [Twitter](https://twitter.com/greptime)

+## Commerial Support
+
+If you are running GreptimeDB OSS in your organization, we offer additional
+enterprise addons, installation service, training and consulting. [Contact
+us](https://greptime.com/contactus) and we will reach out to you with more
+detail of our commerial license.

 ## License

 GreptimeDB uses the [Apache License 2.0](https://apache.org/licenses/LICENSE-2.0.txt) to strike a balance between
@@ -183,6 +190,8 @@ Please refer to [contribution guidelines](CONTRIBUTING.md) and [internal concept

 ## Acknowledgement

+Special thanks to all the contributors who have propelled GreptimeDB forward. For a complete list of contributors, please refer to [AUTHOR.md](AUTHOR.md).
+
 - GreptimeDB uses [Apache Arrow™](https://arrow.apache.org/) as the memory model and [Apache Parquet™](https://parquet.apache.org/) as the persistent file format.
 - GreptimeDB's query engine is powered by [Apache Arrow DataFusion™](https://arrow.apache.org/datafusion/).
 - [Apache OpenDAL™](https://opendal.apache.org) gives GreptimeDB a very general and elegant data access abstraction layer.
@@ -1,38 +0,0 @@
-[package]
-name = "benchmarks"
-version.workspace = true
-edition.workspace = true
-license.workspace = true
-
-[lints]
-workspace = true
-
-[dependencies]
-api.workspace = true
-arrow.workspace = true
-chrono.workspace = true
-clap.workspace = true
-client.workspace = true
-common-base.workspace = true
-common-telemetry.workspace = true
-common-wal.workspace = true
-dotenv.workspace = true
-futures.workspace = true
-futures-util.workspace = true
-humantime.workspace = true
-humantime-serde.workspace = true
-indicatif = "0.17.1"
-itertools.workspace = true
-lazy_static.workspace = true
-log-store.workspace = true
-mito2.workspace = true
-num_cpus.workspace = true
-parquet.workspace = true
-prometheus.workspace = true
-rand.workspace = true
-rskafka.workspace = true
-serde.workspace = true
-store-api.workspace = true
-tokio.workspace = true
-toml.workspace = true
-uuid.workspace = true
@@ -1,11 +0,0 @@
-Benchmarkers for GreptimeDB
---------------------------------
-
-## Wal Benchmarker
-The wal benchmarker serves to evaluate the performance of GreptimeDB's Write-Ahead Log (WAL) component. It meticulously assesses the read/write performance of the WAL under diverse workloads generated by the benchmarker.
-
-
-### How to use
-To compile the benchmarker, navigate to the `greptimedb/benchmarks` directory and execute `cargo build --release`. Subsequently, you'll find the compiled target located at `greptimedb/target/release/wal_bench`.
-
-The `./wal_bench -h` command reveals numerous arguments that the target accepts. Among these, a notable one is the `cfg-file` argument. By utilizing a configuration file in the TOML format, you can bypass the need to repeatedly specify cumbersome arguments.
@@ -1,21 +0,0 @@
-# Refers to the documents of `Args` in benchmarks/src/wal.rs`.
-wal_provider = "kafka"
-bootstrap_brokers = ["localhost:9092"]
-num_workers = 10
-num_topics = 32
-num_regions = 1000
-num_scrapes = 1000
-num_rows = 5
-col_types = "ifs"
-max_batch_size = "512KB"
-linger = "1ms"
-backoff_init = "10ms"
-backoff_max = "1ms"
-backoff_base = 2
-backoff_deadline = "3s"
-compression = "zstd"
-rng_seed = 42
-skip_read = false
-skip_write = false
-random_topics = true
-report_metrics = false
@@ -1,543 +0,0 @@
|
|||||||
// Copyright 2023 Greptime Team
|
|
||||||
//
|
|
||||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
|
||||||
// you may not use this file except in compliance with the License.
|
|
||||||
// You may obtain a copy of the License at
|
|
||||||
//
|
|
||||||
// http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
//
|
|
||||||
// Unless required by applicable law or agreed to in writing, software
|
|
||||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
|
||||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
||||||
// See the License for the specific language governing permissions and
|
|
||||||
// limitations under the License.
|
|
||||||
|
|
||||||
//! Use the taxi trip records from New York City dataset to bench. You can download the dataset from
|
|
||||||
//! [here](https://www1.nyc.gov/site/tlc/about/tlc-trip-record-data.page).
|
|
||||||
|
|
||||||
#![allow(clippy::print_stdout)]
|
|
||||||
|
|
||||||
use std::collections::HashMap;
|
|
||||||
use std::path::{Path, PathBuf};
|
|
||||||
use std::time::Instant;
|
|
||||||
|
|
||||||
use arrow::array::{ArrayRef, PrimitiveArray, StringArray, TimestampMicrosecondArray};
|
|
||||||
use arrow::datatypes::{DataType, Float64Type, Int64Type};
|
|
||||||
use arrow::record_batch::RecordBatch;
|
|
||||||
use clap::Parser;
|
|
||||||
use client::api::v1::column::Values;
|
|
||||||
use client::api::v1::{
|
|
||||||
Column, ColumnDataType, ColumnDef, CreateTableExpr, InsertRequest, InsertRequests, SemanticType,
|
|
||||||
};
|
|
||||||
use client::{Client, Database, OutputData, DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME};
|
|
||||||
use futures_util::TryStreamExt;
|
|
||||||
use indicatif::{MultiProgress, ProgressBar, ProgressStyle};
|
|
||||||
use parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;
|
|
||||||
use tokio::task::JoinSet;
|
|
||||||
|
|
||||||
const CATALOG_NAME: &str = "greptime";
|
|
||||||
const SCHEMA_NAME: &str = "public";
|
|
||||||
|
|
||||||
#[derive(Parser)]
|
|
||||||
#[command(name = "NYC benchmark runner")]
|
|
||||||
struct Args {
|
|
||||||
/// Path to the dataset
|
|
||||||
#[arg(short, long)]
|
|
||||||
path: Option<String>,
|
|
||||||
|
|
||||||
/// Batch size of insert request.
|
|
||||||
#[arg(short = 's', long = "batch-size", default_value_t = 4096)]
|
|
||||||
batch_size: usize,
|
|
||||||
|
|
||||||
/// Number of client threads on write (parallel on file level)
|
|
||||||
#[arg(short = 't', long = "thread-num", default_value_t = 4)]
|
|
||||||
thread_num: usize,
|
|
||||||
|
|
||||||
/// Number of query iteration
|
|
||||||
#[arg(short = 'i', long = "iter-num", default_value_t = 3)]
|
|
||||||
iter_num: usize,
|
|
||||||
|
|
||||||
#[arg(long = "skip-write")]
|
|
||||||
skip_write: bool,
|
|
||||||
|
|
||||||
#[arg(long = "skip-read")]
|
|
||||||
skip_read: bool,
|
|
||||||
|
|
||||||
#[arg(short, long, default_value_t = String::from("127.0.0.1:4001"))]
|
|
||||||
endpoint: String,
|
|
||||||
}
|
|
||||||
|
|
||||||
fn get_file_list<P: AsRef<Path>>(path: P) -> Vec<PathBuf> {
|
|
||||||
std::fs::read_dir(path)
|
|
||||||
.unwrap()
|
|
||||||
.map(|dir| dir.unwrap().path().canonicalize().unwrap())
|
|
||||||
.collect()
|
|
||||||
}
|
|
||||||
|
|
||||||
fn new_table_name() -> String {
|
|
||||||
format!("nyc_taxi_{}", chrono::Utc::now().timestamp())
|
|
||||||
}
|
|
||||||
|
|
||||||
async fn write_data(
|
|
||||||
table_name: &str,
|
|
||||||
batch_size: usize,
|
|
||||||
db: &Database,
|
|
||||||
path: PathBuf,
|
|
||||||
mpb: MultiProgress,
|
|
||||||
pb_style: ProgressStyle,
|
|
||||||
) -> u128 {
|
|
||||||
let file = std::fs::File::open(&path).unwrap();
|
|
||||||
let record_batch_reader_builder = ParquetRecordBatchReaderBuilder::try_new(file).unwrap();
|
|
||||||
let row_num = record_batch_reader_builder
|
|
||||||
.metadata()
|
|
||||||
.file_metadata()
|
|
||||||
.num_rows();
|
|
||||||
let record_batch_reader = record_batch_reader_builder
|
|
||||||
.with_batch_size(batch_size)
|
|
||||||
.build()
|
|
||||||
.unwrap();
|
|
||||||
let progress_bar = mpb.add(ProgressBar::new(row_num as _));
|
|
||||||
progress_bar.set_style(pb_style);
|
|
||||||
progress_bar.set_message(format!("{path:?}"));
|
|
||||||
|
|
||||||
let mut total_rpc_elapsed_ms = 0;
|
|
||||||
|
|
||||||
for record_batch in record_batch_reader {
|
|
||||||
let record_batch = record_batch.unwrap();
|
|
||||||
if !is_record_batch_full(&record_batch) {
|
|
||||||
continue;
|
|
||||||
}
|
|
||||||
let (columns, row_count) = convert_record_batch(record_batch);
|
|
||||||
let request = InsertRequest {
|
|
||||||
table_name: table_name.to_string(),
|
|
||||||
columns,
|
|
||||||
row_count,
|
|
||||||
};
|
|
||||||
let requests = InsertRequests {
|
|
||||||
inserts: vec![request],
|
|
||||||
};
|
|
||||||
|
|
||||||
let now = Instant::now();
|
|
||||||
db.insert(requests).await.unwrap();
|
|
||||||
let elapsed = now.elapsed();
|
|
||||||
total_rpc_elapsed_ms += elapsed.as_millis();
|
|
||||||
progress_bar.inc(row_count as _);
|
|
||||||
}
|
|
||||||
|
|
||||||
progress_bar.finish_with_message(format!("file {path:?} done in {total_rpc_elapsed_ms}ms",));
|
|
||||||
total_rpc_elapsed_ms
|
|
||||||
}
|
|
||||||
|
|
||||||
fn convert_record_batch(record_batch: RecordBatch) -> (Vec<Column>, u32) {
|
|
||||||
let schema = record_batch.schema();
|
|
||||||
let fields = schema.fields();
|
|
||||||
let row_count = record_batch.num_rows();
|
|
||||||
let mut columns = vec![];
|
|
||||||
|
|
||||||
for (array, field) in record_batch.columns().iter().zip(fields.iter()) {
|
|
||||||
let (values, datatype) = build_values(array);
|
|
||||||
let semantic_type = match field.name().as_str() {
|
|
||||||
"VendorID" => SemanticType::Tag,
|
|
||||||
"tpep_pickup_datetime" => SemanticType::Timestamp,
|
|
||||||
_ => SemanticType::Field,
|
|
||||||
};
|
|
||||||
|
|
||||||
let column = Column {
|
|
||||||
column_name: field.name().clone(),
|
|
||||||
values: Some(values),
|
|
||||||
null_mask: array
|
|
||||||
.to_data()
|
|
||||||
.nulls()
|
|
||||||
.map(|bitmap| bitmap.buffer().as_slice().to_vec())
|
|
||||||
.unwrap_or_default(),
|
|
||||||
datatype: datatype.into(),
|
|
||||||
semantic_type: semantic_type as i32,
|
|
||||||
..Default::default()
|
|
||||||
};
|
|
||||||
columns.push(column);
|
|
||||||
}
|
|
||||||
|
|
||||||
(columns, row_count as _)
|
|
||||||
}
|
|
||||||
|
|
||||||
fn build_values(column: &ArrayRef) -> (Values, ColumnDataType) {
|
|
||||||
match column.data_type() {
|
|
||||||
DataType::Int64 => {
|
|
||||||
let array = column
|
|
||||||
.as_any()
|
|
||||||
.downcast_ref::<PrimitiveArray<Int64Type>>()
|
|
||||||
.unwrap();
|
|
||||||
let values = array.values();
|
|
||||||
(
|
|
||||||
Values {
|
|
||||||
i64_values: values.to_vec(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDataType::Int64,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
DataType::Float64 => {
|
|
||||||
let array = column
|
|
||||||
.as_any()
|
|
||||||
.downcast_ref::<PrimitiveArray<Float64Type>>()
|
|
||||||
.unwrap();
|
|
||||||
let values = array.values();
|
|
||||||
(
|
|
||||||
Values {
|
|
||||||
f64_values: values.to_vec(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDataType::Float64,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
DataType::Timestamp(_, _) => {
|
|
||||||
let array = column
|
|
||||||
.as_any()
|
|
||||||
.downcast_ref::<TimestampMicrosecondArray>()
|
|
||||||
.unwrap();
|
|
||||||
let values = array.values();
|
|
||||||
(
|
|
||||||
Values {
|
|
||||||
timestamp_microsecond_values: values.to_vec(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDataType::TimestampMicrosecond,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
DataType::Utf8 => {
|
|
||||||
let array = column.as_any().downcast_ref::<StringArray>().unwrap();
|
|
||||||
let values = array.iter().filter_map(|s| s.map(String::from)).collect();
|
|
||||||
(
|
|
||||||
Values {
|
|
||||||
string_values: values,
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDataType::String,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
DataType::Null
|
|
||||||
| DataType::Boolean
|
|
||||||
| DataType::Int8
|
|
||||||
| DataType::Int16
|
|
||||||
| DataType::Int32
|
|
||||||
| DataType::UInt8
|
|
||||||
| DataType::UInt16
|
|
||||||
| DataType::UInt32
|
|
||||||
| DataType::UInt64
|
|
||||||
| DataType::Float16
|
|
||||||
| DataType::Float32
|
|
||||||
| DataType::Date32
|
|
||||||
| DataType::Date64
|
|
||||||
| DataType::Time32(_)
|
|
||||||
| DataType::Time64(_)
|
|
||||||
| DataType::Duration(_)
|
|
||||||
| DataType::Interval(_)
|
|
||||||
| DataType::Binary
|
|
||||||
| DataType::FixedSizeBinary(_)
|
|
||||||
| DataType::LargeBinary
|
|
||||||
| DataType::LargeUtf8
|
|
||||||
| DataType::List(_)
|
|
||||||
| DataType::FixedSizeList(_, _)
|
|
||||||
| DataType::LargeList(_)
|
|
||||||
| DataType::Struct(_)
|
|
||||||
| DataType::Union(_, _)
|
|
||||||
| DataType::Dictionary(_, _)
|
|
||||||
| DataType::Decimal128(_, _)
|
|
||||||
| DataType::Decimal256(_, _)
|
|
||||||
| DataType::RunEndEncoded(_, _)
|
|
||||||
| DataType::Map(_, _) => todo!(),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
fn is_record_batch_full(batch: &RecordBatch) -> bool {
|
|
||||||
batch.columns().iter().all(|col| col.null_count() == 0)
|
|
||||||
}
|
|
||||||
|
|
||||||
fn create_table_expr(table_name: &str) -> CreateTableExpr {
|
|
||||||
CreateTableExpr {
|
|
||||||
catalog_name: CATALOG_NAME.to_string(),
|
|
||||||
schema_name: SCHEMA_NAME.to_string(),
|
|
||||||
table_name: table_name.to_string(),
|
|
||||||
desc: String::default(),
|
|
||||||
column_defs: vec![
|
|
||||||
ColumnDef {
|
|
||||||
name: "VendorID".to_string(),
|
|
||||||
data_type: ColumnDataType::Int64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Tag as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "tpep_pickup_datetime".to_string(),
|
|
||||||
data_type: ColumnDataType::TimestampMicrosecond as i32,
|
|
||||||
is_nullable: false,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Timestamp as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "tpep_dropoff_datetime".to_string(),
|
|
||||||
data_type: ColumnDataType::TimestampMicrosecond as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "passenger_count".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "trip_distance".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "RatecodeID".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "store_and_fwd_flag".to_string(),
|
|
||||||
data_type: ColumnDataType::String as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "PULocationID".to_string(),
|
|
||||||
data_type: ColumnDataType::Int64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "DOLocationID".to_string(),
|
|
||||||
data_type: ColumnDataType::Int64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "payment_type".to_string(),
|
|
||||||
data_type: ColumnDataType::Int64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "fare_amount".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "extra".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "mta_tax".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "tip_amount".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "tolls_amount".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "improvement_surcharge".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "total_amount".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "congestion_surcharge".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
ColumnDef {
|
|
||||||
name: "airport_fee".to_string(),
|
|
||||||
data_type: ColumnDataType::Float64 as i32,
|
|
||||||
is_nullable: true,
|
|
||||||
default_constraint: vec![],
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
comment: String::new(),
|
|
||||||
..Default::default()
|
|
||||||
},
|
|
||||||
],
|
|
||||||
time_index: "tpep_pickup_datetime".to_string(),
|
|
||||||
primary_keys: vec!["VendorID".to_string()],
|
|
||||||
create_if_not_exists: true,
|
|
||||||
table_options: Default::default(),
|
|
||||||
table_id: None,
|
|
||||||
engine: "mito".to_string(),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
fn query_set(table_name: &str) -> HashMap<String, String> {
|
|
||||||
HashMap::from([
|
|
||||||
(
|
|
||||||
"count_all".to_string(),
|
|
||||||
format!("SELECT COUNT(*) FROM {table_name};"),
|
|
||||||
),
|
|
||||||
(
|
|
||||||
"fare_amt_by_passenger".to_string(),
|
|
||||||
format!("SELECT passenger_count, MIN(fare_amount), MAX(fare_amount), SUM(fare_amount) FROM {table_name} GROUP BY passenger_count"),
|
|
||||||
)
|
|
||||||
])
|
|
||||||
}
|
|
||||||
|
|
||||||
async fn do_write(args: &Args, db: &Database, table_name: &str) {
|
|
||||||
let mut file_list = get_file_list(args.path.clone().expect("Specify data path in argument"));
|
|
||||||
let mut write_jobs = JoinSet::new();
|
|
||||||
|
|
||||||
let create_table_result = db.create(create_table_expr(table_name)).await;
|
|
||||||
println!("Create table result: {create_table_result:?}");
|
|
||||||
|
|
||||||
let progress_bar_style = ProgressStyle::with_template(
|
|
||||||
"[{elapsed_precise}] {bar:60.cyan/blue} {pos:>7}/{len:7} {msg}",
|
|
||||||
)
|
|
||||||
.unwrap()
|
|
||||||
.progress_chars("##-");
|
|
||||||
let multi_progress_bar = MultiProgress::new();
|
|
||||||
let file_progress = multi_progress_bar.add(ProgressBar::new(file_list.len() as _));
|
|
||||||
file_progress.inc(0);
|
|
||||||
|
|
||||||
let batch_size = args.batch_size;
|
|
||||||
for _ in 0..args.thread_num {
|
|
||||||
if let Some(path) = file_list.pop() {
|
|
||||||
let db = db.clone();
|
|
||||||
let mpb = multi_progress_bar.clone();
|
|
||||||
let pb_style = progress_bar_style.clone();
|
|
||||||
let table_name = table_name.to_string();
|
|
||||||
let _ = write_jobs.spawn(async move {
|
|
||||||
write_data(&table_name, batch_size, &db, path, mpb, pb_style).await
|
|
||||||
});
|
|
||||||
}
|
|
||||||
}
|
|
||||||
while write_jobs.join_next().await.is_some() {
|
|
||||||
file_progress.inc(1);
|
|
||||||
if let Some(path) = file_list.pop() {
|
|
||||||
let db = db.clone();
|
|
||||||
let mpb = multi_progress_bar.clone();
|
|
||||||
let pb_style = progress_bar_style.clone();
|
|
||||||
let table_name = table_name.to_string();
|
|
||||||
let _ = write_jobs.spawn(async move {
|
|
||||||
write_data(&table_name, batch_size, &db, path, mpb, pb_style).await
|
|
||||||
});
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
async fn do_query(num_iter: usize, db: &Database, table_name: &str) {
|
|
||||||
for (query_name, query) in query_set(table_name) {
|
|
||||||
println!("Running query: {query}");
|
|
||||||
for i in 0..num_iter {
|
|
||||||
let now = Instant::now();
|
|
||||||
let res = db.sql(&query).await.unwrap();
|
|
||||||
match res.data {
|
|
||||||
OutputData::AffectedRows(_) | OutputData::RecordBatches(_) => (),
|
|
||||||
OutputData::Stream(stream) => {
|
|
||||||
stream.try_collect::<Vec<_>>().await.unwrap();
|
|
||||||
}
|
|
||||||
}
|
|
||||||
let elapsed = now.elapsed();
|
|
||||||
println!(
|
|
||||||
"query {}, iteration {}: {}ms",
|
|
||||||
query_name,
|
|
||||||
i,
|
|
||||||
elapsed.as_millis(),
|
|
||||||
);
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
fn main() {
|
|
||||||
let args = Args::parse();
|
|
||||||
|
|
||||||
tokio::runtime::Builder::new_multi_thread()
|
|
||||||
.worker_threads(args.thread_num)
|
|
||||||
.enable_all()
|
|
||||||
.build()
|
|
||||||
.unwrap()
|
|
||||||
.block_on(async {
|
|
||||||
let client = Client::with_urls(vec![&args.endpoint]);
|
|
||||||
let db = Database::new(DEFAULT_CATALOG_NAME, DEFAULT_SCHEMA_NAME, client);
|
|
||||||
let table_name = new_table_name();
|
|
||||||
|
|
||||||
if !args.skip_write {
|
|
||||||
do_write(&args, &db, &table_name).await;
|
|
||||||
}
|
|
||||||
|
|
||||||
if !args.skip_read {
|
|
||||||
do_query(args.iter_num, &db, &table_name).await;
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}
|
|
||||||
@@ -1,326 +0,0 @@
|
|||||||
// Copyright 2023 Greptime Team
|
|
||||||
//
|
|
||||||
// Licensed under the Apache License, Version 2.0 (the "License");
|
|
||||||
// you may not use this file except in compliance with the License.
|
|
||||||
// You may obtain a copy of the License at
|
|
||||||
//
|
|
||||||
// http://www.apache.org/licenses/LICENSE-2.0
|
|
||||||
//
|
|
||||||
// Unless required by applicable law or agreed to in writing, software
|
|
||||||
// distributed under the License is distributed on an "AS IS" BASIS,
|
|
||||||
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
|
|
||||||
// See the License for the specific language governing permissions and
|
|
||||||
// limitations under the License.
|
|
||||||
|
|
||||||
#![feature(int_roundings)]
|
|
||||||
|
|
||||||
use std::fs;
|
|
||||||
use std::sync::Arc;
|
|
||||||
use std::time::Instant;
|
|
||||||
|
|
||||||
use api::v1::{ColumnDataType, ColumnSchema, SemanticType};
|
|
||||||
use benchmarks::metrics;
|
|
||||||
use benchmarks::wal_bench::{Args, Config, Region, WalProvider};
|
|
||||||
use clap::Parser;
|
|
||||||
use common_telemetry::info;
|
|
||||||
use common_wal::config::kafka::common::BackoffConfig;
|
|
||||||
use common_wal::config::kafka::DatanodeKafkaConfig as KafkaConfig;
|
|
||||||
use common_wal::config::raft_engine::RaftEngineConfig;
|
|
||||||
use common_wal::options::{KafkaWalOptions, WalOptions};
|
|
||||||
use itertools::Itertools;
|
|
||||||
use log_store::kafka::log_store::KafkaLogStore;
|
|
||||||
use log_store::raft_engine::log_store::RaftEngineLogStore;
|
|
||||||
use mito2::wal::Wal;
|
|
||||||
use prometheus::{Encoder, TextEncoder};
|
|
||||||
use rand::distributions::{Alphanumeric, DistString};
|
|
||||||
use rand::rngs::SmallRng;
|
|
||||||
use rand::SeedableRng;
|
|
||||||
use rskafka::client::partition::Compression;
|
|
||||||
use rskafka::client::ClientBuilder;
|
|
||||||
use store_api::logstore::LogStore;
|
|
||||||
use store_api::storage::RegionId;
|
|
||||||
|
|
||||||
async fn run_benchmarker<S: LogStore>(cfg: &Config, topics: &[String], wal: Arc<Wal<S>>) {
|
|
||||||
let chunk_size = cfg.num_regions.div_ceil(cfg.num_workers);
|
|
||||||
let region_chunks = (0..cfg.num_regions)
|
|
||||||
.map(|id| {
|
|
||||||
build_region(
|
|
||||||
id as u64,
|
|
||||||
topics,
|
|
||||||
&mut SmallRng::seed_from_u64(cfg.rng_seed),
|
|
||||||
cfg,
|
|
||||||
)
|
|
||||||
})
|
|
||||||
.chunks(chunk_size as usize)
|
|
||||||
.into_iter()
|
|
||||||
.map(|chunk| Arc::new(chunk.collect::<Vec<_>>()))
|
|
||||||
.collect::<Vec<_>>();
|
|
||||||
|
|
||||||
let mut write_elapsed = 0;
|
|
||||||
let mut read_elapsed = 0;
|
|
||||||
|
|
||||||
if !cfg.skip_write {
|
|
||||||
info!("Benchmarking write ...");
|
|
||||||
|
|
||||||
let num_scrapes = cfg.num_scrapes;
|
|
||||||
let timer = Instant::now();
|
|
||||||
futures::future::join_all((0..cfg.num_workers).map(|i| {
|
|
||||||
let wal = wal.clone();
|
|
||||||
let regions = region_chunks[i as usize].clone();
|
|
||||||
tokio::spawn(async move {
|
|
||||||
for _ in 0..num_scrapes {
|
|
||||||
let mut wal_writer = wal.writer();
|
|
||||||
regions
|
|
||||||
.iter()
|
|
||||||
.for_each(|region| region.add_wal_entry(&mut wal_writer));
|
|
||||||
wal_writer.write_to_wal().await.unwrap();
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}))
|
|
||||||
.await;
|
|
||||||
write_elapsed += timer.elapsed().as_millis();
|
|
||||||
}
|
|
||||||
|
|
||||||
if !cfg.skip_read {
|
|
||||||
info!("Benchmarking read ...");
|
|
||||||
|
|
||||||
let timer = Instant::now();
|
|
||||||
futures::future::join_all((0..cfg.num_workers).map(|i| {
|
|
||||||
let wal = wal.clone();
|
|
||||||
let regions = region_chunks[i as usize].clone();
|
|
||||||
tokio::spawn(async move {
|
|
||||||
for region in regions.iter() {
|
|
||||||
region.replay(&wal).await;
|
|
||||||
}
|
|
||||||
})
|
|
||||||
}))
|
|
||||||
.await;
|
|
||||||
read_elapsed = timer.elapsed().as_millis();
|
|
||||||
}
|
|
||||||
|
|
||||||
dump_report(cfg, write_elapsed, read_elapsed);
|
|
||||||
}
|
|
||||||
|
|
||||||
fn build_region(id: u64, topics: &[String], rng: &mut SmallRng, cfg: &Config) -> Region {
|
|
||||||
let wal_options = match cfg.wal_provider {
|
|
||||||
WalProvider::Kafka => {
|
|
||||||
assert!(!topics.is_empty());
|
|
||||||
WalOptions::Kafka(KafkaWalOptions {
|
|
||||||
topic: topics.get(id as usize % topics.len()).cloned().unwrap(),
|
|
||||||
})
|
|
||||||
}
|
|
||||||
WalProvider::RaftEngine => WalOptions::RaftEngine,
|
|
||||||
};
|
|
||||||
Region::new(
|
|
||||||
RegionId::from_u64(id),
|
|
||||||
build_schema(&parse_col_types(&cfg.col_types), rng),
|
|
||||||
wal_options,
|
|
||||||
cfg.num_rows,
|
|
||||||
cfg.rng_seed,
|
|
||||||
)
|
|
||||||
}
|
|
||||||
|
|
||||||
fn build_schema(col_types: &[ColumnDataType], mut rng: &mut SmallRng) -> Vec<ColumnSchema> {
|
|
||||||
col_types
|
|
||||||
.iter()
|
|
||||||
.map(|col_type| ColumnSchema {
|
|
||||||
column_name: Alphanumeric.sample_string(&mut rng, 5),
|
|
||||||
datatype: *col_type as i32,
|
|
||||||
semantic_type: SemanticType::Field as i32,
|
|
||||||
datatype_extension: None,
|
|
||||||
})
|
|
||||||
.chain(vec![ColumnSchema {
|
|
||||||
column_name: "ts".to_string(),
|
|
||||||
datatype: ColumnDataType::TimestampMillisecond as i32,
|
|
||||||
semantic_type: SemanticType::Tag as i32,
|
|
||||||
datatype_extension: None,
|
|
||||||
}])
|
|
||||||
.collect()
|
|
||||||
}
|
|
||||||
|
|
||||||
fn dump_report(cfg: &Config, write_elapsed: u128, read_elapsed: u128) {
|
|
||||||
let cost_report = format!(
|
|
||||||
"write costs: {} ms, read costs: {} ms",
|
|
||||||
write_elapsed, read_elapsed,
|
|
||||||
);
|
|
||||||
|
|
||||||
let total_written_bytes = metrics::METRIC_WAL_WRITE_BYTES_TOTAL.get() as u128;
|
|
||||||
let write_throughput = if write_elapsed > 0 {
|
|
||||||
(total_written_bytes * 1000).div_floor(write_elapsed)
|
|
||||||
} else {
|
|
||||||
0
|
|
||||||
};
|
|
||||||
let total_read_bytes = metrics::METRIC_WAL_READ_BYTES_TOTAL.get() as u128;
|
|
||||||
let read_throughput = if read_elapsed > 0 {
|
|
||||||
(total_read_bytes * 1000).div_floor(read_elapsed)
|
|
||||||
} else {
|
|
||||||
0
|
|
||||||
};
|
|
||||||
|
|
||||||
let throughput_report = format!(
|
|
||||||
"total written bytes: {} bytes, total read bytes: {} bytes, write throuput: {} bytes/s ({} mb/s), read throughput: {} bytes/s ({} mb/s)",
|
|
||||||
total_written_bytes,
|
|
||||||
total_read_bytes,
|
|
||||||
write_throughput,
|
|
||||||
write_throughput.div_floor(1 << 20),
|
|
||||||
read_throughput,
|
|
||||||
read_throughput.div_floor(1 << 20),
|
|
||||||
);
|
|
||||||
|
|
||||||
let metrics_report = if cfg.report_metrics {
|
|
||||||
let mut buffer = Vec::new();
|
|
||||||
let encoder = TextEncoder::new();
|
|
||||||
let metrics = prometheus::gather();
|
|
||||||
encoder.encode(&metrics, &mut buffer).unwrap();
|
|
||||||
String::from_utf8(buffer).unwrap()
|
|
||||||
} else {
|
|
||||||
String::new()
|
|
||||||
};
|
|
||||||
|
|
||||||
info!(
|
|
||||||
r#"
|
|
||||||
Benchmark config:
|
|
||||||
{cfg:?}
|
|
||||||
|
|
||||||
Benchmark report:
|
|
||||||
{cost_report}
|
|
||||||
{throughput_report}
|
|
||||||
{metrics_report}"#
|
|
||||||
);
|
|
||||||
}
|
|
||||||
|
|
||||||
async fn create_topics(cfg: &Config) -> Vec<String> {
|
|
||||||
// Creates topics.
|
|
||||||
let client = ClientBuilder::new(cfg.bootstrap_brokers.clone())
|
|
||||||
.build()
|
|
||||||
.await
|
|
||||||
.unwrap();
|
|
||||||
let ctrl_client = client.controller_client().unwrap();
|
|
||||||
let (topics, tasks): (Vec<_>, Vec<_>) = (0..cfg.num_topics)
|
|
||||||
.map(|i| {
|
|
||||||
let topic = if cfg.random_topics {
|
|
||||||
format!(
|
|
||||||
"greptime_wal_bench_topic_{}_{}",
|
|
||||||
uuid::Uuid::new_v4().as_u128(),
|
|
||||||
i
|
|
||||||
)
|
|
||||||
} else {
|
|
||||||
format!("greptime_wal_bench_topic_{}", i)
|
|
||||||
};
|
|
||||||
let task = ctrl_client.create_topic(
|
|
||||||
topic.clone(),
|
|
||||||
1,
|
|
||||||
cfg.bootstrap_brokers.len() as i16,
|
|
||||||
2000,
|
|
||||||
);
|
|
||||||
(topic, task)
|
|
||||||
})
|
|
||||||
.unzip();
|
|
||||||
// Must ignore errors since we allow topics being created more than once.
|
|
||||||
let _ = futures::future::try_join_all(tasks).await;
|
|
||||||
|
|
||||||
topics
|
|
||||||
}
|
|
||||||
|
|
||||||
fn parse_compression(comp: &str) -> Compression {
|
|
||||||
match comp {
|
|
||||||
"no" => Compression::NoCompression,
|
|
||||||
"gzip" => Compression::Gzip,
|
|
||||||
"lz4" => Compression::Lz4,
|
|
||||||
"snappy" => Compression::Snappy,
|
|
||||||
"zstd" => Compression::Zstd,
|
|
||||||
other => unreachable!("Unrecognized compression {other}"),
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
fn parse_col_types(col_types: &str) -> Vec<ColumnDataType> {
|
|
||||||
let parts = col_types.split('x').collect::<Vec<_>>();
|
|
||||||
assert!(parts.len() <= 2);
|
|
||||||
|
|
||||||
let pattern = parts[0];
|
|
||||||
let repeat = parts
|
|
||||||
.get(1)
|
|
||||||
.map(|r| r.parse::<usize>().unwrap())
|
|
||||||
.unwrap_or(1);
|
|
||||||
|
|
||||||
pattern
|
|
||||||
.chars()
|
|
||||||
.map(|c| match c {
|
|
||||||
'i' | 'I' => ColumnDataType::Int64,
|
|
||||||
'f' | 'F' => ColumnDataType::Float64,
|
|
||||||
's' | 'S' => ColumnDataType::String,
|
|
||||||
other => unreachable!("Cannot parse {other} as a column data type"),
|
|
||||||
})
|
|
||||||
.cycle()
|
|
||||||
.take(pattern.len() * repeat)
|
|
||||||
.collect()
|
|
||||||
}

fn main() {
    // Sets the global log level to INFO and suppresses rskafka logs below ERROR.
    std::env::set_var("UNITTEST_LOG_LEVEL", "info,rskafka=error");
    common_telemetry::init_default_ut_logging();

    let args = Args::parse();
    let cfg = if !args.cfg_file.is_empty() {
        toml::from_str(&fs::read_to_string(&args.cfg_file).unwrap()).unwrap()
    } else {
        Config::from(args)
    };

    // Validates arguments.
    if cfg.num_regions < cfg.num_workers {
        panic!("num_regions must be greater than or equal to num_workers");
    }
    if cfg
        .num_workers
        .min(cfg.num_topics)
        .min(cfg.num_regions)
        .min(cfg.num_scrapes)
        .min(cfg.max_batch_size.as_bytes() as u32)
        .min(cfg.bootstrap_brokers.len() as u32)
        == 0
    {
        panic!("Invalid arguments");
    }

    tokio::runtime::Builder::new_multi_thread()
        .enable_all()
        .build()
        .unwrap()
        .block_on(async {
            match cfg.wal_provider {
                WalProvider::Kafka => {
                    let topics = create_topics(&cfg).await;
                    let kafka_cfg = KafkaConfig {
                        broker_endpoints: cfg.bootstrap_brokers.clone(),
                        max_batch_size: cfg.max_batch_size,
                        linger: cfg.linger,
                        backoff: BackoffConfig {
                            init: cfg.backoff_init,
                            max: cfg.backoff_max,
                            base: cfg.backoff_base,
                            deadline: Some(cfg.backoff_deadline),
                        },
                        compression: parse_compression(&cfg.compression),
                        ..Default::default()
                    };
                    let store = Arc::new(KafkaLogStore::try_new(&kafka_cfg).await.unwrap());
                    let wal = Arc::new(Wal::new(store));
                    run_benchmarker(&cfg, &topics, wal).await;
                }
                WalProvider::RaftEngine => {
                    // The benchmarker assumes the raft engine directory exists.
                    let store = RaftEngineLogStore::try_new(
                        "/tmp/greptimedb/raft-engine-wal".to_string(),
                        RaftEngineConfig::default(),
                    )
                    .await
                    .map(Arc::new)
                    .unwrap();
                    let wal = Arc::new(Wal::new(store));
                    run_benchmarker(&cfg, &[], wal).await;
                }
            }
        });
}

@@ -1,39 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use lazy_static::lazy_static;
use prometheus::*;

/// Logstore label.
pub const LOGSTORE_LABEL: &str = "logstore";
/// Operation type label.
pub const OPTYPE_LABEL: &str = "optype";

lazy_static! {
    /// Counters of bytes of each operation on a logstore.
    pub static ref METRIC_WAL_OP_BYTES_TOTAL: IntCounterVec = register_int_counter_vec!(
        "greptime_bench_wal_op_bytes_total",
        "wal operation bytes total",
        &[OPTYPE_LABEL],
    )
    .unwrap();
    /// Counter of bytes of the append_batch operation.
    pub static ref METRIC_WAL_WRITE_BYTES_TOTAL: IntCounter = METRIC_WAL_OP_BYTES_TOTAL.with_label_values(
        &["write"],
    );
    /// Counter of bytes of the read operation.
    pub static ref METRIC_WAL_READ_BYTES_TOTAL: IntCounter = METRIC_WAL_OP_BYTES_TOTAL.with_label_values(
        &["read"],
    );
}
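
// Editor's sketch (not part of the original file): how the benchmarker is expected to
// feed these counters. `inc_by` and `get` are standard `prometheus::IntCounter` methods.
#[cfg(test)]
mod usage_sketch {
    use super::*;

    #[test]
    fn write_counter_accumulates_bytes() {
        // Record a hypothetical 1 KiB write, then read the counter back.
        METRIC_WAL_WRITE_BYTES_TOTAL.inc_by(1024);
        assert!(METRIC_WAL_WRITE_BYTES_TOTAL.get() >= 1024);
    }
}
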
@@ -1,361 +0,0 @@
// Copyright 2023 Greptime Team
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

use std::mem::size_of;
use std::sync::atomic::{AtomicI64, AtomicU64, Ordering};
use std::sync::{Arc, Mutex};
use std::time::Duration;

use api::v1::value::ValueData;
use api::v1::{ColumnDataType, ColumnSchema, Mutation, OpType, Row, Rows, Value, WalEntry};
use clap::{Parser, ValueEnum};
use common_base::readable_size::ReadableSize;
use common_wal::options::WalOptions;
use futures::StreamExt;
use mito2::wal::{Wal, WalWriter};
use rand::distributions::{Alphanumeric, DistString, Uniform};
use rand::rngs::SmallRng;
use rand::{Rng, SeedableRng};
use serde::{Deserialize, Serialize};
use store_api::logstore::LogStore;
use store_api::storage::RegionId;

use crate::metrics;

/// The wal provider.
#[derive(Clone, ValueEnum, Default, Debug, PartialEq, Serialize, Deserialize)]
#[serde(rename_all = "snake_case")]
pub enum WalProvider {
    #[default]
    RaftEngine,
    Kafka,
}

#[derive(Parser)]
pub struct Args {
    /// The provided configuration file.
    /// The example configuration file can be found at `greptimedb/benchmarks/config/wal_bench.example.toml`.
    #[clap(long, short = 'c')]
    pub cfg_file: String,

    /// The wal provider.
    #[clap(long, value_enum, default_value_t = WalProvider::default())]
    pub wal_provider: WalProvider,

    /// The advertised addresses of the kafka brokers.
    /// If there are multiple bootstrap brokers, their addresses should be separated by commas, e.g. "localhost:9092,localhost:9093".
    #[clap(long, short = 'b', default_value = "localhost:9092")]
    pub bootstrap_brokers: String,

    /// The number of workers, each running in a dedicated thread.
    #[clap(long, default_value_t = num_cpus::get() as u32)]
    pub num_workers: u32,

    /// The number of kafka topics to be created.
    #[clap(long, default_value_t = 32)]
    pub num_topics: u32,

    /// The number of regions.
    #[clap(long, default_value_t = 1000)]
    pub num_regions: u32,

    /// The number of times each region is scraped.
    #[clap(long, default_value_t = 1000)]
    pub num_scrapes: u32,

    /// The number of rows in each wal entry.
    /// Each time a region is scraped, a wal entry containing `num_rows` rows is produced.
    #[clap(long, default_value_t = 5)]
    pub num_rows: u32,

    /// The column types of the schema for each region.
    /// Currently, three column types are supported:
    /// - i = ColumnDataType::Int64
    /// - f = ColumnDataType::Float64
    /// - s = ColumnDataType::String
    /// For example, "ifs" will be parsed as three columns: i64, f64, and string.
    ///
    /// Additionally, an "x" sign can be provided to repeat the column types a given number of times.
    /// For example, "iix2" will be parsed as 4 columns: i64, i64, i64, and i64.
    /// This feature is useful if you want to specify many columns.
    #[clap(long, default_value = "ifs")]
    pub col_types: String,

    /// The maximum size of a batch of kafka records.
    /// The default value is 512KB.
    #[clap(long, default_value = "512KB")]
    pub max_batch_size: ReadableSize,

    /// The minimum amount of time the kafka client waits before issuing a batch of records.
    /// However, a batch is issued immediately if a record cannot fit into the current batch.
    #[clap(long, default_value = "1ms")]
    pub linger: String,

    /// The initial backoff delay of the kafka consumer.
    #[clap(long, default_value = "10ms")]
    pub backoff_init: String,

    /// The maximum backoff delay of the kafka consumer.
    #[clap(long, default_value = "1s")]
    pub backoff_max: String,

    /// The exponential backoff rate of the kafka consumer. The next backoff = base * the current backoff.
    #[clap(long, default_value_t = 2)]
    pub backoff_base: u32,

    /// The deadline of backoff. The backoff ends if the total backoff delay reaches the deadline.
    #[clap(long, default_value = "3s")]
    pub backoff_deadline: String,

    /// The client-side compression algorithm for kafka records.
    #[clap(long, default_value = "zstd")]
    pub compression: String,

    /// The seed of random number generators.
    #[clap(long, default_value_t = 42)]
    pub rng_seed: u64,

    /// Skips the read phase, i.e. region replay, if set to true.
    #[clap(long, default_value_t = false)]
    pub skip_read: bool,

    /// Skips the write phase if set to true.
    #[clap(long, default_value_t = false)]
    pub skip_write: bool,

    /// Randomly generates topic names if set to true.
    /// Useful when you want to run the benchmarker without worrying about previously created topics.
    #[clap(long, default_value_t = false)]
    pub random_topics: bool,

    /// Logs the gathered prometheus metrics when the benchmarker ends.
    #[clap(long, default_value_t = false)]
    pub report_metrics: bool,
}
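
// Editor's sketch (not part of the original file): parsing CLI-style arguments in code.
// `parse_from` is the standard `clap::Parser` method; the binary name "wal_bench" is only
// a placeholder for argv[0], and `-c ""` stands in for an empty configuration file path.
#[cfg(test)]
mod args_sketch {
    use super::*;

    #[test]
    fn defaults_can_be_overridden() {
        let args = Args::parse_from(["wal_bench", "-c", "", "--num-topics", "4"]);
        assert_eq!(args.num_topics, 4);
        assert_eq!(args.bootstrap_brokers, "localhost:9092");
    }
}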

/// Benchmarker config.
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct Config {
    pub wal_provider: WalProvider,
    pub bootstrap_brokers: Vec<String>,
    pub num_workers: u32,
    pub num_topics: u32,
    pub num_regions: u32,
    pub num_scrapes: u32,
    pub num_rows: u32,
    pub col_types: String,
    pub max_batch_size: ReadableSize,
    #[serde(with = "humantime_serde")]
    pub linger: Duration,
    #[serde(with = "humantime_serde")]
    pub backoff_init: Duration,
    #[serde(with = "humantime_serde")]
    pub backoff_max: Duration,
    pub backoff_base: u32,
    #[serde(with = "humantime_serde")]
    pub backoff_deadline: Duration,
    pub compression: String,
    pub rng_seed: u64,
    pub skip_read: bool,
    pub skip_write: bool,
    pub random_topics: bool,
    pub report_metrics: bool,
}

impl From<Args> for Config {
    fn from(args: Args) -> Self {
        let cfg = Self {
            wal_provider: args.wal_provider,
            bootstrap_brokers: args
                .bootstrap_brokers
                .split(',')
                .map(ToString::to_string)
                .collect::<Vec<_>>(),
            num_workers: args.num_workers.min(num_cpus::get() as u32),
            num_topics: args.num_topics,
            num_regions: args.num_regions,
            num_scrapes: args.num_scrapes,
            num_rows: args.num_rows,
            col_types: args.col_types,
            max_batch_size: args.max_batch_size,
            linger: humantime::parse_duration(&args.linger).unwrap(),
            backoff_init: humantime::parse_duration(&args.backoff_init).unwrap(),
            backoff_max: humantime::parse_duration(&args.backoff_max).unwrap(),
            backoff_base: args.backoff_base,
            backoff_deadline: humantime::parse_duration(&args.backoff_deadline).unwrap(),
            compression: args.compression,
            rng_seed: args.rng_seed,
            skip_read: args.skip_read,
            skip_write: args.skip_write,
            random_topics: args.random_topics,
            report_metrics: args.report_metrics,
        };

        cfg
    }
}
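
// Editor's sketch (not part of the original file): the duration strings accepted on the
// command line ("1ms", "10ms", "3s", ...) are converted with `humantime::parse_duration`,
// exactly as in `From<Args> for Config` above.
#[cfg(test)]
mod duration_sketch {
    use std::time::Duration;

    #[test]
    fn humantime_strings_become_durations() {
        assert_eq!(humantime::parse_duration("500ms").unwrap(), Duration::from_millis(500));
        assert_eq!(humantime::parse_duration("3s").unwrap(), Duration::from_secs(3));
    }
}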

/// The region used for wal benchmarker.
pub struct Region {
    id: RegionId,
    schema: Vec<ColumnSchema>,
    wal_options: WalOptions,
    next_sequence: AtomicU64,
    next_entry_id: AtomicU64,
    next_timestamp: AtomicI64,
    rng: Mutex<Option<SmallRng>>,
    num_rows: u32,
}

impl Region {
    /// Creates a new region.
    pub fn new(
        id: RegionId,
        schema: Vec<ColumnSchema>,
        wal_options: WalOptions,
        num_rows: u32,
        rng_seed: u64,
    ) -> Self {
        Self {
            id,
            schema,
            wal_options,
            next_sequence: AtomicU64::new(1),
            next_entry_id: AtomicU64::new(1),
            next_timestamp: AtomicI64::new(1655276557000),
            rng: Mutex::new(Some(SmallRng::seed_from_u64(rng_seed))),
            num_rows,
        }
    }

    /// Scrapes the region and adds the generated entry to wal.
    pub fn add_wal_entry<S: LogStore>(&self, wal_writer: &mut WalWriter<S>) {
        let mutation = Mutation {
            op_type: OpType::Put as i32,
            sequence: self
                .next_sequence
                .fetch_add(self.num_rows as u64, Ordering::Relaxed),
            rows: Some(self.build_rows()),
        };
        let entry = WalEntry {
            mutations: vec![mutation],
        };
        metrics::METRIC_WAL_WRITE_BYTES_TOTAL.inc_by(Self::entry_estimated_size(&entry) as u64);

        wal_writer
            .add_entry(
                self.id,
                self.next_entry_id.fetch_add(1, Ordering::Relaxed),
                &entry,
                &self.wal_options,
            )
            .unwrap();
    }

    /// Replays the region.
    pub async fn replay<S: LogStore>(&self, wal: &Arc<Wal<S>>) {
        let mut wal_stream = wal.scan(self.id, 0, &self.wal_options).unwrap();
        while let Some(res) = wal_stream.next().await {
            let (_, entry) = res.unwrap();
            metrics::METRIC_WAL_READ_BYTES_TOTAL.inc_by(Self::entry_estimated_size(&entry) as u64);
        }
    }

    /// Computes the estimated size in bytes of the entry.
    pub fn entry_estimated_size(entry: &WalEntry) -> usize {
        let wrapper_size = size_of::<WalEntry>()
            + entry.mutations.capacity() * size_of::<Mutation>()
            + size_of::<Rows>();

        let rows = entry.mutations[0].rows.as_ref().unwrap();

        let schema_size = rows.schema.capacity() * size_of::<ColumnSchema>()
            + rows
                .schema
                .iter()
                .map(|s| s.column_name.capacity())
                .sum::<usize>();
        let values_size = (rows.rows.capacity() * size_of::<Row>())
            + rows
                .rows
                .iter()
                .map(|r| r.values.capacity() * size_of::<Value>())
                .sum::<usize>();

        wrapper_size + schema_size + values_size
    }

    fn build_rows(&self) -> Rows {
        let cols = self
            .schema
            .iter()
            .map(|col_schema| {
                let col_data_type = ColumnDataType::try_from(col_schema.datatype).unwrap();
                self.build_col(&col_data_type, self.num_rows)
            })
            .collect::<Vec<_>>();

        let rows = (0..self.num_rows)
            .map(|i| {
                let values = cols.iter().map(|col| col[i as usize].clone()).collect();
                Row { values }
            })
            .collect();

        Rows {
            schema: self.schema.clone(),
            rows,
        }
    }

    fn build_col(&self, col_data_type: &ColumnDataType, num_rows: u32) -> Vec<Value> {
        let mut rng_guard = self.rng.lock().unwrap();
        let rng = rng_guard.as_mut().unwrap();
        match col_data_type {
            ColumnDataType::TimestampMillisecond => (0..num_rows)
                .map(|_| {
                    let ts = self.next_timestamp.fetch_add(1000, Ordering::Relaxed);
                    Value {
                        value_data: Some(ValueData::TimestampMillisecondValue(ts)),
                    }
                })
                .collect(),
            ColumnDataType::Int64 => (0..num_rows)
                .map(|_| {
                    let v = rng.sample(Uniform::new(0, 10_000));
                    Value {
                        value_data: Some(ValueData::I64Value(v)),
                    }
                })
                .collect(),
            ColumnDataType::Float64 => (0..num_rows)
                .map(|_| {
                    let v = rng.sample(Uniform::new(0.0, 5000.0));
                    Value {
                        value_data: Some(ValueData::F64Value(v)),
                    }
                })
                .collect(),
            ColumnDataType::String => (0..num_rows)
                .map(|_| {
                    let v = Alphanumeric.sample_string(rng, 10);
                    Value {
                        value_data: Some(ValueData::StringValue(v)),
                    }
                })
                .collect(),
            _ => unreachable!(),
        }
    }
}
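
// Editor's sketch (not part of the original file): constructing a Region the way the
// benchmarker's setup code would be expected to. The empty schema, the region id values,
// and the use of `WalOptions::RaftEngine` as the local-WAL variant are all assumptions
// made for illustration only.
#[cfg(test)]
mod region_sketch {
    use super::*;

    #[test]
    fn region_can_be_created_with_an_empty_schema() {
        let region = Region::new(
            RegionId::new(1, 1), // table id 1, region number 1
            Vec::new(),          // schema omitted for brevity
            WalOptions::RaftEngine,
            5,  // num_rows per scrape
            42, // rng seed
        );
        assert_eq!(region.id, RegionId::new(1, 1));
    }
}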

cliff.toml
@@ -53,7 +53,7 @@ Release date: {{ timestamp | date(format="%B %d, %Y") }}
 ## New Contributors
 {% endif -%}
 {% for contributor in github.contributors | filter(attribute="is_first_time", value=true) %}
-* @{{ contributor.username }} made their first contribution
+* [@{{ contributor.username }}](https://github.com/{{ contributor.username }}) made their first contribution
 {%- if contributor.pr_number %} in \
     [#{{ contributor.pr_number }}]({{ self::remote_url() }}/pull/{{ contributor.pr_number }}) \
 {%- endif %}
@@ -65,7 +65,17 @@ Release date: {{ timestamp | date(format="%B %d, %Y") }}
 We would like to thank the following contributors from the GreptimeDB community:
 
-{{ github.contributors | map(attribute="username") | join(sep=", ") }}
+{%- set contributors = github.contributors | sort(attribute="username") | map(attribute="username") -%}
+{%- set bots = ['dependabot[bot]'] %}
+{% for contributor in contributors %}
+{%- if bots is containing(contributor) -%}{% continue %}{%- endif -%}
+{%- if loop.first -%}
+[@{{ contributor }}](https://github.com/{{ contributor }})
+{%- else -%}
+, [@{{ contributor }}](https://github.com/{{ contributor }})
+{%- endif -%}
+{%- endfor %}
 {%- endif %}
 {% raw %}\n{% endraw %}

config/config-docs-template.md (new file)
@@ -0,0 +1,31 @@
# Configurations

- [Configurations](#configurations)
  - [Standalone Mode](#standalone-mode)
  - [Distributed Mode](#distributed-mode)
    - [Frontend](#frontend)
    - [Metasrv](#metasrv)
    - [Datanode](#datanode)
    - [Flownode](#flownode)

## Standalone Mode

{{ toml2docs "./standalone.example.toml" }}

## Distributed Mode

### Frontend

{{ toml2docs "./frontend.example.toml" }}

### Metasrv

{{ toml2docs "./metasrv.example.toml" }}

### Datanode

{{ toml2docs "./datanode.example.toml" }}

### Flownode

{{ toml2docs "./flownode.example.toml"}}

config/config.md (new file)
@@ -0,0 +1,547 @@
# Configurations

- [Configurations](#configurations)
  - [Standalone Mode](#standalone-mode)
  - [Distributed Mode](#distributed-mode)
    - [Frontend](#frontend)
    - [Metasrv](#metasrv)
    - [Datanode](#datanode)
    - [Flownode](#flownode)

## Standalone Mode

| Key | Type | Default | Descriptions |
|
||||||
|
| --- | -----| ------- | ----------- |
|
||||||
|
| `mode` | String | `standalone` | The running mode of the datanode. It can be `standalone` or `distributed`. |
|
||||||
|
| `enable_telemetry` | Bool | `true` | Enable telemetry to collect anonymous usage data. |
|
||||||
|
| `default_timezone` | String | Unset | The default timezone of the server. |
|
||||||
|
| `init_regions_in_background` | Bool | `false` | Initialize all regions in the background during the startup.<br/>By default, it provides services after all regions have been initialized. |
|
||||||
|
| `init_regions_parallelism` | Integer | `16` | Parallelism of initializing regions. |
|
||||||
|
| `max_concurrent_queries` | Integer | `0` | The maximum current queries allowed to be executed. Zero means unlimited. |
|
||||||
|
| `runtime` | -- | -- | The runtime options. |
|
||||||
|
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
|
||||||
|
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for global write operations. |
|
||||||
|
| `http` | -- | -- | The HTTP server options. |
|
||||||
|
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
|
||||||
|
| `http.timeout` | String | `30s` | HTTP request timeout. Set to 0 to disable timeout. |
|
||||||
|
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.<br/>Set to 0 to disable limit. |
|
||||||
|
| `grpc` | -- | -- | The gRPC server options. |
|
||||||
|
| `grpc.addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
|
||||||
|
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
|
||||||
|
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
|
||||||
|
| `grpc.tls.mode` | String | `disable` | TLS mode. |
|
||||||
|
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
|
||||||
|
| `grpc.tls.key_path` | String | Unset | Private key file path. |
|
||||||
|
| `grpc.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload.<br/>For now, gRPC tls config does not support auto reload. |
|
||||||
|
| `mysql` | -- | -- | MySQL server options. |
|
||||||
|
| `mysql.enable` | Bool | `true` | Whether to enable. |
|
||||||
|
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
|
||||||
|
| `mysql.runtime_size` | Integer | `2` | The number of server worker threads. |
|
||||||
|
| `mysql.tls` | -- | -- | -- |
|
||||||
|
| `mysql.tls.mode` | String | `disable` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- `disable` (default value)<br/>- `prefer`<br/>- `require`<br/>- `verify-ca`<br/>- `verify-full` |
|
||||||
|
| `mysql.tls.cert_path` | String | Unset | Certificate file path. |
|
||||||
|
| `mysql.tls.key_path` | String | Unset | Private key file path. |
|
||||||
|
| `mysql.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload |
|
||||||
|
| `postgres` | -- | -- | PostgresSQL server options. |
|
||||||
|
| `postgres.enable` | Bool | `true` | Whether to enable |
|
||||||
|
| `postgres.addr` | String | `127.0.0.1:4003` | The addr to bind the PostgresSQL server. |
|
||||||
|
| `postgres.runtime_size` | Integer | `2` | The number of server worker threads. |
|
||||||
|
| `postgres.tls` | -- | -- | PostgresSQL server TLS options, see `mysql.tls` section. |
|
||||||
|
| `postgres.tls.mode` | String | `disable` | TLS mode. |
|
||||||
|
| `postgres.tls.cert_path` | String | Unset | Certificate file path. |
|
||||||
|
| `postgres.tls.key_path` | String | Unset | Private key file path. |
|
||||||
|
| `postgres.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload |
|
||||||
|
| `opentsdb` | -- | -- | OpenTSDB protocol options. |
|
||||||
|
| `opentsdb.enable` | Bool | `true` | Whether to enable OpenTSDB put in HTTP API. |
|
||||||
|
| `influxdb` | -- | -- | InfluxDB protocol options. |
|
||||||
|
| `influxdb.enable` | Bool | `true` | Whether to enable InfluxDB protocol in HTTP API. |
|
||||||
|
| `prom_store` | -- | -- | Prometheus remote storage options |
|
||||||
|
| `prom_store.enable` | Bool | `true` | Whether to enable Prometheus remote write and read in HTTP API. |
|
||||||
|
| `prom_store.with_metric_engine` | Bool | `true` | Whether to store the data from Prometheus remote write in metric engine. |
|
||||||
|
| `wal` | -- | -- | The WAL options. |
|
||||||
|
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the wal is stored in the local file system by raft-engine.<br/>- `kafka`: it's remote wal that data is stored in Kafka. |
|
||||||
|
| `wal.dir` | String | Unset | The directory to store the WAL files.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.file_size` | String | `256MB` | The size of the WAL segment file.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.purge_threshold` | String | `4GB` | The threshold of the WAL size to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.purge_interval` | String | `10m` | The interval to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.read_batch_size` | Integer | `128` | The read batch size.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.sync_write` | Bool | `false` | Whether to use sync write.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.enable_log_recycle` | Bool | `true` | Whether to reuse logically truncated log files.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.prefill_log_files` | Bool | `false` | Whether to pre-create log files on start up.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.sync_period` | String | `10s` | Duration for fsyncing log files.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.recovery_parallelism` | Integer | `2` | Parallelism during WAL recovery. |
|
||||||
|
| `wal.broker_endpoints` | Array | -- | The Kafka broker endpoints.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.auto_create_topics` | Bool | `true` | Automatically create topics for WAL.<br/>Set to `true` to automatically create topics for WAL.<br/>Otherwise, use topics named `topic_name_prefix_[0..num_topics)` |
|
||||||
|
| `wal.num_topics` | Integer | `64` | Number of topics.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default)<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.topic_name_prefix` | String | `greptimedb_wal_topic` | A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.<br/>i.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.replication_factor` | Integer | `1` | Expected number of replicas of each partition.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.create_topic_timeout` | String | `30s` | Above which a topic creation operation will be cancelled.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.max_batch_bytes` | String | `1MB` | The max size of a single producer batch.<br/>Warning: Kafka has a default limit of 1MB per message in a topic.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.consumer_wait_timeout` | String | `100ms` | The consumer wait timeout.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.backoff_init` | String | `500ms` | The initial backoff delay.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.backoff_max` | String | `10s` | The maximum backoff delay.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.backoff_base` | Integer | `2` | The exponential backoff rate, i.e. next backoff = base * current backoff.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.backoff_deadline` | String | `5mins` | The deadline of retries.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.overwrite_entry_start_id` | Bool | `false` | Ignore missing entries during read WAL.<br/>**It's only used when the provider is `kafka`**.<br/><br/>This option ensures that when Kafka messages are deleted, the system<br/>can still successfully replay memtable data without throwing an<br/>out-of-range error.<br/>However, enabling this option might lead to unexpected data loss,<br/>as the system will skip over missing entries instead of treating<br/>them as critical errors. |
|
||||||
|
| `metadata_store` | -- | -- | Metadata storage options. |
|
||||||
|
| `metadata_store.file_size` | String | `256MB` | Kv file size in bytes. |
|
||||||
|
| `metadata_store.purge_threshold` | String | `4GB` | Kv purge threshold. |
|
||||||
|
| `procedure` | -- | -- | Procedure storage options. |
|
||||||
|
| `procedure.max_retry_times` | Integer | `3` | Procedure max retry time. |
|
||||||
|
| `procedure.retry_delay` | String | `500ms` | Initial retry delay of procedures, increases exponentially |
|
||||||
|
| `storage` | -- | -- | The data storage options. |
|
||||||
|
| `storage.data_home` | String | `/tmp/greptimedb/` | The working home directory. |
|
||||||
|
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
|
||||||
|
| `storage.cache_path` | String | Unset | Cache configuration for object storage such as 'S3' etc.<br/>The local file cache directory. |
|
||||||
|
| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. |
|
||||||
|
| `storage.bucket` | String | Unset | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
|
||||||
|
| `storage.root` | String | Unset | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.<br/>**It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
|
||||||
|
| `storage.access_key_id` | String | Unset | The access key id of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3` and `Oss`**. |
|
||||||
|
| `storage.secret_access_key` | String | Unset | The secret access key of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3`**. |
|
||||||
|
| `storage.access_key_secret` | String | Unset | The secret access key of the aliyun account.<br/>**It's only used when the storage type is `Oss`**. |
|
||||||
|
| `storage.account_name` | String | Unset | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
|
||||||
|
| `storage.account_key` | String | Unset | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
|
||||||
|
| `storage.scope` | String | Unset | The scope of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
|
||||||
|
| `storage.credential_path` | String | Unset | The credential path of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
|
||||||
|
| `storage.credential` | String | Unset | The credential of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
|
||||||
|
| `storage.container` | String | Unset | The container of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
|
||||||
|
| `storage.sas_token` | String | Unset | The sas token of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
|
||||||
|
| `storage.endpoint` | String | Unset | The endpoint of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
|
||||||
|
| `storage.region` | String | Unset | The region of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
|
||||||
|
| `[[region_engine]]` | -- | -- | The region engine options. You can configure multiple region engines. |
|
||||||
|
| `region_engine.mito` | -- | -- | The Mito engine options. |
|
||||||
|
| `region_engine.mito.num_workers` | Integer | `8` | Number of region workers. |
|
||||||
|
| `region_engine.mito.worker_channel_size` | Integer | `128` | Request channel size of each worker. |
|
||||||
|
| `region_engine.mito.worker_request_batch_size` | Integer | `64` | Max batch size for a worker to handle requests. |
|
||||||
|
| `region_engine.mito.manifest_checkpoint_distance` | Integer | `10` | Number of meta action updated to trigger a new checkpoint for the manifest. |
|
||||||
|
| `region_engine.mito.compress_manifest` | Bool | `false` | Whether to compress manifest and checkpoint file by gzip (default false). |
|
||||||
|
| `region_engine.mito.max_background_flushes` | Integer | Auto | Max number of running background flush jobs (default: 1/2 of cpu cores). |
|
||||||
|
| `region_engine.mito.max_background_compactions` | Integer | Auto | Max number of running background compaction jobs (default: 1/4 of cpu cores). |
|
||||||
|
| `region_engine.mito.max_background_purges` | Integer | Auto | Max number of running background purge jobs (default: number of cpu cores). |
|
||||||
|
| `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
|
||||||
|
| `region_engine.mito.global_write_buffer_size` | String | Auto | Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB. |
|
||||||
|
| `region_engine.mito.global_write_buffer_reject_size` | String | Auto | Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size`. |
|
||||||
|
| `region_engine.mito.sst_meta_cache_size` | String | Auto | Cache size for SST metadata. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/32 of OS memory with a max limitation of 128MB. |
|
||||||
|
| `region_engine.mito.vector_cache_size` | String | Auto | Cache size for vectors and arrow arrays. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/16 of OS memory with a max limitation of 512MB. |
|
||||||
|
| `region_engine.mito.page_cache_size` | String | Auto | Cache size for pages of SST row groups. Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/8 of OS memory. |
|
||||||
|
| `region_engine.mito.selector_result_cache_size` | String | Auto | Cache size for time series selector (e.g. `last_value()`). Setting it to 0 to disable the cache.<br/>If not set, it's default to 1/16 of OS memory with a max limitation of 512MB. |
|
||||||
|
| `region_engine.mito.enable_experimental_write_cache` | Bool | `false` | Whether to enable the experimental write cache. |
|
||||||
|
| `region_engine.mito.experimental_write_cache_path` | String | `""` | File system path for write cache, defaults to `{data_home}/write_cache`. |
|
||||||
|
| `region_engine.mito.experimental_write_cache_size` | String | `512MB` | Capacity for write cache. |
|
||||||
|
| `region_engine.mito.experimental_write_cache_ttl` | String | Unset | TTL for write cache. |
|
||||||
|
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
|
||||||
|
| `region_engine.mito.scan_parallelism` | Integer | `0` | Parallelism to scan a region (default: 1/4 of cpu cores).<br/>- `0`: using the default value (1/4 of cpu cores).<br/>- `1`: scan in current thread.<br/>- `n`: scan in parallelism n. |
|
||||||
|
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
|
||||||
|
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
|
||||||
|
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
|
||||||
|
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
|
||||||
|
| `region_engine.mito.index.aux_path` | String | `""` | Auxiliary directory path for the index in filesystem, used to store intermediate files for<br/>creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.<br/>The default name for this directory is `index_intermediate` for backward compatibility.<br/><br/>This path contains two subdirectories:<br/>- `__intm`: for storing intermediate files used during creating index.<br/>- `staging`: for storing staging files used during searching index. |
|
||||||
|
| `region_engine.mito.index.staging_size` | String | `2GB` | The max capacity of the staging directory. |
|
||||||
|
| `region_engine.mito.inverted_index` | -- | -- | The options for inverted index in Mito engine. |
|
||||||
|
| `region_engine.mito.inverted_index.create_on_flush` | String | `auto` | Whether to create the index on flush.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.inverted_index.create_on_compaction` | String | `auto` | Whether to create the index on compaction.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.inverted_index.apply_on_query` | String | `auto` | Whether to apply the index on query<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.inverted_index.mem_threshold_on_create` | String | `auto` | Memory threshold for performing an external sort during index creation.<br/>- `auto`: automatically determine the threshold based on the system memory size (default)<br/>- `unlimited`: no memory limit<br/>- `[size]` e.g. `64MB`: fixed memory threshold |
|
||||||
|
| `region_engine.mito.inverted_index.intermediate_path` | String | `""` | Deprecated, use `region_engine.mito.index.aux_path` instead. |
|
||||||
|
| `region_engine.mito.inverted_index.metadata_cache_size` | String | `64MiB` | Cache size for inverted index metadata. |
|
||||||
|
| `region_engine.mito.inverted_index.content_cache_size` | String | `128MiB` | Cache size for inverted index content. |
|
||||||
|
| `region_engine.mito.fulltext_index` | -- | -- | The options for full-text index in Mito engine. |
|
||||||
|
| `region_engine.mito.fulltext_index.create_on_flush` | String | `auto` | Whether to create the index on flush.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.fulltext_index.create_on_compaction` | String | `auto` | Whether to create the index on compaction.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.fulltext_index.apply_on_query` | String | `auto` | Whether to apply the index on query<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.fulltext_index.mem_threshold_on_create` | String | `auto` | Memory threshold for index creation.<br/>- `auto`: automatically determine the threshold based on the system memory size (default)<br/>- `unlimited`: no memory limit<br/>- `[size]` e.g. `64MB`: fixed memory threshold |
|
||||||
|
| `region_engine.mito.memtable` | -- | -- | -- |
|
||||||
|
| `region_engine.mito.memtable.type` | String | `time_series` | Memtable type.<br/>- `time_series`: time-series memtable<br/>- `partition_tree`: partition tree memtable (experimental) |
|
||||||
|
| `region_engine.mito.memtable.index_max_keys_per_shard` | Integer | `8192` | The max number of keys in one shard.<br/>Only available for `partition_tree` memtable. |
|
||||||
|
| `region_engine.mito.memtable.data_freeze_threshold` | Integer | `32768` | The max rows of data inside the actively writing buffer in one shard.<br/>Only available for `partition_tree` memtable. |
|
||||||
|
| `region_engine.mito.memtable.fork_dictionary_bytes` | String | `1GiB` | Max dictionary bytes.<br/>Only available for `partition_tree` memtable. |
|
||||||
|
| `region_engine.file` | -- | -- | Enable the file engine. |
|
||||||
|
| `logging` | -- | -- | The logging options. |
|
||||||
|
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
|
||||||
|
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
|
||||||
|
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
|
||||||
|
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
|
||||||
|
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
|
||||||
|
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
|
||||||
|
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
|
||||||
|
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
|
||||||
|
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
|
||||||
|
| `logging.slow_query` | -- | -- | The slow query log options. |
|
||||||
|
| `logging.slow_query.enable` | Bool | `false` | Whether to enable slow query log. |
|
||||||
|
| `logging.slow_query.threshold` | String | Unset | The threshold of slow query. |
|
||||||
|
| `logging.slow_query.sample_ratio` | Float | Unset | The sampling ratio of slow query log. The value should be in the range of (0, 1]. |
|
||||||
|
| `export_metrics` | -- | -- | The datanode can export its metrics and send to Prometheus compatible service (e.g. send to `greptimedb` itself) from remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
|
||||||
|
| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
|
||||||
|
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
|
||||||
|
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended to collect metrics generated by itself<br/>You must create the database before enabling it. |
|
||||||
|
| `export_metrics.self_import.db` | String | Unset | -- |
|
||||||
|
| `export_metrics.remote_write` | -- | -- | -- |
|
||||||
|
| `export_metrics.remote_write.url` | String | `""` | The url the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
|
||||||
|
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
|
||||||
|
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
|
||||||
|
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
|
||||||
|
|
||||||
|
|
||||||
|
## Distributed Mode
|
||||||
|
|
||||||
|
### Frontend
|
||||||
|
|
||||||
|
| Key | Type | Default | Descriptions |
|
||||||
|
| --- | -----| ------- | ----------- |
|
||||||
|
| `default_timezone` | String | Unset | The default timezone of the server. |
|
||||||
|
| `runtime` | -- | -- | The runtime options. |
|
||||||
|
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
|
||||||
|
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for global write operations. |
|
||||||
|
| `heartbeat` | -- | -- | The heartbeat options. |
|
||||||
|
| `heartbeat.interval` | String | `18s` | Interval for sending heartbeat messages to the metasrv. |
|
||||||
|
| `heartbeat.retry_interval` | String | `3s` | Interval for retrying to send heartbeat messages to the metasrv. |
|
||||||
|
| `http` | -- | -- | The HTTP server options. |
|
||||||
|
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
|
||||||
|
| `http.timeout` | String | `30s` | HTTP request timeout. Set to 0 to disable timeout. |
|
||||||
|
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.<br/>Set to 0 to disable limit. |
|
||||||
|
| `grpc` | -- | -- | The gRPC server options. |
|
||||||
|
| `grpc.addr` | String | `127.0.0.1:4001` | The address to bind the gRPC server. |
|
||||||
|
| `grpc.hostname` | String | `127.0.0.1` | The hostname advertised to the metasrv,<br/>and used for connections from outside the host |
|
||||||
|
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
|
||||||
|
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
|
||||||
|
| `grpc.tls.mode` | String | `disable` | TLS mode. |
|
||||||
|
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
|
||||||
|
| `grpc.tls.key_path` | String | Unset | Private key file path. |
|
||||||
|
| `grpc.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload.<br/>For now, gRPC tls config does not support auto reload. |
|
||||||
|
| `mysql` | -- | -- | MySQL server options. |
|
||||||
|
| `mysql.enable` | Bool | `true` | Whether to enable. |
|
||||||
|
| `mysql.addr` | String | `127.0.0.1:4002` | The addr to bind the MySQL server. |
|
||||||
|
| `mysql.runtime_size` | Integer | `2` | The number of server worker threads. |
|
||||||
|
| `mysql.tls` | -- | -- | -- |
|
||||||
|
| `mysql.tls.mode` | String | `disable` | TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html<br/>- `disable` (default value)<br/>- `prefer`<br/>- `require`<br/>- `verify-ca`<br/>- `verify-full` |
|
||||||
|
| `mysql.tls.cert_path` | String | Unset | Certificate file path. |
|
||||||
|
| `mysql.tls.key_path` | String | Unset | Private key file path. |
|
||||||
|
| `mysql.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload |
|
||||||
|
| `postgres` | -- | -- | PostgresSQL server options. |
|
||||||
|
| `postgres.enable` | Bool | `true` | Whether to enable |
|
||||||
|
| `postgres.addr` | String | `127.0.0.1:4003` | The addr to bind the PostgresSQL server. |
|
||||||
|
| `postgres.runtime_size` | Integer | `2` | The number of server worker threads. |
|
||||||
|
| `postgres.tls` | -- | -- | PostgresSQL server TLS options, see `mysql.tls` section. |
|
||||||
|
| `postgres.tls.mode` | String | `disable` | TLS mode. |
|
||||||
|
| `postgres.tls.cert_path` | String | Unset | Certificate file path. |
|
||||||
|
| `postgres.tls.key_path` | String | Unset | Private key file path. |
|
||||||
|
| `postgres.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload |
|
||||||
|
| `opentsdb` | -- | -- | OpenTSDB protocol options. |
|
||||||
|
| `opentsdb.enable` | Bool | `true` | Whether to enable OpenTSDB put in HTTP API. |
|
||||||
|
| `influxdb` | -- | -- | InfluxDB protocol options. |
|
||||||
|
| `influxdb.enable` | Bool | `true` | Whether to enable InfluxDB protocol in HTTP API. |
|
||||||
|
| `prom_store` | -- | -- | Prometheus remote storage options |
|
||||||
|
| `prom_store.enable` | Bool | `true` | Whether to enable Prometheus remote write and read in HTTP API. |
|
||||||
|
| `prom_store.with_metric_engine` | Bool | `true` | Whether to store the data from Prometheus remote write in metric engine. |
|
||||||
|
| `meta_client` | -- | -- | The metasrv client options. |
|
||||||
|
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
|
||||||
|
| `meta_client.timeout` | String | `3s` | Operation timeout. |
|
||||||
|
| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
|
||||||
|
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
|
||||||
|
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
|
||||||
|
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
|
||||||
|
| `meta_client.metadata_cache_max_capacity` | Integer | `100000` | The configuration about the cache of the metadata. |
|
||||||
|
| `meta_client.metadata_cache_ttl` | String | `10m` | TTL of the metadata cache. |
|
||||||
|
| `meta_client.metadata_cache_tti` | String | `5m` | -- |
|
||||||
|
| `datanode` | -- | -- | Datanode options. |
|
||||||
|
| `datanode.client` | -- | -- | Datanode client options. |
|
||||||
|
| `datanode.client.connect_timeout` | String | `10s` | -- |
|
||||||
|
| `datanode.client.tcp_nodelay` | Bool | `true` | -- |
|
||||||
|
| `logging` | -- | -- | The logging options. |
|
||||||
|
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
|
||||||
|
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
|
||||||
|
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
|
||||||
|
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
|
||||||
|
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
|
||||||
|
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
|
||||||
|
| `logging.max_log_files` | Integer | `720` | The maximum amount of log files. |
|
||||||
|
| `logging.tracing_sample_ratio` | -- | -- | The percentage of tracing will be sampled and exported.<br/>Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.<br/>ratio > 1 are treated as 1. Fractions < 0 are treated as 0 |
|
||||||
|
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
|
||||||
|
| `logging.slow_query` | -- | -- | The slow query log options. |
|
||||||
|
| `logging.slow_query.enable` | Bool | `false` | Whether to enable slow query log. |
|
||||||
|
| `logging.slow_query.threshold` | String | Unset | The threshold of slow query. |
|
||||||
|
| `logging.slow_query.sample_ratio` | Float | Unset | The sampling ratio of slow query log. The value should be in the range of (0, 1]. |
|
||||||
|
| `export_metrics` | -- | -- | The datanode can export its metrics and send to Prometheus compatible service (e.g. send to `greptimedb` itself) from remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape. |
|
||||||
|
| `export_metrics.enable` | Bool | `false` | whether enable export metrics. |
|
||||||
|
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
|
||||||
|
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommend to collect metrics generated by itself<br/>You must create the database before enabling it. |
|
||||||
|
| `export_metrics.self_import.db` | String | Unset | -- |
|
||||||
|
| `export_metrics.remote_write` | -- | -- | -- |
|
||||||
|
| `export_metrics.remote_write.url` | String | `""` | The url the metrics send to. The url example can be: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
|
||||||
|
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers of Prometheus remote-write carry. |
|
||||||
|
| `tracing` | -- | -- | The tracing options. Only effect when compiled with `tokio-console` feature. |
|
||||||
|
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
|
||||||
|
|
||||||
|
|
||||||
|
### Metasrv
|
||||||
|
|
||||||
|
| Key | Type | Default | Descriptions |
|
||||||
|
| --- | -----| ------- | ----------- |
|
||||||
|
| `data_home` | String | `/tmp/metasrv/` | The working home directory. |
|
||||||
|
| `bind_addr` | String | `127.0.0.1:3002` | The bind address of metasrv. |
|
||||||
|
| `server_addr` | String | `127.0.0.1:3002` | The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost. |
|
||||||
|
| `store_addr` | String | `127.0.0.1:2379` | Store server address default to etcd store. |
|
||||||
|
| `selector` | String | `round_robin` | Datanode selector type.<br/>- `round_robin` (default value)<br/>- `lease_based`<br/>- `load_based`<br/>For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector". |
|
||||||
|
| `use_memory_store` | Bool | `false` | Store data in memory. |
|
||||||
|
| `enable_telemetry` | Bool | `true` | Whether to enable greptimedb telemetry. |
|
||||||
|
| `store_key_prefix` | String | `""` | If it's not empty, the metasrv will store all data with this key prefix. |
|
||||||
|
| `enable_region_failover` | Bool | `false` | Whether to enable region failover.<br/>This feature is only available on GreptimeDB running on cluster mode and<br/>- Using Remote WAL<br/>- Using shared storage (e.g., s3). |
|
||||||
|
| `backend` | String | `EtcdStore` | The datastore for meta server. |
|
||||||
|
| `runtime` | -- | -- | The runtime options. |
|
||||||
|
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
|
||||||
|
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for global write operations. |
|
||||||
|
| `procedure` | -- | -- | Procedure storage options. |
|
||||||
|
| `procedure.max_retry_times` | Integer | `12` | Procedure max retry time. |
|
||||||
|
| `procedure.retry_delay` | String | `500ms` | Initial retry delay of procedures, increases exponentially |
|
||||||
|
| `procedure.max_metadata_value_size` | String | `1500KiB` | Auto split large value<br/>GreptimeDB procedure uses etcd as the default metadata storage backend.<br/>The etcd the maximum size of any request is 1.5 MiB<br/>1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key)<br/>Comments out the `max_metadata_value_size`, for don't split large value (no limit). |
|
||||||
|
| `failure_detector` | -- | -- | -- |
|
||||||
|
| `failure_detector.threshold` | Float | `8.0` | The threshold value used by the failure detector to determine failure conditions. |
|
||||||
|
| `failure_detector.min_std_deviation` | String | `100ms` | The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations. |
|
||||||
|
| `failure_detector.acceptable_heartbeat_pause` | String | `10000ms` | The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable. |
|
||||||
|
| `failure_detector.first_heartbeat_estimate` | String | `1000ms` | The initial estimate of the heartbeat interval used by the failure detector. |
|
||||||
|
| `datanode` | -- | -- | Datanode options. |
|
||||||
|
| `datanode.client` | -- | -- | Datanode client options. |
|
||||||
|
| `datanode.client.timeout` | String | `10s` | Operation timeout. |
|
||||||
|
| `datanode.client.connect_timeout` | String | `10s` | Connect server timeout. |
|
||||||
|
| `datanode.client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
|
||||||
|
| `wal` | -- | -- | -- |
|
||||||
|
| `wal.provider` | String | `raft_engine` | -- |
|
||||||
|
| `wal.broker_endpoints` | Array | -- | The broker endpoints of the Kafka cluster. |
|
||||||
|
| `wal.auto_create_topics` | Bool | `true` | Automatically create topics for WAL.<br/>Set to `true` to automatically create topics for WAL.<br/>Otherwise, use topics named `topic_name_prefix_[0..num_topics)` |
|
||||||
|
| `wal.num_topics` | Integer | `64` | Number of topics. |
|
||||||
|
| `wal.selector_type` | String | `round_robin` | Topic selector type.<br/>Available selector types:<br/>- `round_robin` (default) |
|
||||||
|
| `wal.topic_name_prefix` | String | `greptimedb_wal_topic` | A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.<br/>i.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1. |
|
||||||
|
| `wal.replication_factor` | Integer | `1` | Expected number of replicas of each partition. |
|
||||||
|
| `wal.create_topic_timeout` | String | `30s` | Above which a topic creation operation will be cancelled. |
|
||||||
|
| `wal.backoff_init` | String | `500ms` | The initial backoff for kafka clients. |
|
||||||
|
| `wal.backoff_max` | String | `10s` | The maximum backoff for kafka clients. |
|
||||||
|
| `wal.backoff_base` | Integer | `2` | Exponential backoff rate, i.e. next backoff = base * current backoff. |
|
||||||
|
| `wal.backoff_deadline` | String | `5mins` | Stop reconnecting if the total wait time reaches the deadline. If this config is missing, reconnecting will not terminate. |
|
||||||
|
| `logging` | -- | -- | The logging options. |
|
||||||
|
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
|
||||||
|
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
|
||||||
|
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
|
||||||
|
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
|
||||||
|
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
|
||||||
|
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
|
||||||
|
| `logging.max_log_files` | Integer | `720` | The maximum number of log files. |
|
||||||
|
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1 and ratios < 0 are treated as 0. |
|
||||||
|
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
|
||||||
|
| `logging.slow_query` | -- | -- | The slow query log options. |
|
||||||
|
| `logging.slow_query.enable` | Bool | `false` | Whether to enable slow query log. |
|
||||||
|
| `logging.slow_query.threshold` | String | Unset | The threshold of slow query. |
|
||||||
|
| `logging.slow_query.sample_ratio` | Float | Unset | The sampling ratio of slow query log. The value should be in the range of (0, 1]. |
|
||||||
|
| `export_metrics` | -- | -- | The metasrv can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally; it's different from Prometheus scrape. |
|
||||||
|
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
|
||||||
|
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
|
||||||
|
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended for collecting metrics generated by the instance itself.<br/>You must create the database before enabling it. |
|
||||||
|
| `export_metrics.self_import.db` | String | Unset | -- |
|
||||||
|
| `export_metrics.remote_write` | -- | -- | -- |
|
||||||
|
| `export_metrics.remote_write.url` | String | `""` | The URL to send the metrics to. For example: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
|
||||||
|
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers carried by the Prometheus remote-write requests. |
|
||||||
|
| `tracing` | -- | -- | The tracing options. Only effective when compiled with the `tokio-console` feature. |
|
||||||
|
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
|
||||||
|
|
||||||
|
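
For reference, here is a minimal, hypothetical metasrv snippet that combines the options documented above to enable region failover on top of a Kafka-backed remote WAL. The broker endpoints and topic counts are placeholders rather than recommendations.

```toml
## Hypothetical metasrv example: region failover requires remote WAL and shared storage.
enable_region_failover = true

[wal]
provider = "kafka"
## Placeholder brokers; replace with your Kafka cluster endpoints.
broker_endpoints = ["127.0.0.1:9092"]
## Automatically create 64 topics named `greptimedb_wal_topic_<id>`.
auto_create_topics = true
num_topics = 64
selector_type = "round_robin"
topic_name_prefix = "greptimedb_wal_topic"
replication_factor = 1
```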
|
||||||
|
### Datanode
|
||||||
|
|
||||||
|
| Key | Type | Default | Descriptions |
|
||||||
|
| --- | -----| ------- | ----------- |
|
||||||
|
| `mode` | String | `standalone` | The running mode of the datanode. It can be `standalone` or `distributed`. |
|
||||||
|
| `node_id` | Integer | Unset | The datanode identifier; it should be unique in the cluster. |
|
||||||
|
| `require_lease_before_startup` | Bool | `false` | Start services after regions have obtained leases.<br/>It will block the datanode start if it can't receive leases in the heartbeat from metasrv. |
|
||||||
|
| `init_regions_in_background` | Bool | `false` | Initialize all regions in the background during the startup.<br/>By default, it provides services after all regions have been initialized. |
|
||||||
|
| `enable_telemetry` | Bool | `true` | Enable telemetry to collect anonymous usage data. |
|
||||||
|
| `init_regions_parallelism` | Integer | `16` | Parallelism of initializing regions. |
|
||||||
|
| `max_concurrent_queries` | Integer | `0` | The maximum number of concurrent queries allowed to be executed. Zero means unlimited. |
|
||||||
|
| `rpc_addr` | String | Unset | Deprecated, use `grpc.addr` instead. |
|
||||||
|
| `rpc_hostname` | String | Unset | Deprecated, use `grpc.hostname` instead. |
|
||||||
|
| `rpc_runtime_size` | Integer | Unset | Deprecated, use `grpc.runtime_size` instead. |
|
||||||
|
| `rpc_max_recv_message_size` | String | Unset | Deprecated, use `grpc.rpc_max_recv_message_size` instead. |
|
||||||
|
| `rpc_max_send_message_size` | String | Unset | Deprecated, use `grpc.rpc_max_send_message_size` instead. |
|
||||||
|
| `http` | -- | -- | The HTTP server options. |
|
||||||
|
| `http.addr` | String | `127.0.0.1:4000` | The address to bind the HTTP server. |
|
||||||
|
| `http.timeout` | String | `30s` | HTTP request timeout. Set to 0 to disable timeout. |
|
||||||
|
| `http.body_limit` | String | `64MB` | HTTP request body limit.<br/>The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.<br/>Set to 0 to disable limit. |
|
||||||
|
| `grpc` | -- | -- | The gRPC server options. |
|
||||||
|
| `grpc.addr` | String | `127.0.0.1:3001` | The address to bind the gRPC server. |
|
||||||
|
| `grpc.hostname` | String | `127.0.0.1` | The hostname advertised to the metasrv,<br/>and used for connections from outside the host |
|
||||||
|
| `grpc.runtime_size` | Integer | `8` | The number of server worker threads. |
|
||||||
|
| `grpc.max_recv_message_size` | String | `512MB` | The maximum receive message size for gRPC server. |
|
||||||
|
| `grpc.max_send_message_size` | String | `512MB` | The maximum send message size for gRPC server. |
|
||||||
|
| `grpc.tls` | -- | -- | gRPC server TLS options, see `mysql.tls` section. |
|
||||||
|
| `grpc.tls.mode` | String | `disable` | TLS mode. |
|
||||||
|
| `grpc.tls.cert_path` | String | Unset | Certificate file path. |
|
||||||
|
| `grpc.tls.key_path` | String | Unset | Private key file path. |
|
||||||
|
| `grpc.tls.watch` | Bool | `false` | Watch for Certificate and key file change and auto reload.<br/>For now, gRPC tls config does not support auto reload. |
|
||||||
|
| `runtime` | -- | -- | The runtime options. |
|
||||||
|
| `runtime.global_rt_size` | Integer | `8` | The number of threads to execute the runtime for global read operations. |
|
||||||
|
| `runtime.compact_rt_size` | Integer | `4` | The number of threads to execute the runtime for global write operations. |
|
||||||
|
| `heartbeat` | -- | -- | The heartbeat options. |
|
||||||
|
| `heartbeat.interval` | String | `3s` | Interval for sending heartbeat messages to the metasrv. |
|
||||||
|
| `heartbeat.retry_interval` | String | `3s` | Interval for retrying to send heartbeat messages to the metasrv. |
|
||||||
|
| `meta_client` | -- | -- | The metasrv client options. |
|
||||||
|
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
|
||||||
|
| `meta_client.timeout` | String | `3s` | Operation timeout. |
|
||||||
|
| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
|
||||||
|
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
|
||||||
|
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
|
||||||
|
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
|
||||||
|
| `meta_client.metadata_cache_max_capacity` | Integer | `100000` | The maximum capacity of the metadata cache. |
|
||||||
|
| `meta_client.metadata_cache_ttl` | String | `10m` | TTL of the metadata cache. |
|
||||||
|
| `meta_client.metadata_cache_tti` | String | `5m` | TTI of the metadata cache. |
|
||||||
|
| `wal` | -- | -- | The WAL options. |
|
||||||
|
| `wal.provider` | String | `raft_engine` | The provider of the WAL.<br/>- `raft_engine`: the WAL is stored in the local file system by raft-engine.<br/>- `kafka`: remote WAL, with data stored in Kafka. |
|
||||||
|
| `wal.dir` | String | Unset | The directory to store the WAL files.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.file_size` | String | `256MB` | The size of the WAL segment file.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.purge_threshold` | String | `4GB` | The threshold of the WAL size to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.purge_interval` | String | `10m` | The interval to trigger a flush.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.read_batch_size` | Integer | `128` | The read batch size.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.sync_write` | Bool | `false` | Whether to use sync write.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.enable_log_recycle` | Bool | `true` | Whether to reuse logically truncated log files.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.prefill_log_files` | Bool | `false` | Whether to pre-create log files on start up.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.sync_period` | String | `10s` | Duration for fsyncing log files.<br/>**It's only used when the provider is `raft_engine`**. |
|
||||||
|
| `wal.recovery_parallelism` | Integer | `2` | Parallelism during WAL recovery. |
|
||||||
|
| `wal.broker_endpoints` | Array | -- | The Kafka broker endpoints.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.max_batch_bytes` | String | `1MB` | The max size of a single producer batch.<br/>Warning: Kafka has a default limit of 1MB per message in a topic.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.consumer_wait_timeout` | String | `100ms` | The consumer wait timeout.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.backoff_init` | String | `500ms` | The initial backoff delay.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.backoff_max` | String | `10s` | The maximum backoff delay.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.backoff_base` | Integer | `2` | The exponential backoff rate, i.e. next backoff = base * current backoff.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.backoff_deadline` | String | `5mins` | The deadline of retries.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.create_index` | Bool | `true` | Whether to enable WAL index creation.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.dump_index_interval` | String | `60s` | The interval for dumping WAL indexes.<br/>**It's only used when the provider is `kafka`**. |
|
||||||
|
| `wal.overwrite_entry_start_id` | Bool | `false` | Ignore missing entries when reading the WAL.<br/>**It's only used when the provider is `kafka`**.<br/><br/>This option ensures that when Kafka messages are deleted, the system<br/>can still successfully replay memtable data without throwing an<br/>out-of-range error.<br/>However, enabling this option might lead to unexpected data loss,<br/>as the system will skip over missing entries instead of treating<br/>them as critical errors. |
|
||||||
|
| `storage` | -- | -- | The data storage options. |
|
||||||
|
| `storage.data_home` | String | `/tmp/greptimedb/` | The working home directory. |
|
||||||
|
| `storage.type` | String | `File` | The storage type used to store the data.<br/>- `File`: the data is stored in the local file system.<br/>- `S3`: the data is stored in the S3 object storage.<br/>- `Gcs`: the data is stored in the Google Cloud Storage.<br/>- `Azblob`: the data is stored in the Azure Blob Storage.<br/>- `Oss`: the data is stored in the Aliyun OSS. |
|
||||||
|
| `storage.cache_path` | String | Unset | Cache configuration for object storage such as 'S3' etc.<br/>The local file cache directory. |
|
||||||
|
| `storage.cache_capacity` | String | Unset | The local file cache capacity in bytes. |
|
||||||
|
| `storage.bucket` | String | Unset | The S3 bucket name.<br/>**It's only used when the storage type is `S3`, `Oss` and `Gcs`**. |
|
||||||
|
| `storage.root` | String | Unset | The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.<br/>**It's only used when the storage type is `S3`, `Oss` and `Azblob`**. |
|
||||||
|
| `storage.access_key_id` | String | Unset | The access key id of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3` and `Oss`**. |
|
||||||
|
| `storage.secret_access_key` | String | Unset | The secret access key of the aws account.<br/>It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.<br/>**It's only used when the storage type is `S3`**. |
|
||||||
|
| `storage.access_key_secret` | String | Unset | The secret access key of the aliyun account.<br/>**It's only used when the storage type is `Oss`**. |
|
||||||
|
| `storage.account_name` | String | Unset | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
|
||||||
|
| `storage.account_key` | String | Unset | The account key of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
|
||||||
|
| `storage.scope` | String | Unset | The scope of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
|
||||||
|
| `storage.credential_path` | String | Unset | The credential path of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
|
||||||
|
| `storage.credential` | String | Unset | The credential of the google cloud storage.<br/>**It's only used when the storage type is `Gcs`**. |
|
||||||
|
| `storage.container` | String | Unset | The container of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
|
||||||
|
| `storage.sas_token` | String | Unset | The sas token of the azure account.<br/>**It's only used when the storage type is `Azblob`**. |
|
||||||
|
| `storage.endpoint` | String | Unset | The endpoint of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
|
||||||
|
| `storage.region` | String | Unset | The region of the S3 service.<br/>**It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**. |
|
||||||
|
| `[[region_engine]]` | -- | -- | The region engine options. You can configure multiple region engines. |
|
||||||
|
| `region_engine.mito` | -- | -- | The Mito engine options. |
|
||||||
|
| `region_engine.mito.num_workers` | Integer | `8` | Number of region workers. |
|
||||||
|
| `region_engine.mito.worker_channel_size` | Integer | `128` | Request channel size of each worker. |
|
||||||
|
| `region_engine.mito.worker_request_batch_size` | Integer | `64` | Max batch size for a worker to handle requests. |
|
||||||
|
| `region_engine.mito.manifest_checkpoint_distance` | Integer | `10` | Number of meta actions to accumulate before triggering a new checkpoint for the manifest. |
|
||||||
|
| `region_engine.mito.compress_manifest` | Bool | `false` | Whether to compress manifest and checkpoint file by gzip (default false). |
|
||||||
|
| `region_engine.mito.max_background_flushes` | Integer | Auto | Max number of running background flush jobs (default: 1/2 of cpu cores). |
|
||||||
|
| `region_engine.mito.max_background_compactions` | Integer | Auto | Max number of running background compaction jobs (default: 1/4 of cpu cores). |
|
||||||
|
| `region_engine.mito.max_background_purges` | Integer | Auto | Max number of running background purge jobs (default: number of cpu cores). |
|
||||||
|
| `region_engine.mito.auto_flush_interval` | String | `1h` | Interval to auto flush a region if it has not flushed yet. |
|
||||||
|
| `region_engine.mito.global_write_buffer_size` | String | Auto | Global write buffer size for all regions. If not set, it defaults to 1/8 of OS memory, with a maximum of 1GB. |
|
||||||
|
| `region_engine.mito.global_write_buffer_reject_size` | String | Auto | Global write buffer size threshold above which write requests are rejected. If not set, it defaults to 2 times `global_write_buffer_size`. |
|
||||||
|
| `region_engine.mito.sst_meta_cache_size` | String | Auto | Cache size for SST metadata. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/32 of OS memory, with a maximum of 128MB. |
|
||||||
|
| `region_engine.mito.vector_cache_size` | String | Auto | Cache size for vectors and arrow arrays. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB. |
|
||||||
|
| `region_engine.mito.page_cache_size` | String | Auto | Cache size for pages of SST row groups. Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/8 of OS memory. |
|
||||||
|
| `region_engine.mito.selector_result_cache_size` | String | Auto | Cache size for time series selector (e.g. `last_value()`). Set it to 0 to disable the cache.<br/>If not set, it defaults to 1/16 of OS memory, with a maximum of 512MB. |
|
||||||
|
| `region_engine.mito.enable_experimental_write_cache` | Bool | `false` | Whether to enable the experimental write cache. |
|
||||||
|
| `region_engine.mito.experimental_write_cache_path` | String | `""` | File system path for write cache, defaults to `{data_home}/write_cache`. |
|
||||||
|
| `region_engine.mito.experimental_write_cache_size` | String | `512MB` | Capacity for write cache. |
|
||||||
|
| `region_engine.mito.experimental_write_cache_ttl` | String | Unset | TTL for write cache. |
|
||||||
|
| `region_engine.mito.sst_write_buffer_size` | String | `8MB` | Buffer size for SST writing. |
|
||||||
|
| `region_engine.mito.scan_parallelism` | Integer | `0` | Parallelism to scan a region (default: 1/4 of cpu cores).<br/>- `0`: using the default value (1/4 of cpu cores).<br/>- `1`: scan in current thread.<br/>- `n`: scan in parallelism n. |
|
||||||
|
| `region_engine.mito.parallel_scan_channel_size` | Integer | `32` | Capacity of the channel to send data from parallel scan tasks to the main task. |
|
||||||
|
| `region_engine.mito.allow_stale_entries` | Bool | `false` | Whether to allow stale WAL entries read during replay. |
|
||||||
|
| `region_engine.mito.min_compaction_interval` | String | `0m` | Minimum time interval between two compactions.<br/>To align with the old behavior, the default value is 0 (no restrictions). |
|
||||||
|
| `region_engine.mito.index` | -- | -- | The options for index in Mito engine. |
|
||||||
|
| `region_engine.mito.index.aux_path` | String | `""` | Auxiliary directory path for the index in filesystem, used to store intermediate files for<br/>creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.<br/>The default name for this directory is `index_intermediate` for backward compatibility.<br/><br/>This path contains two subdirectories:<br/>- `__intm`: for storing intermediate files used during creating index.<br/>- `staging`: for storing staging files used during searching index. |
|
||||||
|
| `region_engine.mito.index.staging_size` | String | `2GB` | The max capacity of the staging directory. |
|
||||||
|
| `region_engine.mito.inverted_index` | -- | -- | The options for inverted index in Mito engine. |
|
||||||
|
| `region_engine.mito.inverted_index.create_on_flush` | String | `auto` | Whether to create the index on flush.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.inverted_index.create_on_compaction` | String | `auto` | Whether to create the index on compaction.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.inverted_index.apply_on_query` | String | `auto` | Whether to apply the index on query<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.inverted_index.mem_threshold_on_create` | String | `auto` | Memory threshold for performing an external sort during index creation.<br/>- `auto`: automatically determine the threshold based on the system memory size (default)<br/>- `unlimited`: no memory limit<br/>- `[size]` e.g. `64MB`: fixed memory threshold |
|
||||||
|
| `region_engine.mito.inverted_index.intermediate_path` | String | `""` | Deprecated, use `region_engine.mito.index.aux_path` instead. |
|
||||||
|
| `region_engine.mito.fulltext_index` | -- | -- | The options for full-text index in Mito engine. |
|
||||||
|
| `region_engine.mito.fulltext_index.create_on_flush` | String | `auto` | Whether to create the index on flush.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.fulltext_index.create_on_compaction` | String | `auto` | Whether to create the index on compaction.<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.fulltext_index.apply_on_query` | String | `auto` | Whether to apply the index on query<br/>- `auto`: automatically (default)<br/>- `disable`: never |
|
||||||
|
| `region_engine.mito.fulltext_index.mem_threshold_on_create` | String | `auto` | Memory threshold for index creation.<br/>- `auto`: automatically determine the threshold based on the system memory size (default)<br/>- `unlimited`: no memory limit<br/>- `[size]` e.g. `64MB`: fixed memory threshold |
|
||||||
|
| `region_engine.mito.memtable` | -- | -- | -- |
|
||||||
|
| `region_engine.mito.memtable.type` | String | `time_series` | Memtable type.<br/>- `time_series`: time-series memtable<br/>- `partition_tree`: partition tree memtable (experimental) |
|
||||||
|
| `region_engine.mito.memtable.index_max_keys_per_shard` | Integer | `8192` | The max number of keys in one shard.<br/>Only available for `partition_tree` memtable. |
|
||||||
|
| `region_engine.mito.memtable.data_freeze_threshold` | Integer | `32768` | The max rows of data inside the actively writing buffer in one shard.<br/>Only available for `partition_tree` memtable. |
|
||||||
|
| `region_engine.mito.memtable.fork_dictionary_bytes` | String | `1GiB` | Max dictionary bytes.<br/>Only available for `partition_tree` memtable. |
|
||||||
|
| `region_engine.file` | -- | -- | Enable the file engine. |
|
||||||
|
| `logging` | -- | -- | The logging options. |
|
||||||
|
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
|
||||||
|
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
|
||||||
|
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
|
||||||
|
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
|
||||||
|
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
|
||||||
|
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
|
||||||
|
| `logging.max_log_files` | Integer | `720` | The maximum number of log files. |
|
||||||
|
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1 and ratios < 0 are treated as 0. |
|
||||||
|
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
|
||||||
|
| `logging.slow_query` | -- | -- | The slow query log options. |
|
||||||
|
| `logging.slow_query.enable` | Bool | `false` | Whether to enable slow query log. |
|
||||||
|
| `logging.slow_query.threshold` | String | Unset | The threshold of slow query. |
|
||||||
|
| `logging.slow_query.sample_ratio` | Float | Unset | The sampling ratio of slow query log. The value should be in the range of (0, 1]. |
|
||||||
|
| `export_metrics` | -- | -- | The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.<br/>This is only used for `greptimedb` to export its own metrics internally; it's different from Prometheus scrape. |
|
||||||
|
| `export_metrics.enable` | Bool | `false` | Whether to enable exporting metrics. |
|
||||||
|
| `export_metrics.write_interval` | String | `30s` | The interval of export metrics. |
|
||||||
|
| `export_metrics.self_import` | -- | -- | For `standalone` mode, `self_import` is recommended for collecting metrics generated by the instance itself.<br/>You must create the database before enabling it. |
|
||||||
|
| `export_metrics.self_import.db` | String | Unset | -- |
|
||||||
|
| `export_metrics.remote_write` | -- | -- | -- |
|
||||||
|
| `export_metrics.remote_write.url` | String | `""` | The URL to send the metrics to. For example: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`. |
|
||||||
|
| `export_metrics.remote_write.headers` | InlineTable | -- | HTTP headers carried by the Prometheus remote-write requests. |
|
||||||
|
| `tracing` | -- | -- | The tracing options. Only effective when compiled with the `tokio-console` feature. |
|
||||||
|
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
|
||||||
|
|
||||||
|
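
As a quick illustration of how the keys above fit together, the following hypothetical datanode snippet pairs S3 object storage with a Kafka WAL. Bucket names, endpoints, and identifiers are placeholders only, and inline credentials are omitted on purpose (IAM roles are preferable to hardcoded keys).

```toml
## Hypothetical datanode example combining a Kafka WAL with S3 storage.
node_id = 42

[meta_client]
metasrv_addrs = ["127.0.0.1:3002"]

[wal]
provider = "kafka"
broker_endpoints = ["127.0.0.1:9092"]
## Kafka has a default limit of 1MB per message in a topic.
max_batch_bytes = "1MB"

[storage]
data_home = "/tmp/greptimedb/"
type = "S3"
bucket = "my-bucket"        # placeholder bucket name
root = "greptimedb-data"    # placeholder prefix inside the bucket
endpoint = "https://s3.amazonaws.com"
region = "us-west-2"
```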
|
||||||
|
### Flownode
|
||||||
|
|
||||||
|
| Key | Type | Default | Descriptions |
|
||||||
|
| --- | -----| ------- | ----------- |
|
||||||
|
| `mode` | String | `distributed` | The running mode of the flownode. It can be `standalone` or `distributed`. |
|
||||||
|
| `node_id` | Integer | Unset | The flownode identifier; it should be unique in the cluster. |
|
||||||
|
| `grpc` | -- | -- | The gRPC server options. |
|
||||||
|
| `grpc.addr` | String | `127.0.0.1:6800` | The address to bind the gRPC server. |
|
||||||
|
| `grpc.hostname` | String | `127.0.0.1` | The hostname advertised to the metasrv,<br/>and used for connections from outside the host |
|
||||||
|
| `grpc.runtime_size` | Integer | `2` | The number of server worker threads. |
|
||||||
|
| `grpc.max_recv_message_size` | String | `512MB` | The maximum receive message size for gRPC server. |
|
||||||
|
| `grpc.max_send_message_size` | String | `512MB` | The maximum send message size for gRPC server. |
|
||||||
|
| `meta_client` | -- | -- | The metasrv client options. |
|
||||||
|
| `meta_client.metasrv_addrs` | Array | -- | The addresses of the metasrv. |
|
||||||
|
| `meta_client.timeout` | String | `3s` | Operation timeout. |
|
||||||
|
| `meta_client.heartbeat_timeout` | String | `500ms` | Heartbeat timeout. |
|
||||||
|
| `meta_client.ddl_timeout` | String | `10s` | DDL timeout. |
|
||||||
|
| `meta_client.connect_timeout` | String | `1s` | Connect server timeout. |
|
||||||
|
| `meta_client.tcp_nodelay` | Bool | `true` | `TCP_NODELAY` option for accepted connections. |
|
||||||
|
| `meta_client.metadata_cache_max_capacity` | Integer | `100000` | The maximum capacity of the metadata cache. |
|
||||||
|
| `meta_client.metadata_cache_ttl` | String | `10m` | TTL of the metadata cache. |
|
||||||
|
| `meta_client.metadata_cache_tti` | String | `5m` | TTI of the metadata cache. |
|
||||||
|
| `heartbeat` | -- | -- | The heartbeat options. |
|
||||||
|
| `heartbeat.interval` | String | `3s` | Interval for sending heartbeat messages to the metasrv. |
|
||||||
|
| `heartbeat.retry_interval` | String | `3s` | Interval for retrying to send heartbeat messages to the metasrv. |
|
||||||
|
| `logging` | -- | -- | The logging options. |
|
||||||
|
| `logging.dir` | String | `/tmp/greptimedb/logs` | The directory to store the log files. If set to empty, logs will not be written to files. |
|
||||||
|
| `logging.level` | String | Unset | The log level. Can be `info`/`debug`/`warn`/`error`. |
|
||||||
|
| `logging.enable_otlp_tracing` | Bool | `false` | Enable OTLP tracing. |
|
||||||
|
| `logging.otlp_endpoint` | String | `http://localhost:4317` | The OTLP tracing endpoint. |
|
||||||
|
| `logging.append_stdout` | Bool | `true` | Whether to append logs to stdout. |
|
||||||
|
| `logging.log_format` | String | `text` | The log format. Can be `text`/`json`. |
|
||||||
|
| `logging.max_log_files` | Integer | `720` | The maximum number of log files. |
|
||||||
|
| `logging.tracing_sample_ratio` | -- | -- | The percentage of traces that will be sampled and exported.<br/>Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.<br/>Ratios > 1 are treated as 1 and ratios < 0 are treated as 0. |
|
||||||
|
| `logging.tracing_sample_ratio.default_ratio` | Float | `1.0` | -- |
|
||||||
|
| `logging.slow_query` | -- | -- | The slow query log options. |
|
||||||
|
| `logging.slow_query.enable` | Bool | `false` | Whether to enable slow query log. |
|
||||||
|
| `logging.slow_query.threshold` | String | Unset | The threshold of slow query. |
|
||||||
|
| `logging.slow_query.sample_ratio` | Float | Unset | The sampling ratio of slow query log. The value should be in the range of (0, 1]. |
|
||||||
|
| `tracing` | -- | -- | The tracing options. Only effective when compiled with the `tokio-console` feature. |
|
||||||
|
| `tracing.tokio_console_addr` | String | Unset | The tokio console address. |
|
||||||
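
To tie the flownode options together, here is a minimal, hypothetical flownode configuration. Every key mirrors an entry in the table above; the addresses and the node id are placeholders.

```toml
## Hypothetical minimal flownode configuration.
node_id = 14    # placeholder id, unique within the cluster

[grpc]
addr = "127.0.0.1:6800"
hostname = "127.0.0.1"
runtime_size = 2

[meta_client]
metasrv_addrs = ["127.0.0.1:3002"]
timeout = "3s"

[heartbeat]
interval = "3s"
retry_interval = "3s"
```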
@@ -1,171 +1,652 @@
|
|||||||
# Node running mode, see `standalone.example.toml`.
|
## The running mode of the datanode. It can be `standalone` or `distributed`.
|
||||||
mode = "distributed"
|
mode = "standalone"
|
||||||
# The datanode identifier, should be unique.
|
|
||||||
|
## The datanode identifier and should be unique in the cluster.
|
||||||
|
## @toml2docs:none-default
|
||||||
node_id = 42
|
node_id = 42
|
||||||
# gRPC server address, "127.0.0.1:3001" by default.
|
|
||||||
rpc_addr = "127.0.0.1:3001"
|
## Start services after regions have obtained leases.
|
||||||
# Hostname of this node.
|
## It will block the datanode start if it can't receive leases in the heartbeat from metasrv.
|
||||||
rpc_hostname = "127.0.0.1"
|
|
||||||
# The number of gRPC server worker threads, 8 by default.
|
|
||||||
rpc_runtime_size = 8
|
|
||||||
# Start services after regions have obtained leases.
|
|
||||||
# It will block the datanode start if it can't receive leases in the heartbeat from metasrv.
|
|
||||||
require_lease_before_startup = false
|
require_lease_before_startup = false
|
||||||
|
|
||||||
# Initialize all regions in the background during the startup.
|
## Initialize all regions in the background during the startup.
|
||||||
# By default, it provides services after all regions have been initialized.
|
## By default, it provides services after all regions have been initialized.
|
||||||
init_regions_in_background = false
|
init_regions_in_background = false
|
||||||
|
|
||||||
|
## Enable telemetry to collect anonymous usage data.
|
||||||
|
enable_telemetry = true
|
||||||
|
|
||||||
|
## Parallelism of initializing regions.
|
||||||
|
init_regions_parallelism = 16
|
||||||
|
|
||||||
|
## The maximum current queries allowed to be executed. Zero means unlimited.
|
||||||
|
max_concurrent_queries = 0
|
||||||
|
|
||||||
|
## Deprecated, use `grpc.addr` instead.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
rpc_addr = "127.0.0.1:3001"
|
||||||
|
|
||||||
|
## Deprecated, use `grpc.hostname` instead.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
rpc_hostname = "127.0.0.1"
|
||||||
|
|
||||||
|
## Deprecated, use `grpc.runtime_size` instead.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
rpc_runtime_size = 8
|
||||||
|
|
||||||
|
## Deprecated, use `grpc.rpc_max_recv_message_size` instead.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
rpc_max_recv_message_size = "512MB"
|
||||||
|
|
||||||
|
## Deprecated, use `grpc.rpc_max_send_message_size` instead.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
rpc_max_send_message_size = "512MB"
|
||||||
|
|
||||||
|
|
||||||
|
## The HTTP server options.
|
||||||
|
[http]
|
||||||
|
## The address to bind the HTTP server.
|
||||||
|
addr = "127.0.0.1:4000"
|
||||||
|
## HTTP request timeout. Set to 0 to disable timeout.
|
||||||
|
timeout = "30s"
|
||||||
|
## HTTP request body limit.
|
||||||
|
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
|
||||||
|
## Set to 0 to disable limit.
|
||||||
|
body_limit = "64MB"
|
||||||
|
|
||||||
|
## The gRPC server options.
|
||||||
|
[grpc]
|
||||||
|
## The address to bind the gRPC server.
|
||||||
|
addr = "127.0.0.1:3001"
|
||||||
|
## The hostname advertised to the metasrv,
|
||||||
|
## and used for connections from outside the host
|
||||||
|
hostname = "127.0.0.1"
|
||||||
|
## The number of server worker threads.
|
||||||
|
runtime_size = 8
|
||||||
|
## The maximum receive message size for gRPC server.
|
||||||
|
max_recv_message_size = "512MB"
|
||||||
|
## The maximum send message size for gRPC server.
|
||||||
|
max_send_message_size = "512MB"
|
||||||
|
|
||||||
|
## gRPC server TLS options, see `mysql.tls` section.
|
||||||
|
[grpc.tls]
|
||||||
|
## TLS mode.
|
||||||
|
mode = "disable"
|
||||||
|
|
||||||
|
## Certificate file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
cert_path = ""
|
||||||
|
|
||||||
|
## Private key file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
key_path = ""
|
||||||
|
|
||||||
|
## Watch for Certificate and key file change and auto reload.
|
||||||
|
## For now, gRPC tls config does not support auto reload.
|
||||||
|
watch = false
|
||||||
|
|
||||||
|
## The runtime options.
|
||||||
|
#+ [runtime]
|
||||||
|
## The number of threads to execute the runtime for global read operations.
|
||||||
|
#+ global_rt_size = 8
|
||||||
|
## The number of threads to execute the runtime for global write operations.
|
||||||
|
#+ compact_rt_size = 4
|
||||||
|
|
||||||
|
## The heartbeat options.
|
||||||
[heartbeat]
|
[heartbeat]
|
||||||
# Interval for sending heartbeat messages to the Metasrv, 3 seconds by default.
|
## Interval for sending heartbeat messages to the metasrv.
|
||||||
interval = "3s"
|
interval = "3s"
|
||||||
|
|
||||||
# Metasrv client options.
|
## Interval for retrying to send heartbeat messages to the metasrv.
|
||||||
|
retry_interval = "3s"
|
||||||
|
|
||||||
|
## The metasrv client options.
|
||||||
[meta_client]
|
[meta_client]
|
||||||
# Metasrv address list.
|
## The addresses of the metasrv.
|
||||||
metasrv_addrs = ["127.0.0.1:3002"]
|
metasrv_addrs = ["127.0.0.1:3002"]
|
||||||
# Heartbeat timeout, 500 milliseconds by default.
|
|
||||||
heartbeat_timeout = "500ms"
|
## Operation timeout.
|
||||||
# Operation timeout, 3 seconds by default.
|
|
||||||
timeout = "3s"
|
timeout = "3s"
|
||||||
# Connect server timeout, 1 second by default.
|
|
||||||
|
## Heartbeat timeout.
|
||||||
|
heartbeat_timeout = "500ms"
|
||||||
|
|
||||||
|
## DDL timeout.
|
||||||
|
ddl_timeout = "10s"
|
||||||
|
|
||||||
|
## Connect server timeout.
|
||||||
connect_timeout = "1s"
|
connect_timeout = "1s"
|
||||||
# `TCP_NODELAY` option for accepted connections, true by default.
|
|
||||||
|
## `TCP_NODELAY` option for accepted connections.
|
||||||
tcp_nodelay = true
|
tcp_nodelay = true
|
||||||
|
|
||||||
# WAL options.
|
## The configuration about the cache of the metadata.
|
||||||
|
metadata_cache_max_capacity = 100000
|
||||||
|
|
||||||
|
## TTL of the metadata cache.
|
||||||
|
metadata_cache_ttl = "10m"
|
||||||
|
|
||||||
|
# TTI of the metadata cache.
|
||||||
|
metadata_cache_tti = "5m"
|
||||||
|
|
||||||
|
## The WAL options.
|
||||||
[wal]
|
[wal]
|
||||||
|
## The provider of the WAL.
|
||||||
|
## - `raft_engine`: the wal is stored in the local file system by raft-engine.
|
||||||
|
## - `kafka`: it's remote wal that data is stored in Kafka.
|
||||||
provider = "raft_engine"
|
provider = "raft_engine"
|
||||||
|
|
||||||
# Raft-engine wal options, see `standalone.example.toml`.
|
## The directory to store the WAL files.
|
||||||
# dir = "/tmp/greptimedb/wal"
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
dir = "/tmp/greptimedb/wal"
|
||||||
|
|
||||||
|
## The size of the WAL segment file.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
file_size = "256MB"
|
file_size = "256MB"
|
||||||
|
|
||||||
|
## The threshold of the WAL size to trigger a flush.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
purge_threshold = "4GB"
|
purge_threshold = "4GB"
|
||||||
|
|
||||||
|
## The interval to trigger a flush.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
purge_interval = "10m"
|
purge_interval = "10m"
|
||||||
|
|
||||||
|
## The read batch size.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
read_batch_size = 128
|
read_batch_size = 128
|
||||||
|
|
||||||
|
## Whether to use sync write.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
sync_write = false
|
sync_write = false
|
||||||
|
|
||||||
# Kafka wal options, see `standalone.example.toml`.
|
## Whether to reuse logically truncated log files.
|
||||||
# broker_endpoints = ["127.0.0.1:9092"]
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
# Warning: Kafka has a default limit of 1MB per message in a topic.
|
enable_log_recycle = true
|
||||||
# max_batch_size = "1MB"
|
|
||||||
# linger = "200ms"
|
|
||||||
# consumer_wait_timeout = "100ms"
|
|
||||||
# backoff_init = "500ms"
|
|
||||||
# backoff_max = "10s"
|
|
||||||
# backoff_base = 2
|
|
||||||
# backoff_deadline = "5mins"
|
|
||||||
|
|
||||||
# Storage options, see `standalone.example.toml`.
|
## Whether to pre-create log files on start up.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
|
prefill_log_files = false
|
||||||
|
|
||||||
|
## Duration for fsyncing log files.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
|
sync_period = "10s"
|
||||||
|
|
||||||
|
## Parallelism during WAL recovery.
|
||||||
|
recovery_parallelism = 2
|
||||||
|
|
||||||
|
## The Kafka broker endpoints.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
broker_endpoints = ["127.0.0.1:9092"]
|
||||||
|
|
||||||
|
## The max size of a single producer batch.
|
||||||
|
## Warning: Kafka has a default limit of 1MB per message in a topic.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
max_batch_bytes = "1MB"
|
||||||
|
|
||||||
|
## The consumer wait timeout.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
consumer_wait_timeout = "100ms"
|
||||||
|
|
||||||
|
## The initial backoff delay.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
backoff_init = "500ms"
|
||||||
|
|
||||||
|
## The maximum backoff delay.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
backoff_max = "10s"
|
||||||
|
|
||||||
|
## The exponential backoff rate, i.e. next backoff = base * current backoff.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
backoff_base = 2
|
||||||
|
|
||||||
|
## The deadline of retries.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
backoff_deadline = "5mins"
|
||||||
|
|
||||||
|
## Whether to enable WAL index creation.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
create_index = true
|
||||||
|
|
||||||
|
## The interval for dumping WAL indexes.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
dump_index_interval = "60s"
|
||||||
|
|
||||||
|
## Ignore missing entries during read WAL.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
##
|
||||||
|
## This option ensures that when Kafka messages are deleted, the system
|
||||||
|
## can still successfully replay memtable data without throwing an
|
||||||
|
## out-of-range error.
|
||||||
|
## However, enabling this option might lead to unexpected data loss,
|
||||||
|
## as the system will skip over missing entries instead of treating
|
||||||
|
## them as critical errors.
|
||||||
|
overwrite_entry_start_id = false
|
||||||
|
|
||||||
|
# The Kafka SASL configuration.
|
||||||
|
# **It's only used when the provider is `kafka`**.
|
||||||
|
# Available SASL mechanisms:
|
||||||
|
# - `PLAIN`
|
||||||
|
# - `SCRAM-SHA-256`
|
||||||
|
# - `SCRAM-SHA-512`
|
||||||
|
# [wal.sasl]
|
||||||
|
# type = "SCRAM-SHA-512"
|
||||||
|
# username = "user_kafka"
|
||||||
|
# password = "secret"
|
||||||
|
|
||||||
|
# The Kafka TLS configuration.
|
||||||
|
# **It's only used when the provider is `kafka`**.
|
||||||
|
# [wal.tls]
|
||||||
|
# server_ca_cert_path = "/path/to/server_cert"
|
||||||
|
# client_cert_path = "/path/to/client_cert"
|
||||||
|
# client_key_path = "/path/to/key"
|
||||||
|
|
||||||
|
# Example of using S3 as the storage.
|
||||||
|
# [storage]
|
||||||
|
# type = "S3"
|
||||||
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# access_key_id = "test"
|
||||||
|
# secret_access_key = "123456"
|
||||||
|
# endpoint = "https://s3.amazonaws.com"
|
||||||
|
# region = "us-west-2"
|
||||||
|
|
||||||
|
# Example of using Oss as the storage.
|
||||||
|
# [storage]
|
||||||
|
# type = "Oss"
|
||||||
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# access_key_id = "test"
|
||||||
|
# access_key_secret = "123456"
|
||||||
|
# endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
|
||||||
|
|
||||||
|
# Example of using Azblob as the storage.
|
||||||
|
# [storage]
|
||||||
|
# type = "Azblob"
|
||||||
|
# container = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# account_name = "test"
|
||||||
|
# account_key = "123456"
|
||||||
|
# endpoint = "https://greptimedb.blob.core.windows.net"
|
||||||
|
# sas_token = ""
|
||||||
|
|
||||||
|
# Example of using Gcs as the storage.
|
||||||
|
# [storage]
|
||||||
|
# type = "Gcs"
|
||||||
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# scope = "test"
|
||||||
|
# credential_path = "123456"
|
||||||
|
# credential = "base64-credential"
|
||||||
|
# endpoint = "https://storage.googleapis.com"
|
||||||
|
|
||||||
|
## The data storage options.
|
||||||
[storage]
|
[storage]
|
||||||
# The working home directory.
|
## The working home directory.
|
||||||
data_home = "/tmp/greptimedb/"
|
data_home = "/tmp/greptimedb/"
|
||||||
# Storage type.
|
|
||||||
type = "File"
|
|
||||||
# TTL for all tables. Disabled by default.
|
|
||||||
# global_ttl = "7d"
|
|
||||||
|
|
||||||
# Cache configuration for object storage such as 'S3' etc.
|
## The storage type used to store the data.
|
||||||
# The local file cache directory
|
## - `File`: the data is stored in the local file system.
|
||||||
# cache_path = "/path/local_cache"
|
## - `S3`: the data is stored in the S3 object storage.
|
||||||
# The local file cache capacity in bytes.
|
## - `Gcs`: the data is stored in the Google Cloud Storage.
|
||||||
# cache_capacity = "256MB"
|
## - `Azblob`: the data is stored in the Azure Blob Storage.
|
||||||
|
## - `Oss`: the data is stored in the Aliyun OSS.
|
||||||
|
type = "File"
|
||||||
|
|
||||||
|
## Cache configuration for object storage such as 'S3' etc.
|
||||||
|
## The local file cache directory.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
cache_path = "/path/local_cache"
|
||||||
|
|
||||||
|
## The local file cache capacity in bytes.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
cache_capacity = "256MB"
|
||||||
|
|
||||||
|
## The S3 bucket name.
|
||||||
|
## **It's only used when the storage type is `S3`, `Oss` and `Gcs`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
bucket = "greptimedb"
|
||||||
|
|
||||||
|
## The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.
|
||||||
|
## **It's only used when the storage type is `S3`, `Oss` and `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
root = "greptimedb"
|
||||||
|
|
||||||
|
## The access key id of the aws account.
|
||||||
|
## It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.
|
||||||
|
## **It's only used when the storage type is `S3` and `Oss`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
access_key_id = "test"
|
||||||
|
|
||||||
|
## The secret access key of the aws account.
|
||||||
|
## It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.
|
||||||
|
## **It's only used when the storage type is `S3`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
secret_access_key = "test"
|
||||||
|
|
||||||
|
## The secret access key of the aliyun account.
|
||||||
|
## **It's only used when the storage type is `Oss`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
access_key_secret = "test"
|
||||||
|
|
||||||
|
## The account key of the azure account.
|
||||||
|
## **It's only used when the storage type is `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
account_name = "test"
|
||||||
|
|
||||||
|
## The account key of the azure account.
|
||||||
|
## **It's only used when the storage type is `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
account_key = "test"
|
||||||
|
|
||||||
|
## The scope of the google cloud storage.
|
||||||
|
## **It's only used when the storage type is `Gcs`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
scope = "test"
|
||||||
|
|
||||||
|
## The credential path of the google cloud storage.
|
||||||
|
## **It's only used when the storage type is `Gcs`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
credential_path = "test"
|
||||||
|
|
||||||
|
## The credential of the google cloud storage.
|
||||||
|
## **It's only used when the storage type is `Gcs`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
credential = "base64-credential"
|
||||||
|
|
||||||
|
## The container of the azure account.
|
||||||
|
## **It's only used when the storage type is `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
container = "greptimedb"
|
||||||
|
|
||||||
|
## The sas token of the azure account.
|
||||||
|
## **It's only used when the storage type is `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
sas_token = ""
|
||||||
|
|
||||||
|
## The endpoint of the S3 service.
|
||||||
|
## **It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
endpoint = "https://s3.amazonaws.com"
|
||||||
|
|
||||||
|
## The region of the S3 service.
|
||||||
|
## **It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
region = "us-west-2"
|
||||||
|
|
||||||
# Custom storage options
|
# Custom storage options
|
||||||
#[[storage.providers]]
|
# [[storage.providers]]
|
||||||
#type = "S3"
|
# name = "S3"
|
||||||
#[[storage.providers]]
|
# type = "S3"
|
||||||
#type = "Gcs"
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# access_key_id = "test"
|
||||||
|
# secret_access_key = "123456"
|
||||||
|
# endpoint = "https://s3.amazonaws.com"
|
||||||
|
# region = "us-west-2"
|
||||||
|
# [[storage.providers]]
|
||||||
|
# name = "Gcs"
|
||||||
|
# type = "Gcs"
|
||||||
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# scope = "test"
|
||||||
|
# credential_path = "123456"
|
||||||
|
# credential = "base64-credential"
|
||||||
|
# endpoint = "https://storage.googleapis.com"
|
||||||
|
|
||||||
# Mito engine options
|
## The region engine options. You can configure multiple region engines.
|
||||||
[[region_engine]]
|
[[region_engine]]
|
||||||
|
|
||||||
|
## The Mito engine options.
|
||||||
[region_engine.mito]
|
[region_engine.mito]
|
||||||
# Number of region workers
|
|
||||||
num_workers = 8
|
## Number of region workers.
|
||||||
# Request channel size of each worker
|
#+ num_workers = 8
|
||||||
|
|
||||||
|
## Request channel size of each worker.
|
||||||
worker_channel_size = 128
|
worker_channel_size = 128
|
||||||
# Max batch size for a worker to handle requests
|
|
||||||
|
## Max batch size for a worker to handle requests.
|
||||||
worker_request_batch_size = 64
|
worker_request_batch_size = 64
|
||||||
# Number of meta action updated to trigger a new checkpoint for the manifest
|
|
||||||
|
## Number of meta action updated to trigger a new checkpoint for the manifest.
|
||||||
manifest_checkpoint_distance = 10
|
manifest_checkpoint_distance = 10
|
||||||
# Whether to compress manifest and checkpoint file by gzip (default false).
|
|
||||||
|
## Whether to compress manifest and checkpoint file by gzip (default false).
|
||||||
compress_manifest = false
|
compress_manifest = false
|
||||||
# Max number of running background jobs
|
|
||||||
max_background_jobs = 4
|
## Max number of running background flush jobs (default: 1/2 of cpu cores).
|
||||||
# Interval to auto flush a region if it has not flushed yet.
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ max_background_flushes = 4
|
||||||
|
|
||||||
|
## Max number of running background compaction jobs (default: 1/4 of cpu cores).
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ max_background_compactions = 2
|
||||||
|
|
||||||
|
## Max number of running background purge jobs (default: number of cpu cores).
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ max_background_purges = 8
|
||||||
|
|
||||||
|
## Interval to auto flush a region if it has not flushed yet.
|
||||||
auto_flush_interval = "1h"
|
auto_flush_interval = "1h"
|
||||||
# Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB.
|
|
||||||
global_write_buffer_size = "1GB"
|
## Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB.
|
||||||
# Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size`
|
## @toml2docs:none-default="Auto"
|
||||||
global_write_buffer_reject_size = "2GB"
|
#+ global_write_buffer_size = "1GB"
|
||||||
# Cache size for SST metadata. Setting it to 0 to disable the cache.
|
|
||||||
# If not set, it's default to 1/32 of OS memory with a max limitation of 128MB.
|
## Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size`
|
||||||
sst_meta_cache_size = "128MB"
|
## @toml2docs:none-default="Auto"
|
||||||
# Cache size for vectors and arrow arrays. Setting it to 0 to disable the cache.
|
#+ global_write_buffer_reject_size = "2GB"
|
||||||
# If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
|
|
||||||
vector_cache_size = "512MB"
|
## Cache size for SST metadata. Setting it to 0 to disable the cache.
|
||||||
# Cache size for pages of SST row groups. Setting it to 0 to disable the cache.
|
## If not set, it's default to 1/32 of OS memory with a max limitation of 128MB.
|
||||||
# If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
|
## @toml2docs:none-default="Auto"
|
||||||
page_cache_size = "512MB"
|
#+ sst_meta_cache_size = "128MB"
|
||||||
# Buffer size for SST writing.
|
|
||||||
|
## Cache size for vectors and arrow arrays. Setting it to 0 to disable the cache.
|
||||||
|
## If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ vector_cache_size = "512MB"
|
||||||
|
|
||||||
|
## Cache size for pages of SST row groups. Setting it to 0 to disable the cache.
|
||||||
|
## If not set, it's default to 1/8 of OS memory.
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ page_cache_size = "512MB"
|
||||||
|
|
||||||
|
## Cache size for time series selector (e.g. `last_value()`). Setting it to 0 to disable the cache.
|
||||||
|
## If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ selector_result_cache_size = "512MB"
|
||||||
|
|
||||||
|
## Whether to enable the experimental write cache.
|
||||||
|
enable_experimental_write_cache = false
|
||||||
|
|
||||||
|
## File system path for write cache, defaults to `{data_home}/write_cache`.
|
||||||
|
experimental_write_cache_path = ""
|
||||||
|
|
||||||
|
## Capacity for write cache.
|
||||||
|
experimental_write_cache_size = "512MB"
|
||||||
|
|
||||||
|
## TTL for write cache.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
experimental_write_cache_ttl = "8h"
|
||||||
|
|
||||||
|
## Buffer size for SST writing.
|
||||||
sst_write_buffer_size = "8MB"
|
sst_write_buffer_size = "8MB"
|
||||||
# Parallelism to scan a region (default: 1/4 of cpu cores).
|
|
||||||
# - 0: using the default value (1/4 of cpu cores).
|
## Parallelism to scan a region (default: 1/4 of cpu cores).
|
||||||
# - 1: scan in current thread.
|
## - `0`: using the default value (1/4 of cpu cores).
|
||||||
# - n: scan in parallelism n.
|
## - `1`: scan in current thread.
|
||||||
|
## - `n`: scan in parallelism n.
|
||||||
scan_parallelism = 0
|
scan_parallelism = 0
|
||||||
# Capacity of the channel to send data from parallel scan tasks to the main task (default 32).
|
|
||||||
|
## Capacity of the channel to send data from parallel scan tasks to the main task.
|
||||||
parallel_scan_channel_size = 32
|
parallel_scan_channel_size = 32
|
||||||
# Whether to allow stale WAL entries read during replay.
|
|
||||||
|
## Whether to allow stale WAL entries read during replay.
|
||||||
allow_stale_entries = false
|
allow_stale_entries = false
|
||||||
|
|
||||||
|
## Minimum time interval between two compactions.
|
||||||
|
## To align with the old behavior, the default value is 0 (no restrictions).
|
||||||
|
min_compaction_interval = "0m"
|
||||||
|
|
||||||
|
## The options for index in Mito engine.
|
||||||
|
[region_engine.mito.index]
|
||||||
|
|
||||||
|
## Auxiliary directory path for the index in filesystem, used to store intermediate files for
|
||||||
|
## creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.
|
||||||
|
## The default name for this directory is `index_intermediate` for backward compatibility.
|
||||||
|
##
|
||||||
|
## This path contains two subdirectories:
|
||||||
|
## - `__intm`: for storing intermediate files used during creating index.
|
||||||
|
## - `staging`: for storing staging files used during searching index.
|
||||||
|
aux_path = ""
|
||||||
|
|
||||||
|
## The max capacity of the staging directory.
|
||||||
|
staging_size = "2GB"
|
||||||
|
|
||||||
|
## The options for inverted index in Mito engine.
|
||||||
[region_engine.mito.inverted_index]
|
[region_engine.mito.inverted_index]
|
||||||
# Whether to create the index on flush.
|
|
||||||
# - "auto": automatically
|
## Whether to create the index on flush.
|
||||||
# - "disable": never
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
create_on_flush = "auto"
|
create_on_flush = "auto"
|
||||||
# Whether to create the index on compaction.
|
|
||||||
# - "auto": automatically
|
## Whether to create the index on compaction.
|
||||||
# - "disable": never
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
create_on_compaction = "auto"
|
create_on_compaction = "auto"
|
||||||
# Whether to apply the index on query
|
|
||||||
# - "auto": automatically
|
## Whether to apply the index on query
|
||||||
# - "disable": never
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
apply_on_query = "auto"
|
apply_on_query = "auto"
|
||||||
# Memory threshold for performing an external sort during index creation.
|
|
||||||
# Setting to empty will disable external sorting, forcing all sorting operations to happen in memory.
|
## Memory threshold for performing an external sort during index creation.
|
||||||
mem_threshold_on_create = "64M"
|
## - `auto`: automatically determine the threshold based on the system memory size (default)
|
||||||
# File system path to store intermediate files for external sorting (default `{data_home}/index_intermediate`).
|
## - `unlimited`: no memory limit
|
||||||
|
## - `[size]` e.g. `64MB`: fixed memory threshold
|
||||||
|
mem_threshold_on_create = "auto"
|
||||||
|
|
||||||
|
## Deprecated, use `region_engine.mito.index.aux_path` instead.
|
||||||
intermediate_path = ""
|
intermediate_path = ""
|
||||||
|
|
||||||
|
## The options for full-text index in Mito engine.
|
||||||
|
[region_engine.mito.fulltext_index]
|
||||||
|
|
||||||
|
## Whether to create the index on flush.
|
||||||
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
|
create_on_flush = "auto"
|
||||||
|
|
||||||
|
## Whether to create the index on compaction.
|
||||||
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
|
create_on_compaction = "auto"
|
||||||
|
|
||||||
|
## Whether to apply the index on query.
|
||||||
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
|
apply_on_query = "auto"
|
||||||
|
|
||||||
|
## Memory threshold for index creation.
|
||||||
|
## - `auto`: automatically determine the threshold based on the system memory size (default)
|
||||||
|
## - `unlimited`: no memory limit
|
||||||
|
## - `[size]` e.g. `64MB`: fixed memory threshold
|
||||||
|
mem_threshold_on_create = "auto"
|
||||||
|
|
||||||
[region_engine.mito.memtable]
|
[region_engine.mito.memtable]
|
||||||
# Memtable type.
|
## Memtable type.
|
||||||
# - "partition_tree": partition tree memtable
|
## - `time_series`: time-series memtable
|
||||||
# - "time_series": time-series memtable (deprecated)
|
## - `partition_tree`: partition tree memtable (experimental)
|
||||||
type = "partition_tree"
|
type = "time_series"
|
||||||
# The max number of keys in one shard.
|
|
||||||
|
## The max number of keys in one shard.
|
||||||
|
## Only available for `partition_tree` memtable.
|
||||||
index_max_keys_per_shard = 8192
|
index_max_keys_per_shard = 8192
|
||||||
# The max rows of data inside the actively writing buffer in one shard.
|
|
||||||
|
## The max rows of data inside the actively writing buffer in one shard.
|
||||||
|
## Only available for `partition_tree` memtable.
|
||||||
data_freeze_threshold = 32768
|
data_freeze_threshold = 32768
|
||||||
# Max dictionary bytes.
|
|
||||||
|
## Max dictionary bytes.
|
||||||
|
## Only available for `partition_tree` memtable.
|
||||||
fork_dictionary_bytes = "1GiB"
|
fork_dictionary_bytes = "1GiB"
|
||||||
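For reference only, switching to the experimental partition tree memtable uses the keys documented above; the numbers below simply repeat the defaults shown here:

#+ [region_engine.mito.memtable]
#+ type = "partition_tree"
#+ index_max_keys_per_shard = 8192
#+ data_freeze_threshold = 32768
#+ fork_dictionary_bytes = "1GiB"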
|
|
||||||
# Log options, see `standalone.example.toml`
|
[[region_engine]]
|
||||||
# [logging]
|
## Enable the file engine.
|
||||||
# dir = "/tmp/greptimedb/logs"
|
[region_engine.file]
|
||||||
# level = "info"
|
|
||||||
|
|
||||||
# Datanode export the metrics generated by itself
|
## The logging options.
|
||||||
# encoded to Prometheus remote-write format
|
[logging]
|
||||||
# and send to Prometheus remote-write compatible receiver (e.g. send to `greptimedb` itself)
|
## The directory to store the log files. If set to empty, logs will not be written to files.
|
||||||
# This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
|
dir = "/tmp/greptimedb/logs"
|
||||||
# [export_metrics]
|
|
||||||
# whether enable export metrics, default is false
|
## The log level. Can be `info`/`debug`/`warn`/`error`.
|
||||||
# enable = false
|
## @toml2docs:none-default
|
||||||
# The interval of export metrics
|
level = "info"
|
||||||
# write_interval = "30s"
|
|
||||||
# [export_metrics.remote_write]
|
## Enable OTLP tracing.
|
||||||
# The url the metrics send to. The url is empty by default, url example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`
|
enable_otlp_tracing = false
|
||||||
# url = ""
|
|
||||||
# HTTP headers of Prometheus remote-write carry
|
## The OTLP tracing endpoint.
|
||||||
# headers = {}
|
otlp_endpoint = "http://localhost:4317"
|
||||||
|
|
||||||
|
## Whether to append logs to stdout.
|
||||||
|
append_stdout = true
|
||||||
|
|
||||||
|
## The log format. Can be `text`/`json`.
|
||||||
|
log_format = "text"
|
||||||
|
|
||||||
|
## The maximum number of log files.
|
||||||
|
max_log_files = 720
|
||||||
|
|
||||||
|
## The percentage of traces that will be sampled and exported.
|
||||||
|
## Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.
|
||||||
|
## Ratios > 1 are treated as 1, and ratios < 0 are treated as 0.
|
||||||
|
[logging.tracing_sample_ratio]
|
||||||
|
default_ratio = 1.0
|
||||||
|
|
||||||
|
## The slow query log options.
|
||||||
|
[logging.slow_query]
|
||||||
|
## Whether to enable slow query log.
|
||||||
|
enable = false
|
||||||
|
|
||||||
|
## The threshold of slow query.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
threshold = "10s"
|
||||||
|
|
||||||
|
## The sampling ratio of slow query log. The value should be in the range of (0, 1].
|
||||||
|
## @toml2docs:none-default
|
||||||
|
sample_ratio = 1.0
|
||||||
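For illustration only, a slow query log setup that records one in ten queries slower than five seconds might look like the following (the values are arbitrary examples, not recommendations):

#+ [logging.slow_query]
#+ enable = true
#+ threshold = "5s"
#+ sample_ratio = 0.1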
|
|
||||||
|
## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
|
||||||
|
## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
|
||||||
|
[export_metrics]
|
||||||
|
|
||||||
|
## Whether to enable export metrics.
|
||||||
|
enable = false
|
||||||
|
|
||||||
|
## The interval of export metrics.
|
||||||
|
write_interval = "30s"
|
||||||
|
|
||||||
|
## For `standalone` mode, `self_import` is recommended to collect the metrics generated by itself.
|
||||||
|
## You must create the database before enabling it.
|
||||||
|
[export_metrics.self_import]
|
||||||
|
## @toml2docs:none-default
|
||||||
|
db = "greptime_metrics"
|
||||||
|
|
||||||
|
[export_metrics.remote_write]
|
||||||
|
## The URL to send the metrics to. For example: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
|
||||||
|
url = ""
|
||||||
|
|
||||||
|
## The HTTP headers carried by the Prometheus remote-write requests.
|
||||||
|
headers = { }
|
||||||
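As a sketch, exporting metrics back into the database itself via the remote-write endpoint could be configured as below; the URL reuses the example shown above and the database name is only illustrative:

#+ [export_metrics]
#+ enable = true
#+ write_interval = "30s"
#+ [export_metrics.remote_write]
#+ url = "http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics"
#+ headers = { }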
|
|
||||||
|
## The tracing options. Only effective when compiled with the `tokio-console` feature.
|
||||||
|
#+ [tracing]
|
||||||
|
## The tokio console address.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
#+ tokio_console_addr = "127.0.0.1"
|
||||||
|
|||||||
config/flownode.example.toml (new file, 108 lines)
@@ -0,0 +1,108 @@
|
|||||||
|
## The running mode of the flownode. It can be `standalone` or `distributed`.
|
||||||
|
mode = "distributed"
|
||||||
|
|
||||||
|
## The flownode identifier, which should be unique in the cluster.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
node_id = 14
|
||||||
|
|
||||||
|
## The gRPC server options.
|
||||||
|
[grpc]
|
||||||
|
## The address to bind the gRPC server.
|
||||||
|
addr = "127.0.0.1:6800"
|
||||||
|
## The hostname advertised to the metasrv,
|
||||||
|
## and used for connections from outside the host
|
||||||
|
hostname = "127.0.0.1"
|
||||||
|
## The number of server worker threads.
|
||||||
|
runtime_size = 2
|
||||||
|
## The maximum receive message size for gRPC server.
|
||||||
|
max_recv_message_size = "512MB"
|
||||||
|
## The maximum send message size for gRPC server.
|
||||||
|
max_send_message_size = "512MB"
|
||||||
|
|
||||||
|
|
||||||
|
## The metasrv client options.
|
||||||
|
[meta_client]
|
||||||
|
## The addresses of the metasrv.
|
||||||
|
metasrv_addrs = ["127.0.0.1:3002"]
|
||||||
|
|
||||||
|
## Operation timeout.
|
||||||
|
timeout = "3s"
|
||||||
|
|
||||||
|
## Heartbeat timeout.
|
||||||
|
heartbeat_timeout = "500ms"
|
||||||
|
|
||||||
|
## DDL timeout.
|
||||||
|
ddl_timeout = "10s"
|
||||||
|
|
||||||
|
## Connect server timeout.
|
||||||
|
connect_timeout = "1s"
|
||||||
|
|
||||||
|
## `TCP_NODELAY` option for accepted connections.
|
||||||
|
tcp_nodelay = true
|
||||||
|
|
||||||
|
## The maximum capacity of the metadata cache.
|
||||||
|
metadata_cache_max_capacity = 100000
|
||||||
|
|
||||||
|
## TTL of the metadata cache.
|
||||||
|
metadata_cache_ttl = "10m"
|
||||||
|
|
||||||
|
## TTI (time-to-idle) of the metadata cache.
|
||||||
|
metadata_cache_tti = "5m"
|
||||||
|
|
||||||
|
## The heartbeat options.
|
||||||
|
[heartbeat]
|
||||||
|
## Interval for sending heartbeat messages to the metasrv.
|
||||||
|
interval = "3s"
|
||||||
|
|
||||||
|
## Interval for retrying to send heartbeat messages to the metasrv.
|
||||||
|
retry_interval = "3s"
|
||||||
|
|
||||||
|
## The logging options.
|
||||||
|
[logging]
|
||||||
|
## The directory to store the log files. If set to empty, logs will not be written to files.
|
||||||
|
dir = "/tmp/greptimedb/logs"
|
||||||
|
|
||||||
|
## The log level. Can be `info`/`debug`/`warn`/`error`.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
level = "info"
|
||||||
|
|
||||||
|
## Enable OTLP tracing.
|
||||||
|
enable_otlp_tracing = false
|
||||||
|
|
||||||
|
## The OTLP tracing endpoint.
|
||||||
|
otlp_endpoint = "http://localhost:4317"
|
||||||
|
|
||||||
|
## Whether to append logs to stdout.
|
||||||
|
append_stdout = true
|
||||||
|
|
||||||
|
## The log format. Can be `text`/`json`.
|
||||||
|
log_format = "text"
|
||||||
|
|
||||||
|
## The maximum number of log files.
|
||||||
|
max_log_files = 720
|
||||||
|
|
||||||
|
## The percentage of traces that will be sampled and exported.
|
||||||
|
## Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.
|
||||||
|
## Ratios > 1 are treated as 1, and ratios < 0 are treated as 0.
|
||||||
|
[logging.tracing_sample_ratio]
|
||||||
|
default_ratio = 1.0
|
||||||
|
|
||||||
|
## The slow query log options.
|
||||||
|
[logging.slow_query]
|
||||||
|
## Whether to enable slow query log.
|
||||||
|
enable = false
|
||||||
|
|
||||||
|
## The threshold of slow query.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
threshold = "10s"
|
||||||
|
|
||||||
|
## The sampling ratio of slow query log. The value should be in the range of (0, 1].
|
||||||
|
## @toml2docs:none-default
|
||||||
|
sample_ratio = 1.0
|
||||||
|
|
||||||
|
## The tracing options. Only effective when compiled with the `tokio-console` feature.
|
||||||
|
#+ [tracing]
|
||||||
|
## The tokio console address.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
#+ tokio_console_addr = "127.0.0.1"
|
||||||
|
|
||||||
@@ -1,106 +1,237 @@
|
|||||||
# Node running mode, see `standalone.example.toml`.
|
## The default timezone of the server.
|
||||||
mode = "distributed"
|
## @toml2docs:none-default
|
||||||
# The default timezone of the server
|
default_timezone = "UTC"
|
||||||
# default_timezone = "UTC"
|
|
||||||
|
|
||||||
|
## The runtime options.
|
||||||
|
#+ [runtime]
|
||||||
|
## The number of threads to execute the runtime for global read operations.
|
||||||
|
#+ global_rt_size = 8
|
||||||
|
## The number of threads to execute the runtime for global write operations.
|
||||||
|
#+ compact_rt_size = 4
|
||||||
|
|
||||||
|
## The heartbeat options.
|
||||||
[heartbeat]
|
[heartbeat]
|
||||||
# Interval for sending heartbeat task to the Metasrv, 5 seconds by default.
|
## Interval for sending heartbeat messages to the metasrv.
|
||||||
interval = "5s"
|
interval = "18s"
|
||||||
# Interval for retry sending heartbeat task, 5 seconds by default.
|
|
||||||
retry_interval = "5s"
|
|
||||||
|
|
||||||
# HTTP server options, see `standalone.example.toml`.
|
## Interval for retrying to send heartbeat messages to the metasrv.
|
||||||
|
retry_interval = "3s"
|
||||||
|
|
||||||
|
## The HTTP server options.
|
||||||
[http]
|
[http]
|
||||||
|
## The address to bind the HTTP server.
|
||||||
addr = "127.0.0.1:4000"
|
addr = "127.0.0.1:4000"
|
||||||
|
## HTTP request timeout. Set to 0 to disable timeout.
|
||||||
timeout = "30s"
|
timeout = "30s"
|
||||||
|
## HTTP request body limit.
|
||||||
|
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
|
||||||
|
## Set to 0 to disable limit.
|
||||||
body_limit = "64MB"
|
body_limit = "64MB"
|
||||||
|
|
||||||
# gRPC server options, see `standalone.example.toml`.
|
## The gRPC server options.
|
||||||
[grpc]
|
[grpc]
|
||||||
|
## The address to bind the gRPC server.
|
||||||
addr = "127.0.0.1:4001"
|
addr = "127.0.0.1:4001"
|
||||||
|
## The hostname advertised to the metasrv,
|
||||||
|
## and used for connections from outside the host
|
||||||
|
hostname = "127.0.0.1"
|
||||||
|
## The number of server worker threads.
|
||||||
runtime_size = 8
|
runtime_size = 8
|
||||||
|
|
||||||
# MySQL server options, see `standalone.example.toml`.
|
## gRPC server TLS options, see `mysql.tls` section.
|
||||||
|
[grpc.tls]
|
||||||
|
## TLS mode.
|
||||||
|
mode = "disable"
|
||||||
|
|
||||||
|
## Certificate file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
cert_path = ""
|
||||||
|
|
||||||
|
## Private key file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
key_path = ""
|
||||||
|
|
||||||
|
## Watch for certificate and key file changes and reload them automatically.
|
||||||
|
## For now, gRPC tls config does not support auto reload.
|
||||||
|
watch = false
|
||||||
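A minimal sketch of enabling TLS for the gRPC server, assuming `mode` accepts the same values documented for `mysql.tls` and using placeholder certificate paths; `watch` stays `false` because auto reload is not yet supported for gRPC:

#+ [grpc.tls]
#+ mode = "require"
#+ cert_path = "/path/to/server.crt"
#+ key_path = "/path/to/server.key"
#+ watch = false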
|
|
||||||
|
## MySQL server options.
|
||||||
[mysql]
|
[mysql]
|
||||||
|
## Whether to enable the MySQL server.
|
||||||
enable = true
|
enable = true
|
||||||
|
## The address to bind the MySQL server.
|
||||||
addr = "127.0.0.1:4002"
|
addr = "127.0.0.1:4002"
|
||||||
|
## The number of server worker threads.
|
||||||
runtime_size = 2
|
runtime_size = 2
|
||||||
|
|
||||||
# MySQL server TLS options, see `standalone.example.toml`.
|
# MySQL server TLS options.
|
||||||
[mysql.tls]
|
[mysql.tls]
|
||||||
|
|
||||||
|
## TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
|
||||||
|
## - `disable` (default value)
|
||||||
|
## - `prefer`
|
||||||
|
## - `require`
|
||||||
|
## - `verify-ca`
|
||||||
|
## - `verify-full`
|
||||||
mode = "disable"
|
mode = "disable"
|
||||||
|
|
||||||
|
## Certificate file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
cert_path = ""
|
cert_path = ""
|
||||||
|
|
||||||
|
## Private key file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
key_path = ""
|
key_path = ""
|
||||||
|
|
||||||
|
## Watch for certificate and key file changes and reload them automatically.
|
||||||
watch = false
|
watch = false
|
||||||
|
|
||||||
# PostgresSQL server options, see `standalone.example.toml`.
|
## PostgreSQL server options.
|
||||||
[postgres]
|
[postgres]
|
||||||
|
## Whether to enable the PostgreSQL server.
|
||||||
enable = true
|
enable = true
|
||||||
|
## The address to bind the PostgreSQL server.
|
||||||
addr = "127.0.0.1:4003"
|
addr = "127.0.0.1:4003"
|
||||||
|
## The number of server worker threads.
|
||||||
runtime_size = 2
|
runtime_size = 2
|
||||||
|
|
||||||
# PostgresSQL server TLS options, see `standalone.example.toml`.
|
## PostgreSQL server TLS options, see the `mysql.tls` section.
|
||||||
[postgres.tls]
|
[postgres.tls]
|
||||||
|
## TLS mode.
|
||||||
mode = "disable"
|
mode = "disable"
|
||||||
|
|
||||||
|
## Certificate file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
cert_path = ""
|
cert_path = ""
|
||||||
|
|
||||||
|
## Private key file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
key_path = ""
|
key_path = ""
|
||||||
|
|
||||||
|
## Watch for certificate and key file changes and reload them automatically.
|
||||||
watch = false
|
watch = false
|
||||||
|
|
||||||
# OpenTSDB protocol options, see `standalone.example.toml`.
|
## OpenTSDB protocol options.
|
||||||
[opentsdb]
|
[opentsdb]
|
||||||
|
## Whether to enable OpenTSDB put in HTTP API.
|
||||||
enable = true
|
enable = true
|
||||||
addr = "127.0.0.1:4242"
|
|
||||||
runtime_size = 2
|
|
||||||
|
|
||||||
# InfluxDB protocol options, see `standalone.example.toml`.
|
## InfluxDB protocol options.
|
||||||
[influxdb]
|
[influxdb]
|
||||||
|
## Whether to enable InfluxDB protocol in HTTP API.
|
||||||
enable = true
|
enable = true
|
||||||
|
|
||||||
# Prometheus remote storage options, see `standalone.example.toml`.
|
## Prometheus remote storage options
|
||||||
[prom_store]
|
[prom_store]
|
||||||
|
## Whether to enable Prometheus remote write and read in HTTP API.
|
||||||
enable = true
|
enable = true
|
||||||
# Whether to store the data from Prometheus remote write in metric engine.
|
## Whether to store the data from Prometheus remote write in metric engine.
|
||||||
# true by default
|
|
||||||
with_metric_engine = true
|
with_metric_engine = true
|
||||||
|
|
||||||
# Metasrv client options, see `datanode.example.toml`.
|
## The metasrv client options.
|
||||||
[meta_client]
|
[meta_client]
|
||||||
|
## The addresses of the metasrv.
|
||||||
metasrv_addrs = ["127.0.0.1:3002"]
|
metasrv_addrs = ["127.0.0.1:3002"]
|
||||||
|
|
||||||
|
## Operation timeout.
|
||||||
timeout = "3s"
|
timeout = "3s"
|
||||||
# DDL timeouts options.
|
|
||||||
|
## Heartbeat timeout.
|
||||||
|
heartbeat_timeout = "500ms"
|
||||||
|
|
||||||
|
## DDL timeout.
|
||||||
ddl_timeout = "10s"
|
ddl_timeout = "10s"
|
||||||
|
|
||||||
|
## Connect server timeout.
|
||||||
connect_timeout = "1s"
|
connect_timeout = "1s"
|
||||||
|
|
||||||
|
## `TCP_NODELAY` option for accepted connections.
|
||||||
tcp_nodelay = true
|
tcp_nodelay = true
|
||||||
# The configuration about the cache of the Metadata.
|
|
||||||
# default: 100000
|
## The maximum capacity of the metadata cache.
|
||||||
metadata_cache_max_capacity = 100000
|
metadata_cache_max_capacity = 100000
|
||||||
# default: 10m
|
|
||||||
|
## TTL of the metadata cache.
|
||||||
metadata_cache_ttl = "10m"
|
metadata_cache_ttl = "10m"
|
||||||
# default: 5m
|
|
||||||
|
## TTI (time-to-idle) of the metadata cache.
|
||||||
metadata_cache_tti = "5m"
|
metadata_cache_tti = "5m"
|
||||||
|
|
||||||
# Log options, see `standalone.example.toml`
|
## Datanode options.
|
||||||
# [logging]
|
|
||||||
# dir = "/tmp/greptimedb/logs"
|
|
||||||
# level = "info"
|
|
||||||
|
|
||||||
# Datanode options.
|
|
||||||
[datanode]
|
[datanode]
|
||||||
# Datanode client options.
|
## Datanode client options.
|
||||||
[datanode.client]
|
[datanode.client]
|
||||||
timeout = "10s"
|
|
||||||
connect_timeout = "10s"
|
connect_timeout = "10s"
|
||||||
tcp_nodelay = true
|
tcp_nodelay = true
|
||||||
|
|
||||||
# Frontend export the metrics generated by itself
|
## The logging options.
|
||||||
# encoded to Prometheus remote-write format
|
[logging]
|
||||||
# and send to Prometheus remote-write compatible receiver (e.g. send to `greptimedb` itself)
|
## The directory to store the log files. If set to empty, logs will not be written to files.
|
||||||
# This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
|
dir = "/tmp/greptimedb/logs"
|
||||||
# [export_metrics]
|
|
||||||
# whether enable export metrics, default is false
|
## The log level. Can be `info`/`debug`/`warn`/`error`.
|
||||||
# enable = false
|
## @toml2docs:none-default
|
||||||
# The interval of export metrics
|
level = "info"
|
||||||
# write_interval = "30s"
|
|
||||||
# for `frontend`, `self_import` is recommend to collect metrics generated by itself
|
## Enable OTLP tracing.
|
||||||
# [export_metrics.self_import]
|
enable_otlp_tracing = false
|
||||||
# db = "information_schema"
|
|
||||||
|
## The OTLP tracing endpoint.
|
||||||
|
otlp_endpoint = "http://localhost:4317"
|
||||||
|
|
||||||
|
## Whether to append logs to stdout.
|
||||||
|
append_stdout = true
|
||||||
|
|
||||||
|
## The log format. Can be `text`/`json`.
|
||||||
|
log_format = "text"
|
||||||
|
|
||||||
|
## The maximum number of log files.
|
||||||
|
max_log_files = 720
|
||||||
|
|
||||||
|
## The percentage of traces that will be sampled and exported.
|
||||||
|
## Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.
|
||||||
|
## Ratios > 1 are treated as 1, and ratios < 0 are treated as 0.
|
||||||
|
[logging.tracing_sample_ratio]
|
||||||
|
default_ratio = 1.0
|
||||||
|
|
||||||
|
## The slow query log options.
|
||||||
|
[logging.slow_query]
|
||||||
|
## Whether to enable slow query log.
|
||||||
|
enable = false
|
||||||
|
|
||||||
|
## The threshold of slow query.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
threshold = "10s"
|
||||||
|
|
||||||
|
## The sampling ratio of slow query log. The value should be in the range of (0, 1].
|
||||||
|
## @toml2docs:none-default
|
||||||
|
sample_ratio = 1.0
|
||||||
|
|
||||||
|
## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
|
||||||
|
## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
|
||||||
|
[export_metrics]
|
||||||
|
|
||||||
|
## Whether to enable export metrics.
|
||||||
|
enable = false
|
||||||
|
|
||||||
|
## The interval of export metrics.
|
||||||
|
write_interval = "30s"
|
||||||
|
|
||||||
|
## For `standalone` mode, `self_import` is recommended to collect the metrics generated by itself.
|
||||||
|
## You must create the database before enabling it.
|
||||||
|
[export_metrics.self_import]
|
||||||
|
## @toml2docs:none-default
|
||||||
|
db = "greptime_metrics"
|
||||||
|
|
||||||
|
[export_metrics.remote_write]
|
||||||
|
## The URL to send the metrics to. For example: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
|
||||||
|
url = ""
|
||||||
|
|
||||||
|
## The HTTP headers carried by the Prometheus remote-write requests.
|
||||||
|
headers = { }
|
||||||
|
|
||||||
|
## The tracing options. Only effective when compiled with the `tokio-console` feature.
|
||||||
|
#+ [tracing]
|
||||||
|
## The tokio console address.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
#+ tokio_console_addr = "127.0.0.1"
|
||||||
|
|||||||
@@ -1,99 +1,224 @@
|
|||||||
# The working home directory.
|
## The working home directory.
|
||||||
data_home = "/tmp/metasrv/"
|
data_home = "/tmp/metasrv/"
|
||||||
# The bind address of metasrv, "127.0.0.1:3002" by default.
|
|
||||||
|
## The bind address of metasrv.
|
||||||
bind_addr = "127.0.0.1:3002"
|
bind_addr = "127.0.0.1:3002"
|
||||||
# The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost.
|
|
||||||
|
## The communication server address for frontend and datanode to connect to metasrv, "127.0.0.1:3002" by default for localhost.
|
||||||
server_addr = "127.0.0.1:3002"
|
server_addr = "127.0.0.1:3002"
|
||||||
# Etcd server address, "127.0.0.1:2379" by default.
|
|
||||||
|
## The store server address, which defaults to the etcd store.
|
||||||
store_addr = "127.0.0.1:2379"
|
store_addr = "127.0.0.1:2379"
|
||||||
# Datanode selector type.
|
|
||||||
# - "lease_based" (default value).
|
## Datanode selector type.
|
||||||
# - "load_based"
|
## - `round_robin` (default value)
|
||||||
# For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector".
|
## - `lease_based`
|
||||||
selector = "lease_based"
|
## - `load_based`
|
||||||
# Store data in memory, false by default.
|
## For details, please see "https://docs.greptime.com/developer-guide/metasrv/selector".
|
||||||
|
selector = "round_robin"
|
||||||
|
|
||||||
|
## Store data in memory.
|
||||||
use_memory_store = false
|
use_memory_store = false
|
||||||
# Whether to enable greptimedb telemetry, true by default.
|
|
||||||
|
## Whether to enable greptimedb telemetry.
|
||||||
enable_telemetry = true
|
enable_telemetry = true
|
||||||
# If it's not empty, the metasrv will store all data with this key prefix.
|
|
||||||
|
## If it's not empty, the metasrv will store all data with this key prefix.
|
||||||
store_key_prefix = ""
|
store_key_prefix = ""
|
||||||
|
|
||||||
# Log options, see `standalone.example.toml`
|
## Whether to enable region failover.
|
||||||
# [logging]
|
## This feature is only available for GreptimeDB running in cluster mode and requires:
|
||||||
# dir = "/tmp/greptimedb/logs"
|
## - Using Remote WAL
|
||||||
# level = "info"
|
## - Using shared storage (e.g., s3).
|
||||||
|
enable_region_failover = false
|
||||||
|
|
||||||
# Procedure storage options.
|
## The datastore for meta server.
|
||||||
|
backend = "EtcdStore"
|
||||||
|
|
||||||
|
## The runtime options.
|
||||||
|
#+ [runtime]
|
||||||
|
## The number of threads to execute the runtime for global read operations.
|
||||||
|
#+ global_rt_size = 8
|
||||||
|
## The number of threads to execute the runtime for global write operations.
|
||||||
|
#+ compact_rt_size = 4
|
||||||
|
|
||||||
|
## Procedure storage options.
|
||||||
[procedure]
|
[procedure]
|
||||||
# Procedure max retry time.
|
|
||||||
|
## Procedure max retry time.
|
||||||
max_retry_times = 12
|
max_retry_times = 12
|
||||||
# Initial retry delay of procedures, increases exponentially
|
|
||||||
|
## Initial retry delay of procedures, increases exponentially
|
||||||
retry_delay = "500ms"
|
retry_delay = "500ms"
|
||||||
# Auto split large value
|
|
||||||
# GreptimeDB procedure uses etcd as the default metadata storage backend.
|
## Automatically split large values.
|
||||||
# The etcd the maximum size of any request is 1.5 MiB
|
## GreptimeDB procedure uses etcd as the default metadata storage backend.
|
||||||
# 1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key)
|
## The maximum size of any etcd request is 1.5 MiB:
|
||||||
# Comments out the `max_metadata_value_size`, for don't split large value (no limit).
|
## 1500KiB = 1536KiB (1.5MiB) - 36KiB (reserved size of key)
|
||||||
|
## Comment out `max_metadata_value_size` to disable splitting large values (no limit).
|
||||||
max_metadata_value_size = "1500KiB"
|
max_metadata_value_size = "1500KiB"
|
||||||
|
|
||||||
# Failure detectors options.
|
# Failure detector options.
|
||||||
[failure_detector]
|
[failure_detector]
|
||||||
|
|
||||||
|
## The threshold value used by the failure detector to determine failure conditions.
|
||||||
threshold = 8.0
|
threshold = 8.0
|
||||||
|
|
||||||
|
## The minimum standard deviation of the heartbeat intervals, used to calculate acceptable variations.
|
||||||
min_std_deviation = "100ms"
|
min_std_deviation = "100ms"
|
||||||
acceptable_heartbeat_pause = "3000ms"
|
|
||||||
|
## The acceptable pause duration between heartbeats, used to determine if a heartbeat interval is acceptable.
|
||||||
|
acceptable_heartbeat_pause = "10000ms"
|
||||||
|
|
||||||
|
## The initial estimate of the heartbeat interval used by the failure detector.
|
||||||
first_heartbeat_estimate = "1000ms"
|
first_heartbeat_estimate = "1000ms"
|
||||||
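For illustration, a more tolerant failure detector for flaky networks could raise the threshold and the acceptable pause; these numbers are assumptions for the sketch, not tuned recommendations:

#+ [failure_detector]
#+ threshold = 10.0
#+ min_std_deviation = "100ms"
#+ acceptable_heartbeat_pause = "20000ms"
#+ first_heartbeat_estimate = "1000ms"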
|
|
||||||
# # Datanode options.
|
## Datanode options.
|
||||||
# [datanode]
|
[datanode]
|
||||||
# # Datanode client options.
|
|
||||||
# [datanode.client_options]
|
## Datanode client options.
|
||||||
# timeout = "10s"
|
[datanode.client]
|
||||||
# connect_timeout = "10s"
|
|
||||||
# tcp_nodelay = true
|
## Operation timeout.
|
||||||
|
timeout = "10s"
|
||||||
|
|
||||||
|
## Connect server timeout.
|
||||||
|
connect_timeout = "10s"
|
||||||
|
|
||||||
|
## `TCP_NODELAY` option for accepted connections.
|
||||||
|
tcp_nodelay = true
|
||||||
|
|
||||||
[wal]
|
[wal]
|
||||||
# Available wal providers:
|
# Available wal providers:
|
||||||
# - "raft_engine" (default)
|
# - `raft_engine` (default): there is no raft-engine WAL config here, since metasrv currently only participates in remote WAL.
|
||||||
# - "kafka"
|
# - `kafka`: metasrv **has to be** configured with the Kafka WAL config when the datanode uses the Kafka WAL provider.
|
||||||
provider = "raft_engine"
|
provider = "raft_engine"
|
||||||
|
|
||||||
# There're none raft-engine wal config since meta srv only involves in remote wal currently.
|
|
||||||
|
|
||||||
# Kafka wal config.
|
# Kafka wal config.
|
||||||
# The broker endpoints of the Kafka cluster. ["127.0.0.1:9092"] by default.
|
|
||||||
# broker_endpoints = ["127.0.0.1:9092"]
|
|
||||||
# Number of topics to be created upon start.
|
|
||||||
# num_topics = 64
|
|
||||||
# Topic selector type.
|
|
||||||
# Available selector types:
|
|
||||||
# - "round_robin" (default)
|
|
||||||
# selector_type = "round_robin"
|
|
||||||
# A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.
|
|
||||||
# topic_name_prefix = "greptimedb_wal_topic"
|
|
||||||
# Expected number of replicas of each partition.
|
|
||||||
# replication_factor = 1
|
|
||||||
# Above which a topic creation operation will be cancelled.
|
|
||||||
# create_topic_timeout = "30s"
|
|
||||||
# The initial backoff for kafka clients.
|
|
||||||
# backoff_init = "500ms"
|
|
||||||
# The maximum backoff for kafka clients.
|
|
||||||
# backoff_max = "10s"
|
|
||||||
# Exponential backoff rate, i.e. next backoff = base * current backoff.
|
|
||||||
# backoff_base = 2
|
|
||||||
# Stop reconnecting if the total wait time reaches the deadline. If this config is missing, the reconnecting won't terminate.
|
|
||||||
# backoff_deadline = "5mins"
|
|
||||||
|
|
||||||
# Metasrv export the metrics generated by itself
|
## The broker endpoints of the Kafka cluster.
|
||||||
# encoded to Prometheus remote-write format
|
broker_endpoints = ["127.0.0.1:9092"]
|
||||||
# and send to Prometheus remote-write compatible receiver (e.g. send to `greptimedb` itself)
|
|
||||||
# This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
|
## Automatically create topics for WAL.
|
||||||
# [export_metrics]
|
## Set to `true` to automatically create topics for WAL.
|
||||||
# whether enable export metrics, default is false
|
## Otherwise, use topics named `topic_name_prefix_[0..num_topics)`
|
||||||
# enable = false
|
auto_create_topics = true
|
||||||
# The interval of export metrics
|
|
||||||
# write_interval = "30s"
|
## Number of topics.
|
||||||
# [export_metrics.remote_write]
|
num_topics = 64
|
||||||
# The url the metrics send to. The url is empty by default, url example: `http://127.0.0.1:4000/v1/prometheus/write?db=information_schema`
|
|
||||||
# url = ""
|
## Topic selector type.
|
||||||
# HTTP headers of Prometheus remote-write carry
|
## Available selector types:
|
||||||
# headers = {}
|
## - `round_robin` (default)
|
||||||
|
selector_type = "round_robin"
|
||||||
|
|
||||||
|
## A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.
|
||||||
|
## e.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1.
|
||||||
|
topic_name_prefix = "greptimedb_wal_topic"
|
||||||
|
|
||||||
|
## Expected number of replicas of each partition.
|
||||||
|
replication_factor = 1
|
||||||
|
|
||||||
|
## The timeout above which a topic creation operation will be cancelled.
|
||||||
|
create_topic_timeout = "30s"
|
||||||
|
## The initial backoff for kafka clients.
|
||||||
|
backoff_init = "500ms"
|
||||||
|
|
||||||
|
## The maximum backoff for kafka clients.
|
||||||
|
backoff_max = "10s"
|
||||||
|
|
||||||
|
## Exponential backoff rate, i.e. next backoff = base * current backoff.
|
||||||
|
backoff_base = 2
|
||||||
|
|
||||||
|
## Stop reconnecting if the total wait time reaches the deadline. If this config is missing, the reconnecting won't terminate.
|
||||||
|
backoff_deadline = "5mins"
|
||||||
|
|
||||||
|
# The Kafka SASL configuration.
|
||||||
|
# **It's only used when the provider is `kafka`**.
|
||||||
|
# Available SASL mechanisms:
|
||||||
|
# - `PLAIN`
|
||||||
|
# - `SCRAM-SHA-256`
|
||||||
|
# - `SCRAM-SHA-512`
|
||||||
|
# [wal.sasl]
|
||||||
|
# type = "SCRAM-SHA-512"
|
||||||
|
# username = "user_kafka"
|
||||||
|
# password = "secret"
|
||||||
|
|
||||||
|
# The Kafka TLS configuration.
|
||||||
|
# **It's only used when the provider is `kafka`**.
|
||||||
|
# [wal.tls]
|
||||||
|
# server_ca_cert_path = "/path/to/server_cert"
|
||||||
|
# client_cert_path = "/path/to/client_cert"
|
||||||
|
# client_key_path = "/path/to/key"
|
||||||
|
|
||||||
|
## The logging options.
|
||||||
|
[logging]
|
||||||
|
## The directory to store the log files. If set to empty, logs will not be written to files.
|
||||||
|
dir = "/tmp/greptimedb/logs"
|
||||||
|
|
||||||
|
## The log level. Can be `info`/`debug`/`warn`/`error`.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
level = "info"
|
||||||
|
|
||||||
|
## Enable OTLP tracing.
|
||||||
|
enable_otlp_tracing = false
|
||||||
|
|
||||||
|
## The OTLP tracing endpoint.
|
||||||
|
otlp_endpoint = "http://localhost:4317"
|
||||||
|
|
||||||
|
## Whether to append logs to stdout.
|
||||||
|
append_stdout = true
|
||||||
|
|
||||||
|
## The log format. Can be `text`/`json`.
|
||||||
|
log_format = "text"
|
||||||
|
|
||||||
|
## The maximum number of log files.
|
||||||
|
max_log_files = 720
|
||||||
|
|
||||||
|
## The percentage of traces that will be sampled and exported.
|
||||||
|
## Valid range `[0, 1]`, 1 means all traces are sampled, 0 means all traces are not sampled, the default value is 1.
|
||||||
|
## Ratios > 1 are treated as 1, and ratios < 0 are treated as 0.
|
||||||
|
[logging.tracing_sample_ratio]
|
||||||
|
default_ratio = 1.0
|
||||||
|
|
||||||
|
## The slow query log options.
|
||||||
|
[logging.slow_query]
|
||||||
|
## Whether to enable slow query log.
|
||||||
|
enable = false
|
||||||
|
|
||||||
|
## The threshold of slow query.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
threshold = "10s"
|
||||||
|
|
||||||
|
## The sampling ratio of slow query log. The value should be in the range of (0, 1].
|
||||||
|
## @toml2docs:none-default
|
||||||
|
sample_ratio = 1.0
|
||||||
|
|
||||||
|
## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
|
||||||
|
## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
|
||||||
|
[export_metrics]
|
||||||
|
|
||||||
|
## Whether to enable export metrics.
|
||||||
|
enable = false
|
||||||
|
|
||||||
|
## The interval of export metrics.
|
||||||
|
write_interval = "30s"
|
||||||
|
|
||||||
|
## For `standalone` mode, `self_import` is recommended to collect the metrics generated by itself.
|
||||||
|
## You must create the database before enabling it.
|
||||||
|
[export_metrics.self_import]
|
||||||
|
## @toml2docs:none-default
|
||||||
|
db = "greptime_metrics"
|
||||||
|
|
||||||
|
[export_metrics.remote_write]
|
||||||
|
## The URL to send the metrics to. For example: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
|
||||||
|
url = ""
|
||||||
|
|
||||||
|
## The HTTP headers carried by the Prometheus remote-write requests.
|
||||||
|
headers = { }
|
||||||
|
|
||||||
|
## The tracing options. Only effective when compiled with the `tokio-console` feature.
|
||||||
|
#+ [tracing]
|
||||||
|
## The tokio console address.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
#+ tokio_console_addr = "127.0.0.1"
|
||||||
|
|||||||
@@ -1,286 +1,696 @@
|
|||||||
# Node running mode, "standalone" or "distributed".
|
## The running mode of the datanode. It can be `standalone` or `distributed`.
|
||||||
mode = "standalone"
|
mode = "standalone"
|
||||||
# Whether to enable greptimedb telemetry, true by default.
|
|
||||||
enable_telemetry = true
|
|
||||||
# The default timezone of the server
|
|
||||||
# default_timezone = "UTC"
|
|
||||||
|
|
||||||
# HTTP server options.
|
## Enable telemetry to collect anonymous usage data.
|
||||||
|
enable_telemetry = true
|
||||||
|
|
||||||
|
## The default timezone of the server.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
default_timezone = "UTC"
|
||||||
|
|
||||||
|
## Initialize all regions in the background during the startup.
|
||||||
|
## By default, it provides services after all regions have been initialized.
|
||||||
|
init_regions_in_background = false
|
||||||
|
|
||||||
|
## Parallelism of initializing regions.
|
||||||
|
init_regions_parallelism = 16
|
||||||
|
|
||||||
|
## The maximum concurrent queries allowed to be executed. Zero means unlimited.
|
||||||
|
max_concurrent_queries = 0
|
||||||
|
|
||||||
|
## The runtime options.
|
||||||
|
#+ [runtime]
|
||||||
|
## The number of threads to execute the runtime for global read operations.
|
||||||
|
#+ global_rt_size = 8
|
||||||
|
## The number of threads to execute the runtime for global write operations.
|
||||||
|
#+ compact_rt_size = 4
|
||||||
|
|
||||||
|
## The HTTP server options.
|
||||||
[http]
|
[http]
|
||||||
# Server address, "127.0.0.1:4000" by default.
|
## The address to bind the HTTP server.
|
||||||
addr = "127.0.0.1:4000"
|
addr = "127.0.0.1:4000"
|
||||||
# HTTP request timeout, 30s by default.
|
## HTTP request timeout. Set to 0 to disable timeout.
|
||||||
timeout = "30s"
|
timeout = "30s"
|
||||||
# HTTP request body limit, 64Mb by default.
|
## HTTP request body limit.
|
||||||
# the following units are supported: B, KB, KiB, MB, MiB, GB, GiB, TB, TiB, PB, PiB
|
## The following units are supported: `B`, `KB`, `KiB`, `MB`, `MiB`, `GB`, `GiB`, `TB`, `TiB`, `PB`, `PiB`.
|
||||||
|
## Set to 0 to disable limit.
|
||||||
body_limit = "64MB"
|
body_limit = "64MB"
|
||||||
|
|
||||||
# gRPC server options.
|
## The gRPC server options.
|
||||||
[grpc]
|
[grpc]
|
||||||
# Server address, "127.0.0.1:4001" by default.
|
## The address to bind the gRPC server.
|
||||||
addr = "127.0.0.1:4001"
|
addr = "127.0.0.1:4001"
|
||||||
# The number of server worker threads, 8 by default.
|
## The number of server worker threads.
|
||||||
runtime_size = 8
|
runtime_size = 8
|
||||||
|
|
||||||
# MySQL server options.
|
## gRPC server TLS options, see `mysql.tls` section.
|
||||||
|
[grpc.tls]
|
||||||
|
## TLS mode.
|
||||||
|
mode = "disable"
|
||||||
|
|
||||||
|
## Certificate file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
cert_path = ""
|
||||||
|
|
||||||
|
## Private key file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
key_path = ""
|
||||||
|
|
||||||
|
## Watch for certificate and key file changes and reload them automatically.
|
||||||
|
## For now, gRPC tls config does not support auto reload.
|
||||||
|
watch = false
|
||||||
|
|
||||||
|
## MySQL server options.
|
||||||
[mysql]
|
[mysql]
|
||||||
# Whether to enable
|
## Whether to enable the MySQL server.
|
||||||
enable = true
|
enable = true
|
||||||
# Server address, "127.0.0.1:4002" by default.
|
## The address to bind the MySQL server.
|
||||||
addr = "127.0.0.1:4002"
|
addr = "127.0.0.1:4002"
|
||||||
# The number of server worker threads, 2 by default.
|
## The number of server worker threads.
|
||||||
runtime_size = 2
|
runtime_size = 2
|
||||||
|
|
||||||
# MySQL server TLS options.
|
# MySQL server TLS options.
|
||||||
[mysql.tls]
|
[mysql.tls]
|
||||||
# TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
|
|
||||||
# - "disable" (default value)
|
## TLS mode, refer to https://www.postgresql.org/docs/current/libpq-ssl.html
|
||||||
# - "prefer"
|
## - `disable` (default value)
|
||||||
# - "require"
|
## - `prefer`
|
||||||
# - "verify-ca"
|
## - `require`
|
||||||
# - "verify-full"
|
## - `verify-ca`
|
||||||
|
## - `verify-full`
|
||||||
mode = "disable"
|
mode = "disable"
|
||||||
# Certificate file path.
|
|
||||||
|
## Certificate file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
cert_path = ""
|
cert_path = ""
|
||||||
# Private key file path.
|
|
||||||
|
## Private key file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
key_path = ""
|
key_path = ""
|
||||||
# Watch for Certificate and key file change and auto reload
|
|
||||||
|
## Watch for certificate and key file changes and reload them automatically.
|
||||||
watch = false
|
watch = false
|
||||||
|
|
||||||
# PostgresSQL server options.
|
## PostgreSQL server options.
|
||||||
[postgres]
|
[postgres]
|
||||||
# Whether to enable
|
## Whether to enable the PostgreSQL server.
|
||||||
enable = true
|
enable = true
|
||||||
# Server address, "127.0.0.1:4003" by default.
|
## The address to bind the PostgreSQL server.
|
||||||
addr = "127.0.0.1:4003"
|
addr = "127.0.0.1:4003"
|
||||||
# The number of server worker threads, 2 by default.
|
## The number of server worker threads.
|
||||||
runtime_size = 2
|
runtime_size = 2
|
||||||
|
|
||||||
# PostgresSQL server TLS options, see `[mysql_options.tls]` section.
|
## PostgreSQL server TLS options, see the `mysql.tls` section.
|
||||||
[postgres.tls]
|
[postgres.tls]
|
||||||
# TLS mode.
|
## TLS mode.
|
||||||
mode = "disable"
|
mode = "disable"
|
||||||
# certificate file path.
|
|
||||||
|
## Certificate file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
cert_path = ""
|
cert_path = ""
|
||||||
# private key file path.
|
|
||||||
|
## Private key file path.
|
||||||
|
## @toml2docs:none-default
|
||||||
key_path = ""
|
key_path = ""
|
||||||
# Watch for Certificate and key file change and auto reload
|
|
||||||
|
## Watch for certificate and key file changes and reload them automatically.
|
||||||
watch = false
|
watch = false
|
||||||
|
|
||||||
# OpenTSDB protocol options.
|
## OpenTSDB protocol options.
|
||||||
[opentsdb]
|
[opentsdb]
|
||||||
# Whether to enable
|
## Whether to enable OpenTSDB put in HTTP API.
|
||||||
enable = true
|
enable = true
|
||||||
# OpenTSDB telnet API server address, "127.0.0.1:4242" by default.
|
|
||||||
addr = "127.0.0.1:4242"
|
|
||||||
# The number of server worker threads, 2 by default.
|
|
||||||
runtime_size = 2
|
|
||||||
|
|
||||||
# InfluxDB protocol options.
|
## InfluxDB protocol options.
|
||||||
[influxdb]
|
[influxdb]
|
||||||
# Whether to enable InfluxDB protocol in HTTP API, true by default.
|
## Whether to enable InfluxDB protocol in HTTP API.
|
||||||
enable = true
|
enable = true
|
||||||
|
|
||||||
# Prometheus remote storage options
|
## Prometheus remote storage options
|
||||||
[prom_store]
|
[prom_store]
|
||||||
# Whether to enable Prometheus remote write and read in HTTP API, true by default.
|
## Whether to enable Prometheus remote write and read in HTTP API.
|
||||||
enable = true
|
enable = true
|
||||||
# Whether to store the data from Prometheus remote write in metric engine.
|
## Whether to store the data from Prometheus remote write in metric engine.
|
||||||
# true by default
|
|
||||||
with_metric_engine = true
|
with_metric_engine = true
|
||||||
|
|
||||||
|
## The WAL options.
|
||||||
[wal]
|
[wal]
|
||||||
# Available wal providers:
|
## The provider of the WAL.
|
||||||
# - "raft_engine" (default)
|
## - `raft_engine`: the wal is stored in the local file system by raft-engine.
|
||||||
# - "kafka"
|
## - `kafka`: remote WAL; the data is stored in Kafka.
|
||||||
provider = "raft_engine"
|
provider = "raft_engine"
|
||||||
|
|
||||||
# Raft-engine wal options.
|
## The directory to store the WAL files.
|
||||||
# WAL data directory
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
# dir = "/tmp/greptimedb/wal"
|
## @toml2docs:none-default
|
||||||
# WAL file size in bytes.
|
dir = "/tmp/greptimedb/wal"
|
||||||
|
|
||||||
|
## The size of the WAL segment file.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
file_size = "256MB"
|
file_size = "256MB"
|
||||||
# WAL purge threshold.
|
|
||||||
|
## The threshold of the WAL size to trigger a flush.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
purge_threshold = "4GB"
|
purge_threshold = "4GB"
|
||||||
# WAL purge interval in seconds.
|
|
||||||
|
## The interval to trigger a flush.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
purge_interval = "10m"
|
purge_interval = "10m"
|
||||||
# WAL read batch size.
|
|
||||||
|
## The read batch size.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
read_batch_size = 128
|
read_batch_size = 128
|
||||||
# Whether to sync log file after every write.
|
|
||||||
|
## Whether to use sync write.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
sync_write = false
|
sync_write = false
|
||||||
# Whether to reuse logically truncated log files.
|
|
||||||
|
## Whether to reuse logically truncated log files.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
enable_log_recycle = true
|
enable_log_recycle = true
|
||||||
# Whether to pre-create log files on start up
|
|
||||||
|
## Whether to pre-create log files on start up.
|
||||||
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
prefill_log_files = false
|
prefill_log_files = false
|
||||||
# Duration for fsyncing log files.
|
|
||||||
sync_period = "1000ms"
|
|
||||||
|
|
||||||
# Kafka wal options.
|
## Duration for fsyncing log files.
|
||||||
# The broker endpoints of the Kafka cluster. ["127.0.0.1:9092"] by default.
|
## **It's only used when the provider is `raft_engine`**.
|
||||||
# broker_endpoints = ["127.0.0.1:9092"]
|
sync_period = "10s"
|
||||||
|
|
||||||
# Number of topics to be created upon start.
|
## Parallelism during WAL recovery.
|
||||||
# num_topics = 64
|
recovery_parallelism = 2
|
||||||
# Topic selector type.
|
|
||||||
# Available selector types:
|
|
||||||
# - "round_robin" (default)
|
|
||||||
# selector_type = "round_robin"
|
|
||||||
# The prefix of topic name.
|
|
||||||
# topic_name_prefix = "greptimedb_wal_topic"
|
|
||||||
# The number of replicas of each partition.
|
|
||||||
# Warning: the replication factor must be positive and must not be greater than the number of broker endpoints.
|
|
||||||
# replication_factor = 1
|
|
||||||
|
|
||||||
# The max size of a single producer batch.
|
## The Kafka broker endpoints.
|
||||||
# Warning: Kafka has a default limit of 1MB per message in a topic.
|
## **It's only used when the provider is `kafka`**.
|
||||||
# max_batch_size = "1MB"
|
broker_endpoints = ["127.0.0.1:9092"]
|
||||||
# The linger duration.
|
|
||||||
# linger = "200ms"
|
|
||||||
# The consumer wait timeout.
|
|
||||||
# consumer_wait_timeout = "100ms"
|
|
||||||
# Create topic timeout.
|
|
||||||
# create_topic_timeout = "30s"
|
|
||||||
|
|
||||||
# The initial backoff delay.
|
## Automatically create topics for WAL.
|
||||||
# backoff_init = "500ms"
|
## Set to `true` to automatically create topics for WAL.
|
||||||
# The maximum backoff delay.
|
## Otherwise, use topics named `topic_name_prefix_[0..num_topics)`
|
||||||
# backoff_max = "10s"
|
auto_create_topics = true
|
||||||
# Exponential backoff rate, i.e. next backoff = base * current backoff.
|
|
||||||
# backoff_base = 2
|
|
||||||
# The deadline of retries.
|
|
||||||
# backoff_deadline = "5mins"
|
|
||||||
|
|
||||||
# Metadata storage options.
|
## Number of topics.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
num_topics = 64
|
||||||
|
|
||||||
|
## Topic selector type.
|
||||||
|
## Available selector types:
|
||||||
|
## - `round_robin` (default)
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
selector_type = "round_robin"
|
||||||
|
|
||||||
|
## A Kafka topic is constructed by concatenating `topic_name_prefix` and `topic_id`.
|
||||||
|
## e.g., greptimedb_wal_topic_0, greptimedb_wal_topic_1.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
topic_name_prefix = "greptimedb_wal_topic"
|
||||||
|
|
||||||
|
## Expected number of replicas of each partition.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
replication_factor = 1
|
||||||
|
|
||||||
|
## The timeout above which a topic creation operation will be cancelled.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
create_topic_timeout = "30s"
|
||||||
|
|
||||||
|
## The max size of a single producer batch.
|
||||||
|
## Warning: Kafka has a default limit of 1MB per message in a topic.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
max_batch_bytes = "1MB"
|
||||||
|
|
||||||
|
## The consumer wait timeout.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
consumer_wait_timeout = "100ms"
|
||||||
|
|
||||||
|
## The initial backoff delay.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
backoff_init = "500ms"
|
||||||
|
|
||||||
|
## The maximum backoff delay.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
backoff_max = "10s"
|
||||||
|
|
||||||
|
## The exponential backoff rate, i.e. next backoff = base * current backoff.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
backoff_base = 2
|
||||||
|
|
||||||
|
## The deadline of retries.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
backoff_deadline = "5mins"
|
||||||
|
|
||||||
|
## Ignore missing entries when reading the WAL.
|
||||||
|
## **It's only used when the provider is `kafka`**.
|
||||||
|
##
|
||||||
|
## This option ensures that when Kafka messages are deleted, the system
|
||||||
|
## can still successfully replay memtable data without throwing an
|
||||||
|
## out-of-range error.
|
||||||
|
## However, enabling this option might lead to unexpected data loss,
|
||||||
|
## as the system will skip over missing entries instead of treating
|
||||||
|
## them as critical errors.
|
||||||
|
overwrite_entry_start_id = false
|
||||||
|
|
||||||
|
# The Kafka SASL configuration.
|
||||||
|
# **It's only used when the provider is `kafka`**.
|
||||||
|
# Available SASL mechanisms:
|
||||||
|
# - `PLAIN`
|
||||||
|
# - `SCRAM-SHA-256`
|
||||||
|
# - `SCRAM-SHA-512`
|
||||||
|
# [wal.sasl]
|
||||||
|
# type = "SCRAM-SHA-512"
|
||||||
|
# username = "user_kafka"
|
||||||
|
# password = "secret"
|
||||||
|
|
||||||
|
# The Kafka TLS configuration.
|
||||||
|
# **It's only used when the provider is `kafka`**.
|
||||||
|
# [wal.tls]
|
||||||
|
# server_ca_cert_path = "/path/to/server_cert"
|
||||||
|
# client_cert_path = "/path/to/client_cert"
|
||||||
|
# client_key_path = "/path/to/key"
|
||||||
|
|
||||||
|
## Metadata storage options.
|
||||||
[metadata_store]
|
[metadata_store]
|
||||||
# Kv file size in bytes.
|
## Kv file size in bytes.
|
||||||
file_size = "256MB"
|
file_size = "256MB"
|
||||||
# Kv purge threshold.
|
## Kv purge threshold.
|
||||||
purge_threshold = "4GB"
|
purge_threshold = "4GB"
|
||||||
|
|
||||||
# Procedure storage options.
|
## Procedure storage options.
|
||||||
[procedure]
|
[procedure]
|
||||||
# Procedure max retry time.
|
## Procedure max retry time.
|
||||||
max_retry_times = 3
|
max_retry_times = 3
|
||||||
# Initial retry delay of procedures, increases exponentially
|
## Initial retry delay of procedures, increases exponentially
|
||||||
retry_delay = "500ms"
|
retry_delay = "500ms"
|
||||||
|
|
||||||
# Storage options.
|
# Example of using S3 as the storage.
|
||||||
|
# [storage]
|
||||||
|
# type = "S3"
|
||||||
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# access_key_id = "test"
|
||||||
|
# secret_access_key = "123456"
|
||||||
|
# endpoint = "https://s3.amazonaws.com"
|
||||||
|
# region = "us-west-2"
|
||||||
|
|
||||||
|
# Example of using Oss as the storage.
|
||||||
|
# [storage]
|
||||||
|
# type = "Oss"
|
||||||
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# access_key_id = "test"
|
||||||
|
# access_key_secret = "123456"
|
||||||
|
# endpoint = "https://oss-cn-hangzhou.aliyuncs.com"
|
||||||
|
|
||||||
|
# Example of using Azblob as the storage.
|
||||||
|
# [storage]
|
||||||
|
# type = "Azblob"
|
||||||
|
# container = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# account_name = "test"
|
||||||
|
# account_key = "123456"
|
||||||
|
# endpoint = "https://greptimedb.blob.core.windows.net"
|
||||||
|
# sas_token = ""
|
||||||
|
|
||||||
|
# Example of using Gcs as the storage.
|
||||||
|
# [storage]
|
||||||
|
# type = "Gcs"
|
||||||
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# scope = "test"
|
||||||
|
# credential_path = "123456"
|
||||||
|
# credential = "base64-credential"
|
||||||
|
# endpoint = "https://storage.googleapis.com"
|
||||||
|
|
||||||
|
## The data storage options.
|
||||||
[storage]
|
[storage]
|
||||||
# The working home directory.
|
## The working home directory.
|
||||||
data_home = "/tmp/greptimedb/"
|
data_home = "/tmp/greptimedb/"
|
||||||
# Storage type.
|
|
||||||
|
## The storage type used to store the data.
|
||||||
|
## - `File`: the data is stored in the local file system.
|
||||||
|
## - `S3`: the data is stored in the S3 object storage.
|
||||||
|
## - `Gcs`: the data is stored in the Google Cloud Storage.
|
||||||
|
## - `Azblob`: the data is stored in the Azure Blob Storage.
|
||||||
|
## - `Oss`: the data is stored in the Aliyun OSS.
|
||||||
type = "File"
|
type = "File"
|
||||||
# TTL for all tables. Disabled by default.
|
|
||||||
# global_ttl = "7d"
|
## Cache configuration for object storage such as 'S3' etc.
|
||||||
# Cache configuration for object storage such as 'S3' etc.
|
## The local file cache directory.
|
||||||
# cache_path = "/path/local_cache"
|
## @toml2docs:none-default
|
||||||
# The local file cache capacity in bytes.
|
cache_path = "/path/local_cache"
|
||||||
# cache_capacity = "256MB"
|
|
||||||
|
## The local file cache capacity in bytes.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
cache_capacity = "256MB"
|
||||||
|
|
||||||
|
## The S3 bucket name.
|
||||||
|
## **It's only used when the storage type is `S3`, `Oss` and `Gcs`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
bucket = "greptimedb"
|
||||||
|
|
||||||
|
## The S3 data will be stored in the specified prefix, for example, `s3://${bucket}/${root}`.
|
||||||
|
## **It's only used when the storage type is `S3`, `Oss` and `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
root = "greptimedb"
|
||||||
|
|
||||||
|
## The access key id of the aws account.
|
||||||
|
## It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.
|
||||||
|
## **It's only used when the storage type is `S3` and `Oss`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
access_key_id = "test"
|
||||||
|
|
||||||
|
## The secret access key of the aws account.
|
||||||
|
## It's **highly recommended** to use AWS IAM roles instead of hardcoding the access key id and secret key.
|
||||||
|
## **It's only used when the storage type is `S3`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
secret_access_key = "test"
|
||||||
|
|
||||||
|
## The secret access key of the aliyun account.
|
||||||
|
## **It's only used when the storage type is `Oss`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
access_key_secret = "test"
|
||||||
|
|
||||||
|
## The account name of the azure account.
|
||||||
|
## **It's only used when the storage type is `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
account_name = "test"
|
||||||
|
|
||||||
|
## The account key of the azure account.
|
||||||
|
## **It's only used when the storage type is `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
account_key = "test"
|
||||||
|
|
||||||
|
## The scope of the google cloud storage.
|
||||||
|
## **It's only used when the storage type is `Gcs`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
scope = "test"
|
||||||
|
|
||||||
|
## The credential path of the google cloud storage.
|
||||||
|
## **It's only used when the storage type is `Gcs`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
credential_path = "test"
|
||||||
|
|
||||||
|
## The credential of the google cloud storage.
|
||||||
|
## **It's only used when the storage type is `Gcs`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
credential = "base64-credential"
|
||||||
|
|
||||||
|
## The container of the azure account.
|
||||||
|
## **It's only used when the storage type is `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
container = "greptimedb"
|
||||||
|
|
||||||
|
## The sas token of the azure account.
|
||||||
|
## **It's only used when the storage type is `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
sas_token = ""
|
||||||
|
|
||||||
|
## The endpoint of the S3 service.
|
||||||
|
## **It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
endpoint = "https://s3.amazonaws.com"
|
||||||
|
|
||||||
|
## The region of the S3 service.
|
||||||
|
## **It's only used when the storage type is `S3`, `Oss`, `Gcs` and `Azblob`**.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
region = "us-west-2"
|
||||||
|
|
||||||
# Custom storage options
|
# Custom storage options
|
||||||
#[[storage.providers]]
|
# [[storage.providers]]
|
||||||
#type = "S3"
|
# name = "S3"
|
||||||
#[[storage.providers]]
|
# type = "S3"
|
||||||
#type = "Gcs"
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# access_key_id = "test"
|
||||||
|
# secret_access_key = "123456"
|
||||||
|
# endpoint = "https://s3.amazonaws.com"
|
||||||
|
# region = "us-west-2"
|
||||||
|
# [[storage.providers]]
|
||||||
|
# name = "Gcs"
|
||||||
|
# type = "Gcs"
|
||||||
|
# bucket = "greptimedb"
|
||||||
|
# root = "data"
|
||||||
|
# scope = "test"
|
||||||
|
# credential_path = "123456"
|
||||||
|
# credential = "base64-credential"
|
||||||
|
# endpoint = "https://storage.googleapis.com"
|
||||||
|
|
||||||
# Mito engine options
|
## The region engine options. You can configure multiple region engines.
|
||||||
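Since `[[region_engine]]` is an array of tables, more than one engine can be listed; a sketch combining the Mito engine configured below with the file engine (as seen in the datanode example) would look like:

#+ [[region_engine]]
#+ [region_engine.mito]
#+ [[region_engine]]
#+ [region_engine.file]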
[[region_engine]]
|
[[region_engine]]
|
||||||
|
|
||||||
|
## The Mito engine options.
|
||||||
[region_engine.mito]
|
[region_engine.mito]
|
||||||
# Number of region workers
|
|
||||||
num_workers = 8
|
## Number of region workers.
|
||||||
# Request channel size of each worker
|
#+ num_workers = 8
|
||||||
|
|
||||||
|
## Request channel size of each worker.
|
||||||
worker_channel_size = 128
|
worker_channel_size = 128
|
||||||
# Max batch size for a worker to handle requests
|
|
||||||
|
## Max batch size for a worker to handle requests.
|
||||||
worker_request_batch_size = 64
|
worker_request_batch_size = 64
|
||||||
# Number of meta action updated to trigger a new checkpoint for the manifest
|
|
||||||
|
## Number of meta action updated to trigger a new checkpoint for the manifest.
|
||||||
manifest_checkpoint_distance = 10
|
manifest_checkpoint_distance = 10
|
||||||
# Whether to compress manifest and checkpoint file by gzip (default false).
|
|
||||||
|
## Whether to compress manifest and checkpoint file by gzip (default false).
|
||||||
compress_manifest = false
|
compress_manifest = false
|
||||||
# Max number of running background jobs
|
|
||||||
max_background_jobs = 4
|
## Max number of running background flush jobs (default: 1/2 of cpu cores).
|
||||||
# Interval to auto flush a region if it has not flushed yet.
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ max_background_flushes = 4
|
||||||
|
|
||||||
|
## Max number of running background compaction jobs (default: 1/4 of cpu cores).
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ max_background_compactions = 2
|
||||||
|
|
||||||
|
## Max number of running background purge jobs (default: number of cpu cores).
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ max_background_purges = 8
|
||||||
|
|
||||||
|
## Interval to auto flush a region if it has not flushed yet.
|
||||||
auto_flush_interval = "1h"
|
auto_flush_interval = "1h"
|
||||||
# Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB.
|
|
||||||
global_write_buffer_size = "1GB"
|
## Global write buffer size for all regions. If not set, it's default to 1/8 of OS memory with a max limitation of 1GB.
|
||||||
# Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size`
|
## @toml2docs:none-default="Auto"
|
||||||
global_write_buffer_reject_size = "2GB"
|
#+ global_write_buffer_size = "1GB"
|
||||||
# Cache size for SST metadata. Setting it to 0 to disable the cache.
|
|
||||||
# If not set, it's default to 1/32 of OS memory with a max limitation of 128MB.
|
## Global write buffer size threshold to reject write requests. If not set, it's default to 2 times of `global_write_buffer_size`.
|
||||||
sst_meta_cache_size = "128MB"
|
## @toml2docs:none-default="Auto"
|
||||||
# Cache size for vectors and arrow arrays. Setting it to 0 to disable the cache.
|
#+ global_write_buffer_reject_size = "2GB"
|
||||||
# If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
|
|
||||||
vector_cache_size = "512MB"
|
## Cache size for SST metadata. Setting it to 0 to disable the cache.
|
||||||
# Cache size for pages of SST row groups. Setting it to 0 to disable the cache.
|
## If not set, it's default to 1/32 of OS memory with a max limitation of 128MB.
|
||||||
# If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
|
## @toml2docs:none-default="Auto"
|
||||||
page_cache_size = "512MB"
|
#+ sst_meta_cache_size = "128MB"
|
||||||
# Buffer size for SST writing.
|
|
||||||
|
## Cache size for vectors and arrow arrays. Setting it to 0 to disable the cache.
|
||||||
|
## If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ vector_cache_size = "512MB"
|
||||||
|
|
||||||
|
## Cache size for pages of SST row groups. Setting it to 0 to disable the cache.
|
||||||
|
## If not set, it's default to 1/8 of OS memory.
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ page_cache_size = "512MB"
|
||||||
|
|
||||||
|
## Cache size for time series selector (e.g. `last_value()`). Setting it to 0 to disable the cache.
|
||||||
|
## If not set, it's default to 1/16 of OS memory with a max limitation of 512MB.
|
||||||
|
## @toml2docs:none-default="Auto"
|
||||||
|
#+ selector_result_cache_size = "512MB"
|
||||||
|
|
||||||
|
## Whether to enable the experimental write cache.
|
||||||
|
enable_experimental_write_cache = false
|
||||||
|
|
||||||
|
## File system path for write cache, defaults to `{data_home}/write_cache`.
|
||||||
|
experimental_write_cache_path = ""
|
||||||
|
|
||||||
|
## Capacity for write cache.
|
||||||
|
experimental_write_cache_size = "512MB"
|
||||||
|
|
||||||
|
## TTL for write cache.
|
||||||
|
## @toml2docs:none-default
|
||||||
|
experimental_write_cache_ttl = "8h"
|
||||||
|
|
||||||
|
## Buffer size for SST writing.
|
||||||
sst_write_buffer_size = "8MB"
|
sst_write_buffer_size = "8MB"
|
||||||
# Parallelism to scan a region (default: 1/4 of cpu cores).
|
|
||||||
# - 0: using the default value (1/4 of cpu cores).
|
## Parallelism to scan a region (default: 1/4 of cpu cores).
|
||||||
# - 1: scan in current thread.
|
## - `0`: using the default value (1/4 of cpu cores).
|
||||||
# - n: scan in parallelism n.
|
## - `1`: scan in current thread.
|
||||||
|
## - `n`: scan in parallelism n.
|
||||||
scan_parallelism = 0
|
scan_parallelism = 0
|
||||||
# Capacity of the channel to send data from parallel scan tasks to the main task (default 32).
|
|
||||||
|
## Capacity of the channel to send data from parallel scan tasks to the main task.
|
||||||
parallel_scan_channel_size = 32
|
parallel_scan_channel_size = 32
|
||||||
# Whether to allow stale WAL entries read during replay.
|
|
||||||
|
## Whether to allow stale WAL entries read during replay.
|
||||||
allow_stale_entries = false
|
allow_stale_entries = false
|
||||||
|
|
||||||
|
## Minimum time interval between two compactions.
|
||||||
|
## To align with the old behavior, the default value is 0 (no restrictions).
|
||||||
|
min_compaction_interval = "0m"
|
||||||
|
|
||||||
|
## The options for index in Mito engine.
|
||||||
|
[region_engine.mito.index]
|
||||||
|
|
||||||
|
## Auxiliary directory path for the index in filesystem, used to store intermediate files for
|
||||||
|
## creating the index and staging files for searching the index, defaults to `{data_home}/index_intermediate`.
|
||||||
|
## The default name for this directory is `index_intermediate` for backward compatibility.
|
||||||
|
##
|
||||||
|
## This path contains two subdirectories:
|
||||||
|
## - `__intm`: for storing intermediate files used during creating index.
|
||||||
|
## - `staging`: for storing staging files used during searching index.
|
||||||
|
aux_path = ""
|
||||||
|
|
||||||
|
## The max capacity of the staging directory.
|
||||||
|
staging_size = "2GB"
|
||||||
|
|
||||||
|
## The options for inverted index in Mito engine.
|
||||||
[region_engine.mito.inverted_index]
|
[region_engine.mito.inverted_index]
|
||||||
# Whether to create the index on flush.
|
|
||||||
# - "auto": automatically
|
## Whether to create the index on flush.
|
||||||
# - "disable": never
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
create_on_flush = "auto"
|
create_on_flush = "auto"
|
||||||
# Whether to create the index on compaction.
|
|
||||||
# - "auto": automatically
|
## Whether to create the index on compaction.
|
||||||
# - "disable": never
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
create_on_compaction = "auto"
|
create_on_compaction = "auto"
|
||||||
# Whether to apply the index on query
|
|
||||||
# - "auto": automatically
|
## Whether to apply the index on query
|
||||||
# - "disable": never
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
apply_on_query = "auto"
|
apply_on_query = "auto"
|
||||||
# Memory threshold for performing an external sort during index creation.
|
|
||||||
# Setting to empty will disable external sorting, forcing all sorting operations to happen in memory.
|
## Memory threshold for performing an external sort during index creation.
|
||||||
mem_threshold_on_create = "64M"
|
## - `auto`: automatically determine the threshold based on the system memory size (default)
|
||||||
# File system path to store intermediate files for external sorting (default `{data_home}/index_intermediate`).
|
## - `unlimited`: no memory limit
|
||||||
|
## - `[size]` e.g. `64MB`: fixed memory threshold
|
||||||
|
mem_threshold_on_create = "auto"
|
||||||
|
|
||||||
|
## Deprecated, use `region_engine.mito.index.aux_path` instead.
|
||||||
intermediate_path = ""
|
intermediate_path = ""
|
||||||
|
|
||||||
|
## Cache size for inverted index metadata.
|
||||||
|
metadata_cache_size = "64MiB"
|
||||||
|
|
||||||
|
## Cache size for inverted index content.
|
||||||
|
content_cache_size = "128MiB"
|
||||||
|
|
||||||
|
## The options for full-text index in Mito engine.
|
||||||
|
[region_engine.mito.fulltext_index]
|
||||||
|
|
||||||
|
## Whether to create the index on flush.
|
||||||
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
|
create_on_flush = "auto"
|
||||||
|
|
||||||
|
## Whether to create the index on compaction.
|
||||||
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
|
create_on_compaction = "auto"
|
||||||
|
|
||||||
|
## Whether to apply the index on query
|
||||||
|
## - `auto`: automatically (default)
|
||||||
|
## - `disable`: never
|
||||||
|
apply_on_query = "auto"
|
||||||
|
|
||||||
|
## Memory threshold for index creation.
|
||||||
|
## - `auto`: automatically determine the threshold based on the system memory size (default)
|
||||||
|
## - `unlimited`: no memory limit
|
||||||
|
## - `[size]` e.g. `64MB`: fixed memory threshold
|
||||||
|
mem_threshold_on_create = "auto"
|
||||||
|
|
||||||
[region_engine.mito.memtable]
|
[region_engine.mito.memtable]
|
||||||
# Memtable type.
|
## Memtable type.
|
||||||
# - "partition_tree": partition tree memtable
|
## - `time_series`: time-series memtable
|
||||||
# - "time_series": time-series memtable (deprecated)
|
## - `partition_tree`: partition tree memtable (experimental)
|
||||||
type = "partition_tree"
|
type = "time_series"
|
||||||
# The max number of keys in one shard.
|
|
||||||
|
## The max number of keys in one shard.
|
||||||
|
## Only available for `partition_tree` memtable.
|
||||||
index_max_keys_per_shard = 8192
|
index_max_keys_per_shard = 8192
|
||||||
# The max rows of data inside the actively writing buffer in one shard.
|
|
||||||
|
## The max rows of data inside the actively writing buffer in one shard.
|
||||||
|
## Only available for `partition_tree` memtable.
|
||||||
data_freeze_threshold = 32768
|
data_freeze_threshold = 32768
|
||||||
# Max dictionary bytes.
|
|
||||||
|
## Max dictionary bytes.
|
||||||
|
## Only available for `partition_tree` memtable.
|
||||||
fork_dictionary_bytes = "1GiB"
|
fork_dictionary_bytes = "1GiB"
|
||||||
|
|
||||||
[[region_engine]]

## Enable the file engine.
[region_engine.file]

## The logging options.
[logging]
## The directory to store the log files. If set to empty, logs will not be written to files.
dir = "/tmp/greptimedb/logs"

## The log level. Can be `info`/`debug`/`warn`/`error`.
## @toml2docs:none-default
level = "info"

## Enable OTLP tracing.
enable_otlp_tracing = false

## The OTLP tracing endpoint.
otlp_endpoint = "http://localhost:4317"

## Whether to append logs to stdout.
append_stdout = true

## The log format. Can be `text`/`json`.
log_format = "text"

## The maximum amount of log files.
max_log_files = 720

## The percentage of tracing that will be sampled and exported.
## Valid range `[0, 1]`: 1 means all traces are sampled, 0 means no traces are sampled; the default value is 1.
## Ratios > 1 are treated as 1, fractions < 0 are treated as 0.
[logging.tracing_sample_ratio]
default_ratio = 1.0

## The slow query log options.
[logging.slow_query]
## Whether to enable slow query log.
enable = false

## The threshold of slow query.
## @toml2docs:none-default
threshold = "10s"

## The sampling ratio of slow query log. The value should be in the range of (0, 1].
## @toml2docs:none-default
sample_ratio = 1.0

## The datanode can export its metrics and send them to a Prometheus-compatible service (e.g. `greptimedb` itself) via the remote-write API.
## This is only used for `greptimedb` to export its own metrics internally. It's different from prometheus scrape.
[export_metrics]

## Whether to enable export metrics.
enable = false

## The interval of export metrics.
write_interval = "30s"

## For `standalone` mode, `self_import` is recommended to collect metrics generated by itself.
## You must create the database before enabling it.
[export_metrics.self_import]
## @toml2docs:none-default
db = "greptime_metrics"

[export_metrics.remote_write]
## The URL to send the metrics to. An example URL: `http://127.0.0.1:4000/v1/prometheus/write?db=greptime_metrics`.
url = ""

## HTTP headers carried by the Prometheus remote-write requests.
headers = { }

## The tracing options. Only effective when compiled with the `tokio-console` feature.
#+ [tracing]
## The tokio console address.
## @toml2docs:none-default
#+ tokio_console_addr = "127.0.0.1"
2  cyborg/.gitignore  vendored  Normal file
@@ -0,0 +1,2 @@
node_modules
.env
79  cyborg/bin/check-pull-request.ts  Normal file
@@ -0,0 +1,79 @@
/*
 * Copyright 2023 Greptime Team
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import * as core from '@actions/core'
import {handleError, obtainClient} from "@/common";
import {context} from "@actions/github";
import {PullRequestEvent} from "@octokit/webhooks-types";
import {Options, sync as conventionalCommitsParser} from 'conventional-commits-parser';
import conventionalCommitTypes from 'conventional-commit-types';
import _ from "lodash";

const defaultTypes = Object.keys(conventionalCommitTypes.types)
const breakingChangeLabel = "breaking-change"

// These options are copied from [1].
// [1] https://github.com/conventional-changelog/conventional-changelog/blob/3f60b464/packages/conventional-changelog-conventionalcommits/src/parser.js
export const parserOpts: Options = {
    headerPattern: /^(\w*)(?:\((.*)\))?!?: (.*)$/,
    breakingHeaderPattern: /^(\w*)(?:\((.*)\))?!: (.*)$/,
    headerCorrespondence: [
        'type',
        'scope',
        'subject'
    ],
    noteKeywords: ['BREAKING CHANGE', 'BREAKING-CHANGE'],
    revertPattern: /^(?:Revert|revert:)\s"?([\s\S]+?)"?\s*This reverts commit (\w*)\./i,
    revertCorrespondence: ['header', 'hash'],
    issuePrefixes: ['#']
}

async function main() {
    if (!context.payload.pull_request) {
        throw new Error(`Only pull request event supported. ${context.eventName} is unsupported.`)
    }

    const client = obtainClient("GITHUB_TOKEN")
    const payload = context.payload as PullRequestEvent
    const { owner, repo, number } = {
        owner: payload.pull_request.base.user.login,
        repo: payload.pull_request.base.repo.name,
        number: payload.pull_request.number,
    }
    const { data: pull_request } = await client.rest.pulls.get({
        owner, repo, pull_number: number,
    })

    const commit = conventionalCommitsParser(pull_request.title, parserOpts)
    core.info(`Receive commit: ${JSON.stringify(commit)}`)

    if (!commit.type) {
        throw Error(`Malformed commit: ${JSON.stringify(commit)}`)
    }

    if (!defaultTypes.includes(commit.type)) {
        throw Error(`Unexpected type ${JSON.stringify(commit.type)} of commit: ${JSON.stringify(commit)}`)
    }

    const breakingChanges = _.filter(commit.notes, _.matches({ title: 'BREAKING CHANGE'}))
    if (breakingChanges.length > 0) {
        await client.rest.issues.addLabels({
            owner, repo, issue_number: number, labels: [breakingChangeLabel]
        })
    }
}

main().catch(handleError)
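For illustration only (not part of the diff), a minimal sketch of how the `parserOpts` above split a PR title; the title string and the local import path are invented:

import {sync as conventionalCommitsParser} from 'conventional-commits-parser';
import {parserOpts} from './check-pull-request'; // hypothetical import path

// 'feat(mito)!: switch default memtable' is a made-up example title.
const commit = conventionalCommitsParser('feat(mito)!: switch default memtable', parserOpts)
// With the header pattern above, the expected fields are:
// commit.type === 'feat', commit.scope === 'mito', commit.subject === 'switch default memtable'
console.log(commit.type, commit.scope, commit.subject)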
106  cyborg/bin/follow-up-docs-issue.ts  Normal file
@@ -0,0 +1,106 @@
/*
 * Copyright 2023 Greptime Team
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import * as core from '@actions/core'
import {handleError, obtainClient} from "@/common";
import {context} from "@actions/github";
import {PullRequestEditedEvent, PullRequestEvent, PullRequestOpenedEvent} from "@octokit/webhooks-types";
// @ts-expect-error moduleResolution:nodenext issue 54523
import {RequestError} from "@octokit/request-error";

const needFollowUpDocs = "[x] This PR requires documentation updates."
const labelDocsNotRequired = "docs-not-required"
const labelDocsRequired = "docs-required"

async function main() {
    if (!context.payload.pull_request) {
        throw new Error(`Only pull request event supported. ${context.eventName} is unsupported.`)
    }

    const client = obtainClient("GITHUB_TOKEN")
    const docsClient = obtainClient("DOCS_REPO_TOKEN")
    const payload = context.payload as PullRequestEvent
    const { owner, repo, number, actor, title, html_url } = {
        owner: payload.pull_request.base.user.login,
        repo: payload.pull_request.base.repo.name,
        number: payload.pull_request.number,
        title: payload.pull_request.title,
        html_url: payload.pull_request.html_url,
        actor: payload.pull_request.user.login,
    }
    const followUpDocs = checkPullRequestEvent(payload)
    if (followUpDocs) {
        core.info("Follow up docs.")
        await client.rest.issues.removeLabel({
            owner, repo, issue_number: number, name: labelDocsNotRequired,
        }).catch((e: RequestError) => {
            if (e.status != 404) {
                throw e;
            }
            core.debug(`Label ${labelDocsNotRequired} not exist.`)
        })
        await client.rest.issues.addLabels({
            owner, repo, issue_number: number, labels: [labelDocsRequired],
        })
        await docsClient.rest.issues.create({
            owner: 'GreptimeTeam',
            repo: 'docs',
            title: `Update docs for ${title}`,
            body: `A document change request is generated from ${html_url}`,
            assignee: actor,
        }).then((res) => {
            core.info(`Created issue ${res.data}`)
        })
    } else {
        core.info("No need to follow up docs.")
        await client.rest.issues.removeLabel({
            owner, repo, issue_number: number, name: labelDocsRequired
        }).catch((e: RequestError) => {
            if (e.status != 404) {
                throw e;
            }
            core.debug(`Label ${labelDocsRequired} not exist.`)
        })
        await client.rest.issues.addLabels({
            owner, repo, issue_number: number, labels: [labelDocsNotRequired],
        })
    }
}

function checkPullRequestEvent(payload: PullRequestEvent) {
    switch (payload.action) {
        case "opened":
            return checkPullRequestOpenedEvent(payload as PullRequestOpenedEvent)
        case "edited":
            return checkPullRequestEditedEvent(payload as PullRequestEditedEvent)
        default:
            throw new Error(`${payload.action} is unsupported.`)
    }
}

function checkPullRequestOpenedEvent(event: PullRequestOpenedEvent): boolean {
    // @ts-ignore
    return event.pull_request.body?.includes(needFollowUpDocs)
}

function checkPullRequestEditedEvent(event: PullRequestEditedEvent): boolean {
    const previous = event.changes.body?.from.includes(needFollowUpDocs)
    const current = event.pull_request.body?.includes(needFollowUpDocs)
    // from docs-not-need to docs-required
    return (!previous) && current
}

main().catch(handleError)
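For illustration only (not part of the diff), a minimal sketch of the transition `checkPullRequestEditedEvent` looks for; both body strings are invented:

const needFollowUpDocs = "[x] This PR requires documentation updates."
const previousBody = "- [ ] This PR requires documentation updates."  // checkbox unchecked before the edit
const currentBody = "- [x] This PR requires documentation updates."   // checkbox checked after the edit

const previous = previousBody.includes(needFollowUpDocs) // false
const current = currentBody.includes(needFollowUpDocs)   // true
console.log((!previous) && current)                      // true -> a docs follow-up issue is filed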
83  cyborg/bin/report-ci-failure.ts  Normal file
@@ -0,0 +1,83 @@
/*
 * Copyright 2023 Greptime Team
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import * as core from '@actions/core'
import {handleError, obtainClient} from "@/common"
import {context} from "@actions/github"
import _ from "lodash"

async function main() {
    const success = process.env["CI_REPORT_STATUS"] === "true"
    core.info(`CI_REPORT_STATUS=${process.env["CI_REPORT_STATUS"]}, resolved to ${success}`)

    const client = obtainClient("GITHUB_TOKEN")
    const title = `Workflow run '${context.workflow}' failed`
    const url = `${process.env["GITHUB_SERVER_URL"]}/${process.env["GITHUB_REPOSITORY"]}/actions/runs/${process.env["GITHUB_RUN_ID"]}`
    const failure_comment = `@GreptimeTeam/db-approver\nNew failure: ${url} `
    const success_comment = `@GreptimeTeam/db-approver\nBack to success: ${url}`

    const {owner, repo} = context.repo
    const labels = ['O-ci-failure']

    const issues = await client.paginate(client.rest.issues.listForRepo, {
        owner,
        repo,
        labels: labels.join(','),
        state: "open",
        sort: "created",
        direction: "desc",
    });
    const issue = _.find(issues, (i) => i.title === title);

    if (issue) { // exist issue
        core.info(`Found previous issue ${issue.html_url}`)
        if (!success) {
            await client.rest.issues.createComment({
                owner,
                repo,
                issue_number: issue.number,
                body: failure_comment,
            })
        } else {
            await client.rest.issues.createComment({
                owner,
                repo,
                issue_number: issue.number,
                body: success_comment,
            })
            await client.rest.issues.update({
                owner,
                repo,
                issue_number: issue.number,
                state: "closed",
                state_reason: "completed",
            })
        }
        core.setOutput("html_url", issue.html_url)
    } else if (!success) { // create new issue for failure
        const issue = await client.rest.issues.create({
            owner,
            repo,
            title,
            labels,
            body: failure_comment,
        })
        core.info(`Created issue ${issue.data.html_url}`)
        core.setOutput("html_url", issue.data.html_url)
    }
}

main().catch(handleError)
73  cyborg/bin/schedule.ts  Normal file
@@ -0,0 +1,73 @@
/*
 * Copyright 2023 Greptime Team
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import * as core from '@actions/core'
import {GitHub} from "@actions/github/lib/utils"
import _ from "lodash";
import dayjs from "dayjs";
import {handleError, obtainClient} from "@/common";

async function main() {
    const client = obtainClient("GITHUB_TOKEN")
    await unassign(client)
}

async function unassign(client: InstanceType<typeof GitHub>) {
    const owner = "GreptimeTeam"
    const repo = "greptimedb"

    const dt = dayjs().subtract(14, 'days');
    core.info(`Open issues updated before ${dt.toISOString()} will be considered stale.`)

    const members = await client.paginate(client.rest.repos.listCollaborators, {
        owner,
        repo,
        permission: "push",
        per_page: 100
    }).then((members) => members.map((member) => member.login))
    core.info(`Members (${members.length}): ${members}`)

    const issues = await client.paginate(client.rest.issues.listForRepo, {
        owner,
        repo,
        state: "open",
        sort: "created",
        direction: "asc",
        per_page: 100
    })
    for (const issue of issues) {
        let assignees = [];
        if (issue.assignee) {
            assignees.push(issue.assignee.login)
        }
        for (const assignee of issue.assignees) {
            assignees.push(assignee.login)
        }
        assignees = _.uniq(assignees)
        assignees = _.difference(assignees, members)
        if (assignees.length > 0 && dayjs(issue.updated_at).isBefore(dt)) {
            core.info(`Assignees ${assignees} of issue ${issue.number} will be unassigned.`)
            await client.rest.issues.removeAssignees({
                owner,
                repo,
                issue_number: issue.number,
                assignees: assignees,
            })
        }
    }
}

main().catch(handleError)
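For illustration only (not part of the diff), a minimal sketch of the 14-day staleness cutoff used by `unassign` above; the 20-day figure is an invented example:

import dayjs from "dayjs";

const cutoff = dayjs().subtract(14, 'days');     // same cutoff as in schedule.ts
const updatedAt = dayjs().subtract(20, 'days');  // an issue last updated 20 days ago
console.log(updatedAt.isBefore(cutoff));         // true -> non-member assignees would be removed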
26  cyborg/package.json  Normal file
@@ -0,0 +1,26 @@
{
  "name": "cyborg",
  "version": "1.0.0",
  "description": "Automator for GreptimeDB Repository Management",
  "private": true,
  "packageManager": "pnpm@8.15.5",
  "dependencies": {
    "@actions/core": "^1.10.1",
    "@actions/github": "^6.0.0",
    "@octokit/request-error": "^6.1.1",
    "@octokit/webhooks-types": "^7.5.1",
    "conventional-commit-types": "^3.0.0",
    "conventional-commits-parser": "^5.0.0",
    "dayjs": "^1.11.11",
    "dotenv": "^16.4.5",
    "lodash": "^4.17.21"
  },
  "devDependencies": {
    "@types/conventional-commits-parser": "^5.0.0",
    "@types/lodash": "^4.17.0",
    "@types/node": "^20.12.7",
    "tsconfig-paths": "^4.2.0",
    "tsx": "^4.8.2",
    "typescript": "^5.4.5"
  }
}
612  cyborg/pnpm-lock.yaml  generated  Normal file
@@ -0,0 +1,612 @@
lockfileVersion: '6.0'

settings:
  autoInstallPeers: true
  excludeLinksFromLockfile: false

dependencies:
  '@actions/core':
    specifier: ^1.10.1
    version: 1.10.1
  '@actions/github':
    specifier: ^6.0.0
    version: 6.0.0
  '@octokit/request-error':
    specifier: ^6.1.1
    version: 6.1.1
  '@octokit/webhooks-types':
    specifier: ^7.5.1
    version: 7.5.1
  conventional-commit-types:
    specifier: ^3.0.0
    version: 3.0.0
  conventional-commits-parser:
    specifier: ^5.0.0
    version: 5.0.0
  dayjs:
    specifier: ^1.11.11
    version: 1.11.11
  dotenv:
    specifier: ^16.4.5
    version: 16.4.5
  lodash:
    specifier: ^4.17.21
    version: 4.17.21

devDependencies:
  '@types/conventional-commits-parser':
    specifier: ^5.0.0
    version: 5.0.0
  '@types/lodash':
    specifier: ^4.17.0
    version: 4.17.0
  '@types/node':
    specifier: ^20.12.7
    version: 20.12.7
  tsconfig-paths:
    specifier: ^4.2.0
    version: 4.2.0
  tsx:
    specifier: ^4.8.2
    version: 4.8.2
  typescript:
    specifier: ^5.4.5
    version: 5.4.5
30  cyborg/src/common.ts  Normal file
@@ -0,0 +1,30 @@
/*
 * Copyright 2023 Greptime Team
 *
 * Licensed under the Apache License, Version 2.0 (the "License");
 * you may not use this file except in compliance with the License.
 * You may obtain a copy of the License at
 *
 *     http://www.apache.org/licenses/LICENSE-2.0
 *
 * Unless required by applicable law or agreed to in writing, software
 * distributed under the License is distributed on an "AS IS" BASIS,
 * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
 * See the License for the specific language governing permissions and
 * limitations under the License.
 */

import * as core from "@actions/core";
import {config} from "dotenv";
import {getOctokit} from "@actions/github";
import {GitHub} from "@actions/github/lib/utils";

export function handleError(err: any): void {
    console.error(err)
    core.setFailed(`Unhandled error: ${err}`)
}

export function obtainClient(token: string): InstanceType<typeof GitHub> {
    config()
    return getOctokit(process.env[token])
}
14  cyborg/tsconfig.json  Normal file
@@ -0,0 +1,14 @@
{
  "ts-node": {
    "require": ["tsconfig-paths/register"]
  },
  "compilerOptions": {
    "module": "NodeNext",
    "moduleResolution": "NodeNext",
    "target": "ES6",
    "paths": {
      "@/*": ["./src/*"]
    },
    "resolveJsonModule": true,
  }
}
@@ -1,5 +1,9 @@
 FROM centos:7

+# Note: CentOS 7 has reached EOL since 2024-07-01 thus `mirror.centos.org` is no longer available and we need to use `vault.centos.org` instead.
+RUN sed -i s/mirror.centos.org/vault.centos.org/g /etc/yum.repos.d/*.repo
+RUN sed -i s/^#.*baseurl=http/baseurl=http/g /etc/yum.repos.d/*.repo
+
 RUN yum install -y epel-release \
     openssl \
     openssl-devel \
16  docker/ci/ubuntu/Dockerfile.fuzztests  Normal file
@@ -0,0 +1,16 @@
FROM ubuntu:22.04

# The binary name of GreptimeDB executable.
# Defaults to "greptime", but sometimes in other projects it might be different.
ARG TARGET_BIN=greptime

RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
    ca-certificates \
    curl

ARG BINARY_PATH
ADD $BINARY_PATH/$TARGET_BIN /greptime/bin/

ENV PATH /greptime/bin/:$PATH

ENTRYPOINT ["greptime"]
@@ -34,7 +34,7 @@ RUN rustup toolchain install ${RUST_TOOLCHAIN}
 RUN rustup target add aarch64-linux-android

 # Install cargo-ndk
-RUN cargo install cargo-ndk
+RUN cargo install cargo-ndk@3.5.4
 ENV ANDROID_NDK_HOME $NDK_ROOT

 # Builder entrypoint.
50  docker/dev-builder/binstall/pull_binstall.sh  Executable file
@@ -0,0 +1,50 @@
#!/bin/bash

set -euxo pipefail

cd "$(mktemp -d)"
# Fix version to v1.6.6, this is different than the latest version in original install script in
# https://raw.githubusercontent.com/cargo-bins/cargo-binstall/main/install-from-binstall-release.sh
base_url="https://github.com/cargo-bins/cargo-binstall/releases/download/v1.6.6/cargo-binstall-"

os="$(uname -s)"
if [ "$os" == "Darwin" ]; then
    url="${base_url}universal-apple-darwin.zip"
    curl -LO --proto '=https' --tlsv1.2 -sSf "$url"
    unzip cargo-binstall-universal-apple-darwin.zip
elif [ "$os" == "Linux" ]; then
    machine="$(uname -m)"
    if [ "$machine" == "armv7l" ]; then
        machine="armv7"
    fi
    target="${machine}-unknown-linux-musl"
    if [ "$machine" == "armv7" ]; then
        target="${target}eabihf"
    fi

    url="${base_url}${target}.tgz"
    curl -L --proto '=https' --tlsv1.2 -sSf "$url" | tar -xvzf -
elif [ "${OS-}" = "Windows_NT" ]; then
    machine="$(uname -m)"
    target="${machine}-pc-windows-msvc"
    url="${base_url}${target}.zip"
    curl -LO --proto '=https' --tlsv1.2 -sSf "$url"
    unzip "cargo-binstall-${target}.zip"
else
    echo "Unsupported OS ${os}"
    exit 1
fi

./cargo-binstall -y --force cargo-binstall

CARGO_HOME="${CARGO_HOME:-$HOME/.cargo}"

if ! [[ ":$PATH:" == *":$CARGO_HOME/bin:"* ]]; then
    if [ -n "${CI:-}" ] && [ -n "${GITHUB_PATH:-}" ]; then
        echo "$CARGO_HOME/bin" >> "$GITHUB_PATH"
    else
        echo
        printf "\033[0;31mYour path is missing %s, you might want to add it.\033[0m\n" "$CARGO_HOME/bin"
        echo
    fi
fi
@@ -2,6 +2,10 @@ FROM centos:7 as builder

 ENV LANG en_US.utf8

+# Note: CentOS 7 has reached EOL since 2024-07-01 thus `mirror.centos.org` is no longer available and we need to use `vault.centos.org` instead.
+RUN sed -i s/mirror.centos.org/vault.centos.org/g /etc/yum.repos.d/*.repo
+RUN sed -i s/^#.*baseurl=http/baseurl=http/g /etc/yum.repos.d/*.repo
+
 # Install dependencies
 RUN ulimit -n 1024000 && yum groupinstall -y 'Development Tools'
 RUN yum install -y epel-release \
@@ -25,6 +29,12 @@ ENV PATH /opt/rh/rh-python38/root/usr/bin:/usr/local/bin:/root/.cargo/bin/:$PATH
 ARG RUST_TOOLCHAIN
 RUN rustup toolchain install ${RUST_TOOLCHAIN}

+# Install cargo-binstall with a specific version to adapt the current rust toolchain.
+# Note: if we use the latest version, we may encounter the following `use of unstable library feature 'io_error_downcast'` error.
+# compile from source take too long, so we use the precompiled binary instead
+COPY $DOCKER_BUILD_ROOT/docker/dev-builder/binstall/pull_binstall.sh /usr/local/bin/pull_binstall.sh
+RUN chmod +x /usr/local/bin/pull_binstall.sh && /usr/local/bin/pull_binstall.sh
+
 # Install nextest.
-RUN cargo install cargo-binstall --locked
 RUN cargo binstall cargo-nextest --no-confirm
@@ -24,6 +24,15 @@ RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \
     python3.10 \
     python3.10-dev

+# https://github.com/GreptimeTeam/greptimedb/actions/runs/10935485852/job/30357457188#step:3:7106
+# `aws-lc-sys` require gcc >= 10.3.0 to work, hence alias to use gcc-10
+RUN apt-get remove -y gcc-9 g++-9 cpp-9 && \
+    apt-get install -y gcc-10 g++-10 cpp-10 make cmake && \
+    ln -sf /usr/bin/gcc-10 /usr/bin/gcc && ln -sf /usr/bin/g++-10 /usr/bin/g++ && \
+    ln -sf /usr/bin/gcc-10 /usr/bin/cc && \
+    ln -sf /usr/bin/g++-10 /usr/bin/cpp && ln -sf /usr/bin/g++-10 /usr/bin/c++ && \
+    cc --version && gcc --version && g++ --version && cpp --version && c++ --version
+
 # Remove Python 3.8 and install pip.
 RUN apt-get -y purge python3.8 && \
     apt-get -y autoremove && \
@@ -55,6 +64,11 @@ ENV PATH /root/.cargo/bin/:$PATH
 ARG RUST_TOOLCHAIN
 RUN rustup toolchain install ${RUST_TOOLCHAIN}

+# Install cargo-binstall with a specific version to adapt the current rust toolchain.
+# Note: if we use the latest version, we may encounter the following `use of unstable library feature 'io_error_downcast'` error.
+# compile from source take too long, so we use the precompiled binary instead
+COPY $DOCKER_BUILD_ROOT/docker/dev-builder/binstall/pull_binstall.sh /usr/local/bin/pull_binstall.sh
+RUN chmod +x /usr/local/bin/pull_binstall.sh && /usr/local/bin/pull_binstall.sh
+
 # Install nextest.
-RUN cargo install cargo-binstall --locked
 RUN cargo binstall cargo-nextest --no-confirm
@@ -43,6 +43,9 @@ ENV PATH /root/.cargo/bin/:$PATH
 ARG RUST_TOOLCHAIN
 RUN rustup toolchain install ${RUST_TOOLCHAIN}

+# Install cargo-binstall with a specific version to adapt the current rust toolchain.
+# Note: if we use the latest version, we may encounter the following `use of unstable library feature 'io_error_downcast'` error.
+RUN cargo install cargo-binstall --version 1.6.6 --locked
+
 # Install nextest.
-RUN cargo install cargo-binstall --locked
 RUN cargo binstall cargo-nextest --no-confirm
133  docker/docker-compose/cluster-with-etcd.yaml  Normal file
@@ -0,0 +1,133 @@
x-custom:
  etcd_initial_cluster_token: &etcd_initial_cluster_token "--initial-cluster-token=etcd-cluster"
  etcd_common_settings: &etcd_common_settings
    image: "${ETCD_REGISTRY:-quay.io}/${ETCD_NAMESPACE:-coreos}/etcd:${ETCD_VERSION:-v3.5.10}"
    entrypoint: /usr/local/bin/etcd
  greptimedb_image: &greptimedb_image "${GREPTIMEDB_REGISTRY:-docker.io}/${GREPTIMEDB_NAMESPACE:-greptime}/greptimedb:${GREPTIMEDB_VERSION:-latest}"

services:
  etcd0:
    <<: *etcd_common_settings
    container_name: etcd0
    ports:
      - 2379:2379
      - 2380:2380
    command:
      - --name=etcd0
      - --data-dir=/var/lib/etcd
      - --initial-advertise-peer-urls=http://etcd0:2380
      - --listen-peer-urls=http://0.0.0.0:2380
      - --listen-client-urls=http://0.0.0.0:2379
      - --advertise-client-urls=http://etcd0:2379
      - --heartbeat-interval=250
      - --election-timeout=1250
      - --initial-cluster=etcd0=http://etcd0:2380
      - --initial-cluster-state=new
      - *etcd_initial_cluster_token
    volumes:
      - /tmp/greptimedb-cluster-docker-compose/etcd0:/var/lib/etcd
    healthcheck:
      test: [ "CMD", "etcdctl", "--endpoints=http://etcd0:2379", "endpoint", "health" ]
      interval: 5s
      timeout: 3s
      retries: 5
    networks:
      - greptimedb

  metasrv:
    image: *greptimedb_image
    container_name: metasrv
    ports:
      - 3002:3002
    command:
      - metasrv
      - start
      - --bind-addr=0.0.0.0:3002
      - --server-addr=metasrv:3002
      - --store-addrs=etcd0:2379
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://metasrv:3002/health" ]
      interval: 5s
      timeout: 3s
      retries: 5
    depends_on:
      etcd0:
        condition: service_healthy
    networks:
      - greptimedb

  datanode0:
    image: *greptimedb_image
    container_name: datanode0
    ports:
      - 3001:3001
      - 5000:5000
    command:
      - datanode
      - start
      - --node-id=0
      - --rpc-addr=0.0.0.0:3001
      - --rpc-hostname=datanode0:3001
      - --metasrv-addrs=metasrv:3002
      - --http-addr=0.0.0.0:5000
    volumes:
      - /tmp/greptimedb-cluster-docker-compose/datanode0:/tmp/greptimedb
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://datanode0:5000/health" ]
      interval: 5s
      timeout: 3s
      retries: 5
    depends_on:
      metasrv:
        condition: service_healthy
    networks:
      - greptimedb

  frontend0:
    image: *greptimedb_image
    container_name: frontend0
    ports:
      - 4000:4000
      - 4001:4001
      - 4002:4002
      - 4003:4003
    command:
      - frontend
      - start
      - --metasrv-addrs=metasrv:3002
      - --http-addr=0.0.0.0:4000
      - --rpc-addr=0.0.0.0:4001
      - --mysql-addr=0.0.0.0:4002
      - --postgres-addr=0.0.0.0:4003
    healthcheck:
      test: [ "CMD", "curl", "-f", "http://frontend0:4000/health" ]
      interval: 5s
      timeout: 3s
      retries: 5
    depends_on:
      datanode0:
        condition: service_healthy
    networks:
      - greptimedb

  flownode0:
    image: *greptimedb_image
    container_name: flownode0
    ports:
      - 4004:4004
    command:
      - flownode
      - start
      - --node-id=0
      - --metasrv-addrs=metasrv:3002
      - --rpc-addr=0.0.0.0:4004
      - --rpc-hostname=flownode0:4004
    depends_on:
      frontend0:
        condition: service_healthy
    networks:
      - greptimedb

networks:
  greptimedb:
    name: greptimedb
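For a quick local try-out, the cluster defined above can be started with Docker Compose and checked through the frontend's HTTP health endpoint. A minimal sketch, assuming the command is run from the repository root and the host ports mapped above (notably 4000 for the frontend) are free:

```shell
# Start etcd0, metasrv, datanode0, frontend0 and flownode0 in the background.
docker compose -f docker/docker-compose/cluster-with-etcd.yaml up -d

# The frontend publishes port 4000; this is the same /health endpoint the
# compose healthcheck polls, so it answers once the cluster is ready.
curl http://localhost:4000/health

# Tear everything down afterwards (data lives under /tmp/greptimedb-cluster-docker-compose).
docker compose -f docker/docker-compose/cluster-with-etcd.yaml down
```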
51  docs/benchmarks/log/README.md  Normal file
@@ -0,0 +1,51 @@
# Log benchmark configuration

This directory holds the configuration we used to benchmark GreptimeDB, ClickHouse and Elasticsearch.

Here are the versions of the databases we used in the benchmark:

| name          | version    |
| :------------ | :--------- |
| GreptimeDB    | v0.9.2     |
| ClickHouse    | 24.9.1.219 |
| Elasticsearch | 8.15.0     |

## Structured model vs Unstructured model

We divide the test into two parts, using the structured model and the unstructured model respectively. You can also see the difference in the create table clauses.

__Structured model__

The log data is pre-processed into columns by Vector. For example, an insert request looks like the following:
```SQL
INSERT INTO test_table (bytes, http_version, ip, method, path, status, user, timestamp) VALUES ()
```
The goal is to test string/text support in each database. In real scenarios this means the data source (or log producer) has separate fields defined, or has already processed the raw input.

__Unstructured model__

The log data is inserted as one long string, and we then build a fulltext index on these strings. For example, an insert request looks like the following:
```SQL
INSERT INTO test_table (message, timestamp) VALUES ()
```
The goal is to test fuzzy search performance for each database. In real scenarios this means the log is produced by some kind of middleware and inserted directly into the database.

## Creating tables

See [here](./create_table.sql) for GreptimeDB's and ClickHouse's create table clauses.
The mapping for Elasticsearch is created automatically.

## Vector Configuration

We use Vector to generate random log data and send inserts to the databases.
Please refer to the [structured config](./structured_vector.toml) and [unstructured config](./unstructured_vector.toml) for the detailed configuration.

## SQLs and payloads

Please refer to the [SQL queries](./query.sql) for GreptimeDB and ClickHouse, and the [query payloads](./query.md) for Elasticsearch.

## Steps to reproduce

0. Decide whether to run the structured model test or the unstructured model test.
1. Build the Vector binary (see Vector's config files for the specific branch) and the database binaries accordingly.
2. Create the table in GreptimeDB and ClickHouse in advance.
3. Run Vector to insert data.
4. When data insertion is finished, run queries against each database. Note: you'll need to update the timerange values after data insertion.

## Addition

- You can tune GreptimeDB's configuration to get better performance.
- You can set up GreptimeDB to use S3 as storage, see [here](https://docs.greptime.com/user-guide/deployments/configuration#storage-options).
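On a single host, the reproduce steps above might look roughly like the following. This is a hedged sketch rather than part of the benchmark kit: the split SQL files are hypothetical (the combined `create_table.sql` contains statements for both databases), and it assumes GreptimeDB's default MySQL-compatible port 4002 and a local ClickHouse with default client settings.

```shell
# 1. Create the tables: run the GreptimeDB statement against its MySQL-compatible
#    port and the ClickHouse statement with clickhouse-client (the file names
#    below are hypothetical splits of create_table.sql).
mysql -h 127.0.0.1 -P 4002 < greptimedb_create_table.sql
clickhouse-client --multiquery < clickhouse_create_table.sql

# 2. Generate and ingest data with the patched Vector build.
vector --config structured_vector.toml

# 3. After ingestion finishes, run the statements from query.sql,
#    updating the timestamp ranges to match the ingested data.
```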
56  docs/benchmarks/log/create_table.sql  Normal file
@@ -0,0 +1,56 @@
-- GreptimeDB create table clause
-- structured test, use vector to pre-process log data into fields
CREATE TABLE IF NOT EXISTS `test_table` (
  `bytes` Int64 NULL,
  `http_version` STRING NULL,
  `ip` STRING NULL,
  `method` STRING NULL,
  `path` STRING NULL,
  `status` SMALLINT UNSIGNED NULL,
  `user` STRING NULL,
  `timestamp` TIMESTAMP(3) NOT NULL,
  PRIMARY KEY (`user`, `path`, `status`),
  TIME INDEX (`timestamp`)
)
ENGINE=mito
WITH(
  append_mode = 'true'
);

-- unstructured test, build fulltext index on message column
CREATE TABLE IF NOT EXISTS `test_table` (
  `message` STRING NULL FULLTEXT WITH(analyzer = 'English', case_sensitive = 'false'),
  `timestamp` TIMESTAMP(3) NOT NULL,
  TIME INDEX (`timestamp`)
)
ENGINE=mito
WITH(
  append_mode = 'true'
);

-- Clickhouse create table clause
-- structured test
CREATE TABLE IF NOT EXISTS test_table
(
    bytes UInt64 NOT NULL,
    http_version String NOT NULL,
    ip String NOT NULL,
    method String NOT NULL,
    path String NOT NULL,
    status UInt8 NOT NULL,
    user String NOT NULL,
    timestamp String NOT NULL,
)
ENGINE = MergeTree()
ORDER BY (user, path, status);

-- unstructured test
SET allow_experimental_full_text_index = true;
CREATE TABLE IF NOT EXISTS test_table
(
    message String,
    timestamp String,
    INDEX inv_idx(message) TYPE full_text(0) GRANULARITY 1
)
ENGINE = MergeTree()
ORDER BY tuple();
199  docs/benchmarks/log/query.md  Normal file
@@ -0,0 +1,199 @@
# Query URL and payload for Elastic Search

## Count

URL: `http://127.0.0.1:9200/_count`

## Query by timerange

URL: `http://127.0.0.1:9200/_search`

You can use the following payload to get the full timerange first.
```JSON
{"size":0,"aggs":{"max_timestamp":{"max":{"field":"timestamp"}},"min_timestamp":{"min":{"field":"timestamp"}}}}
```

And then use this payload to query by timerange.
```JSON
{
  "from": 0,
  "size": 1000,
  "query": {
    "range": {
      "timestamp": {
        "gte": "2024-08-16T04:30:44.000Z",
        "lte": "2024-08-16T04:51:52.000Z"
      }
    }
  }
}
```

## Query by condition

URL: `http://127.0.0.1:9200/_search`

### Structured payload
```JSON
{
  "from": 0,
  "size": 10000,
  "query": {
    "bool": {
      "must": [
        { "term": { "user.keyword": "CrucifiX" } },
        { "term": { "method.keyword": "OPTION" } },
        { "term": { "path.keyword": "/user/booperbot124" } },
        { "term": { "http_version.keyword": "HTTP/1.1" } },
        { "term": { "status": "401" } }
      ]
    }
  }
}
```

### Unstructured payload
```JSON
{
  "from": 0,
  "size": 10000,
  "query": {
    "bool": {
      "must": [
        { "match_phrase": { "message": "CrucifiX" } },
        { "match_phrase": { "message": "OPTION" } },
        { "match_phrase": { "message": "/user/booperbot124" } },
        { "match_phrase": { "message": "HTTP/1.1" } },
        { "match_phrase": { "message": "401" } }
      ]
    }
  }
}
```

## Query by condition and timerange

URL: `http://127.0.0.1:9200/_search`

### Structured payload
```JSON
{
  "size": 10000,
  "query": {
    "bool": {
      "must": [
        { "term": { "user.keyword": "CrucifiX" } },
        { "term": { "method.keyword": "OPTION" } },
        { "term": { "path.keyword": "/user/booperbot124" } },
        { "term": { "http_version.keyword": "HTTP/1.1" } },
        { "term": { "status": "401" } },
        {
          "range": {
            "timestamp": {
              "gte": "2024-08-19T07:03:37.383Z",
              "lte": "2024-08-19T07:24:58.883Z"
            }
          }
        }
      ]
    }
  }
}
```

### Unstructured payload
```JSON
{
  "size": 10000,
  "query": {
    "bool": {
      "must": [
        { "match_phrase": { "message": "CrucifiX" } },
        { "match_phrase": { "message": "OPTION" } },
        { "match_phrase": { "message": "/user/booperbot124" } },
        { "match_phrase": { "message": "HTTP/1.1" } },
        { "match_phrase": { "message": "401" } },
        {
          "range": {
            "timestamp": {
              "gte": "2024-08-19T05:16:17.099Z",
              "lte": "2024-08-19T05:46:02.722Z"
            }
          }
        }
      ]
    }
  }
}
```
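Since these payloads are plain Elasticsearch Query DSL documents, they can be exercised with curl once the index has data. A small sketch, assuming the same local address as above and a payload saved to a hypothetical `payload.json`:

```shell
# Fetch the ingested timerange first (aggregation payload from the section above).
curl -s -X POST 'http://127.0.0.1:9200/_search' \
  -H 'Content-Type: application/json' \
  -d '{"size":0,"aggs":{"max_timestamp":{"max":{"field":"timestamp"}},"min_timestamp":{"min":{"field":"timestamp"}}}}'

# Then send any of the condition or timerange payloads from a file.
curl -s -X POST 'http://127.0.0.1:9200/_search' \
  -H 'Content-Type: application/json' \
  -d @payload.json
```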
50  docs/benchmarks/log/query.sql  Normal file
@@ -0,0 +1,50 @@
-- Structured query for GreptimeDB and Clickhouse

-- query count
select count(*) from test_table;

-- query by timerange. Note: place the timestamp range in the where clause
-- GreptimeDB
-- you can use `select max(timestamp)::bigint from test_table;` and `select min(timestamp)::bigint from test_table;`
-- to get the full timestamp range
select * from test_table where timestamp between 1723710843619 and 1723711367588;
-- Clickhouse
-- you can use `select max(timestamp) from test_table;` and `select min(timestamp) from test_table;`
-- to get the full timestamp range
select * from test_table where timestamp between '2024-08-16T03:58:46Z' and '2024-08-16T04:03:50Z';

-- query by condition
SELECT * FROM test_table WHERE user = 'CrucifiX' and method = 'OPTION' and path = '/user/booperbot124' and http_version = 'HTTP/1.1' and status = 401;

-- query by condition and timerange
-- GreptimeDB
SELECT * FROM test_table WHERE user = "CrucifiX" and method = "OPTION" and path = "/user/booperbot124" and http_version = "HTTP/1.1" and status = 401
and timestamp between 1723774396760 and 1723774788760;
-- Clickhouse
SELECT * FROM test_table WHERE user = 'CrucifiX' and method = 'OPTION' and path = '/user/booperbot124' and http_version = 'HTTP/1.1' and status = 401
and timestamp between '2024-08-16T03:58:46Z' and '2024-08-16T04:03:50Z';

-- Unstructured query for GreptimeDB and Clickhouse

-- query by condition
-- GreptimeDB
SELECT * FROM test_table WHERE MATCHES(message, "+CrucifiX +OPTION +/user/booperbot124 +HTTP/1.1 +401");
-- Clickhouse
SELECT * FROM test_table WHERE (message LIKE '%CrucifiX%')
AND (message LIKE '%OPTION%')
AND (message LIKE '%/user/booperbot124%')
AND (message LIKE '%HTTP/1.1%')
AND (message LIKE '%401%');

-- query by condition and timerange
-- GreptimeDB
SELECT * FROM test_table WHERE MATCHES(message, "+CrucifiX +OPTION +/user/booperbot124 +HTTP/1.1 +401")
and timestamp between 1723710843619 and 1723711367588;
-- Clickhouse
SELECT * FROM test_table WHERE (message LIKE '%CrucifiX%')
AND (message LIKE '%OPTION%')
AND (message LIKE '%/user/booperbot124%')
AND (message LIKE '%HTTP/1.1%')
AND (message LIKE '%401%')
AND timestamp between '2024-08-15T10:25:26.524000000Z' AND '2024-08-15T10:31:31.746000000Z';
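The GreptimeDB statements above can be issued through its MySQL-compatible interface, which also makes rough timing easy. A sketch only, assuming the default MySQL port 4002 and the `test_table` created earlier:

```shell
# Count rows, then time the fulltext query end to end (result set discarded).
mysql -h 127.0.0.1 -P 4002 -e "select count(*) from test_table;"
time mysql -h 127.0.0.1 -P 4002 -e \
  "SELECT * FROM test_table WHERE MATCHES(message, '+CrucifiX +OPTION +/user/booperbot124 +HTTP/1.1 +401');" > /dev/null
```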
57  docs/benchmarks/log/structured_vector.toml  Normal file
@@ -0,0 +1,57 @@
# Please note we use patched branch to build vector
# https://github.com/shuiyisong/vector/tree/chore/greptime_log_ingester_logitem

[sources.demo_logs]
type = "demo_logs"
format = "apache_common"
# interval value = 1 / rps
# say you want to insert at 20k/s, that is 1 / 20000 = 0.00005
# set to 0 to run as fast as possible
interval = 0
# total rows to insert
count = 100000000
lines = [ "line1" ]

[transforms.parse_logs]
type = "remap"
inputs = ["demo_logs"]
source = '''
. = parse_regex!(.message, r'^(?P<ip>\S+) - (?P<user>\S+) \[(?P<timestamp>[^\]]+)\] "(?P<method>\S+) (?P<path>\S+) (?P<http_version>\S+)" (?P<status>\d+) (?P<bytes>\d+)$')

# Convert timestamp to a standard format
.timestamp = parse_timestamp!(.timestamp, format: "%d/%b/%Y:%H:%M:%S %z")

# Convert status and bytes to integers
.status = to_int!(.status)
.bytes = to_int!(.bytes)
'''

[sinks.sink_greptime_logs]
type = "greptimedb_logs"
# The table to insert into
table = "test_table"
pipeline_name = "demo_pipeline"
compression = "none"
inputs = [ "parse_logs" ]
endpoint = "http://127.0.0.1:4000"
# Batch size for each insertion
batch.max_events = 4000

[sinks.clickhouse]
type = "clickhouse"
inputs = [ "parse_logs" ]
database = "default"
endpoint = "http://127.0.0.1:8123"
format = "json_each_row"
# The table to insert into
table = "test_table"

[sinks.sink_elasticsearch]
type = "elasticsearch"
inputs = [ "parse_logs" ]
api_version = "auto"
compression = "none"
doc_type = "_doc"
endpoints = [ "http://127.0.0.1:9200" ]
id_key = "id"
mode = "bulk"
43  docs/benchmarks/log/unstructured_vector.toml  Normal file
@@ -0,0 +1,43 @@
# Please note we use patched branch to build vector
# https://github.com/shuiyisong/vector/tree/chore/greptime_log_ingester_ft

[sources.demo_logs]
type = "demo_logs"
format = "apache_common"
# interval value = 1 / rps
# say you want to insert at 20k/s, that is 1 / 20000 = 0.00005
# set to 0 to run as fast as possible
interval = 0
# total rows to insert
count = 100000000
lines = [ "line1" ]

[sinks.sink_greptime_logs]
type = "greptimedb_logs"
# The table to insert into
table = "test_table"
pipeline_name = "demo_pipeline"
compression = "none"
inputs = [ "demo_logs" ]
endpoint = "http://127.0.0.1:4000"
# Batch size for each insertion
batch.max_events = 500

[sinks.clickhouse]
type = "clickhouse"
inputs = [ "demo_logs" ]
database = "default"
endpoint = "http://127.0.0.1:8123"
format = "json_each_row"
# The table to insert into
table = "test_table"

[sinks.sink_elasticsearch]
type = "elasticsearch"
inputs = [ "demo_logs" ]
api_version = "auto"
compression = "none"
doc_type = "_doc"
endpoints = [ "http://127.0.0.1:9200" ]
id_key = "id"
mode = "bulk"
253  docs/benchmarks/tsbs/README.md  Normal file
@@ -0,0 +1,253 @@
# How to run TSBS Benchmark

This document contains the steps to run the TSBS benchmark. Our results are listed in the other files in the same directory.

## Prerequisites

You need the following tools to run the TSBS benchmark:
- Go
- git
- make
- rust (optional, if you want to build the DB from source)

## Build TSBS suite

Clone our fork of TSBS:

```shell
git clone https://github.com/GreptimeTeam/tsbs.git
```

Then build it:

```shell
cd tsbs
make
```

You can check the `bin/` directory for the compiled binaries. We will only use some of them.

```shell
ls ./bin/
```

Binaries we will use later:
- `tsbs_generate_data`
- `tsbs_generate_queries`
- `tsbs_load_greptime`
- `tsbs_run_queries_influx`

## Generate test data and queries

The data is generated by `tsbs_generate_data`:

```shell
mkdir bench-data
./bin/tsbs_generate_data --use-case="cpu-only" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:00Z" \
    --log-interval="10s" --format="influx" \
    > ./bench-data/influx-data.lp
```

Here we generate 4000 time series over 3 days with a 10s interval. We'll write using the InfluxDB line protocol, so the target format is `influx`.

Queries are generated by `tsbs_generate_queries`. You can change the parameters, but make sure they match those used for `tsbs_generate_data`.

```shell
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type cpu-max-all-1 \
    --format="greptime" \
    > ./bench-data/greptime-queries-cpu-max-all-1.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type cpu-max-all-8 \
    --format="greptime" \
    > ./bench-data/greptime-queries-cpu-max-all-8.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=50 \
    --query-type double-groupby-1 \
    --format="greptime" \
    > ./bench-data/greptime-queries-double-groupby-1.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=50 \
    --query-type double-groupby-5 \
    --format="greptime" \
    > ./bench-data/greptime-queries-double-groupby-5.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=50 \
    --query-type double-groupby-all \
    --format="greptime" \
    > ./bench-data/greptime-queries-double-groupby-all.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=50 \
    --query-type groupby-orderby-limit \
    --format="greptime" \
    > ./bench-data/greptime-queries-groupby-orderby-limit.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type high-cpu-1 \
    --format="greptime" \
    > ./bench-data/greptime-queries-high-cpu-1.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=50 \
    --query-type high-cpu-all \
    --format="greptime" \
    > ./bench-data/greptime-queries-high-cpu-all.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=10 \
    --query-type lastpoint \
    --format="greptime" \
    > ./bench-data/greptime-queries-lastpoint.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type single-groupby-1-1-1 \
    --format="greptime" \
    > ./bench-data/greptime-queries-single-groupby-1-1-1.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type single-groupby-1-1-12 \
    --format="greptime" \
    > ./bench-data/greptime-queries-single-groupby-1-1-12.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type single-groupby-1-8-1 \
    --format="greptime" \
    > ./bench-data/greptime-queries-single-groupby-1-8-1.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type single-groupby-5-1-1 \
    --format="greptime" \
    > ./bench-data/greptime-queries-single-groupby-5-1-1.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type single-groupby-5-1-12 \
    --format="greptime" \
    > ./bench-data/greptime-queries-single-groupby-5-1-12.dat
./bin/tsbs_generate_queries \
    --use-case="devops" --seed=123 --scale=4000 \
    --timestamp-start="2023-06-11T00:00:00Z" \
    --timestamp-end="2023-06-14T00:00:01Z" \
    --queries=100 \
    --query-type single-groupby-5-8-1 \
    --format="greptime" \
    > ./bench-data/greptime-queries-single-groupby-5-8-1.dat
```

## Start GreptimeDB

Refer to our [document](https://docs.greptime.com/getting-started/installation/overview) for how to install and start GreptimeDB, or check this [document](https://docs.greptime.com/contributor-guide/getting-started#compile-and-run) for how to build GreptimeDB from source.

## Write Data

After the DB is started, we can use `tsbs_load_greptime` to test the write performance.

```shell
./bin/tsbs_load_greptime \
    --urls=http://localhost:4000 \
    --file=./bench-data/influx-data.lp \
    --batch-size=3000 \
    --gzip=false \
    --workers=6
```

The parameters here are only provided as an example. You can choose whatever you like or adjust them to match your target scenario.

Notice that if you want to rerun `tsbs_load_greptime`, please destroy and restart the DB and clear its previous data first. Existing duplicated data will impact write and query performance.

## Query Data

After the data is imported, you can run the queries. The following script runs all of them; you can also choose a subset to run.

```shell
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-cpu-max-all-1.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-cpu-max-all-8.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-double-groupby-1.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-double-groupby-5.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-double-groupby-all.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-groupby-orderby-limit.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-high-cpu-1.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-high-cpu-all.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-lastpoint.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-single-groupby-1-1-1.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-single-groupby-1-1-12.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-single-groupby-1-8-1.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-single-groupby-5-1-1.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-single-groupby-5-1-12.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
./bin/tsbs_run_queries_influx --file=./bench-data/greptime-queries-single-groupby-5-8-1.dat \
    --db-name=benchmark \
    --urls="http://localhost:4000"
```

Rerunning queries does not require re-importing the data; just execute the corresponding command again.
@@ -23,28 +23,28 @@

 ## Write performance

 | Environment     | Ingest rate (rows/s) |
-| ------------------ | --------------------- |
+| --------------- | -------------------- |
-| Local              | 3695814.64            |
+| Local           | 369581.464           |
-| EC2 c5d.2xlarge    | 2987166.64            |
+| EC2 c5d.2xlarge | 298716.664           |


 ## Query performance

 | Query type            | Local (ms) | EC2 c5d.2xlarge (ms) |
-| --------------------- | ---------- | ---------------------- |
+| --------------------- | ---------- | -------------------- |
 | cpu-max-all-1         | 30.56      | 54.74                |
 | cpu-max-all-8         | 52.69      | 70.50                |
 | double-groupby-1      | 664.30     | 1366.63              |
 | double-groupby-5      | 1391.26    | 2141.71              |
 | double-groupby-all    | 2828.94    | 3389.59              |
 | groupby-orderby-limit | 718.92     | 1213.90              |
 | high-cpu-1            | 29.21      | 52.98                |
 | high-cpu-all          | 5514.12    | 7194.91              |
 | lastpoint             | 7571.40    | 9423.41              |
 | single-groupby-1-1-1  | 19.09      | 7.77                 |
 | single-groupby-1-1-12 | 27.28      | 51.64                |
 | single-groupby-1-8-1  | 31.85      | 11.64                |
 | single-groupby-5-1-1  | 16.14      | 9.67                 |
 | single-groupby-5-1-12 | 27.21      | 53.62                |
 | single-groupby-5-8-1  | 39.62      | 14.96                |
58  docs/benchmarks/tsbs/v0.8.0.md  Normal file
@@ -0,0 +1,58 @@
# TSBS benchmark - v0.8.0

## Environment

### Local

|        |                                    |
| ------ | ---------------------------------- |
| CPU    | AMD Ryzen 7 7735HS (8 core 3.2GHz) |
| Memory | 32GB                               |
| Disk   | SOLIDIGM SSDPFKNU010TZ             |
| OS     | Ubuntu 22.04.2 LTS                 |

### Amazon EC2

|         |                |
| ------- | -------------- |
| Machine | c5d.2xlarge    |
| CPU     | 8 core         |
| Memory  | 16GB           |
| Disk    | 50GB (GP3)     |
| OS      | Ubuntu 22.04.1 |

## Write performance

| Environment     | Ingest rate (rows/s) |
| --------------- | -------------------- |
| Local           | 315369.66            |
| EC2 c5d.2xlarge | 222148.56            |

## Query performance

| Query type            | Local (ms) | EC2 c5d.2xlarge (ms) |
| --------------------- | ---------- | -------------------- |
| cpu-max-all-1         | 24.63      | 15.29                |
| cpu-max-all-8         | 51.69      | 33.53                |
| double-groupby-1      | 673.51     | 1295.38              |
| double-groupby-5      | 1244.93    | 1993.91              |
| double-groupby-all    | 2215.44    | 3056.77              |
| groupby-orderby-limit | 754.50     | 1546.49              |
| high-cpu-1            | 19.62      | 11.58                |
| high-cpu-all          | 5402.31    | 8011.43              |
| lastpoint             | 6756.12    | 9312.67              |
| single-groupby-1-1-1  | 15.70      | 7.67                 |
| single-groupby-1-1-12 | 16.72      | 9.29                 |
| single-groupby-1-8-1  | 26.72      | 17.97                |
| single-groupby-5-1-1  | 18.17      | 10.09                |
| single-groupby-5-1-12 | 20.04      | 12.37                |
| single-groupby-5-8-1  | 35.63      | 23.13                |

`single-groupby-1-1-1` query throughput

| Environment     | Client concurrency | mean time (ms) | qps (queries/sec) |
| --------------- | ------------------ | -------------- | ----------------- |
| Local           | 50                 | 42.87          | 1165.73           |
| Local           | 100                | 89.29          | 1119.38           |
| EC2 c5d.2xlarge | 50                 | 69.25          | 721.73            |
| EC2 c5d.2xlarge | 100                | 140.93         | 709.35            |
Some files were not shown because too many files have changed in this diff.