Merge remote-tracking branch 'origin/develop' into feature/linux
commit 519b93310c

 README-CN.md | 14
@@ -258,10 +258,16 @@ TDengine 社区生态中也有一些非常友好的第三方连接器,可以
 
 TDengine 的测试框架和所有测试例全部开源。
 
-点击[这里](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md),了解如何运行测试例和添加新的测试例。
+点击 [这里](tests/How-To-Run-Test-And-How-To-Add-New-Test-Case.md),了解如何运行测试例和添加新的测试例。
 
 # 成为社区贡献者
 
-点击[这里](https://www.taosdata.com/cn/contributor/),了解如何成为 TDengine 的贡献者。
+点击 [这里](https://www.taosdata.com/cn/contributor/),了解如何成为 TDengine 的贡献者。
 
-#加入技术交流群
+# 加入技术交流群
 
-TDengine官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小T为好友,即可入群。
+TDengine 官方社群「物联网大数据群」对外开放,欢迎您加入讨论。搜索微信号 "tdengine",加小T为好友,即可入群。
+
+# [谁在使用TDengine](https://github.com/taosdata/TDengine/issues/2432)
+
+欢迎所有 TDengine 用户及贡献者在 [这里](https://github.com/taosdata/TDengine/issues/2432) 分享您在当前工作中开发/使用 TDengine 的故事。
@@ -31,7 +31,7 @@ For user manual, system design and architecture, engineering blogs, refer to [TD
 # Building
 
 At the moment, TDengine only supports building and running on Linux systems. You can choose to [install from packages](https://www.taosdata.com/en/getting-started/#Install-from-Package) or from the source code. This quick guide is for installation from the source only.
 
-To build TDengine, use [CMake](https://cmake.org/) 3.5 or higher versions in the project directory.
+To build TDengine, use [CMake](https://cmake.org/) 2.8.12.x or higher versions in the project directory.
 
 ## Install tools
@@ -250,3 +250,6 @@ Please follow the [contribution guidelines](CONTRIBUTING.md) to contribute to th
 
 Add WeChat “tdengine” to join the group,you can communicate with other users.
 
+# [User List](https://github.com/taosdata/TDengine/issues/2432)
+
+If you are using TDengine and feel it helps or you'd like to do some contributions, please add your company to [user list](https://github.com/taosdata/TDengine/issues/2432) and let us know your needs.
@@ -451,7 +451,8 @@ Query OK, 1 row(s) in set (0.000141s)
 
 | taos-jdbcdriver 版本 | TDengine 版本      | JDK 版本 |
 | -------------------- | ------------------ | -------- |
-| 2.0.12 及以上        | 2.0.8.0 及以上     | 1.8.x    |
+| 2.0.22               | 2.0.18.0 及以上    | 1.8.x    |
+| 2.0.12 - 2.0.21      | 2.0.8.0 - 2.0.17.0 | 1.8.x    |
 | 2.0.4 - 2.0.11       | 2.0.0.0 - 2.0.7.x  | 1.8.x    |
 | 1.0.3                | 1.6.1.x 及以上     | 1.8.x    |
 | 1.0.2                | 1.6.1.x 及以上     | 1.8.x    |
@@ -6,19 +6,27 @@
 
 ### 内存需求
 
-每个 DB 可以创建固定数目的 vgroup,默认与 CPU 核数相同,可通过 maxVgroupsPerDb 配置;vgroup 中的每个副本会是一个 vnode;每个 vnode 会占用固定大小的内存(大小与数据库的配置参数 blocks 和 cache 有关);每个 Table 会占用与标签总长度有关的内存;此外,系统会有一些固定的内存开销。因此,每个 DB 需要的系统内存可通过如下公式计算:
+每个 Database 可以创建固定数目的 vgroup,默认与 CPU 核数相同,可通过 maxVgroupsPerDb 配置;vgroup 中的每个副本会是一个 vnode;每个 vnode 会占用固定大小的内存(大小与数据库的配置参数 blocks 和 cache 有关);每个 Table 会占用与标签总长度有关的内存;此外,系统会有一些固定的内存开销。因此,每个 DB 需要的系统内存可通过如下公式计算:
 
 ```
-Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
+Database Memory Size = maxVgroupsPerDb * (blocks * cache + 10MB) + numOfTables * (tagSizePerTable + 0.5KB)
 ```
 
-示例:假设是 4 核机器,cache 是缺省大小 16M, blocks 是缺省值 6,假设有 10 万张表,标签总长度是 256 字节,则总的内存需求为:4 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 499M。
+示例:假设是 4 核机器,cache 是缺省大小 16M, blocks 是缺省值 6,并且一个 DB 中有 10 万张表,标签总长度是 256 字节,则这个 DB 总的内存需求为:4 \* (16 \* 6 + 10) + 100000 \* (0.25 + 0.5) / 1000 = 499M。
 
-注意:从这个公式计算得到的内存容量,应理解为系统的“必要需求”,而不是“内存总数”。在实际运行的生产系统中,由于操作系统缓存、资源管理调度等方面的需要,内存规划应当在计算结果的基础上保留一定冗余,以维持系统状态和系统性能的稳定性。
+在实际的系统运维中,我们通常会更关心 TDengine 服务进程(taosd)会占用的内存量。
+
+```
+taosd 内存总量 = vnode 内存 + mnode 内存 + 查询内存
+```
 
-实际运行的系统往往会根据数据特点的不同,将数据存放在不同的 DB 里。因此做规划时,也需要考虑。
+其中:
+
+1. “vnode 内存”指的是集群中所有的 Database 存储分摊到当前 taosd 节点上所占用的内存资源。可以按上文“Database Memory Size”计算公式估算每个 DB 的内存占用量进行加总,再按集群中总共的 TDengine 节点数做平均(如果设置为多副本,则还需要乘以对应的副本倍数)。
+
+2. “mnode 内存”指的是集群中管理节点所占用的资源。如果一个 taosd 节点上分布有 mnode 管理节点,则内存消耗还需要增加“0.2KB * 集群中数据表总数”。
+
+3. “查询内存”指的是服务端处理查询请求时所需要占用的内存。单条查询语句至少会占用“0.2KB * 查询涉及的数据表总数”的内存量。
 
-如果内存充裕,可以加大 Blocks 的配置,这样更多数据将保存在内存里,提高查询速度。
+注意:以上内存估算方法,主要讲解了系统的“必须内存需求”,而不是“内存总数上限”。在实际运行的生产环境中,由于操作系统缓存、资源管理调度等方面的原因,内存规划应当在估算结果的基础上保留一定冗余,以维持系统状态和系统性能的稳定性。并且,生产环境通常会配置系统资源的监控工具,以便及时发现硬件资源的紧缺情况。
+
+最后,如果内存充裕,可以考虑加大 Blocks 的配置,这样更多数据将保存在内存里,提高查询速度。
 
 ### CPU 需求
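For readers who want to sanity-check the worked example in the hunk above, here is a minimal sketch (helper name and signature are hypothetical, not TDengine code) that evaluates the "Database Memory Size" formula with the documented defaults:

```
#include <stdio.h>

/* Hypothetical helper: evaluates the documented formula.
 * cacheMB is the per-block cache in MB, tagSizeKB the average
 * total tag length per table in KB. */
static double dbMemoryMB(int maxVgroupsPerDb, int blocks, int cacheMB,
                         long numOfTables, double tagSizeKB) {
    return maxVgroupsPerDb * (blocks * cacheMB + 10.0)
         + numOfTables * (tagSizeKB + 0.5) / 1000.0;
}

int main(void) {
    /* 4-core host, cache 16 MB, blocks 6, 100000 tables, 256-byte tags */
    printf("%.0f MB\n", dbMemoryMB(4, 6, 16, 100000, 0.25)); /* prints 499 MB */
    return 0;
}
```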
@@ -249,7 +249,7 @@ TDengine缺省的时间戳是毫秒精度,但通过修改配置参数enableMic
 
 3) TAGS 列名不能为预留关键字;
 
-4) TAGS 最多允许128个,至少1个,总长度不超过16k个字符。
+4) TAGS 最多允许 128 个,至少 1 个,总长度不超过 16 KB。
 
 - **删除超级表**
@@ -41,7 +41,7 @@
   "insert_mode": "taosc",
   "insert_rows": 1000,
   "multi_thread_write_one_tbl": "no",
-  "rows_per_tbl": 20,
+  "interlace_rows": 20,
   "max_sql_len": 1024000,
   "disorder_ratio": 0,
   "disorder_range": 1000,
@@ -41,7 +41,7 @@
   "insert_mode": "taosc",
   "insert_rows": 100000,
   "multi_thread_write_one_tbl": "no",
-  "rows_per_tbl": 0,
+  "interlace_rows": 0,
   "max_sql_len": 1024000,
   "disorder_ratio": 0,
   "disorder_range": 1000,
@@ -92,7 +92,7 @@ enum TEST_MODE {
 #define MAX_DATABASE_COUNT 256
 #define INPUT_BUF_LEN 256
 
-#define DEFAULT_TIMESTAMP_STEP 10
+#define DEFAULT_TIMESTAMP_STEP 1
 
 typedef enum CREATE_SUB_TALBE_MOD_EN {
   PRE_CREATE_SUBTBL,
@@ -199,7 +199,7 @@ typedef struct SArguments_S {
   int num_of_CPR;
   int num_of_threads;
   int insert_interval;
-  int rows_per_tbl;
+  int interlace_rows;
   int num_of_RPR;
   int max_sql_len;
   int num_of_tables;
@@ -523,8 +523,8 @@ SArguments g_args = {
   1, // replica
   "t", // tb_prefix
   NULL, // sqlFile
-  false, // use_metric
-  false, // insert_only
+  true, // use_metric
+  true, // insert_only
   false, // debug_print
   false, // verbose_print
   false, // performance statistic print
@@ -547,7 +547,7 @@ SArguments g_args = {
   10, // num_of_CPR
   10, // num_of_connections/thread
   0, // insert_interval
-  0, // rows_per_tbl;
+  0, // interlace_rows;
   100, // num_of_RPR
   TSDB_PAYLOAD_SIZE, // max_sql_len
   10000, // num_of_tables
@@ -614,7 +614,7 @@ static void printHelp() {
   printf("%s%s%s%s\n", indent, "-m", indent,
           "Table prefix name. Default is 't'.");
   printf("%s%s%s%s\n", indent, "-s", indent, "The select sql file.");
-  printf("%s%s%s%s\n", indent, "-M", indent, "Use metric flag.");
+  printf("%s%s%s%s\n", indent, "-N", indent, "Use normal table flag.");
   printf("%s%s%s%s\n", indent, "-o", indent,
           "Direct output to the named file. Default is './output.txt'.");
   printf("%s%s%s%s\n", indent, "-q", indent,
@@ -682,7 +682,7 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
   } else if (strcmp(argv[i], "-i") == 0) {
     arguments->insert_interval = atoi(argv[++i]);
   } else if (strcmp(argv[i], "-B") == 0) {
-    arguments->rows_per_tbl = atoi(argv[++i]);
+    arguments->interlace_rows = atoi(argv[++i]);
   } else if (strcmp(argv[i], "-r") == 0) {
     arguments->num_of_RPR = atoi(argv[++i]);
   } else if (strcmp(argv[i], "-t") == 0) {
@@ -742,10 +742,10 @@ static void parse_args(int argc, char *argv[], SArguments *arguments) {
     arguments->len_of_binary = atoi(argv[++i]);
   } else if (strcmp(argv[i], "-m") == 0) {
     arguments->tb_prefix = argv[++i];
-  } else if (strcmp(argv[i], "-M") == 0) {
-    arguments->use_metric = true;
+  } else if (strcmp(argv[i], "-N") == 0) {
+    arguments->use_metric = false;
   } else if (strcmp(argv[i], "-x") == 0) {
-    arguments->insert_only = true;
+    arguments->insert_only = false;
   } else if (strcmp(argv[i], "-y") == 0) {
     arguments->answer_yes = true;
   } else if (strcmp(argv[i], "-g") == 0) {
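Taken together, the two hunks above invert taosdemo's defaults: use_metric and insert_only now start as true, the old opt-in `-M` flag is replaced by an opt-out `-N` (use normal tables), and `-x` now disables insert-only instead of enabling it. A minimal sketch of the resulting semantics (illustrative only, not the full parser):

```
#include <stdbool.h>
#include <string.h>

/* Sketch of the new defaults and opt-out flags; the field and flag
 * names follow the diff, everything else is illustrative. */
typedef struct { bool use_metric; bool insert_only; } Args;

static void parseFlags(Args *args, int argc, char *argv[]) {
    args->use_metric  = true;   /* was false; -M used to turn it on */
    args->insert_only = true;   /* was false; -x used to turn it on */
    for (int i = 1; i < argc; i++) {
        if (strcmp(argv[i], "-N") == 0)
            args->use_metric = false;       /* request normal tables */
        else if (strcmp(argv[i], "-x") == 0)
            args->insert_only = false;      /* also run queries after insert */
    }
}
```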
@@ -1064,6 +1064,7 @@ static int printfInsertMeta() {
   printf("max sql length: \033[33m%d\033[0m\n", g_args.max_sql_len);
 
   printf("database count: \033[33m%d\033[0m\n", g_Dbs.dbCount);
 
   for (int i = 0; i < g_Dbs.dbCount; i++) {
     printf("database[\033[33m%d\033[0m]:\n", i);
     printf("  database[%d] name: \033[33m%s\033[0m\n", i, g_Dbs.db[i].dbName);
@@ -1220,16 +1221,19 @@ static int printfInsertMeta() {
 }
 
 static void printfInsertMetaToFile(FILE* fp) {
-  SHOW_PARSE_RESULT_START_TO_FILE(fp);
+
+  SHOW_PARSE_RESULT_START_TO_FILE(fp);
 
   fprintf(fp, "host: %s:%u\n", g_Dbs.host, g_Dbs.port);
   fprintf(fp, "user: %s\n", g_Dbs.user);
-  fprintf(fp, "password: %s\n", g_Dbs.password);
   fprintf(fp, "resultFile: %s\n", g_Dbs.resultFile);
   fprintf(fp, "thread num of insert data: %d\n", g_Dbs.threadCount);
   fprintf(fp, "thread num of create table: %d\n", g_Dbs.threadCountByCreateTbl);
+  fprintf(fp, "insert interval: %d\n", g_args.insert_interval);
+  fprintf(fp, "number of records per req: %d\n", g_args.num_of_RPR);
+  fprintf(fp, "max sql length: %d\n", g_args.max_sql_len);
   fprintf(fp, "database count: %d\n", g_Dbs.dbCount);
 
   for (int i = 0; i < g_Dbs.dbCount; i++) {
     fprintf(fp, "database[%d]:\n", i);
     fprintf(fp, "  database[%d] name: %s\n", i, g_Dbs.db[i].dbName);
@@ -1364,11 +1368,14 @@ static void printfInsertMetaToFile(FILE* fp) {
     }
     fprintf(fp, "\n");
   }
 
   SHOW_PARSE_RESULT_END_TO_FILE(fp);
 }
 
 static void printfQueryMeta() {
 
   SHOW_PARSE_RESULT_START();
 
   printf("host: \033[33m%s:%u\033[0m\n",
       g_queryInfo.host, g_queryInfo.port);
   printf("user: \033[33m%s\033[0m\n", g_queryInfo.user);
@@ -1411,11 +1418,11 @@ static void printfQueryMeta() {
   }
   printf("\n");
 
   SHOW_PARSE_RESULT_END();
 }
 
 
-static char* xFormatTimestamp(char* buf, int64_t val, int precision) {
+static char* formatTimestamp(char* buf, int64_t val, int precision) {
   time_t tt;
   if (precision == TSDB_TIME_PRECISION_MICRO) {
     tt = (time_t)(val / 1000000);
@@ -1447,7 +1454,9 @@ static char* xFormatTimestamp(char* buf, int64_t val, int precision) {
   return buf;
 }
 
-static void xDumpFieldToFile(FILE* fp, const char* val, TAOS_FIELD* field, int32_t length, int precision) {
+static void xDumpFieldToFile(FILE* fp, const char* val,
+        TAOS_FIELD* field, int32_t length, int precision) {
+
   if (val == NULL) {
     fprintf(fp, "%s", TSDB_DATA_NULL_STR);
     return;
@@ -1483,7 +1492,7 @@ static void xDumpFieldToFile(FILE* fp, const char* val, TAOS_FIELD* field, int32
       fprintf(fp, "\'%s\'", buf);
       break;
     case TSDB_DATA_TYPE_TIMESTAMP:
-      xFormatTimestamp(buf, *(int64_t*)val, precision);
+      formatTimestamp(buf, *(int64_t*)val, precision);
       fprintf(fp, "'%s'", buf);
       break;
     default:
@@ -1562,7 +1571,7 @@ static int getDbFromServer(TAOS * taos, SDbInfo** dbInfos) {
 
     tstrncpy(dbInfos[count]->name, (char *)row[TSDB_SHOW_DB_NAME_INDEX],
             fields[TSDB_SHOW_DB_NAME_INDEX].bytes);
-    xFormatTimestamp(dbInfos[count]->create_time,
+    formatTimestamp(dbInfos[count]->create_time,
             *(int64_t*)row[TSDB_SHOW_DB_CREATED_TIME_INDEX],
             TSDB_TIME_PRECISION_MILLI);
     dbInfos[count]->ntables = *((int32_t *)row[TSDB_SHOW_DB_NTABLES_INDEX]);
@@ -2576,12 +2585,12 @@ static int startMultiThreadCreateChildTable(
     t_info->ntables = i<b?a+1:a;
     t_info->end_table_to = i < b ? startFrom + a : startFrom + a - 1;
     startFrom = t_info->end_table_to + 1;
-    t_info->use_metric = 1;
+    t_info->use_metric = true;
     t_info->cols = cols;
     t_info->minDelay = INT16_MAX;
     pthread_create(pids + i, NULL, createTable, t_info);
   }
 
   for (int i = 0; i < threads; i++) {
     pthread_join(pids[i], NULL);
   }
@@ -2592,12 +2601,11 @@ static int startMultiThreadCreateChildTable(
   }
 
   free(pids);
   free(infos);
 
   return 0;
 }
 
 
 static void createChildTables() {
   char tblColsBuf[MAX_SQL_SIZE];
   int len;
@@ -3009,13 +3017,13 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
     goto PARSE_OVER;
   }
 
-  cJSON* rowsPerTbl = cJSON_GetObjectItem(root, "rows_per_tbl");
+  cJSON* rowsPerTbl = cJSON_GetObjectItem(root, "interlace_rows");
   if (rowsPerTbl && rowsPerTbl->type == cJSON_Number) {
-    g_args.rows_per_tbl = rowsPerTbl->valueint;
+    g_args.interlace_rows = rowsPerTbl->valueint;
   } else if (!rowsPerTbl) {
-    g_args.rows_per_tbl = 0; // 0 means progressive mode, > 0 mean interlace mode. max value is less or equ num_of_records_per_req
+    g_args.interlace_rows = 0; // 0 means progressive mode, > 0 mean interlace mode. max value is less or equ num_of_records_per_req
   } else {
-    errorPrint("%s() LN%d, failed to read json, rows_per_tbl input mistake\n", __func__, __LINE__);
+    errorPrint("%s() LN%d, failed to read json, interlace_rows input mistake\n", __func__, __LINE__);
     goto PARSE_OVER;
   }
 
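Per the comment retained in the hunk, 0 keeps the progressive mode (finish one table before moving to the next), while any positive value switches to interlaced writes across tables and must not exceed num_of_records_per_req. A small illustrative sketch of that selection logic (function and constant names are stand-ins, not the exact taosdemo code):

```
/* Sketch of how the renamed knob selects the write mode; the clamp
 * reflects the cap the JSON comment describes. */
enum { PROGRESSIVE_MODE = 0, INTERLACE_MODE = 1 };

int chooseInsertMode(int interlace_rows, int num_of_records_per_req) {
    if (interlace_rows > num_of_records_per_req)
        interlace_rows = num_of_records_per_req;  /* clamp to per-request cap */
    return (interlace_rows > 0) ? INTERLACE_MODE : PROGRESSIVE_MODE;
}
```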
@@ -3499,7 +3507,7 @@ static bool getMetaFromInsertJsonFile(cJSON* root) {
         goto PARSE_OVER;
       }
 
-      cJSON* rowsPerTbl = cJSON_GetObjectItem(stbInfo, "rows_per_tbl");
+      cJSON* rowsPerTbl = cJSON_GetObjectItem(stbInfo, "interlace_rows");
       if (rowsPerTbl && rowsPerTbl->type == cJSON_Number) {
         g_Dbs.db[i].superTbls[j].rowsPerTbl = rowsPerTbl->valueint;
       } else if (!rowsPerTbl) {
@@ -4212,7 +4220,7 @@ static void getTableName(char *pTblName, threadInfo* pThreadInfo, int tableSeq)
     }
   } else {
     snprintf(pTblName, TSDB_TABLE_NAME_LEN, "%s%d",
-        superTblInfo?superTblInfo->childTblPrefix:g_args.tb_prefix, tableSeq);
+        g_args.tb_prefix, tableSeq);
   }
 }
 
@@ -4287,13 +4295,14 @@ static int generateDataTail(char *tableName, int32_t tableSeq,
       if ((g_args.disorderRatio != 0)
                 && (rand_num < g_args.disorderRange)) {
 
-        int64_t d = startTime - taosRandom() % 1000000 + rand_num;
+        int64_t d = startTime + DEFAULT_TIMESTAMP_STEP * k
+            - taosRandom() % 1000000 + rand_num;
         len = generateData(data, data_type,
                 ncols_per_record, d, lenOfBinary);
       } else {
         len = generateData(data, data_type,
                 ncols_per_record,
-                startTime + DEFAULT_TIMESTAMP_STEP * startFrom,
+                startTime + DEFAULT_TIMESTAMP_STEP * k,
                 lenOfBinary);
       }
 
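Together with the DEFAULT_TIMESTAMP_STEP change from 10 to 1 earlier in the file, this hunk keys each row's timestamp to the batch loop index k rather than to startFrom, so consecutive rows advance by one step each and the disorder branch jitters around that forward-moving base instead of the batch start. A rough sketch of the effect (illustrative only, not the exact generator):

```
#include <stdint.h>
#include <stdlib.h>

#define DEFAULT_TIMESTAMP_STEP 1  /* ms per row, per the change above */

/* Illustrative only: row k of a batch gets a monotonically advancing
 * timestamp; "disordered" rows are jittered around that moving base,
 * roughly mirroring the diff's rand_num/taosRandom arithmetic. */
int64_t rowTimestamp(int64_t startTime, int k, int disordered) {
    int64_t ts = startTime + (int64_t)DEFAULT_TIMESTAMP_STEP * k;
    if (disordered)
        ts += rand() % 1000 - rand() % 1000000;  /* rough backward jitter */
    return ts;
}
```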
@@ -4355,7 +4364,7 @@ static int generateSQLHead(char *tableName, int32_t tableSeq,
                 tableName);
   } else {
     len = snprintf(buffer,
-            (superTblInfo?superTblInfo->maxSqlLen:g_args.max_sql_len),
+            superTblInfo->maxSqlLen,
             "insert into %s.%s values",
             pThreadInfo->db_name,
             tableName);
@@ -4402,7 +4411,8 @@ static int generateDataBuffer(char *pTblName,
   int k;
   int dataLen;
   k = generateDataTail(pTblName, tableSeq, pThreadInfo, superTblInfo,
-          g_args.num_of_RPR, pstr, insertRows, startFrom, startTime,
+          g_args.num_of_RPR, pstr, insertRows, startFrom,
+          startTime,
           pSamplePos, &dataLen);
   return k;
 }
@@ -4424,7 +4434,7 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
   int insertMode;
   char tableName[TSDB_TABLE_NAME_LEN];
 
-  int rowsPerTbl = superTblInfo?superTblInfo->rowsPerTbl:g_args.rows_per_tbl;
+  int rowsPerTbl = superTblInfo?superTblInfo->rowsPerTbl:g_args.interlace_rows;
 
   if (rowsPerTbl > 0) {
     insertMode = INTERLACE_INSERT_MODE;
@@ -4475,7 +4485,6 @@ static void* syncWriteInterlace(threadInfo *pThreadInfo) {
   int generatedRecPerTbl = 0;
   bool flagSleep = true;
   int sleepTimeTotal = 0;
-  int timeShift = 0;
   while(pThreadInfo->totalInsertRows < pThreadInfo->ntables * insertRows) {
     if ((flagSleep) && (insert_interval)) {
       st = taosGetTimestampUs();
@@ -4513,20 +4522,23 @@
       generateDataTail(
           tableName, tableSeq, pThreadInfo, superTblInfo,
           batchPerTbl, pstr, insertRows, 0,
-          startTime + timeShift + sleepTimeTotal,
+          startTime,
           &(pThreadInfo->samplePos), &dataLen);
 
       pstr += dataLen;
       recOfBatch += batchPerTbl;
+      startTime += batchPerTbl * superTblInfo->timeStampStep;
       pThreadInfo->totalInsertRows += batchPerTbl;
 
       verbosePrint("[%d] %s() LN%d batchPerTbl=%d recOfBatch=%d\n",
                 pThreadInfo->threadID, __func__, __LINE__,
                 batchPerTbl, recOfBatch);
 
-      timeShift ++;
       tableSeq ++;
       if (insertMode == INTERLACE_INSERT_MODE) {
           if (tableSeq == pThreadInfo->start_table_from + pThreadInfo->ntables) {
             // turn to first table
+            startTime += batchPerTbl * superTblInfo->timeStampStep;
             tableSeq = pThreadInfo->start_table_from;
             generatedRecPerTbl += batchPerTbl;
             flagSleep = true;
@@ -4668,13 +4680,14 @@ static void* syncWriteProgressive(threadInfo *pThreadInfo) {
 
     int generated = generateDataBuffer(
             tableName, tableSeq, pThreadInfo, buffer, insertRows,
-            i, start_time + pThreadInfo->totalInsertRows * timeStampStep,
+            i, start_time,
            &(pThreadInfo->samplePos));
     if (generated > 0)
       i += generated;
     else
       goto free_and_statistics_2;
 
+    start_time += generated * timeStampStep;
     pThreadInfo->totalInsertRows += generated;
 
     startTs = taosGetTimestampUs();
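Both write paths now advance the base timestamp by the number of rows a batch actually produced times the step, instead of deriving it from totalInsertRows or a separate timeShift counter. That way timestamps stay contiguous even when a batch yields fewer rows than requested. A sketch of the assumed simplification:

```
#include <stdint.h>

/* Illustrative only: after each batch, the per-table base timestamp is
 * advanced by what was actually written, as the progressive path above
 * now does with start_time += generated * timeStampStep. */
int64_t nextBatchStart(int64_t start_time, int generated, int stepMs) {
    return start_time + (int64_t)generated * stepMs;
}
```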
@@ -4743,7 +4756,7 @@ static void* syncWrite(void *sarg) {
   threadInfo *winfo = (threadInfo *)sarg;
   SSuperTable* superTblInfo = winfo->superTblInfo;
 
-  int rowsPerTbl = superTblInfo?superTblInfo->rowsPerTbl:g_args.rows_per_tbl;
+  int rowsPerTbl = superTblInfo?superTblInfo->rowsPerTbl:g_args.interlace_rows;
 
   if (rowsPerTbl > 0) {
     // interlace mode
@@ -5471,17 +5484,21 @@ static int queryTestProcess() {
       && g_queryInfo.superQueryInfo.concurrent > 0) {
 
     pids = malloc(g_queryInfo.superQueryInfo.concurrent * sizeof(pthread_t));
+    if (NULL == pids) {
+      taos_close(taos);
+      ERROR_EXIT("memory allocation failed\n");
+    }
     infos = malloc(g_queryInfo.superQueryInfo.concurrent * sizeof(threadInfo));
-    if ((NULL == pids) || (NULL == infos)) {
-      printf("malloc failed for create threads\n");
+    if (NULL == infos) {
       taos_close(taos);
-      exit(-1);
+      free(pids);
+      ERROR_EXIT("memory allocation failed for create threads\n");
     }
 
     for (int i = 0; i < g_queryInfo.superQueryInfo.concurrent; i++) {
       threadInfo *t_info = infos + i;
       t_info->threadID = i;
 
       if (0 == strncasecmp(g_queryInfo.queryMode, "taosc", 5)) {
         t_info->taos = taos;
@@ -5489,6 +5506,8 @@
         sprintf(sqlStr, "use %s", g_queryInfo.dbName);
         verbosePrint("%s() %d sqlStr: %s\n", __func__, __LINE__, sqlStr);
         if (0 != queryDbExec(t_info->taos, sqlStr, NO_INSERT_TYPE)) {
+          free(infos);
+          free(pids);
           errorPrint( "use database %s failed!\n\n",
                 g_queryInfo.dbName);
           return -1;
@@ -5496,7 +5515,7 @@
       } else {
         t_info->taos = NULL;
       }
 
       pthread_create(pids + i, NULL, superQueryProcess, t_info);
     }
   }else {
@@ -5509,11 +5528,21 @@
   if ((g_queryInfo.subQueryInfo.sqlCount > 0)
       && (g_queryInfo.subQueryInfo.threadCnt > 0)) {
     pidsOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * sizeof(pthread_t));
-    infosOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * sizeof(threadInfo));
-    if ((NULL == pidsOfSub) || (NULL == infosOfSub)) {
-      printf("malloc failed for create threads\n");
+    if (NULL == pidsOfSub) {
       taos_close(taos);
-      exit(-1);
+      free(infos);
+      free(pids);
+
+      ERROR_EXIT("memory allocation failed for create threads\n");
+    }
+
+    infosOfSub = malloc(g_queryInfo.subQueryInfo.threadCnt * sizeof(threadInfo));
+    if (NULL == infosOfSub) {
+      taos_close(taos);
+      free(pidsOfSub);
+      free(infos);
+      free(pids);
+      ERROR_EXIT("memory allocation failed for create threads\n");
     }
 
     int ntables = g_queryInfo.subQueryInfo.childTblCount;
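Both allocation hunks converge on the same discipline: test each malloc individually and release everything acquired so far before bailing out, rather than lumping the checks together and leaking on exit. A condensed sketch of the pattern (the ERROR_EXIT stand-in below is defined locally for illustration; taosdemo.c has its own):

```
#include <stdio.h>
#include <stdlib.h>

#define ERROR_EXIT(msg) do { fprintf(stderr, "%s", msg); exit(EXIT_FAILURE); } while (0)

/* Illustrative only: check each allocation as it is made, and undo
 * earlier allocations before exiting on failure. */
static void allocateTwo(size_t n) {
    void *pids = malloc(n);
    if (pids == NULL) {
        ERROR_EXIT("memory allocation failed\n");
    }
    void *infos = malloc(n);
    if (infos == NULL) {
        free(pids);                 /* undo the first allocation */
        ERROR_EXIT("memory allocation failed for create threads\n");
    }
    /* ... use pids/infos ... */
    free(infos);
    free(pids);
}
```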
@@ -5544,62 +5573,63 @@
     }
 
     g_queryInfo.subQueryInfo.threadCnt = threads;
-  }else {
+  } else {
     g_queryInfo.subQueryInfo.threadCnt = 0;
   }
 
   for (int i = 0; i < g_queryInfo.superQueryInfo.concurrent; i++) {
     pthread_join(pids[i], NULL);
   }
 
   tmfree((char*)pids);
   tmfree((char*)infos);
 
   for (int i = 0; i < g_queryInfo.subQueryInfo.threadCnt; i++) {
     pthread_join(pidsOfSub[i], NULL);
   }
 
   tmfree((char*)pidsOfSub);
   tmfree((char*)infosOfSub);
 
   taos_close(taos);
   return 0;
 }
 
 static void subscribe_callback(TAOS_SUB* tsub, TAOS_RES *res, void* param, int code) {
   if (res == NULL || taos_errno(res) != 0) {
-    printf("failed to subscribe result, code:%d, reason:%s\n", code, taos_errstr(res));
+    errorPrint("%s() LN%d, failed to subscribe result, code:%d, reason:%s\n",
+           __func__, __LINE__, code, taos_errstr(res));
     return;
   }
 
   getResult(res, (char*)param);
   taos_free_result(res);
 }
 
 static TAOS_SUB* subscribeImpl(TAOS *taos, char *sql, char* topic, char* resultFileName) {
   TAOS_SUB* tsub = NULL;
 
   if (g_queryInfo.superQueryInfo.subscribeMode) {
     tsub = taos_subscribe(taos,
             g_queryInfo.superQueryInfo.subscribeRestart,
             topic, sql, subscribe_callback, (void*)resultFileName,
             g_queryInfo.superQueryInfo.subscribeInterval);
   } else {
     tsub = taos_subscribe(taos,
             g_queryInfo.superQueryInfo.subscribeRestart,
             topic, sql, NULL, NULL, 0);
   }
 
   if (tsub == NULL) {
     printf("failed to create subscription. topic:%s, sql:%s\n", topic, sql);
     return NULL;
   }
 
   return tsub;
 }
 
 static void *subSubscribeProcess(void *sarg) {
   threadInfo *winfo = (threadInfo *)sarg;
   char subSqlstr[1024];
 
   char sqlStr[MAX_TB_NAME_SIZE*2];
@@ -5608,7 +5638,7 @@ static void *subSubscribeProcess(void *sarg) {
   if (0 != queryDbExec(winfo->taos, sqlStr, NO_INSERT_TYPE)){
     return NULL;
   }
 
   //int64_t st = 0;
   //int64_t et = 0;
   do {
@@ -5643,13 +5673,13 @@ static void *subSubscribeProcess(void *sarg) {
       if (1 == g_queryInfo.subQueryInfo.subscribeMode) {
         continue;
       }
 
       res = taos_consume(g_queryInfo.subQueryInfo.tsub[i]);
       if (res) {
         char tmpFile[MAX_FILE_NAME_LEN*2] = {0};
         if (g_queryInfo.subQueryInfo.result[i][0] != 0) {
           sprintf(tmpFile, "%s-%d",
                   g_queryInfo.subQueryInfo.result[i],
                   winfo->threadID);
         }
         getResult(res, tmpFile);
@@ -5657,16 +5687,16 @@ static void *subSubscribeProcess(void *sarg) {
       }
     }
   taos_free_result(res);
 
   for (int i = 0; i < g_queryInfo.subQueryInfo.sqlCount; i++) {
     taos_unsubscribe(g_queryInfo.subQueryInfo.tsub[i],
             g_queryInfo.subQueryInfo.subscribeKeepProgress);
   }
   return NULL;
 }
 
 static void *superSubscribeProcess(void *sarg) {
   threadInfo *winfo = (threadInfo *)sarg;
 
   char sqlStr[MAX_TB_NAME_SIZE*2];
   sprintf(sqlStr, "use %s", g_queryInfo.dbName);
@@ -5674,7 +5704,7 @@ static void *superSubscribeProcess(void *sarg) {
   if (0 != queryDbExec(winfo->taos, sqlStr, NO_INSERT_TYPE)) {
     return NULL;
   }
 
   //int64_t st = 0;
   //int64_t et = 0;
   do {
@@ -5689,13 +5719,13 @@ static void *superSubscribeProcess(void *sarg) {
     sprintf(topic, "taosdemo-subscribe-%d", i);
     char tmpFile[MAX_FILE_NAME_LEN*2] = {0};
     if (g_queryInfo.subQueryInfo.result[i][0] != 0) {
       sprintf(tmpFile, "%s-%d",
               g_queryInfo.superQueryInfo.result[i], winfo->threadID);
     }
     g_queryInfo.superQueryInfo.tsub[i] =
         subscribeImpl(winfo->taos,
             g_queryInfo.superQueryInfo.sql[i],
             topic, tmpFile);
     if (NULL == g_queryInfo.superQueryInfo.tsub[i]) {
       return NULL;
     }
@@ -5711,12 +5741,12 @@ static void *superSubscribeProcess(void *sarg) {
       if (1 == g_queryInfo.superQueryInfo.subscribeMode) {
         continue;
       }
 
       res = taos_consume(g_queryInfo.superQueryInfo.tsub[i]);
       if (res) {
         char tmpFile[MAX_FILE_NAME_LEN*2] = {0};
         if (g_queryInfo.superQueryInfo.result[i][0] != 0) {
           sprintf(tmpFile, "%s-%d",
                   g_queryInfo.superQueryInfo.result[i], winfo->threadID);
         }
         getResult(res, tmpFile);
@@ -5726,7 +5756,7 @@ static void *superSubscribeProcess(void *sarg) {
   taos_free_result(res);
 
   for (int i = 0; i < g_queryInfo.superQueryInfo.sqlCount; i++) {
     taos_unsubscribe(g_queryInfo.superQueryInfo.tsub[i],
             g_queryInfo.superQueryInfo.subscribeKeepProgress);
   }
   return NULL;
|
||||||
|
|
||||||
static void testCmdLine() {
|
static void testCmdLine() {
|
||||||
|
|
||||||
g_args.test_mode = INSERT_TEST;
|
g_args.test_mode = INSERT_TEST;
|
||||||
insertTestProcess();
|
insertTestProcess();
|
||||||
|
|
||||||
if (g_Dbs.insert_only)
|
if (g_Dbs.insert_only)
|
||||||
return;
|
return;
|
||||||
else
|
else
|
||||||
queryResult();
|
queryResult();
|
||||||
}
|
}
|
||||||
|
|
||||||
int main(int argc, char *argv[]) {
|
int main(int argc, char *argv[]) {
|
||||||
|
@ -6145,7 +6175,7 @@ int main(int argc, char *argv[]) {
|
||||||
|
|
||||||
if (g_args.metaFile) {
|
if (g_args.metaFile) {
|
||||||
initOfInsertMeta();
|
initOfInsertMeta();
|
||||||
initOfQueryMeta();
|
initOfQueryMeta();
|
||||||
|
|
||||||
if (false == getInfoFromJsonFile(g_args.metaFile)) {
|
if (false == getInfoFromJsonFile(g_args.metaFile)) {
|
||||||
printf("Failed to read %s\n", g_args.metaFile);
|
printf("Failed to read %s\n", g_args.metaFile);
|
||||||
|
@ -6159,12 +6189,12 @@ int main(int argc, char *argv[]) {
|
||||||
|
|
||||||
if (NULL != g_args.sqlFile) {
|
if (NULL != g_args.sqlFile) {
|
||||||
TAOS* qtaos = taos_connect(
|
TAOS* qtaos = taos_connect(
|
||||||
g_Dbs.host,
|
g_Dbs.host,
|
||||||
g_Dbs.user,
|
g_Dbs.user,
|
||||||
g_Dbs.password,
|
g_Dbs.password,
|
||||||
g_Dbs.db[0].dbName,
|
g_Dbs.db[0].dbName,
|
||||||
g_Dbs.port);
|
g_Dbs.port);
|
||||||
querySqlFile(qtaos, g_args.sqlFile);
|
querySqlFile(qtaos, g_args.sqlFile);
|
||||||
taos_close(qtaos);
|
taos_close(qtaos);
|
||||||
|
|
||||||
} else {
|
} else {
|
||||||
|
|
|
@@ -27,7 +27,7 @@ SDisk *tfsNewDisk(int level, int id, const char *dir) {
 
   pDisk->level = level;
   pDisk->id = id;
-  strncpy(pDisk->dir, dir, TSDB_FILENAME_LEN);
+  tstrncpy(pDisk->dir, dir, TSDB_FILENAME_LEN);
 
   return pDisk;
 }
@@ -187,7 +187,7 @@ void tfsInitFile(TFILE *pf, int level, int id, const char *bname) {
 
   pf->level = level;
   pf->id = id;
-  strncpy(pf->rname, bname, TSDB_FILENAME_LEN);
+  tstrncpy(pf->rname, bname, TSDB_FILENAME_LEN);
 
   char tmpName[TMPNAME_LEN] = {0};
   snprintf(tmpName, TMPNAME_LEN, "%s/%s", DISK_DIR(pDisk), bname);
@@ -230,15 +230,15 @@ void *tfsDecodeFile(void *buf, TFILE *pf) {
 void tfsbasename(const TFILE *pf, char *dest) {
   char tname[TSDB_FILENAME_LEN] = "\0";
 
-  strncpy(tname, pf->aname, TSDB_FILENAME_LEN);
-  strncpy(dest, basename(tname), TSDB_FILENAME_LEN);
+  tstrncpy(tname, pf->aname, TSDB_FILENAME_LEN);
+  tstrncpy(dest, basename(tname), TSDB_FILENAME_LEN);
 }
 
 void tfsdirname(const TFILE *pf, char *dest) {
   char tname[TSDB_FILENAME_LEN] = "\0";
 
-  strncpy(tname, pf->aname, TSDB_FILENAME_LEN);
-  strncpy(dest, dirname(tname), TSDB_FILENAME_LEN);
+  tstrncpy(tname, pf->aname, TSDB_FILENAME_LEN);
+  tstrncpy(dest, dirname(tname), TSDB_FILENAME_LEN);
 }
 
 // DIR APIs ====================================
@@ -344,7 +344,7 @@ TDIR *tfsOpendir(const char *rname) {
   }
 
   tfsInitDiskIter(&(tdir->iter));
-  strncpy(tdir->dirname, rname, TSDB_FILENAME_LEN);
+  tstrncpy(tdir->dirname, rname, TSDB_FILENAME_LEN);
 
   if (tfsOpendirImpl(tdir) < 0) {
     free(tdir);
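All of the tfs.c hunks swap strncpy for tstrncpy. strncpy leaves the destination unterminated whenever the source is at least as long as the buffer; a terminating wrapper is the usual fix. TDengine's actual tstrncpy lives in its util headers and may differ in detail, but the general shape of such a wrapper is:

```
#include <string.h>

/* Sketch of a bounded copy that always NUL-terminates -- the usual
 * motivation for replacing raw strncpy. Illustrative, not TDengine's
 * real definition. */
char *my_tstrncpy(char *dst, const char *src, size_t size) {
    if (size == 0) return dst;
    strncpy(dst, src, size - 1);
    dst[size - 1] = '\0';   /* strncpy alone omits this when truncating */
    return dst;
}
```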
@@ -334,7 +334,7 @@ static FORCE_INLINE int tsdbOpenDFileSet(SDFileSet* pSet, int flags) {
 
 static FORCE_INLINE void tsdbRemoveDFileSet(SDFileSet* pSet) {
   for (TSDB_FILE_T ftype = 0; ftype < TSDB_FILE_MAX; ftype++) {
-    tsdbRemoveDFile(TSDB_DFILE_IN_SET(pSet, ftype));
+    (void)tsdbRemoveDFile(TSDB_DFILE_IN_SET(pSet, ftype));
   }
 }
 
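This and the following tsdb hunks prefix best-effort calls with a `(void)` cast. The cast documents that a return value is deliberately discarded and silences warn_unused_result-style diagnostics on cleanup and error paths. A minimal illustration (names hypothetical):

```
#include <stdio.h>

/* Stand-in for a call whose failure is tolerable on a cleanup path. */
static int mayFail(void) { return -1; }

int main(void) {
    /* (void) marks the discard as intentional; without it, compilers
     * configured with -Wunused-result may warn here. */
    (void)mayFail();
    puts("cleanup continued regardless");
    return 0;
}
```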
@@ -164,7 +164,7 @@ static int tsdbCommitMeta(STsdbRepo *pRepo) {
         tsdbError("vgId:%d failed to update META record, uid %" PRIu64 " since %s", REPO_ID(pRepo), pAct->uid,
                   tstrerror(terrno));
         tsdbCloseMFile(&mf);
-        tsdbApplyMFileChange(&mf, pOMFile);
+        (void)tsdbApplyMFileChange(&mf, pOMFile);
         // TODO: need to reload metaCache
         return -1;
       }
@@ -304,7 +304,7 @@ static int tsdbCommitTSData(STsdbRepo *pRepo) {
   SDFileSet *pSet = NULL;
   int fid;
 
-  memset(&commith, 0, sizeof(SMemTable *));
+  memset(&commith, 0, sizeof(commith));
 
   if (pMem->numOfRows <= 0) {
     // No memory data, just apply retention on each file on disk
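The memset fix above is worth dwelling on: the old call sized the zeroing by an unrelated pointer type, so only sizeof(void *) bytes (8 on LP64) of the commit handle were cleared and the rest stayed uninitialized. Sizing by the object itself avoids the trap. A standalone illustration (SCommitH here is a stand-in type, not the real one):

```
#include <string.h>

typedef struct { char body[128]; } SCommitH;  /* stand-in for the real type */

void example(void) {
    SCommitH commith;
    /* Buggy: sized by an unrelated pointer type, so only 8 bytes
     * (on LP64) of the 128-byte struct are zeroed. */
    memset(&commith, 0, sizeof(SCommitH *));
    /* Fixed: size the memset by the object itself. */
    memset(&commith, 0, sizeof(commith));
}
```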
@@ -399,9 +399,9 @@ static void tsdbEndCommit(STsdbRepo *pRepo, int eno) {
   if (pRepo->appH.notifyStatus) pRepo->appH.notifyStatus(pRepo->appH.appH, TSDB_STATUS_COMMIT_OVER, eno);
 
   SMemTable *pIMem = pRepo->imem;
-  tsdbLockRepo(pRepo);
+  (void)tsdbLockRepo(pRepo);
   pRepo->imem = NULL;
-  tsdbUnlockRepo(pRepo);
+  (void)tsdbUnlockRepo(pRepo);
   tsdbUnRefMemTable(pRepo, pIMem);
   tsem_post(&(pRepo->readyToCommit));
 }
@@ -1136,12 +1136,12 @@ static int tsdbMoveBlock(SCommitH *pCommith, int bidx) {
 }
 
 static int tsdbCommitAddBlock(SCommitH *pCommith, const SBlock *pSupBlock, const SBlock *pSubBlocks, int nSubBlocks) {
-  if (taosArrayPush(pCommith->aSupBlk, pSupBlock) < 0) {
+  if (taosArrayPush(pCommith->aSupBlk, pSupBlock) == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
     return -1;
   }
 
-  if (pSubBlocks && taosArrayPushBatch(pCommith->aSubBlk, pSubBlocks, nSubBlocks) < 0) {
+  if (pSubBlocks && taosArrayPushBatch(pCommith->aSubBlk, pSubBlocks, nSubBlocks) == NULL) {
     terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
     return -1;
   }
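taosArrayPush and taosArrayPushBatch return a pointer (NULL on allocation failure), so the old `< 0` test compared a pointer against zero and was effectively always false, letting out-of-memory pass undetected. A sketch of the corrected check (the push function below is an assumed stand-in for the real taosArray API, included only to make the example self-contained):

```
#include <stdlib.h>
#include <string.h>

/* Stand-in for the real taosArray API: returns the address of the
 * stored copy, or NULL when allocation fails. */
static void *arrayPush(void **slot, const void *elem, size_t size) {
    void *copy = malloc(size);
    if (copy != NULL) {
        memcpy(copy, elem, size);
        *slot = copy;
    }
    return copy;
}

int addBlock(void **slot, const void *blk, size_t size) {
    /* Correct: a pointer result is tested against NULL; the old
     * "push(...) < 0" form could never signal the OOM case. */
    if (arrayPush(slot, blk, size) == NULL) {
        return -1;   /* out of memory */
    }
    return 0;
}
```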
@@ -1379,7 +1379,7 @@ static int tsdbSetAndOpenCommitFile(SCommitH *pCommith, SDFileSet *pSet, int fid
               tstrerror(terrno));
 
     tsdbCloseDFileSet(pWSet);
-    tsdbRemoveDFile(pWHeadf);
+    (void)tsdbRemoveDFile(pWHeadf);
     if (pCommith->isRFileSet) {
       tsdbCloseAndUnsetFSet(&(pCommith->readh));
       return -1;
@@ -380,7 +380,7 @@ static int tsdbSaveFSStatus(SFSStatus *pStatus, int vid) {
   if (taosWrite(fd, pBuf, fsheader.len) < fsheader.len) {
     terrno = TAOS_SYSTEM_ERROR(errno);
     close(fd);
-    remove(tfname);
+    (void)remove(tfname);
     taosTZfree(pBuf);
     return -1;
   }
@@ -413,7 +413,7 @@ static void tsdbApplyFSTxnOnDisk(SFSStatus *pFrom, SFSStatus *pTo) {
   sizeTo = taosArrayGetSize(pTo->df);
 
   // Apply meta file change
-  tsdbApplyMFileChange(pFrom->pmf, pTo->pmf);
+  (void)tsdbApplyMFileChange(pFrom->pmf, pTo->pmf);
 
   // Apply SDFileSet change
   if (ifrom >= sizeFrom) {
@@ -853,7 +853,7 @@ static int tsdbScanRootDir(STsdbRepo *pRepo) {
       continue;
     }
 
-    tfsremove(pf);
+    (void)tfsremove(pf);
     tsdbDebug("vgId:%d invalid file %s is removed", REPO_ID(pRepo), TFILE_NAME(pf));
   }
 
@@ -879,7 +879,7 @@ static int tsdbScanDataDir(STsdbRepo *pRepo) {
     tfsbasename(pf, bname);
 
     if (!tsdbIsTFileInFS(pfs, pf)) {
-      tfsremove(pf);
+      (void)tfsremove(pf);
       tsdbDebug("vgId:%d invalid file %s is removed", REPO_ID(pRepo), TFILE_NAME(pf));
     }
   }
@@ -939,7 +939,7 @@ static int tsdbRestoreMeta(STsdbRepo *pRepo) {
       if (strcmp(bname, tsdbTxnFname[TSDB_TXN_TEMP_FILE]) == 0) {
         // Skip current.t file
         tsdbInfo("vgId:%d file %s exists, remove it", REPO_ID(pRepo), TFILE_NAME(pf));
-        tfsremove(pf);
+        (void)tfsremove(pf);
         continue;
       }
 
@@ -1045,7 +1045,7 @@ static int tsdbRestoreDFileSet(STsdbRepo *pRepo) {
 
       int code = regexec(&regex, bname, 0, NULL, 0);
       if (code == 0) {
-        if (taosArrayPush(fArray, (void *)pf) < 0) {
+        if (taosArrayPush(fArray, (void *)pf) == NULL) {
           terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
           tfsClosedir(tdir);
           taosArrayDestroy(fArray);
@@ -1055,7 +1055,7 @@ static int tsdbRestoreDFileSet(STsdbRepo *pRepo) {
       } else if (code == REG_NOMATCH) {
         // Not match
         tsdbInfo("vgId:%d invalid file %s exists, remove it", REPO_ID(pRepo), TFILE_NAME(pf));
-        tfsremove(pf);
+        (void)tfsremove(pf);
         continue;
       } else {
         // Has other error
@@ -523,7 +523,7 @@ static int tsdbApplyDFileChange(SDFile *from, SDFile *to) {
         tsdbRollBackDFile(to);
       }
     } else {
-      tsdbRemoveDFile(from);
+      (void)tsdbRemoveDFile(from);
     }
   }
 }
@@ -139,7 +139,7 @@ int tsdbLoadBlockIdx(SReadH *pReadh) {
     ptr = tsdbDecodeSBlockIdx(ptr, &blkIdx);
     ASSERT(ptr != NULL);
 
-    if (taosArrayPush(pReadh->aBlkIdx, (void *)(&blkIdx)) < 0) {
+    if (taosArrayPush(pReadh->aBlkIdx, (void *)(&blkIdx)) == NULL) {
       terrno = TSDB_CODE_TDB_OUT_OF_MEMORY;
       return -1;
     }
@@ -47,7 +47,6 @@ static FORCE_INLINE int taosCalcChecksumAppend(TSCKSUM csi, uint8_t *stream, uin
 }
 
 static FORCE_INLINE int taosCheckChecksum(const uint8_t *stream, uint32_t ssize, TSCKSUM checksum) {
-  if (ssize < 0) return 0;
   return (checksum == (*crc32c)(0, stream, (size_t)ssize));
 }
 
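The deleted guard was dead code: ssize is uint32_t, and an unsigned value can never be negative, so compilers flag `ssize < 0` as a tautological comparison and optimize the branch away. A minimal demonstration:

```
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint32_t ssize = 0;
    /* Always false for an unsigned operand; GCC/Clang typically emit a
     * "comparison is always false" warning and drop the branch. */
    if (ssize < 0) puts("unreachable");
    return 0;
}
```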
@@ -38,7 +38,7 @@
   "insert_rows": 100,
   "multi_thread_write_one_tbl": "no",
   "number_of_tbl_in_one_sql": 0,
-  "rows_per_tbl": 3,
+  "interlace_rows": 3,
   "max_sql_len": 1024,
   "disorder_ratio": 0,
   "disorder_range": 1000,
@@ -28,6 +28,8 @@ RUN ulimit -c unlimited
 
 COPY --from=builder /root/bin/taosd /usr/bin
 COPY --from=builder /root/bin/tarbitrator /usr/bin
+COPY --from=builder /root/bin/taosdemo /usr/bin
+COPY --from=builder /root/bin/taosdump /usr/bin
 COPY --from=builder /root/bin/taos /usr/bin
 COPY --from=builder /root/cfg/taos.cfg /etc/taos/
 COPY --from=builder /root/lib/libtaos.so.* /usr/lib/libtaos.so.1
@@ -45,8 +45,8 @@ class BuildDockerCluser:
         os.system("docker exec -d $(docker ps|grep tdnode1|awk '{print $1}') tarbitrator")
 
     def run(self):
-        if self.numOfNodes < 2 or self.numOfNodes > 5:
-            print("the number of nodes must be between 2 and 5")
+        if self.numOfNodes < 2 or self.numOfNodes > 10:
+            print("the number of nodes must be between 2 and 10")
             exit(0)
         print("remove Flag value %s" % self.removeFlag)
         if self.removeFlag == False:
@@ -96,7 +96,7 @@ parser.add_argument(
     '-v',
     '--version',
     action='store',
-    default='2.0.17.1',
+    default='2.0.18.1',
     type=str,
     help='the version of the cluster to be build, Default is 2.0.17.1')
 parser.add_argument(
@@ -1,6 +1,7 @@
 #!/bin/bash
 echo "Executing buildClusterEnv.sh"
 CURR_DIR=`pwd`
+IN_TDINTERNAL="community"
 
 if [ $# != 6 ]; then
   echo "argument list need input : "
@@ -32,7 +33,7 @@ do
 done
 
 function addTaoscfg {
-  for i in {1..5}
+  for((i=1;i<=$NUM_OF_NODES;i++))
   do
     touch $DOCKER_DIR/node$i/cfg/taos.cfg
     echo 'firstEp tdnode1:6030' > $DOCKER_DIR/node$i/cfg/taos.cfg
@@ -42,7 +43,7 @@ function addTaoscfg {
 }
 
 function createDIR {
-  for i in {1..5}
+  for((i=1;i<=$NUM_OF_NODES;i++))
   do
     mkdir -p $DOCKER_DIR/node$i/data
     mkdir -p $DOCKER_DIR/node$i/log
|
@ -53,7 +54,7 @@ function createDIR {
|
||||||
|
|
||||||
function cleanEnv {
|
function cleanEnv {
|
||||||
echo "Clean up docker environment"
|
echo "Clean up docker environment"
|
||||||
for i in {1..5}
|
for((i=1;i<=$NUM_OF_NODES;i++))
|
||||||
do
|
do
|
||||||
rm -rf $DOCKER_DIR/node$i/data/*
|
rm -rf $DOCKER_DIR/node$i/data/*
|
||||||
rm -rf $DOCKER_DIR/node$i/log/*
|
rm -rf $DOCKER_DIR/node$i/log/*
|
||||||
|
@@ -68,23 +69,48 @@ function prepareBuild {
   fi
 
   if [ ! -e $DOCKER_DIR/TDengine-server-$VERSION-Linux-x64.tar.gz ] || [ ! -e $DOCKER_DIR/TDengine-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
     cd $CURR_DIR/../../../../packaging
+    echo $CURR_DIR
+    echo $IN_TDINTERNAL
     echo "generating TDeninger packages"
-    ./release.sh -v edge -n $VERSION >> /dev/null
-
-    if [ ! -e $CURR_DIR/../../../../release/TDengine-server-$VERSION-Linux-x64.tar.gz ]; then
-      echo "no TDengine install package found"
-      exit 1
+    if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
+      pwd
+      ./release.sh -v cluster -n $VERSION >> /dev/null 2>&1
+    else
+      pwd
+      ./release.sh -v edge -n $VERSION >> /dev/null 2>&1
     fi
 
-    if [ ! -e $CURR_DIR/../../../../release/TDengine-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
-      echo "no arbitrator install package found"
-      exit 1
+    if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
+      if [ ! -e $CURR_DIR/../../../../release/TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz ]; then
+        echo "no TDengine install package found"
+        exit 1
+      fi
+
+      if [ ! -e $CURR_DIR/../../../../release/TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
+        echo "no arbitrator install package found"
+        exit 1
+      fi
+    else
+      if [ ! -e $CURR_DIR/../../../../release/TDengine-server-$VERSION-Linux-x64.tar.gz ]; then
+        echo "no TDengine install package found"
+        exit 1
+      fi
+
+      if [ ! -e $CURR_DIR/../../../../release/TDengine-arbitrator-$VERSION-Linux-x64.tar.gz ]; then
+        echo "no arbitrator install package found"
+        exit 1
+      fi
     fi
 
     cd $CURR_DIR/../../../../release
-    mv TDengine-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
-    mv TDengine-arbitrator-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+    if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
+      mv TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+      mv TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+    else
+      mv TDengine-server-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+      mv TDengine-arbitrator-$VERSION-Linux-x64.tar.gz $DOCKER_DIR
+    fi
   fi
 
   rm -rf $DOCKER_DIR/*.yml
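The branching added above keys off whether the working directory contains the marker stored in IN_TDINTERNAL, using bash [[ ]] glob matching as a substring test to pick enterprise (cluster) or community (edge) packaging. A minimal sketch of the idiom, with an assumed path for illustration:

#!/bin/bash
CURR_DIR=/home/builder/TDinternal/community/tests/pytest/dockerCluster
IN_TDINTERNAL="community"

# Unquoted * on both sides of the pattern turns == into a substring match.
if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
  echo "internal tree: package with ./release.sh -v cluster"
else
  echo "community checkout: package with ./release.sh -v edge"
fi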
@@ -99,23 +125,31 @@ function clusterUp {
 
   cd $DOCKER_DIR
 
-  if [ $NUM_OF_NODES -eq 2 ]; then
-    echo "create 2 dnodes"
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose up -d
+  if [[ "$CURR_DIR" == *"$IN_TDINTERNAL"* ]]; then
+    docker_run="PACKAGE=TDengine-enterprise-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-enterprise-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-enterprise-server-$VERSION DIR2=TDengine-enterprise-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml "
+  else
+    docker_run="PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml "
   fi
 
-  if [ $NUM_OF_NODES -eq 3 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml up -d
-  fi
-
-  if [ $NUM_OF_NODES -eq 4 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml -f node4.yml up -d
-  fi
-
-  if [ $NUM_OF_NODES -eq 5 ]; then
-    PACKAGE=TDengine-server-$VERSION-Linux-x64.tar.gz TARBITRATORPKG=TDengine-arbitrator-$VERSION-Linux-x64.tar.gz DIR=TDengine-server-$VERSION DIR2=TDengine-arbitrator-$VERSION VERSION=$VERSION DATADIR=$DOCKER_DIR docker-compose -f docker-compose.yml -f node3.yml -f node4.yml -f node5.yml up -d
+  if [ $NUM_OF_NODES -ge 2 ];then
+    echo "create $NUM_OF_NODES dnodes"
+    for((i=3;i<=$NUM_OF_NODES;i++))
+    do
+      if [ ! -f node$i.yml ];then
+        echo "node$i.yml not exist"
+        cp node3.yml node$i.yml
+        sed -i "s/td2.0-node3/td2.0-node$i/g" node$i.yml
+        sed -i "s/'tdnode3'/'tdnode$i'/g" node$i.yml
+        sed -i "s#/node3/#/node$i/#g" node$i.yml
+        sed -i "s#hostname: tdnode3#hostname: tdnode$i#g" node$i.yml
+        sed -i "s#ipv4_address: 172.27.0.9#ipv4_address: 172.27.0.`expr $i + 6`#g" node$i.yml
+      fi
+      docker_run=$docker_run" -f node$i.yml "
+    done
+    docker_run=$docker_run" up -d"
   fi
+  echo $docker_run |sh
 
   echo "docker compose finish"
 }
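clusterUp now grows the cluster by templating: any missing node$i.yml is cloned from node3.yml, sed rewrites the container name, hostname, volume paths, and static IP (node i landing on 172.27.0.$((i + 6))), and every generated file is chained onto the docker-compose command as a further -f override before the whole line is piped to sh. A trimmed sketch of that pattern, assuming node3.yml exists as the template:

#!/bin/bash
NUM_OF_NODES=5
docker_run="docker-compose -f docker-compose.yml"

for ((i = 3; i <= NUM_OF_NODES; i++)); do
  if [ ! -f "node$i.yml" ]; then
    cp node3.yml "node$i.yml"                                 # clone the template service
    sed -i "s/tdnode3/tdnode$i/g" "node$i.yml"                # container name and hostname
    sed -i "s#/node3/#/node$i/#g" "node$i.yml"                # per-node bind-mount paths
    sed -i "s#172.27.0.9#172.27.0.$((i + 6))#g" "node$i.yml"  # static address
  fi
  docker_run="$docker_run -f node$i.yml"                      # stack another override file
done

echo "$docker_run up -d" | sh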
@@ -28,7 +28,7 @@ function removeDockerContainers {
 
 function cleanEnv {
   echo "Clean up docker environment"
-  for i in {1..5}
+  for i in {1..10}
   do
     rm -rf $DOCKER_DIR/node$i/data/*
     rm -rf $DOCKER_DIR/node$i/log/*
@@ -30,6 +30,11 @@ services:
       - "tdnode3:172.27.0.9"
       - "tdnode4:172.27.0.10"
       - "tdnode5:172.27.0.11"
+      - "tdnode6:172.27.0.12"
+      - "tdnode7:172.27.0.13"
+      - "tdnode8:172.27.0.14"
+      - "tdnode9:172.27.0.15"
+      - "tdnode10:172.27.0.16"
     volumes:
       # bind data directory
       - type: bind
@@ -61,7 +66,9 @@ services:
       context: .
       args:
         - PACKAGE=${PACKAGE}
+        - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
+        - EXTRACTDIR2=${DIR2}
         - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode2'
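The two new args entries above (and the identical ones in the node3/node4/node5 overrides below) thread the arbitrator tarball and its extraction directory through to the image build: docker-compose substitutes the ${...} values from the environment and hands them to the Dockerfile as build arguments. The shape of the command line clusterUp assembles, with an illustrative version and data directory:

PACKAGE=TDengine-server-2.0.18.1-Linux-x64.tar.gz \
TARBITRATORPKG=TDengine-arbitrator-2.0.18.1-Linux-x64.tar.gz \
DIR=TDengine-server-2.0.18.1 \
DIR2=TDengine-arbitrator-2.0.18.1 \
VERSION=2.0.18.1 DATADIR=/data/docker \
docker-compose -f docker-compose.yml up -d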
@@ -39,7 +39,7 @@
             "insert_rows": 100000,
             "multi_thread_write_one_tbl": "no",
             "number_of_tbl_in_one_sql": 1,
-            "rows_per_tbl": 100,
+            "interlace_rows": 100,
             "max_sql_len": 1024000,
             "disorder_ratio": 0,
             "disorder_range": 1000,
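This key rename (rows_per_tbl becoming interlace_rows) recurs in the taosdemo JSON workloads further down, tracking what appears to be a rename of taosdemo's interlaced-write option in this release; the configs are updated to match. The tests feed such a config to the tool via -f, as the interlace test below does:

taosdemo -f tools/insert-interlace.json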
@@ -6,7 +6,9 @@ services:
       context: .
       args:
         - PACKAGE=${PACKAGE}
+        - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
+        - EXTRACTDIR2=${DIR2}
         - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode3'
@@ -24,10 +26,15 @@ services:
         sysctl -p &&
         exec my-main-application"
     extra_hosts:
-      - "tdnode1:172.27.0.7"
       - "tdnode2:172.27.0.8"
+      - "tdnode3:172.27.0.9"
       - "tdnode4:172.27.0.10"
       - "tdnode5:172.27.0.11"
+      - "tdnode6:172.27.0.12"
+      - "tdnode7:172.27.0.13"
+      - "tdnode8:172.27.0.14"
+      - "tdnode9:172.27.0.15"
+      - "tdnode10:172.27.0.16"
     volumes:
       # bind data directory
       - type: bind
@@ -6,7 +6,9 @@ services:
       context: .
       args:
         - PACKAGE=${PACKAGE}
+        - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
+        - EXTRACTDIR2=${DIR2}
         - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode4'
@@ -28,6 +30,11 @@ services:
       - "tdnode3:172.27.0.9"
       - "tdnode4:172.27.0.10"
       - "tdnode5:172.27.0.11"
+      - "tdnode6:172.27.0.12"
+      - "tdnode7:172.27.0.13"
+      - "tdnode8:172.27.0.14"
+      - "tdnode9:172.27.0.15"
+      - "tdnode10:172.27.0.16"
     volumes:
       # bind data directory
       - type: bind
@@ -6,7 +6,9 @@ services:
       context: .
       args:
         - PACKAGE=${PACKAGE}
+        - TARBITRATORPKG=${TARBITRATORPKG}
         - EXTRACTDIR=${DIR}
+        - EXTRACTDIR2=${DIR2}
         - DATADIR=${DATADIR}
     image: 'tdengine:${VERSION}'
     container_name: 'tdnode5'
@@ -28,6 +30,11 @@ services:
       - "tdnode3:172.27.0.9"
       - "tdnode4:172.27.0.10"
       - "tdnode5:172.27.0.11"
+      - "tdnode6:172.27.0.12"
+      - "tdnode7:172.27.0.13"
+      - "tdnode8:172.27.0.14"
+      - "tdnode9:172.27.0.15"
+      - "tdnode10:172.27.0.16"
     volumes:
       # bind data directory
       - type: bind
@@ -238,6 +238,7 @@ python3 test.py -f tools/taosdemoTestLimitOffset.py
 python3 test.py -f tools/taosdumpTest.py
 python3 test.py -f tools/taosdemoTest2.py
 python3 test.py -f tools/taosdemoTestSampleData.py
+python3 test.py -f tools/taosdemoTestInterlace.py
 
 # subscribe
 python3 test.py -f subscribe/singlemeter.py
@@ -10,7 +10,7 @@
     "result_file": "./insert_res.txt",
     "confirm_parameter_prompt": "no",
     "insert_interval": 5000,
-    "rows_per_tbl": 50,
+    "interlace_rows": 50,
     "num_of_records_per_req": 100,
     "max_sql_len": 1024000,
     "databases": [{
@@ -42,7 +42,7 @@
             "insert_mode": "taosc",
             "insert_rows": 250,
             "multi_thread_write_one_tbl": "no",
-            "rows_per_tbl": 80,
+            "interlace_rows": 80,
             "max_sql_len": 1024000,
             "disorder_ratio": 0,
             "disorder_range": 1000,
@@ -51,7 +51,7 @@ class TDTestCase:
         else:
             tdLog.info("taosd found in %s" % buildPath)
         binPath = buildPath + "/build/bin/"
-        os.system("%staosdemo -y -M -t %d -n %d -x" %
+        os.system("%staosdemo -y -t %d -n %d" %
                   (binPath, self.numberOfTables, self.numberOfRecords))
 
         tdSql.execute("use test")
@@ -31,7 +31,7 @@ class TDTestCase:
 
     def insertDataAndAlterTable(self, threadID):
         if(threadID == 0):
-            os.system("taosdemo -M -y -t %d -n %d -x" %
+            os.system("taosdemo -y -t %d -n %d" %
                       (self.numberOfTables, self.numberOfRecords))
         if(threadID == 1):
             time.sleep(2)
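The two hunks above strip the -M and -x switches from the taosdemo invocations (a later hunk similarly adds -N, which appears to request plain tables rather than a super table), leaving only the options the assertions depend on; presumably the dropped switches no longer exist in this taosdemo version. The resulting shape, with illustrative counts:

# -y answers the confirmation prompt, -t sets the table count, -n the rows per table.
taosdemo -y -t 100 -n 10000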
@@ -17,6 +17,7 @@ from util.log import *
 from util.cases import *
 from util.sql import *
 from util.dnodes import *
+import subprocess
 
 
 class TDTestCase:
@@ -39,7 +40,7 @@ class TDTestCase:
             if ("taosd" in files):
                 rootRealPath = os.path.dirname(os.path.realpath(root))
                 if ("packaging" not in rootRealPath):
-                    buildPath = root[:len(root)-len("/build/bin")]
+                    buildPath = root[:len(root) - len("/build/bin")]
                     break
         return buildPath
 
@@ -50,14 +51,23 @@ class TDTestCase:
             tdLog.exit("taosd not found!")
         else:
             tdLog.info("taosd found in %s" % buildPath)
-        binPath = buildPath+ "/build/bin/"
-        os.system("%staosdemo -f tools/insert-interlace.json" % binPath)
+        binPath = buildPath + "/build/bin/"
+        taosdemoCmd = "%staosdemo -f tools/insert-interlace.json -pp 2>&1 | grep sleep | wc -l" % binPath
+        sleepTimes = subprocess.check_output(
+            taosdemoCmd, shell=True).decode("utf-8")
+        print("sleep times: %d" % int(sleepTimes))
+
+        if (int(sleepTimes) != 16):
+            caller = inspect.getframeinfo(inspect.stack()[0][0])
+            tdLog.exit(
+                "%s(%d) failed: expected sleep times 16, actual %d" %
+                (caller.filename, caller.lineno, int(sleepTimes)))
 
         tdSql.execute("use db")
         tdSql.query("select count(tbname) from db.stb")
-        tdSql.checkData(0, 0, 100)
+        tdSql.checkData(0, 0, 9)
         tdSql.query("select count(*) from db.stb")
-        tdSql.checkData(0, 0, 33000)
+        tdSql.checkData(0, 0, 2250)
 
     def stop(self):
         tdSql.close()
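The hardened check above no longer fire-and-forgets taosdemo: with insert_interval set to 5000 in insert-interlace.json, the tool has to pause between interlaced batch rounds, so the test pipes the -pp output through grep, asserts exactly 16 reported sleeps, and then cross-checks the table count (9) and total rows (2250) the workload should have produced. The pipeline the Python builds, as a standalone shell line (assuming taosdemo is on PATH):

# Count the sleep messages taosdemo prints between interlaced write rounds.
taosdemo -f tools/insert-interlace.json -pp 2>&1 | grep sleep | wc -l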
@@ -50,7 +50,7 @@ class TDTestCase:
         else:
             tdLog.info("taosd found in %s" % buildPath)
         binPath = buildPath + "/build/bin/"
-        os.system("%staosdemo -y -t %d -n %d -x" %
+        os.system("%staosdemo -N -y -t %d -n %d" %
                   (binPath, self.numberOfTables, self.numberOfRecords))
 
         tdSql.query("show databases")
@@ -79,24 +79,26 @@ function runSimCaseOneByOnefq {
       date +%F\ %T | tee -a out.log
       if [[ "$tests_dir" == *"$IN_TDINTERNAL"* ]]; then
         echo -n $case
-        ./test.sh -f $case > /dev/null 2>&1 && \
+        ./test.sh -f $case > ../../../sim/case.log 2>&1 && \
           ( grep -q 'script.*'$case'.*failed.*, err.*lineNum' ../../../sim/tsim/log/taoslog0.0 && echo -e "${RED} failed${NC}" | tee -a out.log || echo -e "${GREEN} success${NC}" | tee -a out.log )|| \
           ( grep -q 'script.*success.*m$' ../../../sim/tsim/log/taoslog0.0 && echo -e "${GREEN} success${NC}" | tee -a out.log ) || \
-          echo -e "${RED} failed${NC}" | tee -a out.log
+          ( echo -e "${RED} failed${NC}" | tee -a out.log && echo '=====================log=====================' && cat ../../../sim/case.log )
       else
         echo -n $case
-        ./test.sh -f $case > /dev/null 2>&1 && \
+        ./test.sh -f $case > ../../sim/case.log 2>&1 && \
           ( grep -q 'script.*'$case'.*failed.*, err.*lineNum' ../../sim/tsim/log/taoslog0.0 && echo -e "${RED} failed${NC}" | tee -a out.log || echo -e "${GREEN} success${NC}" | tee -a out.log )|| \
           ( grep -q 'script.*success.*m$' ../../sim/tsim/log/taoslog0.0 && echo -e "${GREEN} success${NC}" | tee -a out.log ) || \
-          echo -e "${RED} failed${NC}" | tee -a out.log
+          ( echo -e "${RED} failed${NC}" | tee -a out.log && echo '=====================log=====================' && cat ../../sim/case.log )
       fi
 
       out_log=`tail -1 out.log `
       if [[ $out_log =~ 'failed' ]];then
         if [[ "$tests_dir" == *"$IN_TDINTERNAL"* ]]; then
           cp -r ../../../sim ~/sim_`date "+%Y_%m_%d_%H:%M:%S"`
+          rm -rf ../../../sim/case.log
         else
           cp -r ../../sim ~/sim_`date "+%Y_%m_%d_%H:%M:%S" `
+          rm -rf ../../sim/case.log
         fi
         exit 8
       fi
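Both branches above trade > /dev/null for a per-case log, so a failing case can replay exactly what it printed before the runner exits; the hunks below clean the log up again on both the failure and the normal paths. The underlying shell pattern, sketched with a placeholder command:

#!/bin/bash
# Keep the case output around; only surface it when the case fails.
./run_one_case.sh > case.log 2>&1 \
  && echo "success" \
  || { echo "failed"; echo '=====================log====================='; cat case.log; }
rm -rf case.log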
@@ -105,6 +107,8 @@ function runSimCaseOneByOnefq {
       dohavecore $2
     fi
   done
+  rm -rf ../../../sim/case.log
+  rm -rf ../../sim/case.log
 }
 
 function runPyCaseOneByOne {
@@ -158,13 +162,16 @@ function runPyCaseOneByOnefq() {
       start_time=`date +%s`
       date +%F\ %T | tee -a pytest-out.log
       echo -n $case
-      $line > /dev/null 2>&1 && \
+      $line > ../../sim/case.log 2>&1 && \
         echo -e "${GREEN} success${NC}" | tee -a pytest-out.log || \
         echo -e "${RED} failed${NC}" | tee -a pytest-out.log
       end_time=`date +%s`
       out_log=`tail -1 pytest-out.log `
       if [[ $out_log =~ 'failed' ]];then
         cp -r ../../sim ~/sim_`date "+%Y_%m_%d_%H:%M:%S" `
+        echo '=====================log====================='
+        cat ../../sim/case.log
+        rm -rf ../../sim/case.log
         exit 8
       fi
       echo execution time of $case was `expr $end_time - $start_time`s. | tee -a pytest-out.log

@@ -174,6 +181,7 @@ function runPyCaseOneByOnefq() {
       dohavecore $2
     fi
   done
+  rm -rf ../../sim/case.log
 }
 
 totalFailed=0