diff --git a/documentation20/cn/08.connector/01.java/docs.md b/documentation20/cn/08.connector/01.java/docs.md
index 5eec33e2f1..4fc10b542b 100644
--- a/documentation20/cn/08.connector/01.java/docs.md
+++ b/documentation20/cn/08.connector/01.java/docs.md
@@ -266,7 +266,9 @@ while(resultSet.next()){
> Queries are consistent with operating on a relational database: when using a subscript to obtain the content of a returned field, the index starts from 1; it is recommended to obtain fields by column name instead.
### Error handling
+
When an error occurs, the error message and error code can be obtained through SQLException:
+
```java
try (Statement statement = connection.createStatement()) {
// executeQuery
@@ -279,11 +281,87 @@ try (Statement statement = connection.createStatement()) {
e.printStackTrace();
}
```
+
The error codes reported by the JDBC connector fall into three categories: errors from the JDBC driver itself (error codes between 0x2301 and 0x2350), errors from JNI methods (error codes between 0x2351 and 0x2400), and errors from other TDengine modules.
For the specific error codes, please refer to:
* https://github.com/taosdata/TDengine/blob/develop/src/connector/jdbc/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java
* https://github.com/taosdata/TDengine/blob/develop/src/inc/taoserror.h
+### Writing data via parameter binding
+
+Since version 2.1.2.0, TDengine's **JDBC-JNI** implementation has significantly improved parameter-binding support for data writing (INSERT) scenarios. Writing data this way avoids the overhead of SQL parsing and can therefore improve write performance considerably in many cases. (Note: the **JDBC-RESTful** implementation does not provide parameter binding.)
+
+```java
+Statement stmt = conn.createStatement();
+Random r = new Random();
+
+// In the INSERT statement, the VALUES part may specify a subset of the data columns; if automatic table creation is used, the TAGS part must provide values for all TAGS columns:
+TSDBPreparedStatement s = (TSDBPreparedStatement) conn.prepareStatement("insert into ? using weather_test tags (?, ?) (ts, c1, c2) values(?, ?, ?)");
+
+// Set the table name:
+s.setTableName("w1");
+// Set the TAGS values:
+s.setTagInt(0, r.nextInt(10));
+s.setTagString(1, "Beijing");
+
+int numOfRows = 10;
+
+// The VALUES part is set column by column:
+ArrayList<Long> ts = new ArrayList<>();
+for (int i = 0; i < numOfRows; i++){
+ ts.add(System.currentTimeMillis() + i);
+}
+s.setTimestamp(0, ts);
+
+ArrayList<Integer> s1 = new ArrayList<>();
+for (int i = 0; i < numOfRows; i++){
+ s1.add(r.nextInt(100));
+}
+s.setInt(1, s1);
+
+ArrayList<String> s2 = new ArrayList<>();
+for (int i = 0; i < numOfRows; i++){
+ s2.add("test" + r.nextInt(100));
+}
+s.setString(2, s2, 10);
+
+// After addBatch, a new table name, TAGS values and VALUES can be set again, so a single execution can write into multiple tables:
+s.columnDataAddBatch();
+// Execute the statement:
+s.columnDataExecuteBatch();
+// When finished, release the resources:
+s.columnDataCloseBatch();
+```
+
+The complete set of methods for setting TAGS values is:
+```java
+public void setTagNull(int index, int type)
+public void setTagBoolean(int index, boolean value)
+public void setTagInt(int index, int value)
+public void setTagByte(int index, byte value)
+public void setTagShort(int index, short value)
+public void setTagLong(int index, long value)
+public void setTagTimestamp(int index, long value)
+public void setTagFloat(int index, float value)
+public void setTagDouble(int index, double value)
+public void setTagString(int index, String value)
+public void setTagNString(int index, String value)
+```
+
+The complete set of methods for setting the values of the VALUES data columns is:
+```java
+public void setInt(int columnIndex, ArrayList list) throws SQLException
+public void setFloat(int columnIndex, ArrayList list) throws SQLException
+public void setTimestamp(int columnIndex, ArrayList list) throws SQLException
+public void setLong(int columnIndex, ArrayList list) throws SQLException
+public void setDouble(int columnIndex, ArrayList list) throws SQLException
+public void setBoolean(int columnIndex, ArrayList list) throws SQLException
+public void setByte(int columnIndex, ArrayList list) throws SQLException
+public void setShort(int columnIndex, ArrayList list) throws SQLException
+public void setString(int columnIndex, ArrayList list, int size) throws SQLException
+public void setNString(int columnIndex, ArrayList list, int size) throws SQLException
+```
+
### Subscription
#### Create
diff --git a/documentation20/cn/08.connector/docs.md b/documentation20/cn/08.connector/docs.md
index 9484917993..60f8df95f8 100644
--- a/documentation20/cn/08.connector/docs.md
+++ b/documentation20/cn/08.connector/docs.md
@@ -291,9 +291,25 @@ typedef struct taosField {
All of TDengine's asynchronous APIs use a non-blocking calling model. An application can open multiple tables from multiple threads at the same time and query or insert into each open table concurrently. Note, however, that **the client application must fully serialize operations on the same table**: while an insert or query on a table has not yet completed (has not returned), a second insert or query on that table must not be issued.
-### Parameter binding API
+
+### Parameter binding API
-除了直接调用 `taos_query` 进行查询,In addition to calling `taos_query` directly for queries, TDengine also provides a Prepare API that supports parameter binding. As in MySQL, these APIs currently only support using the question mark `?` to represent a parameter to be bound, as follows:
+In addition to calling `taos_query` directly for queries, TDengine also provides a Prepare API that supports parameter binding. As in MySQL, these APIs currently only support using the question mark `?` to represent a parameter to be bound.
+
+Since versions 2.1.1.0 and 2.1.2.0, TDengine has significantly improved the parameter-binding interface's support for data writing (INSERT) scenarios. Writing data through the parameter-binding interface avoids the overhead of SQL parsing and therefore improves write performance significantly in the vast majority of cases. The typical sequence of operations is as follows (a short sketch follows this list):
+1. Call `taos_stmt_init` to create a parameter-binding object;
+2. Call `taos_stmt_prepare` to parse the INSERT statement;
+3. If the INSERT statement reserves a placeholder for the table name but not for the TAGS, call `taos_stmt_set_tbname` to set the table name;
+4. If the INSERT statement reserves placeholders for both the table name and the TAGS (for example, when the statement uses automatic table creation), call `taos_stmt_set_tbname_tags` to set the table name and the TAGS values;
+5. Call `taos_stmt_bind_param_batch` to set the VALUES column by column;
+6. Call `taos_stmt_add_batch` to add the currently bound parameters to the batch;
+7. Repeat steps 3 to 6 to add more data rows to the batch;
+8. Call `taos_stmt_execute` to execute the prepared batch;
+9. When execution is finished, call `taos_stmt_close` to release all resources.
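+
+The sketch below illustrates this sequence. It is only a minimal, hedged example: it assumes an already-opened connection handle `taos`, a super table created with something like `CREATE TABLE weather (ts TIMESTAMP, current INT) TAGS (groupid INT)`, and that `TAOS_BIND` exposes the MYSQL_BIND-style `buffer_type`/`buffer`/`buffer_length` fields described below. The sub-table name `d1001` and the literal values are illustrative only, and error handling (every call returns a status code) is omitted:
+
+```c
+// Minimal sketch of a batch INSERT via parameter binding (assumptions noted above).
+TAOS_STMT *stmt = taos_stmt_init(taos);
+taos_stmt_prepare(stmt, "INSERT INTO ? USING weather TAGS(?) VALUES(?, ?)", 0);
+
+// Bind the sub-table name together with the tag value (automatic table creation).
+int32_t   groupid = 1;
+TAOS_BIND tag[1] = {0};                 // zero-initialize so unused fields stay NULL/0
+tag[0].buffer_type   = TSDB_DATA_TYPE_INT;
+tag[0].buffer        = &groupid;
+tag[0].buffer_length = sizeof(groupid);
+taos_stmt_set_tbname_tags(stmt, "d1001", tag);
+
+// Bind two rows of the VALUES part, column by column.
+int64_t ts[2]      = {1626861392589, 1626861392599};   // illustrative timestamps (ms)
+int32_t current[2] = {10, 12};
+char    is_null[2] = {0, 0};
+
+TAOS_MULTI_BIND cols[2] = {0};
+cols[0].buffer_type   = TSDB_DATA_TYPE_TIMESTAMP;
+cols[0].buffer        = ts;
+cols[0].buffer_length = sizeof(int64_t);
+cols[0].is_null       = is_null;
+cols[0].num           = 2;
+
+cols[1].buffer_type   = TSDB_DATA_TYPE_INT;
+cols[1].buffer        = current;
+cols[1].buffer_length = sizeof(int32_t);
+cols[1].is_null       = is_null;
+cols[1].num           = 2;
+
+taos_stmt_bind_param_batch(stmt, cols);
+taos_stmt_add_batch(stmt);
+// Steps 3 to 6 can be repeated here to add more tables or rows to the batch.
+taos_stmt_execute(stmt);
+taos_stmt_close(stmt);
+```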
+
+In addition to C/C++, TDengine's Java JNI connector also supports the parameter-binding interface; see [Java usage of the parameter-binding interface](https://www.taosdata.com/cn/documentation/connector/java#stmt-java).
+
+The functions related to this interface are listed below (you can also refer to how they are used in [apitest.c](https://github.com/taosdata/TDengine/blob/develop/tests/examples/c/apitest.c)):
- `TAOS_STMT* taos_stmt_init(TAOS *taos)`
@@ -301,11 +317,12 @@ TDengine的异步API均采用非阻塞调用模式。应用程序可以用多线
- `int taos_stmt_prepare(TAOS_STMT *stmt, const char *sql, unsigned long length)`
- Parses a sql statement and binds the parsing result and parameter information to stmt. If the length argument is greater than 0, it is used as the length of the sql statement; if it equals 0, the length of the sql statement is determined automatically.
+ Parses a SQL statement and binds the parsing result and parameter information to stmt. If the length argument is greater than 0, it is used as the length of the SQL statement; if it equals 0, the length of the SQL statement is determined automatically.
- `int taos_stmt_bind_param(TAOS_STMT *stmt, TAOS_BIND *bind)`
- Binds the parameters. bind points to an array; the number and order of the elements in this array must exactly match the parameters in the sql statement. TAOS_BIND is used in the same way as MYSQL_BIND in MySQL; it is defined as follows:
+ Less efficient than `taos_stmt_bind_param_batch`, but it supports SQL statements other than INSERT.
+ Binds the parameters. bind points to an array (representing one row of data to be bound); the number and order of the elements in this array must exactly match the parameters in the SQL statement. TAOS_BIND is used in the same way as MYSQL_BIND in MySQL; it is defined as follows:
```c
typedef struct TAOS_BIND {
@@ -319,9 +336,35 @@ typedef struct TAOS_BIND {
} TAOS_BIND;
```
+- `int taos_stmt_set_tbname(TAOS_STMT* stmt, const char* name)`
+
+ (Added in version 2.1.1.0)
+ When the table name in the SQL statement uses a `?` placeholder, this function can be used to bind a concrete table name.
+
+- `int taos_stmt_set_tbname_tags(TAOS_STMT* stmt, const char* name, TAOS_BIND* tags)`
+
+ (Added in version 2.1.2.0)
+ When both the table name and the TAGS in the SQL statement use `?` placeholders, this function can be used to bind a concrete table name and concrete TAGS values. The most typical use case is an INSERT statement that relies on automatic table creation (the current version does not support specifying individual TAGS columns). The number of columns in the tags parameter must exactly match the number of TAGS required by the SQL statement.
+
+- `int taos_stmt_bind_param_batch(TAOS_STMT* stmt, TAOS_MULTI_BIND* bind)`
+
+ (Added in version 2.1.1.0)
+ Passes the data to be bound in a column-oriented way. The order and number of the data columns passed here must exactly match the VALUES parameters in the SQL statement. TAOS_MULTI_BIND is defined as follows:
+
+```c
+typedef struct TAOS_MULTI_BIND {
+ int buffer_type;
+ void * buffer;
+ uintptr_t buffer_length;
+ int32_t * length;
+ char * is_null;
+ int num; // the number of values in this column, i.e. the number of parameters held in buffer
+} TAOS_MULTI_BIND;
+```
+
- `int taos_stmt_add_batch(TAOS_STMT *stmt)`
- Adds the currently bound parameters to the batch. After calling this function, `taos_stmt_bind_param` can be called again to bind new parameters. Note that this function only supports insert/import statements; for select and other SQL statements it returns an error.
+ Adds the currently bound parameters to the batch. After calling this function, `taos_stmt_bind_param` or `taos_stmt_bind_param_batch` can be called again to bind new parameters. Note that this function only supports INSERT/IMPORT statements; for SELECT and other SQL statements it returns an error.
- `int taos_stmt_execute(TAOS_STMT *stmt)`
@@ -329,7 +372,7 @@ typedef struct TAOS_BIND {
- `TAOS_RES* taos_stmt_use_result(TAOS_STMT *stmt)`
- Obtains the result set of the statement. The result set is used in the same way as in a non-parameterized call; when you are done with it, call `taos_free_result`on it to release the resources.
+ Obtains the result set of the statement. The result set is used in the same way as in a non-parameterized call; when you are done with it, call `taos_free_result` on it to release the resources.
- `int taos_stmt_close(TAOS_STMT *stmt)`
diff --git a/src/client/src/tscSQLParser.c b/src/client/src/tscSQLParser.c
index 4e9e5a8654..84d87af385 100644
--- a/src/client/src/tscSQLParser.c
+++ b/src/client/src/tscSQLParser.c
@@ -5828,11 +5828,17 @@ static int32_t setKeepOption(SSqlCmd* pCmd, SCreateDbMsg* pMsg, SCreateDbInfo* p
tVariantListItem* p0 = taosArrayGet(pKeep, 0);
switch (s) {
case 1: {
+ if ((int32_t)p0->pVar.i64 <= 0) {
+ return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg);
+ }
pMsg->daysToKeep = htonl((int32_t)p0->pVar.i64);
}
break;
case 2: {
tVariantListItem* p1 = taosArrayGet(pKeep, 1);
+ if ((int32_t)p0->pVar.i64 <= 0 || (int32_t)p1->pVar.i64 <= 0) {
+ return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg);
+ }
pMsg->daysToKeep = htonl((int32_t)p0->pVar.i64);
pMsg->daysToKeep1 = htonl((int32_t)p1->pVar.i64);
break;
@@ -5841,6 +5847,10 @@ static int32_t setKeepOption(SSqlCmd* pCmd, SCreateDbMsg* pMsg, SCreateDbInfo* p
tVariantListItem* p1 = taosArrayGet(pKeep, 1);
tVariantListItem* p2 = taosArrayGet(pKeep, 2);
+ if ((int32_t)p0->pVar.i64 <= 0 || (int32_t)p1->pVar.i64 <= 0 || (int32_t)p2->pVar.i64 <= 0) {
+ return invalidOperationMsg(tscGetErrorMsgPayload(pCmd), msg);
+ }
+
pMsg->daysToKeep = htonl((int32_t)p0->pVar.i64);
pMsg->daysToKeep1 = htonl((int32_t)p1->pVar.i64);
pMsg->daysToKeep2 = htonl((int32_t)p2->pVar.i64);
diff --git a/src/client/src/tscStream.c b/src/client/src/tscStream.c
index 2fc18943da..9b13cd2140 100644
--- a/src/client/src/tscStream.c
+++ b/src/client/src/tscStream.c
@@ -110,7 +110,8 @@ static void doLaunchQuery(void* param, TAOS_RES* tres, int32_t code) {
// failed to get table Meta or vgroup list, retry in 10sec.
if (code == TSDB_CODE_SUCCESS) {
tscTansformFuncForSTableQuery(pQueryInfo);
- tscDebug("0x%"PRIx64" stream:%p started to query table:%s", pSql->self, pStream, tNameGetTableName(&pTableMetaInfo->name));
+
+ tscDebug("0x%"PRIx64" stream:%p, start stream query on:%s QueryInfo->skey=%"PRId64" ekey=%"PRId64" ", pSql->self, pStream, tNameGetTableName(&pTableMetaInfo->name), pQueryInfo->window.skey, pQueryInfo->window.ekey);
pQueryInfo->command = TSDB_SQL_SELECT;
@@ -164,7 +165,11 @@ static void tscProcessStreamTimer(void *handle, void *tmrId) {
if (etime > pStream->etime) {
etime = pStream->etime;
} else if (pStream->interval.intervalUnit != 'y' && pStream->interval.intervalUnit != 'n') {
- etime = pStream->stime + (etime - pStream->stime) / pStream->interval.interval * pStream->interval.interval;
+ if(pStream->stime == INT64_MIN) {
+ etime = taosTimeTruncate(etime, &pStream->interval, pStream->precision);
+ } else {
+ etime = pStream->stime + (etime - pStream->stime) / pStream->interval.interval * pStream->interval.interval;
+ }
} else {
etime = taosTimeTruncate(etime, &pStream->interval, pStream->precision);
}
@@ -353,8 +358,8 @@ static void tscSetRetryTimer(SSqlStream *pStream, SSqlObj *pSql, int64_t timer)
tscDebug("0x%"PRIx64" stream:%p, next start at %" PRId64 "(ts window ekey), in %" PRId64 " ms. delay:%" PRId64 "ms qrange %" PRId64 "-%" PRId64, pStream->pSql->self, pStream,
now + timer, timer, delay, pStream->stime, etime);
} else {
- tscDebug("0x%"PRIx64" stream:%p, next start at %" PRId64 "(ts window ekey), in %" PRId64 " ms. delay:%" PRId64 "ms qrange %" PRId64 "-%" PRId64, pStream->pSql->self, pStream,
- pStream->stime, timer, delay, pStream->stime - pStream->interval.interval, pStream->stime - 1);
+ tscDebug("0x%"PRIx64" stream:%p, next start at %" PRId64 " - %" PRId64 " end, in %" PRId64 "ms. delay:%" PRId64 "ms qrange %" PRId64 "-%" PRId64, pStream->pSql->self, pStream,
+ pStream->stime, pStream->etime, timer, delay, pStream->stime - pStream->interval.interval, pStream->stime - 1);
}
pSql->cmd.command = TSDB_SQL_SELECT;
@@ -663,6 +668,11 @@ TAOS_STREAM *taos_open_stream_withname(TAOS *taos, const char* dstTable, const c
STscObj *pObj = (STscObj *)taos;
if (pObj == NULL || pObj->signature != pObj) return NULL;
+ if(fp == NULL){
+ tscError(" taos_open_stream api fp param must not NULL.");
+ return NULL;
+ }
+
SSqlObj *pSql = (SSqlObj *)calloc(1, sizeof(SSqlObj));
if (pSql == NULL) {
return NULL;
diff --git a/src/dnode/src/dnodeMain.c b/src/dnode/src/dnodeMain.c
index a4ff9df203..cf633502c1 100644
--- a/src/dnode/src/dnodeMain.c
+++ b/src/dnode/src/dnodeMain.c
@@ -95,11 +95,17 @@ static SStep tsDnodeCompactSteps[] = {
{"dnode-minfos", dnodeInitMInfos, dnodeCleanupMInfos},
{"dnode-wal", walInit, walCleanUp},
{"dnode-sync", syncInit, syncCleanUp},
+ {"dnode-vread", dnodeInitVRead, dnodeCleanupVRead},
+ {"dnode-vwrite", dnodeInitVWrite, dnodeCleanupVWrite},
+ {"dnode-vmgmt", dnodeInitVMgmt, dnodeCleanupVMgmt},
{"dnode-mread", dnodeInitMRead, NULL},
{"dnode-mwrite", dnodeInitMWrite, NULL},
{"dnode-mpeer", dnodeInitMPeer, NULL},
{"dnode-vnodes", dnodeInitVnodes, dnodeCleanupVnodes},
{"dnode-modules", dnodeInitModules, dnodeCleanupModules},
+ {"dnode-mread", NULL, dnodeCleanupMRead},
+ {"dnode-mwrite", NULL, dnodeCleanupMWrite},
+ {"dnode-mpeer", NULL, dnodeCleanupMPeer},
};
static int dnodeCreateDir(const char *dir) {
diff --git a/src/mnode/src/mnodeDnode.c b/src/mnode/src/mnodeDnode.c
index 7702978af1..76144b53f9 100644
--- a/src/mnode/src/mnodeDnode.c
+++ b/src/mnode/src/mnodeDnode.c
@@ -799,7 +799,7 @@ static int32_t mnodeGetDnodeMeta(STableMetaMsg *pMeta, SShowObj *pShow, void *pC
pSchema[cols].bytes = htons(pShow->bytes[cols]);
cols++;
- pShow->bytes[cols] = 40 + VARSTR_HEADER_SIZE;
+ pShow->bytes[cols] = TSDB_EP_LEN + VARSTR_HEADER_SIZE;
pSchema[cols].type = TSDB_DATA_TYPE_BINARY;
strcpy(pSchema[cols].name, "end_point");
pSchema[cols].bytes = htons(pShow->bytes[cols]);
diff --git a/src/mnode/src/mnodeMnode.c b/src/mnode/src/mnodeMnode.c
index ddc9ea59c4..bccad22157 100644
--- a/src/mnode/src/mnodeMnode.c
+++ b/src/mnode/src/mnodeMnode.c
@@ -485,7 +485,7 @@ static int32_t mnodeGetMnodeMeta(STableMetaMsg *pMeta, SShowObj *pShow, void *pC
pSchema[cols].bytes = htons(pShow->bytes[cols]);
cols++;
- pShow->bytes[cols] = 40 + VARSTR_HEADER_SIZE;
+ pShow->bytes[cols] = TSDB_EP_LEN + VARSTR_HEADER_SIZE;
pSchema[cols].type = TSDB_DATA_TYPE_BINARY;
strcpy(pSchema[cols].name, "end_point");
pSchema[cols].bytes = htons(pShow->bytes[cols]);
diff --git a/src/mnode/src/mnodeSdb.c b/src/mnode/src/mnodeSdb.c
index e9acd5b9bc..a8bcdfa59e 100644
--- a/src/mnode/src/mnodeSdb.c
+++ b/src/mnode/src/mnodeSdb.c
@@ -1176,9 +1176,10 @@ int32_t mnodeCompactWal() {
return -1;
}
- // close wal
- walFsync(tsSdbMgmt.wal, true);
- walClose(tsSdbMgmt.wal);
+ // close sdb and sync to disk
+ //walFsync(tsSdbMgmt.wal, true);
+ //walClose(tsSdbMgmt.wal);
+ sdbCleanUp();
// rename old wal to wal_bak
if (taosRename(tsMnodeDir, tsMnodeBakDir) != 0) {
diff --git a/src/mnode/src/mnodeTable.c b/src/mnode/src/mnodeTable.c
index 5b699c5e24..be53d353c9 100644
--- a/src/mnode/src/mnodeTable.c
+++ b/src/mnode/src/mnodeTable.c
@@ -93,6 +93,9 @@ static int32_t mnodeProcessAlterTableMsg(SMnodeMsg *pMsg);
static void mnodeProcessAlterTableRsp(SRpcMsg *rpcMsg);
static int32_t mnodeFindSuperTableColumnIndex(SSTableObj *pStable, char *colName);
+static int32_t mnodeChangeSuperTableColumn(SMnodeMsg *pMsg);
+static int32_t mnodeChangeSuperTableTag(SMnodeMsg *pMsg);
+static int32_t mnodeChangeNormalTableColumn(SMnodeMsg *pMsg);
static void mnodeDestroyChildTable(SCTableObj *pTable) {
tfree(pTable->info.tableId);
@@ -1457,31 +1460,52 @@ static int32_t mnodeChangeSuperTableColumnCb(SMnodeMsg *pMsg, int32_t code) {
return code;
}
-static int32_t mnodeChangeSuperTableColumn(SMnodeMsg *pMsg, char *oldName, char *newName) {
+static int32_t mnodeChangeSuperTableColumn(SMnodeMsg *pMsg) {
+ SAlterTableMsg *pAlter = pMsg->rpcMsg.pCont;
+ char* name = pAlter->schema[0].name;
SSTableObj *pStable = (SSTableObj *)pMsg->pTable;
- int32_t col = mnodeFindSuperTableColumnIndex(pStable, oldName);
+ int32_t col = mnodeFindSuperTableColumnIndex(pStable, name);
if (col < 0) {
- mError("msg:%p, app:%p stable:%s, change column, oldName:%s, newName:%s", pMsg, pMsg->rpcMsg.ahandle,
- pStable->info.tableId, oldName, newName);
+ mError("msg:%p, app:%p stable:%s, change column, name:%s", pMsg, pMsg->rpcMsg.ahandle,
+ pStable->info.tableId, name);
return TSDB_CODE_MND_FIELD_NOT_EXIST;
}
- // int32_t rowSize = 0;
- uint32_t len = (uint32_t)strlen(newName);
- if (len >= TSDB_COL_NAME_LEN) {
- return TSDB_CODE_MND_COL_NAME_TOO_LONG;
- }
-
- if (mnodeFindSuperTableColumnIndex(pStable, newName) >= 0) {
- return TSDB_CODE_MND_FIELD_ALREAY_EXIST;
- }
-
// update
SSchema *schema = (SSchema *) (pStable->schema + col);
- tstrncpy(schema->name, newName, sizeof(schema->name));
+ ASSERT(schema->type == TSDB_DATA_TYPE_BINARY || schema->type == TSDB_DATA_TYPE_NCHAR);
+ schema->bytes = pAlter->schema[0].bytes;
+ mInfo("msg:%p, app:%p stable %s, start to modify column %s len to %d", pMsg, pMsg->rpcMsg.ahandle, pStable->info.tableId,
+ name, schema->bytes);
- mInfo("msg:%p, app:%p stable %s, start to modify column %s to %s", pMsg, pMsg->rpcMsg.ahandle, pStable->info.tableId,
- oldName, newName);
+ SSdbRow row = {
+ .type = SDB_OPER_GLOBAL,
+ .pTable = tsSuperTableSdb,
+ .pObj = pStable,
+ .pMsg = pMsg,
+ .fpRsp = mnodeChangeSuperTableColumnCb
+ };
+
+ return sdbUpdateRow(&row);
+}
+
+static int32_t mnodeChangeSuperTableTag(SMnodeMsg *pMsg) {
+ SAlterTableMsg *pAlter = pMsg->rpcMsg.pCont;
+ char* name = pAlter->schema[0].name;
+ SSTableObj *pStable = (SSTableObj *)pMsg->pTable;
+ int32_t col = mnodeFindSuperTableTagIndex(pStable, name);
+ if (col < 0) {
+ mError("msg:%p, app:%p stable:%s, change column, name:%s", pMsg, pMsg->rpcMsg.ahandle,
+ pStable->info.tableId, name);
+ return TSDB_CODE_MND_FIELD_NOT_EXIST;
+ }
+
+ // update
+ SSchema *schema = (SSchema *) (pStable->schema + col);
+ ASSERT(schema->type == TSDB_DATA_TYPE_BINARY || schema->type == TSDB_DATA_TYPE_NCHAR);
+ schema->bytes = pAlter->schema[0].bytes;
+ mInfo("msg:%p, app:%p stable %s, start to modify tag len %s to %d", pMsg, pMsg->rpcMsg.ahandle, pStable->info.tableId,
+ name, schema->bytes);
SSdbRow row = {
.type = SDB_OPER_GLOBAL,
@@ -2355,31 +2379,23 @@ static int32_t mnodeDropNormalTableColumn(SMnodeMsg *pMsg, char *colName) {
return sdbUpdateRow(&row);
}
-static int32_t mnodeChangeNormalTableColumn(SMnodeMsg *pMsg, char *oldName, char *newName) {
+static int32_t mnodeChangeNormalTableColumn(SMnodeMsg *pMsg) {
+ SAlterTableMsg *pAlter = pMsg->rpcMsg.pCont;
+ char* name = pAlter->schema[0].name;
SCTableObj *pTable = (SCTableObj *)pMsg->pTable;
- int32_t col = mnodeFindNormalTableColumnIndex(pTable, oldName);
+ int32_t col = mnodeFindNormalTableColumnIndex(pTable, name);
if (col < 0) {
- mError("msg:%p, app:%p ctable:%s, change column, oldName: %s, newName: %s", pMsg, pMsg->rpcMsg.ahandle,
- pTable->info.tableId, oldName, newName);
+ mError("msg:%p, app:%p ctable:%s, change column, name: %s", pMsg, pMsg->rpcMsg.ahandle,
+ pTable->info.tableId, name);
return TSDB_CODE_MND_FIELD_NOT_EXIST;
}
- // int32_t rowSize = 0;
- uint32_t len = (uint32_t)strlen(newName);
- if (len >= TSDB_COL_NAME_LEN) {
- return TSDB_CODE_MND_COL_NAME_TOO_LONG;
- }
-
- if (mnodeFindNormalTableColumnIndex(pTable, newName) >= 0) {
- return TSDB_CODE_MND_FIELD_ALREAY_EXIST;
- }
-
- // update
SSchema *schema = (SSchema *) (pTable->schema + col);
- tstrncpy(schema->name, newName, sizeof(schema->name));
+ ASSERT(schema->type == TSDB_DATA_TYPE_BINARY || schema->type == TSDB_DATA_TYPE_NCHAR);
+ schema->bytes = pAlter->schema[0].bytes;
- mInfo("msg:%p, app:%p ctable %s, start to modify column %s to %s", pMsg, pMsg->rpcMsg.ahandle, pTable->info.tableId,
- oldName, newName);
+ mInfo("msg:%p, app:%p ctable %s, start to modify column %s len to %d", pMsg, pMsg->rpcMsg.ahandle, pTable->info.tableId,
+ name, schema->bytes);
SSdbRow row = {
.type = SDB_OPER_GLOBAL,
@@ -3214,15 +3230,9 @@ static int32_t mnodeProcessAlterTableMsg(SMnodeMsg *pMsg) {
} else if (pAlter->type == TSDB_ALTER_TABLE_DROP_COLUMN) {
code = mnodeDropSuperTableColumn(pMsg, pAlter->schema[0].name);
} else if (pAlter->type == TSDB_ALTER_TABLE_CHANGE_COLUMN) {
- //code = mnodeChangeSuperTableColumn(pMsg, pAlter->schema[0].name, pAlter->schema[1].name);
- (void)mnodeChangeSuperTableColumn;
- mError("change table[%s] column[%s] length to [%d] is not processed", pAlter->tableFname, pAlter->schema[0].name, pAlter->schema[0].bytes);
- code = TSDB_CODE_SUCCESS;
+ code = mnodeChangeSuperTableColumn(pMsg);
} else if (pAlter->type == TSDB_ALTER_TABLE_MODIFY_TAG_COLUMN) {
- //code = mnodeChangeSuperTableColumn(pMsg, pAlter->schema[0].name, pAlter->schema[1].name);
- (void)mnodeChangeSuperTableColumn;
- mError("change table[%s] tag[%s] length to [%d] is not processed", pAlter->tableFname, pAlter->schema[0].name, pAlter->schema[0].bytes);
- code = TSDB_CODE_SUCCESS;
+ code = mnodeChangeSuperTableTag(pMsg);
} else {
}
} else {
@@ -3234,10 +3244,7 @@ static int32_t mnodeProcessAlterTableMsg(SMnodeMsg *pMsg) {
} else if (pAlter->type == TSDB_ALTER_TABLE_DROP_COLUMN) {
code = mnodeDropNormalTableColumn(pMsg, pAlter->schema[0].name);
} else if (pAlter->type == TSDB_ALTER_TABLE_CHANGE_COLUMN) {
- //code = mnodeChangeNormalTableColumn(pMsg, pAlter->schema[0].name, pAlter->schema[1].name);
- (void)mnodeChangeNormalTableColumn;
- mError("change table[%s] column[%s] length to [%d] is not processed", pAlter->tableFname, pAlter->schema[0].name, pAlter->schema[0].bytes);
- code = TSDB_CODE_SUCCESS;
+ code = mnodeChangeNormalTableColumn(pMsg);
} else {
}
}
diff --git a/src/query/src/qExecutor.c b/src/query/src/qExecutor.c
index 25e7e446bd..b7512ac1f0 100644
--- a/src/query/src/qExecutor.c
+++ b/src/query/src/qExecutor.c
@@ -1347,7 +1347,7 @@ static void doSessionWindowAggImpl(SOperatorInfo* pOperator, SSWindowOperatorInf
pInfo->start = j;
} else if (tsList[j] - pInfo->prevTs <= gap) {
pInfo->curWindow.ekey = tsList[j];
- //pInfo->prevTs = tsList[j];
+ pInfo->prevTs = tsList[j];
pInfo->numOfRows += 1;
if (j == 0 && pInfo->start != 0) {
pInfo->numOfRows = 1;
diff --git a/tests/pytest/alter/alter_keep_exception.py b/tests/pytest/alter/alter_keep_exception.py
new file mode 100644
index 0000000000..cddaaa5522
--- /dev/null
+++ b/tests/pytest/alter/alter_keep_exception.py
@@ -0,0 +1,44 @@
+###################################################################
+# Copyright (c) 2016 by TAOS Technologies, Inc.
+# All rights reserved.
+#
+# This file is proprietary and confidential to TAOS Technologies.
+# No part of this file may be reproduced, stored, transmitted,
+# disclosed or used in any form or by any means other than as
+# expressly provided by the written permission from Jianhui Tao
+#
+###################################################################
+
+# -*- coding: utf-8 -*-
+
+# TODO: after TD-4518 and TD-4510 are resolved, add exception test cases for these situations
+
+import sys
+from util.log import *
+from util.cases import *
+from util.sql import *
+
+
+class TDTestCase:
+ def init(self, conn, logSql):
+ tdLog.debug("start to execute %s" % __file__)
+ tdSql.init(conn.cursor(), logSql)
+
+ def run(self):
+ tdSql.prepare()
+
+ tdSql.error('alter database keep db 0')
+ tdSql.error('alter database keep db -10')
+ tdSql.error('alter database keep db -2147483648')
+
+        # test case for keep value overflow: the error is caught,
+        # but the reported error type is wrong.
+        # TODO: can be fixed in the future, but the improvement would be minimal
+ tdSql.error('alter database keep db -2147483649')
+ def stop(self):
+ tdSql.close()
+ tdLog.success("%s successfully executed" % __file__)
+
+
+tdCases.addWindows(__file__, TDTestCase())
+tdCases.addLinux(__file__, TDTestCase())
\ No newline at end of file
diff --git a/tests/pytest/fulltest.sh b/tests/pytest/fulltest.sh
index 11d7c49396..5e4f57e1a2 100755
--- a/tests/pytest/fulltest.sh
+++ b/tests/pytest/fulltest.sh
@@ -339,4 +339,5 @@ python3 test.py -f tools/taosdemoAllTest/taosdemoTestQueryWithJson.py
python3 ./test.py -f tag_lite/drop_auto_create.py
python3 test.py -f insert/insert_before_use_db.py
python3 test.py -f alter/alter_cacheLastRow.py
+python3 test.py -f alter/alter_keep_exception.py
#======================p4-end===============
diff --git a/tests/pytest/query/queryJoin.py b/tests/pytest/query/queryJoin.py
index 59e01615b4..cd50a7bf45 100644
--- a/tests/pytest/query/queryJoin.py
+++ b/tests/pytest/query/queryJoin.py
@@ -176,7 +176,15 @@ class TDTestCase:
tdSql.error("select count(join_mt0.c1), first(join_mt0.c1), first(join_mt1.c9) from join_mt0, join_mt1 where join_mt0.t1=join_mt1.t1 and join_mt0.ts=join_mt1.ts interval(10a) group by join_mt0.t1, join_mt0.t2 order by join_mt0.t1 desc slimit 3")
tdSql.error("select count(join_mt0.c1), first(join_mt0.c1) from join_mt0, join_mt1 where join_mt0.t1=join_mt1.t1 and join_mt0.ts=join_mt1.ts interval(10a) group by join_mt0.t1, join_mt0.t2, join_mt1.t1 order by join_mt0.ts desc, join_mt1.ts asc limit 10;")
tdSql.error("select join_mt1.c1,join_mt0.c1 from join_mt1,join_mt0 where join_mt1.ts = join_mt0.ts and join_mt1.t1 = join_mt0.t1 order by t")
-
+        # TD-4458: join on a database that uses precision 'us'
+ tdSql.execute("create database test_join_us precision 'us'")
+ tdSql.execute("use test_join_us")
+ ts = 1538548685000000
+ for i in range(2):
+ tdSql.execute("create table t%d (ts timestamp, i int)"%i)
+ tdSql.execute("insert into t%d values(%d,11)(%d,12)"%(i,ts,ts+1))
+ tdSql.query("select t1.ts from t0,t1 where t0.ts = t1.ts")
+ tdSql.checkData(0,0,'2018-10-03 14:38:05.000000')
def stop(self):
tdSql.close()
diff --git a/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-N00.json b/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-N00.json
new file mode 100644
index 0000000000..3ac8882699
--- /dev/null
+++ b/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-N00.json
@@ -0,0 +1,181 @@
+{
+ "filetype": "insert",
+ "cfgdir": "/etc/taos",
+ "host": "127.0.0.1",
+ "port": 6030,
+ "user": "root",
+ "password": "taosdata",
+ "thread_count": 4,
+ "thread_count_create_tbl": 4,
+ "result_file": "./insert_res.txt",
+ "confirm_parameter_prompt": "no",
+ "insert_interval": 0,
+ "interlace_rows": 100,
+ "num_of_records_per_req": 100,
+ "databases": [{
+ "dbinfo": {
+ "name": "db",
+ "drop": "no",
+ "replica": 1,
+ "days": 10,
+ "cache": 16,
+ "blocks": 8,
+ "precision": "ms",
+ "keep": 3650,
+ "minRows": 100,
+ "maxRows": 4096,
+ "comp":2,
+ "walLevel":1,
+ "cachelast":0,
+ "quorum":1,
+ "fsync":3000,
+ "update": 0
+ },
+ "super_tables": [{
+ "name": "stb",
+ "child_table_exists":"no",
+ "auto_create_table": "123",
+ "childtable_count": 20,
+ "childtable_prefix": "NN123_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"no",
+ "auto_create_table": "no",
+ "childtable_count": 20,
+ "childtable_prefix": "NNN_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"no",
+ "auto_create_table": "yes",
+ "childtable_count": 20,
+ "childtable_prefix": "NNY_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"yes",
+ "auto_create_table": "123",
+ "childtable_count": 20,
+ "childtable_prefix": "NY123_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"yes",
+ "auto_create_table": "no",
+ "childtable_count": 20,
+ "childtable_prefix": "NYN_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"yes",
+ "auto_create_table": "yes",
+ "childtable_count": 20,
+ "childtable_prefix": "NYY_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ }
+ ]
+ }]
+}
\ No newline at end of file
diff --git a/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-Y00.json b/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-Y00.json
new file mode 100644
index 0000000000..ffa1c91b82
--- /dev/null
+++ b/tests/pytest/tools/taosdemoAllTest/insert-drop-exist-auto-Y00.json
@@ -0,0 +1,181 @@
+{
+ "filetype": "insert",
+ "cfgdir": "/etc/taos",
+ "host": "127.0.0.1",
+ "port": 6030,
+ "user": "root",
+ "password": "taosdata",
+ "thread_count": 4,
+ "thread_count_create_tbl": 4,
+ "result_file": "./insert_res.txt",
+ "confirm_parameter_prompt": "no",
+ "insert_interval": 0,
+ "interlace_rows": 100,
+ "num_of_records_per_req": 100,
+ "databases": [{
+ "dbinfo": {
+ "name": "db",
+ "drop": "yes",
+ "replica": 1,
+ "days": 10,
+ "cache": 16,
+ "blocks": 8,
+ "precision": "ms",
+ "keep": 3650,
+ "minRows": 100,
+ "maxRows": 4096,
+ "comp":2,
+ "walLevel":1,
+ "cachelast":0,
+ "quorum":1,
+ "fsync":3000,
+ "update": 0
+ },
+ "super_tables": [{
+ "name": "stb",
+ "child_table_exists":"no",
+ "auto_create_table": "123",
+ "childtable_count": 20,
+ "childtable_prefix": "YN123_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"no",
+ "auto_create_table": "no",
+ "childtable_count": 20,
+ "childtable_prefix": "YNN_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"no",
+ "auto_create_table": "yes",
+ "childtable_count": 20,
+ "childtable_prefix": "YNY_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"yes",
+ "auto_create_table": "123",
+ "childtable_count": 20,
+ "childtable_prefix": "YY123_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"yes",
+ "auto_create_table": "no",
+ "childtable_count": 20,
+ "childtable_prefix": "YYN_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ },{
+ "name": "stb",
+ "child_table_exists":"yes",
+ "auto_create_table": "yes",
+ "childtable_count": 20,
+ "childtable_prefix": "YYY_",
+ "batch_create_tbl_num": 100,
+ "data_source": "rand",
+ "insert_mode": "taosc",
+ "insert_rows": 5,
+ "childtable_limit": 40,
+ "childtable_offset":0,
+ "interlace_rows": 0,
+ "insert_interval":0,
+ "max_sql_len": 1024000,
+ "disorder_ratio": 0,
+ "disorder_range": 1000,
+ "timestamp_step": 10,
+ "start_timestamp": "now",
+ "sample_format": "csv",
+ "sample_file": "./sample.csv",
+ "tags_file": "",
+ "columns": [{"type": "INT"}],
+ "tags": [{"type": "TINYINT"}]
+ }
+ ]
+ }]
+}
\ No newline at end of file
diff --git a/tests/pytest/tools/taosdemoAllTest/taosdemoTestInsertWithJson.py b/tests/pytest/tools/taosdemoAllTest/taosdemoTestInsertWithJson.py
index 5ecc4d70b2..638a9c49b9 100644
--- a/tests/pytest/tools/taosdemoAllTest/taosdemoTestInsertWithJson.py
+++ b/tests/pytest/tools/taosdemoAllTest/taosdemoTestInsertWithJson.py
@@ -238,10 +238,12 @@ class TDTestCase:
tdSql.execute("use dbtest123")
tdSql.query("select col2 from stb0")
tdSql.checkData(0, 0, 2147483647)
- tdSql.query("select t1 from stb1")
- tdSql.checkData(0, 0, -127)
- tdSql.query("select t2 from stb1")
- tdSql.checkData(1, 0, 126)
+ tdSql.query("select * from stb1 where t1=-127")
+ tdSql.checkRows(20)
+ tdSql.query("select * from stb1 where t2=127")
+ tdSql.checkRows(10)
+ tdSql.query("select * from stb1 where t2=126")
+ tdSql.checkRows(10)
# insert: test interlace parament
os.system("%staosdemo -f tools/taosdemoAllTest/insert-interlace-row.json -y " % binPath)
@@ -252,6 +254,42 @@ class TDTestCase:
tdSql.checkData(0, 0, 15000)
+ # # insert: auto_create
+
+ tdSql.execute('drop database if exists db')
+ tdSql.execute('create database db')
+ tdSql.execute('use db')
+ os.system("%staosdemo -y -f tools/taosdemoAllTest/insert-drop-exist-auto-N00.json " % binPath) # drop = no, child_table_exists, auto_create_table varies
+ tdSql.execute('use db')
+ tdSql.query('show tables like \'NN123%\'') #child_table_exists = no, auto_create_table varies = 123
+ tdSql.checkRows(20)
+ tdSql.query('show tables like \'NNN%\'') #child_table_exists = no, auto_create_table varies = no
+ tdSql.checkRows(20)
+ tdSql.query('show tables like \'NNY%\'') #child_table_exists = no, auto_create_table varies = yes
+ tdSql.checkRows(20)
+ tdSql.query('show tables like \'NYN%\'') #child_table_exists = yes, auto_create_table varies = no
+ tdSql.checkRows(0)
+ tdSql.query('show tables like \'NY123%\'') #child_table_exists = yes, auto_create_table varies = 123
+ tdSql.checkRows(0)
+ tdSql.query('show tables like \'NYY%\'') #child_table_exists = yes, auto_create_table varies = yes
+ tdSql.checkRows(0)
+
+ tdSql.execute('drop database if exists db')
+ os.system("%staosdemo -y -f tools/taosdemoAllTest/insert-drop-exist-auto-Y00.json " % binPath) # drop = yes, child_table_exists, auto_create_table varies
+ tdSql.execute('use db')
+ tdSql.query('show tables like \'YN123%\'') #child_table_exists = no, auto_create_table varies = 123
+ tdSql.checkRows(20)
+ tdSql.query('show tables like \'YNN%\'') #child_table_exists = no, auto_create_table varies = no
+ tdSql.checkRows(20)
+ tdSql.query('show tables like \'YNY%\'') #child_table_exists = no, auto_create_table varies = yes
+ tdSql.checkRows(20)
+ tdSql.query('show tables like \'YYN%\'') #child_table_exists = yes, auto_create_table varies = no
+ tdSql.checkRows(20)
+ tdSql.query('show tables like \'YY123%\'') #child_table_exists = yes, auto_create_table varies = 123
+ tdSql.checkRows(20)
+ tdSql.query('show tables like \'YYY%\'') #child_table_exists = yes, auto_create_table varies = yes
+ tdSql.checkRows(20)
+
os.system("rm -rf ./insert_res.txt")
os.system("rm -rf tools/taosdemoAllTest/taosdemoTestInsertWithJson.py.sql")
diff --git a/tests/script/general/db/alter_option.sim b/tests/script/general/db/alter_option.sim
index 170ba21c28..c3bb23855f 100644
--- a/tests/script/general/db/alter_option.sim
+++ b/tests/script/general/db/alter_option.sim
@@ -129,8 +129,8 @@ sql alter database db keep 20
sql_error alter database db keep 10
sql_error alter database db keep 9
sql_error alter database db keep 1
-sql alter database db keep 0
-sql alter database db keep -1
+sql_error alter database db keep 0
+sql_error alter database db keep -1
sql_error alter database db keep 365001
print ============== step cache
diff --git a/tests/script/general/db/topic1.sim b/tests/script/general/db/topic1.sim
index 42613405af..2b4cce5e64 100644
--- a/tests/script/general/db/topic1.sim
+++ b/tests/script/general/db/topic1.sim
@@ -385,8 +385,8 @@ sql alter database db keep 20
sql_error alter database db keep 10
sql_error alter database db keep 9
sql_error alter database db keep 1
-sql alter database db keep 0
-sql alter database db keep -1
+sql_error alter database db keep 0
+sql_error alter database db keep -1
sql_error alter database db keep 365001
sql_error alter topic db keep 40
diff --git a/tests/test-all.sh b/tests/test-all.sh
index 47e5de6aa0..6e7963e787 100755
--- a/tests/test-all.sh
+++ b/tests/test-all.sh
@@ -233,6 +233,10 @@ totalExampleFailed=0
if [ "${OS}" == "Linux" ]; then
corepath=`grep -oP '.*(?=core_)' /proc/sys/kernel/core_pattern||grep -oP '.*(?=core-)' /proc/sys/kernel/core_pattern`
+ if [ -z "$corepath" ];then
+ echo "/coredump/core_%e_%p_%t" > /proc/sys/kernel/core_pattern || echo "Permission denied"
+ corepath="/coredump/"
+ fi
fi
if [ "$2" != "jdbc" ] && [ "$2" != "python" ] && [ "$2" != "unit" ] && [ "$2" != "example" ]; then