Merge branch '3.0' into test3.0/lihui

plum-lihui 2022-05-24 22:47:43 +08:00
commit ff10a8838d
28 changed files with 2125 additions and 255 deletions

.gitignore vendored

@@ -46,6 +46,7 @@ psim/
 pysim/
 *.out
 *DS_Store
+tests/script/api/batchprepare
 
 # Doxygen Generated files
 html/
@@ -108,4 +109,4 @@ TAGS
 contrib/*
 !contrib/CMakeLists.txt
 !contrib/test
 sql


@@ -22,11 +22,11 @@ import CStmt from "./_c_stmt.mdx";
 ## Introduction
-Application program can execute `INSERT` statement through connectors to insert rows. TAOS CLI can be launched manually to insert data too.
+Application programs can execute the `INSERT` statement through connectors to insert rows. The TAOS CLI can also be used to manually insert data.
 ### Insert Single Row
-Below SQL statement is used to insert one row into table "d1001".
+The SQL statement below is used to insert one row into table "d1001".
 ```sql
 INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
@@ -34,7 +34,7 @@ INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31);
 ### Insert Multiple Rows
-Multiple rows can be inserted in single SQL statement. Below example inserts 2 rows into table "d1001".
+Multiple rows can be inserted in a single SQL statement. The example below inserts 2 rows into table "d1001".
 ```sql
 INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3, 218, 0.25);
@@ -42,7 +42,7 @@ INSERT INTO d1001 VALUES (1538548684000, 10.2, 220, 0.23) (1538548696650, 10.3,
 ### Insert into Multiple Tables
-Data can be inserted into multiple tables in same SQL statement. Below example inserts 2 rows into table "d1001" and 1 row into table "d1002".
+Data can be inserted into multiple tables in the same SQL statement. The example below inserts 2 rows into table "d1001" and 1 row into table "d1002".
 ```sql
 INSERT INTO d1001 VALUES (1538548685000, 10.3, 219, 0.31) (1538548695000, 12.6, 218, 0.33) d1002 VALUES (1538548696800, 12.3, 221, 0.31);
@@ -52,14 +52,14 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
 :::info
-- Inserting in batch can gain better performance. Normally, the higher the batch size, the better the performance. Please be noted each single row can't exceed 16K bytes and each single SQL statement can't exceed 1M bytes.
+- Inserting in batches can improve performance. Normally, the higher the batch size, the better the performance. Please note that a single row can't exceed 16K bytes and each SQL statement can't exceed 1MB.
-- Inserting with multiple threads can gain better performance too. However, depending on the system resources on the application side and the server side, with the number of inserting threads grows to a specific point, the performance may drop instead of growing. The proper number of threads need to be tested in a specific environment to find the best number.
+- Inserting with multiple threads can also improve performance. However, depending on the system resources on the application side and the server side, when the number of inserting threads grows beyond a specific point the performance may drop instead of improving. The proper number of threads needs to be tested in a specific environment to find the best number.
 :::
 :::warning
-- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (also the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
+- If the timestamp for the row to be inserted already exists in the table, the behavior depends on the value of parameter `UPDATE`. If it's set to 0 (the default value), the row will be discarded. If it's set to 1, the new values will override the old values for the same row.
 - The timestamp to be inserted must be newer than the current time minus the parameter `KEEP`. If `KEEP` is set to 3650 days, then data older than 3650 days can't be inserted. The timestamp also can't be newer than the current time plus the parameter `DAYS`. If `DAYS` is set to 2, data timestamped more than 2 days in the future can't be inserted.
 :::
@@ -95,13 +95,13 @@ For more details about `INSERT` please refer to [INSERT](/taos-sql/insert).
 :::note
 1. With either native connection or REST connection, the above samples work well.
-2. Please be noted that `use db` can't be used with REST connection because REST connection is stateless, so in the samples `dbName.tbName` is used to specify the table name.
+2. Please note that `use db` can't be used with a REST connection because REST connections are stateless, so in the samples `dbName.tbName` is used to specify the table name.
 :::
 ### Insert with Parameter Binding
-TDengine also provides Prepare API that support parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. From version 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data has been improved significantly to improve the insert performance by avoiding the cost of parsing SQL statements.
+TDengine also provides API support for parameter binding. Similar to MySQL, only `?` can be used in these APIs to represent the parameters to bind. In versions 2.1.1.0 and 2.1.2.0, parameter binding support for inserting data was improved significantly, increasing insert performance by avoiding the cost of parsing SQL statements.
 Parameter binding is available only with a native connection.
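As a rough illustration of the prepare API mentioned above, a minimal sketch with the C connector might look like the following. This is not code from the commit; the database name `test` and the bare-bones error handling are assumptions.

```c
#include <string.h>
#include <taos.h>

// Hedged sketch: bind one row into d1001 through the prepare API.
// Assumes `taos` is an existing native connection and the table from the docs above.
static int insert_with_binding(TAOS *taos) {
  TAOS_STMT *stmt = taos_stmt_init(taos);
  if (stmt == NULL) return -1;
  if (taos_stmt_prepare(stmt, "INSERT INTO test.d1001 VALUES (?, ?, ?, ?)", 0) != 0) return -1;

  int64_t ts = 1538548685000;
  float   current = 10.3f, phase = 0.31f;
  int32_t voltage = 219;

  TAOS_BIND params[4];
  memset(params, 0, sizeof(params));
  params[0].buffer_type = TSDB_DATA_TYPE_TIMESTAMP; params[0].buffer = &ts;      params[0].buffer_length = sizeof(ts);
  params[1].buffer_type = TSDB_DATA_TYPE_FLOAT;     params[1].buffer = &current; params[1].buffer_length = sizeof(current);
  params[2].buffer_type = TSDB_DATA_TYPE_INT;       params[2].buffer = &voltage; params[2].buffer_length = sizeof(voltage);
  params[3].buffer_type = TSDB_DATA_TYPE_FLOAT;     params[3].buffer = &phase;   params[3].buffer_length = sizeof(phase);

  if (taos_stmt_bind_param(stmt, params) != 0) return -1;  // only `?` placeholders are supported
  if (taos_stmt_add_batch(stmt) != 0) return -1;
  if (taos_stmt_execute(stmt) != 0) return -1;
  return taos_stmt_close(stmt);
}
```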


@@ -15,16 +15,16 @@ import CTelnet from "./_c_opts_telnet.mdx";
 ## Introduction
-A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs single column data model, so one line can only contains single data column. There can be multiple tags. Each line contains 4 parts as below:
+A single line of text is used in OpenTSDB line protocol to represent one row of data. OpenTSDB employs a single-column data model, so one line can only contain a single data column. There can be multiple tags. Each line contains 4 parts as below:
 ```
 <metric> <timestamp> <value> <tagk_1>=<tagv_1>[ <tagk_n>=<tagv_n>]
 ```
-- `metric` will be used as STable name.
+- `metric` will be used as the STable name.
-- `timestamp` is the timestamp of current row of data. The time precision will be determined automatically based on the length of the timestamp. second and millisecond time precision are supported.\
+- `timestamp` is the timestamp of the current row of data. The time precision will be determined automatically based on the length of the timestamp. Second and millisecond time precision are supported.
 - `value` is a metric which must be a numeric value; the corresponding column name is "value".
-- The last part is tag sets separated by space, all tags will be converted to nchar type automatically.
+- The last part is the tag set separated by spaces; all tags will be converted to nchar type automatically.
 For example:
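The example following "For example:" is cut off in this hunk. For illustration only (the metric, timestamp, and tags below are invented, not taken from the commit), a telnet-protocol line and a hedged sketch of feeding it through the C connector's schemaless API could look like this:

```c
#include <taos.h>

// Hedged sketch: one OpenTSDB telnet line -- metric, timestamp, value, then tags.
// `meters.current` becomes the STable name; precision is inferred from the timestamp length.
static void ingest_telnet_line(TAOS *taos) {
  char *lines[] = {"meters.current 1648432611250 11.3 location=LosAngeles groupid=3"};
  TAOS_RES *res = taos_schemaless_insert(taos, lines, 1, TSDB_SML_TELNET_PROTOCOL,
                                         TSDB_SML_TIMESTAMP_NOT_CONFIGURED);
  taos_free_result(res);  // real code would check taos_errno(res) first
}
```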


@@ -2,11 +2,11 @@
 title: Insert
 ---
-TDengine supports multiple protocols of inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, OpenTSDB JSON protocol. Data can be inserted row by row, or in batch. Data from one or more collecting points can be inserted simultaneously. In the meantime, data can be inserted with multiple threads, out of order data and historical data can be inserted too. InfluxDB Line protocol, OpenTSDB Telnet protocol and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create stable and table in advance if using schemaless protocols, and the schemas can be adjusted automatically according to the data to be inserted.
+TDengine supports multiple protocols for inserting data, including SQL, InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol. Data can be inserted row by row, or in batches. Data from one or more collection points can be inserted simultaneously. Data can be inserted with multiple threads, and out-of-order data and historical data can be inserted as well. InfluxDB Line protocol, OpenTSDB Telnet protocol, and OpenTSDB JSON protocol are the 3 kinds of schemaless insert protocols supported by TDengine. It's not necessary to create STables and tables in advance if using schemaless protocols, and the schemas can be adjusted automatically based on the data being inserted.
 ```mdx-code-block
 import DocCardList from '@theme/DocCardList';
 import {useCurrentSidebarCategory} from '@docusaurus/theme-common';
 <DocCardList items={useCurrentSidebarCategory().items}/>
 ```


@@ -20,7 +20,7 @@ import CAsync from "./_c_async.mdx";
 ## Introduction
-SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through REST API or connectors. TDengine CLI `taos` can also be used to execute SQL Ad-Hoc query. Here is the list of major query functionalities supported by TDengine
+SQL is used by TDengine as the query language. Application programs can send SQL statements to TDengine through the REST API or connectors. The TDengine CLI `taos` can also be used to execute ad-hoc SQL queries. Here is the list of major query functionalities supported by TDengine:
 - Query on single column or multiple columns
 - Filter on tags or data columns: >, <, =, <\>, like
@@ -31,7 +31,7 @@ SQL is used by TDengine as the query language. Application programs can send SQL
 - Join query with timestamp alignment
 - Aggregate functions: count, max, min, avg, sum, twa, stddev, leastsquares, top, bottom, first, last, percentile, apercentile, last_row, spread, diff
-For example, below SQL statement can be executed in TDengine CLI `taos` to select the rows whose voltage column is bigger than 215 and limit the output to only 2 rows.
+For example, the SQL statement below can be executed in TDengine CLI `taos` to select rows whose voltage column is bigger than 215, limiting the output to only 2 rows.
 ```sql
 select * from d1001 where voltage > 215 order by ts desc limit 2;
@@ -46,15 +46,15 @@ taos> select * from d1001 where voltage > 215 order by ts desc limit 2;
 Query OK, 2 row(s) in set (0.001100s)
 ```
-To meet the requirements in many use cases, some special functions have been added in TDengine, for example `twa` (Time Weighted Average), `spared` (The difference between the maximum and the minimum), `last_row` (the last row), more and more functions will be added to better perform in many use cases. Furthermore, continuous query is also supported in TDengine.
+To meet the requirements of many use cases, some special functions have been added in TDengine, for example `twa` (time weighted average), `spread` (the difference between the maximum and the minimum), and `last_row` (the last row). Furthermore, continuous query is also supported in TDengine.
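As a concrete illustration of one of these functions: `twa` is normally evaluated over an explicit time window. A hedged sketch through the C connector (the table comes from the surrounding examples; the database name `test` and the time range are assumptions) might be:

```c
#include <stdio.h>
#include <taos.h>

// Hedged sketch: time weighted average of `current` over an explicit window.
static void twa_example(TAOS *taos) {
  TAOS_RES *res = taos_query(taos,
      "SELECT twa(current) FROM test.d1001 "
      "WHERE ts >= '2018-10-03 14:38:00' AND ts <= '2018-10-03 14:40:00'");
  if (taos_errno(res) == 0) {
    TAOS_ROW row;
    while ((row = taos_fetch_row(res)) != NULL) {
      printf("twa(current) = %f\n", *(double *)row[0]);  // single aggregate column
    }
  }
  taos_free_result(res);
}
```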
 For detailed query syntax please refer to [Select](/taos-sql/select).
 ## Aggregation among Tables
-In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (abbreviated for super table), is used in TDengine to represent a kind of data collection points, and a table is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. same kind of data collection points, can be. Aggregate functions applicable for tables can be used directly on STables, syntax is exactly same.
+In many use cases, there are always multiple kinds of data collection points. A new concept, called STable (short for super table), is used in TDengine to represent a kind of data collection point, and a subtable is used to represent a specific data collection point. Tags are used by TDengine to represent the static properties of data collection points. A specific data collection point has its own values for static properties. By specifying filter conditions on tags, aggregation can be performed efficiently among all the subtables created via the same STable, i.e. the same kind of data collection points. Aggregate functions applicable to tables can be used directly on STables; the syntax is exactly the same.
-In summary, for a STable, its subtables can be aggregated by a simple query on STable, it's kind of join operation. But tables belong to different STables could not be aggregated.
+In summary, for a STable, its subtables can be aggregated by a simple query on the STable; it's a kind of join operation. But tables belonging to different STables cannot be aggregated.
 ### Example 1
@@ -81,11 +81,11 @@ taos> SELECT count(*), max(current) FROM meters where groupId = 2 and ts > now -
 Query OK, 1 row(s) in set (0.002136s)
 ```
-Join query is allowed between only the tables of same STable. In [Select](/taos-sql/select), all query operations are marked as whether it supports STable or not.
+Join queries are only allowed between the subtables of the same STable. In [Select](/taos-sql/select), all query operations are marked as to whether they support STables or not.
 ## Down Sampling and Interpolation
-In IoT use cases, down sampling is widely used to aggregate the data by time range. `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, below SQL statement can be used to get the sum of current every 10 seconds from meters table d1001.
+In IoT use cases, down sampling is widely used to aggregate the data by time range. The `INTERVAL` keyword in TDengine can be used to simplify the query by time window. For example, the SQL statement below can be used to get the sum of current every 10 seconds from meters table d1001.
 ```
 taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
@@ -96,7 +96,7 @@ taos> SELECT sum(current) FROM d1001 INTERVAL(10s);
 Query OK, 2 row(s) in set (0.000883s)
 ```
-Down sampling can also be used for STable. For example, below SQL statement can be used to get the sum of current from all meters in BeiJing.
+Down sampling can also be used on a STable. For example, the SQL statement below can be used to get the sum of current from all meters in Beijing.
 ```
 taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s);
@@ -110,7 +110,7 @@ taos> SELECT SUM(current) FROM meters where location like "Beijing%" INTERVAL(1s
 Query OK, 5 row(s) in set (0.001538s)
 ```
-Down sampling also supports time offset. For example, below SQL statement can be used to get the sum of current from all meters but each time window must start at the boundary of 500 milliseconds.
+Down sampling also supports a time offset. For example, the SQL statement below can be used to get the sum of current from all meters, with each time window starting at a boundary offset by 500 milliseconds.
 ```
 taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
@@ -124,7 +124,7 @@ taos> SELECT SUM(current) FROM meters INTERVAL(1s, 500a);
 Query OK, 5 row(s) in set (0.001521s)
 ```
-In many use cases, it's hard to align the timestamp of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned with same time interval and application programs have to handle by themselves in many systems. In TDengine, it's easy to achieve the alignment using down sampling.
+In many use cases, it's hard to align the timestamps of the data collected by each collection point. However, a lot of algorithms like FFT require the data to be aligned on the same time interval, and application programs have to handle this by themselves. In TDengine, it's easy to achieve the alignment using down sampling.
 Interpolation can be performed in TDengine if there is no data in a time range.
@@ -162,16 +162,16 @@ In the section describing [Insert](/develop/insert-data/sql-writing), a database
 :::note
-1. With either REST connection or native connection, the above sample code work well.
+1. With either REST connection or native connection, the above sample code works well.
-2. Please be noted that `use db` can't be used in case of REST connection because it's stateless.
+2. Please note that `use db` can't be used in case of a REST connection because it's stateless.
 :::
 ### Asynchronous Query
-Besides synchronous query, asynchronous query API is also provided by TDengine to insert or query data more efficiently. With similar hardware and software environment, async API is 2~4 times faster than sync APIs. Async API works in non-blocking mode, which means an operation can be returned without finishing so that the calling thread can switch to other works to improve the performance of the whole application system. Async APIs perform especially better in case of poor network.
+Besides synchronous queries, an asynchronous query API is also provided by TDengine to insert or query data more efficiently. With a similar hardware and software environment, the async API is 2~4 times faster than the sync APIs. The async API works in non-blocking mode, which means an operation can return before finishing, so the calling thread can switch to other work and improve the performance of the whole application system. Async APIs perform especially well over poor networks.
-Please be noted that async query can only be used with native connection.
+Please note that async queries can only be used with a native connection.
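A minimal sketch of the async C API described above (not from the commit; it assumes an existing native connection and the d1001 table from earlier examples, and real code would also fetch rows with `taos_fetch_rows_a` and synchronize before exit):

```c
#include <stdio.h>
#include <taos.h>

// Callback fired when the query completes; `code` is the result code (0 on success).
static void query_cb(void *param, TAOS_RES *res, int code) {
  if (code == 0) {
    printf("async query \"%s\" finished\n", (const char *)param);
  }
  taos_free_result(res);
}

// Hedged sketch: submit the query without blocking the calling thread.
static void async_query(TAOS *taos) {
  static const char *sql = "SELECT * FROM test.d1001 WHERE voltage > 215";
  taos_query_a(taos, sql, query_cb, (void *)sql);
  // the calling thread is free to do other work while the query runs
}
```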
 <Tabs defaultValue="python" groupId="lang">
 <TabItem label="Python" value="python">


@@ -242,6 +242,7 @@ typedef struct SSelectStmt {
   bool hasAggFuncs;
   bool hasRepeatScanFuncs;
   bool hasIndefiniteRowsFunc;
+  bool hasSelectValFunc;
 } SSelectStmt;
 typedef enum ESetOperatorType { SET_OP_TYPE_UNION_ALL = 1, SET_OP_TYPE_UNION } ESetOperatorType;


@@ -96,7 +96,8 @@ struct STQ {
   SHashObj* pStreamTasks;
   SVnode*   pVnode;
   SWal*     pWal;
-  TDB*      pTdb;
+  TDB*      pMetaStore;
+  TTB*      pExecStore;
 };
 typedef struct {


@@ -14,6 +14,7 @@
  */
 #include "tq.h"
+#include "tdbInt.h"
 int32_t tqInit() {
   int8_t old;
@@ -46,6 +47,10 @@ void tqCleanUp() {
   }
 }
+int tqExecKeyCompare(const void* pKey1, int32_t kLen1, const void* pKey2, int32_t kLen2) {
+  return strcmp(pKey1, pKey2);
+}
 STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
   STQ* pTq = taosMemoryMalloc(sizeof(STQ));
   if (pTq == NULL) {
@@ -55,9 +60,6 @@ STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
   pTq->path = strdup(path);
   pTq->pVnode = pVnode;
   pTq->pWal = pWal;
-  if (tdbOpen(path, 4096, 1, &pTq->pTdb) < 0) {
-    ASSERT(0);
-  }
   pTq->execs = taosHashInit(64, MurmurHash3_32, true, HASH_ENTRY_LOCK);
@@ -65,6 +67,43 @@ STQ* tqOpen(const char* path, SVnode* pVnode, SWal* pWal) {
   pTq->pushMgr = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), true, HASH_ENTRY_LOCK);
+  if (tdbOpen(path, 16 * 1024, 1, &pTq->pMetaStore) < 0) {
+    ASSERT(0);
+  }
+  if (tdbTbOpen("exec", -1, -1, tqExecKeyCompare, pTq->pMetaStore, &pTq->pExecStore) < 0) {
+    ASSERT(0);
+  }
+  TXN txn;
+  if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, 0) < 0) {
+    ASSERT(0);
+  }
+  /*if (tdbBegin(pTq->pMetaStore, &txn) < 0) {*/
+  /*ASSERT(0);*/
+  /*}*/
+  TBC* pCur;
+  if (tdbTbcOpen(pTq->pExecStore, &pCur, &txn) < 0) {
+    ASSERT(0);
+  }
+  void* pKey;
+  int   kLen;
+  void* pVal;
+  int   vLen;
+  tdbTbcMoveToFirst(pCur);
+  while (tdbTbcNext(pCur, &pKey, &kLen, &pVal, &vLen) == 0) {
+    // create, put into execsj
+  }
+  if (tdbTxnClose(&txn) < 0) {
+    ASSERT(0);
+  }
   return pTq;
 }
@@ -74,7 +113,7 @@ void tqClose(STQ* pTq) {
   taosHashCleanup(pTq->execs);
   taosHashCleanup(pTq->pStreamTasks);
   taosHashCleanup(pTq->pushMgr);
-  tdbClose(pTq->pTdb);
+  tdbClose(pTq->pMetaStore);
   taosMemoryFree(pTq);
 }
 // TODO
@@ -91,7 +130,6 @@ int32_t tEncodeSTqExec(SEncoder* pEncoder, const STqExec* pExec) {
   if (tEncodeI8(pEncoder, pExec->withTag) < 0) return -1;
   if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
     if (tEncodeCStr(pEncoder, pExec->qmsg) < 0) return -1;
-    // TODO encode modified exec
   }
   tEndEncode(pEncoder);
   return pEncoder->pos;
@@ -108,7 +146,6 @@ int32_t tDecodeSTqExec(SDecoder* pDecoder, STqExec* pExec) {
   if (tDecodeI8(pDecoder, &pExec->withTag) < 0) return -1;
   if (pExec->subType == TOPIC_SUB_TYPE__TABLE) {
     if (tDecodeCStrAlloc(pDecoder, &pExec->qmsg) < 0) return -1;
-    // TODO decode modified exec
   }
   tEndDecode(pDecoder);
   return 0;
@@ -556,6 +593,23 @@ int32_t tqProcessVgDeleteReq(STQ* pTq, char* msg, int32_t msgLen) {
   int32_t code = taosHashRemove(pTq->execs, pReq->subKey, strlen(pReq->subKey));
   ASSERT(code == 0);
+  TXN txn;
+  if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) {
+    ASSERT(0);
+  }
+  if (tdbBegin(pTq->pMetaStore, &txn) < 0) {
+    ASSERT(0);
+  }
+  tdbTbDelete(pTq->pExecStore, pReq->subKey, (int)strlen(pReq->subKey), &txn);
+  if (tdbCommit(pTq->pMetaStore, &txn) < 0) {
+    ASSERT(0);
+  }
   return 0;
 }
@@ -604,6 +658,45 @@ int32_t tqProcessVgChangeReq(STQ* pTq, char* msg, int32_t msgLen) {
       pExec->pDropTbUid = taosHashInit(64, taosGetDefaultHashFunction(TSDB_DATA_TYPE_BIGINT), false, HASH_NO_LOCK);
     }
     taosHashPut(pTq->execs, req.subKey, strlen(req.subKey), pExec, sizeof(STqExec));
+    int32_t code;
+    int32_t vlen;
+    tEncodeSize(tEncodeSTqExec, pExec, vlen, code);
+    ASSERT(code == 0);
+    void* buf = taosMemoryCalloc(1, vlen);
+    if (buf == NULL) {
+      ASSERT(0);
+    }
+    SEncoder encoder;
+    tEncoderInit(&encoder, buf, vlen);
+    if (tEncodeSTqExec(&encoder, pExec) < 0) {
+      ASSERT(0);
+    }
+    TXN txn;
+    if (tdbTxnOpen(&txn, 0, tdbDefaultMalloc, tdbDefaultFree, NULL, TDB_TXN_WRITE | TDB_TXN_READ_UNCOMMITTED) < 0) {
+      ASSERT(0);
+    }
+    if (tdbBegin(pTq->pMetaStore, &txn) < 0) {
+      ASSERT(0);
+    }
+    if (tdbTbUpsert(pTq->pExecStore, req.subKey, (int)strlen(req.subKey), buf, vlen, &txn) < 0) {
+      ASSERT(0);
+    }
+    if (tdbCommit(pTq->pMetaStore, &txn) < 0) {
+      ASSERT(0);
+    }
+    tEncoderClear(&encoder);
+    taosMemoryFree(buf);
     return 0;
   } else {
     /*if (req.newConsumerId != -1) {*/


@@ -83,11 +83,11 @@ bool tqNextDataBlockFilterOut(STqReadHandle* pHandle, SHashObj* filterOutUids) {
 int32_t tqRetrieveDataBlock(SArray** ppCols, STqReadHandle* pHandle, uint64_t* pGroupId, uint64_t* pUid,
                             int32_t* pNumOfRows, int16_t* pNumOfCols) {
-  /*int32_t sversion = pHandle->pBlock->sversion;*/
-  // TODO set to real sversion
   *pUid = 0;
-  int32_t sversion = 1;
+  // TODO set to real sversion
+  /*int32_t sversion = 1;*/
+  int32_t sversion = htonl(pHandle->pBlock->sversion);
   if (pHandle->sver != sversion || pHandle->cachedSchemaUid != pHandle->msgIter.suid) {
     pHandle->pSchema = metaGetTbTSchema(pHandle->pVnodeMeta, pHandle->msgIter.uid, sversion);
     if (pHandle->pSchema == NULL) {


@@ -143,12 +143,12 @@ SNode* createDropTableClause(SAstCreateContext* pCxt, bool ignoreNotExists, SNod
 SNode* createDropTableStmt(SAstCreateContext* pCxt, SNodeList* pTables);
 SNode* createDropSuperTableStmt(SAstCreateContext* pCxt, bool ignoreNotExists, SNode* pRealTable);
 SNode* createAlterTableModifyOptions(SAstCreateContext* pCxt, SNode* pRealTable, SNode* pOptions);
-SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
-                                    const SToken* pColName, SDataType dataType);
-SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, const SToken* pColName);
-SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
-                                 const SToken* pOldColName, const SToken* pNewColName);
-SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, const SToken* pTagName, SNode* pVal);
+SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName,
+                                    SDataType dataType);
+SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName);
+SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pOldColName,
+                                 SToken* pNewColName);
+SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, SToken* pTagName, SNode* pVal);
 SNode* createUseDatabaseStmt(SAstCreateContext* pCxt, SToken* pDbName);
 SNode* createShowStmt(SAstCreateContext* pCxt, ENodeType type, SNode* pDbName, SNode* pTbNamePattern);
 SNode* createShowCreateDatabaseStmt(SAstCreateContext* pCxt, const SToken* pDbName);


@@ -94,7 +94,7 @@ static FORCE_INLINE void getSTSRowAppendInfo(uint8_t rowType, SParsedDataColInfo
                                              col_id_t *colIdx) {
   col_id_t schemaIdx = 0;
   if (IS_DATA_COL_ORDERED(spd)) {
-    schemaIdx = spd->boundColumns[idx] - PRIMARYKEY_TIMESTAMP_COL_ID;
+    schemaIdx = spd->boundColumns[idx];
     if (TD_IS_TP_ROW_T(rowType)) {
       *toffset = (spd->cols + schemaIdx)->toffset;  // the offset of firstPart
       *colIdx = schemaIdx;
@@ -104,7 +104,7 @@ static FORCE_INLINE void getSTSRowAppendInfo(uint8_t rowType, SParsedDataColInfo
     }
   } else {
     ASSERT(idx == (spd->colIdxInfo + idx)->boundIdx);
-    schemaIdx = (spd->colIdxInfo + idx)->schemaColIdx - PRIMARYKEY_TIMESTAMP_COL_ID;
+    schemaIdx = (spd->colIdxInfo + idx)->schemaColIdx;
     if (TD_IS_TP_ROW_T(rowType)) {
       *toffset = (spd->cols + schemaIdx)->toffset;
       *colIdx = schemaIdx;
@@ -133,14 +133,15 @@ static FORCE_INLINE int32_t setBlockInfo(SSubmitBlk *pBlocks, STableDataBlocks *
 int32_t schemaIdxCompar(const void *lhs, const void *rhs);
 int32_t boundIdxCompar(const void *lhs, const void *rhs);
 void setBoundColumnInfo(SParsedDataColInfo *pColList, SSchema *pSchema, col_id_t numOfCols);
-void destroyBlockArrayList(SArray* pDataBlockList);
-void destroyBlockHashmap(SHashObj* pDataBlockHash);
-int initRowBuilder(SRowBuilder *pBuilder, int16_t schemaVer, SParsedDataColInfo *pColInfo);
-int32_t allocateMemIfNeed(STableDataBlocks *pDataBlock, int32_t rowSize, int32_t * numOfRows);
-int32_t getDataBlockFromList(SHashObj* pHashList, void* id, int32_t idLen, int32_t size, int32_t startOffset, int32_t rowSize,
-                             STableMeta* pTableMeta, STableDataBlocks** dataBlocks, SArray* pBlockList, SVCreateTbReq* pCreateTbReq);
-int32_t mergeTableDataBlocks(SHashObj* pHashObj, uint8_t payloadType, SArray** pVgDataBlocks);
-int32_t buildCreateTbMsg(STableDataBlocks* pBlocks, SVCreateTbReq* pCreateTbReq);
+void    destroyBlockArrayList(SArray *pDataBlockList);
+void    destroyBlockHashmap(SHashObj *pDataBlockHash);
+int     initRowBuilder(SRowBuilder *pBuilder, int16_t schemaVer, SParsedDataColInfo *pColInfo);
+int32_t allocateMemIfNeed(STableDataBlocks *pDataBlock, int32_t rowSize, int32_t *numOfRows);
+int32_t getDataBlockFromList(SHashObj *pHashList, void *id, int32_t idLen, int32_t size, int32_t startOffset,
+                             int32_t rowSize, STableMeta *pTableMeta, STableDataBlocks **dataBlocks, SArray *pBlockList,
+                             SVCreateTbReq *pCreateTbReq);
+int32_t mergeTableDataBlocks(SHashObj *pHashObj, uint8_t payloadType, SArray **pVgDataBlocks);
+int32_t buildCreateTbMsg(STableDataBlocks *pBlocks, SVCreateTbReq *pCreateTbReq);
 int32_t allocateMemForSize(STableDataBlocks *pDataBlock, int32_t allSize);


@@ -968,9 +968,9 @@ SNode* createAlterTableModifyOptions(SAstCreateContext* pCxt, SNode* pRealTable,
   return createAlterTableStmtFinalize(pRealTable, pStmt);
 }
-SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
-                                    const SToken* pColName, SDataType dataType) {
-  if (NULL == pRealTable) {
+SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName,
+                                    SDataType dataType) {
+  if (NULL == pRealTable || !checkColumnName(pCxt, pColName)) {
     return NULL;
   }
   SAlterTableStmt* pStmt = nodesMakeNode(QUERY_NODE_ALTER_TABLE_STMT);
@@ -981,8 +981,8 @@ SNode* createAlterTableAddModifyCol(SAstCreateContext* pCxt, SNode* pRealTable,
   return createAlterTableStmtFinalize(pRealTable, pStmt);
 }
-SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, const SToken* pColName) {
-  if (NULL == pRealTable) {
+SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pColName) {
+  if (NULL == pRealTable || !checkColumnName(pCxt, pColName)) {
     return NULL;
   }
   SAlterTableStmt* pStmt = nodesMakeNode(QUERY_NODE_ALTER_TABLE_STMT);
@@ -992,9 +992,9 @@ SNode* createAlterTableDropCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_
   return createAlterTableStmtFinalize(pRealTable, pStmt);
 }
-SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType,
-                                 const SToken* pOldColName, const SToken* pNewColName) {
-  if (NULL == pRealTable) {
+SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int8_t alterType, SToken* pOldColName,
+                                 SToken* pNewColName) {
+  if (NULL == pRealTable || !checkColumnName(pCxt, pOldColName) || !checkColumnName(pCxt, pNewColName)) {
     return NULL;
   }
   SAlterTableStmt* pStmt = nodesMakeNode(QUERY_NODE_ALTER_TABLE_STMT);
@@ -1005,8 +1005,8 @@ SNode* createAlterTableRenameCol(SAstCreateContext* pCxt, SNode* pRealTable, int
   return createAlterTableStmtFinalize(pRealTable, pStmt);
 }
-SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, const SToken* pTagName, SNode* pVal) {
-  if (NULL == pRealTable) {
+SNode* createAlterTableSetTag(SAstCreateContext* pCxt, SNode* pRealTable, SToken* pTagName, SNode* pVal) {
+  if (NULL == pRealTable || !checkColumnName(pCxt, pTagName)) {
     return NULL;
   }
   SAlterTableStmt* pStmt = nodesMakeNode(QUERY_NODE_ALTER_TABLE_STMT);


@@ -189,11 +189,18 @@ static int32_t calcConstProject(SNode* pProject, SNode** pNew) {
   return code;
 }
-static int32_t calcConstProjections(SCalcConstContext* pCxt, SNodeList* pProjections, bool subquery) {
+static bool isUselessCol(bool hasSelectValFunc, SExprNode* pProj) {
+  if (hasSelectValFunc && QUERY_NODE_FUNCTION == nodeType(pProj) && fmIsSelectFunc(((SFunctionNode*)pProj)->funcId)) {
+    return false;
+  }
+  return NULL == ((SExprNode*)pProj)->pAssociation;
+}
+static int32_t calcConstProjections(SCalcConstContext* pCxt, SSelectStmt* pSelect, bool subquery) {
   SNode* pProj = NULL;
-  WHERE_EACH(pProj, pProjections) {
-    if (subquery && NULL == ((SExprNode*)pProj)->pAssociation) {
-      ERASE_NODE(pProjections);
+  WHERE_EACH(pProj, pSelect->pProjectionList) {
+    if (subquery && isUselessCol(pSelect->hasSelectValFunc, (SExprNode*)pProj)) {
+      ERASE_NODE(pSelect->pProjectionList);
       continue;
     }
     SNode* pNew = NULL;
@@ -226,7 +233,7 @@ static int32_t calcConstGroupBy(SCalcConstContext* pCxt, SSelectStmt* pSelect) {
 }
 static int32_t calcConstSelect(SCalcConstContext* pCxt, SSelectStmt* pSelect, bool subquery) {
-  int32_t code = calcConstProjections(pCxt, pSelect->pProjectionList, subquery);
+  int32_t code = calcConstProjections(pCxt, pSelect, subquery);
   if (TSDB_CODE_SUCCESS == code) {
     code = calcConstFromTable(pCxt, pSelect);
   }


@@ -701,7 +701,7 @@ static int32_t parseBoundColumns(SInsertParseContext* pCxt, SParsedDataColInfo*
     }
     lastColIdx = index;
     pColList->cols[index].valStat = VAL_STAT_HAS;
-    pColList->boundColumns[pColList->numOfBound] = index + PRIMARYKEY_TIMESTAMP_COL_ID;
+    pColList->boundColumns[pColList->numOfBound] = index;
     ++pColList->numOfBound;
     switch (pSchema[t].type) {
       case TSDB_DATA_TYPE_BINARY:
@@ -815,7 +815,7 @@ static int32_t parseTagsClause(SInsertParseContext* pCxt, SSchema* pSchema, uint
       return buildInvalidOperationMsg(&pCxt->msg, "no mix usage for ? and tag values");
     }
-    SSchema* pTagSchema = &pSchema[pCxt->tags.boundColumns[i] - 1];  // colId starts with 1
+    SSchema* pTagSchema = &pSchema[pCxt->tags.boundColumns[i]];
     param.schema = pTagSchema;
     CHECK_CODE(
         parseValueToken(&pCxt->pSql, &sToken, pTagSchema, precision, tmpTokenBuf, KvRowAppend, &param, &pCxt->msg));
@@ -903,7 +903,7 @@ static int32_t parseUsingClause(SInsertParseContext* pCxt, SName* name, char* tb
   if (TK_NK_LP != sToken.type) {
     return buildSyntaxErrMsg(&pCxt->msg, "( is expected", sToken.z);
   }
-  CHECK_CODE(parseTagsClause(pCxt, pCxt->pTableMeta->schema, getTableInfo(pCxt->pTableMeta).precision, name->tname));
+  CHECK_CODE(parseTagsClause(pCxt, pTagsSchema, getTableInfo(pCxt->pTableMeta).precision, name->tname));
   NEXT_VALID_TOKEN(pCxt->pSql, sToken);
   if (TK_NK_COMMA == sToken.type) {
     return generateSyntaxErrMsg(&pCxt->msg, TSDB_CODE_PAR_TAGS_NOT_MATCHED);
@@ -929,7 +929,7 @@ static int parseOneRow(SInsertParseContext* pCxt, STableDataBlocks* pDataBlocks,
   // 1. set the parsed value from sql string
   for (int i = 0; i < spd->numOfBound; ++i) {
     NEXT_TOKEN_WITH_PREV(pCxt->pSql, sToken);
-    SSchema* pSchema = &schema[spd->boundColumns[i] - 1];
+    SSchema* pSchema = &schema[spd->boundColumns[i]];
     if (sToken.type == TK_NK_QUESTION) {
       isParseBindParam = true;
@@ -1088,7 +1088,7 @@ static int32_t parseInsertBody(SInsertParseContext* pCxt) {
   if (sToken.type && pCxt->pSql[0]) {
     return buildSyntaxErrMsg(&pCxt->msg, "invalid charactor in SQL", sToken.z);
   }
   if (0 == pCxt->totalNum && (!TSDB_QUERY_HAS_TYPE(pCxt->pOutput->insertType, TSDB_QUERY_TYPE_STMT_INSERT))) {
     return buildInvalidOperationMsg(&pCxt->msg, "no data in sql");
   }
@@ -1337,7 +1337,7 @@ int32_t qBindStmtTagsValue(void* pBlock, void* boundTags, int64_t suid, char* tN
       continue;
     }
-    SSchema* pTagSchema = &pSchema[tags->boundColumns[c] - 1];  // colId starts with 1
+    SSchema* pTagSchema = &pSchema[tags->boundColumns[c]];
     param.schema = pTagSchema;
     int32_t colLen = pTagSchema->bytes;
@@ -1384,7 +1384,7 @@ int32_t qBindStmtColsValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBuf, in
     tdSRowResetBuf(pBuilder, row);
     for (int c = 0; c < spd->numOfBound; ++c) {
-      SSchema* pColSchema = &pSchema[spd->boundColumns[c] - 1];
+      SSchema* pColSchema = &pSchema[spd->boundColumns[c]];
       if (bind[c].num != rowNum) {
         return buildInvalidOperationMsg(&pBuf, "row number in each bind param should be the same");
@@ -1467,7 +1467,7 @@ int32_t qBindStmtSingleColValue(void* pBlock, TAOS_MULTI_BIND* bind, char* msgBu
       tdSRowGetBuf(pBuilder, row);
     }
-    SSchema* pColSchema = &pSchema[spd->boundColumns[colIdx] - 1];
+    SSchema* pColSchema = &pSchema[spd->boundColumns[colIdx]];
     if (bind->num != rowNum) {
       return buildInvalidOperationMsg(&pBuf, "row number in each bind param should be the same");
@@ -1539,7 +1539,7 @@ int32_t buildBoundFields(SParsedDataColInfo* boundInfo, SSchema* pSchema, int32_
   }
   for (int32_t i = 0; i < boundInfo->numOfBound; ++i) {
-    SSchema* pTagSchema = &pSchema[boundInfo->boundColumns[i] - 1];
+    SSchema* pTagSchema = &pSchema[boundInfo->boundColumns[i]];
     strcpy((*fields)[i].name, pTagSchema->name);
     (*fields)[i].type = pTagSchema->type;
     (*fields)[i].bytes = pTagSchema->bytes;
@@ -1638,7 +1638,7 @@ static int32_t smlBoundColumnData(SArray* cols, SParsedDataColInfo* pColList, SS
     }
     lastColIdx = index;
     pColList->cols[index].valStat = VAL_STAT_HAS;
-    pColList->boundColumns[pColList->numOfBound] = index + PRIMARYKEY_TIMESTAMP_COL_ID;
+    pColList->boundColumns[pColList->numOfBound] = index;
     ++pColList->numOfBound;
     switch (pSchema[t].type) {
      case TSDB_DATA_TYPE_BINARY:
@@ -1688,7 +1688,7 @@ static int32_t smlBuildTagRow(SArray* cols, SKVRowBuilder* tagsBuilder, SParsedD
   SKvParam param = {.builder = tagsBuilder};
   for (int i = 0; i < tags->numOfBound; ++i) {
-    SSchema* pTagSchema = &pSchema[tags->boundColumns[i] - 1];  // colId starts with 1
+    SSchema* pTagSchema = &pSchema[tags->boundColumns[i]];
     param.schema = pTagSchema;
     SSmlKv* kv = taosArrayGetP(cols, i);
     if (IS_VAR_DATA_TYPE(kv->type)) {


@@ -74,7 +74,7 @@ void setBoundColumnInfo(SParsedDataColInfo* pColList, SSchema* pSchema, col_id_t
       default:
         break;
     }
-    pColList->boundColumns[i] = pSchema[i].colId;
+    pColList->boundColumns[i] = i;
   }
   pColList->allNullLen += pColList->flen;
   pColList->boundNullLen = pColList->allNullLen;  // default set allNullLen


@@ -812,7 +812,7 @@ static EDealRes translateOperator(STranslateContext* pCxt, SOperatorNode* pOp) {
   return DEAL_RES_CONTINUE;
 }
-static EDealRes haveAggOrNonstdFunction(SNode* pNode, void* pContext) {
+static EDealRes haveVectorFunction(SNode* pNode, void* pContext) {
   if (isAggFunc(pNode)) {
     *((bool*)pContext) = true;
     return DEAL_RES_END;
@@ -857,7 +857,7 @@ static int32_t rewriteCountStar(STranslateContext* pCxt, SFunctionNode* pCount)
 static bool hasInvalidFuncNesting(SNodeList* pParameterList) {
   bool hasInvalidFunc = false;
-  nodesWalkExprs(pParameterList, haveAggOrNonstdFunction, &hasInvalidFunc);
+  nodesWalkExprs(pParameterList, haveVectorFunction, &hasInvalidFunc);
   return hasInvalidFunc;
 }
@@ -1009,6 +1009,7 @@ static EDealRes rewriteColToSelectValFunc(STranslateContext* pCxt, SNode** pNode
   }
   if (TSDB_CODE_SUCCESS == pCxt->errCode) {
     *pNode = (SNode*)pFunc;
+    pCxt->pCurrStmt->hasSelectValFunc = true;
   } else {
     nodesDestroyNode(pFunc);
   }
@@ -1096,7 +1097,7 @@ typedef struct CheckAggColCoexistCxt {
   STranslateContext* pTranslateCxt;
   bool               existAggFunc;
   bool               existCol;
-  bool               existNonstdFunc;
+  bool               existIndefiniteRowsFunc;
   int32_t            selectFuncNum;
   bool               existOtherAggFunc;
 } CheckAggColCoexistCxt;
@@ -1113,7 +1114,7 @@ static EDealRes doCheckAggColCoexist(SNode* pNode, void* pContext) {
     return DEAL_RES_IGNORE_CHILD;
   }
   if (isIndefiniteRowsFunc(pNode)) {
-    pCxt->existNonstdFunc = true;
+    pCxt->existIndefiniteRowsFunc = true;
     return DEAL_RES_IGNORE_CHILD;
   }
   if (isScanPseudoColumnFunc(pNode) || QUERY_NODE_COLUMN == nodeType(pNode)) {
@@ -1129,7 +1130,7 @@ static int32_t checkAggColCoexist(STranslateContext* pCxt, SSelectStmt* pSelect)
   CheckAggColCoexistCxt cxt = {.pTranslateCxt = pCxt,
                                .existAggFunc = false,
                                .existCol = false,
-                               .existNonstdFunc = false,
+                               .existIndefiniteRowsFunc = false,
                                .selectFuncNum = 0,
                                .existOtherAggFunc = false};
   nodesWalkExprs(pSelect->pProjectionList, doCheckAggColCoexist, &cxt);
@@ -1142,7 +1143,7 @@ static int32_t checkAggColCoexist(STranslateContext* pCxt, SSelectStmt* pSelect)
   if ((cxt.selectFuncNum > 1 || cxt.existAggFunc || NULL != pSelect->pWindow) && cxt.existCol) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_SINGLE_GROUP);
   }
-  if (cxt.existNonstdFunc && cxt.existCol) {
+  if (cxt.existIndefiniteRowsFunc && cxt.existCol) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_NOT_ALLOWED_FUNC);
   }
   return TSDB_CODE_SUCCESS;
@@ -4079,9 +4080,7 @@ static int32_t addValToKVRow(STranslateContext* pCxt, SValueNode* pVal, const SS
     return parseJsontoTagData(pVal->literal, pBuilder, &pCxt->msgBuf, pSchema->colId);
   }
-  if (pVal->node.resType.type == TSDB_DATA_TYPE_NULL) {
-    // todo
-  } else {
+  if (pVal->node.resType.type != TSDB_DATA_TYPE_NULL) {
     tdAddColToKVRow(pBuilder, pSchema->colId, nodesGetValueFromNode(pVal),
                     IS_VAR_DATA_TYPE(pSchema->type) ? varDataTLen(pVal->datum.p) : TYPE_BYTES[pSchema->type]);
   }
@@ -4097,16 +4096,17 @@ static int32_t createValueFromFunction(STranslateContext* pCxt, SFunctionNode* p
   return code;
 }
-static SDataType schemaToDataType(SSchema* pSchema) {
-  SDataType dt = {.type = pSchema->type, .bytes = pSchema->bytes, .precision = 0, .scale = 0};
+static SDataType schemaToDataType(uint8_t precision, SSchema* pSchema) {
+  SDataType dt = {.type = pSchema->type, .bytes = pSchema->bytes, .precision = precision, .scale = 0};
   return dt;
 }
-static int32_t translateTagVal(STranslateContext* pCxt, SSchema* pSchema, SNode* pNode, SValueNode** pVal) {
+static int32_t translateTagVal(STranslateContext* pCxt, uint8_t precision, SSchema* pSchema, SNode* pNode,
+                               SValueNode** pVal) {
   if (QUERY_NODE_FUNCTION == nodeType(pNode)) {
     return createValueFromFunction(pCxt, (SFunctionNode*)pNode, pVal);
   } else if (QUERY_NODE_VALUE == nodeType(pNode)) {
-    return (DEAL_RES_ERROR == translateValueImpl(pCxt, (SValueNode*)pNode, schemaToDataType(pSchema))
+    return (DEAL_RES_ERROR == translateValueImpl(pCxt, (SValueNode*)pNode, schemaToDataType(precision, pSchema))
                ? pCxt->errCode
                : TSDB_CODE_SUCCESS);
   } else {
@@ -4137,7 +4137,7 @@ static int32_t buildKVRowForBindTags(STranslateContext* pCxt, SCreateSubTableCla
       return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_TAG_NAME, pCol->colName);
     }
     SValueNode* pVal = NULL;
-    int32_t code = translateTagVal(pCxt, pSchema, pNode, &pVal);
+    int32_t     code = translateTagVal(pCxt, pSuperTableMeta->tableInfo.precision, pSchema, pNode, &pVal);
     if (TSDB_CODE_SUCCESS == code) {
       if (NULL == pVal) {
         pVal = (SValueNode*)pNode;
@@ -4167,7 +4167,7 @@ static int32_t buildKVRowForAllTags(STranslateContext* pCxt, SCreateSubTableClau
   int32_t index = 0;
   FOREACH(pNode, pStmt->pValsOfTags) {
     SValueNode* pVal = NULL;
-    int32_t code = translateTagVal(pCxt, pTagSchema + index, pNode, &pVal);
+    int32_t     code = translateTagVal(pCxt, pSuperTableMeta->tableInfo.precision, pTagSchema + index, pNode, &pVal);
     if (TSDB_CODE_SUCCESS == code) {
       if (NULL == pVal) {
        pVal = (SValueNode*)pNode;
@@ -4447,19 +4447,21 @@ static int32_t buildUpdateTagValReq(STranslateContext* pCxt, SAlterTableStmt* pS
     return TSDB_CODE_OUT_OF_MEMORY;
   }
-  if (DEAL_RES_ERROR == translateValueImpl(pCxt, pStmt->pVal, schemaToDataType(pSchema))) {
+  if (DEAL_RES_ERROR ==
+      translateValueImpl(pCxt, pStmt->pVal, schemaToDataType(pTableMeta->tableInfo.precision, pSchema))) {
     return pCxt->errCode;
   }
   pReq->isNull = (TSDB_DATA_TYPE_NULL == pStmt->pVal->node.resType.type);
-  if(pStmt->pVal->node.resType.type == TSDB_DATA_TYPE_JSON){
+  if (pStmt->pVal->node.resType.type == TSDB_DATA_TYPE_JSON) {
     SKVRowBuilder kvRowBuilder = {0};
     int32_t       code = tdInitKVRowBuilder(&kvRowBuilder);
     if (TSDB_CODE_SUCCESS != code) {
       return TSDB_CODE_OUT_OF_MEMORY;
     }
-    if (pStmt->pVal->literal && strlen(pStmt->pVal->literal) > (TSDB_MAX_JSON_TAG_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) {
+    if (pStmt->pVal->literal &&
+        strlen(pStmt->pVal->literal) > (TSDB_MAX_JSON_TAG_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) {
      return buildSyntaxErrMsg(&pCxt->msgBuf, "json string too long than 4095", pStmt->pVal->literal);
    }
@@ -4477,7 +4479,7 @@ static int32_t buildUpdateTagValReq(STranslateContext* pCxt, SAlterTableStmt* pS
     pReq->pTagVal = row;
     pStmt->pVal->datum.p = row;  // for free
     tdDestroyKVRowBuilder(&kvRowBuilder);
-  }else{
+  } else {
     pReq->nTagVal = pStmt->pVal->node.resType.bytes;
     if (TSDB_DATA_TYPE_NCHAR == pStmt->pVal->node.resType.type) {
       pReq->nTagVal = pReq->nTagVal * TSDB_NCHAR_SIZE;
@@ -4688,16 +4690,16 @@ static int32_t rewriteAlterTable(STranslateContext* pCxt, SQuery* pQuery) {
     return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_COL_JSON);
   }
   if (getNumOfTags(pTableMeta) == 1 && pStmt->alterType == TSDB_ALTER_TABLE_DROP_TAG) {
-    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_ALTER_TABLE, "can not drop tag if there is only one tag");
+    return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_ALTER_TABLE,
+                                "can not drop tag if there is only one tag");
   }
   if (TSDB_SUPER_TABLE == pTableMeta->tableType) {
     SSchema* pTagsSchema = getTableTagSchema(pTableMeta);
     if (getNumOfTags(pTableMeta) == 1 && pTagsSchema->type == TSDB_DATA_TYPE_JSON &&
-        (pStmt->alterType == TSDB_ALTER_TABLE_ADD_TAG ||
-         pStmt->alterType == TSDB_ALTER_TABLE_DROP_TAG ||
-         pStmt->alterType == TSDB_ALTER_TABLE_UPDATE_TAG_BYTES)) {
+        (pStmt->alterType == TSDB_ALTER_TABLE_ADD_TAG || pStmt->alterType == TSDB_ALTER_TABLE_DROP_TAG ||
+         pStmt->alterType == TSDB_ALTER_TABLE_UPDATE_TAG_BYTES)) {
       return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_ONLY_ONE_JSON_TAG);
     }
     return TSDB_CODE_SUCCESS;


@@ -154,6 +154,7 @@ void generateTestST1(MockCatalogService* mcs) {
   builder.done();
   mcs->createSubTable("test", "st1", "st1s1", 1);
   mcs->createSubTable("test", "st1", "st1s2", 2);
+  mcs->createSubTable("test", "st1", "st1s3", 1);
 }
 }  // namespace


@@ -44,3 +44,9 @@ TEST_F(PlanJoinTest, withWhere) {
   run("SELECT t1.c1, t2.c1 FROM st1s1 t1 JOIN st1s2 t2 ON t1.ts = t2.ts "
       "WHERE t1.c1 > t2.c1 AND t1.c2 = 'abc' AND t2.c2 = 'qwe'");
 }
+TEST_F(PlanJoinTest, multiJoin) {
+  useDb("root", "test");
+  run("SELECT t1.c1, t2.c1 FROM st1s1 t1 JOIN st1s2 t2 ON t1.ts = t2.ts JOIN st1s3 t3 ON t1.ts = t3.ts");
+}


@@ -92,6 +92,8 @@
 #./test.sh -f tsim/stable/show.sim
 ./test.sh -f tsim/stable/values.sim
 ./test.sh -f tsim/stable/vnode3.sim
+./test.sh -f tsim/stable/column_add.sim
+./test.sh -f tsim/stable/column_drop.sim
 # --- for multi process mode


@@ -1,141 +0,0 @@ (file deleted; all lines below were removed)
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== prepare stb and ctb
sql create database db vgroups 1
sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
sql create table db.ctb using db.stb tags(101, 102, "103")
sql insert into db.ctb values(now, 1, "2")
sql show db.stables
if $rows != 1 then
return -1
endi
if $data[0][0] != stb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != 3 then
return -1
endi
if $data[0][6] != abd then
return -1
endi
sql show db.tables
if $rows != 1 then
return -1
endi
if $data[0][0] != ctb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != stb then
return -1
endi
if $data[0][6] != 2 then
return -1
endi
if $data[0][9] != CHILD_TABLE then
return -1
endi
sql select * from db.stb
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != 101 then
return -1
endi
print ========== add column c3
sql alter table db.stb add column c3 int
sql show db.stables
if $data[0][3] != 4 then
return -1
endi
sql show db.tables
if $data[0][3] != 4 then
return -1
endi
sql select * from db.stb
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != 101 then
return -1
endi
sql insert into db.ctb values(now+1s, 1, 2, 3)
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
if $rows != 2 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != 101 then
return -1
endi
if $data[1][1] != 1 then
return -1
endi
if $data[2][2] != 2 then
return -1
endi
if $data[1][3] != 3 then
return -1
endi
if $data[1][4] != 101 then
return -1
endi
print ========== add column c4
sql alter table db.stb add column c4 bigint
sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
sql select * from db.stb
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]


@@ -0,0 +1,303 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== prepare stb and ctb
sql create database db vgroups 1
sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
sql create table db.ctb using db.stb tags(101, 102, "103")
sql insert into db.ctb values(now, 1, "2")
sql show db.stables
if $rows != 1 then
return -1
endi
if $data[0][0] != stb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != 3 then
return -1
endi
if $data[0][6] != abd then
return -1
endi
sql show db.tables
if $rows != 1 then
return -1
endi
if $data[0][0] != ctb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != stb then
return -1
endi
if $data[0][6] != 2 then
return -1
endi
if $data[0][9] != CHILD_TABLE then
return -1
endi
sql select * from db.stb
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != 101 then
return -1
endi
sql_error alter table db.stb add column ts int
sql_error alter table db.stb add column t1 int
sql_error alter table db.stb add column t2 int
sql_error alter table db.stb add column t3 int
sql_error alter table db.stb add column c1 int
print ========== step1 add column c3
sql alter table db.stb add column c3 int
sql show db.stables
if $data[0][3] != 4 then
return -1
endi
sql show db.tables
if $data[0][3] != 4 then
return -1
endi
sql select * from db.stb
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != 101 then
return -1
endi
sql insert into db.ctb values(now+1s, 1, 2, 3)
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
if $rows != 2 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != 101 then
return -1
endi
if $data[1][1] != 1 then
return -1
endi
if $data[1][2] != 2 then
return -1
endi
if $data[1][3] != 3 then
return -1
endi
if $data[1][4] != 101 then
return -1
endi
print ========== step2 add column c4
sql alter table db.stb add column c4 bigint
sql select * from db.stb
sql insert into db.ctb values(now+2s, 1, 2, 3, 4)
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
if $rows != 3 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != NULL then
return -1
endi
if $data[0][4] != NULL then
return -1
endi
if $data[0][5] != 101 then
return -1
endi
if $data[1][1] != 1 then
return -1
endi
if $data[1][2] != 2 then
return -1
endi
if $data[1][3] != 3 then
return -1
endi
if $data[1][4] != NULL then
return -1
endi
if $data[1][5] != 101 then
return -1
endi
if $data[2][1] != 1 then
return -1
endi
if $data[2][2] != 2 then
return -1
endi
if $data[2][3] != 3 then
return -1
endi
if $data[2][4] != 4 then
return -1
endi
if $data[2][5] != 101 then
return -1
endi
print ========== step3 add column c5
sql alter table db.stb add column c5 int
sql insert into db.ctb values(now+3s, 1, 2, 3, 4, 5)
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
print $data[2][0] $data[2][1] $data[2][2] $data[2][3] $data[2][4] $data[2][5] $data[2][6]
if $rows != 4 then
return -1
endi
if $data[2][1] != 1 then
return -1
endi
if $data[2][2] != 2 then
return -1
endi
if $data[2][3] != 3 then
return -1
endi
if $data[2][4] != 4 then
return -1
endi
if $data[2][5] != NULL then
return -1
endi
if $data[2][6] != 101 then
return -1
endi
if $data[3][1] != 1 then
return -1
endi
if $data[3][2] != 2 then
return -1
endi
if $data[3][3] != 3 then
return -1
endi
if $data[3][4] != 4 then
return -1
endi
if $data[3][5] != 5 then
return -1
endi
if $data[3][6] != 101 then
return -1
endi
print ========== step4 add column c6
sql alter table db.stb add column c6 int
sql insert into db.ctb values(now+4s, 1, 2, 3, 4, 5, 6)
sql select * from db.stb
if $rows != 5 then
return -1
endi
if $data[3][1] != 1 then
return -1
endi
if $data[3][2] != 2 then
return -1
endi
if $data[3][3] != 3 then
return -1
endi
if $data[3][4] != 4 then
return -1
endi
if $data[3][5] != 5 then
return -1
endi
if $data[3][6] != NULL then
return -1
endi
if $data[3][7] != 101 then
return -1
endi
if $data[4][1] != 1 then
return -1
endi
if $data[4][2] != 2 then
return -1
endi
if $data[4][3] != 3 then
return -1
endi
if $data[4][4] != 4 then
return -1
endi
if $data[4][5] != 5 then
return -1
endi
if $data[4][6] != 6 then
return -1
endi
if $data[4][7] != 101 then
return -1
endi
print ========== step5 describe
sql describe db.ctb
if $rows != 10 then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT


@@ -0,0 +1,209 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== prepare stb and ctb
sql create database db vgroups 1
sql create table db.stb (ts timestamp, c1 int, c2 binary(4), c3 int, c4 bigint, c5 int, c6 int) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
sql create table db.ctb using db.stb tags(101, 102, "103")
sql insert into db.ctb values(now, 1, "2", 3, 4, 5, 6)
sql show db.stables
if $rows != 1 then
return -1
endi
if $data[0][0] != stb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 7 then
return -1
endi
if $data[0][4] != 3 then
return -1
endi
if $data[0][6] != abd then
return -1
endi
sql show db.tables
if $rows != 1 then
return -1
endi
if $data[0][0] != ctb then
return -1
endi
if $data[0][1] != db then
return -1
endi
if $data[0][3] != 7 then
return -1
endi
if $data[0][4] != stb then
return -1
endi
if $data[0][6] != 2 then
return -1
endi
if $data[0][9] != CHILD_TABLE then
return -1
endi
sql select * from db.stb
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != 4 then
return -1
endi
if $data[0][5] != 5 then
return -1
endi
if $data[0][6] != 6 then
return -1
endi
if $data[0][7] != 101 then
return -1
endi
sql_error alter table db.stb drop column ts
sql_error alter table db.stb drop column t1
sql_error alter table db.stb drop column t2
sql_error alter table db.stb drop column t3
sql_error alter table db.stb drop column c9
print ========== step1 drop column c6
sql alter table db.stb drop column c6
sql show db.stables
if $data[0][3] != 6 then
return -1
endi
sql show db.tables
if $data[0][3] != 6 then
return -1
endi
sql select * from db.stb
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
if $rows != 1 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 2 then
return -1
endi
if $data[0][3] != 3 then
return -1
endi
if $data[0][4] != 4 then
return -1
endi
if $data[0][5] != 5 then
return -1
endi
if $data[0][6] != 101 then
return -1
endi
sql insert into db.ctb values(now+1s, 1, 2, 3, 4, 5)
sql select * from db.stb
if $rows != 2 then
return -1
endi
print ========== step2 drop column c5
sql alter table db.stb drop column c5
sql insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
sql insert into db.ctb values(now+3s, 1, 2, 3, 4)
sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
sql select * from db.stb
if $rows != 4 then
return -1
endi
print ========== step3 drop column c4
sql alter table db.stb drop column c4
sql select * from db.stb
sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4, 5)
sql_error insert into db.ctb values(now+2s, 1, 2, 3, 4)
sql insert into db.ctb values(now+3s, 1, 2, 3)
sql select * from db.stb
if $rows != 5 then
return -1
endi
print ========== step4 add column c4
sql alter table db.stb add column c4 binary(13)
sql insert into db.ctb values(now+4s, 1, 2, 3, '4')
sql select * from db.stb
if $rows != 6 then
return -1
endi
if $data[1][4] != NULL then
return -1
endi
if $data[2][4] != NULL then
return -1
endi
if $data[3][4] != NULL then
return -1
endi
if $data[5][4] != 4 then
return -1
endi
print ========== step5 describe
sql describe db.ctb
if $rows != 8 then
return -1
endi
if $data[0][0] != ts then
return -1
endi
if $data[1][0] != c1 then
return -1
endi
if $data[2][0] != c2 then
return -1
endi
if $data[3][0] != c3 then
return -1
endi
if $data[4][0] != c4 then
return -1
endi
if $data[4][1] != VARCHAR then
return -1
endi
if $data[4][2] != 13 then
return -1
endi
if $data[5][0] != t1 then
return -1
endi
if $data[6][0] != t2 then
return -1
endi
if $data[7][0] != t3 then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT


@@ -0,0 +1,78 @@
system sh/stop_dnodes.sh
system sh/deploy.sh -n dnode1 -i 1
system sh/exec.sh -n dnode1 -s start
sql connect
print ========== prepare stb and ctb
sql create database db vgroups 1
sql create table db.stb (ts timestamp, c1 int, c2 binary(4)) tags(t1 int, t2 float, t3 binary(16)) comment "abd"
sql create table db.ctb using db.stb tags(101, 102, "103")
sql insert into db.ctb values(now, 1, "1234")
sql_error alter table db.stb MODIFY column c2 binary(3)
sql_error alter table db.stb MODIFY column c2 int
sql_error alter table db.stb MODIFY column c1 int
sql_error alter table db.stb MODIFY column ts int
sql_error insert into db.ctb values(now, 1, "12345")
print ========== step1 modify column
sql alter table db.stb MODIFY column c2 binary(5)
sql insert into db.ctb values(now, 1, "12345")
sql select * from db.stb
print $data[0][0] $data[0][1] $data[0][2] $data[0][3] $data[0][4] $data[0][5] $data[0][6]
print $data[1][0] $data[1][1] $data[1][2] $data[1][3] $data[1][4] $data[1][5] $data[1][6]
if $rows != 2 then
return -1
endi
if $data[0][1] != 1 then
return -1
endi
if $data[0][2] != 1234 then
return -1
endi
if $data[0][3] != 101 then
return -1
endi
if $data[1][1] != 1 then
return -1
endi
if $data[1][2] != 12345 then
return -1
endi
if $data[1][3] != 101 then
return -1
endi
print ========== step2 describe
sql describe db.ctb
if $rows != 7 then
return -1
endi
if $data[0][0] != ts then
return -1
endi
if $data[1][0] != c1 then
return -1
endi
if $data[2][0] != c2 then
return -1
endi
if $data[2][1] != VARCHAR then
return -1
endi
if $data[2][2] != 5 then
return -1
endi
if $data[3][0] != t1 then
return -1
endi
if $data[4][0] != t2 then
return -1
endi
if $data[5][0] != t3 then
return -1
endi
system sh/exec.sh -n dnode1 -s stop -x SIGINT


@@ -134,7 +134,7 @@ class TDTestCase:
    def create_udf_function(self):
        for i in range(5):
            # create scalar functions
            tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")

@@ -644,16 +644,12 @@ class TDTestCase:
        self.create_udf_function()
        self.basic_udf_query()
        self.loop_kill_udfd()
        self.unexpected_create()
        tdSql.execute(" drop function udf1 ")
        tdSql.execute(" drop function udf2 ")
        self.create_udf_function()
        time.sleep(2)
        self.basic_udf_query()
        self.test_function_name()
        self.restart_taosd_query_udf()

    def stop(self):


@@ -0,0 +1,654 @@
from distutils.log import error
import taos
import sys
import time
import os
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
import subprocess
class TDTestCase:
    def init(self, conn, logSql):
        tdLog.debug(f"start to execute {__file__}")
        tdSql.init(conn.cursor(), logSql)
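    # getBuildPath locates the build root by searching the project tree for the
    # taosd binary (skipping the packaging tree) and trimming the trailing /build/bin.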
    def getBuildPath(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]

        for root, dirs, files in os.walk(projPath):
            if ("taosd" in files):
                rootRealPath = os.path.dirname(os.path.realpath(root))
                if ("packaging" not in rootRealPath):
                    buildPath = root[:len(root) - len("/build/bin")]
                    break
        return buildPath
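    # prepare_udf_so copies the compiled libudf1.so and libudf2.so from the build
    # tree into /tmp/udf/ so that CREATE FUNCTION can load them by absolute path.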
    def prepare_udf_so(self):
        selfPath = os.path.dirname(os.path.realpath(__file__))

        if ("community" in selfPath):
            projPath = selfPath[:selfPath.find("community")]
        else:
            projPath = selfPath[:selfPath.find("tests")]
        print(projPath)

        libudf1 = subprocess.Popen('find %s -name "libudf1.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
        libudf2 = subprocess.Popen('find %s -name "libudf2.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
        os.system("mkdir /tmp/udf/")
        os.system("cp %s /tmp/udf/ "%libudf1.replace("\n" ,""))
        os.system("cp %s /tmp/udf/ "%libudf2.replace("\n" ,""))
    def prepare_data(self):
        tdSql.execute("drop database if exists db ")
        tdSql.execute("create database if not exists db days 300")
        tdSql.execute("use db")
        tdSql.execute(
            '''create table stb1
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
            tags (t1 int)
            '''
        )
        tdSql.execute(
            '''
            create table t1
            (ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
            '''
        )
        for i in range(4):
            tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')

        for i in range(9):
            tdSql.execute(
                f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )
            tdSql.execute(
                f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
            )
        tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
        tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
        tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
        tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
        tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
        tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
        tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
        tdSql.execute(
            f'''insert into t1 values
            ( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            ( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
            ( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
            ( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
            ( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
            ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            ( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
            ( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
            ( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
            ( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
            ( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
            ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
            '''
        )
        tdSql.execute("create table tb (ts timestamp , num1 int , num2 int, num3 double , num4 binary(30))")
        tdSql.execute(
            f'''insert into tb values
            ( '2020-04-21 01:01:01.000', NULL, 1, 1, "binary1" )
            ( '2020-10-21 01:01:01.000', 1, 1, 1.11, "binary1" )
            ( '2020-12-31 01:01:01.000', 2, 22222, 22, "binary1" )
            ( '2021-01-01 01:01:06.000', 3, 33333, 33, "binary1" )
            ( '2021-05-07 01:01:10.000', 4, 44444, 44, "binary1" )
            ( '2021-07-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
            ( '2021-09-30 01:01:16.000', 5, 55555, 55, "binary1" )
            ( '2022-02-01 01:01:20.000', 6, 66666, 66, "binary1" )
            ( '2022-10-28 01:01:26.000', 0, 00000, 00, "binary1" )
            ( '2022-12-01 01:01:30.000', 8, -88888, -88, "binary1" )
            ( '2022-12-31 01:01:36.000', 9, -9999999, -99, "binary1" )
            ( '2023-02-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
            '''
        )

        # udf functions with join
        ts_start = 1652517451000
        tdSql.execute("create stable st (ts timestamp , c1 int , c2 int ,c3 double ,c4 double ) tags(ind int)")
        tdSql.execute("create table sub1 using st tags(1)")
        tdSql.execute("create table sub2 using st tags(2)")

        for i in range(10):
            ts = ts_start + i *1000
            tdSql.execute(" insert into sub1 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
            tdSql.execute(" insert into sub2 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
    def create_udf_function(self):
        for i in range(5):
            # create scalar functions
            tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
            # create aggregate functions
            tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
            functions = tdSql.getResult("show functions")
            function_nums = len(functions)
            if function_nums == 2:
                tdLog.info("create two udf functions success ")
            # drop functions
            tdSql.execute("drop function udf1")
            tdSql.execute("drop function udf2")
            functions = tdSql.getResult("show functions")
            for function in functions:
                if "udf1" in function[0] or "udf2" in function[0]:
                    tdLog.info("drop udf functions failed ")
                    tdLog.exit("drop udf functions failed")
                tdLog.info("drop two udf functions success ")
        # create scalar functions
        tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
        # create aggregate functions
        tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
        functions = tdSql.getResult("show functions")
        function_nums = len(functions)
        if function_nums == 2:
            tdLog.info("create two udf functions success ")
    def basic_udf_query(self):
        # scalar functions
        tdSql.execute("use db ")
        tdSql.query("select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb")
        tdSql.checkData(0,0,None)
        tdSql.checkData(0,1,None)
        tdSql.checkData(0,2,1)
        tdSql.checkData(0,3,88)
        tdSql.checkData(0,4,1.000000000)
        tdSql.checkData(0,5,88)
        tdSql.checkData(0,6,"binary1")
        tdSql.checkData(0,7,88)
        tdSql.checkData(3,0,3)
        tdSql.checkData(3,1,88)
        tdSql.checkData(3,2,33333)
        tdSql.checkData(3,3,88)
        tdSql.checkData(3,4,33.000000000)
        tdSql.checkData(3,5,88)
        tdSql.checkData(3,6,"binary1")
        tdSql.checkData(3,7,88)
        tdSql.checkData(11,0,None)
        tdSql.checkData(11,1,None)
        tdSql.checkData(11,2,None)
        tdSql.checkData(11,3,None)
        tdSql.checkData(11,4,None)
        tdSql.checkData(11,5,None)
        tdSql.checkData(11,6,"binary1")
        tdSql.checkData(11,7,88)
        tdSql.query("select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1")
        tdSql.checkData(0,0,None)
        tdSql.checkData(0,1,None)
        tdSql.checkData(0,2,None)
        tdSql.checkData(0,3,None)
        tdSql.checkData(0,4,None)
        tdSql.checkData(0,5,None)
        tdSql.checkData(0,6,None)
        tdSql.checkData(0,7,None)
        tdSql.checkData(20,0,8)
        tdSql.checkData(20,1,88)
        tdSql.checkData(20,2,88888)
        tdSql.checkData(20,3,88)
        tdSql.checkData(20,4,888)
        tdSql.checkData(20,5,88)
        tdSql.checkData(20,6,88)
        tdSql.checkData(20,7,88)

        # aggregate functions
        tdSql.query("select udf2(num1) ,udf2(num2), udf2(num3) from tb")
        tdSql.checkData(0,0,15.362291496)
        tdSql.checkData(0,1,10000949.553189287)
        tdSql.checkData(0,2,168.633425216)

        # Arithmetic compute
        tdSql.query("select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb")
        tdSql.checkData(0,0,115.362291496)
        tdSql.checkData(0,1,10000849.553189287)
        tdSql.checkData(0,2,16863.342521576)
        tdSql.checkData(0,3,1.686334252)
        tdSql.query("select udf2(c1) ,udf2(c6) from stb1 ")
        tdSql.checkData(0,0,25.514701644)
        tdSql.checkData(0,1,265.247614504)
        tdSql.query("select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 ")
        tdSql.checkData(0,0,125.514701644)
        tdSql.checkData(0,1,165.247614504)
        tdSql.checkData(0,2,2551.470164435)
        tdSql.checkData(0,3,2.652476145)

        # # bug for crash when query sub table
        tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1")
        tdSql.checkData(0,0,378.215547010)
        tdSql.checkData(0,1,353.808067460)
        tdSql.checkData(0,2,2114.237451187)
        tdSql.checkData(0,3,2.125468151)
        tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 ")
        tdSql.checkData(0,0,490.358032462)
        tdSql.checkData(0,1,400.460106627)
        tdSql.checkData(0,2,2551.470164435)
        tdSql.checkData(0,3,2.652476145)

        # regular table with aggregate functions
        tdSql.error("select udf1(num1) , count(num1) from tb;")
        tdSql.error("select udf1(num1) , avg(num1) from tb;")
        tdSql.error("select udf1(num1) , twa(num1) from tb;")
        tdSql.error("select udf1(num1) , irate(num1) from tb;")
        tdSql.error("select udf1(num1) , sum(num1) from tb;")
        tdSql.error("select udf1(num1) , stddev(num1) from tb;")
        tdSql.error("select udf1(num1) , mode(num1) from tb;")
        tdSql.error("select udf1(num1) , HYPERLOGLOG(num1) from tb;")
        # stable
        tdSql.error("select udf1(c1) , count(c1) from stb1;")
        tdSql.error("select udf1(c1) , avg(c1) from stb1;")
        tdSql.error("select udf1(c1) , twa(c1) from stb1;")
        tdSql.error("select udf1(c1) , irate(c1) from stb1;")
        tdSql.error("select udf1(c1) , sum(c1) from stb1;")
        tdSql.error("select udf1(c1) , stddev(c1) from stb1;")
        tdSql.error("select udf1(c1) , mode(c1) from stb1;")
        tdSql.error("select udf1(c1) , HYPERLOGLOG(c1) from stb1;")

        # regular table with select functions
        tdSql.query("select udf1(num1) , max(num1) from tb;")
        tdSql.checkRows(1)
        tdSql.query("select floor(num1) , max(num1) from tb;")
        tdSql.checkRows(1)
        tdSql.query("select udf1(num1) , min(num1) from tb;")
        tdSql.checkRows(1)
        tdSql.query("select ceil(num1) , min(num1) from tb;")
        tdSql.checkRows(1)
        tdSql.error("select udf1(num1) , first(num1) from tb;")
        tdSql.error("select abs(num1) , first(num1) from tb;")
        tdSql.error("select udf1(num1) , last(num1) from tb;")
        tdSql.error("select round(num1) , last(num1) from tb;")
        tdSql.query("select udf1(num1) , top(num1,1) from tb;")
        tdSql.checkRows(1)
        tdSql.query("select udf1(num1) , bottom(num1,1) from tb;")
        tdSql.checkRows(1)
        tdSql.error("select udf1(num1) , last_row(num1) from tb;")
        tdSql.error("select round(num1) , last_row(num1) from tb;")
        # stable
        tdSql.query("select udf1(c1) , max(c1) from stb1;")
        tdSql.checkRows(1)
        tdSql.query("select abs(c1) , max(c1) from stb1;")
        tdSql.checkRows(1)
        tdSql.query("select udf1(c1) , min(c1) from stb1;")
        tdSql.checkRows(1)
        tdSql.query("select floor(c1) , min(c1) from stb1;")
        tdSql.checkRows(1)
        tdSql.error("select udf1(c1) , first(c1) from stb1;")
        tdSql.error("select udf1(c1) , last(c1) from stb1;")
        tdSql.query("select udf1(c1) , top(c1 ,1) from stb1;")
        tdSql.checkRows(1)
        tdSql.query("select abs(c1) , top(c1 ,1) from stb1;")
        tdSql.checkRows(1)
        tdSql.query("select udf1(c1) , bottom(c1,1) from stb1;")
        tdSql.checkRows(1)
        tdSql.query("select ceil(c1) , bottom(c1,1) from stb1;")
        tdSql.checkRows(1)
        tdSql.error("select udf1(c1) , last_row(c1) from stb1;")
        tdSql.error("select ceil(c1) , last_row(c1) from stb1;")

        # regular table with compute functions
        tdSql.query("select udf1(num1) , abs(num1) from tb;")
        tdSql.checkRows(12)
        tdSql.query("select floor(num1) , abs(num1) from tb;")
        tdSql.checkRows(12)

        # # bug need fix
        #tdSql.query("select udf1(num1) , csum(num1) from tb;")
        #tdSql.checkRows(9)
        #tdSql.query("select ceil(num1) , csum(num1) from tb;")
        #tdSql.checkRows(9)
        #tdSql.query("select udf1(c1) , csum(c1) from stb1;")
        #tdSql.checkRows(22)
        #tdSql.query("select floor(c1) , csum(c1) from stb1;")
        #tdSql.checkRows(22)

        # stable with compute functions
        tdSql.query("select udf1(c1) , abs(c1) from stb1;")
        tdSql.checkRows(25)
        tdSql.query("select abs(c1) , ceil(c1) from stb1;")
        tdSql.checkRows(25)

        # nest query
        tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;")
        tdSql.checkRows(25)
        tdSql.checkData(0,0,None)
        tdSql.checkData(0,1,None)
        tdSql.checkData(1,0,88)
        tdSql.checkData(1,1,8)
        tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;")
        tdSql.checkRows(13)
        tdSql.checkData(0,0,88)
        tdSql.checkData(0,1,8)
        tdSql.checkData(1,0,88)
        tdSql.checkData(1,1,7)

        # bug fix for crash
        # order by udf function result
        for _ in range(50):
            tdSql.query("select udf2(c1) from stb1 group by 1-udf1(c1)")
            print(tdSql.queryResult)

        # udf functions with filter
        tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;")
        tdSql.checkRows(3)
        tdSql.checkData(0,0,None)
        tdSql.checkData(0,1,None)
        tdSql.query("select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts")
        tdSql.checkRows(3)
        tdSql.checkData(0,0,9)
        tdSql.checkData(0,1,88)
        tdSql.checkData(0,2,-99.990000000)
        tdSql.checkData(0,3,88)
        tdSql.query("select sub1.c1, sub2.c2 from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
        tdSql.checkData(0,0,0)
        tdSql.checkData(0,1,0)
        tdSql.checkData(1,0,1)
        tdSql.checkData(1,1,10)
        tdSql.query("select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
        tdSql.checkData(0,0,88)
        tdSql.checkData(0,1,88)
        tdSql.checkData(1,0,88)
        tdSql.checkData(1,1,88)
        tdSql.query("select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
        tdSql.checkData(0,0,0)
        tdSql.checkData(0,1,88)
        tdSql.checkData(0,2,0)
        tdSql.checkData(0,3,88)
        tdSql.checkData(1,0,1)
        tdSql.checkData(1,1,88)
        tdSql.checkData(1,2,10)
        tdSql.checkData(1,3,88)
        tdSql.query("select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
        tdSql.checkData(0,0,16.881943016)
        tdSql.checkData(0,1,168.819430161)
        tdSql.error("select sub1.c1 , udf2(sub1.c1), sub2.c2 ,udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")

        # udf functions with group by
        tdSql.query("select udf1(c1) from ct1 group by c1")
        tdSql.checkRows(10)
        tdSql.query("select udf1(c1) from stb1 group by c1")
        tdSql.checkRows(11)
        tdSql.query("select c1,c2, udf1(c1,c2) from ct1 group by c1,c2")
        tdSql.checkRows(10)
        tdSql.query("select c1,c2, udf1(c1,c2) from stb1 group by c1,c2")
        tdSql.checkRows(11)
        tdSql.query("select udf2(c1) from ct1 group by c1")
        tdSql.checkRows(10)
        tdSql.query("select udf2(c1) from stb1 group by c1")
        tdSql.checkRows(11)
        tdSql.query("select c1,c2, udf2(c1,c6) from ct1 group by c1,c2")
        tdSql.checkRows(10)
        tdSql.query("select c1,c2, udf2(c1,c6) from stb1 group by c1,c2")
        tdSql.checkRows(11)
        tdSql.query("select udf2(c1) from stb1 group by udf1(c1)")
        tdSql.checkRows(2)
        tdSql.query("select udf2(c1) from stb1 group by floor(c1)")
        tdSql.checkRows(11)

        # udf mix with order by
        tdSql.query("select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)")
        tdSql.checkRows(11)
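    # multi_cols_udf feeds several columns into a single UDF call at a time,
    # including columns drawn from a two-table join.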
    def multi_cols_udf(self):
        tdSql.query("select num1,num2,num3,udf1(num1,num2,num3) from tb")
        tdSql.checkData(0,0,None)
        tdSql.checkData(0,1,1)
        tdSql.checkData(0,2,1.000000000)
        tdSql.checkData(0,3,None)
        tdSql.checkData(1,0,1)
        tdSql.checkData(1,1,1)
        tdSql.checkData(1,2,1.110000000)
        tdSql.checkData(1,3,88)
        tdSql.query("select c1,c6,udf1(c1,c6) from stb1 order by ts")
        tdSql.checkData(1,0,8)
        tdSql.checkData(1,1,88.880000000)
        tdSql.checkData(1,2,88)
        tdSql.query("select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;")
        tdSql.checkRows(22)
        tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
        tdSql.checkData(0,0,169.661427555)
        tdSql.checkData(0,1,169.661427555)
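    # try_query_sql returns the scalar (udf1) and aggregate (udf2) statement
    # lists that unexpected_create replays after re-creating the functions.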
    def try_query_sql(self):
        udf1_sqls = [
            "select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb" ,
            "select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1" ,
            "select udf1(num1) , max(num1) from tb;" ,
            "select udf1(num1) , min(num1) from tb;" ,
            #"select udf1(num1) , top(num1,1) from tb;" ,
            #"select udf1(num1) , bottom(num1,1) from tb;" ,
            "select udf1(c1) , max(c1) from stb1;" ,
            "select udf1(c1) , min(c1) from stb1;" ,
            #"select udf1(c1) , top(c1 ,1) from stb1;" ,
            #"select udf1(c1) , bottom(c1,1) from stb1;" ,
            "select udf1(num1) , abs(num1) from tb;" ,
            #"select udf1(num1) , csum(num1) from tb;" ,
            #"select udf1(c1) , csum(c1) from stb1;" ,
            "select udf1(c1) , abs(c1) from stb1;" ,
            "select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;" ,
            "select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;" ,
            "select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;" ,
            "select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts" ,
            "select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
            "select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
            "select udf1(c1) from ct1 group by c1" ,
            "select udf1(c1) from stb1 group by c1" ,
            "select c1,c2, udf1(c1,c2) from ct1 group by c1,c2" ,
            "select c1,c2, udf1(c1,c2) from stb1 group by c1,c2" ,
            "select num1,num2,num3,udf1(num1,num2,num3) from tb" ,
            "select c1,c6,udf1(c1,c6) from stb1 order by ts" ,
            "select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;"
        ]
        udf2_sqls = [
            "select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
            "select udf2(c1) from stb1 group by 1-udf1(c1)" ,
            "select udf2(num1) ,udf2(num2), udf2(num3) from tb" ,
            "select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb" ,
            "select udf2(c1) ,udf2(c6) from stb1 " ,
            "select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 " ,
            "select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1" ,
            "select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 " ,
            "select udf2(c1) from ct1 group by c1" ,
            "select udf2(c1) from stb1 group by c1" ,
            "select c1,c2, udf2(c1,c6) from ct1 group by c1,c2" ,
            "select c1,c2, udf2(c1,c6) from stb1 group by c1,c2" ,
            "select udf2(c1) from stb1 group by udf1(c1)" ,
            "select udf2(c1) from stb1 group by floor(c1)" ,
            "select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)" ,
            "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
            "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
            "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null" ,
            "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null"
        ]
        return udf1_sqls ,udf2_sqls
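    # unexpected_create re-creates udf1/udf2 with a missing bufSize or with the
    # scalar/aggregate declarations swapped, then checks which queries still work
    # and that reserved-looking names (db, test) are rejected at query time.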
    def unexpected_create(self):
        tdLog.info(" create function with out bufsize ")
        tdSql.query("drop function udf1 ")
        tdSql.query("drop function udf2 ")
        # create function without buffer
        tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int")
        tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double")
        udf1_sqls ,udf2_sqls = self.try_query_sql()

        for scalar_sql in udf1_sqls:
            tdSql.query(scalar_sql)
        for aggregate_sql in udf2_sqls:
            tdSql.error(aggregate_sql)

        # create function without aggregate
        tdLog.info(" create function with out aggregate ")
        tdSql.query("drop function udf1 ")
        tdSql.query("drop function udf2 ")
        # create function without buffer
        tdSql.execute("create aggregate function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
        tdSql.execute("create function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        udf1_sqls ,udf2_sqls = self.try_query_sql()

        for scalar_sql in udf1_sqls:
            tdSql.error(scalar_sql)
        for aggregate_sql in udf2_sqls:
            tdSql.error(aggregate_sql)

        tdSql.execute(" create function db as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
        tdSql.execute(" create aggregate function test as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
        tdSql.error(" select db(c1) from stb1 ")
        tdSql.error(" select db(c1,c6), db(c6) from stb1 ")
        tdSql.error(" select db(num1,num2), db(num1) from tb ")
        tdSql.error(" select test(c1) from stb1 ")
        tdSql.error(" select test(c1,c6), test(c6) from stb1 ")
        tdSql.error(" select test(num1,num2), test(num1) from tb ")
    def loop_kill_udfd(self):
        buildPath = self.getBuildPath()
        if (buildPath == ""):
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)

        cfgPath = buildPath + "/../sim/dnode1/cfg"
        udfdPath = buildPath +'/build/bin/udfd'

        for i in range(3):
            tdLog.info(" loop restart udfd %d_th" % i)
            tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
            tdSql.checkData(0,0,169.661427555)
            tdSql.checkData(0,1,169.661427555)
            # stop udfd cmds
            get_processID = "ps -ef | grep -w udfd | grep -v grep| grep -v defunct | awk '{print $2}'"
            processID = subprocess.check_output(get_processID, shell=True).decode("utf-8")
            stop_udfd = " kill -9 %s" % processID
            os.system(stop_udfd)
            time.sleep(2)
            tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
            tdSql.checkData(0,0,169.661427555)
            tdSql.checkData(0,1,169.661427555)
            # # start udfd cmds
            # start_udfd = "nohup " + udfdPath +'-c' +cfgPath +" > /dev/null 2>&1 &"
            # tdLog.info("start udfd : %s " % start_udfd)
    def test_function_name(self):
        tdLog.info(" create function name is not build_in functions ")
        tdSql.execute(" drop function udf1 ")
        tdSql.execute(" drop function udf2 ")
        tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
        tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
        tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function tbname as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function function as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function stable as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function union as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function 123 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function 123db as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function mnode as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
    def restart_taosd_query_udf(self):
        self.create_udf_function()

        for i in range(5):
            tdLog.info(" this is %d_th restart taosd " %i)
            tdSql.execute("use db ")
            tdSql.query("select count(*) from stb1")
            tdSql.checkRows(1)
            tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
            tdSql.checkData(0,0,169.661427555)
            tdSql.checkData(0,1,169.661427555)
            tdDnodes.stop(1)
            tdDnodes.start(1)
            time.sleep(2)
    def run(self):  # sourcery skip: extract-duplicate-method, remove-redundant-fstring
        print(" env is ok for all ")
        self.prepare_udf_so()
        self.prepare_data()
        self.create_udf_function()
        self.basic_udf_query()
        self.unexpected_create()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())


@@ -0,0 +1,654 @@
from distutils.log import error
import taos
import sys
import time
import os
from util.log import *
from util.sql import *
from util.cases import *
from util.dnodes import *
import subprocess
class TDTestCase:
def init(self, conn, logSql):
tdLog.debug(f"start to excute {__file__}")
tdSql.init(conn.cursor(), logSql)
def getBuildPath(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
for root, dirs, files in os.walk(projPath):
if ("taosd" in files):
rootRealPath = os.path.dirname(os.path.realpath(root))
if ("packaging" not in rootRealPath):
buildPath = root[:len(root) - len("/build/bin")]
break
return buildPath
def prepare_udf_so(self):
selfPath = os.path.dirname(os.path.realpath(__file__))
if ("community" in selfPath):
projPath = selfPath[:selfPath.find("community")]
else:
projPath = selfPath[:selfPath.find("tests")]
print(projPath)
libudf1 = subprocess.Popen('find %s -name "libudf1.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
libudf2 = subprocess.Popen('find %s -name "libudf2.so"|grep lib|head -n1'%projPath , shell=True, stdout=subprocess.PIPE,stderr=subprocess.STDOUT).stdout.read().decode("utf-8")
os.system("mkdir /tmp/udf/")
os.system("cp %s /tmp/udf/ "%libudf1.replace("\n" ,""))
os.system("cp %s /tmp/udf/ "%libudf2.replace("\n" ,""))
def prepare_data(self):
tdSql.execute("drop database if exists db ")
tdSql.execute("create database if not exists db days 300")
tdSql.execute("use db")
tdSql.execute(
'''create table stb1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
tags (t1 int)
'''
)
tdSql.execute(
'''
create table t1
(ts timestamp, c1 int, c2 bigint, c3 smallint, c4 tinyint, c5 float, c6 double, c7 bool, c8 binary(16),c9 nchar(32), c10 timestamp)
'''
)
for i in range(4):
tdSql.execute(f'create table ct{i+1} using stb1 tags ( {i+1} )')
for i in range(9):
tdSql.execute(
f"insert into ct1 values ( now()-{i*10}s, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute(
f"insert into ct4 values ( now()-{i*90}d, {1*i}, {11111*i}, {111*i}, {11*i}, {1.11*i}, {11.11*i}, {i%2}, 'binary{i}', 'nchar{i}', now()+{1*i}a )"
)
tdSql.execute("insert into ct1 values (now()-45s, 0, 0, 0, 0, 0, 0, 0, 'binary0', 'nchar0', now()+8a )")
tdSql.execute("insert into ct1 values (now()+10s, 9, -99999, -999, -99, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+15s, 9, -99999, -999, -99, -9.99, NULL, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct1 values (now()+20s, 9, -99999, -999, NULL, -9.99, -99.99, 1, 'binary9', 'nchar9', now()+9a )")
tdSql.execute("insert into ct4 values (now()-810d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()-400d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute("insert into ct4 values (now()+90d, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL ) ")
tdSql.execute(
f'''insert into t1 values
( '2020-04-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2020-10-21 01:01:01.000', 1, 11111, 111, 11, 1.11, 11.11, 1, "binary1", "nchar1", now()+1a )
( '2020-12-31 01:01:01.000', 2, 22222, 222, 22, 2.22, 22.22, 0, "binary2", "nchar2", now()+2a )
( '2021-01-01 01:01:06.000', 3, 33333, 333, 33, 3.33, 33.33, 0, "binary3", "nchar3", now()+3a )
( '2021-05-07 01:01:10.000', 4, 44444, 444, 44, 4.44, 44.44, 1, "binary4", "nchar4", now()+4a )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
( '2021-09-30 01:01:16.000', 5, 55555, 555, 55, 5.55, 55.55, 0, "binary5", "nchar5", now()+5a )
( '2022-02-01 01:01:20.000', 6, 66666, 666, 66, 6.66, 66.66, 1, "binary6", "nchar6", now()+6a )
( '2022-10-28 01:01:26.000', 7, 00000, 000, 00, 0.00, 00.00, 1, "binary7", "nchar7", "1970-01-01 08:00:00.000" )
( '2022-12-01 01:01:30.000', 8, -88888, -888, -88, -8.88, -88.88, 0, "binary8", "nchar8", "1969-01-01 01:00:00.000" )
( '2022-12-31 01:01:36.000', 9, -99999999999999999, -999, -99, -9.99, -999999999999999999999.99, 1, "binary9", "nchar9", "1900-01-01 00:00:00.000" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL, NULL )
'''
)
tdSql.execute("create table tb (ts timestamp , num1 int , num2 int, num3 double , num4 binary(30))")
tdSql.execute(
f'''insert into tb values
( '2020-04-21 01:01:01.000', NULL, 1, 1, "binary1" )
( '2020-10-21 01:01:01.000', 1, 1, 1.11, "binary1" )
( '2020-12-31 01:01:01.000', 2, 22222, 22, "binary1" )
( '2021-01-01 01:01:06.000', 3, 33333, 33, "binary1" )
( '2021-05-07 01:01:10.000', 4, 44444, 44, "binary1" )
( '2021-07-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
( '2021-09-30 01:01:16.000', 5, 55555, 55, "binary1" )
( '2022-02-01 01:01:20.000', 6, 66666, 66, "binary1" )
( '2022-10-28 01:01:26.000', 0, 00000, 00, "binary1" )
( '2022-12-01 01:01:30.000', 8, -88888, -88, "binary1" )
( '2022-12-31 01:01:36.000', 9, -9999999, -99, "binary1" )
( '2023-02-21 01:01:01.000', NULL, NULL, NULL, "binary1" )
'''
)
# udf functions with join
ts_start = 1652517451000
tdSql.execute("create stable st (ts timestamp , c1 int , c2 int ,c3 double ,c4 double ) tags(ind int)")
tdSql.execute("create table sub1 using st tags(1)")
tdSql.execute("create table sub2 using st tags(2)")
for i in range(10):
ts = ts_start + i *1000
tdSql.execute(" insert into sub1 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
tdSql.execute(" insert into sub2 values({} , {},{},{},{})".format(ts,i ,i*10,i*100.0,i*1000.0))
def create_udf_function(self):
for i in range(5):
# create scalar functions
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
# create aggregate functions
tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
functions = tdSql.getResult("show functions")
function_nums = len(functions)
if function_nums == 2:
tdLog.info("create two udf functions success ")
# drop functions
tdSql.execute("drop function udf1")
tdSql.execute("drop function udf2")
functions = tdSql.getResult("show functions")
for function in functions:
if "udf1" in function[0] or "udf2" in function[0]:
tdLog.info("drop udf functions failed ")
tdLog.exit("drop udf functions failed")
tdLog.info("drop two udf functions success ")
# create scalar functions
tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8;")
# create aggregate functions
tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8;")
functions = tdSql.getResult("show functions")
function_nums = len(functions)
if function_nums == 2:
tdLog.info("create two udf functions success ")
def basic_udf_query(self):
# scalar functions
tdSql.execute("use db ")
tdSql.query("select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(0,2,1)
tdSql.checkData(0,3,88)
tdSql.checkData(0,4,1.000000000)
tdSql.checkData(0,5,88)
tdSql.checkData(0,6,"binary1")
tdSql.checkData(0,7,88)
tdSql.checkData(3,0,3)
tdSql.checkData(3,1,88)
tdSql.checkData(3,2,33333)
tdSql.checkData(3,3,88)
tdSql.checkData(3,4,33.000000000)
tdSql.checkData(3,5,88)
tdSql.checkData(3,6,"binary1")
tdSql.checkData(3,7,88)
tdSql.checkData(11,0,None)
tdSql.checkData(11,1,None)
tdSql.checkData(11,2,None)
tdSql.checkData(11,3,None)
tdSql.checkData(11,4,None)
tdSql.checkData(11,5,None)
tdSql.checkData(11,6,"binary1")
tdSql.checkData(11,7,88)
tdSql.query("select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(0,2,None)
tdSql.checkData(0,3,None)
tdSql.checkData(0,4,None)
tdSql.checkData(0,5,None)
tdSql.checkData(0,6,None)
tdSql.checkData(0,7,None)
tdSql.checkData(20,0,8)
tdSql.checkData(20,1,88)
tdSql.checkData(20,2,88888)
tdSql.checkData(20,3,88)
tdSql.checkData(20,4,888)
tdSql.checkData(20,5,88)
tdSql.checkData(20,6,88)
tdSql.checkData(20,7,88)
# aggregate functions
tdSql.query("select udf2(num1) ,udf2(num2), udf2(num3) from tb")
tdSql.checkData(0,0,15.362291496)
tdSql.checkData(0,1,10000949.553189287)
tdSql.checkData(0,2,168.633425216)
# Arithmetic compute
tdSql.query("select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb")
tdSql.checkData(0,0,115.362291496)
tdSql.checkData(0,1,10000849.553189287)
tdSql.checkData(0,2,16863.342521576)
tdSql.checkData(0,3,1.686334252)
tdSql.query("select udf2(c1) ,udf2(c6) from stb1 ")
tdSql.checkData(0,0,25.514701644)
tdSql.checkData(0,1,265.247614504)
tdSql.query("select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 ")
tdSql.checkData(0,0,125.514701644)
tdSql.checkData(0,1,165.247614504)
tdSql.checkData(0,2,2551.470164435)
tdSql.checkData(0,3,2.652476145)
# # bug for crash when query sub table
tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1")
tdSql.checkData(0,0,378.215547010)
tdSql.checkData(0,1,353.808067460)
tdSql.checkData(0,2,2114.237451187)
tdSql.checkData(0,3,2.125468151)
tdSql.query("select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 ")
tdSql.checkData(0,0,490.358032462)
tdSql.checkData(0,1,400.460106627)
tdSql.checkData(0,2,2551.470164435)
tdSql.checkData(0,3,2.652476145)
# regular table with aggregate functions
tdSql.error("select udf1(num1) , count(num1) from tb;")
tdSql.error("select udf1(num1) , avg(num1) from tb;")
tdSql.error("select udf1(num1) , twa(num1) from tb;")
tdSql.error("select udf1(num1) , irate(num1) from tb;")
tdSql.error("select udf1(num1) , sum(num1) from tb;")
tdSql.error("select udf1(num1) , stddev(num1) from tb;")
tdSql.error("select udf1(num1) , mode(num1) from tb;")
tdSql.error("select udf1(num1) , HYPERLOGLOG(num1) from tb;")
# stable
tdSql.error("select udf1(c1) , count(c1) from stb1;")
tdSql.error("select udf1(c1) , avg(c1) from stb1;")
tdSql.error("select udf1(c1) , twa(c1) from stb1;")
tdSql.error("select udf1(c1) , irate(c1) from stb1;")
tdSql.error("select udf1(c1) , sum(c1) from stb1;")
tdSql.error("select udf1(c1) , stddev(c1) from stb1;")
tdSql.error("select udf1(c1) , mode(c1) from stb1;")
tdSql.error("select udf1(c1) , HYPERLOGLOG(c1) from stb1;")
# regular table with select functions
tdSql.query("select udf1(num1) , max(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select floor(num1) , max(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , min(num1) from tb;")
tdSql.checkRows(1)
tdSql.query("select ceil(num1) , min(num1) from tb;")
tdSql.checkRows(1)
tdSql.error("select udf1(num1) , first(num1) from tb;")
tdSql.error("select abs(num1) , first(num1) from tb;")
tdSql.error("select udf1(num1) , last(num1) from tb;")
tdSql.error("select round(num1) , last(num1) from tb;")
tdSql.query("select udf1(num1) , top(num1,1) from tb;")
tdSql.checkRows(1)
tdSql.query("select udf1(num1) , bottom(num1,1) from tb;")
tdSql.checkRows(1)
tdSql.error("select udf1(num1) , last_row(num1) from tb;")
tdSql.error("select round(num1) , last_row(num1) from tb;")
# stable
tdSql.query("select udf1(c1) , max(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select abs(c1) , max(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , min(c1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select floor(c1) , min(c1) from stb1;")
tdSql.checkRows(1)
tdSql.error("select udf1(c1) , first(c1) from stb1;")
tdSql.error("select udf1(c1) , last(c1) from stb1;")
tdSql.query("select udf1(c1) , top(c1 ,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select abs(c1) , top(c1 ,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select udf1(c1) , bottom(c1,1) from stb1;")
tdSql.checkRows(1)
tdSql.query("select ceil(c1) , bottom(c1,1) from stb1;")
tdSql.checkRows(1)
tdSql.error("select udf1(c1) , last_row(c1) from stb1;")
tdSql.error("select ceil(c1) , last_row(c1) from stb1;")
# regular table with compute functions
tdSql.query("select udf1(num1) , abs(num1) from tb;")
tdSql.checkRows(12)
tdSql.query("select floor(num1) , abs(num1) from tb;")
tdSql.checkRows(12)
# # bug need fix
#tdSql.query("select udf1(num1) , csum(num1) from tb;")
#tdSql.checkRows(9)
#tdSql.query("select ceil(num1) , csum(num1) from tb;")
#tdSql.checkRows(9)
#tdSql.query("select udf1(c1) , csum(c1) from stb1;")
#tdSql.checkRows(22)
#tdSql.query("select floor(c1) , csum(c1) from stb1;")
#tdSql.checkRows(22)
# stable with compute functions
tdSql.query("select udf1(c1) , abs(c1) from stb1;")
tdSql.checkRows(25)
tdSql.query("select abs(c1) , ceil(c1) from stb1;")
tdSql.checkRows(25)
# nest query
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;")
tdSql.checkRows(25)
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,8)
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;")
tdSql.checkRows(13)
tdSql.checkData(0,0,88)
tdSql.checkData(0,1,8)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,7)
# bug fix for crash
# order by udf function result
for _ in range(50):
tdSql.query("select udf2(c1) from stb1 group by 1-udf1(c1)")
print(tdSql.queryResult)
# udf functions with filter
tdSql.query("select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;")
tdSql.checkRows(3)
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,None)
tdSql.query("select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts")
tdSql.checkRows(3)
tdSql.checkData(0,0,9)
tdSql.checkData(0,1,88)
tdSql.checkData(0,2,-99.990000000)
tdSql.checkData(0,3,88)
tdSql.query("select sub1.c1, sub2.c2 from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,0)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,10)
tdSql.query("select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,88)
tdSql.checkData(0,1,88)
tdSql.checkData(1,0,88)
tdSql.checkData(1,1,88)
tdSql.query("select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,0)
tdSql.checkData(0,1,88)
tdSql.checkData(0,2,0)
tdSql.checkData(0,3,88)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,88)
tdSql.checkData(1,2,10)
tdSql.checkData(1,3,88)
tdSql.query("select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,16.881943016)
tdSql.checkData(0,1,168.819430161)
tdSql.error("select sub1.c1 , udf2(sub1.c1), sub2.c2 ,udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
# udf functions with group by
tdSql.query("select udf1(c1) from ct1 group by c1")
tdSql.checkRows(10)
tdSql.query("select udf1(c1) from stb1 group by c1")
tdSql.checkRows(11)
tdSql.query("select c1,c2, udf1(c1,c2) from ct1 group by c1,c2")
tdSql.checkRows(10)
tdSql.query("select c1,c2, udf1(c1,c2) from stb1 group by c1,c2")
tdSql.checkRows(11)
tdSql.query("select udf2(c1) from ct1 group by c1")
tdSql.checkRows(10)
tdSql.query("select udf2(c1) from stb1 group by c1")
tdSql.checkRows(11)
tdSql.query("select c1,c2, udf2(c1,c6) from ct1 group by c1,c2")
tdSql.checkRows(10)
tdSql.query("select c1,c2, udf2(c1,c6) from stb1 group by c1,c2")
tdSql.checkRows(11)
tdSql.query("select udf2(c1) from stb1 group by udf1(c1)")
tdSql.checkRows(2)
tdSql.query("select udf2(c1) from stb1 group by floor(c1)")
tdSql.checkRows(11)
# udf mix with order by
tdSql.query("select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)")
tdSql.checkRows(11)
def multi_cols_udf(self):
tdSql.query("select num1,num2,num3,udf1(num1,num2,num3) from tb")
tdSql.checkData(0,0,None)
tdSql.checkData(0,1,1)
tdSql.checkData(0,2,1.000000000)
tdSql.checkData(0,3,None)
tdSql.checkData(1,0,1)
tdSql.checkData(1,1,1)
tdSql.checkData(1,2,1.110000000)
tdSql.checkData(1,3,88)
tdSql.query("select c1,c6,udf1(c1,c6) from stb1 order by ts")
tdSql.checkData(1,0,8)
tdSql.checkData(1,1,88.880000000)
tdSql.checkData(1,2,88)
tdSql.query("select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;")
tdSql.checkRows(22)
tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
tdSql.checkData(0,0,169.661427555)
tdSql.checkData(0,1,169.661427555)
    def try_query_sql(self):
        udf1_sqls = [
            "select num1 , udf1(num1) ,num2 ,udf1(num2),num3 ,udf1(num3),num4 ,udf1(num4) from tb",
            "select c1 , udf1(c1) ,c2 ,udf1(c2), c3 ,udf1(c3), c4 ,udf1(c4) from stb1 order by c1",
            "select udf1(num1) , max(num1) from tb;",
            "select udf1(num1) , min(num1) from tb;",
            # "select udf1(num1) , top(num1,1) from tb;",
            # "select udf1(num1) , bottom(num1,1) from tb;",
            "select udf1(c1) , max(c1) from stb1;",
            "select udf1(c1) , min(c1) from stb1;",
            # "select udf1(c1) , top(c1 ,1) from stb1;",
            # "select udf1(c1) , bottom(c1,1) from stb1;",
            "select udf1(num1) , abs(num1) from tb;",
            # "select udf1(num1) , csum(num1) from tb;",
            # "select udf1(c1) , csum(c1) from stb1;",
            "select udf1(c1) , abs(c1) from stb1;",
            "select abs(udf1(c1)) , abs(ceil(c1)) from stb1 order by ts;",
            "select abs(udf1(c1)) , abs(ceil(c1)) from ct1 order by ts;",
            "select abs(udf1(c1)) , abs(ceil(c1)) from stb1 where c1 is null order by ts;",
            "select c1 ,udf1(c1) , c6 ,udf1(c6) from stb1 where c1 > 8 order by ts",
            "select udf1(sub1.c1), udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null",
            "select sub1.c1 , udf1(sub1.c1), sub2.c2 ,udf1(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null",
            "select udf1(c1) from ct1 group by c1",
            "select udf1(c1) from stb1 group by c1",
            "select c1,c2, udf1(c1,c2) from ct1 group by c1,c2",
            "select c1,c2, udf1(c1,c2) from stb1 group by c1,c2",
            "select num1,num2,num3,udf1(num1,num2,num3) from tb",
            "select c1,c6,udf1(c1,c6) from stb1 order by ts",
            "select abs(udf1(c1,c6,c1,c6)) , abs(ceil(c1)) from stb1 where c1 is not null order by ts;"
        ]
        udf2_sqls = [
            "select udf2(sub1.c1), udf2(sub2.c2) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null",
            "select udf2(c1) from stb1 group by 1-udf1(c1)",
            "select udf2(num1) ,udf2(num2), udf2(num3) from tb",
            "select udf2(num1)+100 ,udf2(num2)-100, udf2(num3)*100 ,udf2(num3)/100 from tb",
            "select udf2(c1) ,udf2(c6) from stb1 ",
            "select udf2(c1)+100 ,udf2(c6)-100 ,udf2(c1)*100 ,udf2(c6)/100 from stb1 ",
            "select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from ct1",
            "select udf2(c1+100) ,udf2(c6-100) ,udf2(c1*100) ,udf2(c6/100) from stb1 ",
            "select udf2(c1) from ct1 group by c1",
            "select udf2(c1) from stb1 group by c1",
            "select c1,c2, udf2(c1,c6) from ct1 group by c1,c2",
            "select c1,c2, udf2(c1,c6) from stb1 group by c1,c2",
            "select udf2(c1) from stb1 group by udf1(c1)",
            "select udf2(c1) from stb1 group by floor(c1)",
            "select udf2(c1) from stb1 group by floor(c1) order by udf2(c1)",
            "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null",
            "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null",
            "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null",
            "select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null"
        ]
        return udf1_sqls, udf2_sqls
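
    # unexpected_create: negative cases for CREATE FUNCTION, e.g. an aggregate
    # UDF declared without bufSize or a scalar/aggregate declaration that does
    # not match the library; queries against mismatched functions must fail.
    # For reference, the well-formed statements used elsewhere in this test are:
    #   create function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8
    #   create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8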
    def unexpected_create(self):
        tdLog.info(" create function without bufSize ")
        tdSql.query("drop function udf1 ")
        tdSql.query("drop function udf2 ")
        # create both functions without the bufSize option: the scalar UDF
        # still works, but the aggregate UDF cannot be queried
        tdSql.execute("create function udf1 as '/tmp/udf/libudf1.so' outputtype int")
        tdSql.execute("create aggregate function udf2 as '/tmp/udf/libudf2.so' outputtype double")
        udf1_sqls, udf2_sqls = self.try_query_sql()
        for scalar_sql in udf1_sqls:
            tdSql.query(scalar_sql)
        for aggregate_sql in udf2_sqls:
            tdSql.error(aggregate_sql)
        # create functions with scalar/aggregate swapped
        tdLog.info(" create function without aggregate ")
        tdSql.query("drop function udf1 ")
        tdSql.query("drop function udf2 ")
        # declare udf1 as aggregate and udf2 as scalar, the opposite of the
        # actual library types, so queries against both are expected to fail
        tdSql.execute("create aggregate function udf1 as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
        tdSql.execute("create function udf2 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        udf1_sqls, udf2_sqls = self.try_query_sql()
        for scalar_sql in udf1_sqls:
            tdSql.error(scalar_sql)
        for aggregate_sql in udf2_sqls:
            tdSql.error(aggregate_sql)
        # functions named after existing identifiers (db, test) can be created
        # but are not callable
        tdSql.execute(" create function db as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
        tdSql.execute(" create aggregate function test as '/tmp/udf/libudf1.so' outputtype int bufSize 8 ")
        tdSql.error(" select db(c1) from stb1 ")
        tdSql.error(" select db(c1,c6), db(c6) from stb1 ")
        tdSql.error(" select db(num1,num2), db(num1) from tb ")
        tdSql.error(" select test(c1) from stb1 ")
        tdSql.error(" select test(c1,c6), test(c6) from stb1 ")
        tdSql.error(" select test(num1,num2), test(num1) from tb ")
    def loop_kill_udfd(self):
        buildPath = self.getBuildPath()
        if buildPath == "":
            tdLog.exit("taosd not found!")
        else:
            tdLog.info("taosd found in %s" % buildPath)
        cfgPath = buildPath + "/../sim/dnode1/cfg"
        udfdPath = buildPath + '/build/bin/udfd'
        for i in range(3):
            tdLog.info(" loop restart udfd %d_th" % i)
            tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
            tdSql.checkData(0, 0, 169.661427555)
            tdSql.checkData(0, 1, 169.661427555)
            # stop the udfd process
            get_processID = "ps -ef | grep -w udfd | grep -v grep | grep -v defunct | awk '{print $2}'"
            processID = subprocess.check_output(get_processID, shell=True).decode("utf-8")
            stop_udfd = " kill -9 %s" % processID
            os.system(stop_udfd)
            time.sleep(2)
            tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
            tdSql.checkData(0, 0, 169.661427555)
            tdSql.checkData(0, 1, 169.661427555)
            # start udfd manually (left disabled)
            # start_udfd = "nohup " + udfdPath + " -c " + cfgPath + " > /dev/null 2>&1 &"
            # tdLog.info("start udfd : %s " % start_udfd)
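
    # test_function_name: a UDF must not reuse a built-in function name (max,
    # sum), a reserved word such as tbname, function, stable, union or mnode,
    # or a name that starts with a digit; every such CREATE is expected to fail.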
    def test_function_name(self):
        tdLog.info(" function name must not be a built-in function or reserved keyword ")
        tdSql.execute(" drop function udf1 ")
        tdSql.execute(" drop function udf2 ")
        tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
        tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create function max as '/tmp/udf/libudf1.so' outputtype int bufSize 8")
        tdSql.error("create aggregate function sum as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function tbname as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function function as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function stable as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function union as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function 123 as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function 123db as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
        tdSql.error("create aggregate function mnode as '/tmp/udf/libudf2.so' outputtype double bufSize 8")
    def restart_taosd_query_udf(self):
        for i in range(3):
            tdLog.info(" this is the %d_th restart of taosd " % i)
            tdSql.execute("use db ")
            tdSql.query("select count(*) from stb1")
            tdSql.checkRows(1)
            tdSql.query("select udf2(sub1.c1 ,sub1.c2), udf2(sub2.c2 ,sub2.c1) from sub1, sub2 where sub1.ts=sub2.ts and sub1.c1 is not null")
            tdSql.checkData(0, 0, 169.661427555)
            tdSql.checkData(0, 1, 169.661427555)
            tdDnodes.stop(1)
            tdDnodes.start(1)
            time.sleep(2)
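
    # run/stop are the entry points invoked by the test framework. Note that
    # unexpected_create, loop_kill_udfd and test_function_name are defined above
    # but not called from run() in this variant of the test.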
    def run(self):  # sourcery skip: extract-duplicate-method, remove-redundant-fstring
        print(" env is ok for all ")
        self.prepare_udf_so()
        self.prepare_data()
        self.create_udf_function()
        self.basic_udf_query()
        self.multi_cols_udf()
        self.restart_taosd_query_udf()

    def stop(self):
        tdSql.close()
        tdLog.success(f"{__file__} successfully executed")


tdCases.addLinux(__file__, TDTestCase())
tdCases.addWindows(__file__, TDTestCase())


@ -8,6 +8,8 @@ python3 ./test.py -f 0-others/taosShellNetChk.py
python3 ./test.py -f 0-others/telemetry.py
python3 ./test.py -f 0-others/taosdMonitor.py
python3 ./test.py -f 0-others/udfTest.py
python3 ./test.py -f 0-others/udf_create.py
python3 ./test.py -f 0-others/udf_restart_taosd.py
python3 ./test.py -f 0-others/user_control.py
python3 ./test.py -f 0-others/fsync.py

@ -1 +1 @@
Subproject commit 0aad27d725f4ee6b18daf1db0c07d933aed16eea
Subproject commit a8bb88c9056735919fc50bf9b12d9562f17e844f