From 16006bedd425ea1a340709e916ca3a1d96327aa4 Mon Sep 17 00:00:00 2001 From: danielclow <106956386+danielclow@users.noreply.github.com> Date: Wed, 29 Mar 2023 13:09:13 +0800 Subject: [PATCH 001/156] docs: fix broken links --- docs/en/14-reference/07-tdinsight/index.md | 4 ++-- docs/en/25-application/01-telegraf.md | 2 +- docs/en/25-application/02-collectd.md | 2 +- 3 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/en/14-reference/07-tdinsight/index.md b/docs/en/14-reference/07-tdinsight/index.md index 652d5ded0b..8d97dafa7c 100644 --- a/docs/en/14-reference/07-tdinsight/index.md +++ b/docs/en/14-reference/07-tdinsight/index.md @@ -12,8 +12,8 @@ After TDengine starts, it automatically writes many metrics in specific interval To deploy TDinsight, we need - a single-node TDengine server or a multi-node TDengine cluster and a [Grafana] server are required. This dashboard requires TDengine 3.0.1.0 and above, with the monitoring feature enabled. For detailed configuration, please refer to [TDengine monitoring configuration](../config/#monitoring-parameters). -- taosAdapter has been instaleld and running, please refer to [taosAdapter](../taosadapter). -- taosKeeper has been installed and running, please refer to [taosKeeper](../taoskeeper). +- taosAdapter has been installed and running, please refer to [taosAdapter](../taosadapter). +- taosKeeper has been installed and running, please refer to [taosKeeper](../taosKeeper). Please record - The endpoint of taosAdapter REST service, for example `http://tdengine.local:6041` diff --git a/docs/en/25-application/01-telegraf.md b/docs/en/25-application/01-telegraf.md index b84b30bb71..1e3325b2b2 100644 --- a/docs/en/25-application/01-telegraf.md +++ b/docs/en/25-application/01-telegraf.md @@ -35,7 +35,7 @@ Please refer to the [official documentation](https://grafana.com/grafana/downloa ### TDengine -Download the latest TDengine-server from the [Downloads](http://tdengine.com/en/all-downloads/) page on the TAOSData website and install it. +Download and install the [latest version of TDengine](https://docs.tdengine.com/releases/tdengine/). ## Data Connection Setup diff --git a/docs/en/25-application/02-collectd.md b/docs/en/25-application/02-collectd.md index 6ac7253fc4..ee1e944928 100644 --- a/docs/en/25-application/02-collectd.md +++ b/docs/en/25-application/02-collectd.md @@ -38,7 +38,7 @@ Please refer to the [official documentation](https://grafana.com/grafana/downloa ### Install TDengine -Download the latest TDengine-server from the [Downloads](http://tdengine.com/en/all-downloads/) page on the TAOSData website and install it. +Download and install the [latest version of TDengine](https://docs.tdengine.com/releases/tdengine/). ## Data Connection Setup From 860fd962372cd281056658731c0a78e379ee7082 Mon Sep 17 00:00:00 2001 From: danielclow <106956386+danielclow@users.noreply.github.com> Date: Wed, 29 Mar 2023 14:22:43 +0800 Subject: [PATCH 002/156] docs: fix redirected links --- docs/en/02-intro/index.md | 13 ++++++------- docs/en/05-get-started/01-docker.md | 2 +- docs/en/05-get-started/03-package.md | 2 +- docs/en/07-develop/01-connect/index.md | 2 +- docs/en/27-train-faq/01-faq.md | 2 +- 5 files changed, 10 insertions(+), 11 deletions(-) diff --git a/docs/en/02-intro/index.md b/docs/en/02-intro/index.md index 79d170eeb9..9324ca9f7e 100644 --- a/docs/en/02-intro/index.md +++ b/docs/en/02-intro/index.md @@ -44,7 +44,7 @@ For more details on features, please read through the entire documentation. ## Competitive Advantages -By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb), with the following advantages. +By making full use of [characteristics of time series data](https://tdengine.com/tsdb/characteristics-of-time-series-data/), TDengine differentiates itself from other [time series databases](https://tdengine.com/tsdb/), with the following advantages. - **[High-Performance](https://tdengine.com/tdengine/high-performance-time-series-database/)**: TDengine is the only time-series database to solve the high cardinality issue to support billions of data collection points while out performing other time-series databases for data ingestion, querying and data compression. @@ -123,13 +123,12 @@ As a high-performance, scalable and SQL supported time-series database, TDengine ## Comparison with other databases -- [Writing Performance Comparison of TDengine and InfluxDB ](https://tdengine.com/performance-comparison-of-tdengine-and-influxdb/) -- [Query Performance Comparison of TDengine and InfluxDB](https://tdengine.com/query-performance-comparison-test-report-tdengine-vs-influxdb/) -- [TDengine vs OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/) -- [TDengine vs Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/) -- [TDengine vs InfluxDB](https://tdengine.com/performance-tdengine-vs-influxdb/) +- [TDengine vs. InfluxDB](https://tdengine.com/tsdb-comparison-influxdb-vs-tdengine/) +- [TDengine vs. TimescaleDB](https://tdengine.com/tsdb-comparison-timescaledb-vs-tdengine/) +- [TDengine vs. OpenTSDB](https://tdengine.com/performance-tdengine-vs-opentsdb/) +- [TDengine vs. Cassandra](https://tdengine.com/performance-tdengine-vs-cassandra/) ## More readings - [Introduction to Time-Series Database](https://tdengine.com/tsdb/) - [Introduction to TDengine competitive advantages](https://tdengine.com/tdengine/) - + diff --git a/docs/en/05-get-started/01-docker.md b/docs/en/05-get-started/01-docker.md index 42e6861674..2049e1615f 100644 --- a/docs/en/05-get-started/01-docker.md +++ b/docs/en/05-get-started/01-docker.md @@ -6,7 +6,7 @@ description: This document describes how to install TDengine in a Docker contain This document describes how to install TDengine in a Docker container and perform queries and inserts. -- The easiest way to explore TDengine is through [TDengine Cloud](http://cloud.tdengine.com). +- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com). - To get started with TDengine in a non-containerized environment, see [Quick Install from Package](../../get-started/package). - If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine). diff --git a/docs/en/05-get-started/03-package.md b/docs/en/05-get-started/03-package.md index 3282f600ac..c41a8d531f 100644 --- a/docs/en/05-get-started/03-package.md +++ b/docs/en/05-get-started/03-package.md @@ -10,7 +10,7 @@ import PkgListV3 from "/components/PkgListV3"; This document describes how to install TDengine on Linux/Windows/macOS and perform queries and inserts. -- The easiest way to explore TDengine is through [TDengine Cloud](http://cloud.tdengine.com). +- The easiest way to explore TDengine is through [TDengine Cloud](https://cloud.tdengine.com). - To get started with TDengine on Docker, see [Quick Install on Docker](../../get-started/docker). - If you want to view the source code, build TDengine yourself, or contribute to the project, see the [TDengine GitHub repository](https://github.com/taosdata/TDengine). diff --git a/docs/en/07-develop/01-connect/index.md b/docs/en/07-develop/01-connect/index.md index 913c24f189..d420f15957 100644 --- a/docs/en/07-develop/01-connect/index.md +++ b/docs/en/07-develop/01-connect/index.md @@ -288,6 +288,6 @@ Prior to establishing connection, please make sure TDengine is already running a :::tip -If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](https://docs.tdengine.com/train-faq/faq). +If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](../../train-faq/faq/). ::: diff --git a/docs/en/27-train-faq/01-faq.md b/docs/en/27-train-faq/01-faq.md index 9e7f4f8e0d..aa28303f5d 100644 --- a/docs/en/27-train-faq/01-faq.md +++ b/docs/en/27-train-faq/01-faq.md @@ -32,7 +32,7 @@ TDengine 3.0 is not compatible with the configuration and data files from previo 2. Run `sudo rm -rf /var/log/taos/` to delete your log files. 3. Run `sudo rm -rf /var/lib/taos/` to delete your data files. 4. Install TDengine 3.0. -5. For assistance in migrating data to TDengine 3.0, contact [TDengine Support](https://tdengine.com/support). +5. For assistance in migrating data to TDengine 3.0, contact [TDengine Support](https://tdengine.com/support/). ### 2. How can I resolve the "Unable to establish connection" error? From 37bc1bca3697126a545f5d33939b582047e0cea3 Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 3 Apr 2023 17:00:52 +0800 Subject: [PATCH 003/156] enh: column/row max length support up to 64K --- docs/examples/c/async_query_example.c | 2 +- docs/examples/c/query_example.c | 2 +- include/libs/function/function.h | 10 +++++----- include/libs/nodes/cmdnodes.h | 2 +- include/libs/parser/parser.h | 2 +- include/util/tdef.h | 11 +++++++---- source/client/src/clientSmlJson.c | 2 +- source/dnode/mnode/impl/src/mndFunc.c | 2 +- source/libs/executor/src/timewindowoperator.c | 2 +- source/libs/function/inc/tpercentile.h | 4 ++-- source/libs/function/src/builtinsimpl.c | 18 +++++++++--------- source/libs/function/src/tpercentile.c | 2 +- source/libs/parser/src/parInsertSml.c | 2 +- source/libs/scalar/src/sclfunc.c | 16 ++++++++-------- source/util/src/tcompare.c | 2 +- tools/shell/src/shellEngine.c | 2 +- 16 files changed, 42 insertions(+), 39 deletions(-) diff --git a/docs/examples/c/async_query_example.c b/docs/examples/c/async_query_example.c index b370420b12..3807c4bfd7 100644 --- a/docs/examples/c/async_query_example.c +++ b/docs/examples/c/async_query_example.c @@ -8,7 +8,7 @@ #include #include -typedef int16_t VarDataLenT; +typedef uint16_t VarDataLenT; #define TSDB_NCHAR_SIZE sizeof(int32_t) #define VARSTR_HEADER_SIZE sizeof(VarDataLenT) diff --git a/docs/examples/c/query_example.c b/docs/examples/c/query_example.c index fcae95bcd4..c7d52115b5 100644 --- a/docs/examples/c/query_example.c +++ b/docs/examples/c/query_example.c @@ -6,7 +6,7 @@ #include #include -typedef int16_t VarDataLenT; +typedef uint16_t VarDataLenT; #define TSDB_NCHAR_SIZE sizeof(int32_t) #define VARSTR_HEADER_SIZE sizeof(VarDataLenT) diff --git a/include/libs/function/function.h b/include/libs/function/function.h index fb6ef26a8a..1411ee7c4b 100644 --- a/include/libs/function/function.h +++ b/include/libs/function/function.h @@ -99,11 +99,11 @@ typedef struct SSubsidiaryResInfo { } SSubsidiaryResInfo; typedef struct SResultDataInfo { - int16_t precision; - int16_t scale; - int16_t type; - int16_t bytes; - int32_t interBufSize; + int16_t precision; + int16_t scale; + int16_t type; + uint16_t bytes; + int32_t interBufSize; } SResultDataInfo; #define GET_RES_INFO(ctx) ((ctx)->resultInfo) diff --git a/include/libs/nodes/cmdnodes.h b/include/libs/nodes/cmdnodes.h index c716d77b32..e276373f2d 100644 --- a/include/libs/nodes/cmdnodes.h +++ b/include/libs/nodes/cmdnodes.h @@ -30,7 +30,7 @@ extern "C" { #define SHOW_CREATE_DB_RESULT_COLS 2 #define SHOW_CREATE_DB_RESULT_FIELD1_LEN (TSDB_DB_NAME_LEN + VARSTR_HEADER_SIZE) -#define SHOW_CREATE_DB_RESULT_FIELD2_LEN (TSDB_MAX_BINARY_LEN + VARSTR_HEADER_SIZE) +#define SHOW_CREATE_DB_RESULT_FIELD2_LEN (TSDB_MAX_BINARY_LEN) #define SHOW_CREATE_TB_RESULT_COLS 2 #define SHOW_CREATE_TB_RESULT_FIELD1_LEN (TSDB_TABLE_NAME_LEN + VARSTR_HEADER_SIZE) diff --git a/include/libs/parser/parser.h b/include/libs/parser/parser.h index 558203052f..94fb6824d2 100644 --- a/include/libs/parser/parser.h +++ b/include/libs/parser/parser.h @@ -114,7 +114,7 @@ STableDataCxt* smlInitTableDataCtx(SQuery* query, STableMeta* pTableMeta); int32_t smlBindData(SQuery* handle, bool dataFormat, SArray* tags, SArray* colsSchema, SArray* cols, STableMeta* pTableMeta, char* tableName, const char* sTableName, int32_t sTableNameLen, int32_t ttl, - char* msgBuf, int16_t msgBufLen); + char* msgBuf, int32_t msgBufLen); int32_t smlBuildOutput(SQuery* handle, SHashObj* pVgHash); int rawBlockBindData(SQuery *query, STableMeta* pTableMeta, void* data, SVCreateTbReq* pCreateTb, TAOS_FIELD *fields, int numFields, bool needChangeLength); diff --git a/include/util/tdef.h b/include/util/tdef.h index e5000891c9..8c5c5502fd 100644 --- a/include/util/tdef.h +++ b/include/util/tdef.h @@ -236,8 +236,11 @@ typedef enum ELogicConditionType { * - Firstly, we use 65531(65535 - 4), as the SDataRow/SKVRow contains 4 bits header. * - Secondly, if all cols are VarDataT type except primary key, we need 4 bits to store the offset, thus * the final value is 65531-(4096-1)*4 = 49151. + * + * History value:49151/65531 + * - 65531 compatible with 2.0 */ -#define TSDB_MAX_BYTES_PER_ROW 49151 +#define TSDB_MAX_BYTES_PER_ROW 65531 #define TSDB_MAX_TAGS_LEN 16384 #define TSDB_MAX_TAGS 128 @@ -406,9 +409,9 @@ typedef enum ELogicConditionType { #define TSDB_EXPLAIN_RESULT_ROW_SIZE (16 * 1024) #define TSDB_EXPLAIN_RESULT_COLUMN_NAME "QUERY_PLAN" -#define TSDB_MAX_FIELD_LEN 16384 -#define TSDB_MAX_BINARY_LEN (TSDB_MAX_FIELD_LEN - TSDB_KEYSIZE) // keep 16384 -#define TSDB_MAX_NCHAR_LEN (TSDB_MAX_FIELD_LEN - TSDB_KEYSIZE) // keep 16384 +#define TSDB_MAX_FIELD_LEN 65519 // compatible with 2.0 +#define TSDB_MAX_BINARY_LEN TSDB_MAX_FIELD_LEN // 16384:65519 +#define TSDB_MAX_NCHAR_LEN TSDB_MAX_FIELD_LEN // 16384:65519 #define PRIMARYKEY_TIMESTAMP_COL_ID 1 #define COL_REACH_END(colId, maxColId) ((colId) > (maxColId)) diff --git a/source/client/src/clientSmlJson.c b/source/client/src/clientSmlJson.c index 9fd98e33b7..b0ae316031 100644 --- a/source/client/src/clientSmlJson.c +++ b/source/client/src/clientSmlJson.c @@ -575,7 +575,7 @@ static int32_t smlConvertJSONString(SSmlKv *pVal, char *typeStr, cJSON *value) { uError("OTD:invalid type(%s) for JSON String", typeStr); return TSDB_CODE_TSC_INVALID_JSON_TYPE; } - pVal->length = (int16_t)strlen(value->valuestring); + pVal->length = strlen(value->valuestring); if (pVal->type == TSDB_DATA_TYPE_BINARY && pVal->length > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; diff --git a/source/dnode/mnode/impl/src/mndFunc.c b/source/dnode/mnode/impl/src/mndFunc.c index 7a475c61b6..f4451d1630 100644 --- a/source/dnode/mnode/impl/src/mndFunc.c +++ b/source/dnode/mnode/impl/src/mndFunc.c @@ -475,7 +475,7 @@ RETRIEVE_FUNC_OVER: return code; } -static void *mnodeGenTypeStr(char *buf, int32_t buflen, uint8_t type, int16_t len) { +static void *mnodeGenTypeStr(char *buf, int32_t buflen, uint8_t type, int32_t len) { char *msg = "unknown"; if (type >= sizeof(tDataTypes) / sizeof(tDataTypes[0])) { return msg; diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c index fef588a503..3272eefba4 100644 --- a/source/libs/executor/src/timewindowoperator.c +++ b/source/libs/executor/src/timewindowoperator.c @@ -1111,7 +1111,7 @@ static void doStateWindowAggImpl(SOperatorInfo* pOperator, SStateWindowOperatorI bool masterScan = true; int32_t numOfOutput = pOperator->exprSupp.numOfExprs; - int16_t bytes = pStateColInfoData->info.bytes; + int32_t bytes = pStateColInfoData->info.bytes; SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, pInfo->tsSlotId); TSKEY* tsList = (TSKEY*)pColInfoData->pData; diff --git a/source/libs/function/inc/tpercentile.h b/source/libs/function/inc/tpercentile.h index 80159460f5..65b7b38a05 100644 --- a/source/libs/function/inc/tpercentile.h +++ b/source/libs/function/inc/tpercentile.h @@ -53,7 +53,7 @@ typedef int32_t (*__perc_hash_func_t)(struct tMemBucket *pBucket, const void *va typedef struct tMemBucket { int16_t numOfSlots; int16_t type; - int16_t bytes; + int32_t bytes; int32_t total; int32_t elemPerPage; // number of elements for each object int32_t maxCapacity; // maximum allowed number of elements that can be sort directly to get the result @@ -67,7 +67,7 @@ typedef struct tMemBucket { SHashObj *groupPagesMap; // disk page map for different groups; } tMemBucket; -tMemBucket *tMemBucketCreate(int16_t nElemSize, int16_t dataType, double minval, double maxval); +tMemBucket *tMemBucketCreate(int32_t nElemSize, int16_t dataType, double minval, double maxval); void tMemBucketDestroy(tMemBucket *pBucket); diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c index 4760358e0d..62d1f6244b 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -196,11 +196,11 @@ typedef struct SMavgInfo { } SMavgInfo; typedef struct SSampleInfo { - int32_t samples; - int32_t totalPoints; - int32_t numSampled; - uint8_t colType; - int16_t colBytes; + int32_t samples; + int32_t totalPoints; + int32_t numSampled; + uint8_t colType; + uint16_t colBytes; STuplePos nullTuplePos; bool nullTupleSaved; @@ -220,7 +220,7 @@ typedef struct STailInfo { int32_t numAdded; int32_t offset; uint8_t colType; - int16_t colBytes; + uint16_t colBytes; STailItem** pItems; } STailInfo; @@ -233,7 +233,7 @@ typedef struct SUniqueItem { typedef struct SUniqueInfo { int32_t numOfPoints; uint8_t colType; - int16_t colBytes; + uint16_t colBytes; bool hasNull; // null is not hashable, handle separately SHashObj* pHash; char pItems[]; @@ -247,13 +247,13 @@ typedef struct SModeItem { typedef struct SModeInfo { uint8_t colType; - int16_t colBytes; + uint16_t colBytes; SHashObj* pHash; STuplePos nullTuplePos; bool nullTupleSaved; - char* buf; // serialize data buffer + char* buf; // serialize data buffer } SModeInfo; typedef struct SDerivInfo { diff --git a/source/libs/function/src/tpercentile.c b/source/libs/function/src/tpercentile.c index de381fadbd..a18051e0b6 100644 --- a/source/libs/function/src/tpercentile.c +++ b/source/libs/function/src/tpercentile.c @@ -236,7 +236,7 @@ static void resetSlotInfo(tMemBucket *pBucket) { } } -tMemBucket *tMemBucketCreate(int16_t nElemSize, int16_t dataType, double minval, double maxval) { +tMemBucket *tMemBucketCreate(int32_t nElemSize, int16_t dataType, double minval, double maxval) { tMemBucket *pBucket = (tMemBucket *)taosMemoryCalloc(1, sizeof(tMemBucket)); if (pBucket == NULL) { return NULL; diff --git a/source/libs/parser/src/parInsertSml.c b/source/libs/parser/src/parInsertSml.c index 106ee641af..c05ce02aa2 100644 --- a/source/libs/parser/src/parInsertSml.c +++ b/source/libs/parser/src/parInsertSml.c @@ -242,7 +242,7 @@ end: int32_t smlBindData(SQuery* query, bool dataFormat, SArray* tags, SArray* colsSchema, SArray* cols, STableMeta* pTableMeta, char* tableName, const char* sTableName, int32_t sTableNameLen, int32_t ttl, - char* msgBuf, int16_t msgBufLen) { + char* msgBuf, int32_t msgBufLen) { SMsgBuf pBuf = {.buf = msgBuf, .len = msgBufLen}; SSchema* pTagsSchema = getTableTagSchema(pTableMeta); diff --git a/source/libs/scalar/src/sclfunc.c b/source/libs/scalar/src/sclfunc.c index 195a08525c..23f07fb332 100644 --- a/source/libs/scalar/src/sclfunc.c +++ b/source/libs/scalar/src/sclfunc.c @@ -12,7 +12,7 @@ typedef double (*_double_fn)(double); typedef double (*_double_fn_2)(double, double); typedef int (*_conv_fn)(int); typedef void (*_trim_fn)(char *, char *, int32_t, int32_t); -typedef int16_t (*_len_fn)(char *, int32_t); +typedef uint16_t (*_len_fn)(char *, int32_t); /** Math functions **/ static double tlog(double v) { return log(v); } @@ -286,9 +286,9 @@ static int32_t doScalarFunction(SScalarParam *pInput, int32_t inputNum, SScalarP } /** String functions **/ -static int16_t tlength(char *input, int32_t type) { return varDataLen(input); } +static VarDataLenT tlength(char *input, int32_t type) { return varDataLen(input); } -static int16_t tcharlength(char *input, int32_t type) { +static VarDataLenT tcharlength(char *input, int32_t type) { if (type == TSDB_DATA_TYPE_VARCHAR) { return varDataLen(input); } else { // NCHAR @@ -377,7 +377,7 @@ static int32_t doLengthFunction(SScalarParam *pInput, int32_t inputNum, SScalarP return TSDB_CODE_SUCCESS; } -static int32_t concatCopyHelper(const char *input, char *output, bool hasNchar, int32_t type, int16_t *dataLen) { +static int32_t concatCopyHelper(const char *input, char *output, bool hasNchar, int32_t type, VarDataLenT *dataLen) { if (hasNchar && type == TSDB_DATA_TYPE_VARCHAR) { TdUcs4 *newBuf = taosMemoryCalloc((varDataLen(input) + 1) * TSDB_NCHAR_SIZE, 1); int32_t len = varDataLen(input); @@ -457,7 +457,7 @@ int32_t concatFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOu continue; } - int16_t dataLen = 0; + VarDataLenT dataLen = 0; for (int32_t i = 0; i < inputNum; ++i) { int32_t rowIdx = (pInput[i].numOfRows == 1) ? 0 : k; input[i] = colDataGetData(pInputData[i], rowIdx); @@ -526,8 +526,8 @@ int32_t concatWsFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *p continue; } - int16_t dataLen = 0; - bool hasNull = false; + VarDataLenT dataLen = 0; + bool hasNull = false; for (int32_t i = 1; i < inputNum; ++i) { if (colDataIsNull_s(pInputData[i], k) || IS_NULL_TYPE(GET_PARAM_TYPE(&pInput[i]))) { hasNull = true; @@ -695,7 +695,7 @@ int32_t substrFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOu /** Conversion functions **/ int32_t castFunction(SScalarParam *pInput, int32_t inputNum, SScalarParam *pOutput) { int16_t inputType = GET_PARAM_TYPE(&pInput[0]); - int16_t inputLen = GET_PARAM_BYTES(&pInput[0]); + int32_t inputLen = GET_PARAM_BYTES(&pInput[0]); int16_t outputType = GET_PARAM_TYPE(&pOutput[0]); int64_t outputLen = GET_PARAM_BYTES(&pOutput[0]); diff --git a/source/util/src/tcompare.c b/source/util/src/tcompare.c index f8f78ae6a5..be2ad730f7 100644 --- a/source/util/src/tcompare.c +++ b/source/util/src/tcompare.c @@ -1241,7 +1241,7 @@ int32_t taosArrayCompareString(const void *a, const void *b) { int32_t comparestrPatternMatch(const void *pLeft, const void *pRight) { SPatternCompareInfo pInfo = PATTERN_COMPARE_INFO_INITIALIZER; - ASSERT(varDataLen(pRight) <= TSDB_MAX_FIELD_LEN); + ASSERT(varDataTLen(pRight) <= TSDB_MAX_FIELD_LEN); size_t pLen = varDataLen(pRight); size_t sz = varDataLen(pLeft); diff --git a/tools/shell/src/shellEngine.c b/tools/shell/src/shellEngine.c index a87ba16267..165b00dc68 100644 --- a/tools/shell/src/shellEngine.c +++ b/tools/shell/src/shellEngine.c @@ -695,7 +695,7 @@ int32_t shellCalcColWidth(TAOS_FIELD *field, int32_t precision) { case TSDB_DATA_TYPE_NCHAR: case TSDB_DATA_TYPE_JSON: { - int16_t bytes = field->bytes * TSDB_NCHAR_SIZE; + uint16_t bytes = field->bytes * TSDB_NCHAR_SIZE; if (bytes > shell.args.displayWidth) { return TMAX(shell.args.displayWidth, width); } else { From 303bc7dc230cbed51ff941ad36e3e11930dbc41e Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 3 Apr 2023 17:12:14 +0800 Subject: [PATCH 004/156] enh: column/row max length support up to 64K --- include/util/tdef.h | 17 ++++------------- source/client/src/clientSmlJson.c | 2 +- 2 files changed, 5 insertions(+), 14 deletions(-) diff --git a/include/util/tdef.h b/include/util/tdef.h index 8c5c5502fd..5782da35bc 100644 --- a/include/util/tdef.h +++ b/include/util/tdef.h @@ -231,16 +231,7 @@ typedef enum ELogicConditionType { #define TSDB_QUERY_ID_LEN 26 #define TSDB_TRANS_OPER_LEN 16 -/** - * In some scenarios uint16_t (0~65535) is used to store the row len. - * - Firstly, we use 65531(65535 - 4), as the SDataRow/SKVRow contains 4 bits header. - * - Secondly, if all cols are VarDataT type except primary key, we need 4 bits to store the offset, thus - * the final value is 65531-(4096-1)*4 = 49151. - * - * History value:49151/65531 - * - 65531 compatible with 2.0 - */ -#define TSDB_MAX_BYTES_PER_ROW 65531 +#define TSDB_MAX_BYTES_PER_ROW 65531 // 49151:65531 #define TSDB_MAX_TAGS_LEN 16384 #define TSDB_MAX_TAGS 128 @@ -409,9 +400,9 @@ typedef enum ELogicConditionType { #define TSDB_EXPLAIN_RESULT_ROW_SIZE (16 * 1024) #define TSDB_EXPLAIN_RESULT_COLUMN_NAME "QUERY_PLAN" -#define TSDB_MAX_FIELD_LEN 65519 // compatible with 2.0 -#define TSDB_MAX_BINARY_LEN TSDB_MAX_FIELD_LEN // 16384:65519 -#define TSDB_MAX_NCHAR_LEN TSDB_MAX_FIELD_LEN // 16384:65519 +#define TSDB_MAX_FIELD_LEN 65519 // 16384:65519 +#define TSDB_MAX_BINARY_LEN TSDB_MAX_FIELD_LEN // 16384-8:65519 +#define TSDB_MAX_NCHAR_LEN TSDB_MAX_FIELD_LEN // 16384-8:65519 #define PRIMARYKEY_TIMESTAMP_COL_ID 1 #define COL_REACH_END(colId, maxColId) ((colId) > (maxColId)) diff --git a/source/client/src/clientSmlJson.c b/source/client/src/clientSmlJson.c index b0ae316031..c3a6e15697 100644 --- a/source/client/src/clientSmlJson.c +++ b/source/client/src/clientSmlJson.c @@ -575,7 +575,7 @@ static int32_t smlConvertJSONString(SSmlKv *pVal, char *typeStr, cJSON *value) { uError("OTD:invalid type(%s) for JSON String", typeStr); return TSDB_CODE_TSC_INVALID_JSON_TYPE; } - pVal->length = strlen(value->valuestring); + pVal->length = (uint16_t)strlen(value->valuestring); if (pVal->type == TSDB_DATA_TYPE_BINARY && pVal->length > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; From ec4ea9cf5d6f3456de2a2bb77b3bafa810d766d4 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 14 Apr 2023 18:28:15 +0800 Subject: [PATCH 005/156] enh(cache/rocks): base for rocks put/get --- contrib/CMakeLists.txt | 1 + contrib/test/CMakeLists.txt | 2 + contrib/test/rocksdb/CMakeLists.txt | 4 +- contrib/test/rocksdb/main.c | 8 +- source/dnode/vnode/CMakeLists.txt | 2 + source/dnode/vnode/src/inc/tsdb.h | 10 ++ source/dnode/vnode/src/inc/vnodeInt.h | 1 + source/dnode/vnode/src/tsdb/tsdbCache.c | 111 +++++++++++++++++++++ source/dnode/vnode/src/tsdb/tsdbMemTable.c | 26 +++-- 9 files changed, 150 insertions(+), 15 deletions(-) diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index 8f94313382..6cd6f79d8b 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -438,6 +438,7 @@ endif(${BUILD_ADDR2LINE}) # ================================================================================================ # Build test # ================================================================================================ +message("contrib tests:" ${BUILD_DEPENDENCY_TESTS}) if(${BUILD_DEPENDENCY_TESTS}) add_subdirectory(test EXCLUDE_FROM_ALL) endif(${BUILD_DEPENDENCY_TESTS}) diff --git a/contrib/test/CMakeLists.txt b/contrib/test/CMakeLists.txt index f35cf0d13d..911eca50a7 100644 --- a/contrib/test/CMakeLists.txt +++ b/contrib/test/CMakeLists.txt @@ -1,4 +1,6 @@ # rocksdb +message("contrib test dir:" ${BUILD_WITH_ROCKSDB}) + if(${BUILD_WITH_ROCKSDB}) add_subdirectory(rocksdb) endif(${BUILD_WITH_ROCKSDB}) diff --git a/contrib/test/rocksdb/CMakeLists.txt b/contrib/test/rocksdb/CMakeLists.txt index b500e0a661..5ae1b0395b 100644 --- a/contrib/test/rocksdb/CMakeLists.txt +++ b/contrib/test/rocksdb/CMakeLists.txt @@ -1,6 +1,8 @@ +message("contrib test/rocksdb:" ${BUILD_DEPENDENCY_TESTS}) + add_executable(rocksdbTest "") target_sources(rocksdbTest PRIVATE "${CMAKE_CURRENT_SOURCE_DIR}/main.c" ) -target_link_libraries(rocksdbTest rocksdb) \ No newline at end of file +target_link_libraries(rocksdbTest rocksdb) diff --git a/contrib/test/rocksdb/main.c b/contrib/test/rocksdb/main.c index d1cbd373da..09435790cc 100644 --- a/contrib/test/rocksdb/main.c +++ b/contrib/test/rocksdb/main.c @@ -25,10 +25,12 @@ int main(int argc, char const *argv[]) { // Read rocksdb_readoptions_t *readoptions = rocksdb_readoptions_create(); - rocksdb_readoptions_set_snapshot(readoptions, rocksdb_create_snapshot(db)); +//rocksdb_readoptions_set_snapshot(readoptions, rocksdb_create_snapshot(db)); + char buf[256] = {0}; size_t vallen = 0; char * val = rocksdb_get(db, readoptions, "key", 3, &vallen, &err); - printf("val:%s\n", val); + snprintf(buf, vallen+5, "val:%s", val); + printf("%ld %ld %s\n", strlen(val), vallen, buf); // Update // rocksdb_put(db, writeoptions, "key", 3, "eulav", 5, &err); @@ -43,4 +45,4 @@ int main(int argc, char const *argv[]) { rocksdb_close(db); return 0; -} \ No newline at end of file +} diff --git a/source/dnode/vnode/CMakeLists.txt b/source/dnode/vnode/CMakeLists.txt index 9911752f8e..52df20877a 100644 --- a/source/dnode/vnode/CMakeLists.txt +++ b/source/dnode/vnode/CMakeLists.txt @@ -82,6 +82,7 @@ target_include_directories( PUBLIC "inc" PUBLIC "src/inc" PUBLIC "${TD_SOURCE_DIR}/include/libs/scalar" + PUBLIC "${TD_SOURCE_DIR}/contrib/rocksdb/include" ) target_link_libraries( vnode @@ -98,6 +99,7 @@ target_link_libraries( # PUBLIC bdb # PUBLIC scalar + PUBLIC rocksdb PUBLIC transport PUBLIC stream PUBLIC index diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index 2a85b191a4..b698909af1 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -343,6 +343,14 @@ struct STsdbFS { SArray *aDFileSet; // SArray }; +typedef struct { + rocksdb_t *db; + rocksdb_options_t *options; + rocksdb_writeoptions_t *writeoptions; + rocksdb_readoptions_t *readoptions; + TdThreadMutex rMutex; +} SRocksCache; + struct STsdb { char *path; SVnode *pVnode; @@ -355,6 +363,7 @@ struct STsdb { TdThreadMutex lruMutex; SLRUCache *biCache; TdThreadMutex biMutex; + SRocksCache rCache; }; struct TSDBKEY { @@ -796,6 +805,7 @@ typedef struct { int32_t tsdbOpenCache(STsdb *pTsdb); void tsdbCloseCache(STsdb *pTsdb); +int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t uid, TSDBROW *row); int32_t tsdbCacheInsertLast(SLRUCache *pCache, tb_uid_t uid, TSDBROW *row, STsdb *pTsdb); int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, TSDBROW *row, bool dup); int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, SCacheRowsReader *pr, LRUHandle **h); diff --git a/source/dnode/vnode/src/inc/vnodeInt.h b/source/dnode/vnode/src/inc/vnodeInt.h index 253d5aebce..a3f2c4559f 100644 --- a/source/dnode/vnode/src/inc/vnodeInt.h +++ b/source/dnode/vnode/src/inc/vnodeInt.h @@ -19,6 +19,7 @@ #include "executor.h" #include "filter.h" #include "qworker.h" +#include "rocksdb/c.h" #include "sync.h" #include "tRealloc.h" #include "tchecksum.h" diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 48d3371284..171a63aa33 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -43,6 +43,110 @@ static void tsdbCloseBICache(STsdb *pTsdb) { } } +static void tsdbGetRocksPath(STsdb *pTsdb, char *path) { + SVnode *pVnode = pTsdb->pVnode; + if (pVnode->pTfs) { + if (path) { + snprintf(path, TSDB_FILENAME_LEN, "%s%s%s%scache.rdb", tfsGetPrimaryPath(pTsdb->pVnode->pTfs), TD_DIRSEP, + pTsdb->path, TD_DIRSEP); + } + } else { + if (path) { + snprintf(path, TSDB_FILENAME_LEN, "%s%scache.rdb", pTsdb->path, TD_DIRSEP); + } + } +} + +static int32_t tsdbOpenRocksCache(STsdb *pTsdb) { + int32_t code = 0; + + rocksdb_options_t *options = rocksdb_options_create(); + if (NULL == options) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + + rocksdb_options_set_create_if_missing(options, 1); + rocksdb_options_set_inplace_update_support(options, 1); + rocksdb_options_set_allow_concurrent_memtable_write(options, 0); + + rocksdb_writeoptions_t *writeoptions = rocksdb_writeoptions_create(); + if (NULL == writeoptions) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err2; + } + + rocksdb_readoptions_t *readoptions = rocksdb_readoptions_create(); + if (NULL == readoptions) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err2; + } + + char *err = NULL; + char cachePath[TSDB_FILENAME_LEN] = {0}; + tsdbGetRocksPath(pTsdb, cachePath); + + rocksdb_t *db = rocksdb_open(options, cachePath, &err); + if (NULL == db) { + code = -1; + goto _err3; + } + + taosThreadMutexInit(&pTsdb->rCache.rMutex, NULL); + + pTsdb->rCache.options = options; + pTsdb->rCache.writeoptions = writeoptions; + pTsdb->rCache.readoptions = readoptions; + pTsdb->rCache.db = db; + + return code; + +_err4: + rocksdb_readoptions_destroy(readoptions); +_err3: + rocksdb_writeoptions_destroy(writeoptions); +_err2: + rocksdb_options_destroy(options); +_err: + return code; +} + +static void tsdbCloseRocksCache(STsdb *pTsdb) { + rocksdb_close(pTsdb->rCache.db); + rocksdb_readoptions_destroy(pTsdb->rCache.readoptions); + rocksdb_writeoptions_destroy(pTsdb->rCache.writeoptions); + rocksdb_options_destroy(pTsdb->rCache.options); +} + +int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t uid, TSDBROW *pRow) { + int32_t code = 0; + + STSDBRowIter iter = {0}; + tsdbRowIterOpen(&iter, pRow, pTSchema); + + for (SColVal *pColVal = tsdbRowIterNext(&iter); pColVal; pColVal = tsdbRowIterNext(&iter)) { + SColVal *pRColVal = tsdbCacheGetRColVal(pTsdb); + if (pRColVal) { + // merge pColVal with pRColVal + } + + tsdbCachePutRColVal(pColVal); + } + + tsdbRowClose(&iter); + + char *err = NULL; + char buf[256] = {0}; + size_t vallen = 0; + char *val = rocksdb_get(pTsdb->rCache.db, pTsdb->rCache.readoptions, "key", 3, &vallen, &err); + if (val) { + } else { + } + rocksdb_put(pTsdb->rCache.db, pTsdb->rCache.writeoptions, "key", 3, "value", 5, &err); + + return code; +} + int32_t tsdbOpenCache(STsdb *pTsdb) { int32_t code = 0; SLRUCache *pCache = NULL; @@ -60,6 +164,12 @@ int32_t tsdbOpenCache(STsdb *pTsdb) { goto _err; } + code = tsdbOpenRocksCache(pTsdb); + if (code != TSDB_CODE_SUCCESS) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + taosLRUCacheSetStrictCapacity(pCache, false); taosThreadMutexInit(&pTsdb->lruMutex, NULL); @@ -80,6 +190,7 @@ void tsdbCloseCache(STsdb *pTsdb) { } tsdbCloseBICache(pTsdb); + tsdbCloseRocksCache(pTsdb); } static void getTableCacheKey(tb_uid_t uid, int cacheType, char *key, int *len) { diff --git a/source/dnode/vnode/src/tsdb/tsdbMemTable.c b/source/dnode/vnode/src/tsdb/tsdbMemTable.c index d0ff403bf7..f480fb6081 100644 --- a/source/dnode/vnode/src/tsdb/tsdbMemTable.c +++ b/source/dnode/vnode/src/tsdb/tsdbMemTable.c @@ -284,8 +284,8 @@ bool tsdbTbDataIterNext(STbDataIter *pIter) { int64_t tsdbCountTbDataRows(STbData *pTbData) { SMemSkipListNode *pNode = pTbData->sl.pHead; - int64_t rowsNum = 0; - + int64_t rowsNum = 0; + while (NULL != pNode) { pNode = SL_GET_NODE_FORWARD(pNode, 0); if (pNode == pTbData->sl.pTail) { @@ -298,17 +298,17 @@ int64_t tsdbCountTbDataRows(STbData *pTbData) { return rowsNum; } -void tsdbMemTableCountRows(SMemTable *pMemTable, SHashObj* pTableMap, int64_t *rowsNum) { +void tsdbMemTableCountRows(SMemTable *pMemTable, SHashObj *pTableMap, int64_t *rowsNum) { taosRLockLatch(&pMemTable->latch); for (int32_t i = 0; i < pMemTable->nBucket; ++i) { STbData *pTbData = pMemTable->aBucket[i]; while (pTbData) { - void* p = taosHashGet(pTableMap, &pTbData->uid, sizeof(pTbData->uid)); + void *p = taosHashGet(pTableMap, &pTbData->uid, sizeof(pTbData->uid)); if (p == NULL) { pTbData = pTbData->next; continue; } - + *rowsNum += tsdbCountTbDataRows(pTbData); pTbData = pTbData->next; } @@ -668,15 +668,17 @@ static int32_t tsdbInsertColDataToTable(SMemTable *pMemTable, STbData *pTbData, if (key.ts >= pTbData->maxKey) { pTbData->maxKey = key.ts; - + /* if (TSDB_CACHE_LAST_ROW(pMemTable->pTsdb->pVnode->config)) { tsdbCacheInsertLastrow(pMemTable->pTsdb->lruCache, pMemTable->pTsdb, pTbData->uid, &lRow, true); - } + }*/ } - + /* if (TSDB_CACHE_LAST(pMemTable->pTsdb->pVnode->config)) { tsdbCacheInsertLast(pMemTable->pTsdb->lruCache, pTbData->uid, &lRow, pMemTable->pTsdb); } + */ + tsdbCacheUpdate(pMemTable->pTsdb, pTbData->uid, &lRow); // SMemTable pMemTable->minKey = TMIN(pMemTable->minKey, pTbData->minKey); @@ -736,15 +738,17 @@ static int32_t tsdbInsertRowDataToTable(SMemTable *pMemTable, STbData *pTbData, if (key.ts >= pTbData->maxKey) { pTbData->maxKey = key.ts; - + /* if (TSDB_CACHE_LAST_ROW(pMemTable->pTsdb->pVnode->config)) { tsdbCacheInsertLastrow(pMemTable->pTsdb->lruCache, pMemTable->pTsdb, pTbData->uid, &lRow, true); - } + }*/ } - + /* if (TSDB_CACHE_LAST(pMemTable->pTsdb->pVnode->config)) { tsdbCacheInsertLast(pMemTable->pTsdb->lruCache, pTbData->uid, &lRow, pMemTable->pTsdb); } + */ + tsdbCacheUpdate(pMemTable->pTsdb, pTbData->uid, &lRow); // SMemTable pMemTable->minKey = TMIN(pMemTable->minKey, pTbData->minKey); From e159c4ead94eb5d8b4b70806925cee136c3be5c4 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 18 Apr 2023 14:25:35 +0800 Subject: [PATCH 006/156] first round rocks batch write implementation --- source/dnode/vnode/src/inc/tsdb.h | 3 +- source/dnode/vnode/src/tsdb/tsdbCache.c | 161 +++++++++++++++++++-- source/dnode/vnode/src/tsdb/tsdbMemTable.c | 4 +- 3 files changed, 152 insertions(+), 16 deletions(-) diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index b698909af1..16afa9f596 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -348,6 +348,7 @@ typedef struct { rocksdb_options_t *options; rocksdb_writeoptions_t *writeoptions; rocksdb_readoptions_t *readoptions; + rocksdb_writebatch_t *writebatch; TdThreadMutex rMutex; } SRocksCache; @@ -805,7 +806,7 @@ typedef struct { int32_t tsdbOpenCache(STsdb *pTsdb); void tsdbCloseCache(STsdb *pTsdb); -int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t uid, TSDBROW *row); +int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *row); int32_t tsdbCacheInsertLast(SLRUCache *pCache, tb_uid_t uid, TSDBROW *row, STsdb *pTsdb); int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, TSDBROW *row, bool dup); int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, SCacheRowsReader *pr, LRUHandle **h); diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 171a63aa33..393e8ffdf5 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -43,6 +43,8 @@ static void tsdbCloseBICache(STsdb *pTsdb) { } } +#define ROCKS_KEY_LEN 64 + static void tsdbGetRocksPath(STsdb *pTsdb, char *path) { SVnode *pVnode = pTsdb->pVnode; if (pVnode->pTfs) { @@ -92,8 +94,11 @@ static int32_t tsdbOpenRocksCache(STsdb *pTsdb) { goto _err3; } + rocksdb_writebatch_t *writebatch = rocksdb_writebatch_create(); + taosThreadMutexInit(&pTsdb->rCache.rMutex, NULL); + pTsdb->rCache.writebatch = writebatch; pTsdb->rCache.options = options; pTsdb->rCache.writeoptions = writeoptions; pTsdb->rCache.readoptions = readoptions; @@ -113,37 +118,167 @@ _err: static void tsdbCloseRocksCache(STsdb *pTsdb) { rocksdb_close(pTsdb->rCache.db); + rocksdb_writebatch_destroy(pTsdb->rCache.writebatch); rocksdb_readoptions_destroy(pTsdb->rCache.readoptions); rocksdb_writeoptions_destroy(pTsdb->rCache.writeoptions); rocksdb_options_destroy(pTsdb->rCache.options); } -int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t uid, TSDBROW *pRow) { +SLastCol *tsdbCacheDeserialize(char const *value) { + SLastCol *pLastCol = (SLastCol *)value; + SColVal *pColVal = &pLastCol->colVal; + if (IS_VAR_DATA_TYPE(pColVal->type)) { + pColVal->value.pData = (char *)value + sizeof(*pColVal); + } + + return pLastCol; +} + +void tsdbCacheSerialize(SLastCol *pLastCol, char **value, size_t *size) { + SColVal *pColVal = &pLastCol->colVal; + size_t length = sizeof(*pLastCol); + if (IS_VAR_DATA_TYPE(pColVal->type)) { + length += pColVal->value.nData; + } + *value = taosMemoryMalloc(length); + + *(SLastCol *)(*value) = *pLastCol; + if (IS_VAR_DATA_TYPE(pColVal->type)) { + uint8_t *pVal = pColVal->value.pData; + pColVal->value.pData = *value + sizeof(*pLastCol); + if (pColVal->value.nData) { + memcpy(pColVal->value.pData, pVal, pColVal->value.nData); + } + } + *size = length; +} + +int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow) { int32_t code = 0; + // 1, fetch schema + STSchema *pTSchema = NULL; + int32_t sver = TSDBROW_SVERSION(pRow); + code = metaGetTbTSchemaEx(pTsdb->pVnode->pMeta, suid, uid, sver, &pTSchema); + if (code != TSDB_CODE_SUCCESS) { + terrno = code; + return -1; + } + + // 2, iterate col values into array + SArray *aColVal = taosArrayInit(32, sizeof(SColVal)); + STSDBRowIter iter = {0}; tsdbRowIterOpen(&iter, pRow, pTSchema); for (SColVal *pColVal = tsdbRowIterNext(&iter); pColVal; pColVal = tsdbRowIterNext(&iter)) { - SColVal *pRColVal = tsdbCacheGetRColVal(pTsdb); - if (pRColVal) { - // merge pColVal with pRColVal + if (IS_VAR_DATA_TYPE(pColVal->type)) { + uint8_t *pVal = pColVal->value.pData; + + pColVal->value.pData = NULL; + code = tRealloc(&pColVal->value.pData, pColVal->value.nData); + if (code) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _exit; + } + + if (pColVal->value.nData) { + memcpy(pColVal->value.pData, pVal, pColVal->value.nData); + } } - tsdbCachePutRColVal(pColVal); + taosArrayPush(aColVal, pColVal); } tsdbRowClose(&iter); - char *err = NULL; - char buf[256] = {0}; - size_t vallen = 0; - char *val = rocksdb_get(pTsdb->rCache.db, pTsdb->rCache.readoptions, "key", 3, &vallen, &err); - if (val) { - } else { - } - rocksdb_put(pTsdb->rCache.db, pTsdb->rCache.writeoptions, "key", 3, "value", 5, &err); + // 3, build keys & multi get from rocks + int max_key_len = 0; + int num_keys = TARRAY_SIZE(aColVal); + char **keys_list = taosMemoryCalloc(num_keys * 2, sizeof(char *)); + size_t *keys_list_sizes = taosMemoryCalloc(num_keys * 2, sizeof(size_t)); + for (int i = 0; i < num_keys; ++i) { + SColVal *pColVal = (SColVal *)taosArrayGet(aColVal, i); + char *keys = taosMemoryCalloc(2, ROCKS_KEY_LEN); + int last_key_len = snprintf(keys, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, pColVal->cid); + if (last_key_len >= ROCKS_KEY_LEN) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, tstrerror(code)); + } + int lr_key_len = + snprintf(keys + ROCKS_KEY_LEN, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last_row", uid, pColVal->cid); + if (lr_key_len >= ROCKS_KEY_LEN) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, tstrerror(code)); + } + keys_list[i] = keys; + keys_list[num_keys + i] = keys + ROCKS_KEY_LEN; + keys_list_sizes[i] = last_key_len; + keys_list_sizes[num_keys + i] = lr_key_len; + } + char **values_list = taosMemoryCalloc(num_keys * 2, sizeof(char *)); + size_t *values_list_sizes = taosMemoryCalloc(num_keys * 2, sizeof(size_t)); + char **errs = taosMemoryCalloc(num_keys * 2, sizeof(char *)); + rocksdb_multi_get(pTsdb->rCache.db, pTsdb->rCache.readoptions, num_keys * 2, (const char *const *)keys_list, + keys_list_sizes, values_list, values_list_sizes, errs); + for (int i = 0; i < num_keys; ++i) { + taosMemoryFree(keys_list[i]); + } + taosMemoryFree(keys_list); + taosMemoryFree(keys_list_sizes); + taosMemoryFree(errs); + + TSKEY keyTs = TSDBROW_TS(pRow); + rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch; + for (int i = 0; i < num_keys; ++i) { + SColVal *pColVal = (SColVal *)taosArrayGet(aColVal, i); + if (COL_VAL_IS_VALUE(pColVal)) { + SLastCol *pLastCol = NULL; + if (NULL != values_list[i]) { + pLastCol = tsdbCacheDeserialize(values_list[i]); + } + + if (NULL == pLastCol || pLastCol->ts <= keyTs) { + char *value = NULL; + size_t vlen = 0; + tsdbCacheSerialize(&(SLastCol){.ts = keyTs, .colVal = *pColVal}, &value, &vlen); + char key[ROCKS_KEY_LEN]; + size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, pColVal->cid); + rocksdb_writebatch_put(wb, key, klen, value, vlen); + taosMemoryFree(value); + } + } + + if (!COL_VAL_IS_NONE(pColVal)) { + SLastCol *pLastCol = NULL; + if (NULL != values_list[i + num_keys]) { + pLastCol = tsdbCacheDeserialize(values_list[i]); + } + + if (NULL == pLastCol || pLastCol->ts <= keyTs) { + char *value = NULL; + size_t vlen = 0; + tsdbCacheSerialize(&(SLastCol){.ts = keyTs, .colVal = *pColVal}, &value, &vlen); + char key[ROCKS_KEY_LEN]; + size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last_row", uid, pColVal->cid); + rocksdb_writebatch_put(wb, key, klen, value, vlen); + taosMemoryFree(value); + } + } + } + taosMemoryFree(values_list); + taosMemoryFree(values_list_sizes); + + char *err = NULL; + rocksdb_write(pTsdb->rCache.db, pTsdb->rCache.writeoptions, wb, &err); + if (NULL != err) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); + rocksdb_free(err); + } + rocksdb_writebatch_clear(wb); + +_exit: + taosArrayDestroy(aColVal); + taosMemoryFree(pTSchema); return code; } diff --git a/source/dnode/vnode/src/tsdb/tsdbMemTable.c b/source/dnode/vnode/src/tsdb/tsdbMemTable.c index f480fb6081..6b4ab4d5cf 100644 --- a/source/dnode/vnode/src/tsdb/tsdbMemTable.c +++ b/source/dnode/vnode/src/tsdb/tsdbMemTable.c @@ -678,7 +678,7 @@ static int32_t tsdbInsertColDataToTable(SMemTable *pMemTable, STbData *pTbData, tsdbCacheInsertLast(pMemTable->pTsdb->lruCache, pTbData->uid, &lRow, pMemTable->pTsdb); } */ - tsdbCacheUpdate(pMemTable->pTsdb, pTbData->uid, &lRow); + tsdbCacheUpdate(pMemTable->pTsdb, pTbData->suid, pTbData->uid, &lRow); // SMemTable pMemTable->minKey = TMIN(pMemTable->minKey, pTbData->minKey); @@ -748,7 +748,7 @@ static int32_t tsdbInsertRowDataToTable(SMemTable *pMemTable, STbData *pTbData, tsdbCacheInsertLast(pMemTable->pTsdb->lruCache, pTbData->uid, &lRow, pMemTable->pTsdb); } */ - tsdbCacheUpdate(pMemTable->pTsdb, pTbData->uid, &lRow); + tsdbCacheUpdate(pMemTable->pTsdb, pTbData->suid, pTbData->uid, &lRow); // SMemTable pMemTable->minKey = TMIN(pMemTable->minKey, pTbData->minKey); From dd42db00774b7102a232ccb7a1a6032d203d6b95 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 18 Apr 2023 15:09:13 +0800 Subject: [PATCH 007/156] tsdb/mem table: remove unused codes --- source/dnode/vnode/src/tsdb/tsdbMemTable.c | 18 ------------------ 1 file changed, 18 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbMemTable.c b/source/dnode/vnode/src/tsdb/tsdbMemTable.c index 6b4ab4d5cf..122546ebfc 100644 --- a/source/dnode/vnode/src/tsdb/tsdbMemTable.c +++ b/source/dnode/vnode/src/tsdb/tsdbMemTable.c @@ -668,16 +668,7 @@ static int32_t tsdbInsertColDataToTable(SMemTable *pMemTable, STbData *pTbData, if (key.ts >= pTbData->maxKey) { pTbData->maxKey = key.ts; - /* - if (TSDB_CACHE_LAST_ROW(pMemTable->pTsdb->pVnode->config)) { - tsdbCacheInsertLastrow(pMemTable->pTsdb->lruCache, pMemTable->pTsdb, pTbData->uid, &lRow, true); - }*/ } - /* - if (TSDB_CACHE_LAST(pMemTable->pTsdb->pVnode->config)) { - tsdbCacheInsertLast(pMemTable->pTsdb->lruCache, pTbData->uid, &lRow, pMemTable->pTsdb); - } - */ tsdbCacheUpdate(pMemTable->pTsdb, pTbData->suid, pTbData->uid, &lRow); // SMemTable @@ -738,16 +729,7 @@ static int32_t tsdbInsertRowDataToTable(SMemTable *pMemTable, STbData *pTbData, if (key.ts >= pTbData->maxKey) { pTbData->maxKey = key.ts; - /* - if (TSDB_CACHE_LAST_ROW(pMemTable->pTsdb->pVnode->config)) { - tsdbCacheInsertLastrow(pMemTable->pTsdb->lruCache, pMemTable->pTsdb, pTbData->uid, &lRow, true); - }*/ } - /* - if (TSDB_CACHE_LAST(pMemTable->pTsdb->pVnode->config)) { - tsdbCacheInsertLast(pMemTable->pTsdb->lruCache, pTbData->uid, &lRow, pMemTable->pTsdb); - } - */ tsdbCacheUpdate(pMemTable->pTsdb, pTbData->suid, pTbData->uid, &lRow); // SMemTable From dee2ca34769b8a868e559344efc3ef0fdc66d938 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 18 Apr 2023 16:09:42 +0800 Subject: [PATCH 008/156] tsdb/cache: inplace_update are commented out --- source/dnode/vnode/src/tsdb/tsdbCache.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 393e8ffdf5..3dd84bd07c 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -69,14 +69,15 @@ static int32_t tsdbOpenRocksCache(STsdb *pTsdb) { } rocksdb_options_set_create_if_missing(options, 1); - rocksdb_options_set_inplace_update_support(options, 1); - rocksdb_options_set_allow_concurrent_memtable_write(options, 0); + // rocksdb_options_set_inplace_update_support(options, 1); + // rocksdb_options_set_allow_concurrent_memtable_write(options, 0); rocksdb_writeoptions_t *writeoptions = rocksdb_writeoptions_create(); if (NULL == writeoptions) { code = TSDB_CODE_OUT_OF_MEMORY; goto _err2; } + // rocksdb_writeoptions_disable_WAL(writeoptions, 1); rocksdb_readoptions_t *readoptions = rocksdb_readoptions_create(); if (NULL == readoptions) { From 76edc16ce1601b9086a838bc6682544a0506b8a7 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 19 Apr 2023 10:04:48 +0800 Subject: [PATCH 009/156] operator/cache-scan: pass col list down to tsdb cache --- source/dnode/vnode/inc/vnode.h | 6 +-- source/dnode/vnode/src/inc/tsdb.h | 1 + source/dnode/vnode/src/tsdb/tsdbCache.c | 8 ++++ source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 3 +- source/libs/executor/src/cachescanoperator.c | 43 ++++++++++++-------- 5 files changed, 41 insertions(+), 20 deletions(-) diff --git a/source/dnode/vnode/inc/vnode.h b/source/dnode/vnode/inc/vnode.h index a9e5fe628b..93bbf9d380 100644 --- a/source/dnode/vnode/inc/vnode.h +++ b/source/dnode/vnode/inc/vnode.h @@ -180,7 +180,7 @@ int32_t tsdbSetTableList(STsdbReader *pReader, const void *pTableList, int32_t n int32_t tsdbReaderOpen(SVnode *pVnode, SQueryTableDataCond *pCond, void *pTableList, int32_t numOfTables, SSDataBlock *pResBlock, STsdbReader **ppReader, const char *idstr, bool countOnly); -void tsdbReaderSetId(STsdbReader* pReader, const char* idstr); +void tsdbReaderSetId(STsdbReader *pReader, const char *idstr); void tsdbReaderClose(STsdbReader *pReader); int32_t tsdbNextDataBlock(STsdbReader *pReader, bool *hasNext); int32_t tsdbRetrieveDatablockSMA(STsdbReader *pReader, SSDataBlock *pDataBlock, bool *allHave); @@ -194,7 +194,7 @@ void *tsdbGetIvtIdx(SMeta *pMeta); uint64_t getReaderMaxVersion(STsdbReader *pReader); int32_t tsdbCacherowsReaderOpen(void *pVnode, int32_t type, void *pTableIdList, int32_t numOfTables, int32_t numOfCols, - uint64_t suid, void **pReader, const char *idstr); + SArray *pCidList, uint64_t suid, void **pReader, const char *idstr); int32_t tsdbRetrieveCacheRows(void *pReader, SSDataBlock *pResBlock, const int32_t *slotIds, SArray *pTableUids); void *tsdbCacherowsReaderClose(void *pReader); int32_t tsdbGetTableSchema(SVnode *pVnode, int64_t uid, STSchema **pSchema, int64_t *suid); @@ -260,7 +260,7 @@ int32_t tqReaderAddTbUidList(STqReader *pReader, const SArray *tbUidList); int32_t tqReaderRemoveTbUidList(STqReader *pReader, const SArray *tbUidList); int32_t tqSeekVer(STqReader *pReader, int64_t ver, const char *id); -void tqNextBlock(STqReader *pReader, SFetchRet *ret); +void tqNextBlock(STqReader *pReader, SFetchRet *ret); int32_t tqReaderSetSubmitReq2(STqReader *pReader, void *msgStr, int32_t msgLen, int64_t ver); // int32_t tqReaderSetDataMsg(STqReader *pReader, const SSubmitReq *pMsg, int64_t ver); diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index 16afa9f596..2ec3bf59e1 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -787,6 +787,7 @@ typedef struct SCacheRowsReader { uint64_t suid; char **transferBuf; // todo remove it soon int32_t numOfCols; + SArray *pCidList; int32_t type; int32_t tableIndex; // currently returned result tables STableKeyInfo *pTableList; // table id list diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 3dd84bd07c..a60f20522b 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -283,6 +283,14 @@ _exit: return code; } +int32_t tsdbCacheGetLast(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRowsReader *pr) { + int32_t code = 0; + + SArray *pCidList = pr->pCidList; + + return code; +} + int32_t tsdbOpenCache(STsdb *pTsdb) { int32_t code = 0; SLRUCache *pCache = NULL; diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 95981c2f08..19967302d0 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -143,7 +143,7 @@ static int32_t setTableSchema(SCacheRowsReader* p, uint64_t suid, const char* id } int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, void* pTableIdList, int32_t numOfTables, int32_t numOfCols, - uint64_t suid, void** pReader, const char* idstr) { + SArray* pCidList, uint64_t suid, void** pReader, const char* idstr) { *pReader = NULL; SCacheRowsReader* p = taosMemoryCalloc(1, sizeof(SCacheRowsReader)); if (p == NULL) { @@ -155,6 +155,7 @@ int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, void* pTableIdList, p->pTsdb = p->pVnode->pTsdb; p->verRange = (SVersionRange){.minVer = 0, .maxVer = UINT64_MAX}; p->numOfCols = numOfCols; + p->pCidList = pCidList; p->suid = suid; if (numOfTables == 0) { diff --git a/source/libs/executor/src/cachescanoperator.c b/source/libs/executor/src/cachescanoperator.c index f6fc332b37..543ac412db 100644 --- a/source/libs/executor/src/cachescanoperator.c +++ b/source/libs/executor/src/cachescanoperator.c @@ -36,6 +36,7 @@ typedef struct SCacheRowsScanInfo { int32_t currentGroupIndex; SSDataBlock* pBufferredRes; SArray* pUidList; + SArray* pCidList; int32_t indexOfBufferedRes; STableListInfo* pTableList; } SCacheRowsScanInfo; @@ -45,13 +46,13 @@ static void destroyCacheScanOperator(void* param); static int32_t extractCacheScanSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTaskInfo, int32_t** pSlotIds); static int32_t removeRedundantTsCol(SLastRowScanPhysiNode* pScanNode, SColMatchInfo* pColMatchInfo); -#define SCAN_ROW_TYPE(_t) ((_t)? CACHESCAN_RETRIEVE_LAST : CACHESCAN_RETRIEVE_LAST_ROW) +#define SCAN_ROW_TYPE(_t) ((_t) ? CACHESCAN_RETRIEVE_LAST : CACHESCAN_RETRIEVE_LAST_ROW) SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SReadHandle* readHandle, STableListInfo* pTableListInfo, SExecTaskInfo* pTaskInfo) { - int32_t code = TSDB_CODE_SUCCESS; + int32_t code = TSDB_CODE_SUCCESS; SCacheRowsScanInfo* pInfo = taosMemoryCalloc(1, sizeof(SCacheRowsScanInfo)); - SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo)); + SOperatorInfo* pOperator = taosMemoryCalloc(1, sizeof(SOperatorInfo)); if (pInfo == NULL || pOperator == NULL) { code = TSDB_CODE_OUT_OF_MEMORY; tableListDestroy(pTableListInfo); @@ -71,6 +72,13 @@ SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SRe goto _error; } + SArray* pCidList = taosArrayInit(numOfCols, sizeof(int16_t)); + for (int i = 0; i < TARRAY_SIZE(pInfo->matchInfo.pList); ++i) { + SColMatchItem* pColInfo = taosArrayGet(pInfo->matchInfo.pList, i); + taosArrayPush(pCidList, &pColInfo->colId); + } + pInfo->pCidList = pCidList; + removeRedundantTsCol(pScanNode, &pInfo->matchInfo); code = extractCacheScanSlotId(pInfo->matchInfo.pList, pTaskInfo, &pInfo->pSlotIds); @@ -91,7 +99,8 @@ SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SRe uint64_t suid = tableListGetSuid(pTableListInfo); code = tsdbCacherowsReaderOpen(pInfo->readHandle.vnode, pInfo->retrieveType, pList, totalTables, - taosArrayGetSize(pInfo->matchInfo.pList), suid, &pInfo->pLastrowReader, pTaskInfo->id.str); + taosArrayGetSize(pInfo->matchInfo.pList), pCidList, suid, &pInfo->pLastrowReader, + pTaskInfo->id.str); if (code != TSDB_CODE_SUCCESS) { goto _error; } @@ -114,7 +123,8 @@ SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SRe p->pCtx = createSqlFunctionCtx(p->pExprInfo, p->numOfExprs, &p->rowEntryInfoOffset); } - setOperatorInfo(pOperator, "CachedRowScanOperator", QUERY_NODE_PHYSICAL_PLAN_LAST_ROW_SCAN, false, OP_NOT_OPENED, pInfo, pTaskInfo); + setOperatorInfo(pOperator, "CachedRowScanOperator", QUERY_NODE_PHYSICAL_PLAN_LAST_ROW_SCAN, false, OP_NOT_OPENED, + pInfo, pTaskInfo); pOperator->exprSupp.numOfExprs = taosArrayGetSize(pInfo->pRes->pDataBlock); pOperator->fpSet = @@ -123,7 +133,7 @@ SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SRe pOperator->cost.openCost = 0; return pOperator; - _error: +_error: pTaskInfo->code = code; destroyCacheScanOperator(pInfo); taosMemoryFree(pOperator); @@ -136,8 +146,8 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { } SCacheRowsScanInfo* pInfo = pOperator->info; - SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; - STableListInfo* pTableList = pInfo->pTableList; + SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; + STableListInfo* pTableList = pInfo->pTableList; uint64_t suid = tableListGetSuid(pTableList); int32_t size = tableListGetSize(pTableList); @@ -194,8 +204,8 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { pRes->info.rows = 1; SExprSupp* pSup = &pInfo->pseudoExprSup; - int32_t code = addTagPseudoColumnData(&pInfo->readHandle, pSup->pExprInfo, pSup->numOfExprs, pRes, - pRes->info.rows, GET_TASKID(pTaskInfo), NULL); + int32_t code = addTagPseudoColumnData(&pInfo->readHandle, pSup->pExprInfo, pSup->numOfExprs, pRes, + pRes->info.rows, GET_TASKID(pTaskInfo), NULL); if (code != TSDB_CODE_SUCCESS) { pTaskInfo->code = code; return NULL; @@ -217,7 +227,7 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { } STableKeyInfo* pList = NULL; - int32_t num = 0; + int32_t num = 0; int32_t code = tableListGetGroupList(pTableList, pInfo->currentGroupIndex, &pList, &num); if (code != TSDB_CODE_SUCCESS) { @@ -225,8 +235,8 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { } code = tsdbCacherowsReaderOpen(pInfo->readHandle.vnode, pInfo->retrieveType, pList, num, - taosArrayGetSize(pInfo->matchInfo.pList), suid, &pInfo->pLastrowReader, - pTaskInfo->id.str); + taosArrayGetSize(pInfo->matchInfo.pList), pInfo->pCidList, suid, + &pInfo->pLastrowReader, pTaskInfo->id.str); if (code != TSDB_CODE_SUCCESS) { pInfo->currentGroupIndex += 1; taosArrayClear(pInfo->pUidList); @@ -254,8 +264,8 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { ASSERT((pInfo->retrieveType & CACHESCAN_RETRIEVE_LAST_ROW) == CACHESCAN_RETRIEVE_LAST_ROW); pInfo->pRes->info.id.uid = *(tb_uid_t*)taosArrayGet(pInfo->pUidList, 0); - code = addTagPseudoColumnData(&pInfo->readHandle, pSup->pExprInfo, pSup->numOfExprs, pInfo->pRes, pInfo->pRes->info.rows, - GET_TASKID(pTaskInfo), NULL); + code = addTagPseudoColumnData(&pInfo->readHandle, pSup->pExprInfo, pSup->numOfExprs, pInfo->pRes, + pInfo->pRes->info.rows, GET_TASKID(pTaskInfo), NULL); if (code != TSDB_CODE_SUCCESS) { pTaskInfo->code = code; return NULL; @@ -280,6 +290,7 @@ void destroyCacheScanOperator(void* param) { blockDataDestroy(pInfo->pRes); blockDataDestroy(pInfo->pBufferredRes); taosMemoryFree(pInfo->pSlotIds); + taosArrayDestroy(pInfo->pCidList); taosArrayDestroy(pInfo->pUidList); taosArrayDestroy(pInfo->matchInfo.pList); tableListDestroy(pInfo->pTableList); @@ -325,7 +336,7 @@ int32_t removeRedundantTsCol(SLastRowScanPhysiNode* pScanNode, SColMatchInfo* pC return TSDB_CODE_SUCCESS; } - size_t size = taosArrayGetSize(pColMatchInfo->pList); + size_t size = taosArrayGetSize(pColMatchInfo->pList); SArray* pMatchInfo = taosArrayInit(size, sizeof(SColMatchItem)); for (int32_t i = 0; i < size; ++i) { From 642aa47887078ace106505b13217997334216a61 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Thu, 20 Apr 2023 16:26:27 +0800 Subject: [PATCH 010/156] last: first round implementation for column loading --- source/dnode/vnode/inc/vnode.h | 3 +- source/dnode/vnode/src/inc/tsdb.h | 1 + source/dnode/vnode/src/tsdb/tsdbCache.c | 41 ++++++++++- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 74 ++++++++++++-------- source/libs/executor/src/cachescanoperator.c | 30 +++++--- 5 files changed, 106 insertions(+), 43 deletions(-) diff --git a/source/dnode/vnode/inc/vnode.h b/source/dnode/vnode/inc/vnode.h index 93bbf9d380..83a126c9c5 100644 --- a/source/dnode/vnode/inc/vnode.h +++ b/source/dnode/vnode/inc/vnode.h @@ -195,7 +195,8 @@ uint64_t getReaderMaxVersion(STsdbReader *pReader); int32_t tsdbCacherowsReaderOpen(void *pVnode, int32_t type, void *pTableIdList, int32_t numOfTables, int32_t numOfCols, SArray *pCidList, uint64_t suid, void **pReader, const char *idstr); -int32_t tsdbRetrieveCacheRows(void *pReader, SSDataBlock *pResBlock, const int32_t *slotIds, SArray *pTableUids); +int32_t tsdbRetrieveCacheRows(void *pReader, SSDataBlock *pResBlock, const int32_t *slotIds, const int32_t *dstSlotIds, + SArray *pTableUids); void *tsdbCacherowsReaderClose(void *pReader); int32_t tsdbGetTableSchema(SVnode *pVnode, int64_t uid, STSchema **pSchema, int64_t *suid); diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index 2ec3bf59e1..9856bc8dbe 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -808,6 +808,7 @@ typedef struct { int32_t tsdbOpenCache(STsdb *pTsdb); void tsdbCloseCache(STsdb *pTsdb); int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *row); +int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRowsReader *pr, char const *lstring); int32_t tsdbCacheInsertLast(SLRUCache *pCache, tb_uid_t uid, TSDBROW *row, STsdb *pTsdb); int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, TSDBROW *row, bool dup); int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, SCacheRowsReader *pr, LRUHandle **h); diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index a60f20522b..f61b1bfcdb 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -194,7 +194,6 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow tsdbRowClose(&iter); // 3, build keys & multi get from rocks - int max_key_len = 0; int num_keys = TARRAY_SIZE(aColVal); char **keys_list = taosMemoryCalloc(num_keys * 2, sizeof(char *)); size_t *keys_list_sizes = taosMemoryCalloc(num_keys * 2, sizeof(size_t)); @@ -283,11 +282,49 @@ _exit: return code; } -int32_t tsdbCacheGetLast(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRowsReader *pr) { +int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRowsReader *pr, char const *lstring) { int32_t code = 0; SArray *pCidList = pr->pCidList; + int num_keys = TARRAY_SIZE(pCidList); + char **keys_list = taosMemoryCalloc(num_keys, sizeof(char *)); + size_t *keys_list_sizes = taosMemoryCalloc(num_keys, sizeof(size_t)); + for (int i = 0; i < num_keys; ++i) { + int16_t cid = *(int16_t *)taosArrayGet(pCidList, i); + char *keys = taosMemoryCalloc(2, ROCKS_KEY_LEN); + int last_key_len = snprintf(keys, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":%s", uid, cid, lstring); + if (last_key_len >= ROCKS_KEY_LEN) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, tstrerror(code)); + } + + keys_list[i] = keys; + keys_list_sizes[i] = last_key_len; + } + char **values_list = taosMemoryCalloc(num_keys, sizeof(char *)); + size_t *values_list_sizes = taosMemoryCalloc(num_keys, sizeof(size_t)); + char **errs = taosMemoryCalloc(num_keys, sizeof(char *)); + rocksdb_multi_get(pTsdb->rCache.db, pTsdb->rCache.readoptions, num_keys, (const char *const *)keys_list, + keys_list_sizes, values_list, values_list_sizes, errs); + for (int i = 0; i < num_keys; ++i) { + taosMemoryFree(keys_list[i]); + } + taosMemoryFree(keys_list); + taosMemoryFree(keys_list_sizes); + taosMemoryFree(errs); + + SArray *pLastArray = taosArrayInit(num_keys, sizeof(SLastCol)); + for (int i = 0; i < num_keys; ++i) { + SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i]); + + taosArrayPush(pLastArray, pLastCol); + + taosMemoryFree(values_list[i]); + } + taosMemoryFree(values_list); + taosMemoryFree(values_list_sizes); + + *ppLastArray = pLastArray; return code; } diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 19967302d0..30b9cacd4b 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -21,16 +21,16 @@ #define HASTYPE(_type, _t) (((_type) & (_t)) == (_t)) static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* pReader, const int32_t* slotIds, - void** pRes, const char* idStr) { + const int32_t* dstSlotIds, void** pRes, const char* idStr) { int32_t numOfRows = pBlock->info.rows; if (HASTYPE(pReader->type, CACHESCAN_RETRIEVE_LAST)) { bool allNullRow = true; for (int32_t i = 0; i < pReader->numOfCols; ++i) { - SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, i); - SFirstLastRes* p = (SFirstLastRes*)varDataVal(pRes[i]); - + SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, dstSlotIds[i]); + SFirstLastRes* p = (SFirstLastRes*)varDataVal(pRes[dstSlotIds[i]]); + /* if (slotIds[i] == -1) { // the primary timestamp SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, 0); p->ts = pColVal->ts; @@ -38,37 +38,40 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p *(int64_t*)p->buf = pColVal->ts; allNullRow = false; } else { - int32_t slotId = slotIds[i]; - // add check for null value, caused by the modification of table schema (new column added). - if (slotId >= taosArrayGetSize(pRow)) { - p->ts = 0; - p->isNull = true; - colDataSetNULL(pColInfoData, numOfRows); - continue; - } + */ + int32_t slotId = slotIds[i]; + // add check for null value, caused by the modification of table schema (new column added). + /* + if (slotId >= taosArrayGetSize(pRow)) { + p->ts = 0; + p->isNull = true; + colDataSetNULL(pColInfoData, numOfRows); + continue; + } + */ + // SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, slotId); + SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, i); - SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, slotId); + p->ts = pColVal->ts; + p->isNull = !COL_VAL_IS_VALUE(&pColVal->colVal); + allNullRow = p->isNull & allNullRow; - p->ts = pColVal->ts; - p->isNull = !COL_VAL_IS_VALUE(&pColVal->colVal); - allNullRow = p->isNull & allNullRow; - - if (!p->isNull) { - if (IS_VAR_DATA_TYPE(pColVal->colVal.type)) { - varDataSetLen(p->buf, pColVal->colVal.value.nData); - memcpy(varDataVal(p->buf), pColVal->colVal.value.pData, pColVal->colVal.value.nData); - p->bytes = pColVal->colVal.value.nData + VARSTR_HEADER_SIZE; // binary needs to plus the header size - } else { - memcpy(p->buf, &pColVal->colVal.value, pReader->pSchema->columns[slotId].bytes); - p->bytes = pReader->pSchema->columns[slotId].bytes; - } + if (!p->isNull) { + if (IS_VAR_DATA_TYPE(pColVal->colVal.type)) { + varDataSetLen(p->buf, pColVal->colVal.value.nData); + memcpy(varDataVal(p->buf), pColVal->colVal.value.pData, pColVal->colVal.value.nData); + p->bytes = pColVal->colVal.value.nData + VARSTR_HEADER_SIZE; // binary needs to plus the header size + } else { + memcpy(p->buf, &pColVal->colVal.value, pReader->pSchema->columns[slotId].bytes); + p->bytes = pReader->pSchema->columns[slotId].bytes; } } + //} // pColInfoData->info.bytes includes the VARSTR_HEADER_SIZE, need to substruct it p->hasResult = true; - varDataSetLen(pRes[i], pColInfoData->info.bytes - VARSTR_HEADER_SIZE); - colDataSetVal(pColInfoData, numOfRows, (const char*)pRes[i], false); + varDataSetLen(pRes[dstSlotIds[i]], pColInfoData->info.bytes - VARSTR_HEADER_SIZE); + colDataSetVal(pColInfoData, numOfRows, (const char*)pRes[dstSlotIds[i]], false); } pBlock->info.rows += allNullRow ? 0 : 1; @@ -278,7 +281,8 @@ static int32_t tsdbCacheQueryReseek(void* pQHandle) { } } -int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32_t* slotIds, SArray* pTableUidList) { +int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32_t* slotIds, const int32_t* dstSlotIds, + SArray* pTableUidList) { if (pReader == NULL || pResBlock == NULL) { return TSDB_CODE_INVALID_PARA; } @@ -422,11 +426,12 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 } if (hasRes) { - saveOneRow(pLastCols, pResBlock, pr, slotIds, pRes, pr->idstr); + saveOneRow(pLastCols, pResBlock, pr, slotIds, dstSlotIds, pRes, pr->idstr); } } else if (HASTYPE(pr->type, CACHESCAN_RETRIEVE_TYPE_ALL)) { for (int32_t i = pr->tableIndex; i < pr->numOfTables; ++i) { STableKeyInfo* pKeyInfo = &pr->pTableList[i]; + /* code = doExtractCacheRow(pr, lruCache, pKeyInfo->uid, &pRow, &h); if (code != TSDB_CODE_SUCCESS) { goto _end; @@ -442,9 +447,16 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 saveOneRow(pRow, pResBlock, pr, slotIds, pRes, pr->idstr); // TODO reset the pRes + tsdbCacheRelease(lruCache, h); + */ + + char const* lstring = pr->type & CACHESCAN_RETRIEVE_LAST ? "last" : "last_row"; + tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, &pRow, pr, lstring); + + saveOneRow(pRow, pResBlock, pr, slotIds, dstSlotIds, pRes, pr->idstr); + taosArrayDestroy(pRow); taosArrayPush(pTableUidList, &pKeyInfo->uid); - tsdbCacheRelease(lruCache, h); pr->tableIndex += 1; if (pResBlock->info.rows >= pResBlock->info.capacity) { diff --git a/source/libs/executor/src/cachescanoperator.c b/source/libs/executor/src/cachescanoperator.c index 543ac412db..204a18ed4e 100644 --- a/source/libs/executor/src/cachescanoperator.c +++ b/source/libs/executor/src/cachescanoperator.c @@ -31,6 +31,7 @@ typedef struct SCacheRowsScanInfo { void* pLastrowReader; SColMatchInfo matchInfo; int32_t* pSlotIds; + int32_t* pDstSlotIds; SExprSupp pseudoExprSup; int32_t retrieveType; int32_t currentGroupIndex; @@ -43,7 +44,8 @@ typedef struct SCacheRowsScanInfo { static SSDataBlock* doScanCache(SOperatorInfo* pOperator); static void destroyCacheScanOperator(void* param); -static int32_t extractCacheScanSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTaskInfo, int32_t** pSlotIds); +static int32_t extractCacheScanSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTaskInfo, int32_t** pSlotIds, + int32_t** pDstSlotIds); static int32_t removeRedundantTsCol(SLastRowScanPhysiNode* pScanNode, SColMatchInfo* pColMatchInfo); #define SCAN_ROW_TYPE(_t) ((_t) ? CACHESCAN_RETRIEVE_LAST : CACHESCAN_RETRIEVE_LAST_ROW) @@ -81,7 +83,7 @@ SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SRe removeRedundantTsCol(pScanNode, &pInfo->matchInfo); - code = extractCacheScanSlotId(pInfo->matchInfo.pList, pTaskInfo, &pInfo->pSlotIds); + code = extractCacheScanSlotId(pInfo->matchInfo.pList, pTaskInfo, &pInfo->pSlotIds, &pInfo->pDstSlotIds); if (code != TSDB_CODE_SUCCESS) { goto _error; } @@ -168,8 +170,8 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { blockDataCleanup(pInfo->pBufferredRes); taosArrayClear(pInfo->pUidList); - int32_t code = - tsdbRetrieveCacheRows(pInfo->pLastrowReader, pInfo->pBufferredRes, pInfo->pSlotIds, pInfo->pUidList); + int32_t code = tsdbRetrieveCacheRows(pInfo->pLastrowReader, pInfo->pBufferredRes, pInfo->pSlotIds, + pInfo->pDstSlotIds, pInfo->pUidList); if (code != TSDB_CODE_SUCCESS) { T_LONG_JMP(pTaskInfo->env, code); } @@ -245,7 +247,8 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { taosArrayClear(pInfo->pUidList); - code = tsdbRetrieveCacheRows(pInfo->pLastrowReader, pInfo->pRes, pInfo->pSlotIds, pInfo->pUidList); + code = tsdbRetrieveCacheRows(pInfo->pLastrowReader, pInfo->pRes, pInfo->pSlotIds, pInfo->pDstSlotIds, + pInfo->pUidList); if (code != TSDB_CODE_SUCCESS) { T_LONG_JMP(pTaskInfo->env, code); } @@ -290,6 +293,7 @@ void destroyCacheScanOperator(void* param) { blockDataDestroy(pInfo->pRes); blockDataDestroy(pInfo->pBufferredRes); taosMemoryFree(pInfo->pSlotIds); + taosMemoryFree(pInfo->pDstSlotIds); taosArrayDestroy(pInfo->pCidList); taosArrayDestroy(pInfo->pUidList); taosArrayDestroy(pInfo->matchInfo.pList); @@ -303,7 +307,8 @@ void destroyCacheScanOperator(void* param) { taosMemoryFreeClear(param); } -int32_t extractCacheScanSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTaskInfo, int32_t** pSlotIds) { +int32_t extractCacheScanSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTaskInfo, int32_t** pSlotIds, + int32_t** pDstSlotIds) { size_t numOfCols = taosArrayGetSize(pColMatchInfo); *pSlotIds = taosMemoryMalloc(numOfCols * sizeof(int32_t)); @@ -311,18 +316,25 @@ int32_t extractCacheScanSlotId(const SArray* pColMatchInfo, SExecTaskInfo* pTask return TSDB_CODE_OUT_OF_MEMORY; } + *pDstSlotIds = taosMemoryMalloc(numOfCols * sizeof(int32_t)); + if (*pDstSlotIds == NULL) { + taosMemoryFree(*pSlotIds); + return TSDB_CODE_OUT_OF_MEMORY; + } + SSchemaWrapper* pWrapper = pTaskInfo->schemaInfo.sw; for (int32_t i = 0; i < numOfCols; ++i) { SColMatchItem* pColMatch = taosArrayGet(pColMatchInfo, i); for (int32_t j = 0; j < pWrapper->nCols; ++j) { - if (pColMatch->colId == pWrapper->pSchema[j].colId && pColMatch->colId == PRIMARYKEY_TIMESTAMP_COL_ID) { + /* if (pColMatch->colId == pWrapper->pSchema[j].colId && pColMatch->colId == PRIMARYKEY_TIMESTAMP_COL_ID) { (*pSlotIds)[pColMatch->dstSlotId] = -1; break; - } + }*/ if (pColMatch->colId == pWrapper->pSchema[j].colId) { - (*pSlotIds)[pColMatch->dstSlotId] = j; + (*pSlotIds)[i] = j; + (*pDstSlotIds)[i] = pColMatch->dstSlotId; break; } } From 7d896bbc851a4ce476fbf85203be51e53065a419 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Thu, 20 Apr 2023 17:36:46 +0800 Subject: [PATCH 011/156] last/empty_table: continue if table's empty --- source/dnode/vnode/src/tsdb/tsdbCache.c | 9 +++++---- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 3 +++ 2 files changed, 8 insertions(+), 4 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index f86efb7c5b..f9cab74b5f 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -130,6 +130,10 @@ static void tsdbCloseRocksCache(STsdb *pTsdb) { } SLastCol *tsdbCacheDeserialize(char const *value) { + if (!value) { + return NULL; + } + SLastCol *pLastCol = (SLastCol *)value; SColVal *pColVal = &pLastCol->colVal; if (IS_VAR_DATA_TYPE(pColVal->type)) { @@ -236,10 +240,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow for (int i = 0; i < num_keys; ++i) { SColVal *pColVal = (SColVal *)taosArrayGet(aColVal, i); if (COL_VAL_IS_VALUE(pColVal)) { - SLastCol *pLastCol = NULL; - if (NULL != values_list[i]) { - pLastCol = tsdbCacheDeserialize(values_list[i]); - } + SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i]); if (NULL == pLastCol || pLastCol->ts <= keyTs) { char *value = NULL; diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 964ff2e297..285e668e12 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -449,6 +449,9 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 char const* lstring = pr->type & CACHESCAN_RETRIEVE_LAST ? "last" : "last_row"; tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, &pRow, pr, lstring); + if (TARRAY_SIZE(pRow) <= 0) { + continue; + } saveOneRow(pRow, pResBlock, pr, slotIds, dstSlotIds, pRes, pr->idstr); taosArrayDestroy(pRow); From 373a5428960495f20b2e24f2763414e243ac0a22 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Thu, 20 Apr 2023 18:42:02 +0800 Subject: [PATCH 012/156] contrib/rocks: build rocds by default --- contrib/CMakeLists.txt | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index 6cd6f79d8b..35709237c0 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -78,10 +78,10 @@ if(${BUILD_WITH_LEVELDB}) endif(${BUILD_WITH_LEVELDB}) # rocksdb -if(${BUILD_WITH_ROCKSDB}) +#if(${BUILD_WITH_ROCKSDB}) cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) add_definitions(-DUSE_ROCKSDB) -endif(${BUILD_WITH_ROCKSDB}) +#endif(${BUILD_WITH_ROCKSDB}) # canonical-raft if(${BUILD_WITH_CRAFT}) From c9959b5bf6b23094978f8bd997ce7235dc982f52 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 21 Apr 2023 14:18:04 +0800 Subject: [PATCH 013/156] dst slot for last_row --- contrib/CMakeLists.txt | 8 +-- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 58 +++++---------------- 2 files changed, 18 insertions(+), 48 deletions(-) diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index 35709237c0..ed6120bd30 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -79,8 +79,8 @@ endif(${BUILD_WITH_LEVELDB}) # rocksdb #if(${BUILD_WITH_ROCKSDB}) - cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) - add_definitions(-DUSE_ROCKSDB) +cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) +add_definitions(-DUSE_ROCKSDB) #endif(${BUILD_WITH_ROCKSDB}) # canonical-raft @@ -222,7 +222,7 @@ endif(${BUILD_WITH_LEVELDB}) # rocksdb # To support rocksdb build on ubuntu: sudo apt-get install libgflags-dev -if(${BUILD_WITH_ROCKSDB}) +#if(${BUILD_WITH_ROCKSDB}) SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized") option(WITH_TESTS "" OFF) option(WITH_BENCHMARK_TOOLS "" OFF) @@ -234,7 +234,7 @@ if(${BUILD_WITH_ROCKSDB}) rocksdb PUBLIC $ ) -endif(${BUILD_WITH_ROCKSDB}) +#endif(${BUILD_WITH_ROCKSDB}) # lucene # To support build on ubuntu: sudo apt-get install libboost-all-dev diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 285e668e12..d74d9414e5 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -30,27 +30,8 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p for (int32_t i = 0; i < pReader->numOfCols; ++i) { SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, dstSlotIds[i]); SFirstLastRes* p = (SFirstLastRes*)varDataVal(pRes[dstSlotIds[i]]); - /* - if (slotIds[i] == -1) { // the primary timestamp - SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, 0); - p->ts = pColVal->ts; - p->bytes = TSDB_KEYSIZE; - *(int64_t*)p->buf = pColVal->ts; - allNullRow = false; - } else { - */ - int32_t slotId = slotIds[i]; - // add check for null value, caused by the modification of table schema (new column added). - /* - if (slotId >= taosArrayGetSize(pRow)) { - p->ts = 0; - p->isNull = true; - colDataSetNULL(pColInfoData, numOfRows); - continue; - } - */ - // SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, slotId); - SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, i); + int32_t slotId = slotIds[i]; + SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, i); p->ts = pColVal->ts; p->isNull = !COL_VAL_IS_VALUE(&pColVal->colVal); @@ -66,7 +47,6 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p p->bytes = pReader->pSchema->columns[slotId].bytes; } } - //} // pColInfoData->info.bytes includes the VARSTR_HEADER_SIZE, need to substruct it p->hasResult = true; @@ -77,32 +57,22 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p pBlock->info.rows += allNullRow ? 0 : 1; } else if (HASTYPE(pReader->type, CACHESCAN_RETRIEVE_LAST_ROW)) { for (int32_t i = 0; i < pReader->numOfCols; ++i) { - SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, i); + SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, dstSlotIds[i]); - if (slotIds[i] == -1) { - SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, 0); - colDataSetVal(pColInfoData, numOfRows, (const char*)&pColVal->ts, false); - } else { - int32_t slotId = slotIds[i]; - // add check for null value, caused by the modification of table schema (new column added). - if (slotId >= taosArrayGetSize(pRow)) { + int32_t slotId = slotIds[i]; + SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, i); + SColVal* pVal = &pColVal->colVal; + + if (IS_VAR_DATA_TYPE(pColVal->colVal.type)) { + if (!COL_VAL_IS_VALUE(&pColVal->colVal)) { colDataSetNULL(pColInfoData, numOfRows); - continue; - } - SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, slotId); - SColVal* pVal = &pColVal->colVal; - - if (IS_VAR_DATA_TYPE(pColVal->colVal.type)) { - if (!COL_VAL_IS_VALUE(&pColVal->colVal)) { - colDataSetNULL(pColInfoData, numOfRows); - } else { - varDataSetLen(pReader->transferBuf[slotId], pVal->value.nData); - memcpy(varDataVal(pReader->transferBuf[slotId]), pVal->value.pData, pVal->value.nData); - colDataSetVal(pColInfoData, numOfRows, pReader->transferBuf[slotId], false); - } } else { - colDataSetVal(pColInfoData, numOfRows, (const char*)&pVal->value.val, !COL_VAL_IS_VALUE(pVal)); + varDataSetLen(pReader->transferBuf[dstSlotIds[i]], pVal->value.nData); + memcpy(varDataVal(pReader->transferBuf[dstSlotIds[i]]), pVal->value.pData, pVal->value.nData); + colDataSetVal(pColInfoData, numOfRows, pReader->transferBuf[dstSlotIds[i]], false); } + } else { + colDataSetVal(pColInfoData, numOfRows, (const char*)&pVal->value.val, !COL_VAL_IS_VALUE(pVal)); } } From 0394c6a50c8ab71f21c51f778705102f12d0881d Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 21 Apr 2023 15:24:42 +0800 Subject: [PATCH 014/156] free rocks errs array --- source/dnode/vnode/src/tsdb/tsdbCache.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index f9cab74b5f..265254225b 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -181,6 +181,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow tsdbRowIterOpen(&iter, pRow, pTSchema); for (SColVal *pColVal = tsdbRowIterNext(&iter); pColVal; pColVal = tsdbRowIterNext(&iter)) { + /* if (IS_VAR_DATA_TYPE(pColVal->type)) { uint8_t *pVal = pColVal->value.pData; @@ -195,7 +196,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow memcpy(pColVal->value.pData, pVal, pColVal->value.nData); } } - + */ taosArrayPush(aColVal, pColVal); } @@ -231,6 +232,9 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow for (int i = 0; i < num_keys; ++i) { taosMemoryFree(keys_list[i]); } + for (int i = 0; i < num_keys * 2; ++i) { + rocksdb_free(errs[i]); + } taosMemoryFree(keys_list); taosMemoryFree(keys_list_sizes); taosMemoryFree(errs); @@ -313,6 +317,7 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRow keys_list_sizes, values_list, values_list_sizes, errs); for (int i = 0; i < num_keys; ++i) { taosMemoryFree(keys_list[i]); + rocksdb_free(errs[i]); } taosMemoryFree(keys_list); taosMemoryFree(keys_list_sizes); From 01d95b5e95f2f00458a0bf02dd39d0e14adbf3c6 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 21 Apr 2023 16:07:22 +0800 Subject: [PATCH 015/156] cache/read: use new interface with signle query --- source/dnode/vnode/src/tsdb/tsdbCache.c | 2 + source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 102 +++++++------------- 2 files changed, 35 insertions(+), 69 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 265254225b..8d93cd89ad 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -273,6 +273,8 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow taosMemoryFree(value); } } + + rocksdb_free(values_list[i]); } taosMemoryFree(values_list); taosMemoryFree(values_list_sizes); diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index d74d9414e5..52828c9ad0 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -199,7 +199,7 @@ void* tsdbCacherowsReaderClose(void* pReader) { taosMemoryFree(pReader); return NULL; } - +/* static int32_t doExtractCacheRow(SCacheRowsReader* pr, SLRUCache* lruCache, uint64_t uid, SArray** pRow, LRUHandle** h) { int32_t code = TSDB_CODE_SUCCESS; @@ -222,7 +222,7 @@ static int32_t doExtractCacheRow(SCacheRowsReader* pr, SLRUCache* lruCache, uint return code; } - +*/ static void freeItem(void* pItem) { SLastCol* pCol = (SLastCol*)pItem; if (IS_VAR_DATA_TYPE(pCol->colVal.type)) { @@ -279,7 +279,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 p->ts = INT64_MIN; } - pLastCols = taosArrayInit(pr->pSchema->numOfCols, sizeof(SLastCol)); + pLastCols = taosArrayInit(pr->numOfCols, sizeof(SLastCol)); if (pLastCols == NULL) { code = TSDB_CODE_OUT_OF_MEMORY; goto _end; @@ -303,6 +303,8 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 pr->pDataFReader = NULL; pr->pDataFReaderLast = NULL; + char const* lstring = pr->type & CACHESCAN_RETRIEVE_LAST ? "last" : "last_row"; + // retrieve the only one last row of all tables in the uid list. if (HASTYPE(pr->type, CACHESCAN_RETRIEVE_TYPE_SINGLE)) { int64_t st = taosGetTimestampUs(); @@ -310,16 +312,9 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 for (int32_t i = 0; i < pr->numOfTables; ++i) { STableKeyInfo* pKeyInfo = &pr->pTableList[i]; - code = doExtractCacheRow(pr, lruCache, pKeyInfo->uid, &pRow, &h); - if (code != TSDB_CODE_SUCCESS) { - goto _end; - } - - if (h == NULL) { - continue; - } - if (taosArrayGetSize(pRow) <= 0) { - tsdbCacheRelease(lruCache, h); + tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, &pRow, pr, lstring); + if (TARRAY_SIZE(pRow) <= 0) { + taosArrayDestroy(pRow); continue; } @@ -327,47 +322,34 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 bool hasNotNullRow = true; int64_t singleTableLastTs = INT64_MAX; for (int32_t k = 0; k < pr->numOfCols; ++k) { - int32_t slotId = slotIds[k]; + SLastCol* p = taosArrayGet(pLastCols, k); + SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, k); - if (slotId == -1) { // the primary timestamp - SLastCol* p = taosArrayGet(pLastCols, 0); - SLastCol* pCol = (SLastCol*)taosArrayGet(pRow, 0); - if (pCol->ts > p->ts) { - hasRes = true; - p->ts = pCol->ts; - p->colVal = pCol->colVal; - singleTableLastTs = pCol->ts; + if (pColVal->ts > p->ts) { + if (!COL_VAL_IS_VALUE(&pColVal->colVal) && HASTYPE(pr->type, CACHESCAN_RETRIEVE_LAST)) { + if (!COL_VAL_IS_VALUE(&p->colVal)) { + hasNotNullRow = false; + } + continue; } - } else { - SLastCol* p = taosArrayGet(pLastCols, slotId); - SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, slotId); - if (pColVal->ts > p->ts) { - if (!COL_VAL_IS_VALUE(&pColVal->colVal) && HASTYPE(pr->type, CACHESCAN_RETRIEVE_LAST)) { - if (!COL_VAL_IS_VALUE(&p->colVal)) { - hasNotNullRow = false; - } - continue; + hasRes = true; + p->ts = pColVal->ts; + if (pColVal->ts < singleTableLastTs && HASTYPE(pr->type, CACHESCAN_RETRIEVE_LAST)) { + singleTableLastTs = pColVal->ts; + } + + if (!IS_VAR_DATA_TYPE(pColVal->colVal.type)) { + p->colVal = pColVal->colVal; + } else { + if (COL_VAL_IS_VALUE(&pColVal->colVal)) { + memcpy(p->colVal.value.pData, pColVal->colVal.value.pData, pColVal->colVal.value.nData); } - hasRes = true; - p->ts = pColVal->ts; - if (pColVal->ts < singleTableLastTs && HASTYPE(pr->type, CACHESCAN_RETRIEVE_LAST)) { - singleTableLastTs = pColVal->ts; - } - - if (!IS_VAR_DATA_TYPE(pColVal->colVal.type)) { - p->colVal = pColVal->colVal; - } else { - if (COL_VAL_IS_VALUE(&pColVal->colVal)) { - memcpy(p->colVal.value.pData, pColVal->colVal.value.pData, pColVal->colVal.value.nData); - } - - p->colVal.value.nData = pColVal->colVal.value.nData; - p->colVal.type = pColVal->colVal.type; - p->colVal.flag = pColVal->colVal.flag; - p->colVal.cid = pColVal->colVal.cid; - } + p->colVal.value.nData = pColVal->colVal.value.nData; + p->colVal.type = pColVal->colVal.type; + p->colVal.flag = pColVal->colVal.flag; + p->colVal.cid = pColVal->colVal.cid; } } } @@ -389,7 +371,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 taosArraySet(pTableUidList, 0, &pKeyInfo->uid); } - tsdbCacheRelease(lruCache, h); + taosArrayDestroy(pRow); } if (hasRes) { @@ -398,28 +380,10 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 } else if (HASTYPE(pr->type, CACHESCAN_RETRIEVE_TYPE_ALL)) { for (int32_t i = pr->tableIndex; i < pr->numOfTables; ++i) { STableKeyInfo* pKeyInfo = &pr->pTableList[i]; - /* - code = doExtractCacheRow(pr, lruCache, pKeyInfo->uid, &pRow, &h); - if (code != TSDB_CODE_SUCCESS) { - goto _end; - } - if (h == NULL) { - continue; - } - if (taosArrayGetSize(pRow) <= 0) { - tsdbCacheRelease(lruCache, h); - continue; - } - - saveOneRow(pRow, pResBlock, pr, slotIds, pRes, pr->idstr); - // TODO reset the pRes - tsdbCacheRelease(lruCache, h); - */ - - char const* lstring = pr->type & CACHESCAN_RETRIEVE_LAST ? "last" : "last_row"; tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, &pRow, pr, lstring); if (TARRAY_SIZE(pRow) <= 0) { + taosArrayDestroy(pRow); continue; } From 8e87b21a5935d02b402f2ebd37722cc701a37afc Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 21 Apr 2023 16:23:40 +0800 Subject: [PATCH 016/156] rocks/multi_get value: free rocks allocated memory --- source/dnode/vnode/src/tsdb/tsdbCache.c | 1 + 1 file changed, 1 insertion(+) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 8d93cd89ad..caf093fc99 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -275,6 +275,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow } rocksdb_free(values_list[i]); + rocksdb_free(values_list[i + ROCKS_KEY_LEN]); } taosMemoryFree(values_list); taosMemoryFree(values_list_sizes); From 09c62e6ca281a0a21dcd822b66dac59328db7c26 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 21 Apr 2023 16:30:00 +0800 Subject: [PATCH 017/156] fix values_list index when freeing --- source/dnode/vnode/src/tsdb/tsdbCache.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index caf093fc99..26155e4554 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -275,7 +275,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow } rocksdb_free(values_list[i]); - rocksdb_free(values_list[i + ROCKS_KEY_LEN]); + rocksdb_free(values_list[i + num_keys]); } taosMemoryFree(values_list); taosMemoryFree(values_list_sizes); From 12fc21ae84de8e6fa5396bcba4538c17e184f5f9 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 21 Apr 2023 18:34:58 +0800 Subject: [PATCH 018/156] cache/get: use pRow from reader --- source/dnode/vnode/src/inc/tsdb.h | 2 +- source/dnode/vnode/src/tsdb/tsdbCache.c | 4 +--- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 20 +++++++++----------- 3 files changed, 11 insertions(+), 15 deletions(-) diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index 9856bc8dbe..35c9af7c84 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -808,7 +808,7 @@ typedef struct { int32_t tsdbOpenCache(STsdb *pTsdb); void tsdbCloseCache(STsdb *pTsdb); int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *row); -int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRowsReader *pr, char const *lstring); +int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, char const *lstring); int32_t tsdbCacheInsertLast(SLRUCache *pCache, tb_uid_t uid, TSDBROW *row, STsdb *pTsdb); int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, TSDBROW *row, bool dup); int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, SCacheRowsReader *pr, LRUHandle **h); diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 26155e4554..e961008834 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -294,7 +294,7 @@ _exit: return code; } -int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRowsReader *pr, char const *lstring) { +int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, char const *lstring) { int32_t code = 0; SArray *pCidList = pr->pCidList; @@ -326,7 +326,6 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRow taosMemoryFree(keys_list_sizes); taosMemoryFree(errs); - SArray *pLastArray = taosArrayInit(num_keys, sizeof(SLastCol)); for (int i = 0; i < num_keys; ++i) { SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i]); @@ -337,7 +336,6 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray **ppLastArray, SCacheRow taosMemoryFree(values_list); taosMemoryFree(values_list_sizes); - *ppLastArray = pLastArray; return code; } diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 52828c9ad0..14e5ce0829 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -258,13 +258,10 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 } SCacheRowsReader* pr = pReader; - - int32_t code = TSDB_CODE_SUCCESS; - SLRUCache* lruCache = pr->pVnode->pTsdb->lruCache; - LRUHandle* h = NULL; - SArray* pRow = NULL; - bool hasRes = false; - SArray* pLastCols = NULL; + int32_t code = TSDB_CODE_SUCCESS; + SArray* pRow = taosArrayInit(TARRAY_SIZE(pr->pCidList), sizeof(SLastCol)); + bool hasRes = false; + SArray* pLastCols = NULL; void** pRes = taosMemoryCalloc(pr->numOfCols, POINTER_BYTES); if (pRes == NULL) { @@ -312,7 +309,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 for (int32_t i = 0; i < pr->numOfTables; ++i) { STableKeyInfo* pKeyInfo = &pr->pTableList[i]; - tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, &pRow, pr, lstring); + tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, lstring); if (TARRAY_SIZE(pRow) <= 0) { taosArrayDestroy(pRow); continue; @@ -371,7 +368,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 taosArraySet(pTableUidList, 0, &pKeyInfo->uid); } - taosArrayDestroy(pRow); + taosArrayClear(pRow); } if (hasRes) { @@ -381,14 +378,14 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 for (int32_t i = pr->tableIndex; i < pr->numOfTables; ++i) { STableKeyInfo* pKeyInfo = &pr->pTableList[i]; - tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, &pRow, pr, lstring); + tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, lstring); if (TARRAY_SIZE(pRow) <= 0) { taosArrayDestroy(pRow); continue; } saveOneRow(pRow, pResBlock, pr, slotIds, dstSlotIds, pRes, pr->idstr); - taosArrayDestroy(pRow); + taosArrayClear(pRow); taosArrayPush(pTableUidList, &pKeyInfo->uid); @@ -416,6 +413,7 @@ _end: } taosMemoryFree(pRes); + taosArrayDestroy(pRow); taosArrayDestroyEx(pLastCols, freeItem); return code; } From a75d43a2592d5c9f8c61bb21bad715f412966569 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 21 Apr 2023 18:38:20 +0800 Subject: [PATCH 019/156] cache/read: remove unused extract function --- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 23 --------------------- 1 file changed, 23 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 14e5ce0829..830b1869eb 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -199,30 +199,7 @@ void* tsdbCacherowsReaderClose(void* pReader) { taosMemoryFree(pReader); return NULL; } -/* -static int32_t doExtractCacheRow(SCacheRowsReader* pr, SLRUCache* lruCache, uint64_t uid, SArray** pRow, - LRUHandle** h) { - int32_t code = TSDB_CODE_SUCCESS; - *pRow = NULL; - if (HASTYPE(pr->type, CACHESCAN_RETRIEVE_LAST_ROW)) { - code = tsdbCacheGetLastrowH(lruCache, uid, pr, h); - } else { - code = tsdbCacheGetLastH(lruCache, uid, pr, h); - } - - if (code != TSDB_CODE_SUCCESS) { - return code; - } - - // no data in the table of Uid - if (*h != NULL) { - *pRow = (SArray*)taosLRUCacheValue(lruCache, *h); - } - - return code; -} -*/ static void freeItem(void* pItem) { SLastCol* pCol = (SLastCol*)pItem; if (IS_VAR_DATA_TYPE(pCol->colVal.type)) { From 2e32ebe7f8fdccf47e7c3d98742e04803cb54fe9 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:12:55 +0800 Subject: [PATCH 020/156] docs: case of sml put line --- .../examples/schemaless_insert_line.rs | 79 +++++++++++++++++++ 1 file changed, 79 insertions(+) create mode 100644 docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs new file mode 100644 index 0000000000..8fe9bf40ff --- /dev/null +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs @@ -0,0 +1,79 @@ +use taos_query::common::SchemalessPrecision; +use taos_query::common::SchemalessProtocol; +use taos_query::common::SmlDataBuilder; + +use crate::AsyncQueryable; +use crate::AsyncTBuilder; +use crate::TaosBuilder; + +async fn put_line() -> anyhow::Result<()> { + // std::env::set_var("RUST_LOG", "taos=trace"); + std::env::set_var("RUST_LOG", "taos=debug"); + pretty_env_logger::init(); + + let dsn = + std::env::var("TDENGINE_ClOUD_DSN").unwrap_or("http://localhost:6041".to_string()); + log::debug!("dsn: {:?}", &dsn); + + let client = TaosBuilder::from_dsn(dsn)?.build().await?; + + let db = "demo_schemaless_ws"; + + client.exec(format!("drop database if exists {db}")).await?; + + client + .exec(format!("create database if not exists {db}")) + .await?; + + let data = [ + "measurement,host=host1 field1=2i,field2=2.0 1577837300000", + "measurement,host=host1 field1=2i,field2=2.0 1577837400000", + "measurement,host=host1 field1=2i,field2=2.0 1577837500000", + "measurement,host=host1 field1=2i,field2=2.0 1577837600000", + ] + .map(String::from) + .to_vec(); + + // demo with all fields + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Line) + .precision(SchemalessPrecision::Millisecond) + .data(data.clone()) + .ttl(1000) + .req_id(100u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo omit ttl by default + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Line) + .precision(SchemalessPrecision::Millisecond) + .data(data.clone()) + .req_id(101u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo omit ttl and req_id by default + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Line) + .precision(SchemalessPrecision::Millisecond) + .data(data.clone()) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo omit precision by default + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Line) + .data(data) + .req_id(103u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + client.exec(format!("drop database if exists {db}")).await?; + + Ok(()) +} From 14a76427f308ae90a9be8b3604cb42c8f2fb56d5 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:16:00 +0800 Subject: [PATCH 021/156] docs: case put sml telnet --- .../examples/schemaless_insert_telnet.rs | 82 +++++++++++++++++++ 1 file changed, 82 insertions(+) create mode 100644 docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs new file mode 100644 index 0000000000..c9a71c33c7 --- /dev/null +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs @@ -0,0 +1,82 @@ +use taos_query::common::SchemalessPrecision; +use taos_query::common::SchemalessProtocol; +use taos_query::common::SmlDataBuilder; + +use crate::AsyncQueryable; +use crate::AsyncTBuilder; +use crate::TaosBuilder; + +async fn put_telnet() -> anyhow::Result<()> { + // std::env::set_var("RUST_LOG", "taos=trace"); + std::env::set_var("RUST_LOG", "taos=debug"); + pretty_env_logger::init(); + let dsn = + std::env::var("TDENGINE_ClOUD_DSN").unwrap_or("http://localhost:6041".to_string()); + log::debug!("dsn: {:?}", &dsn); + + let client = TaosBuilder::from_dsn(dsn)?.build().await?; + + let db = "test_schemaless_ws"; + + client.exec(format!("drop database if exists {db}")).await?; + + client + .exec(format!("create database if not exists {db}")) + .await?; + + + let data = [ + "meters.current 1648432611249 10.3 location=California.SanFrancisco group=2", + "meters.current 1648432611250 12.6 location=California.SanFrancisco group=2", + "meters.current 1648432611249 10.8 location=California.LosAngeles group=3", + "meters.current 1648432611250 11.3 location=California.LosAngeles group=3", + "meters.voltage 1648432611249 219 location=California.SanFrancisco group=2", + "meters.voltage 1648432611250 218 location=California.SanFrancisco group=2", + "meters.voltage 1648432611249 221 location=California.LosAngeles group=3", + "meters.voltage 1648432611250 217 location=California.LosAngeles group=3", + ] + .map(String::from) + .to_vec(); + + // demo with all fields + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Telnet) + .precision(SchemalessPrecision::Millisecond) + .data(data.clone()) + .ttl(1000) + .req_id(200u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo with default precision + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Telnet) + .data(data.clone()) + .ttl(1000) + .req_id(201u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo with default ttl + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Telnet) + .data(data.clone()) + .req_id(202u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo with default req_id + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Telnet) + .data(data.clone()) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + client.exec(format!("drop database if exists {db}")).await?; + + Ok(()) +} \ No newline at end of file From c405051bb3d5bb21223db78cf93114430ce3f5a6 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:16:59 +0800 Subject: [PATCH 022/156] docs: update put line --- .../rust/nativeexample/examples/schemaless_insert_line.rs | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs index 8fe9bf40ff..f4ffc78c54 100644 --- a/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs @@ -45,7 +45,7 @@ async fn put_line() -> anyhow::Result<()> { .build()?; assert_eq!(client.put(&sml_data).await?, ()); - // demo omit ttl by default + // demo with default ttl let sml_data = SmlDataBuilder::default() .db(db.to_string()) .protocol(SchemalessProtocol::Line) @@ -55,7 +55,7 @@ async fn put_line() -> anyhow::Result<()> { .build()?; assert_eq!(client.put(&sml_data).await?, ()); - // demo omit ttl and req_id by default + // demo with default ttl and req_id let sml_data = SmlDataBuilder::default() .db(db.to_string()) .protocol(SchemalessProtocol::Line) @@ -64,7 +64,7 @@ async fn put_line() -> anyhow::Result<()> { .build()?; assert_eq!(client.put(&sml_data).await?, ()); - // demo omit precision by default + // demo with default precision let sml_data = SmlDataBuilder::default() .db(db.to_string()) .protocol(SchemalessProtocol::Line) From bec5790c6c8b0dda8c0e23925b95a7d46ae2ac76 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:17:20 +0800 Subject: [PATCH 023/156] docs: update put telnet format --- .../rust/nativeexample/examples/schemaless_insert_telnet.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs index c9a71c33c7..2f7c20d7b5 100644 --- a/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs @@ -79,4 +79,4 @@ async fn put_telnet() -> anyhow::Result<()> { client.exec(format!("drop database if exists {db}")).await?; Ok(()) -} \ No newline at end of file +} From 20bb69cc16b227defe0a5f28012eeea5235d884a Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:19:23 +0800 Subject: [PATCH 024/156] docs: case put sml json --- .../examples/schemaless_insert_json.rs | 75 +++++++++++++++++++ 1 file changed, 75 insertions(+) create mode 100644 docs/examples/rust/nativeexample/examples/schemaless_insert_json.rs diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_json.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_json.rs new file mode 100644 index 0000000000..1a0e219462 --- /dev/null +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_json.rs @@ -0,0 +1,75 @@ +use taos_query::common::SchemalessPrecision; +use taos_query::common::SchemalessProtocol; +use taos_query::common::SmlDataBuilder; + +use crate::AsyncQueryable; +use crate::AsyncTBuilder; +use crate::TaosBuilder; + +async fn put_json() -> anyhow::Result<()> { + // std::env::set_var("RUST_LOG", "taos=trace"); + std::env::set_var("RUST_LOG", "taos=debug"); + pretty_env_logger::init(); + let dsn = + std::env::var("TDENGINE_ClOUD_DSN").unwrap_or("http://localhost:6041".to_string()); + log::debug!("dsn: {:?}", &dsn); + + let client = TaosBuilder::from_dsn(dsn)?.build().await?; + + let db = "demo_schemaless_ws"; + + client.exec(format!("drop database if exists {db}")).await?; + + client + .exec(format!("create database if not exists {db}")) + .await?; + + // SchemalessProtocol::Json + let data = [ + r#"[{"metric": "meters.current", "timestamp": 1681345954000, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}}, {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}}, {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}}, {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]"# + ] + .map(String::from) + .to_vec(); + + // demo with all fields + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Json) + .precision(SchemalessPrecision::Millisecond) + .data(data.clone()) + .ttl(1000) + .req_id(300u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo with default precision + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Json) + .data(data.clone()) + .ttl(1000) + .req_id(301u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo with default ttl + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Json) + .data(data.clone()) + .req_id(302u64) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + // demo with default req_id + let sml_data = SmlDataBuilder::default() + .db(db.to_string()) + .protocol(SchemalessProtocol::Json) + .data(data.clone()) + .build()?; + assert_eq!(client.put(&sml_data).await?, ()); + + client.exec(format!("drop database if exists {db}")).await?; + + Ok(()) +} From ae37c254ad6bcb42e5b77723f4c855e99dffdfc6 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:19:51 +0800 Subject: [PATCH 025/156] docs: update put telnet db name --- .../rust/nativeexample/examples/schemaless_insert_telnet.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs index 2f7c20d7b5..944216338e 100644 --- a/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs @@ -16,7 +16,7 @@ async fn put_telnet() -> anyhow::Result<()> { let client = TaosBuilder::from_dsn(dsn)?.build().await?; - let db = "test_schemaless_ws"; + let db = "demo_schemaless_ws"; client.exec(format!("drop database if exists {db}")).await?; From c43cdd4faf4682a5ec3e95b27276224ceda45982 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:21:41 +0800 Subject: [PATCH 026/156] docs: add sml.mdx --- docs/zh/07-develop/03-insert-data/_rust_schemaless.mdx | 3 +++ 1 file changed, 3 insertions(+) create mode 100644 docs/zh/07-develop/03-insert-data/_rust_schemaless.mdx diff --git a/docs/zh/07-develop/03-insert-data/_rust_schemaless.mdx b/docs/zh/07-develop/03-insert-data/_rust_schemaless.mdx new file mode 100644 index 0000000000..a26613bd15 --- /dev/null +++ b/docs/zh/07-develop/03-insert-data/_rust_schemaless.mdx @@ -0,0 +1,3 @@ +```rust +{{#include docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs}} +``` From 88e1e5145ed17d54572da44d69045b99327aa797 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:23:27 +0800 Subject: [PATCH 027/156] docs: update rust.mdx --- docs/zh/08-connector/26-rust.mdx | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/zh/08-connector/26-rust.mdx b/docs/zh/08-connector/26-rust.mdx index eeb93a6558..d4ca25be81 100644 --- a/docs/zh/08-connector/26-rust.mdx +++ b/docs/zh/08-connector/26-rust.mdx @@ -10,6 +10,7 @@ import TabItem from '@theme/TabItem'; import Preparation from "./_preparation.mdx" import RustInsert from "../07-develop/03-insert-data/_rust_sql.mdx" import RustBind from "../07-develop/03-insert-data/_rust_stmt.mdx" +import RustSml from "../07-develop/03-insert-data/_rust_schemaless.mdx" import RustQuery from "../07-develop/04-query-data/_rust.mdx" [![Crates.io](https://img.shields.io/crates/v/taos)](https://crates.io/crates/taos) ![Crates.io](https://img.shields.io/crates/d/taos) [![docs.rs](https://img.shields.io/docsrs/taos)](https://docs.rs/taos) @@ -230,6 +231,10 @@ async fn demo(taos: &Taos, db: &str) -> Result<(), Error> { +#### Schemaless 写入 + + + ### 查询数据 From 13ea24312dccaea506a323dd228de05d19571253 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:30:34 +0800 Subject: [PATCH 028/156] docs: add mdx en --- docs/en/07-develop/03-insert-data/_rust_schemaless.mdx | 3 +++ 1 file changed, 3 insertions(+) create mode 100644 docs/en/07-develop/03-insert-data/_rust_schemaless.mdx diff --git a/docs/en/07-develop/03-insert-data/_rust_schemaless.mdx b/docs/en/07-develop/03-insert-data/_rust_schemaless.mdx new file mode 100644 index 0000000000..a26613bd15 --- /dev/null +++ b/docs/en/07-develop/03-insert-data/_rust_schemaless.mdx @@ -0,0 +1,3 @@ +```rust +{{#include docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs}} +``` From d3540a7502db8b9a1924fd18565a532c6fa6cf64 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sun, 23 Apr 2023 10:31:53 +0800 Subject: [PATCH 029/156] docs: update mdx en --- docs/en/14-reference/03-connector/06-rust.mdx | 5 +++++ 1 file changed, 5 insertions(+) diff --git a/docs/en/14-reference/03-connector/06-rust.mdx b/docs/en/14-reference/03-connector/06-rust.mdx index ad522e9d2b..597a9b0f52 100644 --- a/docs/en/14-reference/03-connector/06-rust.mdx +++ b/docs/en/14-reference/03-connector/06-rust.mdx @@ -11,6 +11,7 @@ import TabItem from '@theme/TabItem'; import Preparition from "./_preparation.mdx" import RustInsert from "../../07-develop/03-insert-data/_rust_sql.mdx" import RustBind from "../../07-develop/03-insert-data/_rust_stmt.mdx" +import RustSml from "../../07-develop/03-insert-data/_rust_schemaless.mdx" import RustQuery from "../../07-develop/04-query-data/_rust.mdx" [![Crates.io](https://img.shields.io/crates/v/taos)](https://crates.io/crates/taos) ![Crates.io](https://img.shields.io/crates/d/taos) [![docs.rs](https://img.shields.io/docsrs/taos)](https://docs.rs/taos) @@ -232,6 +233,10 @@ There are two ways to query data: Using built-in types or the [serde](https://se +#### Schemaless Write + + + ### Query data From 5fe99c5ad24a8d96b75716b18d12f6aedc1b4595 Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Sun, 23 Apr 2023 15:54:15 +0800 Subject: [PATCH 030/156] feat: support log long query --- include/common/tglobal.h | 8 ++++++ source/client/src/clientImpl.c | 2 +- source/common/src/tglobal.c | 44 +++++++++++++++++++++++++++++++ source/util/src/tlog.c | 48 +++++++++++++++++++++++++++++++--- 4 files changed, 98 insertions(+), 4 deletions(-) diff --git a/include/common/tglobal.h b/include/common/tglobal.h index 6fde7b48a2..25a386b23d 100644 --- a/include/common/tglobal.h +++ b/include/common/tglobal.h @@ -24,6 +24,12 @@ extern "C" { #endif +#define SLOW_LOG_TYPE_QUERY 0x1 +#define SLOW_LOG_TYPE_INSERT 0x2 +#define SLOW_LOG_TYPE_OTHERS 0x4 +#define SLOW_LOG_TYPE_ALL 0xFFFFFFFF + + // cluster extern char tsFirst[]; extern char tsSecond[]; @@ -118,6 +124,8 @@ extern int32_t tsRedirectFactor; extern int32_t tsRedirectMaxPeriod; extern int32_t tsMaxRetryWaitTime; extern bool tsUseAdapter; +extern int32_t tsSlowLogThreshold; +extern int32_t tsSlowLogScope; // client extern int32_t tsMinSlidingTime; diff --git a/source/client/src/clientImpl.c b/source/client/src/clientImpl.c index ce174744ef..b9fa12b587 100644 --- a/source/client/src/clientImpl.c +++ b/source/client/src/clientImpl.c @@ -1257,7 +1257,7 @@ STscObj* taosConnectImpl(const char* user, const char* auth, const char* db, __t if (pRequest->code != TSDB_CODE_SUCCESS) { const char* errorMsg = (pRequest->code == TSDB_CODE_RPC_FQDN_ERROR) ? taos_errstr(pRequest) : tstrerror(pRequest->code); - fprintf(stderr, "failed to connect to server, reason: %s\n\n", errorMsg); + tscError("failed to connect to server, reason: %s", errorMsg); terrno = pRequest->code; destroyRequest(pRequest); diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c index da4a912238..224b0068fe 100644 --- a/source/common/src/tglobal.c +++ b/source/common/src/tglobal.c @@ -117,6 +117,10 @@ int32_t tsRedirectFactor = 2; int32_t tsRedirectMaxPeriod = 1000; int32_t tsMaxRetryWaitTime = 10000; bool tsUseAdapter = false; +int32_t tsSlowLogThreshold = 3; // seconds +int32_t tsSlowLogScope = SLOW_LOG_TYPE_ALL; + + /* @@ -345,6 +349,8 @@ static int32_t taosAddClientCfg(SConfig *pCfg) { if (cfgAddBool(pCfg, "useAdapter", tsUseAdapter, true) != 0) return -1; if (cfgAddBool(pCfg, "crashReporting", tsEnableCrashReport, true) != 0) return -1; if (cfgAddInt64(pCfg, "queryMaxConcurrentTables", tsQueryMaxConcurrentTables, INT64_MIN, INT64_MAX, 1) != 0) return -1; + if (cfgAddInt32(pCfg, "slowLogThreshold", tsSlowLogThreshold, 0, INT32_MAX, true) != 0) return -1; + if (cfgAddString(pCfg, "slowLogScope", "", true) != 0) return -1; tsNumOfRpcThreads = tsNumOfCores / 2; tsNumOfRpcThreads = TRANGE(tsNumOfRpcThreads, 2, TSDB_MAX_RPC_THREADS); @@ -692,6 +698,36 @@ static void taosSetServerLogCfg(SConfig *pCfg) { metaDebugFlag = cfgGetItem(pCfg, "metaDebugFlag")->i32; } +static int32_t taosSetSlowLogScope(char *pScope) { + if (NULL == pScope || 0 == strlen(pScope)) { + tsSlowLogScope = SLOW_LOG_TYPE_ALL; + return 0; + } + + if (0 == strcasecmp(pScope, "all")) { + tsSlowLogScope = SLOW_LOG_TYPE_ALL; + return 0; + } + + if (0 == strcasecmp(pScope, "query")) { + tsSlowLogScope = SLOW_LOG_TYPE_QUERY; + return 0; + } + + if (0 == strcasecmp(pScope, "insert")) { + tsSlowLogScope = SLOW_LOG_TYPE_INSERT; + return 0; + } + + if (0 == strcasecmp(pScope, "others")) { + tsSlowLogScope = SLOW_LOG_TYPE_OTHERS; + return 0; + } + + uError("Invalid slowLog scope value:%s", pScope); + return -1; +} + static int32_t taosSetClientCfg(SConfig *pCfg) { tstrncpy(tsLocalFqdn, cfgGetItem(pCfg, "fqdn")->str, TSDB_FQDN_LEN); tsServerPort = (uint16_t)cfgGetItem(pCfg, "serverPort")->i32; @@ -742,6 +778,10 @@ static int32_t taosSetClientCfg(SConfig *pCfg) { tsUseAdapter = cfgGetItem(pCfg, "useAdapter")->bval; tsEnableCrashReport = cfgGetItem(pCfg, "crashReporting")->bval; tsQueryMaxConcurrentTables = cfgGetItem(pCfg, "queryMaxConcurrentTables")->i64; + tsSlowLogThreshold = cfgGetItem(pCfg, "slowLogThreshold")->i32; + if (taosSetSlowLogScope(cfgGetItem(pCfg, "slowLogScope")->str)) { + return -1; + } tsMaxRetryWaitTime = cfgGetItem(pCfg, "maxRetryWaitTime")->i32; @@ -1156,6 +1196,10 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) { sDebugFlag = cfgGetItem(pCfg, "sDebugFlag")->i32; } else if (strcasecmp("smaDebugFlag", name) == 0) { smaDebugFlag = cfgGetItem(pCfg, "smaDebugFlag")->i32; + } else if (strcasecmp("slowLogThreshold", name) == 0) { + tsSlowLogThreshold = cfgGetItem(pCfg, "slowLogThreshold")->i32; + } else if (strcasecmp("slowLogScope", name) == 0) { + taosSetSlowLogScope(cfgGetItem(pCfg, "slowLogScope")->str) } break; } diff --git a/source/util/src/tlog.c b/source/util/src/tlog.c index a3d3c399ab..10ab389873 100644 --- a/source/util/src/tlog.c +++ b/source/util/src/tlog.c @@ -28,6 +28,7 @@ #define LOG_FILE_NAME_LEN 300 #define LOG_DEFAULT_BUF_SIZE (20 * 1024 * 1024) // 20MB +#define LOG_SLOW_BUF_SIZE (10 * 1024 * 1024) // 10MB #define LOG_DEFAULT_INTERVAL 25 #define LOG_INTERVAL_STEP 5 @@ -62,6 +63,7 @@ typedef struct { pid_t pid; char logName[LOG_FILE_NAME_LEN]; SLogBuff *logHandle; + SLogBuff *slowHandle; TdThreadMutex logMutex; } SLogObj; @@ -136,6 +138,34 @@ static int32_t taosStartLog() { return 0; } +int32_t taosInitSlowLog() { + char fullName[PATH_MAX] = {0}; + char logFileName[64] = {0}; +#ifdef CUS_PROMPT + snprintf(logFileName, 64, "%sSlowLog", CUS_PROMPT); +#else + snprintf(logFileName, 64, "taosSlowLog"); +#endif + + if (strlen(tsLogDir) != 0) { + snprintf(fullName, PATH_MAX, "%s" TD_DIRSEP "%s", tsLogDir, logFileName); + } else { + snprintf(fullName, PATH_MAX, "%s", logFileName); + } + + tsLogObj.slowHandle = taosLogBuffNew(LOG_SLOW_BUF_SIZE); + if (tsLogObj.slowHandle == NULL) return -1; + + taosUmaskFile(0); + tsLogObj.slowHandle->pFile = taosOpenFile(fullName, TD_FILE_CREATE | TD_FILE_WRITE | TD_FILE_APPEND); + if (tsLogObj.slowHandle->pFile == NULL) { + printf("\nfailed to open slow log file:%s, reason:%s\n", fullName, strerror(errno)); + return -1; + } + + return 0; +} + int32_t taosInitLog(const char *logName, int32_t maxFiles) { if (atomic_val_compare_exchange_8(&tsLogInited, 0, 1) != 0) return 0; osUpdate(); @@ -151,6 +181,8 @@ int32_t taosInitLog(const char *logName, int32_t maxFiles) { tsLogObj.logHandle = taosLogBuffNew(LOG_DEFAULT_BUF_SIZE); if (tsLogObj.logHandle == NULL) return -1; if (taosOpenLogFile(fullName, tsNumOfLogLines, maxFiles) < 0) return -1; + + if (taosInitSlowLog() < 0) return -1; if (taosStartLog() < 0) return -1; return 0; } @@ -159,11 +191,23 @@ static void taosStopLog() { if (tsLogObj.logHandle) { tsLogObj.logHandle->stop = 1; } + if (tsLogObj.slowHandle) { + tsLogObj.slowHandle->stop = 1; + } } void taosCloseLog() { + taosStopLog(); + + if (tsLogObj.slowHandle != NULL) { + taosThreadMutexDestroy(&tsLogObj.slowHandle->buffMutex); + taosCloseFile(&tsLogObj.slowHandle->pFile); + taosMemoryFreeClear(tsLogObj.slowHandle->buffer); + memset(&tsLogObj.slowHandle->buffer, 0, sizeof(tsLogObj.slowHandle->buffer)); + taosMemoryFreeClear(tsLogObj.slowHandle); + } + if (tsLogObj.logHandle != NULL) { - taosStopLog(); if (tsLogObj.logHandle != NULL && taosCheckPthreadValid(tsLogObj.logHandle->asyncThread)) { taosThreadJoin(tsLogObj.logHandle->asyncThread, NULL); taosThreadClear(&tsLogObj.logHandle->asyncThread); @@ -176,8 +220,6 @@ void taosCloseLog() { memset(&tsLogObj.logHandle->buffer, 0, sizeof(tsLogObj.logHandle->buffer)); taosThreadMutexDestroy(&tsLogObj.logMutex); taosMemoryFreeClear(tsLogObj.logHandle); - memset(&tsLogObj.logHandle, 0, sizeof(tsLogObj.logHandle)); - tsLogObj.logHandle = NULL; } } From 4295063bea8e748a6a38816110d7fbe4b59f7b3f Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Mon, 24 Apr 2023 08:54:30 +0800 Subject: [PATCH 031/156] docs: remove db in sml struct --- .../rust/nativeexample/examples/schemaless_insert_json.rs | 7 +++---- .../rust/nativeexample/examples/schemaless_insert_line.rs | 7 +++---- .../nativeexample/examples/schemaless_insert_telnet.rs | 6 ++---- 3 files changed, 8 insertions(+), 12 deletions(-) diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_json.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_json.rs index 1a0e219462..d5aa45016e 100644 --- a/docs/examples/rust/nativeexample/examples/schemaless_insert_json.rs +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_json.rs @@ -24,6 +24,9 @@ async fn put_json() -> anyhow::Result<()> { .exec(format!("create database if not exists {db}")) .await?; + // should specify database before insert + client.exec(format!("use {db}")).await?; + // SchemalessProtocol::Json let data = [ r#"[{"metric": "meters.current", "timestamp": 1681345954000, "value": 10.3, "tags": {"location": "California.SanFrancisco", "groupid": 2}}, {"metric": "meters.voltage", "timestamp": 1648432611249, "value": 219, "tags": {"location": "California.LosAngeles", "groupid": 1}}, {"metric": "meters.current", "timestamp": 1648432611250, "value": 12.6, "tags": {"location": "California.SanFrancisco", "groupid": 2}}, {"metric": "meters.voltage", "timestamp": 1648432611250, "value": 221, "tags": {"location": "California.LosAngeles", "groupid": 1}}]"# @@ -33,7 +36,6 @@ async fn put_json() -> anyhow::Result<()> { // demo with all fields let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Json) .precision(SchemalessPrecision::Millisecond) .data(data.clone()) @@ -44,7 +46,6 @@ async fn put_json() -> anyhow::Result<()> { // demo with default precision let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Json) .data(data.clone()) .ttl(1000) @@ -54,7 +55,6 @@ async fn put_json() -> anyhow::Result<()> { // demo with default ttl let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Json) .data(data.clone()) .req_id(302u64) @@ -63,7 +63,6 @@ async fn put_json() -> anyhow::Result<()> { // demo with default req_id let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Json) .data(data.clone()) .build()?; diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs index f4ffc78c54..3cf4d969ce 100644 --- a/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_line.rs @@ -25,6 +25,9 @@ async fn put_line() -> anyhow::Result<()> { .exec(format!("create database if not exists {db}")) .await?; + // should specify database before insert + client.exec(format!("use {db}")).await?; + let data = [ "measurement,host=host1 field1=2i,field2=2.0 1577837300000", "measurement,host=host1 field1=2i,field2=2.0 1577837400000", @@ -36,7 +39,6 @@ async fn put_line() -> anyhow::Result<()> { // demo with all fields let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Line) .precision(SchemalessPrecision::Millisecond) .data(data.clone()) @@ -47,7 +49,6 @@ async fn put_line() -> anyhow::Result<()> { // demo with default ttl let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Line) .precision(SchemalessPrecision::Millisecond) .data(data.clone()) @@ -57,7 +58,6 @@ async fn put_line() -> anyhow::Result<()> { // demo with default ttl and req_id let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Line) .precision(SchemalessPrecision::Millisecond) .data(data.clone()) @@ -66,7 +66,6 @@ async fn put_line() -> anyhow::Result<()> { // demo with default precision let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Line) .data(data) .req_id(103u64) diff --git a/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs b/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs index 944216338e..d2bbc69261 100644 --- a/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs +++ b/docs/examples/rust/nativeexample/examples/schemaless_insert_telnet.rs @@ -24,6 +24,8 @@ async fn put_telnet() -> anyhow::Result<()> { .exec(format!("create database if not exists {db}")) .await?; + // should specify database before insert + client.exec(format!("use {db}")).await?; let data = [ "meters.current 1648432611249 10.3 location=California.SanFrancisco group=2", @@ -40,7 +42,6 @@ async fn put_telnet() -> anyhow::Result<()> { // demo with all fields let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Telnet) .precision(SchemalessPrecision::Millisecond) .data(data.clone()) @@ -51,7 +52,6 @@ async fn put_telnet() -> anyhow::Result<()> { // demo with default precision let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Telnet) .data(data.clone()) .ttl(1000) @@ -61,7 +61,6 @@ async fn put_telnet() -> anyhow::Result<()> { // demo with default ttl let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Telnet) .data(data.clone()) .req_id(202u64) @@ -70,7 +69,6 @@ async fn put_telnet() -> anyhow::Result<()> { // demo with default req_id let sml_data = SmlDataBuilder::default() - .db(db.to_string()) .protocol(SchemalessProtocol::Telnet) .data(data.clone()) .build()?; From 8f21090343e711b97d36dde2682921f5845cc8d1 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 24 Apr 2023 10:47:28 +0800 Subject: [PATCH 032/156] cache/delete: just remove affected cache entries --- source/dnode/vnode/src/inc/tsdb.h | 2 + source/dnode/vnode/src/tsdb/tsdbCache.c | 112 ++++++++++++++++++-- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 4 +- source/dnode/vnode/src/tsdb/tsdbMemTable.c | 7 +- 4 files changed, 112 insertions(+), 13 deletions(-) diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index 35c9af7c84..ee816bb122 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -809,6 +809,8 @@ int32_t tsdbOpenCache(STsdb *pTsdb); void tsdbCloseCache(STsdb *pTsdb); int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *row); int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, char const *lstring); +int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKEY eKey); + int32_t tsdbCacheInsertLast(SLRUCache *pCache, tb_uid_t uid, TSDBROW *row, STsdb *pTsdb); int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, TSDBROW *row, bool dup); int32_t tsdbCacheGetLastH(SLRUCache *pCache, tb_uid_t uid, SCacheRowsReader *pr, LRUHandle **h); diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index e961008834..dc12f11e23 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -12,7 +12,6 @@ * You should have received a copy of the GNU Affero General Public License * along with this program. If not, see . */ - #include "tsdb.h" static int32_t tsdbOpenBICache(STsdb *pTsdb) { @@ -101,14 +100,14 @@ static int32_t tsdbOpenRocksCache(STsdb *pTsdb) { rocksdb_writebatch_t *writebatch = rocksdb_writebatch_create(); - taosThreadMutexInit(&pTsdb->rCache.rMutex, NULL); - pTsdb->rCache.writebatch = writebatch; pTsdb->rCache.options = options; pTsdb->rCache.writeoptions = writeoptions; pTsdb->rCache.readoptions = readoptions; pTsdb->rCache.db = db; + taosThreadMutexInit(&pTsdb->rCache.rMutex, NULL); + return code; _err4: @@ -127,6 +126,7 @@ static void tsdbCloseRocksCache(STsdb *pTsdb) { rocksdb_readoptions_destroy(pTsdb->rCache.readoptions); rocksdb_writeoptions_destroy(pTsdb->rCache.writeoptions); rocksdb_options_destroy(pTsdb->rCache.options); + taosThreadMutexDestroy(&pTsdb->rCache.rMutex); } SLastCol *tsdbCacheDeserialize(char const *value) { @@ -208,14 +208,14 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow size_t *keys_list_sizes = taosMemoryCalloc(num_keys * 2, sizeof(size_t)); for (int i = 0; i < num_keys; ++i) { SColVal *pColVal = (SColVal *)taosArrayGet(aColVal, i); + int16_t cid = pColVal->cid; char *keys = taosMemoryCalloc(2, ROCKS_KEY_LEN); - int last_key_len = snprintf(keys, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, pColVal->cid); + int last_key_len = snprintf(keys, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, cid); if (last_key_len >= ROCKS_KEY_LEN) { tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, tstrerror(code)); } - int lr_key_len = - snprintf(keys + ROCKS_KEY_LEN, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last_row", uid, pColVal->cid); + int lr_key_len = snprintf(keys + ROCKS_KEY_LEN, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last_row", uid, cid); if (lr_key_len >= ROCKS_KEY_LEN) { tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, tstrerror(code)); } @@ -227,6 +227,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow char **values_list = taosMemoryCalloc(num_keys * 2, sizeof(char *)); size_t *values_list_sizes = taosMemoryCalloc(num_keys * 2, sizeof(size_t)); char **errs = taosMemoryCalloc(num_keys * 2, sizeof(char *)); + taosThreadMutexLock(&pTsdb->rCache.rMutex); rocksdb_multi_get(pTsdb->rCache.db, pTsdb->rCache.readoptions, num_keys * 2, (const char *const *)keys_list, keys_list_sizes, values_list, values_list_sizes, errs); for (int i = 0; i < num_keys; ++i) { @@ -260,7 +261,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow if (!COL_VAL_IS_NONE(pColVal)) { SLastCol *pLastCol = NULL; if (NULL != values_list[i + num_keys]) { - pLastCol = tsdbCacheDeserialize(values_list[i]); + pLastCol = tsdbCacheDeserialize(values_list[i + num_keys]); } if (NULL == pLastCol || pLastCol->ts <= keyTs) { @@ -286,6 +287,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); rocksdb_free(err); } + taosThreadMutexUnlock(&pTsdb->rCache.rMutex); rocksdb_writebatch_clear(wb); _exit: @@ -339,6 +341,98 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR return code; } +int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKEY eKey) { + int32_t code = 0; + // 1, fetch schema + STSchema *pTSchema = NULL; + int32_t sver = -1; + code = metaGetTbTSchemaEx(pTsdb->pVnode->pMeta, suid, uid, sver, &pTSchema); + if (code != TSDB_CODE_SUCCESS) { + terrno = code; + return -1; + } + + // 3, build keys & multi get from rocks + int num_keys = pTSchema->numOfCols; + char **keys_list = taosMemoryCalloc(num_keys * 2, sizeof(char *)); + size_t *keys_list_sizes = taosMemoryCalloc(num_keys * 2, sizeof(size_t)); + for (int i = 0; i < num_keys; ++i) { + int16_t cid = pTSchema->columns[i].colId; + + char *keys = taosMemoryCalloc(2, ROCKS_KEY_LEN); + int last_key_len = snprintf(keys, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, cid); + if (last_key_len >= ROCKS_KEY_LEN) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, tstrerror(code)); + } + int lr_key_len = snprintf(keys + ROCKS_KEY_LEN, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last_row", uid, cid); + if (lr_key_len >= ROCKS_KEY_LEN) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, tstrerror(code)); + } + keys_list[i] = keys; + keys_list[num_keys + i] = keys + ROCKS_KEY_LEN; + keys_list_sizes[i] = last_key_len; + keys_list_sizes[num_keys + i] = lr_key_len; + } + char **values_list = taosMemoryCalloc(num_keys * 2, sizeof(char *)); + size_t *values_list_sizes = taosMemoryCalloc(num_keys * 2, sizeof(size_t)); + char **errs = taosMemoryCalloc(num_keys * 2, sizeof(char *)); + taosThreadMutexLock(&pTsdb->rCache.rMutex); + rocksdb_multi_get(pTsdb->rCache.db, pTsdb->rCache.readoptions, num_keys * 2, (const char *const *)keys_list, + keys_list_sizes, values_list, values_list_sizes, errs); + for (int i = 0; i < num_keys; ++i) { + taosMemoryFree(keys_list[i]); + } + for (int i = 0; i < num_keys * 2; ++i) { + rocksdb_free(errs[i]); + } + taosMemoryFree(keys_list); + taosMemoryFree(keys_list_sizes); + taosMemoryFree(errs); + + rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch; + for (int i = 0; i < num_keys; ++i) { + SLastCol *pLastCol = NULL; + if (NULL != values_list[i]) { + pLastCol = tsdbCacheDeserialize(values_list[i]); + } + + if (NULL != pLastCol || (pLastCol->ts <= eKey && pLastCol->ts >= sKey)) { + char key[ROCKS_KEY_LEN]; + size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, pLastCol->colVal.cid); + rocksdb_writebatch_delete(wb, key, klen); + } + + if (NULL != values_list[i + num_keys]) { + pLastCol = tsdbCacheDeserialize(values_list[i + num_keys]); + } + + if (NULL != pLastCol || (pLastCol->ts <= eKey && pLastCol->ts >= sKey)) { + char key[ROCKS_KEY_LEN]; + size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last_row", uid, pLastCol->colVal.cid); + rocksdb_writebatch_delete(wb, key, klen); + } + + rocksdb_free(values_list[i]); + rocksdb_free(values_list[i + num_keys]); + } + taosMemoryFree(values_list); + taosMemoryFree(values_list_sizes); + + char *err = NULL; + rocksdb_write(pTsdb->rCache.db, pTsdb->rCache.writeoptions, wb, &err); + if (NULL != err) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); + rocksdb_free(err); + } + taosThreadMutexUnlock(&pTsdb->rCache.rMutex); + rocksdb_writebatch_clear(wb); + +_exit: + taosMemoryFree(pTSchema); + + return code; +} + int32_t tsdbOpenCache(STsdb *pTsdb) { int32_t code = 0; SLRUCache *pCache = NULL; @@ -469,7 +563,7 @@ int32_t tsdbCacheDeleteLast(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey) { return code; } - +/* int32_t tsdbCacheDelete(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey) { int32_t code = 0; char key[32] = {0}; @@ -524,7 +618,7 @@ int32_t tsdbCacheDelete(SLRUCache *pCache, tb_uid_t uid, TSKEY eKey) { return code; } - +*/ int32_t tsdbCacheInsertLastrow(SLRUCache *pCache, STsdb *pTsdb, tb_uid_t uid, TSDBROW *row, bool dup) { int32_t code = 0; STSRow *cacheRow = NULL; diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 830b1869eb..c74278f077 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -288,7 +288,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, lstring); if (TARRAY_SIZE(pRow) <= 0) { - taosArrayDestroy(pRow); + taosArrayClear(pRow); continue; } @@ -357,7 +357,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, lstring); if (TARRAY_SIZE(pRow) <= 0) { - taosArrayDestroy(pRow); + taosArrayClear(pRow); continue; } diff --git a/source/dnode/vnode/src/tsdb/tsdbMemTable.c b/source/dnode/vnode/src/tsdb/tsdbMemTable.c index 122546ebfc..b6141abcf9 100644 --- a/source/dnode/vnode/src/tsdb/tsdbMemTable.c +++ b/source/dnode/vnode/src/tsdb/tsdbMemTable.c @@ -140,7 +140,6 @@ int32_t tsdbDeleteTableData(STsdb *pTsdb, int64_t version, tb_uid_t suid, tb_uid SMemTable *pMemTable = pTsdb->mem; STbData *pTbData = NULL; SVBufPool *pPool = pTsdb->pVnode->inUse; - TSDBKEY lastKey = {.version = version, .ts = eKey}; // check if table exists SMetaInfo info; @@ -181,7 +180,7 @@ int32_t tsdbDeleteTableData(STsdb *pTsdb, int64_t version, tb_uid_t suid, tb_uid pMemTable->nDel++; pMemTable->minVer = TMIN(pMemTable->minVer, version); pMemTable->maxVer = TMIN(pMemTable->maxVer, version); - + /* if (TSDB_CACHE_LAST_ROW(pMemTable->pTsdb->pVnode->config) && tsdbKeyCmprFn(&lastKey, &pTbData->maxKey) >= 0) { tsdbCacheDeleteLastrow(pTsdb->lruCache, pTbData->uid, eKey); } @@ -189,6 +188,10 @@ int32_t tsdbDeleteTableData(STsdb *pTsdb, int64_t version, tb_uid_t suid, tb_uid if (TSDB_CACHE_LAST(pMemTable->pTsdb->pVnode->config)) { tsdbCacheDeleteLast(pTsdb->lruCache, pTbData->uid, eKey); } + */ + if (eKey >= pTbData->maxKey && sKey <= pTbData->maxKey) { + tsdbCacheDel(pTsdb, suid, uid, sKey, eKey); + } tsdbTrace("vgId:%d, delete data from table suid:%" PRId64 " uid:%" PRId64 " skey:%" PRId64 " eKey:%" PRId64 " at version %" PRId64, From c7d5b48920509f332da8da809bb56139a3f1317f Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 24 Apr 2023 15:42:53 +0800 Subject: [PATCH 033/156] cache/read: fix pRes buffer index --- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 11 ++++++----- 1 file changed, 6 insertions(+), 5 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index c74278f077..1c5a7c815b 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -29,7 +29,7 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p for (int32_t i = 0; i < pReader->numOfCols; ++i) { SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, dstSlotIds[i]); - SFirstLastRes* p = (SFirstLastRes*)varDataVal(pRes[dstSlotIds[i]]); + SFirstLastRes* p = (SFirstLastRes*)varDataVal(pRes[i]); int32_t slotId = slotIds[i]; SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, i); @@ -50,8 +50,8 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p // pColInfoData->info.bytes includes the VARSTR_HEADER_SIZE, need to substruct it p->hasResult = true; - varDataSetLen(pRes[dstSlotIds[i]], pColInfoData->info.bytes - VARSTR_HEADER_SIZE); - colDataSetVal(pColInfoData, numOfRows, (const char*)pRes[dstSlotIds[i]], false); + varDataSetLen(pRes[i], pColInfoData->info.bytes - VARSTR_HEADER_SIZE); + colDataSetVal(pColInfoData, numOfRows, (const char*)pRes[i], false); } pBlock->info.rows += allNullRow ? 0 : 1; @@ -247,8 +247,9 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 } for (int32_t j = 0; j < pr->numOfCols; ++j) { - pRes[j] = taosMemoryCalloc( - 1, sizeof(SFirstLastRes) + pr->pSchema->columns[-1 == slotIds[j] ? 0 : slotIds[j]].bytes + VARSTR_HEADER_SIZE); + pRes[j] = + taosMemoryCalloc(1, sizeof(SFirstLastRes) + pr->pSchema->columns[/*-1 == slotIds[j] ? 0 : */ slotIds[j]].bytes + + VARSTR_HEADER_SIZE); SFirstLastRes* p = (SFirstLastRes*)varDataVal(pRes[j]); p->ts = INT64_MIN; } From 758427b3c41ba71ac559e2be9ba364dafa853d40 Mon Sep 17 00:00:00 2001 From: shenglian zhou Date: Mon, 24 Apr 2023 15:48:20 +0800 Subject: [PATCH 034/156] fix: use slimit to indicate group for tag scan --- source/libs/executor/inc/executorimpl.h | 1 + source/libs/executor/src/scanoperator.c | 93 ++++++++++++++----------- source/libs/planner/src/planOptimizer.c | 29 ++++++++ 3 files changed, 82 insertions(+), 41 deletions(-) diff --git a/source/libs/executor/inc/executorimpl.h b/source/libs/executor/inc/executorimpl.h index 85424fd7de..43ee54f65d 100644 --- a/source/libs/executor/inc/executorimpl.h +++ b/source/libs/executor/inc/executorimpl.h @@ -366,6 +366,7 @@ typedef struct STagScanInfo { int32_t curPos; SReadHandle readHandle; STableListInfo* pTableListInfo; + SLimitNode* pSlimit; } STagScanInfo; typedef enum EStreamScanMode { diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c index 2389c7252e..b7315db204 100644 --- a/source/libs/executor/src/scanoperator.c +++ b/source/libs/executor/src/scanoperator.c @@ -2512,6 +2512,53 @@ _error: return NULL; } +static void doTagScanOneTable(SOperatorInfo* pOperator, const SExecTaskInfo* pTaskInfo, STagScanInfo* pInfo, + const SExprInfo* pExprInfo, const SSDataBlock* pRes, int32_t size, const char* str, + int32_t* count, SMetaReader* mr) { + STableKeyInfo* item = tableListGetInfo(pInfo->pTableListInfo, pInfo->curPos); + int32_t code = metaGetTableEntryByUid(mr, item->uid); + tDecoderClear(&(*mr).coder); + if (code != TSDB_CODE_SUCCESS) { + qError("failed to get table meta, uid:0x%" PRIx64 ", code:%s, %s", item->uid, tstrerror(terrno), + GET_TASKID(pTaskInfo)); + metaReaderClear(mr); + T_LONG_JMP(pTaskInfo->env, terrno); + } + + for (int32_t j = 0; j < pOperator->exprSupp.numOfExprs; ++j) { + SColumnInfoData* pDst = taosArrayGet(pRes->pDataBlock, pExprInfo[j].base.resSchema.slotId); + + // refactor later + if (fmIsScanPseudoColumnFunc(pExprInfo[j].pExpr->_function.functionId)) { + STR_TO_VARSTR(str, (*mr).me.name); + colDataSetVal(pDst, (*count), str, false); + } else { // it is a tag value + STagVal val = {0}; + val.cid = pExprInfo[j].base.pParam[0].pCol->colId; + const char* p = metaGetTableTagVal((*mr).me.ctbEntry.pTags, pDst->info.type, &val); + + char* data = NULL; + if (pDst->info.type != TSDB_DATA_TYPE_JSON && p != NULL) { + data = tTagValToData((const STagVal*)p, false); + } else { + data = (char*)p; + } + colDataSetVal(pDst, (*count), data, + (data == NULL) || (pDst->info.type == TSDB_DATA_TYPE_JSON && tTagIsJsonNull(data))); + + if (pDst->info.type != TSDB_DATA_TYPE_JSON && p != NULL && IS_VAR_DATA_TYPE(((const STagVal*)p)->type) && + data != NULL) { + taosMemoryFree(data); + } + } + } + + (*count) += 1; + if (++pInfo->curPos >= size) { + setOperatorCompleted(pOperator); + } +} + static SSDataBlock* doTagScan(SOperatorInfo* pOperator) { if (pOperator->status == OP_EXEC_DONE) { return NULL; @@ -2536,47 +2583,10 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) { metaReaderInit(&mr, pInfo->readHandle.meta, 0); while (pInfo->curPos < size && count < pOperator->resultInfo.capacity) { - STableKeyInfo* item = tableListGetInfo(pInfo->pTableListInfo, pInfo->curPos); - int32_t code = metaGetTableEntryByUid(&mr, item->uid); - tDecoderClear(&mr.coder); - if (code != TSDB_CODE_SUCCESS) { - qError("failed to get table meta, uid:0x%" PRIx64 ", code:%s, %s", item->uid, tstrerror(terrno), - GET_TASKID(pTaskInfo)); - metaReaderClear(&mr); - T_LONG_JMP(pTaskInfo->env, terrno); - } - - for (int32_t j = 0; j < pOperator->exprSupp.numOfExprs; ++j) { - SColumnInfoData* pDst = taosArrayGet(pRes->pDataBlock, pExprInfo[j].base.resSchema.slotId); - - // refactor later - if (fmIsScanPseudoColumnFunc(pExprInfo[j].pExpr->_function.functionId)) { - STR_TO_VARSTR(str, mr.me.name); - colDataSetVal(pDst, count, str, false); - } else { // it is a tag value - STagVal val = {0}; - val.cid = pExprInfo[j].base.pParam[0].pCol->colId; - const char* p = metaGetTableTagVal(mr.me.ctbEntry.pTags, pDst->info.type, &val); - - char* data = NULL; - if (pDst->info.type != TSDB_DATA_TYPE_JSON && p != NULL) { - data = tTagValToData((const STagVal*)p, false); - } else { - data = (char*)p; - } - colDataSetVal(pDst, count, data, - (data == NULL) || (pDst->info.type == TSDB_DATA_TYPE_JSON && tTagIsJsonNull(data))); - - if (pDst->info.type != TSDB_DATA_TYPE_JSON && p != NULL && IS_VAR_DATA_TYPE(((const STagVal*)p)->type) && - data != NULL) { - taosMemoryFree(data); - } - } - } - - count += 1; - if (++pInfo->curPos >= size) { - setOperatorCompleted(pOperator); + doTagScanOneTable(pOperator, pTaskInfo, pInfo, pExprInfo, pRes, size, str, &count, &mr); + if (pInfo->pSlimit != NULL) { + pInfo->pRes->info.id.groupId = calcGroupId(mr.me.name, strlen(mr.me.name)); + break; } } @@ -2628,6 +2638,7 @@ SOperatorInfo* createTagScanOperatorInfo(SReadHandle* pReadHandle, STagScanPhysi pInfo->pRes = createDataBlockFromDescNode(pDescNode); pInfo->readHandle = *pReadHandle; pInfo->curPos = 0; + pInfo->pSlimit = (SLimitNode*)pPhyNode->node.pSlimit; //TODO: slimit now only indicate group setOperatorInfo(pOperator, "TagScanOperator", QUERY_NODE_PHYSICAL_PLAN_TAG_SCAN, false, OP_NOT_OPENED, pInfo, pTaskInfo); diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index 07ea110d7e..4ece9be304 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -2418,6 +2418,34 @@ static bool tagScanOptShouldBeOptimized(SLogicNode* pNode) { return true; } +static SLogicNode* tagScanOptFindAncestorWithSlimit(SLogicNode* pTableScanNode) { + SLogicNode* pNode = pTableScanNode->pParent; + while (NULL != pNode) { + if (QUERY_NODE_LOGIC_PLAN_PARTITION == nodeType(pNode) || QUERY_NODE_LOGIC_PLAN_AGG == nodeType(pNode) || + QUERY_NODE_LOGIC_PLAN_WINDOW == nodeType(pNode) || QUERY_NODE_LOGIC_PLAN_SORT == nodeType(pNode)) { + return NULL; + } + if (NULL != pNode->pSlimit) { + return pNode; + } + pNode = pNode->pParent; + } + return NULL; +} + +static void tagScanOptCloneAncestorSlimit(SLogicNode* pTableScanNode) { + if (NULL != pTableScanNode->pSlimit) { + return; + } + + SLogicNode* pNode = tagScanOptFindAncestorWithSlimit(pTableScanNode); + if (NULL != pNode) { + //TODO: only set the slimit now. push down slimit later + pTableScanNode->pSlimit = nodesCloneNode(pNode->pSlimit); + } + return; +} + static int32_t tagScanOptimize(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan) { SScanLogicNode* pScanNode = (SScanLogicNode*)optFindPossibleNode(pLogicSubplan->pNode, tagScanOptShouldBeOptimized); if (NULL == pScanNode) { @@ -2458,6 +2486,7 @@ static int32_t tagScanOptimize(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubp NODES_CLEAR_LIST(pAgg->pChildren); } nodesDestroyNode((SNode*)pAgg); + tagScanOptCloneAncestorSlimit((SLogicNode*)pScanNode); pCxt->optimized = true; return TSDB_CODE_SUCCESS; } From 9551a269b31e38e333e8c1942db2f74e865c6143 Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Mon, 24 Apr 2023 16:45:34 +0800 Subject: [PATCH 035/156] feat: support log slow query by default --- include/util/taoserror.h | 9 +- include/util/tlog.h | 6 ++ source/client/src/clientEnv.c | 21 +++-- source/client/src/clientImpl.c | 5 ++ source/client/src/clientSml.c | 80 +++++++++++++---- source/common/src/tdataformat.c | 8 +- source/common/src/tglobal.c | 10 ++- source/util/src/terror.c | 3 +- source/util/src/tlog.c | 151 ++++++++++++++++++-------------- 9 files changed, 194 insertions(+), 99 deletions(-) diff --git a/include/util/taoserror.h b/include/util/taoserror.h index 0d9292cc6b..0511474fdb 100644 --- a/include/util/taoserror.h +++ b/include/util/taoserror.h @@ -103,7 +103,7 @@ int32_t* taosGetErrno(); #define TSDB_CODE_CHECKSUM_ERROR TAOS_DEF_ERROR_CODE(0, 0x011F) // internal #define TSDB_CODE_COMPRESS_ERROR TAOS_DEF_ERROR_CODE(0, 0x0120) -#define TSDB_CODE_MSG_NOT_PROCESSED TAOS_DEF_ERROR_CODE(0, 0x0121) // +#define TSDB_CODE_MSG_NOT_PROCESSED TAOS_DEF_ERROR_CODE(0, 0x0121) #define TSDB_CODE_CFG_NOT_FOUND TAOS_DEF_ERROR_CODE(0, 0x0122) #define TSDB_CODE_REPEAT_INIT TAOS_DEF_ERROR_CODE(0, 0x0123) #define TSDB_CODE_DUP_KEY TAOS_DEF_ERROR_CODE(0, 0x0124) @@ -118,9 +118,10 @@ int32_t* taosGetErrno(); #define TSDB_CODE_MSG_ENCODE_ERROR TAOS_DEF_ERROR_CODE(0, 0x012D) #define TSDB_CODE_NO_ENOUGH_DISKSPACE TAOS_DEF_ERROR_CODE(0, 0x012E) -#define TSDB_CODE_APP_IS_STARTING TAOS_DEF_ERROR_CODE(0, 0x0130) // -#define TSDB_CODE_APP_IS_STOPPING TAOS_DEF_ERROR_CODE(0, 0x0131) // -#define TSDB_CODE_IVLD_DATA_FMT TAOS_DEF_ERROR_CODE(0, 0x0132) // +#define TSDB_CODE_APP_IS_STARTING TAOS_DEF_ERROR_CODE(0, 0x0130) +#define TSDB_CODE_APP_IS_STOPPING TAOS_DEF_ERROR_CODE(0, 0x0131) +#define TSDB_CODE_INVALID_DATA_FMT TAOS_DEF_ERROR_CODE(0, 0x0132) +#define TSDB_CODE_INVALID_CFG_VALUE TAOS_DEF_ERROR_CODE(0, 0x0133) //client #define TSDB_CODE_TSC_INVALID_OPERATION TAOS_DEF_ERROR_CODE(0, 0x0200) diff --git a/include/util/tlog.h b/include/util/tlog.h index 0071b3d32c..d267d5876a 100644 --- a/include/util/tlog.h +++ b/include/util/tlog.h @@ -83,6 +83,12 @@ void taosPrintLongString(const char *flags, ELogLevel level, int32_t dflag, cons #endif ; +void taosPrintSlowLog(const char *format, ...) +#ifdef __GNUC__ + __attribute__((format(printf, 1, 2))) +#endif + ; + bool taosAssertDebug(bool condition, const char *file, int32_t line, const char *format, ...); bool taosAssertRelease(bool condition); diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index 418103f2a6..6691253585 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -42,7 +42,7 @@ SAppInfo appInfo; int64_t lastClusterId = 0; int32_t clientReqRefPool = -1; int32_t clientConnRefPool = -1; -int32_t clientStop = 0; +int32_t clientStop = -1; int32_t timestampDeltaLimit = 900; // s @@ -69,7 +69,6 @@ static int32_t registerRequest(SRequestObj *pRequest, STscObj *pTscObj) { } static void deregisterRequest(SRequestObj *pRequest) { - const static int64_t SLOW_QUERY_INTERVAL = 3000000L; // todo configurable if (pRequest == NULL) { tscError("pRequest == NULL"); return; @@ -80,6 +79,7 @@ static void deregisterRequest(SRequestObj *pRequest) { int32_t currentInst = atomic_sub_fetch_64((int64_t *)&pActivity->currentRequests, 1); int32_t num = atomic_sub_fetch_32(&pTscObj->numOfReqs, 1); + int32_t reqType = SLOW_LOG_TYPE_OTHERS; int64_t duration = taosGetTimestampUs() - pRequest->metric.start; tscDebug("0x%" PRIx64 " free Request from connObj: 0x%" PRIx64 ", reqId:0x%" PRIx64 @@ -95,6 +95,7 @@ static void deregisterRequest(SRequestObj *pRequest) { duration, pRequest->metric.parseCostUs, pRequest->metric.ctgCostUs, pRequest->metric.analyseCostUs, pRequest->metric.planCostUs, pRequest->metric.execCostUs); atomic_add_fetch_64((int64_t *)&pActivity->insertElapsedTime, duration); + reqType = SLOW_LOG_TYPE_INSERT; } else if (QUERY_NODE_SELECT_STMT == pRequest->stmtType) { tscDebug("query duration %" PRId64 "us: parseCost:%" PRId64 "us, ctgCost:%" PRId64 "us, analyseCost:%" PRId64 "us, planCost:%" PRId64 "us, exec:%" PRId64 "us", @@ -102,12 +103,16 @@ static void deregisterRequest(SRequestObj *pRequest) { pRequest->metric.planCostUs, pRequest->metric.execCostUs); atomic_add_fetch_64((int64_t *)&pActivity->queryElapsedTime, duration); + reqType = SLOW_LOG_TYPE_QUERY; } } - if (duration >= SLOW_QUERY_INTERVAL) { + if (duration >= (tsSlowLogThreshold * 1000000UL)) { atomic_add_fetch_64((int64_t *)&pActivity->numOfSlowQueries, 1); - tscWarnL("slow query: %s, duration:%" PRId64, pRequest->sqlstr, duration); + if (tsSlowLogScope & reqType) { + taosPrintSlowLog("PID:%d, Conn:%u, QID:0x%" PRIx64 ", Start:%" PRId64 ", Duration:%" PRId64 "us, SQL:%s", + taosGetPId(), pTscObj->connId, pRequest->requestId, pRequest->metric.start, duration, pRequest->sqlstr); + } } releaseTscObj(pTscObj->id); @@ -427,8 +432,12 @@ static void *tscCrashReportThreadFp(void *param) { } #endif + if (-1 != atomic_val_compare_exchange_32(&clientStop, -1, 0)) { + return NULL; + } + while (1) { - if (clientStop) break; + if (clientStop > 0) break; if (loopTimes++ < reportPeriodNum) { taosMsleep(sleepTime); continue; @@ -466,7 +475,7 @@ static void *tscCrashReportThreadFp(void *param) { loopTimes = 0; } - clientStop = -1; + clientStop = -2; return NULL; } diff --git a/source/client/src/clientImpl.c b/source/client/src/clientImpl.c index b9fa12b587..f8eade1d7c 100644 --- a/source/client/src/clientImpl.c +++ b/source/client/src/clientImpl.c @@ -1248,6 +1248,11 @@ STscObj* taosConnectImpl(const char* user, const char* auth, const char* db, __t return NULL; } + pRequest->sqlstr = taosStrdup("taos_connect"); + if (pRequest->sqlstr) { + pRequest->sqlLen = strlen(pRequest->sqlstr); + } + SMsgSendInfo* body = buildConnectMsg(pRequest); int64_t transporterId = 0; diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 17150286e1..011a1b3111 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -660,6 +660,7 @@ static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SArray *pColumns, SMCreateStbReq pReq = {0}; int32_t code = TSDB_CODE_SUCCESS; SCmdMsgInfo pCmdMsg = {0}; + char *pSql = NULL; // put front for free pReq.numOfColumns = taosArrayGetSize(pColumns); @@ -667,7 +668,27 @@ static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SArray *pColumns, pReq.numOfTags = taosArrayGetSize(pTags); pReq.pTags = pTags; - code = buildRequest(info->taos->id, "", 0, NULL, false, &pRequest, 0); + if (action == SCHEMA_ACTION_CREATE_STABLE) { + pReq.colVer = 1; + pReq.tagVer = 1; + pReq.suid = 0; + pReq.source = TD_REQ_FROM_APP; + pSql = "sml_create_stable"; + } else if (action == SCHEMA_ACTION_ADD_TAG || action == SCHEMA_ACTION_CHANGE_TAG_SIZE) { + pReq.colVer = pTableMeta->sversion; + pReq.tagVer = pTableMeta->tversion + 1; + pReq.suid = pTableMeta->uid; + pReq.source = TD_REQ_FROM_TAOX; + pSql = (action == SCHEMA_ACTION_ADD_TAG) ? "sml_add_tag" : "sml_modify_tag_size"; + } else if (action == SCHEMA_ACTION_ADD_COLUMN || action == SCHEMA_ACTION_CHANGE_COLUMN_SIZE) { + pReq.colVer = pTableMeta->sversion + 1; + pReq.tagVer = pTableMeta->tversion; + pReq.suid = pTableMeta->uid; + pReq.source = TD_REQ_FROM_TAOX; + pSql = (action == SCHEMA_ACTION_ADD_COLUMN) ? "sml_add_column" : "sml_modify_column_size"; + } + + code = buildRequest(info->taos->id, pSql, strlen(pSql), NULL, false, &pRequest, 0); if (code != TSDB_CODE_SUCCESS) { goto end; } @@ -678,23 +699,6 @@ static int32_t smlSendMetaMsg(SSmlHandle *info, SName *pName, SArray *pColumns, goto end; } - if (action == SCHEMA_ACTION_CREATE_STABLE) { - pReq.colVer = 1; - pReq.tagVer = 1; - pReq.suid = 0; - pReq.source = TD_REQ_FROM_APP; - } else if (action == SCHEMA_ACTION_ADD_TAG || action == SCHEMA_ACTION_CHANGE_TAG_SIZE) { - pReq.colVer = pTableMeta->sversion; - pReq.tagVer = pTableMeta->tversion + 1; - pReq.suid = pTableMeta->uid; - pReq.source = TD_REQ_FROM_TAOX; - } else if (action == SCHEMA_ACTION_ADD_COLUMN || action == SCHEMA_ACTION_CHANGE_COLUMN_SIZE) { - pReq.colVer = pTableMeta->sversion + 1; - pReq.tagVer = pTableMeta->tversion; - pReq.suid = pTableMeta->uid; - pReq.source = TD_REQ_FROM_TAOX; - } - if (pReq.numOfTags == 0) { pReq.numOfTags = 1; SField field = {0}; @@ -1514,6 +1518,44 @@ static int smlProcess(SSmlHandle *info, char *lines[], char *rawLine, char *rawL return code; } +void smlSetReqSQL(SRequestObj *request, char *lines[], char *rawLine, char *rawLineEnd) { + if (tsSlowLogScope & SLOW_LOG_TYPE_INSERT) { + int32_t len = 0; + int32_t rlen = 0; + char* p = NULL; + + if (lines && lines[0]) { + len = strlen(lines[0]); + p = lines[0]; + } else if (rawLine) { + if (rawLineEnd) { + len = rawLineEnd - rawLine; + } else { + len = strlen(rawLine); + } + p = rawLine; + } + + if (NULL == p) { + return; + } + + rlen = TMIN(len, TSDB_MAX_ALLOWED_SQL_LEN); + rlen = TMAX(rlen, 0); + + char *sql = taosMemoryMalloc(rlen + 1); + if (NULL == sql) { + uError("malloc %d for sml sql failed", rlen + 1); + return; + } + memcpy(sql, p, rlen); + sql[rlen] = 0; + + request->sqlstr = sql; + request->sqlLen = rlen; + } +} + TAOS_RES *taos_schemaless_insert_inner(TAOS *taos, char *lines[], char *rawLine, char *rawLineEnd, int numLines, int protocol, int precision, int32_t ttl, int64_t reqid) { int32_t code = TSDB_CODE_SUCCESS; @@ -1546,6 +1588,8 @@ TAOS_RES *taos_schemaless_insert_inner(TAOS *taos, char *lines[], char *rawLine, info->msgBuf.len = ERROR_MSG_BUF_DEFAULT_SIZE; info->lineNum = numLines; + smlSetReqSQL(request, lines, rawLine, rawLineEnd); + SSmlMsgBuf msg = {ERROR_MSG_BUF_DEFAULT_SIZE, request->msgBuf}; if (request->pDb == NULL) { request->code = TSDB_CODE_PAR_DB_NOT_SPECIFIED; diff --git a/source/common/src/tdataformat.c b/source/common/src/tdataformat.c index d6ab974c6c..0cbc5b2230 100644 --- a/source/common/src/tdataformat.c +++ b/source/common/src/tdataformat.c @@ -500,7 +500,7 @@ int32_t tRowGet(SRow *pRow, STSchema *pTSchema, int32_t iCol, SColVal *pColVal) break; default: ASSERTS(0, "invalid row format"); - return TSDB_CODE_IVLD_DATA_FMT; + return TSDB_CODE_INVALID_DATA_FMT; } if (bv == BIT_FLG_NONE) { @@ -938,7 +938,7 @@ static int32_t tRowTupleUpsertColData(SRow *pRow, STSchema *pTSchema, SColData * break; default: ASSERTS(0, "Invalid row flag"); - return TSDB_CODE_IVLD_DATA_FMT; + return TSDB_CODE_INVALID_DATA_FMT; } while (pColData) { @@ -963,7 +963,7 @@ static int32_t tRowTupleUpsertColData(SRow *pRow, STSchema *pTSchema, SColData * break; default: ASSERTS(0, "Invalid row flag"); - return TSDB_CODE_IVLD_DATA_FMT; + return TSDB_CODE_INVALID_DATA_FMT; } if (bv == BIT_FLG_NONE) { @@ -1054,7 +1054,7 @@ static int32_t tRowKVUpsertColData(SRow *pRow, STSchema *pTSchema, SColData *aCo pData = pv + ((uint32_t *)pKVIdx->idx)[iCol]; } else { ASSERTS(0, "Invalid KV row format"); - return TSDB_CODE_IVLD_DATA_FMT; + return TSDB_CODE_INVALID_DATA_FMT; } int16_t cid; diff --git a/source/common/src/tglobal.c b/source/common/src/tglobal.c index 224b0068fe..0a29a415f0 100644 --- a/source/common/src/tglobal.c +++ b/source/common/src/tglobal.c @@ -724,7 +724,13 @@ static int32_t taosSetSlowLogScope(char *pScope) { return 0; } + if (0 == strcasecmp(pScope, "none")) { + tsSlowLogScope = 0; + return 0; + } + uError("Invalid slowLog scope value:%s", pScope); + terrno = TSDB_CODE_INVALID_CFG_VALUE; return -1; } @@ -1199,7 +1205,9 @@ int32_t taosSetCfg(SConfig *pCfg, char *name) { } else if (strcasecmp("slowLogThreshold", name) == 0) { tsSlowLogThreshold = cfgGetItem(pCfg, "slowLogThreshold")->i32; } else if (strcasecmp("slowLogScope", name) == 0) { - taosSetSlowLogScope(cfgGetItem(pCfg, "slowLogScope")->str) + if (taosSetSlowLogScope(cfgGetItem(pCfg, "slowLogScope")->str)) { + return -1; + } } break; } diff --git a/source/util/src/terror.c b/source/util/src/terror.c index 34b09761c8..29e76bfaf4 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -98,7 +98,8 @@ TAOS_DEFINE_ERROR(TSDB_CODE_NO_ENOUGH_DISKSPACE, "No enough disk space" TAOS_DEFINE_ERROR(TSDB_CODE_APP_IS_STARTING, "Database is starting up") TAOS_DEFINE_ERROR(TSDB_CODE_APP_IS_STOPPING, "Database is closing down") -TAOS_DEFINE_ERROR(TSDB_CODE_IVLD_DATA_FMT, "Invalid data format") +TAOS_DEFINE_ERROR(TSDB_CODE_INVALID_DATA_FMT, "Invalid data format") +TAOS_DEFINE_ERROR(TSDB_CODE_INVALID_CFG_VALUE, "Invalid configuration value") //client TAOS_DEFINE_ERROR(TSDB_CODE_TSC_INVALID_OPERATION, "Invalid operation") diff --git a/source/util/src/tlog.c b/source/util/src/tlog.c index 10ab389873..688d893d53 100644 --- a/source/util/src/tlog.c +++ b/source/util/src/tlog.c @@ -24,7 +24,7 @@ #define LOG_MAX_LINE_SIZE (10024) #define LOG_MAX_LINE_BUFFER_SIZE (LOG_MAX_LINE_SIZE + 3) #define LOG_MAX_LINE_DUMP_SIZE (1024 * 1024) -#define LOG_MAX_LINE_DUMP_BUFFER_SIZE (LOG_MAX_LINE_DUMP_SIZE + 3) +#define LOG_MAX_LINE_DUMP_BUFFER_SIZE (LOG_MAX_LINE_DUMP_SIZE + 128) #define LOG_FILE_NAME_LEN 300 #define LOG_DEFAULT_BUF_SIZE (20 * 1024 * 1024) // 20MB @@ -52,6 +52,8 @@ typedef struct { int32_t stop; TdThread asyncThread; TdThreadMutex buffMutex; + int32_t writeInterval; + int32_t lastDuration; } SLogBuff; typedef struct { @@ -71,7 +73,6 @@ extern SConfig *tsCfg; static int8_t tsLogInited = 0; static SLogObj tsLogObj = {.fileNum = 1}; static int64_t tsAsyncLogLostLines = 0; -static int32_t tsWriteInterval = LOG_DEFAULT_INTERVAL; static int32_t tsDaylightActive; /* Currently in daylight saving time. */ bool tsLogEmbedded = 0; @@ -84,6 +85,7 @@ int64_t tsNumOfErrorLogs = 0; int64_t tsNumOfInfoLogs = 0; int64_t tsNumOfDebugLogs = 0; int64_t tsNumOfTraceLogs = 0; +int64_t tsNumOfSlowLogs = 0; // log int32_t dDebugFlag = 131; @@ -203,7 +205,6 @@ void taosCloseLog() { taosThreadMutexDestroy(&tsLogObj.slowHandle->buffMutex); taosCloseFile(&tsLogObj.slowHandle->pFile); taosMemoryFreeClear(tsLogObj.slowHandle->buffer); - memset(&tsLogObj.slowHandle->buffer, 0, sizeof(tsLogObj.slowHandle->buffer)); taosMemoryFreeClear(tsLogObj.slowHandle); } @@ -217,7 +218,6 @@ void taosCloseLog() { taosThreadMutexDestroy(&tsLogObj.logHandle->buffMutex); taosCloseFile(&tsLogObj.logHandle->pFile); taosMemoryFreeClear(tsLogObj.logHandle->buffer); - memset(&tsLogObj.logHandle->buffer, 0, sizeof(tsLogObj.logHandle->buffer)); taosThreadMutexDestroy(&tsLogObj.logMutex); taosMemoryFreeClear(tsLogObj.logHandle); } @@ -555,10 +555,9 @@ void taosPrintLongString(const char *flags, ELogLevel level, int32_t dflag, cons va_list argpointer; va_start(argpointer, format); - len += vsnprintf(buffer + len, LOG_MAX_LINE_DUMP_BUFFER_SIZE - len, format, argpointer); + len += vsnprintf(buffer + len, LOG_MAX_LINE_DUMP_BUFFER_SIZE - 2 - len, format, argpointer); va_end(argpointer); - if (len > LOG_MAX_LINE_DUMP_SIZE) len = LOG_MAX_LINE_DUMP_SIZE; buffer[len++] = '\n'; buffer[len] = 0; @@ -566,6 +565,31 @@ void taosPrintLongString(const char *flags, ELogLevel level, int32_t dflag, cons taosMemoryFree(buffer); } +void taosPrintSlowLog(const char *format, ...) { + if (!osLogSpaceAvailable()) return; + + char *buffer = taosMemoryMalloc(LOG_MAX_LINE_DUMP_BUFFER_SIZE); + int32_t len = taosBuildLogHead(buffer, ""); + + va_list argpointer; + va_start(argpointer, format); + len += vsnprintf(buffer + len, LOG_MAX_LINE_DUMP_BUFFER_SIZE - 2 - len, format, argpointer); + va_end(argpointer); + + buffer[len++] = '\n'; + buffer[len] = 0; + + atomic_add_fetch_64(&tsNumOfSlowLogs, 1); + + if (tsAsyncLog) { + taosPushLogBuffer(tsLogObj.slowHandle, buffer, len); + } else { + taosWriteFile(tsLogObj.slowHandle->pFile, buffer, len); + } + + taosMemoryFree(buffer); +} + void taosDumpData(unsigned char *msg, int32_t len) { if (!osLogSpaceAvailable()) return; taosUpdateLogNums(DEBUG_DUMP); @@ -610,6 +634,7 @@ static SLogBuff *taosLogBuffNew(int32_t bufSize) { LOG_BUF_SIZE(pLogBuf) = bufSize; pLogBuf->minBuffSize = bufSize / 10; pLogBuf->stop = 0; + pLogBuf->writeInterval = LOG_DEFAULT_INTERVAL; if (taosThreadMutexInit(&LOG_BUF_MUTEX(pLogBuf), NULL) < 0) goto _err; // tsem_init(&(pLogBuf->buffNotEmpty), 0, 0); @@ -693,83 +718,78 @@ static int32_t taosGetLogRemainSize(SLogBuff *pLogBuf, int32_t start, int32_t en } static void taosWriteLog(SLogBuff *pLogBuf) { - static int32_t lastDuration = 0; - int32_t remainChecked = 0; - int32_t start, end, pollSize; + int32_t start = LOG_BUF_START(pLogBuf); + int32_t end = LOG_BUF_END(pLogBuf); - do { - if (remainChecked == 0) { - start = LOG_BUF_START(pLogBuf); - end = LOG_BUF_END(pLogBuf); + if (start == end) { + dbgEmptyW++; + pLogBuf->writeInterval = LOG_MAX_INTERVAL; + return; + } - if (start == end) { - dbgEmptyW++; - tsWriteInterval = LOG_MAX_INTERVAL; - return; - } - - pollSize = taosGetLogRemainSize(pLogBuf, start, end); - if (pollSize < pLogBuf->minBuffSize) { - lastDuration += tsWriteInterval; - if (lastDuration < LOG_MAX_WAIT_MSEC) { - break; - } - } - - lastDuration = 0; + int32_t pollSize = taosGetLogRemainSize(pLogBuf, start, end); + if (pollSize < pLogBuf->minBuffSize) { + pLogBuf->lastDuration += pLogBuf->writeInterval; + if (pLogBuf->lastDuration < LOG_MAX_WAIT_MSEC) { + return; } + } - if (start < end) { - taosWriteFile(pLogBuf->pFile, LOG_BUF_BUFFER(pLogBuf) + start, pollSize); - } else { - int32_t tsize = LOG_BUF_SIZE(pLogBuf) - start; - taosWriteFile(pLogBuf->pFile, LOG_BUF_BUFFER(pLogBuf) + start, tsize); + pLogBuf->lastDuration = 0; - taosWriteFile(pLogBuf->pFile, LOG_BUF_BUFFER(pLogBuf), end); + if (start < end) { + taosWriteFile(pLogBuf->pFile, LOG_BUF_BUFFER(pLogBuf) + start, pollSize); + } else { + int32_t tsize = LOG_BUF_SIZE(pLogBuf) - start; + taosWriteFile(pLogBuf->pFile, LOG_BUF_BUFFER(pLogBuf) + start, tsize); + + taosWriteFile(pLogBuf->pFile, LOG_BUF_BUFFER(pLogBuf), end); + } + + dbgWN++; + dbgWSize += pollSize; + + if (pollSize < pLogBuf->minBuffSize) { + dbgSmallWN++; + if (pLogBuf->writeInterval < LOG_MAX_INTERVAL) { + pLogBuf->writeInterval += LOG_INTERVAL_STEP; } - - dbgWN++; - dbgWSize += pollSize; - - if (pollSize < pLogBuf->minBuffSize) { - dbgSmallWN++; - if (tsWriteInterval < LOG_MAX_INTERVAL) { - tsWriteInterval += LOG_INTERVAL_STEP; - } - } else if (pollSize > LOG_BUF_SIZE(pLogBuf) / 3) { - dbgBigWN++; - tsWriteInterval = LOG_MIN_INTERVAL; - } else if (pollSize > LOG_BUF_SIZE(pLogBuf) / 4) { - if (tsWriteInterval > LOG_MIN_INTERVAL) { - tsWriteInterval -= LOG_INTERVAL_STEP; - } + } else if (pollSize > LOG_BUF_SIZE(pLogBuf) / 3) { + dbgBigWN++; + pLogBuf->writeInterval = LOG_MIN_INTERVAL; + } else if (pollSize > LOG_BUF_SIZE(pLogBuf) / 4) { + if (pLogBuf->writeInterval > LOG_MIN_INTERVAL) { + pLogBuf->writeInterval -= LOG_INTERVAL_STEP; } + } - LOG_BUF_START(pLogBuf) = (LOG_BUF_START(pLogBuf) + pollSize) % LOG_BUF_SIZE(pLogBuf); + LOG_BUF_START(pLogBuf) = (LOG_BUF_START(pLogBuf) + pollSize) % LOG_BUF_SIZE(pLogBuf); - start = LOG_BUF_START(pLogBuf); - end = LOG_BUF_END(pLogBuf); + start = LOG_BUF_START(pLogBuf); + end = LOG_BUF_END(pLogBuf); - pollSize = taosGetLogRemainSize(pLogBuf, start, end); - if (pollSize < pLogBuf->minBuffSize) { - break; - } + pollSize = taosGetLogRemainSize(pLogBuf, start, end); + if (pollSize < pLogBuf->minBuffSize) { + return; + } - tsWriteInterval = LOG_MIN_INTERVAL; - - remainChecked = 1; - } while (1); + pLogBuf->writeInterval = 0; } static void *taosAsyncOutputLog(void *param) { - SLogBuff *pLogBuf = (SLogBuff *)param; + SLogBuff *pLogBuf = (SLogBuff *)tsLogObj.logHandle; + SLogBuff *pSlowBuf = (SLogBuff *)tsLogObj.slowHandle; + setThreadName("log"); int32_t count = 0; int32_t updateCron = 0; + int32_t writeInterval = 0; + while (1) { - count += tsWriteInterval; + writeInterval = TMIN(pLogBuf->writeInterval, pSlowBuf->writeInterval); + count += writeInterval; updateCron++; - taosMsleep(tsWriteInterval); + taosMsleep(writeInterval); if (count > 1000) { osUpdate(); count = 0; @@ -777,13 +797,14 @@ static void *taosAsyncOutputLog(void *param) { // Polling the buffer taosWriteLog(pLogBuf); + taosWriteLog(pSlowBuf); if (updateCron >= 3600 * 24 * 40 / 2) { taosUpdateDaylight(); updateCron = 0; } - if (pLogBuf->stop) break; + if (pLogBuf->stop || pSlowBuf->stop) break; } return NULL; From 3b88efbd5f9a462c023e30f03b662594eeefb272 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 24 Apr 2023 17:07:20 +0800 Subject: [PATCH 036/156] cache/delete: fix null column values --- source/dnode/vnode/src/tsdb/tsdbCache.c | 15 ++++----------- 1 file changed, 4 insertions(+), 11 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index dc12f11e23..2e93230a7d 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -391,22 +391,15 @@ int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKE rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch; for (int i = 0; i < num_keys; ++i) { - SLastCol *pLastCol = NULL; - if (NULL != values_list[i]) { - pLastCol = tsdbCacheDeserialize(values_list[i]); - } - - if (NULL != pLastCol || (pLastCol->ts <= eKey && pLastCol->ts >= sKey)) { + SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i]); + if (NULL != pLastCol && (pLastCol->ts <= eKey && pLastCol->ts >= sKey)) { char key[ROCKS_KEY_LEN]; size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, pLastCol->colVal.cid); rocksdb_writebatch_delete(wb, key, klen); } - if (NULL != values_list[i + num_keys]) { - pLastCol = tsdbCacheDeserialize(values_list[i + num_keys]); - } - - if (NULL != pLastCol || (pLastCol->ts <= eKey && pLastCol->ts >= sKey)) { + pLastCol = tsdbCacheDeserialize(values_list[i + num_keys]); + if (NULL != pLastCol && (pLastCol->ts <= eKey && pLastCol->ts >= sKey)) { char key[ROCKS_KEY_LEN]; size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last_row", uid, pLastCol->colVal.cid); rocksdb_writebatch_delete(wb, key, klen); From 4adc95f9db3bbae75572840d2a312f893b3bc9e8 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 24 Apr 2023 18:21:25 +0800 Subject: [PATCH 037/156] cache/deserialize: fix binary pData calc --- source/dnode/vnode/src/tsdb/tsdbCache.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 2e93230a7d..04564f72c2 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -137,7 +137,7 @@ SLastCol *tsdbCacheDeserialize(char const *value) { SLastCol *pLastCol = (SLastCol *)value; SColVal *pColVal = &pLastCol->colVal; if (IS_VAR_DATA_TYPE(pColVal->type)) { - pColVal->value.pData = (char *)value + sizeof(*pColVal); + pColVal->value.pData = (char *)value + sizeof(*pLastCol); } return pLastCol; From 28bff83096e9906f5233f59c864792cff040949a Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Mon, 24 Apr 2023 19:29:33 +0800 Subject: [PATCH 038/156] fix: log thread quit issue --- source/util/src/tlog.c | 9 +++++---- 1 file changed, 5 insertions(+), 4 deletions(-) diff --git a/source/util/src/tlog.c b/source/util/src/tlog.c index 688d893d53..baf6a9f319 100644 --- a/source/util/src/tlog.c +++ b/source/util/src/tlog.c @@ -201,6 +201,11 @@ static void taosStopLog() { void taosCloseLog() { taosStopLog(); + if (tsLogObj.logHandle != NULL && taosCheckPthreadValid(tsLogObj.logHandle->asyncThread)) { + taosThreadJoin(tsLogObj.logHandle->asyncThread, NULL); + taosThreadClear(&tsLogObj.logHandle->asyncThread); + } + if (tsLogObj.slowHandle != NULL) { taosThreadMutexDestroy(&tsLogObj.slowHandle->buffMutex); taosCloseFile(&tsLogObj.slowHandle->pFile); @@ -209,10 +214,6 @@ void taosCloseLog() { } if (tsLogObj.logHandle != NULL) { - if (tsLogObj.logHandle != NULL && taosCheckPthreadValid(tsLogObj.logHandle->asyncThread)) { - taosThreadJoin(tsLogObj.logHandle->asyncThread, NULL); - taosThreadClear(&tsLogObj.logHandle->asyncThread); - } tsLogInited = 0; taosThreadMutexDestroy(&tsLogObj.logHandle->buffMutex); From 2d4dd64584fb3bc6516bc1aa63570223efd35273 Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 24 Apr 2023 20:21:28 +0800 Subject: [PATCH 039/156] chore: more code --- include/libs/nodes/cmdnodes.h | 2 +- include/util/tdef.h | 7 + source/client/src/clientSml.c | 7 +- source/client/src/clientSmlJson.c | 2 + source/client/src/clientSmlLine.c | 5 +- source/client/src/clientSmlTelnet.c | 3 +- source/libs/parser/src/parAstCreater.c | 2 +- source/libs/parser/src/parTranslater.c | 10 +- tests/parallel_test/cases.task | 1 + .../1-insert/influxdb_line_taosc_insert.py | 17 +- tests/system-test/1-insert/stmt_error.py | 225 ++++++++++++++++++ tests/system-test/runAllOne.sh | 1 + tests/system-test/win-test-file | 1 + 13 files changed, 267 insertions(+), 16 deletions(-) create mode 100644 tests/system-test/1-insert/stmt_error.py diff --git a/include/libs/nodes/cmdnodes.h b/include/libs/nodes/cmdnodes.h index 0a9893907b..2323d044ec 100644 --- a/include/libs/nodes/cmdnodes.h +++ b/include/libs/nodes/cmdnodes.h @@ -30,7 +30,7 @@ extern "C" { #define SHOW_CREATE_DB_RESULT_COLS 2 #define SHOW_CREATE_DB_RESULT_FIELD1_LEN (TSDB_DB_NAME_LEN + VARSTR_HEADER_SIZE) -#define SHOW_CREATE_DB_RESULT_FIELD2_LEN (TSDB_MAX_BINARY_LEN) +#define SHOW_CREATE_DB_RESULT_FIELD2_LEN (TSDB_MAX_BINARY_LEN + VARSTR_HEADER_SIZE) #define SHOW_CREATE_TB_RESULT_COLS 2 #define SHOW_CREATE_TB_RESULT_FIELD1_LEN (TSDB_TABLE_NAME_LEN + VARSTR_HEADER_SIZE) diff --git a/include/util/tdef.h b/include/util/tdef.h index ead649a51c..7a2c57b9f8 100644 --- a/include/util/tdef.h +++ b/include/util/tdef.h @@ -22,6 +22,13 @@ extern "C" { #endif + +#if 1 +#define TASSERT assert(0); +#else +#define TASSERT +#endif + #define TSDB__packed #define TSKEY int64_t diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 17150286e1..0bca52449c 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -565,8 +565,8 @@ static int32_t smlFindNearestPowerOf2(int32_t length, uint8_t type) { } if (type == TSDB_DATA_TYPE_BINARY && result > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { result = TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE; - } else if (type == TSDB_DATA_TYPE_NCHAR && result > (TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) { - result = (TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE; + } else if (type == TSDB_DATA_TYPE_NCHAR && result > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) { + result = (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE; } if (type == TSDB_DATA_TYPE_NCHAR) { @@ -637,6 +637,9 @@ static int32_t smlBuildFieldsList(SSmlHandle *info, SSchema *schemaField, SHashO field.bytes = getBytes(kv->type, kv->length); memcpy(field.name, kv->key, kv->keyLen); taosArrayPush(results, &field); + if(numOfCols == 0) { + + } } else if (action == SCHEMA_ACTION_CHANGE_COLUMN_SIZE || action == SCHEMA_ACTION_CHANGE_TAG_SIZE) { uint16_t *index = (uint16_t *)taosHashGet(schemaHash, kv->key, kv->keyLen); if (index == NULL) { diff --git a/source/client/src/clientSmlJson.c b/source/client/src/clientSmlJson.c index c3a6e15697..b50a29d4fb 100644 --- a/source/client/src/clientSmlJson.c +++ b/source/client/src/clientSmlJson.c @@ -578,10 +578,12 @@ static int32_t smlConvertJSONString(SSmlKv *pVal, char *typeStr, cJSON *value) { pVal->length = (uint16_t)strlen(value->valuestring); if (pVal->type == TSDB_DATA_TYPE_BINARY && pVal->length > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { + TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } if (pVal->type == TSDB_DATA_TYPE_NCHAR && pVal->length > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) { + TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } diff --git a/source/client/src/clientSmlLine.c b/source/client/src/clientSmlLine.c index 335e3a1dc7..7dd087039b 100644 --- a/source/client/src/clientSmlLine.c +++ b/source/client/src/clientSmlLine.c @@ -81,6 +81,7 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) { pVal->type = TSDB_DATA_TYPE_BINARY; pVal->length -= BINARY_ADD_LEN; if (pVal->length > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { + TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } pVal->value += (BINARY_ADD_LEN - 1); @@ -94,6 +95,7 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) { pVal->type = TSDB_DATA_TYPE_NCHAR; pVal->length -= NCHAR_ADD_LEN; if (pVal->length > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) { + TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } pVal->value += (NCHAR_ADD_LEN - 1); @@ -236,7 +238,8 @@ static int32_t smlParseTagKv(SSmlHandle *info, char **sql, char *sqlEnd, SSmlLin PROCESS_SLASH(value, valueLen) } - if (unlikely(valueLen > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE)) { + if (unlikely(valueLen > (TSDB_MAX_TAGS_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE)) { + TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } diff --git a/source/client/src/clientSmlTelnet.c b/source/client/src/clientSmlTelnet.c index 036442573d..9baa1e5758 100644 --- a/source/client/src/clientSmlTelnet.c +++ b/source/client/src/clientSmlTelnet.c @@ -158,7 +158,8 @@ static int32_t smlParseTelnetTags(SSmlHandle *info, char *data, char *sqlEnd, SS return TSDB_CODE_TSC_INVALID_VALUE; } - if (unlikely(valueLen > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE)) { + if (unlikely(valueLen > (TSDB_MAX_TAGS_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE)) { + TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } diff --git a/source/libs/parser/src/parAstCreater.c b/source/libs/parser/src/parAstCreater.c index 2afe34c1f7..c53721f865 100644 --- a/source/libs/parser/src/parAstCreater.c +++ b/source/libs/parser/src/parAstCreater.c @@ -1208,7 +1208,7 @@ SDataType createDataType(uint8_t type) { } SDataType createVarLenDataType(uint8_t type, const SToken* pLen) { - SDataType dt = {.type = type, .precision = 0, .scale = 0, .bytes = taosStr2Int16(pLen->z, NULL, 10)}; + SDataType dt = {.type = type, .precision = 0, .scale = 0, .bytes = taosStr2Int32(pLen->z, NULL, 10)}; return dt; } diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index 25e92a55ec..20ca987250 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -4498,8 +4498,9 @@ static int32_t checkTableTagsSchema(STranslateContext* pCxt, SHashObj* pHash, SN code = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_ONLY_ONE_JSON_TAG); } if (TSDB_CODE_SUCCESS == code) { - if ((TSDB_DATA_TYPE_VARCHAR == pTag->dataType.type && calcTypeBytes(pTag->dataType) > TSDB_MAX_BINARY_LEN) || - (TSDB_DATA_TYPE_NCHAR == pTag->dataType.type && calcTypeBytes(pTag->dataType) > TSDB_MAX_NCHAR_LEN)) { + if ((TSDB_DATA_TYPE_VARCHAR == pTag->dataType.type && calcTypeBytes(pTag->dataType) > TSDB_MAX_TAGS_LEN) || + (TSDB_DATA_TYPE_NCHAR == pTag->dataType.type && calcTypeBytes(pTag->dataType) > TSDB_MAX_TAGS_LEN)) { + TASSERT code = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN); } } @@ -4551,6 +4552,7 @@ static int32_t checkTableColsSchema(STranslateContext* pCxt, SHashObj* pHash, in if (TSDB_CODE_SUCCESS == code) { if ((TSDB_DATA_TYPE_VARCHAR == pCol->dataType.type && calcTypeBytes(pCol->dataType) > TSDB_MAX_BINARY_LEN) || (TSDB_DATA_TYPE_NCHAR == pCol->dataType.type && calcTypeBytes(pCol->dataType) > TSDB_MAX_NCHAR_LEN)) { + TASSERT code = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN); } } @@ -5236,6 +5238,7 @@ static int32_t checkAlterSuperTableBySchema(STranslateContext* pCxt, SAlterTable if (TSDB_ALTER_TABLE_UPDATE_COLUMN_BYTES == pStmt->alterType) { if (calcTypeBytes(pStmt->dataType) > TSDB_MAX_FIELD_LEN) { + TASSERT return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN); } @@ -5245,7 +5248,8 @@ static int32_t checkAlterSuperTableBySchema(STranslateContext* pCxt, SAlterTable } if (TSDB_ALTER_TABLE_UPDATE_TAG_BYTES == pStmt->alterType) { - if (calcTypeBytes(pStmt->dataType) > TSDB_MAX_FIELD_LEN) { + if (calcTypeBytes(pStmt->dataType) > TSDB_MAX_TAGS_LEN) { + TASSERT return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN); } diff --git a/tests/parallel_test/cases.task b/tests/parallel_test/cases.task index dda4ec3e84..bf91f7c8c7 100644 --- a/tests/parallel_test/cases.task +++ b/tests/parallel_test/cases.task @@ -338,6 +338,7 @@ ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/delete_childtable.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/delete_normaltable.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/keep_expired.py +,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/stmt_error.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/drop.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/drop.py -N 3 -M 3 -i False -n 3 ,,y,system-test,./pytest.sh python3 ./test.py -f 2-query/join2.py diff --git a/tests/system-test/1-insert/influxdb_line_taosc_insert.py b/tests/system-test/1-insert/influxdb_line_taosc_insert.py index 6372502484..b53abc41aa 100644 --- a/tests/system-test/1-insert/influxdb_line_taosc_insert.py +++ b/tests/system-test/1-insert/influxdb_line_taosc_insert.py @@ -673,11 +673,11 @@ class TDTestCase: tdSql.checkNotEqual(err.errno, 0) # # # binary - # stb_name = tdCom.getLongName(7, "letters") - # input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(16374, "letters")}" 1626006833639000000' - # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + stb_name = tdCom.getLongName(7, "letters") + input_sql = f'{stb_name},t0=t c0=f,c11=f,c2=f,c3=f,c4=f,c5=f,c6=f,c7=f,c8=f,c9=f,c10=f,c12=f,c1="{tdCom.getLongName(65519, "letters")}" 1626006833639000000' + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - # input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(16375, "letters")}" 1626006833639000000' + # input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(65514, "letters")}" 1626006833639000000' # try: # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) # except SchemalessError as err: @@ -884,7 +884,7 @@ class TDTestCase: tdSql.checkRows(2) tdSql.checkNotEqual(tb_name1, tb_name3) - # * tag binary max is 16384, col+ts binary max 49151 + # * tag binary max is 16384, col+ts binary max 65531 def tagColBinaryMaxLengthCheckCase(self): """ every binary and nchar must be length+2 @@ -911,7 +911,10 @@ class TDTestCase: tdSql.checkRows(2) # # * check col,col+ts max in describe ---> 16143 - input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(16374, "letters")}",c2="{tdCom.getLongName(16374, "letters")}",c3="{tdCom.getLongName(16374, "letters")}",c4="{tdCom.getLongName(12, "letters")}" 1626006833639000000' + input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + + input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) tdSql.query(f"select * from {stb_name}") @@ -1280,7 +1283,7 @@ class TDTestCase: self.nowTsCheckCase() self.dateFormatTsCheckCase() self.illegalTsCheckCase() - # self.tagValueLengthCheckCase() + self.tagValueLengthCheckCase() self.colValueLengthCheckCase() self.tagColIllegalValueCheckCase() self.duplicateIdTagColInsertCheckCase() diff --git a/tests/system-test/1-insert/stmt_error.py b/tests/system-test/1-insert/stmt_error.py new file mode 100644 index 0000000000..b77cd488f7 --- /dev/null +++ b/tests/system-test/1-insert/stmt_error.py @@ -0,0 +1,225 @@ +# encoding:UTF-8 +from taos import * + +from ctypes import * +from datetime import datetime +import taos + +import taos +import time + +from util.log import * +from util.cases import * +from util.sql import * +from util.dnodes import * + +class TDTestCase: + def __init__(self): + self.err_case = 0 + self.curret_case = 0 + + def caseDescription(self): + + ''' + case1 : [TD-11899] : this is an test case for check stmt error use . + ''' + return + + def init(self, conn, logSql, replicaVar=1): + self.replicaVar = int(replicaVar) + tdLog.debug("start to execute %s" % __file__) + tdSql.init(conn.cursor(), logSql) + + def conn(self): + # type: () -> taos.TaosConnection + return connect() + + def test_stmt_insert(self,conn): + # type: (TaosConnection) -> None + + dbname = "pytest_taos_stmt" + try: + conn.execute("drop database if exists %s" % dbname) + conn.execute("create database if not exists %s" % dbname) + conn.select_db(dbname) + + conn.execute( + "create table if not exists log(ts timestamp, bo bool, nil tinyint, ti tinyint, si smallint, ii int,\ + bi bigint, tu tinyint unsigned, su smallint unsigned, iu int unsigned, bu bigint unsigned, \ + ff float, dd double, bb binary(65059), nn nchar(100), tt timestamp)", + ) + conn.load_table_info("log") + + + stmt = conn.statement("insert into log values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?)") + params = new_bind_params(16) + params[0].timestamp(1626861392589, PrecisionEnum.Milliseconds) + params[1].bool(True) + params[2].tinyint(None) + params[3].tinyint(2) + params[4].smallint(3) + params[5].int(4) + params[6].bigint(5) + params[7].tinyint_unsigned(6) + params[8].smallint_unsigned(7) + params[9].int_unsigned(8) + params[10].bigint_unsigned(9) + params[11].float(10.1) + params[12].double(10.11) + binaryStr = '123456789' + for i in range(1301): + binaryStr += "1234567890abcdefghij1234567890abcdefghij12345hello" + params[13].binary(binaryStr) + params[14].nchar("stmt") + params[15].timestamp(1626861392589, PrecisionEnum.Milliseconds) + + stmt.bind_param(params) + stmt.execute() + + assert stmt.affected_rows == 1 + stmt.close() + + querystmt=conn.statement("select ?, bo, nil, ti, si, ii,bi, tu, su, iu, bu, ff, dd, bb, nn, tt from log") + queryparam=new_bind_params(1) + print(type(queryparam)) + queryparam[0].binary("ts") + querystmt.bind_param(queryparam) + querystmt.execute() + result=querystmt.use_result() + + row=result.fetch_all() + print(row) + + assert row[0][1] == True + assert row[0][2] == None + for i in range(3, 10): + assert row[0][i] == i - 1 + #float == may not work as expected + # assert row[0][11] == c_float(10.1) + assert row[0][12] == 10.11 + assert row[0][13][65054:] == "hello" + assert row[0][14] == "stmt" + + conn.execute("drop database if exists %s" % dbname) + conn.close() + + except Exception as err: + conn.execute("drop database if exists %s" % dbname) + conn.close() + raise err + + def test_stmt_insert_error(self,conn): + # type: (TaosConnection) -> None + + dbname = "pytest_taos_stmt_error" + try: + conn.execute("drop database if exists %s" % dbname) + conn.execute("create database if not exists %s" % dbname) + conn.select_db(dbname) + + conn.execute( + "create table if not exists log(ts timestamp, bo bool, nil tinyint, ti tinyint, si smallint, ii int,\ + bi bigint, tu tinyint unsigned, su smallint unsigned, iu int unsigned, bu bigint unsigned, \ + ff float, dd double, bb binary(100), nn nchar(100), tt timestamp , error_data int )", + ) + conn.load_table_info("log") + + + stmt = conn.statement("insert into log values(?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,?,1000)") + params = new_bind_params(16) + params[0].timestamp(1626861392589, PrecisionEnum.Milliseconds) + params[1].bool(True) + params[2].tinyint(None) + params[3].tinyint(2) + params[4].smallint(3) + params[5].int(4) + params[6].bigint(5) + params[7].tinyint_unsigned(6) + params[8].smallint_unsigned(7) + params[9].int_unsigned(8) + params[10].bigint_unsigned(9) + params[11].float(10.1) + params[12].double(10.11) + params[13].binary("hello") + params[14].nchar("stmt") + params[15].timestamp(1626861392589, PrecisionEnum.Milliseconds) + + stmt.bind_param(params) + stmt.execute() + + conn.close() + + except Exception as err: + conn.execute("drop database if exists %s" % dbname) + conn.close() + raise err + + def test_stmt_insert_error_null_timestamp(self,conn): + + dbname = "pytest_taos_stmt_error_null_ts" + try: + conn.execute("drop database if exists %s" % dbname) + conn.execute("create database if not exists %s" % dbname) + conn.execute("alter database %s keep 36500" % dbname) + conn.select_db(dbname) + + conn.execute("create stable STB(ts timestamp, n int) tags(b int)") + + stmt = conn.statement("insert into ? using STB tags(?) values(?, ?)") + params = new_bind_params(1) + params[0].int(4); + stmt.set_tbname_tags("ct", params); + + multi_params = new_multi_binds(2); + multi_params[0].timestamp([9223372036854775808]) + multi_params[1].int([123]) + stmt.bind_param_batch(multi_params) + + stmt.execute() + result = stmt.use_result() + + result.close() + stmt.close() + + stmt = conn.statement("select * from STB") + stmt.execute() + result = stmt.use_result() + print(result.affected_rows) + row = result.next() + print(row) + + result.close() + stmt.close() + conn.close() + + except Exception as err: + conn.close() + raise err + + def run(self): + + self.test_stmt_insert(self.conn()) + try: + self.test_stmt_insert_error(self.conn()) + except Exception as error : + + if str(error)=='[0x0200]: no mix usage for ? and values': + tdLog.info('=========stmt error occured for bind part column ==============') + else: + tdLog.exit("expect error(%s) not occured" % str(error)) + + try: + self.test_stmt_insert_error_null_timestamp(self.conn()) + tdLog.exit("expect error not occured - 1") + except Exception as error : + if str(error)=='[0x060b]: Timestamp data out of range': + tdLog.info('=========stmt error occured for bind part column(NULL Timestamp) ==============') + else: + tdLog.exit("expect error(%s) not occured - 2" % str(error)) + + def stop(self): + tdSql.close() + tdLog.success("%s successfully executed" % __file__) + +tdCases.addWindows(__file__, TDTestCase()) +tdCases.addLinux(__file__, TDTestCase()) \ No newline at end of file diff --git a/tests/system-test/runAllOne.sh b/tests/system-test/runAllOne.sh index 5a8d358d98..40addb0ca5 100644 --- a/tests/system-test/runAllOne.sh +++ b/tests/system-test/runAllOne.sh @@ -287,6 +287,7 @@ python3 ./test.py -f 1-insert/tb_100w_data_order.py -P python3 ./test.py -f 1-insert/delete_childtable.py -P python3 ./test.py -f 1-insert/delete_normaltable.py -P python3 ./test.py -f 1-insert/keep_expired.py -P +python3 ./test.py -f 1-insert/stmt_error.py -P python3 ./test.py -f 1-insert/drop.py -P python3 ./test.py -f 2-query/join2.py -P python3 ./test.py -f 2-query/union1.py -P diff --git a/tests/system-test/win-test-file b/tests/system-test/win-test-file index 7e68c40fd8..8b7b7d868d 100644 --- a/tests/system-test/win-test-file +++ b/tests/system-test/win-test-file @@ -218,6 +218,7 @@ python3 ./test.py -f 1-insert/delete_stable.py python3 ./test.py -f 1-insert/delete_childtable.py python3 ./test.py -f 1-insert/delete_normaltable.py python3 ./test.py -f 1-insert/keep_expired.py +python3 ./test.py -f 1-insert/stmt_error.py python3 ./test.py -f 1-insert/drop.py python3 ./test.py -f 1-insert/drop.py -N 3 -M 3 -i False -n 3 python3 ./test.py -f 2-query/join2.py From 655774c32e4b69d29dc3c0509e2a9dedaa2112df Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Tue, 25 Apr 2023 09:00:41 +0800 Subject: [PATCH 040/156] fix: move error msg from tsc to shell --- tools/shell/src/shellEngine.c | 1 + 1 file changed, 1 insertion(+) diff --git a/tools/shell/src/shellEngine.c b/tools/shell/src/shellEngine.c index 910b067d4e..478801d3e7 100644 --- a/tools/shell/src/shellEngine.c +++ b/tools/shell/src/shellEngine.c @@ -1115,6 +1115,7 @@ int32_t shellExecute() { } if (shell.conn == NULL) { + printf("failed to connect to server, reason: %s\n", taos_errstr(NULL)); fflush(stdout); return -1; } From 33fba6f99a604dead98c37b048f9c28b36ff03d3 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 25 Apr 2023 09:48:56 +0800 Subject: [PATCH 041/156] cache/read: fix transferBuf index --- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 8 +++++--- 1 file changed, 5 insertions(+), 3 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 1c5a7c815b..3d6bcd42e7 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -67,9 +67,10 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p if (!COL_VAL_IS_VALUE(&pColVal->colVal)) { colDataSetNULL(pColInfoData, numOfRows); } else { - varDataSetLen(pReader->transferBuf[dstSlotIds[i]], pVal->value.nData); - memcpy(varDataVal(pReader->transferBuf[dstSlotIds[i]]), pVal->value.pData, pVal->value.nData); - colDataSetVal(pColInfoData, numOfRows, pReader->transferBuf[dstSlotIds[i]], false); + varDataSetLen(pReader->transferBuf[slotId], pVal->value.nData); + // tsdbError("buf:%p len:%d index:%d", pReader->transferBuf[slotId], pVal->value.nData, slotId); + memcpy(varDataVal(pReader->transferBuf[slotId]), pVal->value.pData, pVal->value.nData); + colDataSetVal(pColInfoData, numOfRows, pReader->transferBuf[slotId], false); } } else { colDataSetVal(pColInfoData, numOfRows, (const char*)&pVal->value.val, !COL_VAL_IS_VALUE(pVal)); @@ -154,6 +155,7 @@ int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, void* pTableIdList, for (int32_t i = 0; i < p->pSchema->numOfCols; ++i) { if (IS_VAR_DATA_TYPE(p->pSchema->columns[i].type)) { p->transferBuf[i] = taosMemoryMalloc(p->pSchema->columns[i].bytes); + // tsdbError("buf:%p len:%d index:%d", p->transferBuf[i], p->pSchema->columns[i].bytes, i); if (p->transferBuf[i] == NULL) { tsdbCacherowsReaderClose(p); return TSDB_CODE_OUT_OF_MEMORY; From b059cc4ee13a9ad4c5970cf9c6dc671bc701a3e1 Mon Sep 17 00:00:00 2001 From: kailixu Date: Tue, 25 Apr 2023 10:59:02 +0800 Subject: [PATCH 042/156] chore: code optimization --- source/client/inc/clientInt.h | 2 +- source/client/src/clientEnv.c | 2 +- source/client/src/clientHb.c | 13 +++++++++---- source/client/src/clientMain.c | 10 +++++++--- tests/script/api/passwdTest.c | 7 +++---- 5 files changed, 21 insertions(+), 13 deletions(-) diff --git a/source/client/inc/clientInt.h b/source/client/inc/clientInt.h index 8e20d7d275..ab8e20b85e 100644 --- a/source/client/inc/clientInt.h +++ b/source/client/inc/clientInt.h @@ -361,7 +361,7 @@ void stopAllRequests(SHashObj* pRequests); // conn level int hbRegisterConn(SAppHbMgr* pAppHbMgr, int64_t tscRefId, int64_t clusterId, int8_t connType); -void hbDeregisterConn(SAppHbMgr* pAppHbMgr, SClientHbKey connKey, void* param); +void hbDeregisterConn(STscObj* pTscObj, SClientHbKey connKey); typedef struct SSqlCallbackWrapper { SParseContext* pParseCtx; diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index 99569fdb57..ba02ae6731 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -239,7 +239,7 @@ void destroyTscObj(void *pObj) { tscTrace("begin to destroy tscObj %" PRIx64 " p:%p", tscId, pTscObj); SClientHbKey connKey = {.tscRid = pTscObj->id, .connType = pTscObj->connType}; - hbDeregisterConn(pTscObj->pAppInfo->pAppHbMgr, connKey, pTscObj->passInfo.fp); + hbDeregisterConn(pTscObj, connKey); destroyAllRequests(pTscObj->pRequests); taosHashCleanup(pTscObj->pRequests); diff --git a/source/client/src/clientHb.c b/source/client/src/clientHb.c index 79435da89f..4240a8510d 100644 --- a/source/client/src/clientHb.c +++ b/source/client/src/clientHb.c @@ -994,6 +994,7 @@ SAppHbMgr *appHbMgrInit(SAppInstInfo *pAppInstInfo, char *key) { // init stat pAppHbMgr->startTime = taosGetTimestampMs(); pAppHbMgr->connKeyCnt = 0; + pAppHbMgr->passKeyCnt = 0; pAppHbMgr->reportCnt = 0; pAppHbMgr->reportBytes = 0; pAppHbMgr->key = taosStrdup(key); @@ -1154,7 +1155,8 @@ int hbRegisterConn(SAppHbMgr *pAppHbMgr, int64_t tscRefId, int64_t clusterId, in } } -void hbDeregisterConn(SAppHbMgr *pAppHbMgr, SClientHbKey connKey, void *param) { +void hbDeregisterConn(STscObj *pTscObj, SClientHbKey connKey) { + SAppHbMgr *pAppHbMgr = pTscObj->pAppInfo->pAppHbMgr; SClientHbReq *pReq = taosHashAcquire(pAppHbMgr->activeInfo, &connKey, sizeof(SClientHbKey)); if (pReq) { tFreeClientHbReq(pReq); @@ -1167,7 +1169,10 @@ void hbDeregisterConn(SAppHbMgr *pAppHbMgr, SClientHbKey connKey, void *param) { } atomic_sub_fetch_32(&pAppHbMgr->connKeyCnt, 1); - if (param) { - atomic_sub_fetch_32(&pAppHbMgr->passKeyCnt, 1); + + taosThreadMutexLock(&pTscObj->mutex); + if (pTscObj->passInfo.fp) { + int32_t cnt = atomic_sub_fetch_32(&pAppHbMgr->passKeyCnt, 1); } -} + taosThreadMutexUnlock(&pTscObj->mutex); +} \ No newline at end of file diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c index aed11b4fb1..55465f227e 100644 --- a/source/client/src/clientMain.c +++ b/source/client/src/clientMain.c @@ -134,11 +134,15 @@ int taos_set_notify_cb(TAOS *taos, __taos_notify_fn_t fp, void *param, int type) switch (type) { case TAOS_NOTIFY_PASSVER: { + taosThreadMutexLock(&pObj->mutex); + if (fp && !pObj->passInfo.fp) { + atomic_add_fetch_32(&pObj->pAppInfo->pAppHbMgr->passKeyCnt, 1); + } else if (!fp && pObj->passInfo.fp) { + atomic_sub_fetch_32(&pObj->pAppInfo->pAppHbMgr->passKeyCnt, 1); + } pObj->passInfo.fp = fp; pObj->passInfo.param = param; - if (fp) { - atomic_add_fetch_32(&pObj->pAppInfo->pAppHbMgr->passKeyCnt, 1); - } + taosThreadMutexUnlock(&pObj->mutex); break; } default: { diff --git a/tests/script/api/passwdTest.c b/tests/script/api/passwdTest.c index 8a2b0a0390..1bf4987689 100644 --- a/tests/script/api/passwdTest.c +++ b/tests/script/api/passwdTest.c @@ -33,8 +33,7 @@ #define nUser 10 #define USER_LEN 24 -void Test(TAOS *taos, char *qstr); -void createUers(TAOS *taos, const char *host, char *qstr); +void createUsers(TAOS *taos, const char *host, char *qstr); void passVerTestMulti(const char *host, char *qstr); int nPassVerNotified = 0; @@ -98,14 +97,14 @@ int main(int argc, char *argv[]) { printf("failed to connect to server, reason:%s\n", "null taos" /*taos_errstr(taos)*/); exit(1); } - createUers(taos, argv[1], qstr); + createUsers(taos, argv[1], qstr); passVerTestMulti(argv[1], qstr); taos_close(taos); taos_cleanup(); } -void createUers(TAOS *taos, const char *host, char *qstr) { +void createUsers(TAOS *taos, const char *host, char *qstr) { // users for (int i = 0; i < nUser; ++i) { sprintf(users[i], "user%d", i); From 93ab81437f57733e536e108d8e34ad51115f94ff Mon Sep 17 00:00:00 2001 From: shenglian zhou Date: Tue, 25 Apr 2023 11:08:58 +0800 Subject: [PATCH 043/156] enhance: code refactor and operator full implement slimit --- source/libs/executor/src/scanoperator.c | 31 ++++++++++++++++--------- source/libs/planner/src/planOptimizer.c | 2 ++ 2 files changed, 22 insertions(+), 11 deletions(-) diff --git a/source/libs/executor/src/scanoperator.c b/source/libs/executor/src/scanoperator.c index b7315db204..6d62d55024 100644 --- a/source/libs/executor/src/scanoperator.c +++ b/source/libs/executor/src/scanoperator.c @@ -2512,9 +2512,11 @@ _error: return NULL; } -static void doTagScanOneTable(SOperatorInfo* pOperator, const SExecTaskInfo* pTaskInfo, STagScanInfo* pInfo, - const SExprInfo* pExprInfo, const SSDataBlock* pRes, int32_t size, const char* str, - int32_t* count, SMetaReader* mr) { +static void doTagScanOneTable(SOperatorInfo* pOperator, const SSDataBlock* pRes, int32_t count, SMetaReader* mr) { + SExecTaskInfo* pTaskInfo = pOperator->pTaskInfo; + STagScanInfo* pInfo = pOperator->info; + SExprInfo* pExprInfo = &pOperator->exprSupp.pExprInfo[0]; + STableKeyInfo* item = tableListGetInfo(pInfo->pTableListInfo, pInfo->curPos); int32_t code = metaGetTableEntryByUid(mr, item->uid); tDecoderClear(&(*mr).coder); @@ -2525,13 +2527,14 @@ static void doTagScanOneTable(SOperatorInfo* pOperator, const SExecTaskInfo* pTa T_LONG_JMP(pTaskInfo->env, terrno); } + char str[512]; for (int32_t j = 0; j < pOperator->exprSupp.numOfExprs; ++j) { SColumnInfoData* pDst = taosArrayGet(pRes->pDataBlock, pExprInfo[j].base.resSchema.slotId); // refactor later if (fmIsScanPseudoColumnFunc(pExprInfo[j].pExpr->_function.functionId)) { STR_TO_VARSTR(str, (*mr).me.name); - colDataSetVal(pDst, (*count), str, false); + colDataSetVal(pDst, (count), str, false); } else { // it is a tag value STagVal val = {0}; val.cid = pExprInfo[j].base.pParam[0].pCol->colId; @@ -2543,7 +2546,7 @@ static void doTagScanOneTable(SOperatorInfo* pOperator, const SExecTaskInfo* pTa } else { data = (char*)p; } - colDataSetVal(pDst, (*count), data, + colDataSetVal(pDst, (count), data, (data == NULL) || (pDst->info.type == TSDB_DATA_TYPE_JSON && tTagIsJsonNull(data))); if (pDst->info.type != TSDB_DATA_TYPE_JSON && p != NULL && IS_VAR_DATA_TYPE(((const STagVal*)p)->type) && @@ -2552,11 +2555,6 @@ static void doTagScanOneTable(SOperatorInfo* pOperator, const SExecTaskInfo* pTa } } } - - (*count) += 1; - if (++pInfo->curPos >= size) { - setOperatorCompleted(pOperator); - } } static SSDataBlock* doTagScan(SOperatorInfo* pOperator) { @@ -2583,9 +2581,20 @@ static SSDataBlock* doTagScan(SOperatorInfo* pOperator) { metaReaderInit(&mr, pInfo->readHandle.meta, 0); while (pInfo->curPos < size && count < pOperator->resultInfo.capacity) { - doTagScanOneTable(pOperator, pTaskInfo, pInfo, pExprInfo, pRes, size, str, &count, &mr); + doTagScanOneTable(pOperator, pRes, count, &mr); + ++count; + if (++pInfo->curPos >= size) { + setOperatorCompleted(pOperator); + } + // each table with tbname is a group, hence its own block, but only group when slimit exists for performance reason. if (pInfo->pSlimit != NULL) { + if (pInfo->curPos < pInfo->pSlimit->offset) { + continue; + } pInfo->pRes->info.id.groupId = calcGroupId(mr.me.name, strlen(mr.me.name)); + if (pInfo->curPos >= (pInfo->pSlimit->offset + pInfo->pSlimit->limit) - 1) { + setOperatorCompleted(pOperator); + } break; } } diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index 4ece9be304..4f8b57de5f 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -2442,6 +2442,8 @@ static void tagScanOptCloneAncestorSlimit(SLogicNode* pTableScanNode) { if (NULL != pNode) { //TODO: only set the slimit now. push down slimit later pTableScanNode->pSlimit = nodesCloneNode(pNode->pSlimit); + ((SLimitNode*)pTableScanNode->pSlimit)->limit += ((SLimitNode*)pTableScanNode->pSlimit)->offset; + ((SLimitNode*)pTableScanNode->pSlimit)->offset = 0; } return; } From ae773fee8b17102feca3bfbe7a4ce2c6112b8dc3 Mon Sep 17 00:00:00 2001 From: shenglian zhou Date: Tue, 25 Apr 2023 14:43:22 +0800 Subject: [PATCH 044/156] feature: add test case --- tests/parallel_test/cases.task | 1 + tests/script/tsim/query/tag_scan.sim | 48 ++++++++++++++++++++++++++++ 2 files changed, 49 insertions(+) create mode 100644 tests/script/tsim/query/tag_scan.sim diff --git a/tests/parallel_test/cases.task b/tests/parallel_test/cases.task index dda4ec3e84..9f9679ae35 100644 --- a/tests/parallel_test/cases.task +++ b/tests/parallel_test/cases.task @@ -887,6 +887,7 @@ ,,y,script,./test.sh -f tsim/query/emptyTsRange.sim ,,y,script,./test.sh -f tsim/query/partitionby.sim ,,y,script,./test.sh -f tsim/query/tableCount.sim +,,y,script,./test.sh -f tsim/query/tag_scan.sim ,,y,script,./test.sh -f tsim/query/nullColSma.sim ,,y,script,./test.sh -f tsim/qnode/basic1.sim ,,y,script,./test.sh -f tsim/snode/basic1.sim diff --git a/tests/script/tsim/query/tag_scan.sim b/tests/script/tsim/query/tag_scan.sim new file mode 100644 index 0000000000..03e3a20632 --- /dev/null +++ b/tests/script/tsim/query/tag_scan.sim @@ -0,0 +1,48 @@ +system sh/stop_dnodes.sh +system sh/deploy.sh -n dnode1 -i 1 +system sh/exec.sh -n dnode1 -s start +sql connect +sql drop database if exists test +sql create database test; +sql use test; + +sql create table st(ts timestamp, f int) tags (t int); +sql insert into ct1 using st tags(1) values(now, 1); +sql insert into ct2 using st tags(2) values(now, 2); +sql insert into ct3 using st tags(3) values(now, 3); +sql insert into ct4 using st tags(4) values(now, 4); + +sql create table st2(ts timestamp, f int) tags (t int); +sql insert into ct21 using st2 tags(1) values(now, 1); +sql insert into ct22 using st2 tags(2) values(now, 2); +sql insert into ct23 using st2 tags(3) values(now, 3); +sql insert into ct24 using st2 tags(4) values(now, 4); + +sql select tbname, 1 from st group by tbname order by tbname; +print $rows $data00 $data10 $data20 +if $rows != 4 then + return -1 +endi +if $data00 != @ct1@ then + return -1 +endi +if $data10 != @ct2@ then + return -1 +endi +sql select tbname, 1 from st group by tbname slimit 0, 1; +print $rows +if $rows != 1 then + return -1 +endi +sql select tbname, 1 from st group by tbname slimit 2, 2; +print $rows $data00 $data10 +if $rows != 2 then + return -1 +endi +sql select tbname, 1 from st group by tbname order by tbname slimit 0, 1; +print $rows $data00 $data10 $data20 +if $rows != 4 then + return -1 +endi + +system sh/exec.sh -n dnode1 -s stop -x SIGINT From 2fd9640a39ce88025fbe21ba73e10e6717ff2cc3 Mon Sep 17 00:00:00 2001 From: cadem Date: Tue, 25 Apr 2023 15:15:28 +0800 Subject: [PATCH 045/156] change learner config format --- source/dnode/vnode/src/vnd/vnodeCfg.c | 20 +++++++------------- source/libs/sync/src/syncRaftCfg.c | 14 ++++++-------- 2 files changed, 13 insertions(+), 21 deletions(-) diff --git a/source/dnode/vnode/src/vnd/vnodeCfg.c b/source/dnode/vnode/src/vnd/vnodeCfg.c index 65f32b0a85..511ba9cc24 100644 --- a/source/dnode/vnode/src/vnd/vnodeCfg.c +++ b/source/dnode/vnode/src/vnd/vnodeCfg.c @@ -60,19 +60,19 @@ int vnodeCheckCfg(const SVnodeCfg *pCfg) { const char* vnodeRoleToStr(ESyncRole role) { switch (role) { case TAOS_SYNC_ROLE_VOTER: - return "voter"; + return "true"; case TAOS_SYNC_ROLE_LEARNER: - return "learner"; + return "false"; default: return "unknown"; } } const ESyncRole vnodeStrToRole(char* str) { - if(strcmp(str, "voter") == 0){ + if(strcmp(str, "true") == 0){ return TAOS_SYNC_ROLE_VOTER; } - if(strcmp(str, "learner") == 0){ + if(strcmp(str, "false") == 0){ return TAOS_SYNC_ROLE_LEARNER; } @@ -139,7 +139,6 @@ int vnodeEncodeConfig(const void *pObj, SJson *pJson) { if (tjsonAddIntegerToObject(pJson, "hashSuffix", pCfg->hashSuffix) < 0) return -1; if (tjsonAddIntegerToObject(pJson, "syncCfg.replicaNum", pCfg->syncCfg.replicaNum) < 0) return -1; - if (tjsonAddIntegerToObject(pJson, "syncCfg.totalReplicaNum", pCfg->syncCfg.totalReplicaNum) < 0) return -1; if (tjsonAddIntegerToObject(pJson, "syncCfg.myIndex", pCfg->syncCfg.myIndex) < 0) return -1; if (tjsonAddIntegerToObject(pJson, "vndStats.stables", pCfg->vndStats.numOfSTables) < 0) return -1; @@ -161,7 +160,7 @@ int vnodeEncodeConfig(const void *pObj, SJson *pJson) { if (tjsonAddStringToObject(info, "nodeFqdn", pNode->nodeFqdn) < 0) return -1; if (tjsonAddIntegerToObject(info, "nodeId", pNode->nodeId) < 0) return -1; if (tjsonAddIntegerToObject(info, "clusterId", pNode->clusterId) < 0) return -1; - if (tjsonAddStringToObject(info, "nodeRole", vnodeRoleToStr(pNode->nodeRole)) < 0) return -1; + if (tjsonAddStringToObject(info, "isReplica", vnodeRoleToStr(pNode->nodeRole)) < 0) return -1; if (tjsonAddItemToArray(nodeInfo, info) < 0) return -1; vDebug("vgId:%d, encode config, replica:%d ep:%s:%u dnode:%d", pCfg->vgId, i, pNode->nodeFqdn, pNode->nodePort, pNode->nodeId); @@ -259,8 +258,6 @@ int vnodeDecodeConfig(const SJson *pJson, void *pObj) { tjsonGetNumberValue(pJson, "syncCfg.replicaNum", pCfg->syncCfg.replicaNum, code); if (code < 0) return -1; - tjsonGetNumberValue(pJson, "syncCfg.totalReplicaNum", pCfg->syncCfg.totalReplicaNum, code); - if (code < 0) return -1; tjsonGetNumberValue(pJson, "syncCfg.myIndex", pCfg->syncCfg.myIndex, code); if (code < 0) return -1; @@ -277,10 +274,7 @@ int vnodeDecodeConfig(const SJson *pJson, void *pObj) { SJson *nodeInfo = tjsonGetObjectItem(pJson, "syncCfg.nodeInfo"); int arraySize = tjsonGetArraySize(nodeInfo); - if(pCfg->syncCfg.totalReplicaNum == 0 && pCfg->syncCfg.replicaNum > 0){ - pCfg->syncCfg.totalReplicaNum = pCfg->syncCfg.replicaNum; - } - if (arraySize != pCfg->syncCfg.totalReplicaNum) return -1; + pCfg->syncCfg.totalReplicaNum = arraySize; vDebug("vgId:%d, decode config, replicas:%d totalReplicas:%d selfIndex:%d", pCfg->vgId, pCfg->syncCfg.replicaNum, pCfg->syncCfg.totalReplicaNum, pCfg->syncCfg.myIndex); @@ -296,7 +290,7 @@ int vnodeDecodeConfig(const SJson *pJson, void *pObj) { tjsonGetNumberValue(info, "clusterId", pNode->clusterId, code); if (code < 0) return -1; char role[10] = {0}; - code = tjsonGetStringValue(info, "nodeRole", role); + code = tjsonGetStringValue(info, "isReplica", role); if (code < 0) return -1; if(strlen(role) != 0){ pNode->nodeRole = vnodeStrToRole(role); diff --git a/source/libs/sync/src/syncRaftCfg.c b/source/libs/sync/src/syncRaftCfg.c index 576b9d62f5..480ed4b2af 100644 --- a/source/libs/sync/src/syncRaftCfg.c +++ b/source/libs/sync/src/syncRaftCfg.c @@ -21,19 +21,19 @@ const char* syncRoleToStr(ESyncRole role) { switch (role) { case TAOS_SYNC_ROLE_VOTER: - return "voter"; + return "true"; case TAOS_SYNC_ROLE_LEARNER: - return "learner"; + return "false"; default: return "unknown"; } } const ESyncRole syncStrToRole(char* str) { - if(strcmp(str, "voter") == 0){ + if(strcmp(str, "true") == 0){ return TAOS_SYNC_ROLE_VOTER; } - if(strcmp(str, "learner") == 0){ + if(strcmp(str, "false") == 0){ return TAOS_SYNC_ROLE_LEARNER; } @@ -42,7 +42,6 @@ const ESyncRole syncStrToRole(char* str) { static int32_t syncEncodeSyncCfg(const void *pObj, SJson *pJson) { SSyncCfg *pCfg = (SSyncCfg *)pObj; - if (tjsonAddDoubleToObject(pJson, "totalReplicaNum", pCfg->totalReplicaNum) < 0) return -1; if (tjsonAddDoubleToObject(pJson, "replicaNum", pCfg->replicaNum) < 0) return -1; if (tjsonAddDoubleToObject(pJson, "myIndex", pCfg->myIndex) < 0) return -1; @@ -56,7 +55,7 @@ static int32_t syncEncodeSyncCfg(const void *pObj, SJson *pJson) { if (tjsonAddStringToObject(info, "nodeFqdn", pCfg->nodeInfo[i].nodeFqdn) < 0) return -1; if (tjsonAddIntegerToObject(info, "nodeId", pCfg->nodeInfo[i].nodeId) < 0) return -1; if (tjsonAddIntegerToObject(info, "clusterId", pCfg->nodeInfo[i].clusterId) < 0) return -1; - if (tjsonAddStringToObject(info, "nodeRole", syncRoleToStr(pCfg->nodeInfo[i].nodeRole)) < 0) return -1; + if (tjsonAddStringToObject(info, "isReplica", syncRoleToStr(pCfg->nodeInfo[i].nodeRole)) < 0) return -1; if (tjsonAddItemToArray(nodeInfo, info) < 0) return -1; } @@ -133,7 +132,6 @@ static int32_t syncDecodeSyncCfg(const SJson *pJson, void *pObj) { SSyncCfg *pCfg = (SSyncCfg *)pObj; int32_t code = 0; - tjsonGetInt32ValueFromDouble(pJson, "totalReplicaNum", pCfg->totalReplicaNum, code); tjsonGetInt32ValueFromDouble(pJson, "replicaNum", pCfg->replicaNum, code); if (code < 0) return -1; tjsonGetInt32ValueFromDouble(pJson, "myIndex", pCfg->myIndex, code); @@ -153,7 +151,7 @@ static int32_t syncDecodeSyncCfg(const SJson *pJson, void *pObj) { tjsonGetNumberValue(info, "nodeId", pCfg->nodeInfo[i].nodeId, code); tjsonGetNumberValue(info, "clusterId", pCfg->nodeInfo[i].clusterId, code); char role[10] = {0}; - code = tjsonGetStringValue(info, "nodeRole", role); + code = tjsonGetStringValue(info, "isReplica", role); if(code < 0) return -1; if(strlen(role) != 0){ pCfg->nodeInfo[i].nodeRole = syncStrToRole(role); From 12f28a4a4381790e281d669ca04fcbdd0ecfaca4 Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Mon, 24 Apr 2023 17:37:38 +0800 Subject: [PATCH 046/156] enh: unify error msg for no disk space --- source/libs/executor/src/groupoperator.c | 4 ++-- source/libs/executor/src/timewindowoperator.c | 4 ++-- source/libs/executor/src/tlinearhash.c | 4 ++-- source/libs/executor/src/tsort.c | 9 ++++----- source/libs/function/src/builtinsimpl.c | 8 ++++++-- source/libs/function/src/tpercentile.c | 2 +- source/libs/function/src/udfd.c | 2 +- 7 files changed, 18 insertions(+), 15 deletions(-) diff --git a/source/libs/executor/src/groupoperator.c b/source/libs/executor/src/groupoperator.c index 3d9bacf39f..612ecb4684 100644 --- a/source/libs/executor/src/groupoperator.c +++ b/source/libs/executor/src/groupoperator.c @@ -871,9 +871,9 @@ SOperatorInfo* createPartitionOperatorInfo(SOperatorInfo* downstream, SPartition getBufferPgSize(pInfo->binfo.pRes->info.rowSize, &defaultPgsz, &defaultBufsz); if (!osTempSpaceAvailable()) { - terrno = TSDB_CODE_NO_AVAIL_DISK; + terrno = TSDB_CODE_NO_DISKSPACE; pTaskInfo->code = terrno; - qError("Create partition operator info failed since %s", terrstr(terrno)); + qError("Create partition operator info failed since %s, tempDir:%s", terrstr(), tsTempDir); goto _error; } diff --git a/source/libs/executor/src/timewindowoperator.c b/source/libs/executor/src/timewindowoperator.c index 007a6f63d1..7afda3f597 100644 --- a/source/libs/executor/src/timewindowoperator.c +++ b/source/libs/executor/src/timewindowoperator.c @@ -2911,8 +2911,8 @@ int32_t initStreamAggSupporter(SStreamAggSupporter* pSup, SqlFunctionCtx* pCtx, bufSize = pageSize * 4; } if (!osTempSpaceAvailable()) { - terrno = TSDB_CODE_NO_AVAIL_DISK; - qError("Init stream agg supporter failed since %s", terrstr(terrno)); + terrno = TSDB_CODE_NO_DISKSPACE; + qError("Init stream agg supporter failed since %s, tempDir:%s", terrstr(), tsTempDir); return terrno; } int32_t code = createDiskbasedBuf(&pSup->pResultBuf, pageSize, bufSize, "function", tsTempDir); diff --git a/source/libs/executor/src/tlinearhash.c b/source/libs/executor/src/tlinearhash.c index 2cba3855c7..023583fcde 100644 --- a/source/libs/executor/src/tlinearhash.c +++ b/source/libs/executor/src/tlinearhash.c @@ -248,8 +248,8 @@ SLHashObj* tHashInit(int32_t inMemPages, int32_t pageSize, _hash_fn_t fn, int32_ } if (!osTempSpaceAvailable()) { - terrno = TSDB_CODE_NO_AVAIL_DISK; - printf("tHash Init failed since %s", terrstr(terrno)); + terrno = TSDB_CODE_NO_DISKSPACE; + printf("tHash Init failed since %s, tempDir:%s", terrstr(), tsTempDir); taosMemoryFree(pHashObj); return NULL; } diff --git a/source/libs/executor/src/tsort.c b/source/libs/executor/src/tsort.c index 6c8e581b3f..da5f65fdf2 100644 --- a/source/libs/executor/src/tsort.c +++ b/source/libs/executor/src/tsort.c @@ -195,8 +195,8 @@ static int32_t doAddToBuf(SSDataBlock* pDataBlock, SSortHandle* pHandle) { if (pHandle->pBuf == NULL) { if (!osTempSpaceAvailable()) { - terrno = TSDB_CODE_NO_AVAIL_DISK; - qError("Add to buf failed since %s", terrstr(terrno)); + terrno = TSDB_CODE_NO_DISKSPACE; + qError("Add to buf failed since %s, tempDir:%s", terrstr(), tsTempDir); return terrno; } @@ -261,9 +261,8 @@ static int32_t sortComparInit(SMsortComparParam* pParam, SArray* pSources, int32 // multi-pass internal merge sort is required if (pHandle->pBuf == NULL) { if (!osTempSpaceAvailable()) { - code = TSDB_CODE_NO_AVAIL_DISK; - terrno = code; - qError("Sort compare init failed since %s, %s", tstrerror(code), pHandle->idStr); + code = terrno = TSDB_CODE_NO_DISKSPACE; + qError("Sort compare init failed since %s, tempDir:%s, idStr:%s", terrstr(), tsTempDir, pHandle->idStr); return code; } diff --git a/source/libs/function/src/builtinsimpl.c b/source/libs/function/src/builtinsimpl.c index a8ecd9b0a2..ead53aade6 100644 --- a/source/libs/function/src/builtinsimpl.c +++ b/source/libs/function/src/builtinsimpl.c @@ -855,7 +855,9 @@ int32_t setSelectivityValue(SqlFunctionCtx* pCtx, SSDataBlock* pBlock, const STu int32_t numOfCols = pCtx->subsidiaries.num; const char* p = loadTupleData(pCtx, pTuplePos); if (p == NULL) { - terrno = TSDB_CODE_NO_AVAIL_DISK; + terrno = TSDB_CODE_NOT_FOUND; + qError("Load tuple data failed since %s, groupId:%" PRIu64 ", ts:%" PRId64, terrstr(), + pTuplePos->streamTupleKey.groupId, pTuplePos->streamTupleKey.ts); return terrno; } @@ -5098,7 +5100,9 @@ int32_t modeFinalize(SqlFunctionCtx* pCtx, SSDataBlock* pBlock) { if (maxCount != 0) { const char* pData = loadTupleData(pCtx, &resDataPos); if (pData == NULL) { - code = TSDB_CODE_NO_AVAIL_DISK; + code = terrno = TSDB_CODE_NOT_FOUND; + qError("Load tuple data failed since %s, groupId:%" PRIu64 ", ts:%" PRId64, terrstr(), + resDataPos.streamTupleKey.groupId, resDataPos.streamTupleKey.ts); modeFunctionCleanup(pInfo); return code; } diff --git a/source/libs/function/src/tpercentile.c b/source/libs/function/src/tpercentile.c index de381fadbd..2edfb33e81 100644 --- a/source/libs/function/src/tpercentile.c +++ b/source/libs/function/src/tpercentile.c @@ -277,7 +277,7 @@ tMemBucket *tMemBucketCreate(int16_t nElemSize, int16_t dataType, double minval, resetSlotInfo(pBucket); if (!osTempSpaceAvailable()) { - terrno = TSDB_CODE_NO_AVAIL_DISK; + terrno = TSDB_CODE_NO_DISKSPACE; // qError("MemBucket create disk based Buf failed since %s", terrstr(terrno)); tMemBucketDestroy(pBucket); return NULL; diff --git a/source/libs/function/src/udfd.c b/source/libs/function/src/udfd.c index 5034af2f82..d2553ba96d 100644 --- a/source/libs/function/src/udfd.c +++ b/source/libs/function/src/udfd.c @@ -844,7 +844,7 @@ void udfdGetFuncBodyPath(const SUdf *udf, char *path) { int32_t udfdSaveFuncBodyToFile(SFuncInfo *pFuncInfo, SUdf *udf) { if (!osDataSpaceAvailable()) { - terrno = TSDB_CODE_NO_AVAIL_DISK; + terrno = TSDB_CODE_NO_DISKSPACE; fnError("udfd create shared library failed since %s", terrstr(terrno)); return terrno; } From 7e2dee8a0f5d7a7c8e10e66a3d43d1a803966593 Mon Sep 17 00:00:00 2001 From: cadem Date: Tue, 25 Apr 2023 15:49:17 +0800 Subject: [PATCH 047/156] filter voter when agree upon --- source/libs/sync/src/syncCommit.c | 10 ++++++---- 1 file changed, 6 insertions(+), 4 deletions(-) diff --git a/source/libs/sync/src/syncCommit.c b/source/libs/sync/src/syncCommit.c index 2501b4df8b..d3397f8e88 100644 --- a/source/libs/sync/src/syncCommit.c +++ b/source/libs/sync/src/syncCommit.c @@ -75,10 +75,12 @@ bool syncNodeAgreedUpon(SSyncNode* pNode, SyncIndex index) { SSyncIndexMgr* pMatches = pNode->pMatchIndex; ASSERT(pNode->replicaNum == pMatches->replicaNum); - for (int i = 0; i < pNode->replicaNum; i++) { - SyncIndex matchIndex = pMatches->index[i]; - if (matchIndex >= index) { - count++; + for (int i = 0; i < pNode->totalReplicaNum; i++) { + if(pNode->raftCfg.cfg.nodeInfo[i].nodeRole == TAOS_SYNC_ROLE_VOTER){ + SyncIndex matchIndex = pMatches->index[i]; + if (matchIndex >= index) { + count++; + } } } From 2566bd4ae8bf94bdf760f1712dbd850ec1e795b6 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 25 Apr 2023 16:17:58 +0800 Subject: [PATCH 048/156] row/iter: fix null column iter --- source/common/src/tdataformat.c | 9 +++---- source/dnode/vnode/inc/vnode.h | 2 +- source/dnode/vnode/src/inc/tsdb.h | 1 + source/dnode/vnode/src/tsdb/tsdbCache.c | 22 +++++++++++++---- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 25 ++++++++++++++++---- source/libs/executor/src/cachescanoperator.c | 6 ++--- 6 files changed, 49 insertions(+), 16 deletions(-) diff --git a/source/common/src/tdataformat.c b/source/common/src/tdataformat.c index d6ab974c6c..66f539332a 100644 --- a/source/common/src/tdataformat.c +++ b/source/common/src/tdataformat.c @@ -755,7 +755,7 @@ SColVal *tRowIterNext(SRowIter *pIter) { } if (pIter->pRow->flag == HAS_NULL) { - pIter->cv = COL_VAL_NULL(pTColumn->type, pTColumn->colId); + pIter->cv = COL_VAL_NULL(pTColumn->colId, pTColumn->type); goto _exit; } @@ -2439,7 +2439,7 @@ _exit: int32_t tColDataAddValueByDataBlock(SColData *pColData, int8_t type, int32_t bytes, int32_t nRows, char *lengthOrbitmap, char *data) { int32_t code = 0; - if(data == NULL){ + if (data == NULL) { for (int32_t i = 0; i < nRows; ++i) { code = tColDataAppendValueImpl[pColData->flag][CV_FLAG_NONE](pColData, NULL, 0); } @@ -2453,8 +2453,9 @@ int32_t tColDataAddValueByDataBlock(SColData *pColData, int8_t type, int32_t byt code = tColDataAppendValueImpl[pColData->flag][CV_FLAG_NULL](pColData, NULL, 0); if (code) goto _exit; } else { - if(ASSERT(varDataTLen(data + offset) <= bytes)){ - uError("var data length invalid, varDataTLen(data + offset):%d <= bytes:%d", (int)varDataTLen(data + offset), bytes); + if (ASSERT(varDataTLen(data + offset) <= bytes)) { + uError("var data length invalid, varDataTLen(data + offset):%d <= bytes:%d", (int)varDataTLen(data + offset), + bytes); code = TSDB_CODE_INVALID_PARA; goto _exit; } diff --git a/source/dnode/vnode/inc/vnode.h b/source/dnode/vnode/inc/vnode.h index cadf6d1db7..ebb18ecc76 100644 --- a/source/dnode/vnode/inc/vnode.h +++ b/source/dnode/vnode/inc/vnode.h @@ -195,7 +195,7 @@ void *tsdbGetIvtIdx(SMeta *pMeta); uint64_t getReaderMaxVersion(STsdbReader *pReader); int32_t tsdbCacherowsReaderOpen(void *pVnode, int32_t type, void *pTableIdList, int32_t numOfTables, int32_t numOfCols, - SArray *pCidList, uint64_t suid, void **pReader, const char *idstr); + SArray *pCidList, int32_t *pSlotIds, uint64_t suid, void **pReader, const char *idstr); int32_t tsdbRetrieveCacheRows(void *pReader, SSDataBlock *pResBlock, const int32_t *slotIds, const int32_t *dstSlotIds, SArray *pTableUids); void *tsdbCacherowsReaderClose(void *pReader); diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index ee816bb122..08d4cbad1a 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -788,6 +788,7 @@ typedef struct SCacheRowsReader { char **transferBuf; // todo remove it soon int32_t numOfCols; SArray *pCidList; + int32_t *pSlotIds; int32_t type; int32_t tableIndex; // currently returned result tables STableKeyInfo *pTableList; // table id list diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 04564f72c2..63dbb9a4d3 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -259,10 +259,7 @@ int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow } if (!COL_VAL_IS_NONE(pColVal)) { - SLastCol *pLastCol = NULL; - if (NULL != values_list[i + num_keys]) { - pLastCol = tsdbCacheDeserialize(values_list[i + num_keys]); - } + SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i + num_keys]); if (NULL == pLastCol || pLastCol->ts <= keyTs) { char *value = NULL; @@ -330,7 +327,24 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR for (int i = 0; i < num_keys; ++i) { SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i]); + if (pLastCol) { + SColVal *pColVal = &pLastCol->colVal; + if (IS_VAR_DATA_TYPE(pColVal->type)) { + uint8_t *pVal = pColVal->value.pData; + if (pColVal->value.nData) { + pColVal->value.pData = taosMemoryMalloc(pColVal->value.nData); + memcpy(pColVal->value.pData, pVal, pColVal->value.nData); + } + } + } else { + // recalc: load from tsdb + // still null, then make up a null col value + int16_t cid = *(int16_t *)taosArrayGet(pCidList, i); + pLastCol = &(SLastCol){.ts = TSKEY_MIN, .colVal = COL_VAL_NONE(cid, pr->pSchema->columns[pr->pSlotIds[i]].type)}; + + // maybe store it back to rocks cache + } taosArrayPush(pLastArray, pLastCol); taosMemoryFree(values_list[i]); diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 3d6bcd42e7..48c72ca685 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -23,10 +23,9 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* pReader, const int32_t* slotIds, const int32_t* dstSlotIds, void** pRes, const char* idStr) { int32_t numOfRows = pBlock->info.rows; + bool allNullRow = true; if (HASTYPE(pReader->type, CACHESCAN_RETRIEVE_LAST)) { - bool allNullRow = true; - for (int32_t i = 0; i < pReader->numOfCols; ++i) { SColumnInfoData* pColInfoData = taosArrayGet(pBlock->pDataBlock, dstSlotIds[i]); SFirstLastRes* p = (SFirstLastRes*)varDataVal(pRes[i]); @@ -42,6 +41,7 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p varDataSetLen(p->buf, pColVal->colVal.value.nData); memcpy(varDataVal(p->buf), pColVal->colVal.value.pData, pColVal->colVal.value.nData); p->bytes = pColVal->colVal.value.nData + VARSTR_HEADER_SIZE; // binary needs to plus the header size + taosMemoryFree(pColVal->colVal.value.pData); } else { memcpy(p->buf, &pColVal->colVal.value, pReader->pSchema->columns[slotId].bytes); p->bytes = pReader->pSchema->columns[slotId].bytes; @@ -63,6 +63,10 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, i); SColVal* pVal = &pColVal->colVal; + if (COL_VAL_IS_NONE(&pColVal->colVal)) { + continue; + } + allNullRow = false; if (IS_VAR_DATA_TYPE(pColVal->colVal.type)) { if (!COL_VAL_IS_VALUE(&pColVal->colVal)) { colDataSetNULL(pColInfoData, numOfRows); @@ -71,13 +75,15 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p // tsdbError("buf:%p len:%d index:%d", pReader->transferBuf[slotId], pVal->value.nData, slotId); memcpy(varDataVal(pReader->transferBuf[slotId]), pVal->value.pData, pVal->value.nData); colDataSetVal(pColInfoData, numOfRows, pReader->transferBuf[slotId], false); + taosMemoryFree(pVal->value.pData); } } else { colDataSetVal(pColInfoData, numOfRows, (const char*)&pVal->value.val, !COL_VAL_IS_VALUE(pVal)); } } - pBlock->info.rows += 1; + pBlock->info.rows += allNullRow ? 0 : 1; + // pBlock->info.rows += 1; } else { tsdbError("invalid retrieve type:%d, %s", pReader->type, idStr); return TSDB_CODE_INVALID_PARA; @@ -117,7 +123,7 @@ static int32_t setTableSchema(SCacheRowsReader* p, uint64_t suid, const char* id } int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, void* pTableIdList, int32_t numOfTables, int32_t numOfCols, - SArray* pCidList, uint64_t suid, void** pReader, const char* idstr) { + SArray* pCidList, int32_t* pSlotIds, uint64_t suid, void** pReader, const char* idstr) { *pReader = NULL; SCacheRowsReader* p = taosMemoryCalloc(1, sizeof(SCacheRowsReader)); if (p == NULL) { @@ -130,6 +136,7 @@ int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, void* pTableIdList, p->verRange = (SVersionRange){.minVer = 0, .maxVer = UINT64_MAX}; p->numOfCols = numOfCols; p->pCidList = pCidList; + p->pSlotIds = pSlotIds; p->suid = suid; if (numOfTables == 0) { @@ -294,6 +301,11 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 taosArrayClear(pRow); continue; } + SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, 0); + if (COL_VAL_IS_NONE(&pColVal->colVal)) { + taosArrayClear(pRow); + continue; + } { bool hasNotNullRow = true; @@ -363,6 +375,11 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 taosArrayClear(pRow); continue; } + SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, 0); + if (COL_VAL_IS_NONE(&pColVal->colVal)) { + taosArrayClear(pRow); + continue; + } saveOneRow(pRow, pResBlock, pr, slotIds, dstSlotIds, pRes, pr->idstr); taosArrayClear(pRow); diff --git a/source/libs/executor/src/cachescanoperator.c b/source/libs/executor/src/cachescanoperator.c index 2df24463b7..925e305450 100644 --- a/source/libs/executor/src/cachescanoperator.c +++ b/source/libs/executor/src/cachescanoperator.c @@ -101,8 +101,8 @@ SOperatorInfo* createCacherowsScanOperator(SLastRowScanPhysiNode* pScanNode, SRe uint64_t suid = tableListGetSuid(pTableListInfo); code = tsdbCacherowsReaderOpen(pInfo->readHandle.vnode, pInfo->retrieveType, pList, totalTables, - taosArrayGetSize(pInfo->matchInfo.pList), pCidList, suid, &pInfo->pLastrowReader, - pTaskInfo->id.str); + taosArrayGetSize(pInfo->matchInfo.pList), pCidList, pInfo->pSlotIds, suid, + &pInfo->pLastrowReader, pTaskInfo->id.str); if (code != TSDB_CODE_SUCCESS) { goto _error; } @@ -237,7 +237,7 @@ SSDataBlock* doScanCache(SOperatorInfo* pOperator) { } code = tsdbCacherowsReaderOpen(pInfo->readHandle.vnode, pInfo->retrieveType, pList, num, - taosArrayGetSize(pInfo->matchInfo.pList), pInfo->pCidList, suid, + taosArrayGetSize(pInfo->matchInfo.pList), pInfo->pCidList, pInfo->pSlotIds, suid, &pInfo->pLastrowReader, pTaskInfo->id.str); if (code != TSDB_CODE_SUCCESS) { pInfo->currentGroupIndex += 1; From f720feb56d7a20399b6862d76e5813e98637a94f Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Tue, 25 Apr 2023 16:58:00 +0800 Subject: [PATCH 049/156] Update 14-stream.md --- docs/zh/12-taos-sql/14-stream.md | 46 ++++++++++++++++---------------- 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/docs/zh/12-taos-sql/14-stream.md b/docs/zh/12-taos-sql/14-stream.md index 9bbb551c87..237d39104f 100644 --- a/docs/zh/12-taos-sql/14-stream.md +++ b/docs/zh/12-taos-sql/14-stream.md @@ -216,26 +216,26 @@ T = 最新事件时间 - DELETE_MARK ## 流式计算支持的函数 -1. 所有的单行函数均可用于流计算中。 -2. 以下 19 个聚合函数不能在创建流计算的 SQL 语句中使用 -``` -leastsquares -percentile -top -bottom -elapsed -interp -derivative -irate -twa -histogram -diff -statecount -stateduration -csum -mavg -sample -tail -unique -mode -``` +1. 所有的[单行函数](../function/#单行函数)均可用于流计算。 +2. 以下 19 个聚合/选择函数不能在创建流计算的 SQL 语句,[系统信息函数](../function/#系统信息函数)其他类型的聚合函数均可用于流计算。 + +- [leastsquares](../function/#leastsquares) +- [percentile](../function/#percentile) +- [top](../function/#leastsquares) +- [bottom](../function/#top) +- [elapsed](../function/#leastsquares) +- [interp](../function/#elapsed) +- [derivative](../function/#derivative) +- [irate](../function/#irate) +- [twa](../function/#twa) +- [histogram](../function/#histogram) +- [diff](../function/#diff) +- [statecount](../function/#statecount) +- [stateduration](../function/#stateduration) +- [csum](../function/#csum) +- [mavg](../function/#mavg) +- [sample](../function/#sample) +- [tail](../function/#tail) +- [unique](../function/#unique) +- [mode](../function/#mode) + From 19976122f4e602dcd6a14094079806214353cb8c Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Tue, 25 Apr 2023 17:00:03 +0800 Subject: [PATCH 050/156] Update 14-stream.md --- docs/zh/12-taos-sql/14-stream.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/zh/12-taos-sql/14-stream.md b/docs/zh/12-taos-sql/14-stream.md index 237d39104f..b08804e5c4 100644 --- a/docs/zh/12-taos-sql/14-stream.md +++ b/docs/zh/12-taos-sql/14-stream.md @@ -216,8 +216,8 @@ T = 最新事件时间 - DELETE_MARK ## 流式计算支持的函数 -1. 所有的[单行函数](../function/#单行函数)均可用于流计算。 -2. 以下 19 个聚合/选择函数不能在创建流计算的 SQL 语句,[系统信息函数](../function/#系统信息函数)其他类型的聚合函数均可用于流计算。 +1. 所有的 [单行函数](../function/#单行函数) 均可用于流计算。 +2. 以下 19 个聚合/选择函数 不能 应用在创建流计算的 SQL 语句,[系统信息函数](../function/#系统信息函数) 也不能用于流计算中。此外的其他类型的函数均可用于流计算。 - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) From 693bc09f193b5255cba8b8421856deafe5b08a10 Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Tue, 25 Apr 2023 17:36:13 +0800 Subject: [PATCH 051/156] enh: comment off unused error codes in sync and wal --- include/util/taoserror.h | 18 +++++++++--------- source/libs/sync/src/syncMain.c | 4 +--- source/util/src/terror.c | 7 ------- 3 files changed, 10 insertions(+), 19 deletions(-) diff --git a/include/util/taoserror.h b/include/util/taoserror.h index 142427ca6b..9cb0f5e564 100644 --- a/include/util/taoserror.h +++ b/include/util/taoserror.h @@ -539,17 +539,17 @@ int32_t* taosGetErrno(); // #define TSDB_CODE_SYN_INVALID_CHECKSUM TAOS_DEF_ERROR_CODE(0, 0x0908) // 2.x // #define TSDB_CODE_SYN_INVALID_MSGLEN TAOS_DEF_ERROR_CODE(0, 0x0909) // 2.x // #define TSDB_CODE_SYN_INVALID_MSGTYPE TAOS_DEF_ERROR_CODE(0, 0x090A) // 2.x -#define TSDB_CODE_SYN_IS_LEADER TAOS_DEF_ERROR_CODE(0, 0x090B) +// #define TSDB_CODE_SYN_IS_LEADER TAOS_DEF_ERROR_CODE(0, 0x090B) // unused #define TSDB_CODE_SYN_NOT_LEADER TAOS_DEF_ERROR_CODE(0, 0x090C) -#define TSDB_CODE_SYN_ONE_REPLICA TAOS_DEF_ERROR_CODE(0, 0x090D) -#define TSDB_CODE_SYN_NOT_IN_NEW_CONFIG TAOS_DEF_ERROR_CODE(0, 0x090E) -#define TSDB_CODE_SYN_NEW_CONFIG_ERROR TAOS_DEF_ERROR_CODE(0, 0x090F) // internal -#define TSDB_CODE_SYN_RECONFIG_NOT_READY TAOS_DEF_ERROR_CODE(0, 0x0910) +// #define TSDB_CODE_SYN_ONE_REPLICA TAOS_DEF_ERROR_CODE(0, 0x090D) // unused +// #define TSDB_CODE_SYN_NOT_IN_NEW_CONFIG TAOS_DEF_ERROR_CODE(0, 0x090E) // unused +#define TSDB_CODE_SYN_NEW_CONFIG_ERROR TAOS_DEF_ERROR_CODE(0, 0x090F) // internal +// #define TSDB_CODE_SYN_RECONFIG_NOT_READY TAOS_DEF_ERROR_CODE(0, 0x0910) // unused #define TSDB_CODE_SYN_PROPOSE_NOT_READY TAOS_DEF_ERROR_CODE(0, 0x0911) -#define TSDB_CODE_SYN_STANDBY_NOT_READY TAOS_DEF_ERROR_CODE(0, 0x0912) -#define TSDB_CODE_SYN_BATCH_ERROR TAOS_DEF_ERROR_CODE(0, 0x0913) +// #define TSDB_CODE_SYN_STANDBY_NOT_READY TAOS_DEF_ERROR_CODE(0, 0x0912) // unused +// #define TSDB_CODE_SYN_BATCH_ERROR TAOS_DEF_ERROR_CODE(0, 0x0913) // unused #define TSDB_CODE_SYN_RESTORING TAOS_DEF_ERROR_CODE(0, 0x0914) -#define TSDB_CODE_SYN_INVALID_SNAPSHOT_MSG TAOS_DEF_ERROR_CODE(0, 0x0915) // internal +#define TSDB_CODE_SYN_INVALID_SNAPSHOT_MSG TAOS_DEF_ERROR_CODE(0, 0x0915) // internal #define TSDB_CODE_SYN_BUFFER_FULL TAOS_DEF_ERROR_CODE(0, 0x0916) #define TSDB_CODE_SYN_WRITE_STALL TAOS_DEF_ERROR_CODE(0, 0x0917) #define TSDB_CODE_SYN_NEGO_WIN_EXCEEDED TAOS_DEF_ERROR_CODE(0, 0X0918) @@ -573,7 +573,7 @@ int32_t* taosGetErrno(); // wal // #define TSDB_CODE_WAL_APP_ERROR TAOS_DEF_ERROR_CODE(0, 0x1000) // 2.x #define TSDB_CODE_WAL_FILE_CORRUPTED TAOS_DEF_ERROR_CODE(0, 0x1001) -#define TSDB_CODE_WAL_SIZE_LIMIT TAOS_DEF_ERROR_CODE(0, 0x1002) +// #define TSDB_CODE_WAL_SIZE_LIMIT TAOS_DEF_ERROR_CODE(0, 0x1002) // unused #define TSDB_CODE_WAL_INVALID_VER TAOS_DEF_ERROR_CODE(0, 0x1003) // #define TSDB_CODE_WAL_OUT_OF_MEMORY TAOS_DEF_ERROR_CODE(0, 0x1004) // 2.x #define TSDB_CODE_WAL_LOG_NOT_EXIST TAOS_DEF_ERROR_CODE(0, 0x1005) diff --git a/source/libs/sync/src/syncMain.c b/source/libs/sync/src/syncMain.c index cb1919d16e..499df4a98b 100644 --- a/source/libs/sync/src/syncMain.c +++ b/source/libs/sync/src/syncMain.c @@ -463,8 +463,7 @@ bool syncSnapshotRecving(int64_t rid) { int32_t syncNodeLeaderTransfer(SSyncNode* pSyncNode) { if (pSyncNode->peersNum == 0) { sDebug("vgId:%d, only one replica, cannot leader transfer", pSyncNode->vgId); - terrno = TSDB_CODE_SYN_ONE_REPLICA; - return -1; + return 0; } int32_t ret = 0; @@ -486,7 +485,6 @@ int32_t syncNodeLeaderTransfer(SSyncNode* pSyncNode) { int32_t syncNodeLeaderTransferTo(SSyncNode* pSyncNode, SNodeInfo newLeader) { if (pSyncNode->replicaNum == 1) { sDebug("vgId:%d, only one replica, cannot leader transfer", pSyncNode->vgId); - terrno = TSDB_CODE_SYN_ONE_REPLICA; return -1; } diff --git a/source/util/src/terror.c b/source/util/src/terror.c index 1d9d45f26f..2ef3c23f0a 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -414,15 +414,9 @@ TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_TABLE_LIMITED, "Table creation limite // sync TAOS_DEFINE_ERROR(TSDB_CODE_SYN_TIMEOUT, "Sync timeout") -TAOS_DEFINE_ERROR(TSDB_CODE_SYN_IS_LEADER, "Sync is leader") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_NOT_LEADER, "Sync leader is unreachable") -TAOS_DEFINE_ERROR(TSDB_CODE_SYN_ONE_REPLICA, "Sync one replica") -TAOS_DEFINE_ERROR(TSDB_CODE_SYN_NOT_IN_NEW_CONFIG, "Sync not in new config") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_NEW_CONFIG_ERROR, "Sync new config error") -TAOS_DEFINE_ERROR(TSDB_CODE_SYN_RECONFIG_NOT_READY, "Sync not ready for reconfig") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_PROPOSE_NOT_READY, "Sync not ready for propose") -TAOS_DEFINE_ERROR(TSDB_CODE_SYN_STANDBY_NOT_READY, "Sync not ready for standby") -TAOS_DEFINE_ERROR(TSDB_CODE_SYN_BATCH_ERROR, "Sync batch error") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_RESTORING, "Sync leader is restoring") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_INVALID_SNAPSHOT_MSG, "Sync invalid snapshot msg") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_BUFFER_FULL, "Sync buffer is full") @@ -445,7 +439,6 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TQ_NO_COMMITTED_OFFSET, "TQ no committed offse // wal TAOS_DEFINE_ERROR(TSDB_CODE_WAL_FILE_CORRUPTED, "WAL file is corrupted") -TAOS_DEFINE_ERROR(TSDB_CODE_WAL_SIZE_LIMIT, "WAL size exceeds limit") TAOS_DEFINE_ERROR(TSDB_CODE_WAL_INVALID_VER, "WAL use invalid version") TAOS_DEFINE_ERROR(TSDB_CODE_WAL_LOG_NOT_EXIST, "WAL log not exist") TAOS_DEFINE_ERROR(TSDB_CODE_WAL_CHKSUM_MISMATCH, "WAL checksum mismatch") From fee4050d79c24bd6b65b3d15218a428641dead67 Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Tue, 25 Apr 2023 18:05:30 +0800 Subject: [PATCH 052/156] enh: rename error code TSDB_CODE_SYN_NEGO_WIN_EXCEEDED --- include/util/taoserror.h | 2 +- source/libs/sync/src/syncPipeline.c | 2 +- source/util/src/terror.c | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/include/util/taoserror.h b/include/util/taoserror.h index 9cb0f5e564..a709ccf10c 100644 --- a/include/util/taoserror.h +++ b/include/util/taoserror.h @@ -552,7 +552,7 @@ int32_t* taosGetErrno(); #define TSDB_CODE_SYN_INVALID_SNAPSHOT_MSG TAOS_DEF_ERROR_CODE(0, 0x0915) // internal #define TSDB_CODE_SYN_BUFFER_FULL TAOS_DEF_ERROR_CODE(0, 0x0916) #define TSDB_CODE_SYN_WRITE_STALL TAOS_DEF_ERROR_CODE(0, 0x0917) -#define TSDB_CODE_SYN_NEGO_WIN_EXCEEDED TAOS_DEF_ERROR_CODE(0, 0X0918) +#define TSDB_CODE_SYN_NEGOTIATION_WIN_FULL TAOS_DEF_ERROR_CODE(0, 0x0918) #define TSDB_CODE_SYN_INTERNAL_ERROR TAOS_DEF_ERROR_CODE(0, 0x09FF) // tq diff --git a/source/libs/sync/src/syncPipeline.c b/source/libs/sync/src/syncPipeline.c index 85c18220a6..8bb72de518 100644 --- a/source/libs/sync/src/syncPipeline.c +++ b/source/libs/sync/src/syncPipeline.c @@ -54,7 +54,7 @@ int32_t syncLogBufferAppend(SSyncLogBuffer* pBuf, SSyncNode* pNode, SSyncRaftEnt } if (pNode->restoreFinish && index - pBuf->commitIndex >= TSDB_SYNC_NEGOTIATION_WIN) { - terrno = TSDB_CODE_SYN_NEGO_WIN_EXCEEDED; + terrno = TSDB_CODE_SYN_NEGOTIATION_WIN_FULL; sError("vgId:%d, failed to append since %s, index:%" PRId64 ", commit-index:%" PRId64, pNode->vgId, terrstr(), index, pBuf->commitIndex); goto _err; diff --git a/source/util/src/terror.c b/source/util/src/terror.c index 2ef3c23f0a..c1ae901203 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -421,7 +421,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_SYN_RESTORING, "Sync leader is restor TAOS_DEFINE_ERROR(TSDB_CODE_SYN_INVALID_SNAPSHOT_MSG, "Sync invalid snapshot msg") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_BUFFER_FULL, "Sync buffer is full") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_WRITE_STALL, "Sync write stall") -TAOS_DEFINE_ERROR(TSDB_CODE_SYN_NEGO_WIN_EXCEEDED, "Sync negotiation win exceeded") +TAOS_DEFINE_ERROR(TSDB_CODE_SYN_NEGOTIATION_WIN_FULL, "Sync negotiation win is full") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_INTERNAL_ERROR, "Sync internal error") //tq From c8cbabe419171a01aaebaa0ebc2a305c75402a07 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 25 Apr 2023 18:13:33 +0800 Subject: [PATCH 053/156] cache/none column: move to outter scope --- source/dnode/vnode/src/tsdb/tsdbCache.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 63dbb9a4d3..f10b2779b2 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -327,6 +327,8 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR for (int i = 0; i < num_keys; ++i) { SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i]); + int16_t cid = *(int16_t *)taosArrayGet(pCidList, i); + SLastCol noneCol = {.ts = TSKEY_MIN, .colVal = COL_VAL_NONE(cid, pr->pSchema->columns[pr->pSlotIds[i]].type)}; if (pLastCol) { SColVal *pColVal = &pLastCol->colVal; if (IS_VAR_DATA_TYPE(pColVal->type)) { @@ -340,8 +342,7 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR // recalc: load from tsdb // still null, then make up a null col value - int16_t cid = *(int16_t *)taosArrayGet(pCidList, i); - pLastCol = &(SLastCol){.ts = TSKEY_MIN, .colVal = COL_VAL_NONE(cid, pr->pSchema->columns[pr->pSlotIds[i]].type)}; + pLastCol = &noneCol; // maybe store it back to rocks cache } From 23a915da3bc95c5313efbc3fae9a230469f1afb4 Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Tue, 25 Apr 2023 19:11:42 +0800 Subject: [PATCH 054/156] enh: adjust error msgs in sync and wal --- source/util/src/terror.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/source/util/src/terror.c b/source/util/src/terror.c index c1ae901203..6fc16ad4a9 100644 --- a/source/util/src/terror.c +++ b/source/util/src/terror.c @@ -416,7 +416,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_GRANT_TABLE_LIMITED, "Table creation limite TAOS_DEFINE_ERROR(TSDB_CODE_SYN_TIMEOUT, "Sync timeout") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_NOT_LEADER, "Sync leader is unreachable") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_NEW_CONFIG_ERROR, "Sync new config error") -TAOS_DEFINE_ERROR(TSDB_CODE_SYN_PROPOSE_NOT_READY, "Sync not ready for propose") +TAOS_DEFINE_ERROR(TSDB_CODE_SYN_PROPOSE_NOT_READY, "Sync not ready to propose") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_RESTORING, "Sync leader is restoring") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_INVALID_SNAPSHOT_MSG, "Sync invalid snapshot msg") TAOS_DEFINE_ERROR(TSDB_CODE_SYN_BUFFER_FULL, "Sync buffer is full") @@ -439,7 +439,7 @@ TAOS_DEFINE_ERROR(TSDB_CODE_TQ_NO_COMMITTED_OFFSET, "TQ no committed offse // wal TAOS_DEFINE_ERROR(TSDB_CODE_WAL_FILE_CORRUPTED, "WAL file is corrupted") -TAOS_DEFINE_ERROR(TSDB_CODE_WAL_INVALID_VER, "WAL use invalid version") +TAOS_DEFINE_ERROR(TSDB_CODE_WAL_INVALID_VER, "WAL invalid version") TAOS_DEFINE_ERROR(TSDB_CODE_WAL_LOG_NOT_EXIST, "WAL log not exist") TAOS_DEFINE_ERROR(TSDB_CODE_WAL_CHKSUM_MISMATCH, "WAL checksum mismatch") TAOS_DEFINE_ERROR(TSDB_CODE_WAL_LOG_INCOMPLETE, "WAL log incomplete") From a6e37622ff1a8fe181a7ecc8044aaed000af7d6d Mon Sep 17 00:00:00 2001 From: kailixu Date: Tue, 25 Apr 2023 22:15:06 +0800 Subject: [PATCH 055/156] chore: more code --- include/util/tdef.h | 7 -- source/client/src/clientSml.c | 32 +++++- source/client/src/clientSmlJson.c | 2 - source/client/src/clientSmlLine.c | 3 - source/client/src/clientSmlTelnet.c | 1 - source/libs/parser/src/parTranslater.c | 4 - tests/system-test/1-insert/boundary.py | 56 ++++++++++ .../1-insert/influxdb_line_taosc_insert.py | 100 ++++++++++-------- tests/system-test/1-insert/stmt_error.py | 1 - 9 files changed, 142 insertions(+), 64 deletions(-) diff --git a/include/util/tdef.h b/include/util/tdef.h index 7a2c57b9f8..ead649a51c 100644 --- a/include/util/tdef.h +++ b/include/util/tdef.h @@ -22,13 +22,6 @@ extern "C" { #endif - -#if 1 -#define TASSERT assert(0); -#else -#define TASSERT -#endif - #define TSDB__packed #define TSKEY int64_t diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 0bca52449c..84c6fffdac 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -624,6 +624,10 @@ static int32_t getBytes(uint8_t type, int32_t length) { static int32_t smlBuildFieldsList(SSmlHandle *info, SSchema *schemaField, SHashObj *schemaHash, SArray *cols, SArray *results, int32_t numOfCols, bool isTag) { + bool check = numOfCols == 0 ? true : false; + int32_t maxLen = isTag ? TSDB_MAX_TAGS_LEN : TSDB_MAX_BYTES_PER_ROW; + + int32_t len = 0; for (int j = 0; j < taosArrayGetSize(cols); ++j) { SSmlKv *kv = (SSmlKv *)taosArrayGet(cols, j); ESchemaAction action = SCHEMA_ACTION_NULL; @@ -635,11 +639,17 @@ static int32_t smlBuildFieldsList(SSmlHandle *info, SSchema *schemaField, SHashO SField field = {0}; field.type = kv->type; field.bytes = getBytes(kv->type, kv->length); + if (check) { + len += field.bytes; + if (len > maxLen) { + code = TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; + uError("smlBuildFieldsList add %s failed since %s", isTag ? "tag" : "col", tstrerror(code)); + return code; + } + } + memcpy(field.name, kv->key, kv->keyLen); taosArrayPush(results, &field); - if(numOfCols == 0) { - - } } else if (action == SCHEMA_ACTION_CHANGE_COLUMN_SIZE || action == SCHEMA_ACTION_CHANGE_TAG_SIZE) { uint16_t *index = (uint16_t *)taosHashGet(schemaHash, kv->key, kv->keyLen); if (index == NULL) { @@ -650,6 +660,14 @@ static int32_t smlBuildFieldsList(SSmlHandle *info, SSchema *schemaField, SHashO if (isTag) newIndex -= numOfCols; SField *field = (SField *)taosArrayGet(results, newIndex); field->bytes = getBytes(kv->type, kv->length); + if (check) { + len += (kv->length - schemaField[*index].bytes + VARSTR_HEADER_SIZE); + if (len > maxLen) { + code = TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; + uError("smlBuildFieldsList change %s failed since %s", isTag ? "tag" : "col", tstrerror(code)); + return code; + } + } } } return TSDB_CODE_SUCCESS; @@ -780,11 +798,15 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) { code = smlBuildFieldsList(info, NULL, NULL, sTableData->tags, pTags, 0, true); if (code != TSDB_CODE_SUCCESS) { uError("SML:0x%" PRIx64 " smlBuildFieldsList tag1 failed. %s", info->id, pName.tname); + taosArrayDestroy(pColumns); + taosArrayDestroy(pTags); goto end; } code = smlBuildFieldsList(info, NULL, NULL, sTableData->cols, pColumns, 0, false); if (code != TSDB_CODE_SUCCESS) { uError("SML:0x%" PRIx64 " smlBuildFieldsList col1 failed. %s", info->id, pName.tname); + taosArrayDestroy(pColumns); + taosArrayDestroy(pTags); goto end; } code = smlSendMetaMsg(info, &pName, pColumns, pTags, NULL, SCHEMA_ACTION_CREATE_STABLE); @@ -836,6 +858,8 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) { pTableMeta->tableInfo.numOfColumns, true); if (code != TSDB_CODE_SUCCESS) { uError("SML:0x%" PRIx64 " smlBuildFieldsList tag2 failed. %s", info->id, pName.tname); + taosArrayDestroy(pColumns); + taosArrayDestroy(pTags); goto end; } @@ -890,6 +914,8 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) { pTableMeta->tableInfo.numOfColumns, false); if (code != TSDB_CODE_SUCCESS) { uError("SML:0x%" PRIx64 " smlBuildFieldsList col2 failed. %s", info->id, pName.tname); + taosArrayDestroy(pColumns); + taosArrayDestroy(pTags); goto end; } diff --git a/source/client/src/clientSmlJson.c b/source/client/src/clientSmlJson.c index b50a29d4fb..c3a6e15697 100644 --- a/source/client/src/clientSmlJson.c +++ b/source/client/src/clientSmlJson.c @@ -578,12 +578,10 @@ static int32_t smlConvertJSONString(SSmlKv *pVal, char *typeStr, cJSON *value) { pVal->length = (uint16_t)strlen(value->valuestring); if (pVal->type == TSDB_DATA_TYPE_BINARY && pVal->length > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { - TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } if (pVal->type == TSDB_DATA_TYPE_NCHAR && pVal->length > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) { - TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } diff --git a/source/client/src/clientSmlLine.c b/source/client/src/clientSmlLine.c index 7dd087039b..db2f2bd9fe 100644 --- a/source/client/src/clientSmlLine.c +++ b/source/client/src/clientSmlLine.c @@ -81,7 +81,6 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) { pVal->type = TSDB_DATA_TYPE_BINARY; pVal->length -= BINARY_ADD_LEN; if (pVal->length > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { - TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } pVal->value += (BINARY_ADD_LEN - 1); @@ -95,7 +94,6 @@ int32_t smlParseValue(SSmlKv *pVal, SSmlMsgBuf *msg) { pVal->type = TSDB_DATA_TYPE_NCHAR; pVal->length -= NCHAR_ADD_LEN; if (pVal->length > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) { - TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } pVal->value += (NCHAR_ADD_LEN - 1); @@ -239,7 +237,6 @@ static int32_t smlParseTagKv(SSmlHandle *info, char **sql, char *sqlEnd, SSmlLin } if (unlikely(valueLen > (TSDB_MAX_TAGS_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE)) { - TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } diff --git a/source/client/src/clientSmlTelnet.c b/source/client/src/clientSmlTelnet.c index 9baa1e5758..3a9aad4e81 100644 --- a/source/client/src/clientSmlTelnet.c +++ b/source/client/src/clientSmlTelnet.c @@ -159,7 +159,6 @@ static int32_t smlParseTelnetTags(SSmlHandle *info, char *data, char *sqlEnd, SS } if (unlikely(valueLen > (TSDB_MAX_TAGS_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE)) { - TASSERT return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; } diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index 20ca987250..b4b499a789 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -4500,7 +4500,6 @@ static int32_t checkTableTagsSchema(STranslateContext* pCxt, SHashObj* pHash, SN if (TSDB_CODE_SUCCESS == code) { if ((TSDB_DATA_TYPE_VARCHAR == pTag->dataType.type && calcTypeBytes(pTag->dataType) > TSDB_MAX_TAGS_LEN) || (TSDB_DATA_TYPE_NCHAR == pTag->dataType.type && calcTypeBytes(pTag->dataType) > TSDB_MAX_TAGS_LEN)) { - TASSERT code = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN); } } @@ -4552,7 +4551,6 @@ static int32_t checkTableColsSchema(STranslateContext* pCxt, SHashObj* pHash, in if (TSDB_CODE_SUCCESS == code) { if ((TSDB_DATA_TYPE_VARCHAR == pCol->dataType.type && calcTypeBytes(pCol->dataType) > TSDB_MAX_BINARY_LEN) || (TSDB_DATA_TYPE_NCHAR == pCol->dataType.type && calcTypeBytes(pCol->dataType) > TSDB_MAX_NCHAR_LEN)) { - TASSERT code = generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN); } } @@ -5238,7 +5236,6 @@ static int32_t checkAlterSuperTableBySchema(STranslateContext* pCxt, SAlterTable if (TSDB_ALTER_TABLE_UPDATE_COLUMN_BYTES == pStmt->alterType) { if (calcTypeBytes(pStmt->dataType) > TSDB_MAX_FIELD_LEN) { - TASSERT return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN); } @@ -5249,7 +5246,6 @@ static int32_t checkAlterSuperTableBySchema(STranslateContext* pCxt, SAlterTable if (TSDB_ALTER_TABLE_UPDATE_TAG_BYTES == pStmt->alterType) { if (calcTypeBytes(pStmt->dataType) > TSDB_MAX_TAGS_LEN) { - TASSERT return generateSyntaxErrMsg(&pCxt->msgBuf, TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN); } diff --git a/tests/system-test/1-insert/boundary.py b/tests/system-test/1-insert/boundary.py index d3742ef5f9..29dcbc7c46 100644 --- a/tests/system-test/1-insert/boundary.py +++ b/tests/system-test/1-insert/boundary.py @@ -166,6 +166,61 @@ class TDTestCase: else: tdLog.exit("error info is not true") tdSql.execute('drop database db') + + def row_col_tag_maxlen_check(self): + tdSql.prepare() + tdSql.execute('use db') + tdSql.execute('create table if not exists stb1 (ts timestamp, c1 int,c2 binary(1000)) tags (city binary(16382))') + tdSql.error('create table if not exists stb1 (ts timestamp, c1 int,c2 binary(1000)) tags (city binary(16383))') + tdSql.execute('create table if not exists stb2 (ts timestamp, c0 tinyint, c1 int, c2 nchar(16379)) tags (city binary(16382))') + tdSql.error('create table if not exists stb2 (ts timestamp, c0 smallint, c1 int, c2 nchar(16379)) tags (city binary(16382))') + tdSql.execute('create table if not exists stb3 (ts timestamp, c1 int, c2 binary(65517)) tags (city binary(16382))') + tdSql.error('create table if not exists stb3 (ts timestamp, c0 bool, c1 int, c2 binary(65517)) tags (city binary(16382))') + # prepare the column and tag data + char100='abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMN0123456789' + tag_max_16382='' + binary_max_65517 = '' + nchar_max_16379='' + for num in range(163): + nchar_max_16379 += char100 + for num in range(4): + binary_max_65517 += char100 + + nchar_max_16379 += '0123456789012345678901234567890123456789012345678901234567890123456789012345678' + tag_max_16382 = nchar_max_16379 + tag_max_16382 += '9ab' + + for num in range(3): + binary_max_65517 += char100 + binary_max_65517 += '01234567890123456' + + # insert/query and check + tdSql.execute(f"create table ct1 using stb1 tags('{tag_max_16382}')") + tdSql.execute(f"create table ct2 using stb2 tags('{tag_max_16382}')") + tdSql.execute(f"create table ct3 using stb3 tags('{tag_max_16382}')") + tdSql.execute(f"insert into ct1 values (now,1,'nchar_max_16379')") + tdSql.execute(f"insert into ct2 values (now,1,1,'{nchar_max_16379}')") + tdSql.execute(f"insert into ct3 values (now,1,'{binary_max_65517}')") + + tdSql.query("select * from stb1") + tdSql.checkEqual(tdSql.queryResult[0][3],tag_max_16382) + + tdSql.query("select * from ct2") + tdSql.checkEqual(tdSql.queryResult[0][3],nchar_max_16379) + + tdSql.query("select * from stb2") + tdSql.checkEqual(tdSql.queryResult[0][3],nchar_max_16379) + tdSql.checkEqual(tdSql.queryResult[0][4],tag_max_16382) + + tdSql.query("select * from ct3") + tdSql.checkEqual(tdSql.queryResult[0][2],binary_max_65517) + + tdSql.query("select * from stb3") + tdSql.checkEqual(tdSql.queryResult[0][2],binary_max_65517) + tdSql.checkEqual(tdSql.queryResult[0][3],tag_max_16382) + + tdSql.execute('drop database db') + def run(self): self.dbname_length_check() self.tbname_length_check() @@ -174,6 +229,7 @@ class TDTestCase: self.username_length_check() self.password_length_check() self.sql_length_check() + self.row_col_tag_maxlen_check() def stop(self): tdSql.close() diff --git a/tests/system-test/1-insert/influxdb_line_taosc_insert.py b/tests/system-test/1-insert/influxdb_line_taosc_insert.py index b53abc41aa..667ce3239c 100644 --- a/tests/system-test/1-insert/influxdb_line_taosc_insert.py +++ b/tests/system-test/1-insert/influxdb_line_taosc_insert.py @@ -672,28 +672,28 @@ class TDTestCase: except SchemalessError as err: tdSql.checkNotEqual(err.errno, 0) - # # # binary + # binary stb_name = tdCom.getLongName(7, "letters") - input_sql = f'{stb_name},t0=t c0=f,c11=f,c2=f,c3=f,c4=f,c5=f,c6=f,c7=f,c8=f,c9=f,c10=f,c12=f,c1="{tdCom.getLongName(65519, "letters")}" 1626006833639000000' + input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - # input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(65514, "letters")}" 1626006833639000000' - # try: - # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - # except SchemalessError as err: - # tdSql.checkNotEqual(err.errno, 0) + input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(65518, "letters")}" 1626006833639000000' + try: + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + except SchemalessError as err: + tdSql.checkNotEqual(err.errno, 0) - # # nchar - # # * legal nchar could not be larger than 16374/4 - # stb_name = tdCom.getLongName(7, "letters") - # input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(4093, "letters")}" 1626006833639000000' - # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + # nchar + # * legal nchar could not be larger than 16374/4 + stb_name = tdCom.getLongName(7, "letters") + input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(16379, "letters")}" 1626006833639000000' + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - # input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(4094, "letters")}" 1626006833639000000' - # try: - # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - # except SchemalessError as err: - # tdSql.checkNotEqual(err.errno, 0) + input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(16380, "letters")}" 1626006833639000000' + try: + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + except SchemalessError as err: + tdSql.checkNotEqual(err.errno, 0) def tagColIllegalValueCheckCase(self): @@ -896,26 +896,40 @@ class TDTestCase: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) # * every binary and nchar must be length+2, so here is two tag, max length could not larger than 16384-2*2 - input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(16374, "letters")}",t2="{tdCom.getLongName(5, "letters")}" c0=f 1626006833639000000' - self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + # input_sql = f'{stb_name}, t1="{tdCom.getLongName(4095, "letters")}"" c0=f 1626006833639000000' + # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(2) - input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(16374, "letters")}",t2="{tdCom.getLongName(6, "letters")}" c0=f 1626006833639000000' - try: - self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - raise Exception("should not reach here") - except SchemalessError as err: - tdSql.checkNotEqual(err.errno, 0) - tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(2) + # tdSql.query(f"select * from {stb_name}") + # tdSql.checkRows(2) + # input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(4084, "letters")}",t2="{tdCom.getLongName(6, "letters")}" c0=f 1626006833639000000' + # try: + # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + # raise Exception("should not reach here") + # except SchemalessError as err: + # tdSql.checkNotEqual(err.errno, 0) + # tdSql.query(f"select * from {stb_name}") + # tdSql.checkRows(2) # # * check col,col+ts max in describe ---> 16143 - input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' + input_sql = f'{stb_name},t0=t c0=i32,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")}" 1626006833639000000' + input_sql = f'{stb_name},t0=t c0=i32,c1="{tdCom.getLongName(65517, "letters")},c2=f" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + try: + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + except SchemalessError as err: + tdSql.checkNotEqual(err.errno, 0) + + input_sql = f'{stb_name},t0=t c0=i16,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")}" 1626006833639000000' + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + + input_sql = f'{stb_name},t0=t c0=i16,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")},c3=t" 1626006833639000000' + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + try: + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + except SchemalessError as err: + tdSql.checkNotEqual(err.errno, 0) tdSql.query(f"select * from {stb_name}") tdSql.checkRows(3) @@ -939,17 +953,17 @@ class TDTestCase: code = self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) # * legal nchar could not be larger than 16374/4 - input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(1, "letters")}" c0=f 1626006833639000000' - self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(2) - input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(2, "letters")}" c0=f 1626006833639000000' - try: - self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - except SchemalessError as err: - tdSql.checkNotEqual(err.errno, 0) - tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(2) + # input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(1, "letters")}" c0=f 1626006833639000000' + # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + # tdSql.query(f"select * from {stb_name}") + # tdSql.checkRows(2) + # input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(2, "letters")}" c0=f 1626006833639000000' + # try: + # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + # except SchemalessError as err: + # tdSql.checkNotEqual(err.errno, 0) + # tdSql.query(f"select * from {stb_name}") + # tdSql.checkRows(2) input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(4093, "letters")}",c2=L"{tdCom.getLongName(4093, "letters")}",c3=L"{tdCom.getLongName(4093, "letters")}",c4=L"{tdCom.getLongName(4, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) @@ -1283,7 +1297,7 @@ class TDTestCase: self.nowTsCheckCase() self.dateFormatTsCheckCase() self.illegalTsCheckCase() - self.tagValueLengthCheckCase() + # self.tagValueLengthCheckCase() self.colValueLengthCheckCase() self.tagColIllegalValueCheckCase() self.duplicateIdTagColInsertCheckCase() diff --git a/tests/system-test/1-insert/stmt_error.py b/tests/system-test/1-insert/stmt_error.py index b77cd488f7..c6d747c317 100644 --- a/tests/system-test/1-insert/stmt_error.py +++ b/tests/system-test/1-insert/stmt_error.py @@ -216,7 +216,6 @@ class TDTestCase: tdLog.info('=========stmt error occured for bind part column(NULL Timestamp) ==============') else: tdLog.exit("expect error(%s) not occured - 2" % str(error)) - def stop(self): tdSql.close() tdLog.success("%s successfully executed" % __file__) From 5758aa4d80d303f9224e4714c1b0f015e4a91c69 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 26 Apr 2023 08:17:08 +0800 Subject: [PATCH 056/156] cache/read: fix pRow, pLastCols memory issues --- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 26 ++++++++++----------- 1 file changed, 12 insertions(+), 14 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 48c72ca685..eac27fb3c6 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -39,9 +39,9 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p if (!p->isNull) { if (IS_VAR_DATA_TYPE(pColVal->colVal.type)) { varDataSetLen(p->buf, pColVal->colVal.value.nData); + memcpy(varDataVal(p->buf), pColVal->colVal.value.pData, pColVal->colVal.value.nData); p->bytes = pColVal->colVal.value.nData + VARSTR_HEADER_SIZE; // binary needs to plus the header size - taosMemoryFree(pColVal->colVal.value.pData); } else { memcpy(p->buf, &pColVal->colVal.value, pReader->pSchema->columns[slotId].bytes); p->bytes = pReader->pSchema->columns[slotId].bytes; @@ -72,10 +72,9 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p colDataSetNULL(pColInfoData, numOfRows); } else { varDataSetLen(pReader->transferBuf[slotId], pVal->value.nData); - // tsdbError("buf:%p len:%d index:%d", pReader->transferBuf[slotId], pVal->value.nData, slotId); + memcpy(varDataVal(pReader->transferBuf[slotId]), pVal->value.pData, pVal->value.nData); colDataSetVal(pColInfoData, numOfRows, pReader->transferBuf[slotId], false); - taosMemoryFree(pVal->value.pData); } } else { colDataSetVal(pColInfoData, numOfRows, (const char*)&pVal->value.val, !COL_VAL_IS_VALUE(pVal)); @@ -83,7 +82,6 @@ static int32_t saveOneRow(SArray* pRow, SSDataBlock* pBlock, SCacheRowsReader* p } pBlock->info.rows += allNullRow ? 0 : 1; - // pBlock->info.rows += 1; } else { tsdbError("invalid retrieve type:%d, %s", pReader->type, idStr); return TSDB_CODE_INVALID_PARA; @@ -162,7 +160,6 @@ int32_t tsdbCacherowsReaderOpen(void* pVnode, int32_t type, void* pTableIdList, for (int32_t i = 0; i < p->pSchema->numOfCols; ++i) { if (IS_VAR_DATA_TYPE(p->pSchema->columns[i].type)) { p->transferBuf[i] = taosMemoryMalloc(p->pSchema->columns[i].bytes); - // tsdbError("buf:%p len:%d index:%d", p->transferBuf[i], p->pSchema->columns[i].bytes, i); if (p->transferBuf[i] == NULL) { tsdbCacherowsReaderClose(p); return TSDB_CODE_OUT_OF_MEMORY; @@ -269,8 +266,9 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 goto _end; } - for (int32_t i = 0; i < pr->pSchema->numOfCols; ++i) { - struct STColumn* pCol = &pr->pSchema->columns[i]; + for (int32_t i = 0; i < pr->numOfCols; ++i) { + int32_t slotId = slotIds[i]; + struct STColumn* pCol = &pr->pSchema->columns[slotId]; SLastCol p = {.ts = INT64_MIN, .colVal.type = pCol->type, .colVal.flag = CV_FLAG_NULL}; if (IS_VAR_DATA_TYPE(pCol->type)) { @@ -298,12 +296,12 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, lstring); if (TARRAY_SIZE(pRow) <= 0) { - taosArrayClear(pRow); + taosArrayClearEx(pRow, freeItem); continue; } SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, 0); if (COL_VAL_IS_NONE(&pColVal->colVal)) { - taosArrayClear(pRow); + taosArrayClearEx(pRow, freeItem); continue; } @@ -360,7 +358,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 taosArraySet(pTableUidList, 0, &pKeyInfo->uid); } - taosArrayClear(pRow); + taosArrayClearEx(pRow, freeItem); } if (hasRes) { @@ -372,17 +370,17 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, lstring); if (TARRAY_SIZE(pRow) <= 0) { - taosArrayClear(pRow); + taosArrayClearEx(pRow, freeItem); continue; } SLastCol* pColVal = (SLastCol*)taosArrayGet(pRow, 0); if (COL_VAL_IS_NONE(&pColVal->colVal)) { - taosArrayClear(pRow); + taosArrayClearEx(pRow, freeItem); continue; } saveOneRow(pRow, pResBlock, pr, slotIds, dstSlotIds, pRes, pr->idstr); - taosArrayClear(pRow); + taosArrayClearEx(pRow, freeItem); taosArrayPush(pTableUidList, &pKeyInfo->uid); @@ -410,7 +408,7 @@ _end: } taosMemoryFree(pRes); - taosArrayDestroy(pRow); + taosArrayDestroyEx(pRow, freeItem); taosArrayDestroyEx(pLastCols, freeItem); return code; } From 519dc0ead0e04fae914aa5d4f791e4424c170a73 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Wed, 26 Apr 2023 09:24:29 +0800 Subject: [PATCH 057/156] docs: fix tmq create database args (#21081) --- docs/en/07-develop/07-tmq.mdx | 2 +- docs/zh/07-develop/07-tmq.mdx | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/en/07-develop/07-tmq.mdx b/docs/en/07-develop/07-tmq.mdx index 7465cc0a12..739eabd1cf 100644 --- a/docs/en/07-develop/07-tmq.mdx +++ b/docs/en/07-develop/07-tmq.mdx @@ -222,7 +222,7 @@ A database including one supertable and two subtables is created as follows: ```sql DROP DATABASE IF EXISTS tmqdb; -CREATE DATABASE tmqdb; +CREATE DATABASE tmqdb WAL_RETENTION_PERIOD 3600; CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16)); CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0"); CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1"); diff --git a/docs/zh/07-develop/07-tmq.mdx b/docs/zh/07-develop/07-tmq.mdx index 036c9aa86e..a3833438a2 100644 --- a/docs/zh/07-develop/07-tmq.mdx +++ b/docs/zh/07-develop/07-tmq.mdx @@ -221,7 +221,7 @@ void Close() ```sql DROP DATABASE IF EXISTS tmqdb; -CREATE DATABASE tmqdb; +CREATE DATABASE tmqdb WAL_RETENTION_PERIOD 3600; CREATE TABLE tmqdb.stb (ts TIMESTAMP, c1 INT, c2 FLOAT, c3 VARCHAR(16)) TAGS(t1 INT, t3 VARCHAR(16)); CREATE TABLE tmqdb.ctb0 USING tmqdb.stb TAGS(0, "subtable0"); CREATE TABLE tmqdb.ctb1 USING tmqdb.stb TAGS(1, "subtable1"); From 6835e75903f06ad4435c00b92291286c1d552980 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 26 Apr 2023 10:07:15 +0800 Subject: [PATCH 058/156] cache/get: malloc zero-length binary to unify frees --- source/dnode/vnode/src/tsdb/tsdbCache.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index f10b2779b2..35e4dce3dc 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -333,8 +333,8 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR SColVal *pColVal = &pLastCol->colVal; if (IS_VAR_DATA_TYPE(pColVal->type)) { uint8_t *pVal = pColVal->value.pData; + pColVal->value.pData = taosMemoryMalloc(pColVal->value.nData); if (pColVal->value.nData) { - pColVal->value.pData = taosMemoryMalloc(pColVal->value.nData); memcpy(pColVal->value.pData, pVal, pColVal->value.nData); } } From 455481ddcd58c35a78d70b8ea0bbc891ee0e8522 Mon Sep 17 00:00:00 2001 From: WANG MINGMING Date: Wed, 26 Apr 2023 10:33:16 +0800 Subject: [PATCH 059/156] Update 50-opentsdb-json.mdx --- docs/zh/07-develop/03-insert-data/50-opentsdb-json.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/07-develop/03-insert-data/50-opentsdb-json.mdx b/docs/zh/07-develop/03-insert-data/50-opentsdb-json.mdx index e1fd3dacc8..f2cfbc857b 100644 --- a/docs/zh/07-develop/03-insert-data/50-opentsdb-json.mdx +++ b/docs/zh/07-develop/03-insert-data/50-opentsdb-json.mdx @@ -47,7 +47,7 @@ OpenTSDB JSON 格式协议采用一个 JSON 字符串表示一行或多行数据 :::note - 对于 JSON 格式协议,TDengine 并不会自动把所有标签转成 NCHAR 类型, 字符串将将转为 NCHAR 类型, 数值将同样转换为 DOUBLE 类型。 -- 默认生成的子表名是根据规则生成的唯一 ID 值。用户也可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定某个标签值作为子表名。该标签值应该具有全局唯一性。举例如下:假设有个标签名为tname, 配置 smlChildTableName=tname, 插入数据为 `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}` 则创建的子表名为 cpu1。注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。 +- 默认生成的子表名是根据规则生成的唯一 ID 值。用户也可以通过在client端的 taos.cfg 里配置 smlChildTableName 参数来指定某个标签值作为子表名。该标签值应该具有全局唯一性。举例如下:假设有个标签名为tname, 配置 smlChildTableName=tname, 插入数据为 `"tags": { "host": "web02","dc": "lga","tname":"cpu1"}` 则创建的子表名为 cpu1。注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。 ::: ## 示例代码 From 20fcb4d8ac78b881f3616bd82620769362cccf13 Mon Sep 17 00:00:00 2001 From: WANG MINGMING Date: Wed, 26 Apr 2023 10:35:39 +0800 Subject: [PATCH 060/156] Update 40-opentsdb-telnet.mdx --- docs/zh/07-develop/03-insert-data/40-opentsdb-telnet.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/07-develop/03-insert-data/40-opentsdb-telnet.mdx b/docs/zh/07-develop/03-insert-data/40-opentsdb-telnet.mdx index 3b2148ef4a..9da767a691 100644 --- a/docs/zh/07-develop/03-insert-data/40-opentsdb-telnet.mdx +++ b/docs/zh/07-develop/03-insert-data/40-opentsdb-telnet.mdx @@ -32,7 +32,7 @@ OpenTSDB 行协议同样采用一行字符串来表示一行数据。OpenTSDB meters.current 1648432611250 11.3 location=California.LosAngeles groupid=3 ``` -- 默认生产的子表名是根据规则生成的唯一 ID 值。用户也可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定某个标签值作为子表名。该标签值应该具有全局唯一性。举例如下:假设有个标签名为tname, 配置 smlChildTableName=tname, 插入数据为 meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3 则创建的表名为 cpu1,注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。 +- 默认生产的子表名是根据规则生成的唯一 ID 值。用户也可以通过在client端的 taos.cfg 里配置 smlChildTableName 参数来指定某个标签值作为子表名。该标签值应该具有全局唯一性。举例如下:假设有个标签名为tname, 配置 smlChildTableName=tname, 插入数据为 meters.current 1648432611250 11.3 tname=cpu1 location=California.LosAngeles groupid=3 则创建的表名为 cpu1,注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。 参考 [OpenTSDB Telnet API 文档](http://opentsdb.net/docs/build/html/api_telnet/put.html)。 ## 示例代码 From 87406f7daf19f2e556e20397f82f8ba16bb5537a Mon Sep 17 00:00:00 2001 From: WANG MINGMING Date: Wed, 26 Apr 2023 10:36:22 +0800 Subject: [PATCH 061/156] Update 30-influxdb-line.mdx --- docs/zh/07-develop/03-insert-data/30-influxdb-line.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/07-develop/03-insert-data/30-influxdb-line.mdx b/docs/zh/07-develop/03-insert-data/30-influxdb-line.mdx index 876f123fe1..258adaaeed 100644 --- a/docs/zh/07-develop/03-insert-data/30-influxdb-line.mdx +++ b/docs/zh/07-develop/03-insert-data/30-influxdb-line.mdx @@ -38,7 +38,7 @@ meters,location=California.LosAngeles,groupid=2 current=13.4,voltage=223,phase=0 - field_set 中的每个数据项都需要对自身的数据类型进行描述, 比如 1.2f32 代表 FLOAT 类型的数值 1.2, 如果不带类型后缀会被当作 DOUBLE 处理; - timestamp 支持多种时间精度。写入数据的时候需要用参数指定时间精度,支持从小时到纳秒的 6 种时间精度。 - 为了提高写入的效率,默认假设同一个超级表中 field_set 的顺序是一样的(第一条数据包含所有的 field,后面的数据按照这个顺序),如果顺序不一样,需要配置参数 smlDataFormat 为 false,否则,数据写入按照相同顺序写入,库中数据会异常。(3.0.1.3 之后的版本 smlDataFormat 默认为 false,从3.0.3.0开始,该配置废弃) [TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议) -- 默认产生的子表名是根据规则生成的唯一 ID 值。用户也可以通过在 taos.cfg 里配置 smlChildTableName 参数来指定某个标签值作为子表名。该标签值应该具有全局唯一性。举例如下:假设有个标签名为tname, 配置 smlChildTableName=tname, 插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的子表名为 cpu1。注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。[TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议) +- 默认产生的子表名是根据规则生成的唯一 ID 值。用户也可以通过在client端的 taos.cfg 里配置 smlChildTableName 参数来指定某个标签值作为子表名。该标签值应该具有全局唯一性。举例如下:假设有个标签名为tname, 配置 smlChildTableName=tname, 插入数据为 st,tname=cpu1,t1=4 c1=3 1626006833639000000 则创建的子表名为 cpu1。注意如果多行数据 tname 相同,但是后面的 tag_set 不同,则使用第一行自动建表时指定的 tag_set,其他的行会忽略)。[TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议) ::: 要了解更多可参考:[InfluxDB Line 协议官方文档](https://docs.influxdata.com/influxdb/v2.0/reference/syntax/line-protocol/) 和 [TDengine 无模式写入参考指南](/reference/schemaless/#无模式写入行协议) From 41cc572cd8b8fff0915b432e6bed69aa50924f8d Mon Sep 17 00:00:00 2001 From: Shuduo Sang Date: Wed, 26 Apr 2023 12:41:26 +0800 Subject: [PATCH 062/156] docs: update connector matrix for 3.0 (#21086) * docs: update csharp connector status * docs: fix csharp ws bulk pulling * docs: clarify database param is optional to websocket dsn * docs: fix java connector mistake * fix: a few typos * fix: many typos * docs: java connector support subscription over websocket * fix: update 3.0 connector feature matrix --- docs/en/14-reference/03-connector/index.mdx | 2 +- docs/zh/08-connector/index.md | 2 +- 2 files changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/en/14-reference/03-connector/index.mdx b/docs/en/14-reference/03-connector/index.mdx index a35d5bc2d1..28b7b83b58 100644 --- a/docs/en/14-reference/03-connector/index.mdx +++ b/docs/en/14-reference/03-connector/index.mdx @@ -62,7 +62,7 @@ The different database framework specifications for various programming language | **Regular Query** | Support | Support | Support | Support | Support | Support | | **Parameter Binding** | Not Supported | Not Supported | Support | Support | Not Supported | Support | | **Subscription (TMQ) ** | Supported | Support | Support | Not Supported | Not Supported | Support | -| **Schemaless** | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported | +| **Schemaless** | Supported | Not Supported | Not Supported | Not Supported | Not Supported | Not Supported | | **Bulk Pulling (based on WebSocket) ** | Support | Support | Support | Support | Support | Support | | **DataFrame** | Not Supported | Support | Not Supported | Not Supported | Not Supported | Not Supported | diff --git a/docs/zh/08-connector/index.md b/docs/zh/08-connector/index.md index eb1f3a9a9a..bb8c95a15a 100644 --- a/docs/zh/08-connector/index.md +++ b/docs/zh/08-connector/index.md @@ -61,7 +61,7 @@ TDengine 版本更新往往会增加新的功能特性,列表中的连接器 | **普通查询** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 | | **参数绑定** | 暂不支持 | 暂不支持 | 支持 | 支持 | 暂不支持 | 支持 | | **数据订阅(TMQ)** | 支持 | 支持 | 支持 | 暂不支持 | 暂不支持 | 支持 | -| **Schemaless** | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | +| **Schemaless** | 支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | 暂不支持 | | **批量拉取(基于 WebSocket)** | 支持 | 支持 | 支持 | 支持 | 支持 | 支持 | | **DataFrame** | 不支持 | 支持 | 不支持 | 不支持 | 不支持 | 不支持 | From 3822a5858b05483f9abbf058bd2d8dbdcce411e0 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 26 Apr 2023 16:03:37 +0800 Subject: [PATCH 063/156] sim/valgrind: suppress rocks reachables --- tests/script/local.supp | 227 ++++++++++++++++++++++++++++++++++++++++ tests/script/sh/exec.sh | 6 +- 2 files changed, 231 insertions(+), 2 deletions(-) create mode 100644 tests/script/local.supp diff --git a/tests/script/local.supp b/tests/script/local.supp new file mode 100644 index 0000000000..562cddba46 --- /dev/null +++ b/tests/script/local.supp @@ -0,0 +1,227 @@ +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN9__gnu_cxx13new_allocatorINSt8__detail10_Hash_nodeISt4pairIKjPFvPvEELb0EEEE8allocateEmPKv + fun:_ZNSt16allocator_traitsISaINSt8__detail10_Hash_nodeISt4pairIKjPFvPvEELb0EEEEE8allocateERS9_m + fun:_ZNSt8__detail16_Hashtable_allocISaINS_10_Hash_nodeISt4pairIKjPFvPvEELb0EEEEE16_M_allocate_nodeIJRKSt21piecewise_construct_tSt5tupleIJRS3_EESF_IJEEEEEPS8_DpOT_ + fun:_ZNSt10_HashtableIjSt4pairIKjPFvPvEESaIS5_ENSt8__detail10_Select1stESt8equal_toIjESt4hashIjENS7_18_Mod_range_hashingENS7_20_Default_ranged_hashENS7_20_Prime_rehash_policyENS7_17_Hashtable_traitsILb0ELb0ELb1EEEE12_Scoped_nodeC1IJRKSt21piecewise_construct_tSt5tupleIJRS1_EESO_IJEEEEEPNS7_16_Hashtable_allocISaINS7_10_Hash_nodeIS5_Lb0EEEEEEDpOT_ + fun:_ZNSt8__detail9_Map_baseIjSt4pairIKjPFvPvEESaIS6_ENS_10_Select1stESt8equal_toIjESt4hashIjENS_18_Mod_range_hashingENS_20_Default_ranged_hashENS_20_Prime_rehash_policyENS_17_Hashtable_traitsILb0ELb0ELb1EEELb1EEixERS2_ + fun:_ZNSt13unordered_mapIjPFvPvESt4hashIjESt8equal_toIjESaISt4pairIKjS2_EEEixERS8_ + fun:_ZN7rocksdb14ThreadLocalPtr10StaticMeta10SetHandlerEjPFvPvE + fun:_ZN7rocksdb14ThreadLocalPtrC1EPFvPvE + fun:_ZN7rocksdb16ColumnFamilyDataC1EjRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPNS_7VersionEPNS_5CacheEPNS_18WriteBufferManagerERKNS_19ColumnFamilyOptionsERKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_15ColumnFamilySetEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb15ColumnFamilySetC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_5CacheEPNS_18WriteBufferManagerEPNS_15WriteControllerEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb10VersionSetC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_5CacheEPNS_18WriteBufferManagerEPNS_15WriteControllerEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb6DBImplC1ERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEbbb + fun:_ZN7rocksdb6DBImpl4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPNS_2DBEbb + fun:_ZN7rocksdb2DB4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPS0_ + fun:_ZN7rocksdb2DB4OpenERKNS_7OptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPPS0_ + fun:rocksdb_open + fun:tsdbOpenRocksCache + fun:tsdbOpenCache + fun:tsdbOpen +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN9__gnu_cxx13new_allocatorINSt8__detail10_Hash_nodeISt4pairIKjPFvPvEELb0EEEE8allocateEmPKv + fun:_ZNSt16allocator_traitsISaINSt8__detail10_Hash_nodeISt4pairIKjPFvPvEELb0EEEEE8allocateERS9_m + fun:_ZNSt8__detail16_Hashtable_allocISaINS_10_Hash_nodeISt4pairIKjPFvPvEELb0EEEEE16_M_allocate_nodeIJRKSt21piecewise_construct_tSt5tupleIJRS3_EESF_IJEEEEEPS8_DpOT_ + fun:_ZNSt10_HashtableIjSt4pairIKjPFvPvEESaIS5_ENSt8__detail10_Select1stESt8equal_toIjESt4hashIjENS7_18_Mod_range_hashingENS7_20_Default_ranged_hashENS7_20_Prime_rehash_policyENS7_17_Hashtable_traitsILb0ELb0ELb1EEEE12_Scoped_nodeC1IJRKSt21piecewise_construct_tSt5tupleIJRS1_EESO_IJEEEEEPNS7_16_Hashtable_allocISaINS7_10_Hash_nodeIS5_Lb0EEEEEEDpOT_ + fun:_ZNSt8__detail9_Map_baseIjSt4pairIKjPFvPvEESaIS6_ENS_10_Select1stESt8equal_toIjESt4hashIjENS_18_Mod_range_hashingENS_20_Default_ranged_hashENS_20_Prime_rehash_policyENS_17_Hashtable_traitsILb0ELb0ELb1EEELb1EEixERS2_ + fun:_ZNSt13unordered_mapIjPFvPvESt4hashIjESt8equal_toIjESaISt4pairIKjS2_EEEixERS8_ + fun:_ZN7rocksdb14ThreadLocalPtr10StaticMeta10SetHandlerEjPFvPvE + fun:_ZN7rocksdb14ThreadLocalPtrC1EPFvPvE + fun:_ZN7rocksdb16ColumnFamilyDataC1EjRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPNS_7VersionEPNS_5CacheEPNS_18WriteBufferManagerERKNS_19ColumnFamilyOptionsERKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_15ColumnFamilySetEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb15ColumnFamilySet18CreateColumnFamilyERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjPNS_7VersionERKNS_19ColumnFamilyOptionsE + fun:_ZN7rocksdb10VersionSet18CreateColumnFamilyERKNS_19ColumnFamilyOptionsEPKNS_11VersionEditE + fun:_ZN7rocksdb18VersionEditHandler15CreateCfAndInitERKNS_19ColumnFamilyOptionsERKNS_11VersionEditE + fun:_ZN7rocksdb18VersionEditHandler10InitializeEv + fun:_ZN7rocksdb22VersionEditHandlerBase7IterateERNS_3log6ReaderEPNS_6StatusE + fun:_ZN7rocksdb10VersionSet7RecoverERKSt6vectorINS_22ColumnFamilyDescriptorESaIS2_EEbPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE + fun:_ZN7rocksdb6DBImpl7RecoverERKSt6vectorINS_22ColumnFamilyDescriptorESaIS2_EEbbbPm + fun:_ZN7rocksdb6DBImpl4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPNS_2DBEbb + fun:_ZN7rocksdb2DB4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPS0_ + fun:_ZN7rocksdb2DB4OpenERKNS_7OptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPPS0_ +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:calloc + fun:__cxa_thread_atexit_impl + fun:__cxa_thread_atexit + fun:__tls_init + fun:_ZTWN7rocksdb12perf_contextE + fun:_ZN7rocksdb17InstrumentedMutex4LockEv + fun:_ZN7rocksdb21InstrumentedMutexLockC1EPNS_17InstrumentedMutexE + fun:_ZN7rocksdb5Timer8ShutdownEv + fun:_ZN7rocksdb5TimerD1Ev + fun:_ZNKSt14default_deleteIN7rocksdb5TimerEEclEPS1_ + fun:_ZNSt10unique_ptrIN7rocksdb5TimerESt14default_deleteIS1_EED1Ev + fun:_ZN7rocksdb21PeriodicWorkSchedulerD1Ev + fun:__run_exit_handlers + fun:exit + fun:(below main) +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN7rocksdb24CacheEntryStatsCollectorINS_13InternalStats19CacheEntryRoleStatsEE9GetSharedEPNS_5CacheEPNS_11SystemClockEPSt10shared_ptrIS3_E + fun:_ZN7rocksdb13InternalStatsC1EiPNS_11SystemClockEPNS_16ColumnFamilyDataE + fun:_ZN7rocksdb16ColumnFamilyDataC1EjRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPNS_7VersionEPNS_5CacheEPNS_18WriteBufferManagerERKNS_19ColumnFamilyOptionsERKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_15ColumnFamilySetEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb15ColumnFamilySet18CreateColumnFamilyERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjPNS_7VersionERKNS_19ColumnFamilyOptionsE + fun:_ZN7rocksdb10VersionSet18CreateColumnFamilyERKNS_19ColumnFamilyOptionsEPKNS_11VersionEditE + fun:_ZN7rocksdb18VersionEditHandler15CreateCfAndInitERKNS_19ColumnFamilyOptionsERKNS_11VersionEditE + fun:_ZN7rocksdb18VersionEditHandler10InitializeEv + fun:_ZN7rocksdb22VersionEditHandlerBase7IterateERNS_3log6ReaderEPNS_6StatusE + fun:_ZN7rocksdb10VersionSet7RecoverERKSt6vectorINS_22ColumnFamilyDescriptorESaIS2_EEbPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE + fun:_ZN7rocksdb6DBImpl7RecoverERKSt6vectorINS_22ColumnFamilyDescriptorESaIS2_EEbbbPm + fun:_ZN7rocksdb6DBImpl4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPNS_2DBEbb + fun:_ZN7rocksdb2DB4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPS0_ + fun:_ZN7rocksdb2DB4OpenERKNS_7OptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPPS0_ + fun:rocksdb_open + fun:tsdbOpenRocksCache + fun:tsdbOpenCache + fun:tsdbOpen + fun:vnodeOpen + fun:vmProcessCreateVnodeReq +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN7rocksdb12_GLOBAL__N_111GetRegistryEv + fun:_ZN7rocksdb23CopyCacheDeleterRoleMapEv + fun:_ZN7rocksdb13InternalStats19CacheEntryRoleStats15BeginCollectionEPNS_5CacheEPNS_11SystemClockEm + fun:_ZN7rocksdb24CacheEntryStatsCollectorINS_13InternalStats19CacheEntryRoleStatsEE12CollectStatsEii + fun:_ZN7rocksdb13InternalStats22CollectCacheEntryStatsEb + fun:_ZN7rocksdb6DBImpl9DumpStatsEv + fun:_ZZN7rocksdb21PeriodicWorkScheduler8RegisterEPNS_6DBImplEjjENKUlvE_clEv + fun:_ZSt13__invoke_implIvRZN7rocksdb21PeriodicWorkScheduler8RegisterEPNS0_6DBImplEjjEUlvE_JEET_St14__invoke_otherOT0_DpOT1_ + fun:_ZSt10__invoke_rIvRZN7rocksdb21PeriodicWorkScheduler8RegisterEPNS0_6DBImplEjjEUlvE_JEENSt9enable_ifIXsrSt6__and_IJSt7is_voidIT_ESt14__is_invocableIT0_JDpT1_EEEE5valueES9_E4typeEOSC_DpOSD_ + fun:_ZNSt17_Function_handlerIFvvEZN7rocksdb21PeriodicWorkScheduler8RegisterEPNS1_6DBImplEjjEUlvE_E9_M_invokeERKSt9_Any_data + fun:_ZNKSt8functionIFvvEEclEv + fun:_ZN7rocksdb5Timer3RunEv + fun:_ZSt13__invoke_implIvMN7rocksdb5TimerEFvvEPS1_JEET_St21__invoke_memfun_derefOT0_OT1_DpOT2_ + fun:_ZSt8__invokeIMN7rocksdb5TimerEFvvEJPS1_EENSt15__invoke_resultIT_JDpT0_EE4typeEOS6_DpOS7_ + fun:_ZNSt6thread8_InvokerISt5tupleIJMN7rocksdb5TimerEFvvEPS3_EEE9_M_invokeIJLm0ELm1EEEEvSt12_Index_tupleIJXspT_EEE + fun:_ZNSt6thread8_InvokerISt5tupleIJMN7rocksdb5TimerEFvvEPS3_EEEclEv + fun:_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJMN7rocksdb5TimerEFvvEPS4_EEEEE6_M_runEv + obj:/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28 + fun:start_thread +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN9__gnu_cxx13new_allocatorIPNSt8__detail15_Hash_node_baseEE8allocateEmPKv + fun:_ZNSt16allocator_traitsISaIPNSt8__detail15_Hash_node_baseEEE8allocateERS3_m + fun:_ZNSt8__detail16_Hashtable_allocISaINS_10_Hash_nodeIPN7rocksdb16ThreadStatusDataELb0EEEEE19_M_allocate_bucketsEm + fun:_ZNSt10_HashtableIPN7rocksdb16ThreadStatusDataES2_SaIS2_ENSt8__detail9_IdentityESt8equal_toIS2_ESt4hashIS2_ENS4_18_Mod_range_hashingENS4_20_Default_ranged_hashENS4_20_Prime_rehash_policyENS4_17_Hashtable_traitsILb0ELb1ELb1EEEE19_M_allocate_bucketsEm + fun:_ZNSt10_HashtableIPN7rocksdb16ThreadStatusDataES2_SaIS2_ENSt8__detail9_IdentityESt8equal_toIS2_ESt4hashIS2_ENS4_18_Mod_range_hashingENS4_20_Default_ranged_hashENS4_20_Prime_rehash_policyENS4_17_Hashtable_traitsILb0ELb1ELb1EEEE13_M_rehash_auxEmSt17integral_constantIbLb1EE + fun:_ZNSt10_HashtableIPN7rocksdb16ThreadStatusDataES2_SaIS2_ENSt8__detail9_IdentityESt8equal_toIS2_ESt4hashIS2_ENS4_18_Mod_range_hashingENS4_20_Default_ranged_hashENS4_20_Prime_rehash_policyENS4_17_Hashtable_traitsILb0ELb1ELb1EEEE9_M_rehashEmRKm + fun:_ZNSt10_HashtableIPN7rocksdb16ThreadStatusDataES2_SaIS2_ENSt8__detail9_IdentityESt8equal_toIS2_ESt4hashIS2_ENS4_18_Mod_range_hashingENS4_20_Default_ranged_hashENS4_20_Prime_rehash_policyENS4_17_Hashtable_traitsILb0ELb1ELb1EEEE21_M_insert_unique_nodeERKS2_mmPNS4_10_Hash_nodeIS2_Lb0EEEm + fun:_ZNSt10_HashtableIPN7rocksdb16ThreadStatusDataES2_SaIS2_ENSt8__detail9_IdentityESt8equal_toIS2_ESt4hashIS2_ENS4_18_Mod_range_hashingENS4_20_Default_ranged_hashENS4_20_Prime_rehash_policyENS4_17_Hashtable_traitsILb0ELb1ELb1EEEE9_M_insertIRKS2_NS4_10_AllocNodeISaINS4_10_Hash_nodeIS2_Lb0EEEEEEEESt4pairINS4_14_Node_iteratorIS2_Lb1ELb0EEEbEOT_RKT0_St17integral_constantIbLb1EEm + fun:_ZNSt8__detail12_Insert_baseIPN7rocksdb16ThreadStatusDataES3_SaIS3_ENS_9_IdentityESt8equal_toIS3_ESt4hashIS3_ENS_18_Mod_range_hashingENS_20_Default_ranged_hashENS_20_Prime_rehash_policyENS_17_Hashtable_traitsILb0ELb1ELb1EEEE6insertERKS3_ + fun:_ZNSt13unordered_setIPN7rocksdb16ThreadStatusDataESt4hashIS2_ESt8equal_toIS2_ESaIS2_EE6insertERKS2_ + fun:_ZN7rocksdb19ThreadStatusUpdater14RegisterThreadENS_12ThreadStatus10ThreadTypeEm + fun:_ZN7rocksdb16ThreadStatusUtil14RegisterThreadEPKNS_3EnvENS_12ThreadStatus10ThreadTypeE + fun:_ZN7rocksdb14ThreadPoolImpl4Impl15BGThreadWrapperEPv + fun:_ZSt13__invoke_implIvPFvPvEJPN7rocksdb16BGThreadMetadataEEET_St14__invoke_otherOT0_DpOT1_ + fun:_ZSt8__invokeIPFvPvEJPN7rocksdb16BGThreadMetadataEEENSt15__invoke_resultIT_JDpT0_EE4typeEOS7_DpOS8_ + fun:_ZNSt6thread8_InvokerISt5tupleIJPFvPvEPN7rocksdb16BGThreadMetadataEEEE9_M_invokeIJLm0ELm1EEEEvSt12_Index_tupleIJXspT_EEE + fun:_ZNSt6thread8_InvokerISt5tupleIJPFvPvEPN7rocksdb16BGThreadMetadataEEEEclEv + fun:_ZNSt6thread11_State_implINS_8_InvokerISt5tupleIJPFvPvEPN7rocksdb16BGThreadMetadataEEEEEE6_M_runEv + obj:/usr/lib/x86_64-linux-gnu/libstdc++.so.6.0.28 +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN9__gnu_cxx13new_allocatorIPNSt8__detail15_Hash_node_baseEE8allocateEmPKv + fun:_ZNSt16allocator_traitsISaIPNSt8__detail15_Hash_node_baseEEE8allocateERS3_m + fun:_ZNSt8__detail16_Hashtable_allocISaINS_10_Hash_nodeISt4pairIKjPFvPvEELb0EEEEE19_M_allocate_bucketsEm + fun:_ZNSt10_HashtableIjSt4pairIKjPFvPvEESaIS5_ENSt8__detail10_Select1stESt8equal_toIjESt4hashIjENS7_18_Mod_range_hashingENS7_20_Default_ranged_hashENS7_20_Prime_rehash_policyENS7_17_Hashtable_traitsILb0ELb0ELb1EEEE19_M_allocate_bucketsEm + fun:_ZNSt10_HashtableIjSt4pairIKjPFvPvEESaIS5_ENSt8__detail10_Select1stESt8equal_toIjESt4hashIjENS7_18_Mod_range_hashingENS7_20_Default_ranged_hashENS7_20_Prime_rehash_policyENS7_17_Hashtable_traitsILb0ELb0ELb1EEEE13_M_rehash_auxEmSt17integral_constantIbLb1EE + fun:_ZNSt10_HashtableIjSt4pairIKjPFvPvEESaIS5_ENSt8__detail10_Select1stESt8equal_toIjESt4hashIjENS7_18_Mod_range_hashingENS7_20_Default_ranged_hashENS7_20_Prime_rehash_policyENS7_17_Hashtable_traitsILb0ELb0ELb1EEEE9_M_rehashEmRKm + fun:_ZNSt10_HashtableIjSt4pairIKjPFvPvEESaIS5_ENSt8__detail10_Select1stESt8equal_toIjESt4hashIjENS7_18_Mod_range_hashingENS7_20_Default_ranged_hashENS7_20_Prime_rehash_policyENS7_17_Hashtable_traitsILb0ELb0ELb1EEEE21_M_insert_unique_nodeERS1_mmPNS7_10_Hash_nodeIS5_Lb0EEEm + fun:_ZNSt8__detail9_Map_baseIjSt4pairIKjPFvPvEESaIS6_ENS_10_Select1stESt8equal_toIjESt4hashIjENS_18_Mod_range_hashingENS_20_Default_ranged_hashENS_20_Prime_rehash_policyENS_17_Hashtable_traitsILb0ELb0ELb1EEELb1EEixERS2_ + fun:_ZNSt13unordered_mapIjPFvPvESt4hashIjESt8equal_toIjESaISt4pairIKjS2_EEEixERS8_ + fun:_ZN7rocksdb14ThreadLocalPtr10StaticMeta10SetHandlerEjPFvPvE + fun:_ZN7rocksdb14ThreadLocalPtrC1EPFvPvE + fun:_ZN7rocksdb16ColumnFamilyDataC1EjRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPNS_7VersionEPNS_5CacheEPNS_18WriteBufferManagerERKNS_19ColumnFamilyOptionsERKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_15ColumnFamilySetEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb15ColumnFamilySetC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_5CacheEPNS_18WriteBufferManagerEPNS_15WriteControllerEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb10VersionSetC1ERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_5CacheEPNS_18WriteBufferManagerEPNS_15WriteControllerEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb6DBImplC1ERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEbbb + fun:_ZN7rocksdb6DBImpl4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPNS_2DBEbb + fun:_ZN7rocksdb2DB4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPS0_ + fun:_ZN7rocksdb2DB4OpenERKNS_7OptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPPS0_ + fun:rocksdb_open +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN7rocksdb12_GLOBAL__N_125CreateThreadStatusUpdaterEv + fun:_ZN7rocksdb12_GLOBAL__N_18PosixEnvC1Ev + fun:_ZN7rocksdb3Env7DefaultEv + fun:_ZN7rocksdb9DBOptionsC1Ev + fun:_ZN7rocksdb7OptionsC1Ev + fun:_ZN7rocksdb18ImmutableCFOptionsC1Ev + fun:_Z41__static_initialization_and_destruction_0ii + fun:_GLOBAL__sub_I_cf_options.cc + fun:__libc_csu_init + fun:(below main) +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN7rocksdb14ThreadLocalPtr8InstanceEv + fun:_ZN7rocksdb14ThreadLocalPtr14InitSingletonsEv + fun:_ZN7rocksdb3Env7DefaultEv + fun:_ZN7rocksdb9DBOptionsC1Ev + fun:_ZN7rocksdb7OptionsC1Ev + fun:_ZN7rocksdb18ImmutableCFOptionsC1Ev + fun:_Z41__static_initialization_and_destruction_0ii + fun:_GLOBAL__sub_I_cf_options.cc + fun:__libc_csu_init + fun:(below main) +} +{ + + Memcheck:Leak + match-leak-kinds: reachable + fun:_Znwm + fun:_ZN7rocksdb24CacheEntryStatsCollectorINS_13InternalStats19CacheEntryRoleStatsEE9GetSharedEPNS_5CacheEPNS_11SystemClockEPSt10shared_ptrIS3_E + fun:_ZN7rocksdb13InternalStatsC1EiPNS_11SystemClockEPNS_16ColumnFamilyDataE + fun:_ZN7rocksdb16ColumnFamilyDataC1EjRKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPNS_7VersionEPNS_5CacheEPNS_18WriteBufferManagerERKNS_19ColumnFamilyOptionsERKNS_18ImmutableDBOptionsERKNS_11FileOptionsEPNS_15ColumnFamilySetEPNS_16BlockCacheTracerERKSt10shared_ptrINS_8IOTracerEES8_ + fun:_ZN7rocksdb15ColumnFamilySet18CreateColumnFamilyERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEjPNS_7VersionERKNS_19ColumnFamilyOptionsE + fun:_ZN7rocksdb10VersionSet18CreateColumnFamilyERKNS_19ColumnFamilyOptionsEPKNS_11VersionEditE + fun:_ZN7rocksdb18VersionEditHandler15CreateCfAndInitERKNS_19ColumnFamilyOptionsERKNS_11VersionEditE + fun:_ZN7rocksdb18VersionEditHandler10InitializeEv + fun:_ZN7rocksdb22VersionEditHandlerBase7IterateERNS_3log6ReaderEPNS_6StatusE + fun:_ZN7rocksdb10VersionSet7RecoverERKSt6vectorINS_22ColumnFamilyDescriptorESaIS2_EEbPNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEE + fun:_ZN7rocksdb6DBImpl7RecoverERKSt6vectorINS_22ColumnFamilyDescriptorESaIS2_EEbbbPm + fun:_ZN7rocksdb6DBImpl4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPNS_2DBEbb + fun:_ZN7rocksdb2DB4OpenERKNS_9DBOptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEERKSt6vectorINS_22ColumnFamilyDescriptorESaISD_EEPSC_IPNS_18ColumnFamilyHandleESaISJ_EEPPS0_ + fun:_ZN7rocksdb2DB4OpenERKNS_7OptionsERKNSt7__cxx1112basic_stringIcSt11char_traitsIcESaIcEEEPPS0_ + fun:rocksdb_open + fun:tsdbOpenRocksCache + fun:tsdbOpenCache + fun:tsdbOpen + fun:vnodeOpen + fun:vmOpenVnodeInThread +} diff --git a/tests/script/sh/exec.sh b/tests/script/sh/exec.sh index f548a4cc41..5061688de2 100755 --- a/tests/script/sh/exec.sh +++ b/tests/script/sh/exec.sh @@ -109,8 +109,10 @@ if [ "$EXEC_OPTON" = "start" ]; then if [ "$VALGRIND_OPTION" = "true" ]; then TT=`date +%s` #mkdir ${LOG_DIR}/${TT} - echo "nohup valgrind --log-file=${LOG_DIR}/valgrind-taosd-${NODE_NAME}-${TT}.log --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v -v --workaround-gcc296-bugs=yes $EXE_DIR/taosd -c $CFG_DIR > /dev/null 2>&1 &" - nohup valgrind --log-file=${LOG_DIR}/valgrind-taosd-${NODE_NAME}-${TT}.log --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v -v --workaround-gcc296-bugs=yes $EXE_DIR/taosd -c $CFG_DIR > /dev/null 2>&1 & + #echo "nohup valgrind --log-file=${LOG_DIR}/valgrind-taosd-${NODE_NAME}-${TT}.log --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v -v --workaround-gcc296-bugs=yes $EXE_DIR/taosd -c $CFG_DIR > /dev/null 2>&1 &" + #nohup valgrind --log-file=${LOG_DIR}/valgrind-taosd-${NODE_NAME}-${TT}.log --gen-suppressions=all --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v -v --workaround-gcc296-bugs=yes $EXE_DIR/taosd -c $CFG_DIR > /dev/null 2>&1 & + echo "nohup valgrind --log-file=${LOG_DIR}/valgrind-taosd-${NODE_NAME}-${TT}.log --suppressions=${SCRIPT_DIR}/local.supp --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v -v --workaround-gcc296-bugs=yes $EXE_DIR/taosd -c $CFG_DIR > /dev/null 2>&1 &" + nohup valgrind --log-file=${LOG_DIR}/valgrind-taosd-${NODE_NAME}-${TT}.log --suppressions=${SCRIPT_DIR}/local.supp --tool=memcheck --leak-check=full --show-reachable=no --track-origins=yes --show-leak-kinds=all --num-callers=20 -v -v --workaround-gcc296-bugs=yes $EXE_DIR/taosd -c $CFG_DIR > /dev/null 2>&1 & else echo "nohup $EXE_DIR/taosd -c $CFG_DIR > /dev/null 2>&1 &" nohup $EXE_DIR/taosd -c $CFG_DIR > /dev/null 2> $ASAN_DIR/$NODE_NAME.asan & From cf57a3f3f6a012c05e1db43de6e86b3ec83b3c2d Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 16:30:40 +0800 Subject: [PATCH 064/156] Update 10-function.md --- docs/en/12-taos-sql/10-function.md | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/docs/en/12-taos-sql/10-function.md b/docs/en/12-taos-sql/10-function.md index 5c1a833e05..0b5f9103c7 100644 --- a/docs/en/12-taos-sql/10-function.md +++ b/docs/en/12-taos-sql/10-function.md @@ -5,9 +5,9 @@ description: This document describes the standard SQL functions available in TDe toc_max_heading_level: 4 --- -## Single Row Functions +## Scalar Functions -Single row functions return a result for each row. +Scalar functions return one result for each row. ### Mathematical Functions From a98cf1277f25097d948667cbf084a28a4dc3cd92 Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 16:33:23 +0800 Subject: [PATCH 065/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 34 ++++++++++++++++++++++++++++++-- 1 file changed, 32 insertions(+), 2 deletions(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index b8f6c3a163..eed1713a3c 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -13,8 +13,11 @@ Because stream processing is built in to TDengine, you are no longer reliant on ```sql CREATE STREAM [IF NOT EXISTS] stream_name [stream_options] INTO stb_name SUBTABLE(expression) AS subquery stream_options: { - TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time] - WATERMARK time + TRIGGER [AT_ONCE | WINDOW_CLOSE | MAX_DELAY time] + WATERMARK time + IGNORE EXPIRED [0|1] + DELETE_MARK time + FILL_HISTORY [0|1] } ``` @@ -141,3 +144,30 @@ The data in expired windows is tagged as expired. TDengine stream processing pro 2. Recalculate the data. In this method, all data in the window is reobtained from the database and recalculated. The latest results are then returned. In both of these methods, configuring the watermark is essential for obtaining accurate results (if expired data is dropped) and avoiding repeated triggers that affect system performance (if expired data is recalculated). + +## Supported functions + +All scalar functions are available in stream processing. +All System Information Functions are not available for stream processing. +All aggregate and selection functions are available in stream processing, except following functions + +- [leastsquares](../function/#leastsquares) +- [percentile](../function/#percentile) +- [top](../function/#leastsquares) +- [bottom](../function/#top) +- [elapsed](../function/#leastsquares) +- [interp](../function/#elapsed) +- [derivative](../function/#derivative) +- [irate](../function/#irate) +- [twa](../function/#twa) +- [histogram](../function/#histogram) +- [diff](../function/#diff) +- [statecount](../function/#statecount) +- [stateduration](../function/#stateduration) +- [csum](../function/#csum) +- [mavg](../function/#mavg) +- [sample](../function/#sample) +- [tail](../function/#tail) +- [unique](../function/#unique) +- [mode](../function/#mode) + From 24e8ff79702d290aec3e61d637c26261d6db07b8 Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 16:35:45 +0800 Subject: [PATCH 066/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index eed1713a3c..9d1b63ce81 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -147,7 +147,7 @@ In both of these methods, configuring the watermark is essential for obtaining a ## Supported functions -All scalar functions are available in stream processing. +All [scalar functions](../function/#scalar-functions) are available in stream processing. All System Information Functions are not available for stream processing. All aggregate and selection functions are available in stream processing, except following functions From 7113ded06a2ae7c335fd5e2bdf014e1bbdfdc166 Mon Sep 17 00:00:00 2001 From: Shuduo Sang Date: Wed, 26 Apr 2023 16:48:21 +0800 Subject: [PATCH 067/156] docs: fix typos and use 3.1.0 in java connector for3.0 (#21095) * docs: update csharp connector status * docs: fix csharp ws bulk pulling * docs: clarify database param is optional to websocket dsn * docs: fix java connector mistake * fix: a few typos * fix: many typos * docs: java connector support subscription over websocket * fix: update 3.0 connector feature matrix * docs: fix typos and use 3.1.0 in jdbc driver --- docs/en/12-taos-sql/07-tag-index.md | 4 ++-- docs/en/12-taos-sql/29-changes.md | 2 +- docs/en/14-reference/03-connector/03-cpp.mdx | 6 +++--- docs/en/14-reference/03-connector/04-java.mdx | 8 ++++---- docs/zh/08-connector/14-java.mdx | 8 ++++---- 5 files changed, 14 insertions(+), 14 deletions(-) diff --git a/docs/en/12-taos-sql/07-tag-index.md b/docs/en/12-taos-sql/07-tag-index.md index cb2a61d3e8..af1d0a352e 100644 --- a/docs/en/12-taos-sql/07-tag-index.md +++ b/docs/en/12-taos-sql/07-tag-index.md @@ -6,7 +6,7 @@ description: Use Tag Index to Improve Query Performance ## Introduction -Prior to TDengine 3.0.3.0 (excluded),only one index is created by default on the first tag of each super talbe, but it's not allowed to dynamically create index on any other tags. From version 3.0.30, you can dynamically create index on any tag of any type. The index created automatically by TDengine is still valid. Query performance can benefit from indexes if you use properly. +Prior to TDengine 3.0.3.0 (excluded),only one index is created by default on the first tag of each super table, but it's not allowed to dynamically create index on any other tags. From version 3.0.30, you can dynamically create index on any tag of any type. The index created automatically by TDengine is still valid. Query performance can benefit from indexes if you use properly. ## Syntax @@ -48,4 +48,4 @@ You can also add filter conditions to limit the results. 6. You can' create index on a normal table or a child table. -7. If the unique values of a tag column are too few, it's better not to create index on such tag columns, the benefit would be very small. \ No newline at end of file +7. If the unique values of a tag column are too few, it's better not to create index on such tag columns, the benefit would be very small. diff --git a/docs/en/12-taos-sql/29-changes.md b/docs/en/12-taos-sql/29-changes.md index f4606f263f..086aee59fe 100644 --- a/docs/en/12-taos-sql/29-changes.md +++ b/docs/en/12-taos-sql/29-changes.md @@ -27,7 +27,7 @@ The following data types can be used in the schema for standard tables. | - | :------- | :-------- | :------- | | 1 | ALTER ACCOUNT | Deprecated| This Enterprise Edition-only statement has been removed. It returns the error "This statement is no longer supported." | 2 | ALTER ALL DNODES | Added | Modifies the configuration of all dnodes. -| 3 | ALTER DATABASE | Modified | Deprecated
  • QUORUM: Specified the required number of confirmations. TDengine 3.0 provides strict consistency by default and doesn't allow to change to weak consitency.
  • BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode.
  • UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns.
  • CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST.
  • COMP: Cannot be modified.
    Added
  • CACHEMODEL: Specifies whether to cache the latest subtable data.
  • CACHESIZE: Specifies the size of the cache for the newest subtable data.
  • WAL_FSYNC_PERIOD: Replaces the FSYNC parameter.
  • WAL_LEVEL: Replaces the WAL parameter.
  • WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription.
  • WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription.
    Modified
  • REPLICA: Cannot be modified.
  • KEEP: Now supports units.
+| 3 | ALTER DATABASE | Modified | Deprecated
  • QUORUM: Specified the required number of confirmations. TDengine 3.0 provides strict consistency by default and doesn't allow to change to weak consistency.
  • BLOCKS: Specified the memory blocks used by each vnode. BUFFER is now used to specify the size of the write cache pool for each vnode.
  • UPDATE: Specified whether update operations were supported. All databases now support updating data in certain columns.
  • CACHELAST: Specified how to cache the newest row of data. CACHEMODEL now replaces CACHELAST.
  • COMP: Cannot be modified.
    Added
  • CACHEMODEL: Specifies whether to cache the latest subtable data.
  • CACHESIZE: Specifies the size of the cache for the newest subtable data.
  • WAL_FSYNC_PERIOD: Replaces the FSYNC parameter.
  • WAL_LEVEL: Replaces the WAL parameter.
  • WAL_RETENTION_PERIOD: specifies the time after which WAL files are deleted. This parameter is used for data subscription.
  • WAL_RETENTION_SIZE: specifies the size at which WAL files are deleted. This parameter is used for data subscription.
    Modified
  • REPLICA: Cannot be modified.
  • KEEP: Now supports units.
| 4 | ALTER STABLE | Modified | Deprecated
  • CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG.
    Added
  • RENAME TAG: Replaces CHANGE TAG.
  • COMMENT: Specifies comments for a supertable.
| 5 | ALTER TABLE | Modified | Deprecated
  • CHANGE TAG: Modified the name of a tag. Replaced by RENAME TAG.
    Added
  • RENAME TAG: Replaces CHANGE TAG.
  • COMMENT: Specifies comments for a standard table.
  • TTL: Specifies the time-to-live for a standard table.
| 6 | ALTER USER | Modified | Deprecated
  • PRIVILEGE: Specified user permissions. Replaced by GRANT and REVOKE.
    Added
  • ENABLE: Enables or disables a user.
  • SYSINFO: Specifies whether a user can query system information.
diff --git a/docs/en/14-reference/03-connector/03-cpp.mdx b/docs/en/14-reference/03-connector/03-cpp.mdx index b543879b3c..014109b77e 100644 --- a/docs/en/14-reference/03-connector/03-cpp.mdx +++ b/docs/en/14-reference/03-connector/03-cpp.mdx @@ -423,6 +423,6 @@ In addition to writing data using the SQL method or the parameter binding API, w **Description** - The above seven interfaces are extension interfaces, which are mainly used to pass ttl and reqid parameters, and can be used as needed. - - Withing _raw interfaces represent data through the passed parameters lines and len. In order to solve the problem that the original interface data contains '\0' and is truncated. The totalRows pointer returns the number of parsed data rows. - - Withing _ttl interfaces can pass the ttl parameter to control the ttl expiration time of the table. - - Withing _reqid interfaces can track the entire call chain by passing the reqid parameter. + - Within _raw interfaces represent data through the passed parameters lines and len. In order to solve the problem that the original interface data contains '\0' and is truncated. The totalRows pointer returns the number of parsed data rows. + - Within _ttl interfaces can pass the ttl parameter to control the ttl expiration time of the table. + - Within _reqid interfaces can track the entire call chain by passing the reqid parameter. diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx index fd4b4641d7..85d79ef0e0 100644 --- a/docs/en/14-reference/03-connector/04-java.mdx +++ b/docs/en/14-reference/03-connector/04-java.mdx @@ -82,7 +82,7 @@ Add following dependency in the `pom.xml` file of your Maven project: com.taosdata.jdbc taos-jdbcdriver - 3.0.0 + 3.1.0 ``` @@ -227,7 +227,7 @@ In addition to getting the connection from the specified URL, you can use Proper Note: - The client parameter set in the application is process-level. If you want to update the parameters of the client, you need to restart the application. This is because the client parameter is a global parameter that takes effect only the first time the application is set. -- The following sample code is based on taos-jdbcdriver-3.0.0. +- The following sample code is based on taos-jdbcdriver-3.1.0. ```java public Connection getConn() throws Exception{ @@ -364,7 +364,7 @@ TDengine has significantly improved the bind APIs to support data writing (INSER **Note:** - JDBC REST connections do not currently support bind interface -- The following sample code is based on taos-jdbcdriver-3.0.0 +- The following sample code is based on taos-jdbcdriver-3.1.0 - The setString method should be called for binary type data, and the setNString method should be called for nchar type data - both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition @@ -632,7 +632,7 @@ TDengine supports schemaless writing. It is compatible with InfluxDB's Line Prot Note: - JDBC REST connections do not currently support schemaless writes -- The following sample code is based on taos-jdbcdriver-3.0.0 +- The following sample code is based on taos-jdbcdriver-3.1.0 ```java public class SchemalessInsertTest { diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index d1c1258365..8a9bba7fe7 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -82,7 +82,7 @@ Maven 项目中,在 pom.xml 中添加以下依赖: com.taosdata.jdbc taos-jdbcdriver - 3.0.0 + 3.1.0 ``` @@ -230,7 +230,7 @@ INSERT INTO test.t1 USING test.weather (ts, temperature) TAGS('California.SanFra **注意**: - 应用中设置的 client parameter 为进程级别的,即如果要更新 client 的参数,需要重启应用。这是因为 client parameter 是全局参数,仅在应用程序的第一次设置生效。 -- 以下示例代码基于 taos-jdbcdriver-3.0.0。 +- 以下示例代码基于 taos-jdbcdriver-3.1.0。 ```java public Connection getConn() throws Exception{ @@ -367,7 +367,7 @@ TDengine 的 JDBC 原生连接实现大幅改进了参数绑定方式对数据 **注意**: - JDBC REST 连接目前不支持参数绑定 -- 以下示例代码基于 taos-jdbcdriver-3.0.0 +- 以下示例代码基于 taos-jdbcdriver-3.1.0 - binary 类型数据需要调用 setString 方法,nchar 类型数据需要调用 setNString 方法 - setString 和 setNString 都要求用户在 size 参数里声明表定义中对应列的列宽 @@ -635,7 +635,7 @@ TDengine 支持无模式写入功能。无模式写入兼容 InfluxDB 的 行协 **注意**: - JDBC REST 连接目前不支持无模式写入 -- 以下示例代码基于 taos-jdbcdriver-3.0.0 +- 以下示例代码基于 taos-jdbcdriver-3.1.0 ```java public class SchemalessInsertTest { From f7141fdebb7d7e63c27715d70da36d30423f4cb2 Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 16:53:52 +0800 Subject: [PATCH 068/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index 9d1b63ce81..53d61ddde4 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -149,7 +149,7 @@ In both of these methods, configuring the watermark is essential for obtaining a All [scalar functions](../function/#scalar-functions) are available in stream processing. All System Information Functions are not available for stream processing. -All aggregate and selection functions are available in stream processing, except following functions +All [aggregate and selection functions](../function/#system-information-functions) are available in stream processing, except following functions - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) From cf9567ac966fe57d57579d3066949e4c9e4c60f2 Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 17:04:37 +0800 Subject: [PATCH 069/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 43 +++++++++++++++----------------- 1 file changed, 20 insertions(+), 23 deletions(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index 53d61ddde4..0e013ed846 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -147,27 +147,24 @@ In both of these methods, configuring the watermark is essential for obtaining a ## Supported functions -All [scalar functions](../function/#scalar-functions) are available in stream processing. -All System Information Functions are not available for stream processing. -All [aggregate and selection functions](../function/#system-information-functions) are available in stream processing, except following functions - -- [leastsquares](../function/#leastsquares) -- [percentile](../function/#percentile) -- [top](../function/#leastsquares) -- [bottom](../function/#top) -- [elapsed](../function/#leastsquares) -- [interp](../function/#elapsed) -- [derivative](../function/#derivative) -- [irate](../function/#irate) -- [twa](../function/#twa) -- [histogram](../function/#histogram) -- [diff](../function/#diff) -- [statecount](../function/#statecount) -- [stateduration](../function/#stateduration) -- [csum](../function/#csum) -- [mavg](../function/#mavg) -- [sample](../function/#sample) -- [tail](../function/#tail) -- [unique](../function/#unique) -- [mode](../function/#mode) +All [scalar functions](../function/#scalar-functions) are available in stream processing. All [System Information Functions](../function/#system-information-functions) are not allowed in stream processing. All [Aggregate functions](function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the followings: + - [leastsquares](../function/#leastsquares) + - [percentile](../function/#percentile) + - [top](../function/#leastsquares) + - [bottom](../function/#top) + - [elapsed](../function/#leastsquares) + - [interp](../function/#elapsed) + - [derivative](../function/#derivative) + - [irate](../function/#irate) + - [twa](../function/#twa) + - [histogram](../function/#histogram) + - [diff](../function/#diff) + - [statecount](../function/#statecount) + - [stateduration](../function/#stateduration) + - [csum](../function/#csum) + - [mavg](../function/#mavg) + - [sample](../function/#sample) + - [tail](../function/#tail) + - [unique](../function/#unique) + - [mode](../function/#mode) From e4d40489e77db0094bd66d24bd0a67633f2440fd Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 17:13:20 +0800 Subject: [PATCH 070/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index 0e013ed846..0fe2c506a6 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -147,7 +147,7 @@ In both of these methods, configuring the watermark is essential for obtaining a ## Supported functions -All [scalar functions](../function/#scalar-functions) are available in stream processing. All [System Information Functions](../function/#system-information-functions) are not allowed in stream processing. All [Aggregate functions](function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the followings: +All [scalar functions](../function/#scalar-functions) are available in stream processing. All [System Information Functions](../function/#system-information-functions) are not allowed in stream processing. All [Aggregate functions](../function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the followings: - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) - [top](../function/#leastsquares) From ebf5a420e87f725a4caf333a54072e1c23436f68 Mon Sep 17 00:00:00 2001 From: liuyao <38781207+54liuyao@users.noreply.github.com> Date: Wed, 26 Apr 2023 17:19:47 +0800 Subject: [PATCH 071/156] Update 14-stream.md --- docs/zh/12-taos-sql/14-stream.md | 14 ++++++++++++-- 1 file changed, 12 insertions(+), 2 deletions(-) diff --git a/docs/zh/12-taos-sql/14-stream.md b/docs/zh/12-taos-sql/14-stream.md index b08804e5c4..86e2aa9a8b 100644 --- a/docs/zh/12-taos-sql/14-stream.md +++ b/docs/zh/12-taos-sql/14-stream.md @@ -15,6 +15,7 @@ stream_options: { IGNORE EXPIRED [0|1] DELETE_MARK time FILL_HISTORY [0|1] + IGNORE UPDATE [0|1] } ``` @@ -169,7 +170,7 @@ T3 时刻,最新事件到达,T 向后推移超过了第二个窗口关闭的 在 window_close 或 max_delay 模式下,窗口关闭直接影响推送结果。在 at_once 模式下,窗口关闭只与内存占用有关。 -## 流式计算的过期数据处理策略 +## 流式计算对于过期数据的处理策略 对于已关闭的窗口,再次落入该窗口中的数据被标记为过期数据. @@ -177,11 +178,20 @@ TDengine 对于过期数据提供两种处理方式,由 IGNORE EXPIRED 选项 1. 重新计算,即 IGNORE EXPIRED 0:从 TSDB 中重新查找对应窗口的所有数据并重新计算得到最新结果 -2. 直接丢弃, 即 IGNORE EXPIRED 1:默认配置,忽略过期数据 +2. 直接丢弃,即 IGNORE EXPIRED 1:默认配置,忽略过期数据 无论在哪种模式下,watermark 都应该被妥善设置,来得到正确结果(直接丢弃模式)或避免频繁触发重算带来的性能开销(重新计算模式)。 +## 流式计算对于修改数据的处理策略 + +TDengine 对于修改数据提供两种处理方式,由 IGNORE UPDATE 选项指定: + +1. 检查数据是否被修改,即 IGNORE UPDATE 0:默认配置,如果被修改,则重新计算对应窗口。 + +2. 不检查数据是否被修改,全部按增量数据计算,即 IGNORE UPDATE 1。 + + ## 写入已存在的超级表 ```sql [field1_name,...] From 457ed296d3384f28d9a399f509b5ef87f03dc1af Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 17:21:45 +0800 Subject: [PATCH 072/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index 0fe2c506a6..de3fd1f4be 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -147,7 +147,7 @@ In both of these methods, configuring the watermark is essential for obtaining a ## Supported functions -All [scalar functions](../function/#scalar-functions) are available in stream processing. All [System Information Functions](../function/#system-information-functions) are not allowed in stream processing. All [Aggregate functions](../function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the followings: +All [scalar functions](../function/#scalar-functions) are available in stream processing. All [System information functions](../function/#system-information-functions) are not allowed in stream processing. All [Aggregate functions](../function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the followings: - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) - [top](../function/#leastsquares) From 892cdfc6562a62ee45d2d0bb898a7f4bc06b557e Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 26 Apr 2023 17:36:12 +0800 Subject: [PATCH 073/156] rocks/cmake: fix building on mac --- cmake/cmake.define | 10 ++++++++-- cmake/cmake.options | 2 +- contrib/CMakeLists.txt | 12 ++++++------ 3 files changed, 15 insertions(+), 9 deletions(-) diff --git a/cmake/cmake.define b/cmake/cmake.define index 5b65738c70..3038bf975e 100644 --- a/cmake/cmake.define +++ b/cmake/cmake.define @@ -120,8 +120,14 @@ ELSE () SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -fsanitize=address -fsanitize=undefined -fsanitize-recover=all -fsanitize=float-divide-by-zero -fsanitize=float-cast-overflow -fno-sanitize=shift-base -fno-sanitize=alignment -g3 -Wformat=0") MESSAGE(STATUS "Compile with Address Sanitizer!") ELSE () - SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k") - SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k") + MESSAGE(STATUS "XXXXXXXXXXXXXX Clang/AppleClang" ${TD_DARWIN}) + IF (${TD_DARWIN}) + SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-y2k") + SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-y2k") + ELSE () + SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k") + SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k") + ENDIF () ENDIF () # disable all assert diff --git a/cmake/cmake.options b/cmake/cmake.options index 60ff00affc..4ec9d18e08 100644 --- a/cmake/cmake.options +++ b/cmake/cmake.options @@ -109,7 +109,7 @@ option( option( BUILD_WITH_ROCKSDB "If build with rocksdb" - OFF + ON ) option( diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index ed6120bd30..6cd6f79d8b 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -78,10 +78,10 @@ if(${BUILD_WITH_LEVELDB}) endif(${BUILD_WITH_LEVELDB}) # rocksdb -#if(${BUILD_WITH_ROCKSDB}) -cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) -add_definitions(-DUSE_ROCKSDB) -#endif(${BUILD_WITH_ROCKSDB}) +if(${BUILD_WITH_ROCKSDB}) + cat("${TD_SUPPORT_DIR}/rocksdb_CMakeLists.txt.in" ${CONTRIB_TMP_FILE}) + add_definitions(-DUSE_ROCKSDB) +endif(${BUILD_WITH_ROCKSDB}) # canonical-raft if(${BUILD_WITH_CRAFT}) @@ -222,7 +222,7 @@ endif(${BUILD_WITH_LEVELDB}) # rocksdb # To support rocksdb build on ubuntu: sudo apt-get install libgflags-dev -#if(${BUILD_WITH_ROCKSDB}) +if(${BUILD_WITH_ROCKSDB}) SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized") option(WITH_TESTS "" OFF) option(WITH_BENCHMARK_TOOLS "" OFF) @@ -234,7 +234,7 @@ endif(${BUILD_WITH_LEVELDB}) rocksdb PUBLIC $ ) -#endif(${BUILD_WITH_ROCKSDB}) +endif(${BUILD_WITH_ROCKSDB}) # lucene # To support build on ubuntu: sudo apt-get install libboost-all-dev From 10e3b4ab4a76cac05b2ee9814e0484246c8d71bf Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 26 Apr 2023 17:40:35 +0800 Subject: [PATCH 074/156] cmake/contrib: remove debug messages --- contrib/CMakeLists.txt | 1 - contrib/test/CMakeLists.txt | 2 -- 2 files changed, 3 deletions(-) diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index 6cd6f79d8b..8f94313382 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -438,7 +438,6 @@ endif(${BUILD_ADDR2LINE}) # ================================================================================================ # Build test # ================================================================================================ -message("contrib tests:" ${BUILD_DEPENDENCY_TESTS}) if(${BUILD_DEPENDENCY_TESTS}) add_subdirectory(test EXCLUDE_FROM_ALL) endif(${BUILD_DEPENDENCY_TESTS}) diff --git a/contrib/test/CMakeLists.txt b/contrib/test/CMakeLists.txt index 911eca50a7..f35cf0d13d 100644 --- a/contrib/test/CMakeLists.txt +++ b/contrib/test/CMakeLists.txt @@ -1,6 +1,4 @@ # rocksdb -message("contrib test dir:" ${BUILD_WITH_ROCKSDB}) - if(${BUILD_WITH_ROCKSDB}) add_subdirectory(rocksdb) endif(${BUILD_WITH_ROCKSDB}) From cb43ac024b576fea9ef872f71a1a42a05cf71f6e Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 19:06:44 +0800 Subject: [PATCH 075/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index de3fd1f4be..e9870697b4 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -150,10 +150,10 @@ In both of these methods, configuring the watermark is essential for obtaining a All [scalar functions](../function/#scalar-functions) are available in stream processing. All [System information functions](../function/#system-information-functions) are not allowed in stream processing. All [Aggregate functions](../function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the followings: - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) - - [top](../function/#leastsquares) - - [bottom](../function/#top) - - [elapsed](../function/#leastsquares) - - [interp](../function/#elapsed) + - [top](../function/#top) + - [bottom](../function/#tbottomop) + - [elapsed](../function/#elapsed) + - [interp](../function/#interp) - [derivative](../function/#derivative) - [irate](../function/#irate) - [twa](../function/#twa) From 027eff9768fe40f214834b591bee0e195932fe22 Mon Sep 17 00:00:00 2001 From: Haojun Liao Date: Wed, 26 Apr 2023 19:08:21 +0800 Subject: [PATCH 076/156] Update 14-stream.md --- docs/zh/12-taos-sql/14-stream.md | 8 ++++---- 1 file changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/zh/12-taos-sql/14-stream.md b/docs/zh/12-taos-sql/14-stream.md index 86e2aa9a8b..d90db3cab0 100644 --- a/docs/zh/12-taos-sql/14-stream.md +++ b/docs/zh/12-taos-sql/14-stream.md @@ -231,10 +231,10 @@ T = 最新事件时间 - DELETE_MARK - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) -- [top](../function/#leastsquares) -- [bottom](../function/#top) -- [elapsed](../function/#leastsquares) -- [interp](../function/#elapsed) +- [top](../function/#top) +- [bottom](../function/#bottom) +- [elapsed](../function/#elapsed) +- [interp](../function/#interp) - [derivative](../function/#derivative) - [irate](../function/#irate) - [twa](../function/#twa) From 8e9936ed32cf84fa8e6794aab2210c3dfcfb741f Mon Sep 17 00:00:00 2001 From: wade zhang <95411902+gccgdb1234@users.noreply.github.com> Date: Thu, 27 Apr 2023 08:09:39 +0800 Subject: [PATCH 077/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index e9870697b4..d99c5bdae4 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -151,7 +151,7 @@ All [scalar functions](../function/#scalar-functions) are available in stream pr - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) - [top](../function/#top) - - [bottom](../function/#tbottomop) + - [bottom](../function/#bottom) - [elapsed](../function/#elapsed) - [interp](../function/#interp) - [derivative](../function/#derivative) From c93d1f933b4380097e6a6b5a973d139569b53b91 Mon Sep 17 00:00:00 2001 From: Xuefeng Tan <1172915550@qq.com> Date: Thu, 27 Apr 2023 09:14:26 +0800 Subject: [PATCH 078/156] fix(taosAdapter): tmq lift blocking time limit (#21101) --- cmake/taosadapter_CMakeLists.txt.in | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/cmake/taosadapter_CMakeLists.txt.in b/cmake/taosadapter_CMakeLists.txt.in index ba937b40c1..4a8f4864b3 100644 --- a/cmake/taosadapter_CMakeLists.txt.in +++ b/cmake/taosadapter_CMakeLists.txt.in @@ -2,7 +2,7 @@ # taosadapter ExternalProject_Add(taosadapter GIT_REPOSITORY https://github.com/taosdata/taosadapter.git - GIT_TAG e02ddb2 + GIT_TAG ae8d51c SOURCE_DIR "${TD_SOURCE_DIR}/tools/taosadapter" BINARY_DIR "" #BUILD_IN_SOURCE TRUE From 1af58ecd03f212f37c939ad87c30c1189380a365 Mon Sep 17 00:00:00 2001 From: xiaolei li <85657333+xleili@users.noreply.github.com> Date: Thu, 27 Apr 2023 09:50:45 +0800 Subject: [PATCH 079/156] enhance: enterprise pull jdbc from tag 3.1.0 (#21103) --- packaging/tools/makepkg.sh | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/packaging/tools/makepkg.sh b/packaging/tools/makepkg.sh index a590835257..3da005c405 100755 --- a/packaging/tools/makepkg.sh +++ b/packaging/tools/makepkg.sh @@ -341,7 +341,7 @@ if [ "$verMode" == "cluster" ] || [ "$verMode" == "cloud" ]; then tmp_pwd=`pwd` cd ${install_dir}/connector if [ ! -d taos-connector-jdbc ];then - git clone -b main --depth=1 https://github.com/taosdata/taos-connector-jdbc.git ||: + git clone -b 3.1.0 --depth=1 https://github.com/taosdata/taos-connector-jdbc.git ||: fi cd taos-connector-jdbc mvn clean package -Dmaven.test.skip=true From 1e29384de4d89f949654a8a3414b894f6a971714 Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Tue, 4 Apr 2023 09:18:59 +0800 Subject: [PATCH 080/156] fix: show vgroups properly after split vgroup of multi-replicas --- source/dnode/mnode/impl/src/mndVgroup.c | 25 +++++++++++++------------ 1 file changed, 13 insertions(+), 12 deletions(-) diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index f0bece6e5e..da30ebcc18 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -2173,23 +2173,24 @@ static int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj // adjust vgroup replica if (pDb->cfg.replications != newVg1.replica) { if (mndBuildAlterVgroupAction(pMnode, pTrans, pDb, pDb, &newVg1, pArray) != 0) goto _OVER; + } else { + pRaw = mndVgroupActionEncode(&newVg1); + if (pRaw == NULL) goto _OVER; + if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER; + (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY); + pRaw = NULL; } + if (pDb->cfg.replications != newVg2.replica) { if (mndBuildAlterVgroupAction(pMnode, pTrans, pDb, pDb, &newVg2, pArray) != 0) goto _OVER; + } else { + pRaw = mndVgroupActionEncode(&newVg2); + if (pRaw == NULL) goto _OVER; + if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER; + (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY); + pRaw = NULL; } - pRaw = mndVgroupActionEncode(&newVg1); - if (pRaw == NULL) goto _OVER; - if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER; - (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY); - pRaw = NULL; - - pRaw = mndVgroupActionEncode(&newVg2); - if (pRaw == NULL) goto _OVER; - if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER; - (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY); - pRaw = NULL; - pRaw = mndVgroupActionEncode(pVgroup); if (pRaw == NULL) goto _OVER; if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER; From 9ce8915c7a38d8d53768a970243a27a75d1eef2b Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Tue, 4 Apr 2023 10:27:31 +0800 Subject: [PATCH 081/156] enh: sort vnodeGid by dnodeId after vgroup altered --- source/dnode/mnode/impl/src/mndVgroup.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index da30ebcc18..b80b0a698a 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -2084,6 +2084,8 @@ int32_t mndBuildAlterVgroupAction(SMnode *pMnode, STrans *pTrans, SDbObj *pOldDb return -1; } + mndSortVnodeGid(&newVgroup); + { SSdbRaw *pVgRaw = mndVgroupActionEncode(&newVgroup); if (pVgRaw == NULL) return -1; From 42eafc0f17c9098f965c0ae6c46e011e1c413f06 Mon Sep 17 00:00:00 2001 From: dingbo8128 Date: Thu, 27 Apr 2023 14:48:57 +0800 Subject: [PATCH 082/156] docs:fix compile error --- docs/zh/05-get-started/03-package.md | 3 ++- docs/zh/12-taos-sql/29-changes.md | 4 ++-- 2 files changed, 4 insertions(+), 3 deletions(-) diff --git a/docs/zh/05-get-started/03-package.md b/docs/zh/05-get-started/03-package.md index af5f1acae6..d9c64cf37a 100644 --- a/docs/zh/05-get-started/03-package.md +++ b/docs/zh/05-get-started/03-package.md @@ -100,7 +100,8 @@ sudo apt-get install tdengine :::tip apt-get 方式只适用于 Debian 或 Ubuntu 系统。 -:::: +::: + diff --git a/docs/zh/12-taos-sql/29-changes.md b/docs/zh/12-taos-sql/29-changes.md index af45d84ff2..f1952664c2 100644 --- a/docs/zh/12-taos-sql/29-changes.md +++ b/docs/zh/12-taos-sql/29-changes.md @@ -27,13 +27,13 @@ description: "TDengine 3.0 版本的语法变更说明" | - | :------- | :-------- | :------- | | 1 | ALTER ACCOUNT | 废除 | 2.x中为企业版功能,3.0不再支持。语法暂时保留了,执行报“This statement is no longer supported”错误。 | 2 | ALTER ALL DNODES | 新增 | 修改所有DNODE的参数。 -| 3 | ALTER DATABASE | 调整 | 废除
  • QUORUM:写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。3.0.0版本STRICT暂不支持修改。
  • BLOCKS:VNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。
  • UPDATE:更新操作的支持模式。3.0版本所有数据库都支持部分列更新。
  • CACHELAST:缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。
  • COMP:3.0版本暂不支持修改。
    新增
  • CACHEMODEL:表示是否在内存中缓存子表的最近数据。
  • CACHESIZE:表示缓存子表最近数据的内存大小。
  • WAL_FSYNC_PERIOD:代替原FSYNC参数。
  • WAL_LEVEL:代替原WAL参数。
  • WAL_RETENTION_PERIOD:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。
  • WAL_RETENTION_SIZE:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。
    调整
  • REPLICA:3.0.0版本暂不支持修改。
  • KEEP:3.0版本新增支持带单位的设置方式。
+| 3 | ALTER DATABASE | 调整 |

废除

  • QUORUM:写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。3.0.0版本STRICT暂不支持修改。
  • BLOCKS:VNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。
  • UPDATE:更新操作的支持模式。3.0版本所有数据库都支持部分列更新。
  • CACHELAST:缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。
  • COMP:3.0版本暂不支持修改。

新增

  • CACHEMODEL:表示是否在内存中缓存子表的最近数据。
  • CACHESIZE:表示缓存子表最近数据的内存大小。
  • WAL_FSYNC_PERIOD:代替原FSYNC参数。
  • WAL_LEVEL:代替原WAL参数。
  • WAL_RETENTION_PERIOD:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。
  • WAL_RETENTION_SIZE:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。

调整

  • REPLICA:3.0.0版本暂不支持修改。
  • KEEP:3.0版本新增支持带单位的设置方式。
| 4 | ALTER STABLE | 调整 | 废除
  • CHANGE TAG:修改标签列的名称。3.0版本使用RENAME TAG代替。
    新增
  • RENAME TAG:代替原CHANGE TAG子句。
  • COMMENT:修改超级表的注释。
| 5 | ALTER TABLE | 调整 | 废除
  • CHANGE TAG:修改标签列的名称。3.0版本使用RENAME TAG代替。
    新增
  • RENAME TAG:代替原CHANGE TAG子句。
  • COMMENT:修改表的注释。
  • TTL:修改表的生命周期。
| 6 | ALTER USER | 调整 | 废除
  • PRIVILEGE:修改用户权限。3.0版本使用GRANT和REVOKE来授予和回收权限。
    新增
  • ENABLE:启用或停用此用户。
  • SYSINFO:修改用户是否可查看系统信息。
| 7 | COMPACT VNODES | 暂不支持 | 整理指定VNODE的数据。3.0.0版本暂不支持。 | 8 | CREATE ACCOUNT | 废除 | 2.x中为企业版功能,3.0不再支持。语法暂时保留了,执行报“This statement is no longer supported”错误。 -| 9 | CREATE DATABASE | 调整 | 废除
  • BLOCKS:VNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。
  • CACHE:VNODE使用的内存块的大小。3.0版本使用BUFFER来表示VNODE写入内存池的大小。
  • CACHELAST:缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。
  • DAYS:数据文件存储数据的时间跨度。3.0版本使用DURATION代替。
  • FSYNC:当 WAL 设置为 2 时,执行 fsync 的周期。3.0版本使用WAL_FSYNC_PERIOD代替。
  • QUORUM:写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。
  • UPDATE:更新操作的支持模式。3.0版本所有数据库都支持部分列更新。
  • WAL:WAL 级别。3.0版本使用WAL_LEVEL代替。
    新增
  • BUFFER:一个 VNODE 写入内存池大小。
  • CACHEMODEL:表示是否在内存中缓存子表的最近数据。
  • CACHESIZE:表示缓存子表最近数据的内存大小。
  • DURATION:代替原DAYS参数。新增支持带单位的设置方式。
  • PAGES:一个 VNODE 中元数据存储引擎的缓存页个数。
  • PAGESIZE:一个 VNODE 中元数据存储引擎的页大小。
  • RETENTIONS:表示数据的聚合周期和保存时长。
  • STRICT:表示数据同步的一致性要求。
  • SINGLE_STABLE:表示此数据库中是否只可以创建一个超级表。
  • VGROUPS:数据库中初始VGROUP的数目。
  • WAL_FSYNC_PERIOD:代替原FSYNC参数。
  • WAL_LEVEL:代替原WAL参数。
  • WAL_RETENTION_PERIOD:wal文件的额外保留策略,用于数据订阅。
  • WAL_RETENTION_SIZE:wal文件的额外保留策略,用于数据订阅。
  • WAL_ROLL_PERIOD:wal文件切换时长。
  • WAL_SEGMENT_SIZE:wal单个文件大小。
    调整
  • KEEP:3.0版本新增支持带单位的设置方式。
+| 9 | CREATE DATABASE | 调整 |

废除

  • BLOCKS:VNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。
  • CACHE:VNODE使用的内存块的大小。3.0版本使用BUFFER来表示VNODE写入内存池的大小。
  • CACHELAST:缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。
  • DAYS:数据文件存储数据的时间跨度。3.0版本使用DURATION代替。
  • FSYNC:当 WAL 设置为 2 时,执行 fsync 的周期。3.0版本使用WAL_FSYNC_PERIOD代替。
  • QUORUM:写入需要的副本确认数。3.0版本使用STRICT来指定强一致还是弱一致。
  • UPDATE:更新操作的支持模式。3.0版本所有数据库都支持部分列更新。
  • WAL:WAL 级别。3.0版本使用WAL_LEVEL代替。

新增

  • BUFFER:一个 VNODE 写入内存池大小。
  • CACHEMODEL:表示是否在内存中缓存子表的最近数据。
  • CACHESIZE:表示缓存子表最近数据的内存大小。
  • DURATION:代替原DAYS参数。新增支持带单位的设置方式。
  • PAGES:一个 VNODE 中元数据存储引擎的缓存页个数。
  • PAGESIZE:一个 VNODE 中元数据存储引擎的页大小。
  • RETENTIONS:表示数据的聚合周期和保存时长。
  • STRICT:表示数据同步的一致性要求。
  • SINGLE_STABLE:表示此数据库中是否只可以创建一个超级表。
  • VGROUPS:数据库中初始VGROUP的数目。
  • WAL_FSYNC_PERIOD:代替原FSYNC参数。
  • WAL_LEVEL:代替原WAL参数。
  • WAL_RETENTION_PERIOD:wal文件的额外保留策略,用于数据订阅。
  • WAL_RETENTION_SIZE:wal文件的额外保留策略,用于数据订阅。
  • WAL_ROLL_PERIOD:wal文件切换时长。
  • WAL_SEGMENT_SIZE:wal单个文件大小。

调整

  • KEEP:3.0版本新增支持带单位的设置方式。
| 10 | CREATE DNODE | 调整 | 新增主机名和端口号分开指定语法
  • CREATE DNODE dnode_host_name PORT port_val
| 11 | CREATE INDEX | 新增 | 创建SMA索引。 | 12 | CREATE MNODE | 新增 | 创建管理节点。 From 67eac1a76fd52c37ba4b05431ce9ea8a3289eceb Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Thu, 27 Apr 2023 15:24:07 +0800 Subject: [PATCH 083/156] enh: process split vgroup msg in non-blocking mode --- source/dnode/mnode/impl/src/mndVgroup.c | 7 ++++++- 1 file changed, 6 insertions(+), 1 deletion(-) diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index b80b0a698a..64e0f82a1f 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -2248,7 +2248,12 @@ static int32_t mndProcessSplitVgroupMsg(SRpcMsg *pReq) { if (pDb == NULL) goto _OVER; code = mndSplitVgroup(pMnode, pReq, pDb, pVgroup); - if (code == 0) code = TSDB_CODE_ACTION_IN_PROGRESS; + if (code != 0) { + mError("vgId:%d, failed to start to split vgroup since %s, db:%s", pVgroup->vgId, terrstr(), pDb->name); + goto _OVER; + } + + mInfo("vgId:%d, split vgroup started successfully. db:%s", pVgroup->vgId, pDb->name); _OVER: mndReleaseVgroup(pMnode, pVgroup); From ee745354231d220559688b45df200ee2de9aefaa Mon Sep 17 00:00:00 2001 From: xiaolei li <85657333+xleili@users.noreply.github.com> Date: Thu, 27 Apr 2023 17:01:49 +0800 Subject: [PATCH 084/156] docs: release 3.0.4.1 (#21109) * release: upgrade default version to 3.0.4.1 * docs: release 3.0.4.1 --- cmake/cmake.version | 2 +- docs/en/28-releases/01-tdengine.md | 4 ++++ docs/en/28-releases/02-tools.md | 4 ++++ docs/zh/28-releases/01-tdengine.md | 4 ++++ docs/zh/28-releases/02-tools.md | 3 +++ 5 files changed, 16 insertions(+), 1 deletion(-) diff --git a/cmake/cmake.version b/cmake/cmake.version index 232e86d891..3166a0695c 100644 --- a/cmake/cmake.version +++ b/cmake/cmake.version @@ -2,7 +2,7 @@ IF (DEFINED VERNUMBER) SET(TD_VER_NUMBER ${VERNUMBER}) ELSE () - SET(TD_VER_NUMBER "3.0.4.0") + SET(TD_VER_NUMBER "3.0.4.1") ENDIF () IF (DEFINED VERCOMPATIBLE) diff --git a/docs/en/28-releases/01-tdengine.md b/docs/en/28-releases/01-tdengine.md index acfbf6a0ba..fb31ad67c0 100644 --- a/docs/en/28-releases/01-tdengine.md +++ b/docs/en/28-releases/01-tdengine.md @@ -10,6 +10,10 @@ For TDengine 2.x installation packages by version, please visit [here](https://w import Release from "/components/ReleaseV3"; +## 3.0.4.1 + + + ## 3.0.4.0 diff --git a/docs/en/28-releases/02-tools.md b/docs/en/28-releases/02-tools.md index e8f7a54566..9f8dbfee7e 100644 --- a/docs/en/28-releases/02-tools.md +++ b/docs/en/28-releases/02-tools.md @@ -10,6 +10,10 @@ For other historical version installers, please visit [here](https://www.taosdat import Release from "/components/ReleaseV3"; +## 2.5.0 + + + ## 2.4.12 diff --git a/docs/zh/28-releases/01-tdengine.md b/docs/zh/28-releases/01-tdengine.md index 0974289c1f..bea0adfa82 100644 --- a/docs/zh/28-releases/01-tdengine.md +++ b/docs/zh/28-releases/01-tdengine.md @@ -10,6 +10,10 @@ TDengine 2.x 各版本安装包请访问[这里](https://www.taosdata.com/all-do import Release from "/components/ReleaseV3"; +## 3.0.4.1 + + + ## 3.0.4.0 diff --git a/docs/zh/28-releases/02-tools.md b/docs/zh/28-releases/02-tools.md index 78926555f1..8f06b3b7d6 100644 --- a/docs/zh/28-releases/02-tools.md +++ b/docs/zh/28-releases/02-tools.md @@ -9,6 +9,9 @@ taosTools 各版本安装包下载链接如下: 其他历史版本安装包请访问[这里](https://www.taosdata.com/all-downloads) import Release from "/components/ReleaseV3"; +## 2.5.0 + + ## 2.4.12 From 801dcafd4b1e4e145a3e5fb15a650853662a1a2d Mon Sep 17 00:00:00 2001 From: xiaolei li <85657333+xleili@users.noreply.github.com> Date: Thu, 27 Apr 2023 17:20:10 +0800 Subject: [PATCH 085/156] docs: release 3.0.4.1 zh (#21112) * release: upgrade default version to 3.0.4.1 * docs: release 3.0.4.1 * fix: zh doc build failed --- docs/zh/28-releases/02-tools.md | 4 ++++ 1 file changed, 4 insertions(+) diff --git a/docs/zh/28-releases/02-tools.md b/docs/zh/28-releases/02-tools.md index 8f06b3b7d6..32f310127b 100644 --- a/docs/zh/28-releases/02-tools.md +++ b/docs/zh/28-releases/02-tools.md @@ -13,6 +13,10 @@ import Release from "/components/ReleaseV3"; +## 2.5.0 + + + ## 2.4.12 From 82a053e77e17ec0ea47c3c5a0403e4a4dd020692 Mon Sep 17 00:00:00 2001 From: xiaolei li <85657333+xleili@users.noreply.github.com> Date: Thu, 27 Apr 2023 17:37:41 +0800 Subject: [PATCH 086/156] docs: fix build zh release history failed (#21113) --- docs/zh/28-releases/02-tools.md | 1 + 1 file changed, 1 insertion(+) diff --git a/docs/zh/28-releases/02-tools.md b/docs/zh/28-releases/02-tools.md index 32f310127b..e13ec68c2e 100644 --- a/docs/zh/28-releases/02-tools.md +++ b/docs/zh/28-releases/02-tools.md @@ -9,6 +9,7 @@ taosTools 各版本安装包下载链接如下: 其他历史版本安装包请访问[这里](https://www.taosdata.com/all-downloads) import Release from "/components/ReleaseV3"; + ## 2.5.0 From 0cb3ae0efba05482bbe4dadf29456bfbfe6297dd Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 28 Apr 2023 08:10:29 +0800 Subject: [PATCH 087/156] container_build/jemalloc: disable temporarily for CI --- tests/parallel_test/container_build.sh | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/tests/parallel_test/container_build.sh b/tests/parallel_test/container_build.sh index 80236cf604..0fc29c241b 100755 --- a/tests/parallel_test/container_build.sh +++ b/tests/parallel_test/container_build.sh @@ -68,7 +68,7 @@ docker run \ -v ${REP_REAL_PATH}/community/contrib/libuv/:${REP_DIR}/community/contrib/libuv \ -v ${REP_REAL_PATH}/community/contrib/lz4/:${REP_DIR}/community/contrib/lz4 \ -v ${REP_REAL_PATH}/community/contrib/zlib/:${REP_DIR}/community/contrib/zlib \ - --rm --ulimit core=-1 taos_test:v1.0 sh -c "pip uninstall taospy -y;pip3 install taospy==2.7.2;cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true -DBUILD_TEST=true -DWEBSOCKET=true -DBUILD_TAOSX=true -DJEMALLOC_ENABLED=true;make -j || exit 1" + --rm --ulimit core=-1 taos_test:v1.0 sh -c "pip uninstall taospy -y;pip3 install taospy==2.7.2;cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true -DBUILD_TEST=true -DWEBSOCKET=true -DBUILD_TAOSX=true -DJEMALLOC_ENABLED=0;make -j || exit 1" # -v ${REP_REAL_PATH}/community/contrib/jemalloc/:${REP_DIR}/community/contrib/jemalloc \ if [[ -d ${WORKDIR}/debugNoSan ]] ;then @@ -97,7 +97,7 @@ docker run \ -v ${REP_REAL_PATH}/community/contrib/lz4/:${REP_DIR}/community/contrib/lz4 \ -v ${REP_REAL_PATH}/community/contrib/zlib/:${REP_DIR}/community/contrib/zlib \ -v ${REP_REAL_PATH}/community/contrib/jemalloc/:${REP_DIR}/community/contrib/jemalloc \ - --rm --ulimit core=-1 taos_test:v1.0 sh -c "pip uninstall taospy -y;pip3 install taospy==2.7.2;cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true -DBUILD_TEST=true -DWEBSOCKET=true -DBUILD_SANITIZER=1 -DTOOLS_SANITIZE=true -DTOOLS_BUILD_TYPE=Debug -DBUILD_TAOSX=true -DJEMALLOC_ENABLED=true;make -j || exit 1 " + --rm --ulimit core=-1 taos_test:v1.0 sh -c "pip uninstall taospy -y;pip3 install taospy==2.7.2;cd $REP_DIR;rm -rf debug;mkdir -p debug;cd debug;cmake .. -DBUILD_HTTP=false -DBUILD_TOOLS=true -DBUILD_TEST=true -DWEBSOCKET=true -DBUILD_SANITIZER=1 -DTOOLS_SANITIZE=true -DTOOLS_BUILD_TYPE=Debug -DBUILD_TAOSX=true -DJEMALLOC_ENABLED=0;make -j || exit 1 " mv ${REP_REAL_PATH}/debug ${WORKDIR}/debugSan From bac5bbc7ab3ca40841cd085998b35b07c72fd974 Mon Sep 17 00:00:00 2001 From: huolibo Date: Fri, 28 Apr 2023 11:43:53 +0800 Subject: [PATCH 088/156] docs(driver): jdbc error code (#21114) --- docs/en/14-reference/03-connector/04-java.mdx | 59 +++++++++-------- docs/zh/08-connector/14-java.mdx | 65 ++++++++++--------- 2 files changed, 67 insertions(+), 57 deletions(-) diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx index 85d79ef0e0..2847bf20f8 100644 --- a/docs/en/14-reference/03-connector/04-java.mdx +++ b/docs/en/14-reference/03-connector/04-java.mdx @@ -40,19 +40,19 @@ Please refer to [version support list](/reference/connector#version-support) TDengine currently supports timestamp, number, character, Boolean type, and the corresponding type conversion with Java is as follows: -| TDengine DataType | JDBCType | -| ----------------- | ---------------------------------- | -| TIMESTAMP | java.sql.Timestamp | -| INT | java.lang.Integer | -| BIGINT | java.lang.Long | -| FLOAT | java.lang.Float | -| DOUBLE | java.lang.Double | -| SMALLINT | java.lang.Short | -| TINYINT | java.lang.Byte | -| BOOL | java.lang.Boolean | -| BINARY | byte array | -| NCHAR | java.lang.String | -| JSON | java.lang.String | +| TDengine DataType | JDBCType | +| ----------------- | ------------------ | +| TIMESTAMP | java.sql.Timestamp | +| INT | java.lang.Integer | +| BIGINT | java.lang.Long | +| FLOAT | java.lang.Float | +| DOUBLE | java.lang.Double | +| SMALLINT | java.lang.Short | +| TINYINT | java.lang.Byte | +| BOOL | java.lang.Boolean | +| BINARY | byte array | +| NCHAR | java.lang.String | +| JSON | java.lang.String | **Note**: Only TAG supports JSON types @@ -350,7 +350,12 @@ try (Statement statement = connection.createStatement()) { } ``` -There are three types of error codes that the JDBC connector can report: - Error code of the JDBC driver itself (error code between 0x2301 and 0x2350), - Error code of the native connection method (error code between 0x2351 and 0x2400), and - Error code of other TDengine function modules. +There are four types of error codes that the JDBC connector can report: + +- Error code of the JDBC driver itself (error code between 0x2301 and 0x2350), +- Error code of the native connection method (error code between 0x2351 and 0x2360) +- Error code of the consumer method (error code between 0x2371 and 0x2380) +- Error code of other TDengine function modules. For specific error codes, please refer to. @@ -965,17 +970,17 @@ The source code of the sample application is under `TDengine/examples/JDBC`: ## Recent update logs -| taos-jdbcdriver version | major changes | -| :---------------------: | :--------------------------------------------: | -| 3.1.0 | JDBC REST connection supports subscription over WebSocket | -| 3.0.1 - 3.0.4 | fix the resultSet data is parsed incorrectly sometimes. 3.0.1 is compiled on JDK 11, you are advised to use other version in the JDK 8 environment | -| 3.0.0 | Support for TDengine 3.0 | -| 2.0.42 | fix wasNull interface return value in WebSocket connection | -| 2.0.41 | fix decode method of username and password in REST connection | -| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters | -| 2.0.38 | JDBC REST connections add bulk pull function | -| 2.0.37 | Support json tags | -| 2.0.36 | Support schemaless writing | +| taos-jdbcdriver version | major changes | +| :---------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------: | +| 3.1.0 | JDBC REST connection supports subscription over WebSocket | +| 3.0.1 - 3.0.4 | fix the resultSet data is parsed incorrectly sometimes. 3.0.1 is compiled on JDK 11, you are advised to use other version in the JDK 8 environment | +| 3.0.0 | Support for TDengine 3.0 | +| 2.0.42 | fix wasNull interface return value in WebSocket connection | +| 2.0.41 | fix decode method of username and password in REST connection | +| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters | +| 2.0.38 | JDBC REST connections add bulk pull function | +| 2.0.37 | Support json tags | +| 2.0.36 | Support schemaless writing | ## Frequently Asked Questions @@ -999,9 +1004,9 @@ The source code of the sample application is under `TDengine/examples/JDBC`: 4. java.lang.NoSuchMethodError: setByteArray - **Cause**: taos-jbdcdriver 3.* only supports TDengine 3.0 and later. + **Cause**: taos-jbdcdriver 3.\* only supports TDengine 3.0 and later. - **Solution**: Use taos-jdbcdriver 2.* with your TDengine 2.* deployment. + **Solution**: Use taos-jdbcdriver 2.\* with your TDengine 2.\* deployment. 5. java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer; ... taos-jdbcdriver-3.0.1.jar diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index 8a9bba7fe7..bf89b1df83 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -40,19 +40,19 @@ REST 连接支持所有能运行 Java 的平台。 TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下: -| TDengine DataType | JDBCType | -| ----------------- | ---------------------------------- | -| TIMESTAMP | java.sql.Timestamp | -| INT | java.lang.Integer | -| BIGINT | java.lang.Long | -| FLOAT | java.lang.Float | -| DOUBLE | java.lang.Double | -| SMALLINT | java.lang.Short | -| TINYINT | java.lang.Byte | -| BOOL | java.lang.Boolean | -| BINARY | byte array | -| NCHAR | java.lang.String | -| JSON | java.lang.String | +| TDengine DataType | JDBCType | +| ----------------- | ------------------ | +| TIMESTAMP | java.sql.Timestamp | +| INT | java.lang.Integer | +| BIGINT | java.lang.Long | +| FLOAT | java.lang.Float | +| DOUBLE | java.lang.Double | +| SMALLINT | java.lang.Short | +| TINYINT | java.lang.Byte | +| BOOL | java.lang.Boolean | +| BINARY | byte array | +| NCHAR | java.lang.String | +| JSON | java.lang.String | **注意**:JSON 类型仅在 tag 中支持。 @@ -97,7 +97,7 @@ cd taos-connector-jdbc mvn clean install -Dmaven.test.skip=true ``` -编译后,在 target 目录下会产生 taos-jdbcdriver-3.0.*-dist.jar 的 jar 包,并自动将编译的 jar 文件放在本地的 Maven 仓库中。 +编译后,在 target 目录下会产生 taos-jdbcdriver-3.0.\*-dist.jar 的 jar 包,并自动将编译的 jar 文件放在本地的 Maven 仓库中。
@@ -272,7 +272,7 @@ properties 中的配置参数如下: - TSDBDriver.HTTP_SOCKET_TIMEOUT: socket 超时时间,单位 ms,默认值为 5000。仅在 REST 连接且 batchfetch 设置为 false 时生效。 - TSDBDriver.PROPERTY_KEY_MESSAGE_WAIT_TIMEOUT: 消息超时时间, 单位 ms, 默认值为 3000。 仅在 REST 连接且 batchfetch 设置为 true 时生效。 - TSDBDriver.PROPERTY_KEY_USE_SSL: 连接中是否使用 SSL。仅在 REST 连接时生效。 -此外对 JDBC 原生连接,通过指定 URL 和 Properties 还可以指定其他参数,比如日志级别、SQL 长度等。更多详细配置请参考[客户端配置](/reference/config/#仅客户端适用)。 + 此外对 JDBC 原生连接,通过指定 URL 和 Properties 还可以指定其他参数,比如日志级别、SQL 长度等。更多详细配置请参考[客户端配置](/reference/config/#仅客户端适用)。 ### 配置参数的优先级 @@ -353,7 +353,12 @@ try (Statement statement = connection.createStatement()) { } ``` -JDBC 连接器可能报错的错误码包括 3 种:JDBC driver 本身的报错(错误码在 0x2301 到 0x2350 之间),原生连接方法的报错(错误码在 0x2351 到 0x2400 之间),TDengine 其他功能模块的报错。 +JDBC 连接器可能报错的错误码包括 4 种: + +- JDBC driver 本身的报错(错误码在 0x2301 到 0x2350 之间) +- 原生连接方法的报错(错误码在 0x2351 到 0x2360 之间) +- 数据订阅的报错(错误码在 0x2371 到 0x2380 之间) +- TDengine 其他功能模块的报错。 具体的错误码请参考: @@ -702,7 +707,7 @@ TaosConsumer consumer = new TaosConsumer<>(config); - td.connect.type: 连接方式。jni:表示使用动态库连接的方式,ws/WebSocket:表示使用 WebSocket 进行数据通信。默认为 jni 方式。 - httpConnectTimeout:创建连接超时参数,单位 ms,默认为 5000 ms。仅在 WebSocket 连接下有效。 - messageWaitTimeout:数据传输超时参数,单位 ms,默认为 10000 ms。仅在 WebSocket 连接下有效。 -其他参数请参考:[Consumer 参数列表](../../../develop/tmq#创建-consumer-以及consumer-group) + 其他参数请参考:[Consumer 参数列表](../../../develop/tmq#创建-consumer-以及consumer-group) #### 订阅消费数据 @@ -968,17 +973,17 @@ public static void main(String[] args) throws Exception { ## 最近更新记录 -| taos-jdbcdriver 版本 | 主要变化 | -| :------------------: | :----------------------------: | -| 3.1.0 | WebSocket 连接支持订阅功能 | -| 3.0.1 - 3.0.4 | 修复一些情况下结果集数据解析错误的问题。3.0.1 在 JDK 11 环境编译,JDK 8 环境下建议使用其他版本 | -| 3.0.0 | 支持 TDengine 3.0 | -| 2.0.42 | 修在 WebSocket 连接中 wasNull 接口返回值 | -| 2.0.41 | 修正 REST 连接中用户名和密码转码方式 | -| 2.0.39 - 2.0.40 | 增加 REST 连接/请求 超时设置 | -| 2.0.38 | JDBC REST 连接增加批量拉取功能 | -| 2.0.37 | 增加对 json tag 支持 | -| 2.0.36 | 增加对 schemaless 写入支持 | +| taos-jdbcdriver 版本 | 主要变化 | +| :------------------: | :--------------------------------------------------------------------------------------------: | +| 3.1.0 | WebSocket 连接支持订阅功能 | +| 3.0.1 - 3.0.4 | 修复一些情况下结果集数据解析错误的问题。3.0.1 在 JDK 11 环境编译,JDK 8 环境下建议使用其他版本 | +| 3.0.0 | 支持 TDengine 3.0 | +| 2.0.42 | 修在 WebSocket 连接中 wasNull 接口返回值 | +| 2.0.41 | 修正 REST 连接中用户名和密码转码方式 | +| 2.0.39 - 2.0.40 | 增加 REST 连接/请求 超时设置 | +| 2.0.38 | JDBC REST 连接增加批量拉取功能 | +| 2.0.37 | 增加对 json tag 支持 | +| 2.0.36 | 增加对 schemaless 写入支持 | ## 常见问题 @@ -1002,9 +1007,9 @@ public static void main(String[] args) throws Exception { 4. java.lang.NoSuchMethodError: setByteArray -**原因**:taos-jdbcdriver 3.* 版本仅支持 TDengine 3.0 及以上版本。 +**原因**:taos-jdbcdriver 3.\* 版本仅支持 TDengine 3.0 及以上版本。 -**解决方法**: 使用 taos-jdbcdriver 2.* 版本连接 TDengine 2.* 版本。 +**解决方法**: 使用 taos-jdbcdriver 2.\* 版本连接 TDengine 2.\* 版本。 5. java.lang.NoSuchMethodError: java.nio.ByteBuffer.position(I)Ljava/nio/ByteBuffer; ... taos-jdbcdriver-3.0.1.jar From baf098267f4083fb53b56f8c9c9dbf51016f0637 Mon Sep 17 00:00:00 2001 From: Shuduo Sang Date: Fri, 28 Apr 2023 14:49:35 +0800 Subject: [PATCH 089/156] docs: update range for numOfCommitThreads (#21122) --- docs/en/14-reference/12-config/index.md | 11 +++++++++++ docs/zh/14-reference/12-config/index.md | 3 ++- 2 files changed, 13 insertions(+), 1 deletion(-) diff --git a/docs/en/14-reference/12-config/index.md b/docs/en/14-reference/12-config/index.md index 2e6f5ec1e2..715a68696b 100644 --- a/docs/en/14-reference/12-config/index.md +++ b/docs/en/14-reference/12-config/index.md @@ -358,6 +358,17 @@ The charset that takes effect is UTF-8. | Value Range | 0-4096 | | Default Value | 2x the CPU cores | +## Performance Tuning + +### numOfCommitThreads + +| Attribute | Description | +| ------------- | ----------------------------------- | +| Applicable | Server Only | +| Meaning | Maximum number of threads to commit | +| Value Range | 0-1024 | +| Default Value | | + ## Log Parameters ### logDir diff --git a/docs/zh/14-reference/12-config/index.md b/docs/zh/14-reference/12-config/index.md index fe23684fde..c1d23df409 100644 --- a/docs/zh/14-reference/12-config/index.md +++ b/docs/zh/14-reference/12-config/index.md @@ -356,7 +356,7 @@ charset 的有效值是 UTF-8。 | 适用范围 | 仅服务端适用 | | 含义 | dnode 支持的最大 vnode 数目 | | 取值范围 | 0-4096 | -| 缺省值 | CPU 核数的 2 倍 | +| 缺省值 | CPU 核数的 2 倍 | ## 性能调优 @@ -366,6 +366,7 @@ charset 的有效值是 UTF-8。 | -------- | ---------------------- | | 适用范围 | 仅服务端适用 | | 含义 | 设置写入线程的最大数量 | +| 取值范围 | 0-1024 | | 缺省值 | | ## 日志相关 From a7a73ace4a8634b8a5daa5e6e034485215d8d7cc Mon Sep 17 00:00:00 2001 From: wangmm0220 Date: Fri, 28 Apr 2023 15:26:24 +0800 Subject: [PATCH 090/156] fix:memory leak in schemaless if key dump --- source/client/src/clientSml.c | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 011a1b3111..9acc72fdf1 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -1171,6 +1171,7 @@ static int32_t smlPushCols(SArray *colsArray, SArray *cols) { SSmlKv *kv = (SSmlKv *)taosArrayGet(cols, i); taosHashPut(kvHash, kv->key, kv->keyLen, &kv, POINTER_BYTES); if (terrno == TSDB_CODE_DUP_KEY) { + taosHashCleanup(kvHash); return terrno; } } @@ -1244,12 +1245,12 @@ static int32_t smlParseLineBottom(SSmlHandle *info) { uDebug("SML:0x%" PRIx64 " smlParseLineBottom add meta, format:%d, linenum:%d", info->id, info->dataFormat, info->lineNum); SSmlSTableMeta *meta = smlBuildSTableMeta(info->dataFormat); + taosHashPut(info->superTables, elements->measure, elements->measureLen, &meta, POINTER_BYTES); smlInsertMeta(meta->tagHash, meta->tags, tinfo->tags); if (terrno == TSDB_CODE_DUP_KEY) { return terrno; } smlInsertMeta(meta->colHash, meta->cols, elements->colArray); - taosHashPut(info->superTables, elements->measure, elements->measureLen, &meta, POINTER_BYTES); } } uDebug("SML:0x%" PRIx64 " smlParseLineBottom end, format:%d, linenum:%d", info->id, info->dataFormat, info->lineNum); From d4929041886d2da0f8d3979d57ae581a41030303 Mon Sep 17 00:00:00 2001 From: kailixu Date: Fri, 28 Apr 2023 16:40:26 +0800 Subject: [PATCH 091/156] chore: error process and test cases --- source/client/src/clientSml.c | 71 +++++++++----- source/client/src/clientSmlJson.c | 2 +- .../1-insert/influxdb_line_taosc_insert.py | 98 +++++++++++-------- 3 files changed, 105 insertions(+), 66 deletions(-) diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 4d0f5365f3..45f0def157 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -560,9 +560,14 @@ static int32_t smlGenerateSchemaAction(SSchema *colField, SHashObj *colHash, SSm static int32_t smlFindNearestPowerOf2(int32_t length, uint8_t type) { int32_t result = 1; - while (result <= length) { - result *= 2; + if (length < 1024) { + while (result <= length) { + result <<= 1; + } + } else { + result = length; } + if (type == TSDB_DATA_TYPE_BINARY && result > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { result = TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE; } else if (type == TSDB_DATA_TYPE_NCHAR && result > (TSDB_MAX_NCHAR_LEN - VARSTR_HEADER_SIZE) / TSDB_NCHAR_SIZE) { @@ -624,10 +629,6 @@ static int32_t getBytes(uint8_t type, int32_t length) { static int32_t smlBuildFieldsList(SSmlHandle *info, SSchema *schemaField, SHashObj *schemaHash, SArray *cols, SArray *results, int32_t numOfCols, bool isTag) { - bool check = numOfCols == 0 ? true : false; - int32_t maxLen = isTag ? TSDB_MAX_TAGS_LEN : TSDB_MAX_BYTES_PER_ROW; - - int32_t len = 0; for (int j = 0; j < taosArrayGetSize(cols); ++j) { SSmlKv *kv = (SSmlKv *)taosArrayGet(cols, j); ESchemaAction action = SCHEMA_ACTION_NULL; @@ -639,15 +640,6 @@ static int32_t smlBuildFieldsList(SSmlHandle *info, SSchema *schemaField, SHashO SField field = {0}; field.type = kv->type; field.bytes = getBytes(kv->type, kv->length); - if (check) { - len += field.bytes; - if (len > maxLen) { - code = TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; - uError("smlBuildFieldsList add %s failed since %s", isTag ? "tag" : "col", tstrerror(code)); - return code; - } - } - memcpy(field.name, kv->key, kv->keyLen); taosArrayPush(results, &field); } else if (action == SCHEMA_ACTION_CHANGE_COLUMN_SIZE || action == SCHEMA_ACTION_CHANGE_TAG_SIZE) { @@ -660,16 +652,19 @@ static int32_t smlBuildFieldsList(SSmlHandle *info, SSchema *schemaField, SHashO if (isTag) newIndex -= numOfCols; SField *field = (SField *)taosArrayGet(results, newIndex); field->bytes = getBytes(kv->type, kv->length); - if (check) { - len += (kv->length - schemaField[*index].bytes + VARSTR_HEADER_SIZE); - if (len > maxLen) { - code = TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; - uError("smlBuildFieldsList change %s failed since %s", isTag ? "tag" : "col", tstrerror(code)); - return code; - } - } } } + + int32_t maxLen = isTag ? TSDB_MAX_TAGS_LEN : TSDB_MAX_BYTES_PER_ROW; + int32_t len = 0; + for (int j = 0; j < taosArrayGetSize(results); ++j) { + SField *field = taosArrayGet(results, j); + len += field->bytes; + } + if (len > maxLen) { + return isTag ? TSDB_CODE_PAR_INVALID_TAGS_LENGTH : TSDB_CODE_PAR_INVALID_ROW_LENGTH; + } + return TSDB_CODE_SUCCESS; } @@ -867,6 +862,21 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) { goto end; } + if (taosArrayGetSize(pTags) + pTableMeta->tableInfo.numOfColumns > TSDB_MAX_COLUMNS) { + uError("SML:0x%" PRIx64 " too many columns than 4096", info->id); + code = TSDB_CODE_PAR_TOO_MANY_COLUMNS; + taosArrayDestroy(pColumns); + taosArrayDestroy(pTags); + goto end; + } + if (taosArrayGetSize(pTags) > TSDB_MAX_TAGS) { + uError("SML:0x%" PRIx64 " too many tags than 128", info->id); + code = TSDB_CODE_PAR_INVALID_TAGS_NUM; + taosArrayDestroy(pColumns); + taosArrayDestroy(pTags); + goto end; + } + code = smlSendMetaMsg(info, &pName, pColumns, pTags, pTableMeta, action); if (code != TSDB_CODE_SUCCESS) { uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, pName.tname); @@ -923,6 +933,14 @@ static int32_t smlModifyDBSchemas(SSmlHandle *info) { goto end; } + if (taosArrayGetSize(pColumns) + pTableMeta->tableInfo.numOfTags > TSDB_MAX_COLUMNS) { + uError("SML:0x%" PRIx64 " too many columns than 4096", info->id); + code = TSDB_CODE_PAR_TOO_MANY_COLUMNS; + taosArrayDestroy(pColumns); + taosArrayDestroy(pTags); + goto end; + } + code = smlSendMetaMsg(info, &pName, pColumns, pTags, pTableMeta, action); if (code != TSDB_CODE_SUCCESS) { uError("SML:0x%" PRIx64 " smlSendMetaMsg failed. can not create %s", info->id, pName.tname); @@ -1527,8 +1545,11 @@ static int smlProcess(SSmlHandle *info, char *lines[], char *rawLine, char *rawL do { code = smlModifyDBSchemas(info); - if (code == 0) break; - taosMsleep(500); + if (code == 0 || code == TSDB_CODE_SML_INVALID_DATA || code == TSDB_CODE_PAR_TOO_MANY_COLUMNS || + code == TSDB_CODE_PAR_INVALID_TAGS_NUM || code == TSDB_CODE_PAR_INVALID_ROW_LENGTH || + code == TSDB_CODE_PAR_INVALID_TAGS_LENGTH) + break; + taosMsleep(100); uInfo("SML:0x%" PRIx64 " smlModifyDBSchemas retry code:%s, times:%d", info->id, tstrerror(code), retryNum); } while (retryNum++ < taosHashGetSize(info->superTables) * MAX_RETRY_TIMES); diff --git a/source/client/src/clientSmlJson.c b/source/client/src/clientSmlJson.c index c3a6e15697..b0ae316031 100644 --- a/source/client/src/clientSmlJson.c +++ b/source/client/src/clientSmlJson.c @@ -575,7 +575,7 @@ static int32_t smlConvertJSONString(SSmlKv *pVal, char *typeStr, cJSON *value) { uError("OTD:invalid type(%s) for JSON String", typeStr); return TSDB_CODE_TSC_INVALID_JSON_TYPE; } - pVal->length = (uint16_t)strlen(value->valuestring); + pVal->length = strlen(value->valuestring); if (pVal->type == TSDB_DATA_TYPE_BINARY && pVal->length > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { return TSDB_CODE_PAR_INVALID_VAR_COLUMN_LEN; diff --git a/tests/system-test/1-insert/influxdb_line_taosc_insert.py b/tests/system-test/1-insert/influxdb_line_taosc_insert.py index 667ce3239c..46aec2909a 100644 --- a/tests/system-test/1-insert/influxdb_line_taosc_insert.py +++ b/tests/system-test/1-insert/influxdb_line_taosc_insert.py @@ -439,7 +439,7 @@ class TDTestCase: for input_sql in [self.genLongSql(127, 1)[0], self.genLongSql(1, 4093)[0]]: tdCom.cleanTb(dbname="test") self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - for input_sql in [self.genLongSql(129, 1)[0], self.genLongSql(1, 4095)[0]]: + for input_sql in [self.genLongSql(128, 1)[0], self.genLongSql(1, 4094)[0]]: tdCom.cleanTb(dbname="test") try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) @@ -578,10 +578,16 @@ class TDTestCase: # binary stb_name = tdCom.getLongName(7, "letters") - input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(16374, "letters")}" c0=f 1626006833639000000' + input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(4091, "letters")}" c0=f 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(16375, "letters")}" c0=f 1626006833639000000' + input_sql = f'{stb_name},t0="a",t1="{tdCom.getLongName(4088, "letters")}" c0=f 1626006833639000000' + try: + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + except SchemalessError as err: + tdSql.checkNotEqual(err.errno, 0) + + input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(4092, "letters")}" c0=f 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) except SchemalessError as err: @@ -590,10 +596,10 @@ class TDTestCase: # nchar # * legal nchar could not be larger than 16374/4 stb_name = tdCom.getLongName(7, "letters") - input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}" c0=f 1626006833639000000' + input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4090, "letters")}" c0=f 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4094, "letters")}" c0=f 1626006833639000000' + input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4091, "letters")}" c0=f 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) except SchemalessError as err: @@ -674,9 +680,15 @@ class TDTestCase: # binary stb_name = tdCom.getLongName(7, "letters") - input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' + input_sql = f'{stb_name},t0=t c0=1i32,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + input_sql = f'{stb_name},t0=t c0=1i32,c1="{tdCom.getLongName(65517, "letters")},c2=f" 1626006833639000000' + try: + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + except SchemalessError as err: + tdSql.checkNotEqual(err.errno, 0) + input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(65518, "letters")}" 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) @@ -686,10 +698,10 @@ class TDTestCase: # nchar # * legal nchar could not be larger than 16374/4 stb_name = tdCom.getLongName(7, "letters") - input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(16379, "letters")}" 1626006833639000000' + input_sql = f'{stb_name},t0=t c0=1i32,c1=L"{tdCom.getLongName(16379, "letters")}",c2=f 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(16380, "letters")}" 1626006833639000000' + input_sql = f'{stb_name},t0=t c0=1i32,c1=L"{tdCom.getLongName(16380, "letters")}",c2=1i16 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) except SchemalessError as err: @@ -896,52 +908,57 @@ class TDTestCase: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) # * every binary and nchar must be length+2, so here is two tag, max length could not larger than 16384-2*2 - # input_sql = f'{stb_name}, t1="{tdCom.getLongName(4095, "letters")}"" c0=f 1626006833639000000' - # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - - # tdSql.query(f"select * from {stb_name}") - # tdSql.checkRows(2) - # input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(4084, "letters")}",t2="{tdCom.getLongName(6, "letters")}" c0=f 1626006833639000000' - # try: - # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - # raise Exception("should not reach here") - # except SchemalessError as err: - # tdSql.checkNotEqual(err.errno, 0) - # tdSql.query(f"select * from {stb_name}") - # tdSql.checkRows(2) - - # # * check col,col+ts max in describe ---> 16143 - input_sql = f'{stb_name},t0=t c0=i32,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' + input_sql = f'{stb_name}, t0=f,t1="{tdCom.getLongName(4093, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - input_sql = f'{stb_name},t0=t c0=i32,c1="{tdCom.getLongName(65517, "letters")},c2=f" 1626006833639000000' - self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + tdSql.query(f"select * from {stb_name}") + tdSql.checkRows(2) + input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(4084, "letters")}",t2="{tdCom.getLongName(6, "letters")}" c0=f 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + raise Exception("should not reach here") except SchemalessError as err: tdSql.checkNotEqual(err.errno, 0) + tdSql.query(f"select * from {stb_name}") + tdSql.checkRows(2) - input_sql = f'{stb_name},t0=t c0=i16,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")}" 1626006833639000000' - self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - input_sql = f'{stb_name},t0=t c0=i16,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")},c3=t" 1626006833639000000' + stb_name = tdCom.getLongName(8, "letters") + # # * check col,col+ts max in describe ---> 16143 + input_sql = f'{stb_name},t0=t c0=1i32,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + + input_sql = f'{stb_name},t0=t c0=1i32,c1="{tdCom.getLongName(65517, "letters")}",c2=f 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) except SchemalessError as err: tdSql.checkNotEqual(err.errno, 0) tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(3) + tdSql.checkRows(1) + + + stb_name = tdCom.getLongName(9, "letters") + input_sql = f'{stb_name},t0=t c0=1i16,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")}" 1626006833639000000' + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + + input_sql = f'{stb_name},t0=t c0=1i16,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")},c3=t" 1626006833639000000' + try: + self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) + except SchemalessError as err: + tdSql.checkNotEqual(err.errno, 0) + + tdSql.query(f"select * from {stb_name}") + tdSql.checkRows(1) + input_sql = f'{stb_name},t0=t c0=f,c1="{tdCom.getLongName(16374, "letters")}",c2="{tdCom.getLongName(16374, "letters")}",c3="{tdCom.getLongName(16374, "letters")}",c4="{tdCom.getLongName(13, "letters")}" 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) except SchemalessError as err: tdSql.checkNotEqual(err.errno, 0) - tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(3) - # * tag nchar max is 16374/4, col+ts nchar max 49151 + + # * tag nchar max is 16384/4, col+ts nchar max 65531 def tagColNcharMaxLengthCheckCase(self): """ check nchar length limit @@ -952,7 +969,7 @@ class TDTestCase: input_sql = f'{stb_name},id="{tb_name}",t0=t c0=f 1626006833639000000' code = self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - # * legal nchar could not be larger than 16374/4 + # * legal tag nchar could not be larger than 16384/4 # input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(1, "letters")}" c0=f 1626006833639000000' # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) # tdSql.query(f"select * from {stb_name}") @@ -968,14 +985,15 @@ class TDTestCase: input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(4093, "letters")}",c2=L"{tdCom.getLongName(4093, "letters")}",c3=L"{tdCom.getLongName(4093, "letters")}",c4=L"{tdCom.getLongName(4, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(3) + tdSql.checkRows(2) + input_sql = f'{stb_name},t0=t c0=f,c1=L"{tdCom.getLongName(4093, "letters")}",c2=L"{tdCom.getLongName(4093, "letters")}",c3=L"{tdCom.getLongName(4093, "letters")}",c4=L"{tdCom.getLongName(5, "letters")}" 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) except SchemalessError as err: tdSql.checkNotEqual(err.errno, 0) tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(3) + tdSql.checkRows(2) def batchInsertCheckCase(self): """ @@ -1291,13 +1309,13 @@ class TDTestCase: self.idSeqCheckCase() self.idUpperCheckCase() self.noIdCheckCase() - # self.maxColTagCheckCase() + self.maxColTagCheckCase() self.idIllegalNameCheckCase() self.idStartWithNumCheckCase() self.nowTsCheckCase() self.dateFormatTsCheckCase() self.illegalTsCheckCase() - # self.tagValueLengthCheckCase() + self.tagValueLengthCheckCase() self.colValueLengthCheckCase() self.tagColIllegalValueCheckCase() self.duplicateIdTagColInsertCheckCase() @@ -1307,8 +1325,8 @@ class TDTestCase: self.tagColAddDupIDCheckCase() self.tagColAddCheckCase() self.tagMd5Check() - # self.tagColBinaryMaxLengthCheckCase() - # self.tagColNcharMaxLengthCheckCase() + self.tagColBinaryMaxLengthCheckCase() + self.tagColNcharMaxLengthCheckCase() self.batchInsertCheckCase() self.multiInsertCheckCase(10) self.batchErrorInsertCheckCase() From 0160095db1ddbe926a419bc45cdb04790e841cbb Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 28 Apr 2023 17:59:21 +0800 Subject: [PATCH 092/156] sml/memory: comment case out --- tests/parallel_test/cases.task | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/tests/parallel_test/cases.task b/tests/parallel_test/cases.task index 9f9679ae35..6be8cd2b9f 100644 --- a/tests/parallel_test/cases.task +++ b/tests/parallel_test/cases.task @@ -134,7 +134,7 @@ ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/alter_database.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/influxdb_line_taosc_insert.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/opentsdb_telnet_line_taosc_insert.py -,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/opentsdb_json_taosc_insert.py +#,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/opentsdb_json_taosc_insert.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/test_stmt_muti_insert_query.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/test_stmt_set_tbname_tag.py ,,y,system-test,./pytest.sh python3 ./test.py -f 1-insert/alter_stable.py From cef0aba54d35a54a051f2810fa7b2c49472fe297 Mon Sep 17 00:00:00 2001 From: Shuduo Sang Date: Sat, 29 Apr 2023 00:46:23 +0800 Subject: [PATCH 093/156] docs: update taosdump doc for 3.0 (#21130) * fix: taosbenchmark escape char for 3.0 * fix: json file for escape char * docs: update taosdump doc --- docs/en/14-reference/06-taosdump.md | 1 + docs/zh/14-reference/06-taosdump.md | 1 + 2 files changed, 2 insertions(+) diff --git a/docs/en/14-reference/06-taosdump.md b/docs/en/14-reference/06-taosdump.md index dcfc068a3d..7348add4bd 100644 --- a/docs/en/14-reference/06-taosdump.md +++ b/docs/en/14-reference/06-taosdump.md @@ -76,6 +76,7 @@ Usage: taosdump [OPTION...] dbname [tbname ...] -A, --all-databases Dump all databases. -D, --databases=DATABASES Dump listed databases. Use comma to separate database names. + -e, --escape-character Use escaped character for database name -N, --without-property Dump database without its properties. -s, --schemaonly Only dump table schemas. -y, --answer-yes Input yes for prompt. It will skip data file diff --git a/docs/zh/14-reference/06-taosdump.md b/docs/zh/14-reference/06-taosdump.md index 8a031d1473..8ff1287c3e 100644 --- a/docs/zh/14-reference/06-taosdump.md +++ b/docs/zh/14-reference/06-taosdump.md @@ -79,6 +79,7 @@ Usage: taosdump [OPTION...] dbname [tbname ...] -A, --all-databases Dump all databases. -D, --databases=DATABASES Dump inputted databases. Use comma to separate databases' name. + -e, --escape-character Use escaped character for database name -N, --without-property Dump database without its properties. -s, --schemaonly Only dump tables' schema. -y, --answer-yes Input yes for prompt. It will skip data file From ba344f7b95cb9f69802fdbe8079725e3f7d2d58c Mon Sep 17 00:00:00 2001 From: shenglian zhou Date: Thu, 4 May 2023 11:55:09 +0800 Subject: [PATCH 094/156] feat: add python udf docs --- docs/zh/07-develop/09-udf.md | 129 +++++++++++++++++++++++++++++++-- docs/zh/12-taos-sql/22-meta.md | 4 + docs/zh/12-taos-sql/26-udf.md | 26 +++++-- 3 files changed, 146 insertions(+), 13 deletions(-) diff --git a/docs/zh/07-develop/09-udf.md b/docs/zh/07-develop/09-udf.md index 8a8ef82009..59513a03aa 100644 --- a/docs/zh/07-develop/09-udf.md +++ b/docs/zh/07-develop/09-udf.md @@ -6,11 +6,13 @@ description: "支持用户编码的聚合函数和标量函数,在查询中嵌 在有些应用场景中,应用逻辑需要的查询无法直接使用系统内置的函数来表示。利用 UDF(User Defined Function) 功能,TDengine 可以插入用户编写的处理代码并在查询中使用它们,就能够很方便地解决特殊应用场景中的使用需求。 UDF 通常以数据表中的一列数据做为输入,同时支持以嵌套子查询的结果作为输入。 -TDengine 支持通过 C/C++ 语言进行 UDF 定义。接下来结合示例讲解 UDF 的使用方法。 - 用户可以通过 UDF 实现两类函数:标量函数和聚合函数。标量函数对每行数据输出一个值,如求绝对值 abs,正弦函数 sin,字符串拼接函数 concat 等。聚合函数对多行数据进行输出一个值,如求平均数 avg,最大值 max 等。 -实现 UDF 时,需要实现规定的接口函数 +TDengine 支持通过 C/Python 语言进行 UDF 定义。接下来结合示例讲解 UDF 的使用方法。 + +# C 语言实现UDF + +使用 C 语言实现 UDF 时,需要实现规定的接口函数 - 标量函数需要实现标量接口函数 scalarfn 。 - 聚合函数需要实现聚合接口函数 aggfn_start , aggfn , aggfn_finish。 - 如果需要初始化,实现 udf_init;如果需要清理工作,实现udf_destroy。 @@ -213,9 +215,6 @@ gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so 这样就准备好了动态链接库 libbitand.so 文件,可以供后文创建 UDF 时使用了。为了保证可靠的系统运行,编译器 GCC 推荐使用 7.5 及以上版本。 -## 管理和使用UDF -编译好的UDF,还需要将其加入到系统才能被正常的SQL调用。关于如何管理和使用UDF,参见[UDF使用说明](../12-taos-sql/26-udf.md) - ## 示例代码 ### 标量函数示例 [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c) @@ -268,4 +267,120 @@ select max_vol(vol1,vol2,vol3,deviceid) from battery; {{#include tests/script/sh/max_vol.c}} ``` - \ No newline at end of file + + +# Python 语言实现UDF +使用 Python 语言实现 UDF 时,需要实现规定的接口函数 +- 标量函数需要实现标量接口函数 process 。 +- 聚合函数需要实现聚合接口函数 start ,reduce ,finish。 +- 如果需要初始化,实现 init;如果需要清理工作,实现 destroy。 + +## 实现标量函数 + +标量函数实现模版如下 +```Python +def init(): + # initialization +def destroy(): + # destroy +def process(input: datablock) -> tuple[output_type]: + # process input datablock, + # datablock.data(row, col) is to access the python object in location(row,col) + # return tuple object consisted of object of type outputtype +``` + +## 实现聚合函数 + +聚合函数实现模版如下 +```Python +def init(): + #initialization +def destroy(): + #destroy +def start() -> bytes: + #return serialize(init_state) +def reduce(inputs: datablock, buf: bytes) -> bytes + # deserialize buf to state + # reduce the inputs and state into new_state. + # use inputs.data(i,j) to access python ojbect of location(i,j) + # serialize new_state into new_state_bytes + return new_state_bytes +def finish(buf: bytes) -> output_type: + #return obj of type outputtype +``` + +## 接口函数定义 + +### 标量接口函数 +```Python +def process(input: datablock) -> tuple[output_type]: +``` +- input:datablock 类似二维矩阵,通过成员方法 data(row,col)返回位于 row 行,col 列的 python 对象 +- 返回值是一个 Python 对象元组,每个元素类型为输出类型。 + +### 聚合接口函数 +```Python +def start() -> bytes: +def reduce(inputs: datablock, buf: bytes) -> bytes +def finish(buf: bytes) -> output_type: +``` + +首先调用 start 生成最初结果 buffer,然后输入数据会被分为多个行数据块,对每个数据块 inputs 和当前中间结果 buf 调用 reduce,得到新的中间结果,最后再调用 finish 从中间结果 buf 产生最终输出,最终输出只能含 0 或 1 条数据。 + + +### UDF 初始化和销毁 +```Python +def init() +def destroy() +``` + +其中 init 完成初始化工作。 destroy 完成清理工作。如果没有初始化工作,无需定义 init 函数。如果没有清理工作,无需定义 destroy 函数。 + +## Python数据类型和TDengine数据类型映射 +| **TDengine SQL数据类型** | **Python数据类型** | +| :-----------------------: | ------------ | +|TINYINT / SMALLINT / INT / BIGINT | int | +|TINYINT UNSIGNED / SMALLINT UNSIGNED / INT UNSIGNED / BIGINT UNSIGNED | int | +|FLOAT / DOUBLE | float | +|BOOL | bool | +|BINARY / VARCHAR / NCHAR | bytes| +|TIMESTAMP | int | +|JSON and other types | 不支持 | + +## Python UDF 环境的安装 +1. 安装 taospyudf 包。此包执行Python UDF程序。 +```bash +pip install taospyudf +lddconfig +``` +2. 如果 Python UDF 程序执行时,引用其它的包,PYTHONPATH 环境变量可以通过在 taos.cfg 的 UdfdLdLibPath 变量配置 + +## 示例代码 +### 标量函数示例 [pybitand](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pybitand.py) + +bit_add 实现多列的按位与功能。如果只有一列,返回这一列。bit_add 忽略空值。 + +
+pybitand.py + +```Python +{{#include tests/script/sh/pybitand.py}} +``` + +
+ +### 聚合函数示例 [pyl2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pyl2norm.py) + +pyl2norm 实现了输入列的所有数据的二阶范数,即对每个数据先平方,再累加求和,最后开方。 + +
+pyl2norm.py + +```c +{{#include tests/script/sh/pyl2norm.py}} +``` + +
+ +# 管理和使用UDF +编译好的UDF,还需要将其加入到系统才能被正常的SQL调用。关于如何管理和使用UDF,参见[UDF使用说明](../12-taos-sql/26-udf.md) \ No newline at end of file diff --git a/docs/zh/12-taos-sql/22-meta.md b/docs/zh/12-taos-sql/22-meta.md index 1f2e3fb7d5..7fb60b85a7 100644 --- a/docs/zh/12-taos-sql/22-meta.md +++ b/docs/zh/12-taos-sql/22-meta.md @@ -120,6 +120,10 @@ TDengine 内置了一个名为 `INFORMATION_SCHEMA` 的数据库,提供对数 | 5 | create_time | TIMESTAMP | 创建时间 | | 6 | code_len | INT | 代码长度 | | 7 | bufsize | INT | buffer 大小 | +| 8 | func_language | BINARY(31) | 自定义函数编程语言 | +| 9 | func_body | BINARY(16384) | 函数体定义 | +| 10 | func_version | INT | 函数版本号。初始版本为0,每次替换更新,版本号加1。| + ## INS_INDEXES diff --git a/docs/zh/12-taos-sql/26-udf.md b/docs/zh/12-taos-sql/26-udf.md index 7697944f9a..a45e837c8d 100644 --- a/docs/zh/12-taos-sql/26-udf.md +++ b/docs/zh/12-taos-sql/26-udf.md @@ -11,29 +11,38 @@ description: 使用 UDF 的详细指南 在创建 UDF 时,需要区分标量函数和聚合函数。如果创建时声明了错误的函数类别,则可能导致通过 SQL 指令调用函数时出错。此外,用户需要保证输入数据类型与 UDF 程序匹配,UDF 输出数据类型与 OUTPUTTYPE 匹配。 +使用 CREATE OR REPLACE FUNCTION,如果函数已经存在,会修改已有的函数属性。 + - 创建标量函数 ```sql -CREATE FUNCTION function_name AS library_path OUTPUTTYPE output_type; +CREATE [OR REPLACE] FUNCTION function_name AS library_path OUTPUTTYPE output_type [LANGUAGE 'C|Python']; ``` - function_name:标量函数未来在 SQL 中被调用时的函数名,必须与函数实现中 udf 的实际名称一致; - - library_path:包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件),这个路径需要用英文单引号或英文双引号括起来; + - LANGUAGE 'C|Python':函数编程语言,目前支持C语言和Python语言。 + - library_path:如果编程语言是C,路径是包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件)。如果编程语言是Python,路径是包含 UDF 函数实现的Python文件路径。这个路径需要用英文单引号或英文双引号括起来; - output_type:此函数计算结果的数据类型名称; - 例如,如下语句可以把 libbitand.so 创建为系统中可用的 UDF: +例如,如下语句可以把 libbitand.so 创建为系统中可用的 UDF: ```sql CREATE FUNCTION bit_and AS "/home/taos/udf_example/libbitand.so" OUTPUTTYPE INT; ``` +例如,使用以下语句可以修改已经定义的 bit_and 函数,输出类型是 BIGINT,使用Python语言实现。 + + ```sql + CREATE OR REPLACE FUNCTION bit_and AS "/home/taos/udf_example/bit_and.py" OUTPUTTYPE BIGINT LANGUAGE 'Python'; + ``` - 创建聚合函数: ```sql -CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ BUFSIZE buffer_size ]; +CREATE [OR REPLACE] AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ BUFSIZE buffer_size ] [LANGUAGE 'C|Python']; ``` - function_name:聚合函数未来在 SQL 中被调用时的函数名,必须与函数实现中 udfNormalFunc 的实际名称一致; - - library_path:包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件),这个路径需要用英文单引号或英文双引号括起来; - - output_type:此函数计算结果的数据类型,与上文中 udfNormalFunc 的 itype 参数不同,这里不是使用数字表示法,而是直接写类型名称即可; + - LANGUAGE 'C|Python':函数编程语言,目前支持C语言和Python语言。 + - library_path:如果编程语言是C,路径是包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件)。如果编程语言是Python,路径是包含 UDF 函数实现的Python文件路径。这个路径需要用英文单引号或英文双引号括起来;; + - output_type:此函数计算结果的数据类型名称; - buffer_size:中间计算结果的缓冲区大小,单位是字节。如果不使用可以不设置。 例如,如下语句可以把 libl2norm.so 创建为系统中可用的 UDF: @@ -41,6 +50,11 @@ CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ ```sql CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 8; ``` + 例如,使用以下语句可以修改已经定义的 l2norm 函数的缓冲区大小为64。 + ```sql + CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 64; + ``` + 关于如何开发自定义函数,请参考 [UDF使用说明](/develop/udf)。 ## 管理 UDF From 38f507d941cd89423bcc229c28830988f7aa855e Mon Sep 17 00:00:00 2001 From: dmchen Date: Thu, 4 May 2023 05:49:36 +0000 Subject: [PATCH 095/156] transaction multithread --- source/dnode/mnode/impl/inc/mndDef.h | 1 + source/dnode/mnode/impl/src/mndTrans.c | 12 +++++++++++- 2 files changed, 12 insertions(+), 1 deletion(-) diff --git a/source/dnode/mnode/impl/inc/mndDef.h b/source/dnode/mnode/impl/inc/mndDef.h index f547ce025d..fac574f9e9 100644 --- a/source/dnode/mnode/impl/inc/mndDef.h +++ b/source/dnode/mnode/impl/inc/mndDef.h @@ -177,6 +177,7 @@ typedef struct { SArray* pRpcArray; SRWLatch lockRpcArray; int64_t mTraceId; + TdThreadMutex mutex; } STrans; typedef struct { diff --git a/source/dnode/mnode/impl/src/mndTrans.c b/source/dnode/mnode/impl/src/mndTrans.c index 106eea0313..53dc0476cd 100644 --- a/source/dnode/mnode/impl/src/mndTrans.c +++ b/source/dnode/mnode/impl/src/mndTrans.c @@ -546,6 +546,7 @@ static void mndTransDropData(STrans *pTrans) { pTrans->param = NULL; pTrans->paramLen = 0; } + (void)taosThreadMutexDestroy(&pTrans->mutex); } static int32_t mndTransActionDelete(SSdb *pSdb, STrans *pTrans, bool callFunc) { @@ -651,6 +652,7 @@ STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnConflct conflict, pTrans->pRpcArray = taosArrayInit(1, sizeof(SRpcHandleInfo)); pTrans->mTraceId = pReq ? TRACE_GET_ROOTID(&pReq->info.traceId) : 0; taosInitRWLatch(&pTrans->lockRpcArray); + taosThreadMutexInit(&pTrans->mutex, NULL); if (pTrans->redoActions == NULL || pTrans->undoActions == NULL || pTrans->commitActions == NULL || pTrans->pRpcArray == NULL) { @@ -1307,7 +1309,13 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans) int32_t code = 0; int32_t numOfActions = taosArrayGetSize(pTrans->redoActions); if (numOfActions == 0) return code; - if (pTrans->redoActionPos >= numOfActions) return code; + + taosThreadMutexLock(&pTrans->mutex); + + if (pTrans->redoActionPos >= numOfActions) { + taosThreadMutexUnlock(&pTrans->mutex); + return code; + } mInfo("trans:%d, execute %d actions serial, current redoAction:%d", pTrans->id, numOfActions, pTrans->redoActionPos); @@ -1377,6 +1385,8 @@ static int32_t mndTransExecuteRedoActionsSerial(SMnode *pMnode, STrans *pTrans) } } + taosThreadMutexUnlock(&pTrans->mutex); + return code; } From 9f472565c994f92b44630f1a5e8c350539c5bf68 Mon Sep 17 00:00:00 2001 From: slzhou Date: Thu, 4 May 2023 13:58:23 +0800 Subject: [PATCH 096/156] enhance: finish python udf docs --- docs/zh/07-develop/09-udf.md | 46 ++++++++++++++++++------------------ 1 file changed, 23 insertions(+), 23 deletions(-) diff --git a/docs/zh/07-develop/09-udf.md b/docs/zh/07-develop/09-udf.md index 59513a03aa..8aa841db89 100644 --- a/docs/zh/07-develop/09-udf.md +++ b/docs/zh/07-develop/09-udf.md @@ -19,7 +19,7 @@ TDengine 支持通过 C/Python 语言进行 UDF 定义。接下来结合示例 接口函数的名称是 UDF 名称,或者是 UDF 名称和特定后缀(_start, _finish, _init, _destroy)的连接。列表中的scalarfn,aggfn, udf需要替换成udf函数名。 -## 实现标量函数 +## C UDF 实现标量函数 标量函数实现模板如下 ```c #include "taos.h" @@ -51,7 +51,7 @@ int32_t scalarfn_destroy() { ``` scalarfn 为函数名的占位符,需要替换成函数名,如bit_and。 -## 实现聚合函数 +## C UDF 实现聚合函数 聚合函数的实现模板如下 ```c @@ -102,7 +102,7 @@ int32_t aggfn_destroy() { ``` aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 -## 接口函数定义 +## C UDF 接口函数定义 接口函数的名称是 udf 名称,或者是 udf 名称和特定后缀(_start, _finish, _init, _destroy)的连接。以下描述中函数名称中的 scalarfn,aggfn, udf 需要替换成udf函数名。 @@ -110,7 +110,7 @@ aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 接口函数参数类型见数据结构定义。 -### 标量接口函数 +### C UDF 标量接口函数 `int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn *resultColumn)` @@ -120,7 +120,7 @@ aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 - inputDataBlock: 输入的数据块 - resultColumn: 输出列 -### 聚合接口函数 +### C UDF 聚合接口函数 `int32_t aggfn_start(SUdfInterBuf *interBuf)` @@ -137,7 +137,7 @@ aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 - result:最终结果。 -### UDF 初始化和销毁 +### C UDF 初始化和销毁 `int32_t udf_init()` `int32_t udf_destroy()` @@ -145,7 +145,7 @@ aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 其中 udf 是函数名的占位符。udf_init 完成初始化工作。 udf_destroy 完成清理工作。如果没有初始化工作,无需定义udf_init函数。如果没有清理工作,无需定义udf_destroy函数。 -## UDF 数据结构 +## C UDF 数据结构 ```c typedef struct SUdfColumnMeta { int16_t type; @@ -203,7 +203,7 @@ typedef struct SUdfInterBuf { 为了更好的操作以上数据结构,提供了一些便利函数,定义在 taosudf.h。 -## 编译 UDF +## 编译 C UDF 用户定义函数的 C 语言源代码无法直接被 TDengine 系统使用,而是需要先编译为 动态链接库,之后才能载入 TDengine 系统。 @@ -215,9 +215,9 @@ gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so 这样就准备好了动态链接库 libbitand.so 文件,可以供后文创建 UDF 时使用了。为了保证可靠的系统运行,编译器 GCC 推荐使用 7.5 及以上版本。 -## 示例代码 +## C UDF 示例代码 -### 标量函数示例 [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c) +### C UDF 标量函数示例 [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c) bit_add 实现多列的按位与功能。如果只有一列,返回这一列。bit_add 忽略空值。 @@ -230,7 +230,7 @@ bit_add 实现多列的按位与功能。如果只有一列,返回这一列。 -### 聚合函数示例1 返回值为数值类型 [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c) +### C UDF 聚合函数示例1 返回值为数值类型 [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c) l2norm 实现了输入列的所有数据的二阶范数,即对每个数据先平方,再累加求和,最后开方。 @@ -243,7 +243,7 @@ l2norm 实现了输入列的所有数据的二阶范数,即对每个数据先 -### 聚合函数示例2 返回值为字符串类型 [max_vol](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/max_vol.c) +### C UDF 聚合函数示例2 返回值为字符串类型 [max_vol](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/max_vol.c) max_vol 实现了从多个输入的电压列中找到最大电压,返回由设备ID + 最大电压所在(行,列)+ 最大电压值 组成的组合字符串值 @@ -275,7 +275,7 @@ select max_vol(vol1,vol2,vol3,deviceid) from battery; - 聚合函数需要实现聚合接口函数 start ,reduce ,finish。 - 如果需要初始化,实现 init;如果需要清理工作,实现 destroy。 -## 实现标量函数 +## Python UDF 实现标量函数 标量函数实现模版如下 ```Python @@ -289,7 +289,7 @@ def process(input: datablock) -> tuple[output_type]: # return tuple object consisted of object of type outputtype ``` -## 实现聚合函数 +## Python UDF 实现聚合函数 聚合函数实现模版如下 ```Python @@ -309,16 +309,16 @@ def finish(buf: bytes) -> output_type: #return obj of type outputtype ``` -## 接口函数定义 +## Python UDF 接口函数定义 -### 标量接口函数 +### Python UDF 标量接口函数 ```Python def process(input: datablock) -> tuple[output_type]: ``` - input:datablock 类似二维矩阵,通过成员方法 data(row,col)返回位于 row 行,col 列的 python 对象 - 返回值是一个 Python 对象元组,每个元素类型为输出类型。 -### 聚合接口函数 +### Python UDF 聚合接口函数 ```Python def start() -> bytes: def reduce(inputs: datablock, buf: bytes) -> bytes @@ -328,7 +328,7 @@ def finish(buf: bytes) -> output_type: 首先调用 start 生成最初结果 buffer,然后输入数据会被分为多个行数据块,对每个数据块 inputs 和当前中间结果 buf 调用 reduce,得到新的中间结果,最后再调用 finish 从中间结果 buf 产生最终输出,最终输出只能含 0 或 1 条数据。 -### UDF 初始化和销毁 +### Python UDF 初始化和销毁 ```Python def init() def destroy() @@ -353,12 +353,12 @@ def destroy() pip install taospyudf lddconfig ``` -2. 如果 Python UDF 程序执行时,引用其它的包,PYTHONPATH 环境变量可以通过在 taos.cfg 的 UdfdLdLibPath 变量配置 +2. 如果 Python UDF 程序执行时,通过 PYTHONPATH 引用其它的包,可以设置 taos.cfg 的 UdfdLdLibPath 变量为PYTHONPATH的内容 -## 示例代码 -### 标量函数示例 [pybitand](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pybitand.py) +## Python UDF 示例代码 +### Python UDF 标量函数示例 [pybitand](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pybitand.py) -bit_add 实现多列的按位与功能。如果只有一列,返回这一列。bit_add 忽略空值。 +pybitand 实现多列的按位与功能。如果只有一列,返回这一列。pybitand 忽略空值。
pybitand.py @@ -369,7 +369,7 @@ bit_add 实现多列的按位与功能。如果只有一列,返回这一列。
-### 聚合函数示例 [pyl2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pyl2norm.py) +### Python UDF 聚合函数示例 [pyl2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pyl2norm.py) pyl2norm 实现了输入列的所有数据的二阶范数,即对每个数据先平方,再累加求和,最后开方。 From bcd051777bb33cf91aaaefe03d4cfd3454024bf0 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Thu, 4 May 2023 09:59:19 +0800 Subject: [PATCH 097/156] rocksdb/cmake: disable iostats_context, perf_context --- cmake/cmake.define | 2 +- contrib/CMakeLists.txt | 16 +++++++++++++++- 2 files changed, 16 insertions(+), 2 deletions(-) diff --git a/cmake/cmake.define b/cmake/cmake.define index 3038bf975e..dcb4ce2702 100644 --- a/cmake/cmake.define +++ b/cmake/cmake.define @@ -123,7 +123,7 @@ ELSE () MESSAGE(STATUS "XXXXXXXXXXXXXX Clang/AppleClang" ${TD_DARWIN}) IF (${TD_DARWIN}) SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-y2k") - SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-y2k") + SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-y2k") ELSE () SET(CMAKE_C_FLAGS "${CMAKE_C_FLAGS} -Werror -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k") SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Werror -Wno-reserved-user-defined-literal -Wno-literal-suffix -Werror=return-type -fPIC -gdwarf-2 -g3 -Wformat=2 -Wno-format-nonliteral -Wno-format-truncation -Wno-format-y2k") diff --git a/contrib/CMakeLists.txt b/contrib/CMakeLists.txt index 8f94313382..942e77dacb 100644 --- a/contrib/CMakeLists.txt +++ b/contrib/CMakeLists.txt @@ -223,17 +223,31 @@ endif(${BUILD_WITH_LEVELDB}) # rocksdb # To support rocksdb build on ubuntu: sudo apt-get install libgflags-dev if(${BUILD_WITH_ROCKSDB}) - SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized") + #SET(CMAKE_CXX_FLAGS "${CMAKE_CXX_FLAGS} -Wno-error=maybe-uninitialized") option(WITH_TESTS "" OFF) option(WITH_BENCHMARK_TOOLS "" OFF) option(WITH_TOOLS "" OFF) option(WITH_LIBURING "" OFF) + option(WITH_IOSTATS_CONTEXT "" OFF) + option(WITH_PERF_CONTEXT "" OFF) + option(FAIL_ON_WARNINGS "" OFF) + #option(WITH_JEMALLOC "" ON) option(ROCKSDB_BUILD_SHARED "Build shared versions of the RocksDB libraries" OFF) + IF (${TD_WINDOWS}) + option(WITH_MD_LIBRARY "build with MD" OFF) + set(SYSTEM_LIBS ${SYSTEM_LIBS} shlwapi.lib rpcrt4.lib) + endif(${TD_WINDOWS}) add_subdirectory(rocksdb EXCLUDE_FROM_ALL) target_include_directories( rocksdb PUBLIC $ ) + IF (${TD_DARWIN}) + target_compile_options( + rocksdb + PRIVATE -Wno-unused-private-field + ) + endif(${TD_DARWIN}) endif(${BUILD_WITH_ROCKSDB}) # lucene From 90be20a6623f8c815a88c25a7a56619dd17cb7ca Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Thu, 4 May 2023 18:00:54 +0800 Subject: [PATCH 098/156] cache/lstring: use ltype with cache/get --- source/dnode/vnode/src/inc/tsdb.h | 2 +- source/dnode/vnode/src/tsdb/tsdbCache.c | 6 ++++-- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 6 +++--- 3 files changed, 8 insertions(+), 6 deletions(-) diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index 08d4cbad1a..c704472dff 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -809,7 +809,7 @@ typedef struct { int32_t tsdbOpenCache(STsdb *pTsdb); void tsdbCloseCache(STsdb *pTsdb); int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *row); -int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, char const *lstring); +int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, int32_t ltype); int32_t tsdbCacheDel(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSKEY sKey, TSKEY eKey); int32_t tsdbCacheInsertLast(SLRUCache *pCache, tb_uid_t uid, TSDBROW *row, STsdb *pTsdb); diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 35e4dce3dc..bccbc0c34e 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -293,8 +293,10 @@ _exit: return code; } -int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, char const *lstring) { - int32_t code = 0; +int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, int32_t ltype) { + static char const *alstring[2] = {"last", "last_row"}; + char const *lstring = alstring[ltype]; + int32_t code = 0; SArray *pCidList = pr->pCidList; int num_keys = TARRAY_SIZE(pCidList); diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index eac27fb3c6..84873931d6 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -285,7 +285,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 pr->pDataFReader = NULL; pr->pDataFReaderLast = NULL; - char const* lstring = pr->type & CACHESCAN_RETRIEVE_LAST ? "last" : "last_row"; + int32_t ltype = pr->type & CACHESCAN_RETRIEVE_LAST >> 3; // retrieve the only one last row of all tables in the uid list. if (HASTYPE(pr->type, CACHESCAN_RETRIEVE_TYPE_SINGLE)) { @@ -294,7 +294,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 for (int32_t i = 0; i < pr->numOfTables; ++i) { STableKeyInfo* pKeyInfo = &pr->pTableList[i]; - tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, lstring); + tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, ltype); if (TARRAY_SIZE(pRow) <= 0) { taosArrayClearEx(pRow, freeItem); continue; @@ -368,7 +368,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 for (int32_t i = pr->tableIndex; i < pr->numOfTables; ++i) { STableKeyInfo* pKeyInfo = &pr->pTableList[i]; - tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, lstring); + tsdbCacheGet(pr->pTsdb, pKeyInfo->uid, pRow, pr, ltype); if (TARRAY_SIZE(pRow) <= 0) { taosArrayClearEx(pRow, freeItem); continue; From ed3325419ca33c1bd6948726602856b884ea4ae4 Mon Sep 17 00:00:00 2001 From: wade zhang <95411902+gccgdb1234@users.noreply.github.com> Date: Thu, 4 May 2023 18:36:18 +0800 Subject: [PATCH 099/156] Update 29-changes.md --- docs/zh/12-taos-sql/29-changes.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/12-taos-sql/29-changes.md b/docs/zh/12-taos-sql/29-changes.md index 8a5f1b20e3..27dd3294b7 100644 --- a/docs/zh/12-taos-sql/29-changes.md +++ b/docs/zh/12-taos-sql/29-changes.md @@ -27,7 +27,7 @@ description: "TDengine 3.0 版本的语法变更说明" | - | :------- | :-------- | :------- | | 1 | ALTER ACCOUNT | 废除 | 2.x中为企业版功能,3.0不再支持。语法暂时保留了,执行报“This statement is no longer supported”错误。 | 2 | ALTER ALL DNODES | 新增 | 修改所有DNODE的参数。 -| 3 | ALTER DATABASE | 调整 |

废除

  • QUORUM:写入需要的副本确认数。3.0 版本默认行为是强一致性,且不支持修改为弱一致性。
  • BLOCKS:VNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。
  • UPDATE:更新操作的支持模式。3.0版本所有数据库都支持部分列更新。
  • CACHELAST:缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。
  • COMP:3.0版本暂不支持修改。

新增

  • CACHEMODEL:表示是否在内存中缓存子表的最近数据。
  • CACHESIZE:表示缓存子表最近数据的内存大小。
  • WAL_FSYNC_PERIOD:代替原FSYNC参数。
  • WAL_LEVEL:代替原WAL参数。
  • WAL_RETENTION_PERIOD:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。
  • WAL_RETENTION_SIZE:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。

调整

  • REPLICA:3.0.0版本暂不支持修改。
  • KEEP:3.0版本新增支持带单位的设置方式。
+| 3 | ALTER DATABASE | 调整 |

废除

  • QUORUM:写入需要的副本确认数。3.0 版本默认行为是强一致性,且不支持修改为弱一致性。
  • BLOCKS:VNODE使用的内存块数。3.0版本使用BUFFER来表示VNODE写入内存池的大小。
  • UPDATE:更新操作的支持模式。3.0版本所有数据库都支持部分列更新。
  • CACHELAST:缓存最新一行数据的模式。3.0版本用CACHEMODEL代替。
  • COMP:3.0版本暂不支持修改。

新增

  • CACHEMODEL:表示是否在内存中缓存子表的最近数据。
  • CACHESIZE:表示缓存子表最近数据的内存大小。
  • WAL_FSYNC_PERIOD:代替原FSYNC参数。
  • WAL_LEVEL:代替原WAL参数。
  • WAL_RETENTION_PERIOD:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。
  • WAL_RETENTION_SIZE:3.0.4.0版本新增,wal文件的额外保留策略,用于数据订阅。

调整

  • KEEP:3.0版本新增支持带单位的设置方式。
| 4 | ALTER STABLE | 调整 | 废除
  • CHANGE TAG:修改标签列的名称。3.0版本使用RENAME TAG代替。
    新增
  • RENAME TAG:代替原CHANGE TAG子句。
  • COMMENT:修改超级表的注释。
| 5 | ALTER TABLE | 调整 | 废除
  • CHANGE TAG:修改标签列的名称。3.0版本使用RENAME TAG代替。
    新增
  • RENAME TAG:代替原CHANGE TAG子句。
  • COMMENT:修改表的注释。
  • TTL:修改表的生命周期。
| 6 | ALTER USER | 调整 | 废除
  • PRIVILEGE:修改用户权限。3.0版本使用GRANT和REVOKE来授予和回收权限。
    新增
  • ENABLE:启用或停用此用户。
  • SYSINFO:修改用户是否可查看系统信息。
From 5d63f438b5ffaf21a4334e046e15f1bcc0e75edd Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Thu, 4 May 2023 13:51:48 +0800 Subject: [PATCH 100/156] enh: confirm alter hash range --- include/common/tmsgdef.h | 1 - source/common/src/tmsg.c | 2 +- source/dnode/mnode/impl/src/mndVgroup.c | 5 +++++ source/dnode/vnode/inc/vnode.h | 1 + source/dnode/vnode/src/vnd/vnodeCfg.c | 3 +++ source/dnode/vnode/src/vnd/vnodeOpen.c | 1 + source/dnode/vnode/src/vnd/vnodeSvr.c | 25 ++++++++++++++++++++++--- 7 files changed, 33 insertions(+), 5 deletions(-) diff --git a/include/common/tmsgdef.h b/include/common/tmsgdef.h index ffad6f6646..c4371f35dc 100644 --- a/include/common/tmsgdef.h +++ b/include/common/tmsgdef.h @@ -225,7 +225,6 @@ enum { TD_DEF_MSG_TYPE(TDMT_VND_COMMIT, "vnode-commit", NULL, NULL) TD_DEF_MSG_TYPE(TDMT_VND_CREATE_INDEX, "vnode-create-index", NULL, NULL) TD_DEF_MSG_TYPE(TDMT_VND_DROP_INDEX, "vnode-drop-index", NULL, NULL) - TD_DEF_MSG_TYPE(TDMT_VND_DISABLE_WRITE, "vnode-disable-write", NULL, NULL) TD_DEF_MSG_TYPE(TDMT_VND_MAX_MSG, "vnd-max", NULL, NULL) diff --git a/source/common/src/tmsg.c b/source/common/src/tmsg.c index 4139d6c7d4..324e6ff37b 100644 --- a/source/common/src/tmsg.c +++ b/source/common/src/tmsg.c @@ -7691,4 +7691,4 @@ void tDeleteMqSubTopicEp(SMqSubTopicEp *pSubTopicEp) { taosMemoryFreeClear(pSubTopicEp->schema.pSchema); pSubTopicEp->schema.nCols = 0; taosArrayDestroy(pSubTopicEp->vgs); -} \ No newline at end of file +} diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index 64e0f82a1f..add2b1568c 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -2164,6 +2164,7 @@ static int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj mInfo("vgId:%d, vnode:%d dnode:%d", newVg2.vgId, i, newVg2.vnodeGid[i].dnodeId); } + // alter hash range int32_t maxVgId = sdbGetMaxId(pMnode->pSdb, SDB_VGROUP); if (mndAddAlterVnodeHashRangeAction(pMnode, pTrans, &newVg1, maxVgId) != 0) goto _OVER; newVg1.vgId = maxVgId; @@ -2172,6 +2173,10 @@ static int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj if (mndAddAlterVnodeHashRangeAction(pMnode, pTrans, &newVg2, maxVgId) != 0) goto _OVER; newVg2.vgId = maxVgId; + if (mndAddAlterVnodeConfirmAction(pMnode, pTrans, pDb, &newVg1) != 0) goto _OVER; + + if (mndAddAlterVnodeConfirmAction(pMnode, pTrans, pDb, &newVg2) != 0) goto _OVER; + // adjust vgroup replica if (pDb->cfg.replications != newVg1.replica) { if (mndBuildAlterVgroupAction(pMnode, pTrans, pDb, pDb, &newVg1, pArray) != 0) goto _OVER; diff --git a/source/dnode/vnode/inc/vnode.h b/source/dnode/vnode/inc/vnode.h index 828a173108..0a54da9b53 100644 --- a/source/dnode/vnode/inc/vnode.h +++ b/source/dnode/vnode/inc/vnode.h @@ -343,6 +343,7 @@ struct SVnodeCfg { SVnodeStats vndStats; uint32_t hashBegin; uint32_t hashEnd; + bool hashChange; int16_t sttTrigger; int16_t hashPrefix; int16_t hashSuffix; diff --git a/source/dnode/vnode/src/vnd/vnodeCfg.c b/source/dnode/vnode/src/vnd/vnodeCfg.c index 511ba9cc24..faa4d2fc57 100644 --- a/source/dnode/vnode/src/vnd/vnodeCfg.c +++ b/source/dnode/vnode/src/vnd/vnodeCfg.c @@ -134,6 +134,7 @@ int vnodeEncodeConfig(const void *pObj, SJson *pJson) { if (tjsonAddIntegerToObject(pJson, "sstTrigger", pCfg->sttTrigger) < 0) return -1; if (tjsonAddIntegerToObject(pJson, "hashBegin", pCfg->hashBegin) < 0) return -1; if (tjsonAddIntegerToObject(pJson, "hashEnd", pCfg->hashEnd) < 0) return -1; + if (tjsonAddIntegerToObject(pJson, "hashChange", pCfg->hashChange) < 0) return -1; if (tjsonAddIntegerToObject(pJson, "hashMethod", pCfg->hashMethod) < 0) return -1; if (tjsonAddIntegerToObject(pJson, "hashPrefix", pCfg->hashPrefix) < 0) return -1; if (tjsonAddIntegerToObject(pJson, "hashSuffix", pCfg->hashSuffix) < 0) return -1; @@ -249,6 +250,8 @@ int vnodeDecodeConfig(const SJson *pJson, void *pObj) { if (code < 0) return -1; tjsonGetNumberValue(pJson, "hashEnd", pCfg->hashEnd, code); if (code < 0) return -1; + tjsonGetNumberValue(pJson, "hashChange", pCfg->hashChange, code); + if (code < 0) return -1; tjsonGetNumberValue(pJson, "hashMethod", pCfg->hashMethod, code); if (code < 0) return -1; tjsonGetNumberValue(pJson, "hashPrefix", pCfg->hashPrefix, code); diff --git a/source/dnode/vnode/src/vnd/vnodeOpen.c b/source/dnode/vnode/src/vnd/vnodeOpen.c index 7d41edfdd9..2f7520e3a7 100644 --- a/source/dnode/vnode/src/vnd/vnodeOpen.c +++ b/source/dnode/vnode/src/vnd/vnodeOpen.c @@ -198,6 +198,7 @@ int32_t vnodeAlterHashRange(const char *srcPath, const char *dstPath, SAlterVnod info.config.vgId = pReq->dstVgId; info.config.hashBegin = pReq->hashBegin; info.config.hashEnd = pReq->hashEnd; + info.config.hashChange = true; info.config.walCfg.vgId = pReq->dstVgId; SSyncCfg *pCfg = &info.config.syncCfg; diff --git a/source/dnode/vnode/src/vnd/vnodeSvr.c b/source/dnode/vnode/src/vnd/vnodeSvr.c index dd576392f0..b5a5b1500b 100644 --- a/source/dnode/vnode/src/vnd/vnodeSvr.c +++ b/source/dnode/vnode/src/vnd/vnodeSvr.c @@ -1453,11 +1453,30 @@ int32_t vnodeProcessCreateTSma(SVnode *pVnode, void *pCont, uint32_t contLen) { return vnodeProcessCreateTSmaReq(pVnode, 1, pCont, contLen, NULL); } +static int32_t vnodeConsolidateAlterHashRange(SVnode *pVnode, int64_t version) { + int32_t code = TSDB_CODE_SUCCESS; + + vInfo("vgId:%d, trim meta of tables per hash range [%" PRIu32 ", %" PRIu32 "]. apply-index:%" PRId64, TD_VID(pVnode), + pVnode->config.hashBegin, pVnode->config.hashEnd, version); + + // TODO: trim meta of tables from TDB per hash range [pVnode->config.hashBegin, pVnode->config.hashEnd] + + return code; +} + static int32_t vnodeProcessAlterConfirmReq(SVnode *pVnode, int64_t version, void *pReq, int32_t len, SRpcMsg *pRsp) { - vInfo("vgId:%d, vnode management handle msgType:alter-confirm, alter replica confim msg is processed", - TD_VID(pVnode)); + vInfo("vgId:%d, vnode handle msgType:alter-confirm, alter confim msg is processed", TD_VID(pVnode)); + int32_t code = TSDB_CODE_SUCCESS; + if (!pVnode->config.hashChange) { + goto _exit; + } + + code = vnodeConsolidateAlterHashRange(pVnode, version); + pVnode->config.hashChange = false; + +_exit: pRsp->msgType = TDMT_VND_ALTER_CONFIRM_RSP; - pRsp->code = TSDB_CODE_SUCCESS; + pRsp->code = code; pRsp->pCont = NULL; pRsp->contLen = 0; From cfc2b2ecda081d346ea7700ef87d47789804ebeb Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Thu, 4 May 2023 19:46:53 +0800 Subject: [PATCH 101/156] test: adjust test case split_vgroup_replica1.sim --- tests/script/tsim/dnode/split_vgroup_replica1.sim | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/tests/script/tsim/dnode/split_vgroup_replica1.sim b/tests/script/tsim/dnode/split_vgroup_replica1.sim index 51d63d25f4..135f7a579e 100644 --- a/tests/script/tsim/dnode/split_vgroup_replica1.sim +++ b/tests/script/tsim/dnode/split_vgroup_replica1.sim @@ -93,6 +93,19 @@ endi print =============== step4: split print split vgroup 2 sql split vgroup 2 +$wt = 0 + stepwt1: + $wt = $wt + 1 + sleep 1000 + if $wt == 200 then + print ====> split vgroup not completed! + return -1 + endi +sql show transactions +if $rows != 0 then + print wait 1 seconds to alter + goto stepwt1 +endi print =============== step5: check split result sql show d1.tables From 9f3a0d9a5e463681295598821a7be439cc7de86b Mon Sep 17 00:00:00 2001 From: dmchen Date: Fri, 5 May 2023 02:17:06 +0000 Subject: [PATCH 102/156] balance leader skew --- source/dnode/mnode/impl/src/mndVgroup.c | 18 ++++++++++++++++-- 1 file changed, 16 insertions(+), 2 deletions(-) diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index add2b1568c..7aa3afa627 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -1917,7 +1917,7 @@ int32_t mndAddVgroupBalanceToTrans(SMnode *pMnode, SVgObj *pVgroup, STrans *pTra int32_t vgid = pVgroup->vgId; int8_t replica = pVgroup->replica; - if(pVgroup->replica <= 1) { + if(pVgroup->replica <= 1) { mInfo("trans:%d, vgid:%d no need to balance, replica:%d", pTrans->id, vgid, replica); return -1; } @@ -1951,6 +1951,19 @@ int32_t mndAddVgroupBalanceToTrans(SMnode *pMnode, SVgObj *pVgroup, STrans *pTra return -1; } + SDbObj *pDb = mndAcquireDb(pMnode, pVgroup->dbName); + if (pDb == NULL) { + mError("trans:%d, vgid:%d failed to be balanced to dnode:%d, because db not exist", pTrans->id, vgid, dnodeId); + return -1; + } + + if (mndAddAlterVnodeConfirmAction(pMnode, pTrans, pDb, pVgroup) != 0) { + mError("trans:%d, vgid:%d failed to be balanced to dnode:%d", pTrans->id, vgid, dnodeId); + return -1; + } + + mndReleaseDb(pMnode, pDb); + SSdbRaw *pRaw = mndVgroupActionEncode(pVgroup); if (pRaw == NULL) { mError("trans:%d, vgid:%d failed to encode action to dnode:%d", pTrans->id, vgid, dnodeId); @@ -1965,7 +1978,8 @@ int32_t mndAddVgroupBalanceToTrans(SMnode *pMnode, SVgObj *pVgroup, STrans *pTra } else { - mInfo("trans:%d, vgid:%d cant be balanced to dnode:%d, exist:%d, online:%d", pTrans->id, vgid, dnodeId, exist, online); + mInfo("trans:%d, vgid:%d cant be balanced to dnode:%d, exist:%d, online:%d", + pTrans->id, vgid, dnodeId, exist, online); } return 0; From 992480eb00d765518e1b673b89bc32fd377ec280 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Fri, 5 May 2023 10:22:59 +0800 Subject: [PATCH 103/156] cache/read: fix ltype calc --- source/dnode/vnode/src/tsdb/tsdbCache.c | 9 +++++++-- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 2 +- 2 files changed, 8 insertions(+), 3 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index bccbc0c34e..46f8d85d55 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -294,7 +294,7 @@ _exit: } int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, int32_t ltype) { - static char const *alstring[2] = {"last", "last_row"}; + static char const *alstring[2] = {"last_row", "last"}; char const *lstring = alstring[ltype]; int32_t code = 0; @@ -342,7 +342,12 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR } } else { // recalc: load from tsdb - + if (ltype) { + SArray *pArray = NULL; + // mergeLastCid(uid, pTsdb, &pArray, pr, cid); + } else { + // mergeLastRowCid(uid, pTsdb, &pArray, pr, cid); + } // still null, then make up a null col value pLastCol = &noneCol; diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 84873931d6..0e03e6e5fb 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -285,7 +285,7 @@ int32_t tsdbRetrieveCacheRows(void* pReader, SSDataBlock* pResBlock, const int32 pr->pDataFReader = NULL; pr->pDataFReaderLast = NULL; - int32_t ltype = pr->type & CACHESCAN_RETRIEVE_LAST >> 3; + int32_t ltype = (pr->type & CACHESCAN_RETRIEVE_LAST) >> 3; // retrieve the only one last row of all tables in the uid list. if (HASTYPE(pr->type, CACHESCAN_RETRIEVE_TYPE_SINGLE)) { From accdcda3436f731464d14e54dd5e8901361a2ea1 Mon Sep 17 00:00:00 2001 From: shenglian zhou Date: Fri, 5 May 2023 13:21:04 +0800 Subject: [PATCH 104/156] docs: add python udf english version --- docs/en/07-develop/09-udf.md | 187 +++++++++++++++++++++++++++++---- docs/en/12-taos-sql/22-meta.md | 3 + docs/en/12-taos-sql/26-udf.md | 32 ++++-- docs/zh/07-develop/09-udf.md | 17 +-- docs/zh/12-taos-sql/26-udf.md | 10 +- 5 files changed, 206 insertions(+), 43 deletions(-) diff --git a/docs/en/07-develop/09-udf.md b/docs/en/07-develop/09-udf.md index dc42743b51..e1f1bab4e0 100644 --- a/docs/en/07-develop/09-udf.md +++ b/docs/en/07-develop/09-udf.md @@ -6,10 +6,12 @@ description: This document describes how to create user-defined functions (UDF), The built-in functions of TDengine may not be sufficient for the use cases of every application. In this case, you can define custom functions for use in TDengine queries. These are known as user-defined functions (UDF). A user-defined function takes one column of data or the result of a subquery as its input. -TDengine supports user-defined functions written in C or C++. This document describes the usage of user-defined functions. - User-defined functions can be scalar functions or aggregate functions. Scalar functions, such as `abs`, `sin`, and `concat`, output a value for every row of data. Aggregate functions, such as `avg` and `max` output one value for multiple rows of data. +TDengine supports user-defined functions written in C or Python. This document describes the usage of user-defined functions. + +# Implement a UDF in C + When you create a user-defined function, you must implement standard interface functions: - For scalar functions, implement the `scalarfn` interface function. - For aggregate functions, implement the `aggfn_start`, `aggfn`, and `aggfn_finish` interface functions. @@ -17,7 +19,7 @@ When you create a user-defined function, you must implement standard interface f There are strict naming conventions for these interface functions. The names of the start, finish, init, and destroy interfaces must be _start, _finish, _init, and _destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function. -## Implementing a Scalar Function +## Implementing a Scalar Function in C The implementation of a scalar function is described as follows: ```c #include "taos.h" @@ -49,7 +51,7 @@ int32_t scalarfn_destroy() { ``` Replace `scalarfn` with the name of your function. -## Implementing an Aggregate Function +### Implementing an Aggregate Function in C The implementation of an aggregate function is described as follows: ```c @@ -100,7 +102,7 @@ int32_t aggfn_destroy() { ``` Replace `aggfn` with the name of your function. -## Interface Functions +## C UDF Interface Functions There are strict naming conventions for interface functions. The names of the start, finish, init, and destroy interfaces must be _start, _finish, _init, and _destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function. @@ -108,7 +110,7 @@ Interface functions return a value that indicates whether the operation was succ For information about the parameters for interface functions, see Data Model -### Interfaces for Scalar Functions +### Interfaces for C UDF Scalar Functions `int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn *resultColumn)` @@ -118,7 +120,7 @@ The parameters in the function are defined as follows: - inputDataBlock: The data block to input. - resultColumn: The column to output. The column to output. -### Interfaces for Aggregate Functions +### Interfaces for C UDF Aggregate Functions `int32_t aggfn_start(SUdfInterBuf *interBuf)` @@ -126,7 +128,7 @@ The parameters in the function are defined as follows: `int32_t aggfn_finish(SUdfInterBuf* interBuf, SUdfInterBuf *result)` -Replace `aggfn` with the name of your function. In the function, aggfn_start is called to generate a result buffer. Data is then divided between multiple blocks, and aggfn is called on each block to update the result. Finally, aggfn_finish is called to generate final results from the intermediate results. The final result contains only one or zero data points. +Replace `aggfn` with the name of your function. In the function, aggfn_start is called to generate a result buffer. Data is then divided between multiple blocks, and the `aggfn` function is called on each block to update the result. Finally, aggfn_finish is called to generate the final results from the intermediate results. The final result contains only one or zero data points. The parameters in the function are defined as follows: - interBuf: The intermediate result buffer. @@ -135,15 +137,15 @@ The parameters in the function are defined as follows: - result: The final result. -### Initializing and Terminating User-Defined Functions +### C UDF Initializing and Terminating User-Defined Functions `int32_t udf_init()` `int32_t udf_destroy()` -Replace `udf`with the name of your function. udf_init initializes the function. udf_destroy terminates the function. If it is not necessary to initialize your function, udf_init is not required. If it is not necessary to terminate your function, udf_destroy is not required. +Replace `udf` with the name of your function. udf_init initializes the function. udf_destroy terminates the function. If it is not necessary to initialize your function, udf_init is not required. If it is not necessary to terminate your function, udf_destroy is not required. -## Data Structure of User-Defined Functions +## Data Structure of C User-Defined Functions ```c typedef struct SUdfColumnMeta { int16_t type; @@ -193,7 +195,7 @@ typedef struct SUdfInterBuf { ``` The data structure is described as follows: -- The SUdfDataBlock block includes the number of rows (numOfRows) and number of columns (numCols). udfCols[i] (0 <= i <= numCols-1) indicates that each column is of type SUdfColumn. +- The SUdfDataBlock block includes the number of rows (numOfRows) and the number of columns (numCols). udfCols[i] (0 <= i <= numCols-1) indicates that each column is of type SUdfColumn. - SUdfColumn includes the definition of the data type of the column (colMeta) and the data in the column (colData). - The member definitions of SUdfColumnMeta are the same as the data type definitions in `taos.h`. - The data in SUdfColumnData can become longer. varLenCol indicates variable-length data, and fixLenCol indicates fixed-length data. @@ -201,9 +203,9 @@ The data structure is described as follows: Additional functions are defined in `taosudf.h` to make it easier to work with these structures. -## Compile UDF +## Compile C UDF -To use your user-defined function in TDengine, first compile it to a dynamically linked library (DLL). +To use your user-defined function in TDengine, first, compile it to a shared library. For example, the sample UDF `bit_and.c` can be compiled into a DLL as follows: @@ -213,12 +215,9 @@ gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so The generated DLL file `libbitand.so` can now be used to implement your function. Note: GCC 7.5 or later is required. -## Manage and Use User-Defined Functions -After compiling your function into a DLL, you add it to TDengine. For more information, see [User-Defined Functions](../12-taos-sql/26-udf.md). +## C UDF Sample Code -## Sample Code - -### Sample scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/bit_and.c) +### C UDF Sample scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/bit_and.c) The bit_and function implements bitwise addition for multiple columns. If there is only one column, the column is returned. The bit_and function ignores null values. @@ -231,7 +230,7 @@ The bit_and function implements bitwise addition for multiple columns. If there -### Sample aggregate function: [l2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/l2norm.c) +### C UDF Sample aggregate function 1: [l2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/l2norm.c) The l2norm function finds the second-order norm for all data in the input column. This squares the values, takes a cumulative sum, and finds the square root. @@ -243,3 +242,151 @@ The l2norm function finds the second-order norm for all data in the input column ``` + +### C UDF Sample aggregate function 2: [max_vol](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/max_vol.c) + +The max_vol function returns a string concatenating the deviceId column, the row number and column number of the maximum voltage and the maximum voltage given several voltage columns as input. + +Create Table: +```bash +create table battery(ts timestamp, vol1 float, vol2 float, vol3 float, deviceId varchar(16)); +``` +Create the UDF: +```bash +create aggregate function max_vol as '/root/udf/libmaxvol.so' outputtype binary(64) bufsize 10240 language 'C'; +``` +Use the UDF in the query: +```bash +select max_vol(vol1,vol2,vol3,deviceid) from battery; +``` + +
+max_vol.c + +```c +{{#include tests/script/sh/max_vol.c}} +``` + +
+ +#Implement a UDF in Python + +Implement the specified interface functions when implementing a UDF in Python. +- implement `process` function for the scalar UDF。 +- implement `start`, `reduce`, `finish` for the aggregate UDF。 +- implement `init` for initialization and `destroy` for termination。 + +## Implement a Scalar UDF in Python + +The implementation of a scalar UDF is described as follows: + +```Python +def init(): + # initialization +def destroy(): + # destroy +def process(input: datablock) -> tuple[output_type]: + # process input datablock, + # datablock.data(row, col) is to access the python object in location(row,col) + # return tuple object consisted of object of type outputtype +``` + +## Implement an Aggregate UDF in Python + +The implementation of an aggregate function is described as follows: + +```Python +def init(): + #initialization +def destroy(): + #destroy +def start() -> bytes: + #return serialize(init_state) +def reduce(inputs: datablock, buf: bytes) -> bytes + # deserialize buf to state + # reduce the inputs and state into new_state. + # use inputs.data(i,j) to access python ojbect of location(i,j) + # serialize new_state into new_state_bytes + return new_state_bytes +def finish(buf: bytes) -> output_type: + #return obj of type outputtype +``` + +## Python UDF interface functions + +### Python UDF scalar interface functions +```Python +def process(input: datablock) -> tuple[output_type]: +``` +- `input` is a data block two-dimension matrix-like object, of which method `data(row, col)` returns the Python object located at location (`row`, `col`) +- return a Python tuple object, of which each item is a Python object of type `output_type` + +### Python UDF aggregate interface functions +```Python +def start() -> bytes: +def reduce(input: datablock, buf: bytes) -> bytes +def finish(buf: bytes) -> output_type: +``` + +- first `start()` is called to return the initial result in type `bytes` +- then the input data are divided into multiple data blocks and for each block `input`, `reduce` is called with the data block `input` and the current result `buf` bytes and generates a new intermediate result buffer. +- finally, the `finish` function is called on the intermediate result `buf` and outputs 0 or 1 data of type `output_type` + + +### Python UDF Initialization and Termination +```Python +def init() +def destroy() +``` +Implement `init` for initialization and `destroy` for termination. + +## TDengine SQL data type and Python UDF Data Type Mapping Table + +The following table describes the mapping between TDengine SQL data type and Python UDF Data Type. The `NULL` value of all TDengine SQL types is mapped to the `None` value in Python. + +| **TDengine SQL Data Type** | **Python Data Type** | +| :-----------------------: | ------------ | +|TINYINT / SMALLINT / INT / BIGINT | int | +|TINYINT UNSIGNED / SMALLINT UNSIGNED / INT UNSIGNED / BIGINT UNSIGNED | int | +|FLOAT / DOUBLE | float | +|BOOL | bool | +|BINARY / VARCHAR / NCHAR | bytes| +|TIMESTAMP | int | +|JSON and other types | Not Supported | + +## Python UDF Installation +1. Install Python package `taospyudf` that executes Python UDF +```bash +sudo pip install taospyudf +ldconfig +``` +2. If PYTHONPATH is needed to find Python packages when the Python UDF executes, include the PYTHONPATH contents into the udfdLdLibPath variable of the taos.cfg configuration file + +## Python UDF Sample Code +### Python UDF Scalar Function Sample Code [pybitand](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pybitand.py) + +The `pybitand` function implements bitwise addition for multiple columns. If there is only one column, the column is returned. The `pybitand` function ignores null values. + +
+pybitand.py + +```Python +{{#include tests/script/sh/pybitand.py}} +``` + +
+ +### Python UDF Aggregate Function Sample Code [pyl2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pyl2norm.py) + +The `pyl2norm` function finds the second-order norm for all data in the input column. This squares the values, takes a cumulative sum, and finds the square root. +
+pyl2norm.py + +```c +{{#include tests/script/sh/pyl2norm.py}} +``` + +
+ +## Manage and Use User-Defined Functions +You can add UDF to TDengine before using it in SQL queries. For more information, see [User-Defined Functions](../12-taos-sql/26-udf.md). diff --git a/docs/en/12-taos-sql/22-meta.md b/docs/en/12-taos-sql/22-meta.md index 81284aeaed..e63f682761 100644 --- a/docs/en/12-taos-sql/22-meta.md +++ b/docs/en/12-taos-sql/22-meta.md @@ -120,6 +120,9 @@ Provides information about user-defined functions. | 5 | create_time | TIMESTAMP | Creation time | | 6 | code_len | INT | Length of the source code | | 7 | bufsize | INT | Buffer size | +| 8 | func_language | BINARY(31) | UDF programming language | +| 9 | func_body | BINARY(16384) | UDF function body | +| 10 | func_version | INT | UDF function version. starting from 0. Increasing by 1 each time it is updated| ## INS_INDEXES diff --git a/docs/en/12-taos-sql/26-udf.md b/docs/en/12-taos-sql/26-udf.md index cb64873705..c4d6d4fca4 100644 --- a/docs/en/12-taos-sql/26-udf.md +++ b/docs/en/12-taos-sql/26-udf.md @@ -7,17 +7,18 @@ description: This document describes the SQL statements related to user-defined You can create user-defined functions and import them into TDengine. ## Create UDF -SQL command can be executed on the host where the generated UDF DLL resides to load the UDF DLL into TDengine. This operation cannot be done through REST interface or web console. Once created, any client of the current TDengine can use these UDF functions in their SQL commands. UDF are stored in the management node of TDengine. The UDFs loaded in TDengine would be still available after TDengine is restarted. +SQL command can be executed on the host where the generated UDF DLL resides to load the UDF DLL into TDengine. This operation cannot be done through REST interface or web console. Once created, any client of the current TDengine can use these UDF functions in their SQL commands. UDF is stored in the management node of TDengine. The UDFs loaded in TDengine would be still available after TDengine is restarted. When creating UDF, the type of UDF, i.e. a scalar function or aggregate function must be specified. If the specified type is wrong, the SQL statements using the function would fail with errors. The input data type and output data type must be consistent with the UDF definition. - Create Scalar Function ```sql -CREATE FUNCTION function_name AS library_path OUTPUTTYPE output_type; +CREATE [OR REPLACE] FUNCTION function_name AS library_path OUTPUTTYPE output_type [LANGUAGE 'C|Python']; ``` - - - function_name: The scalar function name to be used in SQL statement which must be consistent with the UDF name and is also the name of the compiled DLL (.so file). - - library_path: The absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes. + - OR REPLACE: if the UDF exists, the UDF properties are modified + - function_name: The scalar function name to be used in the SQL statement + - LANGUAGE 'C|Python': the programming language of UDF. Now C or Python is supported. If this clause is omitted, C is assumed as the programming language. + - library_path: For C programming language, The absolute path of the DLL file including the name of the shared object file (.so). For Python programming language, the absolute path of the Python UDF script. The path must be quoted with single or double quotes. - output_type: The data type of the results of the UDF. For example, the following SQL statement can be used to create a UDF from `libbitand.so`. @@ -25,14 +26,20 @@ CREATE FUNCTION function_name AS library_path OUTPUTTYPE output_type; ```sql CREATE FUNCTION bit_and AS "/home/taos/udf_example/libbitand.so" OUTPUTTYPE INT; ``` + For Example, the following SQL statement can be used to modify the existing function `bit_and`. The OUTPUT type is changed to BIGINT and the programming language is changed to Python. + + ```sql + CREATE OR REPLACE FUNCTION bit_and AS "/home/taos/udf_example/bit_and.py" OUTPUTTYPE BIGINT LANGUAGE 'Python'; + ``` - Create Aggregate Function ```sql CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ BUFSIZE buffer_size ]; ``` - - - function_name: The aggregate function name to be used in SQL statement which must be consistent with the udfNormalFunc name and is also the name of the compiled DLL (.so file). - - library_path: The absolute path of the DLL file including the name of the shared object file (.so). The path must be quoted with single or double quotes. + - OR REPLACE: if the UDF exists, the UDF properties are modified + - function_name: The aggregate function name to be used in the SQL statement + - LANGUAGE 'C|Python': the programming language of the UDF. Now C or Python is supported. If this clause is omitted, C is assumed as the programming language. + - library_path: For C programming language, The absolute path of the DLL file including the name of the shared object file (.so). For Python programming language, the absolute path of the Python UDF script. The path must be quoted with single or double quotes. - output_type: The output data type, the value is the literal string of the supported TDengine data type. - buffer_size: The size of the intermediate buffer in bytes. This parameter is optional. @@ -41,6 +48,11 @@ CREATE AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ ```sql CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 8; ``` + For example, the following SQL statement modifies the buffer size of existing UDF `l2norm` to 64 + ```sql + CREATE AGGREGATE FUNCTION l2norm AS "/home/taos/udf_example/libl2norm.so" OUTPUTTYPE DOUBLE bufsize 64; + ``` + For more information about user-defined functions, see [User-Defined Functions](/develop/udf). ## Manage UDF @@ -61,9 +73,9 @@ SHOW FUNCTIONS; ## Call UDF -The function name specified when creating UDF can be used directly in SQL statements, just like builtin functions. For example: +The function name specified when creating UDF can be used directly in SQL statements, just like built-in functions. For example: ```sql SELECT bit_and(c1,c2) FROM table; ``` -The above SQL statement invokes function X for column c1 and c2 on table. You can use query keywords like WHERE with user-defined functions. +The above SQL statement invokes function X for columns c1 and c2 on the table. You can use query keywords like WHERE with user-defined functions. diff --git a/docs/zh/07-develop/09-udf.md b/docs/zh/07-develop/09-udf.md index 8aa841db89..703f222516 100644 --- a/docs/zh/07-develop/09-udf.md +++ b/docs/zh/07-develop/09-udf.md @@ -10,7 +10,7 @@ description: "支持用户编码的聚合函数和标量函数,在查询中嵌 TDengine 支持通过 C/Python 语言进行 UDF 定义。接下来结合示例讲解 UDF 的使用方法。 -# C 语言实现UDF +# C 语言实现 UDF 使用 C 语言实现 UDF 时,需要实现规定的接口函数 - 标量函数需要实现标量接口函数 scalarfn 。 @@ -269,7 +269,7 @@ select max_vol(vol1,vol2,vol3,deviceid) from battery; -# Python 语言实现UDF +# Python 语言实现 UDF 使用 Python 语言实现 UDF 时,需要实现规定的接口函数 - 标量函数需要实现标量接口函数 process 。 - 聚合函数需要实现聚合接口函数 start ,reduce ,finish。 @@ -336,7 +336,10 @@ def destroy() 其中 init 完成初始化工作。 destroy 完成清理工作。如果没有初始化工作,无需定义 init 函数。如果没有清理工作,无需定义 destroy 函数。 -## Python数据类型和TDengine数据类型映射 +## Python 数据类型和 TDengine 数据类型映射 + +下表描述了TDengine SQL数据类型和Python数据类型的映射。任何类型的NULL值都映射成Python的None值。 + | **TDengine SQL数据类型** | **Python数据类型** | | :-----------------------: | ------------ | |TINYINT / SMALLINT / INT / BIGINT | int | @@ -350,8 +353,8 @@ def destroy() ## Python UDF 环境的安装 1. 安装 taospyudf 包。此包执行Python UDF程序。 ```bash -pip install taospyudf -lddconfig +sudo pip install taospyudf +ldconfig ``` 2. 如果 Python UDF 程序执行时,通过 PYTHONPATH 引用其它的包,可以设置 taos.cfg 的 UdfdLdLibPath 变量为PYTHONPATH的内容 @@ -382,5 +385,5 @@ pyl2norm 实现了输入列的所有数据的二阶范数,即对每个数据 -# 管理和使用UDF -编译好的UDF,还需要将其加入到系统才能被正常的SQL调用。关于如何管理和使用UDF,参见[UDF使用说明](../12-taos-sql/26-udf.md) \ No newline at end of file +# 管理和使用 UDF +需要 UDF 将其加入到系统才能被正常的 SQL 调用。关于如何管理和使用 UDF,参见[UDF使用说明](../12-taos-sql/26-udf.md) \ No newline at end of file diff --git a/docs/zh/12-taos-sql/26-udf.md b/docs/zh/12-taos-sql/26-udf.md index a45e837c8d..c1d2761d7d 100644 --- a/docs/zh/12-taos-sql/26-udf.md +++ b/docs/zh/12-taos-sql/26-udf.md @@ -11,15 +11,13 @@ description: 使用 UDF 的详细指南 在创建 UDF 时,需要区分标量函数和聚合函数。如果创建时声明了错误的函数类别,则可能导致通过 SQL 指令调用函数时出错。此外,用户需要保证输入数据类型与 UDF 程序匹配,UDF 输出数据类型与 OUTPUTTYPE 匹配。 -使用 CREATE OR REPLACE FUNCTION,如果函数已经存在,会修改已有的函数属性。 - - 创建标量函数 ```sql CREATE [OR REPLACE] FUNCTION function_name AS library_path OUTPUTTYPE output_type [LANGUAGE 'C|Python']; ``` - - - function_name:标量函数未来在 SQL 中被调用时的函数名,必须与函数实现中 udf 的实际名称一致; - - LANGUAGE 'C|Python':函数编程语言,目前支持C语言和Python语言。 + - OR REPLACE: 如果函数已经存在,会修改已有的函数属性。 + - function_name:标量函数未来在 SQL 中被调用时的函数名; + - LANGUAGE 'C|Python':函数编程语言,目前支持C语言和Python语言。 如果这个从句忽略,编程语言是C语言 - library_path:如果编程语言是C,路径是包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件)。如果编程语言是Python,路径是包含 UDF 函数实现的Python文件路径。这个路径需要用英文单引号或英文双引号括起来; - output_type:此函数计算结果的数据类型名称; @@ -38,7 +36,7 @@ CREATE [OR REPLACE] FUNCTION function_name AS library_path OUTPUTTYPE output_typ ```sql CREATE [OR REPLACE] AGGREGATE FUNCTION function_name AS library_path OUTPUTTYPE output_type [ BUFSIZE buffer_size ] [LANGUAGE 'C|Python']; ``` - + - OR REPLACE: 如果函数已经存在,会修改已有的函数属性。 - function_name:聚合函数未来在 SQL 中被调用时的函数名,必须与函数实现中 udfNormalFunc 的实际名称一致; - LANGUAGE 'C|Python':函数编程语言,目前支持C语言和Python语言。 - library_path:如果编程语言是C,路径是包含 UDF 函数实现的动态链接库的库文件绝对路径(指的是库文件在当前客户端所在主机上的保存路径,通常是指向一个 .so 文件)。如果编程语言是Python,路径是包含 UDF 函数实现的Python文件路径。这个路径需要用英文单引号或英文双引号括起来;; From c949fee06b12da07f9b2dcd445e14f3093a6588f Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Fri, 5 May 2023 10:33:24 +0800 Subject: [PATCH 105/156] enh: commit vnode after consolidating alter hash range --- source/dnode/mnode/impl/src/mndVgroup.c | 2 +- source/dnode/vnode/src/vnd/vnodeOpen.c | 2 -- source/dnode/vnode/src/vnd/vnodeSvr.c | 12 ++++++++++-- 3 files changed, 11 insertions(+), 5 deletions(-) diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index add2b1568c..d4be6bed73 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -2164,7 +2164,7 @@ static int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj mInfo("vgId:%d, vnode:%d dnode:%d", newVg2.vgId, i, newVg2.vnodeGid[i].dnodeId); } - // alter hash range + // alter vgId and hash range int32_t maxVgId = sdbGetMaxId(pMnode->pSdb, SDB_VGROUP); if (mndAddAlterVnodeHashRangeAction(pMnode, pTrans, &newVg1, maxVgId) != 0) goto _OVER; newVg1.vgId = maxVgId; diff --git a/source/dnode/vnode/src/vnd/vnodeOpen.c b/source/dnode/vnode/src/vnd/vnodeOpen.c index 2f7520e3a7..b5e7c6875b 100644 --- a/source/dnode/vnode/src/vnd/vnodeOpen.c +++ b/source/dnode/vnode/src/vnd/vnodeOpen.c @@ -236,8 +236,6 @@ int32_t vnodeAlterHashRange(const char *srcPath, const char *dstPath, SAlterVnod return -1; } - // todo vnode compact here - vInfo("vgId:%d, vnode hashrange is altered", info.config.vgId); return 0; } diff --git a/source/dnode/vnode/src/vnd/vnodeSvr.c b/source/dnode/vnode/src/vnd/vnodeSvr.c index b5a5b1500b..a8511eedfd 100644 --- a/source/dnode/vnode/src/vnd/vnodeSvr.c +++ b/source/dnode/vnode/src/vnd/vnodeSvr.c @@ -425,7 +425,10 @@ int32_t vnodeProcessWriteMsg(SVnode *pVnode, SRpcMsg *pMsg, int64_t version, SRp } } break; case TDMT_VND_ALTER_CONFIRM: - vnodeProcessAlterConfirmReq(pVnode, version, pReq, len, pRsp); + needCommit = pVnode->config.hashChange; + if (vnodeProcessAlterConfirmReq(pVnode, version, pReq, len, pRsp) < 0) { + goto _err; + } break; case TDMT_VND_ALTER_CONFIG: vnodeProcessAlterConfigReq(pVnode, version, pReq, len, pRsp); @@ -1472,6 +1475,11 @@ static int32_t vnodeProcessAlterConfirmReq(SVnode *pVnode, int64_t version, void } code = vnodeConsolidateAlterHashRange(pVnode, version); + if (code < 0) { + vError("vgId:%d, failed to consolidate alter hashrange since %s. version:%" PRId64, TD_VID(pVnode), terrstr(), + version); + goto _exit; + } pVnode->config.hashChange = false; _exit: @@ -1480,7 +1488,7 @@ _exit: pRsp->pCont = NULL; pRsp->contLen = 0; - return 0; + return code; } static int32_t vnodeProcessAlterConfigReq(SVnode *pVnode, int64_t version, void *pReq, int32_t len, SRpcMsg *pRsp) { From c5b4bbc6d23f9880548c4cd475e3a45e8a57c82e Mon Sep 17 00:00:00 2001 From: Alex Duan <51781608+DuanKuanJun@users.noreply.github.com> Date: Fri, 5 May 2023 13:55:15 +0800 Subject: [PATCH 106/156] Update 09-udf.md --- docs/zh/07-develop/09-udf.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/zh/07-develop/09-udf.md b/docs/zh/07-develop/09-udf.md index 703f222516..e22e466ad7 100644 --- a/docs/zh/07-develop/09-udf.md +++ b/docs/zh/07-develop/09-udf.md @@ -359,7 +359,7 @@ ldconfig 2. 如果 Python UDF 程序执行时,通过 PYTHONPATH 引用其它的包,可以设置 taos.cfg 的 UdfdLdLibPath 变量为PYTHONPATH的内容 ## Python UDF 示例代码 -### Python UDF 标量函数示例 [pybitand](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pybitand.py) +### Python UDF 标量函数示例 [pybitand](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/pybitand.py) pybitand 实现多列的按位与功能。如果只有一列,返回这一列。pybitand 忽略空值。 @@ -372,7 +372,7 @@ pybitand 实现多列的按位与功能。如果只有一列,返回这一列 -### Python UDF 聚合函数示例 [pyl2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pyl2norm.py) +### Python UDF 聚合函数示例 [pyl2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/pyl2norm.py) pyl2norm 实现了输入列的所有数据的二阶范数,即对每个数据先平方,再累加求和,最后开方。 @@ -386,4 +386,4 @@ pyl2norm 实现了输入列的所有数据的二阶范数,即对每个数据 # 管理和使用 UDF -需要 UDF 将其加入到系统才能被正常的 SQL 调用。关于如何管理和使用 UDF,参见[UDF使用说明](../12-taos-sql/26-udf.md) \ No newline at end of file +需要 UDF 将其加入到系统才能被正常的 SQL 调用。关于如何管理和使用 UDF,参见[UDF使用说明](../12-taos-sql/26-udf.md) From 879bbe982a30834cdbd6f5a055453a1224339fed Mon Sep 17 00:00:00 2001 From: gccgdb1234 Date: Fri, 5 May 2023 14:08:21 +0800 Subject: [PATCH 107/156] doc: revise doc structure of UDF --- docs/en/07-develop/09-udf.md | 51 ++++++++++++++++----------------- docs/zh/07-develop/09-udf.md | 55 ++++++++++++++++++------------------ 2 files changed, 53 insertions(+), 53 deletions(-) diff --git a/docs/en/07-develop/09-udf.md b/docs/en/07-develop/09-udf.md index e1f1bab4e0..aef845d3ce 100644 --- a/docs/en/07-develop/09-udf.md +++ b/docs/en/07-develop/09-udf.md @@ -10,7 +10,7 @@ User-defined functions can be scalar functions or aggregate functions. Scalar fu TDengine supports user-defined functions written in C or Python. This document describes the usage of user-defined functions. -# Implement a UDF in C +## Implement a UDF in C When you create a user-defined function, you must implement standard interface functions: - For scalar functions, implement the `scalarfn` interface function. @@ -19,7 +19,7 @@ When you create a user-defined function, you must implement standard interface f There are strict naming conventions for these interface functions. The names of the start, finish, init, and destroy interfaces must be _start, _finish, _init, and _destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function. -## Implementing a Scalar Function in C +### Implementing a Scalar Function in C The implementation of a scalar function is described as follows: ```c #include "taos.h" @@ -102,7 +102,7 @@ int32_t aggfn_destroy() { ``` Replace `aggfn` with the name of your function. -## C UDF Interface Functions +### UDF Interface Definition in C There are strict naming conventions for interface functions. The names of the start, finish, init, and destroy interfaces must be _start, _finish, _init, and _destroy, respectively. Replace `scalarfn`, `aggfn`, and `udf` with the name of your user-defined function. @@ -110,8 +110,7 @@ Interface functions return a value that indicates whether the operation was succ For information about the parameters for interface functions, see Data Model -### Interfaces for C UDF Scalar Functions - +#### Scalar Interface `int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn *resultColumn)` Replace `scalarfn` with the name of your function. This function performs scalar calculations on data blocks. You can configure a value through the parameters in the `resultColumn` structure. @@ -120,7 +119,7 @@ The parameters in the function are defined as follows: - inputDataBlock: The data block to input. - resultColumn: The column to output. The column to output. -### Interfaces for C UDF Aggregate Functions +#### Aggregate Interface `int32_t aggfn_start(SUdfInterBuf *interBuf)` @@ -137,7 +136,7 @@ The parameters in the function are defined as follows: - result: The final result. -### C UDF Initializing and Terminating User-Defined Functions +#### Initialization and Cleanup Interface `int32_t udf_init()` `int32_t udf_destroy()` @@ -145,7 +144,7 @@ The parameters in the function are defined as follows: Replace `udf` with the name of your function. udf_init initializes the function. udf_destroy terminates the function. If it is not necessary to initialize your function, udf_init is not required. If it is not necessary to terminate your function, udf_destroy is not required. -## Data Structure of C User-Defined Functions +### Data Structures for UDF in C ```c typedef struct SUdfColumnMeta { int16_t type; @@ -203,7 +202,7 @@ The data structure is described as follows: Additional functions are defined in `taosudf.h` to make it easier to work with these structures. -## Compile C UDF +### Compiling C UDF To use your user-defined function in TDengine, first, compile it to a shared library. @@ -215,9 +214,9 @@ gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so The generated DLL file `libbitand.so` can now be used to implement your function. Note: GCC 7.5 or later is required. -## C UDF Sample Code +### UDF Sample Code in C -### C UDF Sample scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/bit_and.c) +#### Scalar function: [bit_and](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/bit_and.c) The bit_and function implements bitwise addition for multiple columns. If there is only one column, the column is returned. The bit_and function ignores null values. @@ -230,7 +229,7 @@ The bit_and function implements bitwise addition for multiple columns. If there -### C UDF Sample aggregate function 1: [l2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/l2norm.c) +#### Aggregate function 1: [l2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/l2norm.c) The l2norm function finds the second-order norm for all data in the input column. This squares the values, takes a cumulative sum, and finds the square root. @@ -243,7 +242,7 @@ The l2norm function finds the second-order norm for all data in the input column -### C UDF Sample aggregate function 2: [max_vol](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/max_vol.c) +#### Aggregate function 2: [max_vol](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/max_vol.c) The max_vol function returns a string concatenating the deviceId column, the row number and column number of the maximum voltage and the maximum voltage given several voltage columns as input. @@ -269,14 +268,14 @@ select max_vol(vol1,vol2,vol3,deviceid) from battery; -#Implement a UDF in Python +## Implement a UDF in Python Implement the specified interface functions when implementing a UDF in Python. - implement `process` function for the scalar UDF。 - implement `start`, `reduce`, `finish` for the aggregate UDF。 - implement `init` for initialization and `destroy` for termination。 -## Implement a Scalar UDF in Python +### Implement a Scalar UDF in Python The implementation of a scalar UDF is described as follows: @@ -291,7 +290,7 @@ def process(input: datablock) -> tuple[output_type]: # return tuple object consisted of object of type outputtype ``` -## Implement an Aggregate UDF in Python +### Implement an Aggregate UDF in Python The implementation of an aggregate function is described as follows: @@ -312,16 +311,16 @@ def finish(buf: bytes) -> output_type: #return obj of type outputtype ``` -## Python UDF interface functions +### Python UDF Interface Definition -### Python UDF scalar interface functions +#### Scalar interface ```Python def process(input: datablock) -> tuple[output_type]: ``` - `input` is a data block two-dimension matrix-like object, of which method `data(row, col)` returns the Python object located at location (`row`, `col`) - return a Python tuple object, of which each item is a Python object of type `output_type` -### Python UDF aggregate interface functions +#### Aggregate Interface ```Python def start() -> bytes: def reduce(input: datablock, buf: bytes) -> bytes @@ -333,14 +332,14 @@ def finish(buf: bytes) -> output_type: - finally, the `finish` function is called on the intermediate result `buf` and outputs 0 or 1 data of type `output_type` -### Python UDF Initialization and Termination +#### Initialization and Cleanup Interface ```Python def init() def destroy() ``` Implement `init` for initialization and `destroy` for termination. -## TDengine SQL data type and Python UDF Data Type Mapping Table +### Data Mapping between TDengine SQL and Python UDF The following table describes the mapping between TDengine SQL data type and Python UDF Data Type. The `NULL` value of all TDengine SQL types is mapped to the `None` value in Python. @@ -354,7 +353,7 @@ The following table describes the mapping between TDengine SQL data type and Pyt |TIMESTAMP | int | |JSON and other types | Not Supported | -## Python UDF Installation +### Installing Python UDF 1. Install Python package `taospyudf` that executes Python UDF ```bash sudo pip install taospyudf @@ -362,8 +361,8 @@ ldconfig ``` 2. If PYTHONPATH is needed to find Python packages when the Python UDF executes, include the PYTHONPATH contents into the udfdLdLibPath variable of the taos.cfg configuration file -## Python UDF Sample Code -### Python UDF Scalar Function Sample Code [pybitand](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pybitand.py) +### Python UDF Sample Code +#### Scalar Function [pybitand](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/pybitand.py) The `pybitand` function implements bitwise addition for multiple columns. If there is only one column, the column is returned. The `pybitand` function ignores null values. @@ -376,7 +375,7 @@ The `pybitand` function implements bitwise addition for multiple columns. If the -### Python UDF Aggregate Function Sample Code [pyl2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pyl2norm.py) +#### Aggregate Function [pyl2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/pyl2norm.py) The `pyl2norm` function finds the second-order norm for all data in the input column. This squares the values, takes a cumulative sum, and finds the square root.
@@ -389,4 +388,4 @@ The `pyl2norm` function finds the second-order norm for all data in the input co
## Manage and Use User-Defined Functions -You can add UDF to TDengine before using it in SQL queries. For more information, see [User-Defined Functions](../12-taos-sql/26-udf.md). +You need to add UDF to TDengine before using it in SQL queries. For more information about how to manage UDF and how to invoke UDF, please see [Manage and Use UDF](../12-taos-sql/26-udf.md). diff --git a/docs/zh/07-develop/09-udf.md b/docs/zh/07-develop/09-udf.md index 703f222516..dbbea7a00f 100644 --- a/docs/zh/07-develop/09-udf.md +++ b/docs/zh/07-develop/09-udf.md @@ -10,7 +10,7 @@ description: "支持用户编码的聚合函数和标量函数,在查询中嵌 TDengine 支持通过 C/Python 语言进行 UDF 定义。接下来结合示例讲解 UDF 的使用方法。 -# C 语言实现 UDF +## 用 C 语言实现 UDF 使用 C 语言实现 UDF 时,需要实现规定的接口函数 - 标量函数需要实现标量接口函数 scalarfn 。 @@ -19,7 +19,7 @@ TDengine 支持通过 C/Python 语言进行 UDF 定义。接下来结合示例 接口函数的名称是 UDF 名称,或者是 UDF 名称和特定后缀(_start, _finish, _init, _destroy)的连接。列表中的scalarfn,aggfn, udf需要替换成udf函数名。 -## C UDF 实现标量函数 +### 用 C 语言实现标量函数 标量函数实现模板如下 ```c #include "taos.h" @@ -51,7 +51,7 @@ int32_t scalarfn_destroy() { ``` scalarfn 为函数名的占位符,需要替换成函数名,如bit_and。 -## C UDF 实现聚合函数 +### 用 C 语言实现聚合函数 聚合函数的实现模板如下 ```c @@ -102,7 +102,7 @@ int32_t aggfn_destroy() { ``` aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 -## C UDF 接口函数定义 +### C 语言 UDF 接口函数定义 接口函数的名称是 udf 名称,或者是 udf 名称和特定后缀(_start, _finish, _init, _destroy)的连接。以下描述中函数名称中的 scalarfn,aggfn, udf 需要替换成udf函数名。 @@ -110,7 +110,7 @@ aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 接口函数参数类型见数据结构定义。 -### C UDF 标量接口函数 +#### 标量函数接口 `int32_t scalarfn(SUdfDataBlock* inputDataBlock, SUdfColumn *resultColumn)` @@ -120,7 +120,7 @@ aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 - inputDataBlock: 输入的数据块 - resultColumn: 输出列 -### C UDF 聚合接口函数 +#### 聚合函数接口 `int32_t aggfn_start(SUdfInterBuf *interBuf)` @@ -137,7 +137,7 @@ aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 - result:最终结果。 -### C UDF 初始化和销毁 +#### 初始化和销毁接口 `int32_t udf_init()` `int32_t udf_destroy()` @@ -145,7 +145,7 @@ aggfn为函数名的占位符,需要修改为自己的函数名,如l2norm。 其中 udf 是函数名的占位符。udf_init 完成初始化工作。 udf_destroy 完成清理工作。如果没有初始化工作,无需定义udf_init函数。如果没有清理工作,无需定义udf_destroy函数。 -## C UDF 数据结构 +### C 语言 UDF 数据结构 ```c typedef struct SUdfColumnMeta { int16_t type; @@ -203,7 +203,7 @@ typedef struct SUdfInterBuf { 为了更好的操作以上数据结构,提供了一些便利函数,定义在 taosudf.h。 -## 编译 C UDF +### 编译 C UDF 用户定义函数的 C 语言源代码无法直接被 TDengine 系统使用,而是需要先编译为 动态链接库,之后才能载入 TDengine 系统。 @@ -215,9 +215,9 @@ gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so 这样就准备好了动态链接库 libbitand.so 文件,可以供后文创建 UDF 时使用了。为了保证可靠的系统运行,编译器 GCC 推荐使用 7.5 及以上版本。 -## C UDF 示例代码 +### C UDF 示例代码 -### C UDF 标量函数示例 [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c) +#### 标量函数示例 [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c) bit_add 实现多列的按位与功能。如果只有一列,返回这一列。bit_add 忽略空值。 @@ -230,7 +230,7 @@ bit_add 实现多列的按位与功能。如果只有一列,返回这一列。 -### C UDF 聚合函数示例1 返回值为数值类型 [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c) +#### 聚合函数示例1 返回值为数值类型 [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c) l2norm 实现了输入列的所有数据的二阶范数,即对每个数据先平方,再累加求和,最后开方。 @@ -243,7 +243,7 @@ l2norm 实现了输入列的所有数据的二阶范数,即对每个数据先 -### C UDF 聚合函数示例2 返回值为字符串类型 [max_vol](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/max_vol.c) +#### 聚合函数示例2 返回值为字符串类型 [max_vol](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/max_vol.c) max_vol 实现了从多个输入的电压列中找到最大电压,返回由设备ID + 最大电压所在(行,列)+ 最大电压值 组成的组合字符串值 @@ -269,13 +269,14 @@ select max_vol(vol1,vol2,vol3,deviceid) from battery; -# Python 语言实现 UDF +## 用 Python 语言实现 UDF + 使用 Python 语言实现 UDF 时,需要实现规定的接口函数 - 标量函数需要实现标量接口函数 process 。 - 聚合函数需要实现聚合接口函数 start ,reduce ,finish。 - 如果需要初始化,实现 init;如果需要清理工作,实现 destroy。 -## Python UDF 实现标量函数 +### 用 Python 实现标量函数 标量函数实现模版如下 ```Python @@ -289,7 +290,7 @@ def process(input: datablock) -> tuple[output_type]: # return tuple object consisted of object of type outputtype ``` -## Python UDF 实现聚合函数 +### 用 Python 实现聚合函数 聚合函数实现模版如下 ```Python @@ -309,16 +310,16 @@ def finish(buf: bytes) -> output_type: #return obj of type outputtype ``` -## Python UDF 接口函数定义 +### Python UDF 接口函数定义 -### Python UDF 标量接口函数 +#### 标量函数接口 ```Python def process(input: datablock) -> tuple[output_type]: ``` - input:datablock 类似二维矩阵,通过成员方法 data(row,col)返回位于 row 行,col 列的 python 对象 - 返回值是一个 Python 对象元组,每个元素类型为输出类型。 -### Python UDF 聚合接口函数 +#### 聚合函数接口 ```Python def start() -> bytes: def reduce(inputs: datablock, buf: bytes) -> bytes @@ -328,7 +329,7 @@ def finish(buf: bytes) -> output_type: 首先调用 start 生成最初结果 buffer,然后输入数据会被分为多个行数据块,对每个数据块 inputs 和当前中间结果 buf 调用 reduce,得到新的中间结果,最后再调用 finish 从中间结果 buf 产生最终输出,最终输出只能含 0 或 1 条数据。 -### Python UDF 初始化和销毁 +#### 初始化和销毁接口 ```Python def init() def destroy() @@ -336,7 +337,7 @@ def destroy() 其中 init 完成初始化工作。 destroy 完成清理工作。如果没有初始化工作,无需定义 init 函数。如果没有清理工作,无需定义 destroy 函数。 -## Python 数据类型和 TDengine 数据类型映射 +### Python 和 TDengine之间的数据类型映射 下表描述了TDengine SQL数据类型和Python数据类型的映射。任何类型的NULL值都映射成Python的None值。 @@ -350,7 +351,7 @@ def destroy() |TIMESTAMP | int | |JSON and other types | 不支持 | -## Python UDF 环境的安装 +### Python UDF 环境的安装 1. 安装 taospyudf 包。此包执行Python UDF程序。 ```bash sudo pip install taospyudf @@ -358,8 +359,8 @@ ldconfig ``` 2. 如果 Python UDF 程序执行时,通过 PYTHONPATH 引用其它的包,可以设置 taos.cfg 的 UdfdLdLibPath 变量为PYTHONPATH的内容 -## Python UDF 示例代码 -### Python UDF 标量函数示例 [pybitand](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pybitand.py) +### Python UDF 示例代码 +#### 标量函数示例 [pybitand](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pybitand.py) pybitand 实现多列的按位与功能。如果只有一列,返回这一列。pybitand 忽略空值。 @@ -372,7 +373,7 @@ pybitand 实现多列的按位与功能。如果只有一列,返回这一列 -### Python UDF 聚合函数示例 [pyl2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/pyl2norm.py) +#### 聚合函数示例 [pyl2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/pyl2norm.py) pyl2norm 实现了输入列的所有数据的二阶范数,即对每个数据先平方,再累加求和,最后开方。 @@ -385,5 +386,5 @@ pyl2norm 实现了输入列的所有数据的二阶范数,即对每个数据 -# 管理和使用 UDF -需要 UDF 将其加入到系统才能被正常的 SQL 调用。关于如何管理和使用 UDF,参见[UDF使用说明](../12-taos-sql/26-udf.md) \ No newline at end of file +## 管理和使用 UDF +在使用 UDF 之前需要先将其加入到 TDengine 系统中。关于如何管理和使用 UDF,请参考[管理和使用 UDF](../12-taos-sql/26-udf.md) \ No newline at end of file From 623e6e7c2f6ae415ca34eab0333ec1d6d16ca898 Mon Sep 17 00:00:00 2001 From: wade zhang <95411902+gccgdb1234@users.noreply.github.com> Date: Fri, 5 May 2023 14:22:29 +0800 Subject: [PATCH 108/156] Update 09-udf.md --- docs/en/07-develop/09-udf.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/07-develop/09-udf.md b/docs/en/07-develop/09-udf.md index aef845d3ce..a7753647e3 100644 --- a/docs/en/07-develop/09-udf.md +++ b/docs/en/07-develop/09-udf.md @@ -387,5 +387,5 @@ The `pyl2norm` function finds the second-order norm for all data in the input co -## Manage and Use User-Defined Functions +## Manage and Use UDF You need to add UDF to TDengine before using it in SQL queries. For more information about how to manage UDF and how to invoke UDF, please see [Manage and Use UDF](../12-taos-sql/26-udf.md). From 0e6de0935393270f4f36ec23b6d1ae07b9031cdc Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Fri, 5 May 2023 14:53:32 +0800 Subject: [PATCH 109/156] enh: unify error msg for disk space checking in doInitAggInfoSup --- source/libs/executor/src/aggregateoperator.c | 4 ++-- 1 file changed, 2 insertions(+), 2 deletions(-) diff --git a/source/libs/executor/src/aggregateoperator.c b/source/libs/executor/src/aggregateoperator.c index ec8060348d..1f170552db 100644 --- a/source/libs/executor/src/aggregateoperator.c +++ b/source/libs/executor/src/aggregateoperator.c @@ -467,8 +467,8 @@ int32_t doInitAggInfoSup(SAggSupporter* pAggSup, SqlFunctionCtx* pCtx, int32_t n getBufferPgSize(pAggSup->resultRowSize, &defaultPgsz, &defaultBufsz); if (!osTempSpaceAvailable()) { - code = TSDB_CODE_NO_AVAIL_DISK; - qError("Init stream agg supporter failed since %s, %s", terrstr(code), pKey); + code = TSDB_CODE_NO_DISKSPACE; + qError("Init stream agg supporter failed since %s, key:%s, tempDir:%s", terrstr(code), pKey, tsTempDir); return code; } From b0e77eeb4500fa7b1ab8afab56f1e24bfe84993f Mon Sep 17 00:00:00 2001 From: gccgdb1234 Date: Fri, 5 May 2023 16:13:47 +0800 Subject: [PATCH 110/156] doc: refine the description of TABLE_PREFIX and TABLE_SUFFIX --- docs/en/12-taos-sql/02-database.md | 4 ++-- docs/zh/12-taos-sql/02-database.md | 4 ++-- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/docs/en/12-taos-sql/02-database.md b/docs/en/12-taos-sql/02-database.md index ec007d6830..8c579622c9 100644 --- a/docs/en/12-taos-sql/02-database.md +++ b/docs/en/12-taos-sql/02-database.md @@ -72,8 +72,8 @@ database_option: { - 0: The database can contain multiple supertables. - 1: The database can contain only one supertable. - STT_TRIGGER: specifies the number of file merges triggered by flushed files. The default is 8, ranging from 1 to 16. For high-frequency scenarios with few tables, it is recommended to use the default configuration or a smaller value for this parameter; For multi-table low-frequency scenarios, it is recommended to configure this parameter with a larger value. -- TABLE_PREFIX:The prefix length in the table name that is ignored when distributing table to vnode based on table name. -- TABLE_SUFFIX:The suffix length in the table name that is ignored when distributing table to vnode based on table name. +- TABLE_PREFIX: The prefix in the table name that is ignored when distributing a table to a vgroup when it's a positive number, or only the prefix is used when distributing a table to a vgroup, the default value is 0; For example, if the table name v30001, then "0001" is used if TSDB_PREFIX is set to 2 but "v3" is used if TSDB_PREFIX is set to -2; It can help you to control the distribution of tables. +- TABLE_SUFFIX:The suffix in the table name that is ignored when distributing a table to a vgroup when it's a positive number, or only the suffix is used when distributing a table to a vgroup, the default value is 0; For example, if the table name v30001, then "v300" is used if TSDB_SUFFIX is set to 2 but "01" is used if TSDB_SUFFIX is set to -2; It can help you to control the distribution of tables. - TSDB_PAGESIZE: The page size of the data storage engine in a vnode. The unit is KB. The default is 4 KB. The range is 1 to 16384, that is, 1 KB to 16 MB. - WAL_RETENTION_PERIOD: specifies the maximum time of which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a time in seconds. The default value 0. A value of 0 indicates that WAL files are not required to keep for consumption. Alter it with a proper value at first to create topics. - WAL_RETENTION_SIZE: specifies the maximum total size of which WAL files are to be kept for consumption. This parameter is used for data subscription. Enter a size in KB. The default value is 0. A value of 0 indicates that the total size of WAL files to keep for consumption has no upper limit. diff --git a/docs/zh/12-taos-sql/02-database.md b/docs/zh/12-taos-sql/02-database.md index dd30635660..b0e2887ba9 100644 --- a/docs/zh/12-taos-sql/02-database.md +++ b/docs/zh/12-taos-sql/02-database.md @@ -71,8 +71,8 @@ database_option: { - 0:表示可以创建多张超级表。 - 1:表示只可以创建一张超级表。 - STT_TRIGGER:表示落盘文件触发文件合并的个数。默认为 1,范围 1 到 16。对于少表高频场景,此参数建议使用默认配置,或较小的值;而对于多表低频场景,此参数建议配置较大的值。 -- TABLE_PREFIX:内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的前缀的长度。 -- TABLE_SUFFIX:内部存储引擎根据表名分配存储该表数据的 VNODE 时要忽略的后缀的长度。 +- TABLE_PREFIX:当其为正值时,在决定把一个表分配到哪个 vgroup 时要忽略表名中指定长度的前缀;当其为负值时,在决定把一个表分配到哪个 vgroup 时只使用表名中指定长度的前缀;例如,假定表名为 "v30001",当 TSDB_PREFIX = 2 时 使用 "0001" 来决定分配到哪个 vgroup ,当 TSDB_PREFIX = -2 时使用 "v3" 来决定分配到哪个 vgroup +- TABLE_SUFFIX:当其为正值时,在决定把一个表分配到哪个 vgroup 时要忽略表名中指定长度的后缀;当其为负值时,在决定把一个表分配到哪个 vgroup 时只使用表名中指定长度的后缀;例如,假定表名为 "v30001",当 TSDB_SUFFIX = 2 时 使用 "v300" 来决定分配到哪个 vgroup ,当 TSDB_SUFFIX = -2 时使用 "0001" 来决定分配到哪个 vgroup。 - TSDB_PAGESIZE:一个 VNODE 中时序数据存储引擎的页大小,单位为 KB,默认为 4 KB。范围为 1 到 16384,即 1 KB到 16 MB。 - WAL_RETENTION_PERIOD: 为了数据订阅消费,需要WAL日志文件额外保留的最大时长策略。WAL日志清理,不受订阅客户端消费状态影响。单位为 s。默认为 0,表示无需为订阅保留。新建订阅,应先设置恰当的时长策略。 - WAL_RETENTION_SIZE:为了数据订阅消费,需要WAL日志文件额外保留的最大累计大小策略。单位为 KB。默认为 0,表示累计大小无上限。 From b3bd3799afb018962cac149105d11d1e35d524b2 Mon Sep 17 00:00:00 2001 From: dapan1121 <72057773+dapan1121@users.noreply.github.com> Date: Fri, 5 May 2023 16:19:34 +0800 Subject: [PATCH 111/156] Update 02-database.md --- docs/zh/12-taos-sql/02-database.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/12-taos-sql/02-database.md b/docs/zh/12-taos-sql/02-database.md index b0e2887ba9..a2a0914120 100644 --- a/docs/zh/12-taos-sql/02-database.md +++ b/docs/zh/12-taos-sql/02-database.md @@ -72,7 +72,7 @@ database_option: { - 1:表示只可以创建一张超级表。 - STT_TRIGGER:表示落盘文件触发文件合并的个数。默认为 1,范围 1 到 16。对于少表高频场景,此参数建议使用默认配置,或较小的值;而对于多表低频场景,此参数建议配置较大的值。 - TABLE_PREFIX:当其为正值时,在决定把一个表分配到哪个 vgroup 时要忽略表名中指定长度的前缀;当其为负值时,在决定把一个表分配到哪个 vgroup 时只使用表名中指定长度的前缀;例如,假定表名为 "v30001",当 TSDB_PREFIX = 2 时 使用 "0001" 来决定分配到哪个 vgroup ,当 TSDB_PREFIX = -2 时使用 "v3" 来决定分配到哪个 vgroup -- TABLE_SUFFIX:当其为正值时,在决定把一个表分配到哪个 vgroup 时要忽略表名中指定长度的后缀;当其为负值时,在决定把一个表分配到哪个 vgroup 时只使用表名中指定长度的后缀;例如,假定表名为 "v30001",当 TSDB_SUFFIX = 2 时 使用 "v300" 来决定分配到哪个 vgroup ,当 TSDB_SUFFIX = -2 时使用 "0001" 来决定分配到哪个 vgroup。 +- TABLE_SUFFIX:当其为正值时,在决定把一个表分配到哪个 vgroup 时要忽略表名中指定长度的后缀;当其为负值时,在决定把一个表分配到哪个 vgroup 时只使用表名中指定长度的后缀;例如,假定表名为 "v30001",当 TSDB_SUFFIX = 2 时 使用 "v300" 来决定分配到哪个 vgroup ,当 TSDB_SUFFIX = -2 时使用 "01" 来决定分配到哪个 vgroup。 - TSDB_PAGESIZE:一个 VNODE 中时序数据存储引擎的页大小,单位为 KB,默认为 4 KB。范围为 1 到 16384,即 1 KB到 16 MB。 - WAL_RETENTION_PERIOD: 为了数据订阅消费,需要WAL日志文件额外保留的最大时长策略。WAL日志清理,不受订阅客户端消费状态影响。单位为 s。默认为 0,表示无需为订阅保留。新建订阅,应先设置恰当的时长策略。 - WAL_RETENTION_SIZE:为了数据订阅消费,需要WAL日志文件额外保留的最大累计大小策略。单位为 KB。默认为 0,表示累计大小无上限。 From 997be0a3edef6ac78923eed97ab2607658f1c034 Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Fri, 5 May 2023 17:04:36 +0800 Subject: [PATCH 112/156] fix: having clause issues --- source/libs/parser/src/parTranslater.c | 29 ++++++++++++++++++-------- 1 file changed, 20 insertions(+), 9 deletions(-) diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index 25e92a55ec..a2748c7e6d 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -2289,6 +2289,23 @@ static int32_t checkAggColCoexist(STranslateContext* pCxt, SSelectStmt* pSelect) return TSDB_CODE_SUCCESS; } +static int32_t checkPartitionGroupBy(STranslateContext* pCxt, SSelectStmt* pSelect) { + int32_t code = TSDB_CODE_SUCCESS; + if (NULL == pSelect->pGroupByList && NULL == pSelect->pPartitionByList && NULL == pSelect->pWindow) { + return code; + } + if (NULL != pSelect->pHaving) { + code = checkExprForGroupBy(pCxt, &pSelect->pHaving); + } + if (TSDB_CODE_SUCCESS == code && NULL != pSelect->pProjectionList) { + code = checkExprListForGroupBy(pCxt, pSelect, pSelect->pProjectionList); + } + if (TSDB_CODE_SUCCESS == code && NULL != pSelect->pOrderByList) { + code = checkExprListForGroupBy(pCxt, pSelect, pSelect->pOrderByList); + } + return code; +} + static int32_t checkWindowFuncCoexist(STranslateContext* pCxt, SSelectStmt* pSelect) { if (NULL == pSelect->pWindow) { return TSDB_CODE_SUCCESS; @@ -2901,9 +2918,6 @@ static int32_t translateOrderBy(STranslateContext* pCxt, SSelectStmt* pSelect) { pCxt->currClause = SQL_CLAUSE_ORDER_BY; code = translateExprList(pCxt, pSelect->pOrderByList); } - if (TSDB_CODE_SUCCESS == code) { - code = checkExprListForGroupBy(pCxt, pSelect, pSelect->pOrderByList); - } return code; } @@ -3022,9 +3036,6 @@ static int32_t translateSelectList(STranslateContext* pCxt, SSelectStmt* pSelect if (TSDB_CODE_SUCCESS == code) { code = translateProjectionList(pCxt, pSelect); } - if (TSDB_CODE_SUCCESS == code) { - code = checkExprListForGroupBy(pCxt, pSelect, pSelect->pProjectionList); - } if (TSDB_CODE_SUCCESS == code) { code = translateFillValues(pCxt, pSelect); } @@ -3041,9 +3052,6 @@ static int32_t translateHaving(STranslateContext* pCxt, SSelectStmt* pSelect) { } pCxt->currClause = SQL_CLAUSE_HAVING; int32_t code = translateExpr(pCxt, &pSelect->pHaving); - if (TSDB_CODE_SUCCESS == code && (NULL != pSelect->pGroupByList || NULL != pSelect->pWindow)) { - code = checkExprForGroupBy(pCxt, &pSelect->pHaving); - } return code; } @@ -3629,6 +3637,9 @@ static int32_t translateSelectFrom(STranslateContext* pCxt, SSelectStmt* pSelect if (TSDB_CODE_SUCCESS == code) { code = translateOrderBy(pCxt, pSelect); } + if (TSDB_CODE_SUCCESS == code) { + code = checkPartitionGroupBy(pCxt, pSelect); + } if (TSDB_CODE_SUCCESS == code) { code = checkAggColCoexist(pCxt, pSelect); } From 908fd4ff97bd2c1b00b85ef5b84fca1f264d4f4b Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Fri, 5 May 2023 17:48:21 +0800 Subject: [PATCH 113/156] feat: process split vgroup msg imp --- source/dnode/mnode/impl/src/mndVgroup.c | 41 ++++--------------------- 1 file changed, 6 insertions(+), 35 deletions(-) diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index d4be6bed73..e580c78383 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -2103,7 +2103,7 @@ static int32_t mndAddAdjustVnodeHashRangeAction(SMnode *pMnode, STrans *pTrans, return 0; } -static int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj *pVgroup) { +int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj *pVgroup) { int32_t code = -1; STrans *pTrans = NULL; SSdbRaw *pRaw = NULL; @@ -2229,42 +2229,13 @@ _OVER: return code; } -static int32_t mndProcessSplitVgroupMsg(SRpcMsg *pReq) { - SMnode *pMnode = pReq->info.node; - int32_t code = -1; - SVgObj *pVgroup = NULL; - SDbObj *pDb = NULL; +extern int32_t mndProcessSplitVgroupMsgImp(SRpcMsg *pReq); - SSplitVgroupReq req = {0}; - if (tDeserializeSSplitVgroupReq(pReq->pCont, pReq->contLen, &req) != 0) { - terrno = TSDB_CODE_INVALID_MSG; - goto _OVER; - } +static int32_t mndProcessSplitVgroupMsg(SRpcMsg *pReq) { return mndProcessSplitVgroupMsgImp(pReq); } - mInfo("vgId:%d, start to split", req.vgId); - if (mndCheckOperPrivilege(pMnode, pReq->info.conn.user, MND_OPER_SPLIT_VGROUP) != 0) { - goto _OVER; - } - - pVgroup = mndAcquireVgroup(pMnode, req.vgId); - if (pVgroup == NULL) goto _OVER; - - pDb = mndAcquireDb(pMnode, pVgroup->dbName); - if (pDb == NULL) goto _OVER; - - code = mndSplitVgroup(pMnode, pReq, pDb, pVgroup); - if (code != 0) { - mError("vgId:%d, failed to start to split vgroup since %s, db:%s", pVgroup->vgId, terrstr(), pDb->name); - goto _OVER; - } - - mInfo("vgId:%d, split vgroup started successfully. db:%s", pVgroup->vgId, pDb->name); - -_OVER: - mndReleaseVgroup(pMnode, pVgroup); - mndReleaseDb(pMnode, pDb); - return code; -} +#ifndef TD_ENTERPRISE +int32_t mndProcessSplitVgroupMsgImp(SRpcMsg *pReq) { return 0; } +#endif static int32_t mndSetBalanceVgroupInfoToTrans(SMnode *pMnode, STrans *pTrans, SDbObj *pDb, SVgObj *pVgroup, SDnodeObj *pSrc, SDnodeObj *pDst) { From 76a20b2221b22e246ac4528e8e5f305f38e31cba Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Fri, 5 May 2023 18:57:15 +0800 Subject: [PATCH 114/156] enh: adjust enum ETrnExec --- source/dnode/mgmt/mgmt_vnode/src/vmWorker.c | 2 +- source/dnode/mnode/impl/inc/mndDef.h | 2 +- source/dnode/mnode/impl/inc/mndTrans.h | 1 + source/dnode/mnode/impl/src/mndTrans.c | 4 +++- 4 files changed, 6 insertions(+), 3 deletions(-) diff --git a/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c b/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c index a7c4b2e70e..ef4a2b52bb 100644 --- a/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c +++ b/source/dnode/mgmt/mgmt_vnode/src/vmWorker.c @@ -60,7 +60,7 @@ static void vmProcessMgmtQueue(SQueueInfo *pInfo, SRpcMsg *pMsg) { if (IsReq(pMsg)) { if (code != 0) { if (terrno != 0) code = terrno; - dGError("msg:%p, failed to process since %s, type:%s", pMsg, terrstr(code), TMSG_INFO(pMsg->msgType)); + dGError("msg:%p, failed to process since %s, type:%s", pMsg, tstrerror(code), TMSG_INFO(pMsg->msgType)); } vmSendRsp(pMsg, code); } diff --git a/source/dnode/mnode/impl/inc/mndDef.h b/source/dnode/mnode/impl/inc/mndDef.h index f547ce025d..e227848e9f 100644 --- a/source/dnode/mnode/impl/inc/mndDef.h +++ b/source/dnode/mnode/impl/inc/mndDef.h @@ -118,7 +118,7 @@ typedef enum { } ETrnPolicy; typedef enum { - TRN_EXEC_PRARLLEL = 0, + TRN_EXEC_PARALLEL = 0, TRN_EXEC_SERIAL = 1, } ETrnExec; diff --git a/source/dnode/mnode/impl/inc/mndTrans.h b/source/dnode/mnode/impl/inc/mndTrans.h index 057e3efbbc..03434573c4 100644 --- a/source/dnode/mnode/impl/inc/mndTrans.h +++ b/source/dnode/mnode/impl/inc/mndTrans.h @@ -76,6 +76,7 @@ void mndTransSetRpcRsp(STrans *pTrans, void *pCont, int32_t contLen); void mndTransSetCb(STrans *pTrans, ETrnFunc startFunc, ETrnFunc stopFunc, void *param, int32_t paramLen); void mndTransSetDbName(STrans *pTrans, const char *dbname, const char *stbname); void mndTransSetSerial(STrans *pTrans); +void mndTransSetParallel(STrans *pTrans); void mndTransSetOper(STrans *pTrans, EOperType oper); int32_t mndTrancCheckConflict(SMnode *pMnode, STrans *pTrans); diff --git a/source/dnode/mnode/impl/src/mndTrans.c b/source/dnode/mnode/impl/src/mndTrans.c index 106eea0313..ccb0882bb0 100644 --- a/source/dnode/mnode/impl/src/mndTrans.c +++ b/source/dnode/mnode/impl/src/mndTrans.c @@ -643,7 +643,7 @@ STrans *mndTransCreate(SMnode *pMnode, ETrnPolicy policy, ETrnConflct conflict, pTrans->stage = TRN_STAGE_PREPARE; pTrans->policy = policy; pTrans->conflict = conflict; - pTrans->exec = TRN_EXEC_PRARLLEL; + pTrans->exec = TRN_EXEC_PARALLEL; pTrans->createdTime = taosGetTimestampMs(); pTrans->redoActions = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(STransAction)); pTrans->undoActions = taosArrayInit(TRANS_ARRAY_SIZE, sizeof(STransAction)); @@ -793,6 +793,8 @@ void mndTransSetDbName(STrans *pTrans, const char *dbname, const char *stbname) void mndTransSetSerial(STrans *pTrans) { pTrans->exec = TRN_EXEC_SERIAL; } +void mndTransSetParallel(STrans *pTrans) { pTrans->exec = TRN_EXEC_PARALLEL; } + void mndTransSetOper(STrans *pTrans, EOperType oper) { pTrans->oper = oper; } static int32_t mndTransSync(SMnode *pMnode, STrans *pTrans) { From 0c3dc0867fff2d8884842d6dd88d2e6f8efd8320 Mon Sep 17 00:00:00 2001 From: kailixu Date: Fri, 5 May 2023 20:33:41 +0800 Subject: [PATCH 115/156] chore: sync fix from main --- source/client/src/clientSml.c | 11 ++++++----- source/libs/tdb/src/db/tdbBtree.c | 5 +++++ 2 files changed, 11 insertions(+), 5 deletions(-) diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 45f0def157..98b64b211a 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -558,14 +558,15 @@ static int32_t smlGenerateSchemaAction(SSchema *colField, SHashObj *colHash, SSm return 0; } +#define BOUNDARY 1024 static int32_t smlFindNearestPowerOf2(int32_t length, uint8_t type) { int32_t result = 1; - if (length < 1024) { - while (result <= length) { - result <<= 1; - } - } else { + if (length >= BOUNDARY){ result = length; + }else{ + while (result <= length) { + result << 1; + } } if (type == TSDB_DATA_TYPE_BINARY && result > TSDB_MAX_BINARY_LEN - VARSTR_HEADER_SIZE) { diff --git a/source/libs/tdb/src/db/tdbBtree.c b/source/libs/tdb/src/db/tdbBtree.c index 6df2b40000..c49b5726b6 100644 --- a/source/libs/tdb/src/db/tdbBtree.c +++ b/source/libs/tdb/src/db/tdbBtree.c @@ -1814,6 +1814,11 @@ int tdbBtreeNext(SBTC *pBtc, void **ppKey, int *kLen, void **ppVal, int *vLen) { *ppVal = pVal; *vLen = cd.vLen; + } else { + if (TDB_CELLDECODER_FREE_VAL(&cd)) { + tdbTrace("tdb/btree-next2 decoder: %p pVal free: %p", &cd, cd.pVal); + tdbFree(cd.pVal); + } } ret = tdbBtcMoveToNext(pBtc); From 4ae9eab90ebf0e5ccc6dc34a772d7905d2d88414 Mon Sep 17 00:00:00 2001 From: kailixu Date: Fri, 5 May 2023 20:42:49 +0800 Subject: [PATCH 116/156] chore: fix --- source/client/src/clientSml.c | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/source/client/src/clientSml.c b/source/client/src/clientSml.c index 98b64b211a..cf1c6cc434 100644 --- a/source/client/src/clientSml.c +++ b/source/client/src/clientSml.c @@ -565,7 +565,7 @@ static int32_t smlFindNearestPowerOf2(int32_t length, uint8_t type) { result = length; }else{ while (result <= length) { - result << 1; + result <<= 1; } } From 29da3087727535528b13a8329faf1c337312b806 Mon Sep 17 00:00:00 2001 From: kailixu Date: Fri, 5 May 2023 23:58:45 +0800 Subject: [PATCH 117/156] chore: test case --- .../1-insert/influxdb_line_taosc_insert.py | 14 ++++++++------ 1 file changed, 8 insertions(+), 6 deletions(-) diff --git a/tests/system-test/1-insert/influxdb_line_taosc_insert.py b/tests/system-test/1-insert/influxdb_line_taosc_insert.py index 46aec2909a..819951cce5 100644 --- a/tests/system-test/1-insert/influxdb_line_taosc_insert.py +++ b/tests/system-test/1-insert/influxdb_line_taosc_insert.py @@ -908,22 +908,24 @@ class TDTestCase: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) # * every binary and nchar must be length+2, so here is two tag, max length could not larger than 16384-2*2 - input_sql = f'{stb_name}, t0=f,t1="{tdCom.getLongName(4093, "letters")}" 1626006833639000000' + stb_name = tdCom.getLongName(8, "letters") + input_sql = f'{stb_name},t0=f,t1="{tdCom.getLongName(4091, "letters")}", c0=f 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(2) - input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(4084, "letters")}",t2="{tdCom.getLongName(6, "letters")}" c0=f 1626006833639000000' + tdSql.checkRows(1) + + input_sql = f'{stb_name},t0=t,t1="{tdCom.getLongName(4092, "letters")}", c0=f 1626006833639000000' try: self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) raise Exception("should not reach here") except SchemalessError as err: tdSql.checkNotEqual(err.errno, 0) tdSql.query(f"select * from {stb_name}") - tdSql.checkRows(2) + tdSql.checkRows(1) - stb_name = tdCom.getLongName(8, "letters") + stb_name = tdCom.getLongName(9, "letters") # # * check col,col+ts max in describe ---> 16143 input_sql = f'{stb_name},t0=t c0=1i32,c1="{tdCom.getLongName(65517, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) @@ -938,7 +940,7 @@ class TDTestCase: tdSql.checkRows(1) - stb_name = tdCom.getLongName(9, "letters") + stb_name = tdCom.getLongName(10, "letters") input_sql = f'{stb_name},t0=t c0=1i16,c1="{tdCom.getLongName(49133, "letters")}",c2="{tdCom.getLongName(16384, "letters")}" 1626006833639000000' self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) From 75795253a9b1dc6c3f7ebebbc0e19fb9bb0518fc Mon Sep 17 00:00:00 2001 From: kailixu Date: Sat, 6 May 2023 06:06:12 +0800 Subject: [PATCH 118/156] more code --- source/client/src/clientHb.c | 2 +- tests/system-test/1-insert/influxdb_line_taosc_insert.py | 6 +++--- 2 files changed, 4 insertions(+), 4 deletions(-) diff --git a/source/client/src/clientHb.c b/source/client/src/clientHb.c index 4240a8510d..203aad8068 100644 --- a/source/client/src/clientHb.c +++ b/source/client/src/clientHb.c @@ -1172,7 +1172,7 @@ void hbDeregisterConn(STscObj *pTscObj, SClientHbKey connKey) { taosThreadMutexLock(&pTscObj->mutex); if (pTscObj->passInfo.fp) { - int32_t cnt = atomic_sub_fetch_32(&pAppHbMgr->passKeyCnt, 1); + atomic_sub_fetch_32(&pAppHbMgr->passKeyCnt, 1); } taosThreadMutexUnlock(&pTscObj->mutex); } \ No newline at end of file diff --git a/tests/system-test/1-insert/influxdb_line_taosc_insert.py b/tests/system-test/1-insert/influxdb_line_taosc_insert.py index 819951cce5..3e9bad4efd 100644 --- a/tests/system-test/1-insert/influxdb_line_taosc_insert.py +++ b/tests/system-test/1-insert/influxdb_line_taosc_insert.py @@ -896,7 +896,7 @@ class TDTestCase: tdSql.checkRows(2) tdSql.checkNotEqual(tb_name1, tb_name3) - # * tag binary max is 16384, col+ts binary max 65531 + # * tag binary max is 16384-2, col+ts binary max 65531 def tagColBinaryMaxLengthCheckCase(self): """ every binary and nchar must be length+2 @@ -960,7 +960,7 @@ class TDTestCase: tdSql.checkNotEqual(err.errno, 0) - # * tag nchar max is 16384/4, col+ts nchar max 65531 + # * tag nchar max is (16384-2)/4, col+ts nchar max 65531 def tagColNcharMaxLengthCheckCase(self): """ check nchar length limit @@ -971,7 +971,7 @@ class TDTestCase: input_sql = f'{stb_name},id="{tb_name}",t0=t c0=f 1626006833639000000' code = self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) - # * legal tag nchar could not be larger than 16384/4 + # * legal tag nchar could not be larger than (16384-2)/4 # input_sql = f'{stb_name},t0=t,t1=L"{tdCom.getLongName(4093, "letters")}",t2=L"{tdCom.getLongName(1, "letters")}" c0=f 1626006833639000000' # self._conn.schemaless_insert([input_sql], TDSmlProtocolType.LINE.value, TDSmlTimestampType.NANO_SECOND.value) # tdSql.query(f"select * from {stb_name}") From 72c6292ab47dc6b15799db750b742a0d5c843cd5 Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Sat, 6 May 2023 09:14:03 +0800 Subject: [PATCH 119/156] enh: declare mndSplitVgroup in mndVgroup.h --- source/dnode/mnode/impl/inc/mndVgroup.h | 2 ++ source/dnode/mnode/impl/src/mndVgroup.c | 2 +- 2 files changed, 3 insertions(+), 1 deletion(-) diff --git a/source/dnode/mnode/impl/inc/mndVgroup.h b/source/dnode/mnode/impl/inc/mndVgroup.h index 0229735952..94c4eae83f 100644 --- a/source/dnode/mnode/impl/inc/mndVgroup.h +++ b/source/dnode/mnode/impl/inc/mndVgroup.h @@ -50,6 +50,8 @@ void *mndBuildCreateVnodeReq(SMnode *, SDnodeObj *pDnode, SDbObj *pDb, SVgObj *p void *mndBuildDropVnodeReq(SMnode *, SDnodeObj *pDnode, SDbObj *pDb, SVgObj *pVgroup, int32_t *pContLen); bool mndVgroupInDb(SVgObj *pVgroup, int64_t dbUid); +int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj *pVgroup); + #ifdef __cplusplus } #endif diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index add2b1568c..7ece3b17ff 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -2103,7 +2103,7 @@ static int32_t mndAddAdjustVnodeHashRangeAction(SMnode *pMnode, STrans *pTrans, return 0; } -static int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj *pVgroup) { +int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj *pVgroup) { int32_t code = -1; STrans *pTrans = NULL; SSdbRaw *pRaw = NULL; From fd43736e926abe2a0c2531c370ec44b73a151a28 Mon Sep 17 00:00:00 2001 From: Alex Duan <51781608+DuanKuanJun@users.noreply.github.com> Date: Sat, 6 May 2023 10:48:31 +0800 Subject: [PATCH 120/156] Update 09-udf.md --- docs/zh/07-develop/09-udf.md | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/docs/zh/07-develop/09-udf.md b/docs/zh/07-develop/09-udf.md index 443dcb9806..99ecd903b4 100644 --- a/docs/zh/07-develop/09-udf.md +++ b/docs/zh/07-develop/09-udf.md @@ -217,7 +217,7 @@ gcc -g -O0 -fPIC -shared bit_and.c -o libbitand.so ### C UDF 示例代码 -#### 标量函数示例 [bit_and](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/bit_and.c) +#### 标量函数示例 [bit_and](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/bit_and.c) bit_add 实现多列的按位与功能。如果只有一列,返回这一列。bit_add 忽略空值。 @@ -230,7 +230,7 @@ bit_add 实现多列的按位与功能。如果只有一列,返回这一列。 -#### 聚合函数示例1 返回值为数值类型 [l2norm](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/l2norm.c) +#### 聚合函数示例1 返回值为数值类型 [l2norm](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/l2norm.c) l2norm 实现了输入列的所有数据的二阶范数,即对每个数据先平方,再累加求和,最后开方。 @@ -243,7 +243,7 @@ l2norm 实现了输入列的所有数据的二阶范数,即对每个数据先 -#### 聚合函数示例2 返回值为字符串类型 [max_vol](https://github.com/taosdata/TDengine/blob/develop/tests/script/sh/max_vol.c) +#### 聚合函数示例2 返回值为字符串类型 [max_vol](https://github.com/taosdata/TDengine/blob/3.0/tests/script/sh/max_vol.c) max_vol 实现了从多个输入的电压列中找到最大电压,返回由设备ID + 最大电压所在(行,列)+ 最大电压值 组成的组合字符串值 From 47791dfe2fd0cfbea9b93f5fc075e6c0dc1af4e3 Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Sat, 6 May 2023 11:02:20 +0800 Subject: [PATCH 121/156] fix: group by validation issue --- source/libs/parser/src/parTranslater.c | 16 ++++++++++++---- source/libs/parser/test/parSelectTest.cpp | 13 +++++++++++++ 2 files changed, 25 insertions(+), 4 deletions(-) diff --git a/source/libs/parser/src/parTranslater.c b/source/libs/parser/src/parTranslater.c index a2748c7e6d..1dcc18e5fb 100644 --- a/source/libs/parser/src/parTranslater.c +++ b/source/libs/parser/src/parTranslater.c @@ -2289,20 +2289,22 @@ static int32_t checkAggColCoexist(STranslateContext* pCxt, SSelectStmt* pSelect) return TSDB_CODE_SUCCESS; } -static int32_t checkPartitionGroupBy(STranslateContext* pCxt, SSelectStmt* pSelect) { +static int32_t checkHavingGroupBy(STranslateContext* pCxt, SSelectStmt* pSelect) { int32_t code = TSDB_CODE_SUCCESS; - if (NULL == pSelect->pGroupByList && NULL == pSelect->pPartitionByList && NULL == pSelect->pWindow) { + if (NULL == getGroupByList(pCxt) && NULL == pSelect->pPartitionByList && NULL == pSelect->pWindow) { return code; } if (NULL != pSelect->pHaving) { code = checkExprForGroupBy(pCxt, &pSelect->pHaving); } +/* if (TSDB_CODE_SUCCESS == code && NULL != pSelect->pProjectionList) { code = checkExprListForGroupBy(pCxt, pSelect, pSelect->pProjectionList); } if (TSDB_CODE_SUCCESS == code && NULL != pSelect->pOrderByList) { code = checkExprListForGroupBy(pCxt, pSelect, pSelect->pOrderByList); } +*/ return code; } @@ -2918,6 +2920,9 @@ static int32_t translateOrderBy(STranslateContext* pCxt, SSelectStmt* pSelect) { pCxt->currClause = SQL_CLAUSE_ORDER_BY; code = translateExprList(pCxt, pSelect->pOrderByList); } + if (TSDB_CODE_SUCCESS == code) { + code = checkExprListForGroupBy(pCxt, pSelect, pSelect->pOrderByList); + } return code; } @@ -3036,6 +3041,9 @@ static int32_t translateSelectList(STranslateContext* pCxt, SSelectStmt* pSelect if (TSDB_CODE_SUCCESS == code) { code = translateProjectionList(pCxt, pSelect); } + if (TSDB_CODE_SUCCESS == code) { + code = checkExprListForGroupBy(pCxt, pSelect, pSelect->pProjectionList); + } if (TSDB_CODE_SUCCESS == code) { code = translateFillValues(pCxt, pSelect); } @@ -3635,10 +3643,10 @@ static int32_t translateSelectFrom(STranslateContext* pCxt, SSelectStmt* pSelect code = translateSelectList(pCxt, pSelect); } if (TSDB_CODE_SUCCESS == code) { - code = translateOrderBy(pCxt, pSelect); + code = checkHavingGroupBy(pCxt, pSelect); } if (TSDB_CODE_SUCCESS == code) { - code = checkPartitionGroupBy(pCxt, pSelect); + code = translateOrderBy(pCxt, pSelect); } if (TSDB_CODE_SUCCESS == code) { code = checkAggColCoexist(pCxt, pSelect); diff --git a/source/libs/parser/test/parSelectTest.cpp b/source/libs/parser/test/parSelectTest.cpp index ec6c69ea8d..2d8ce55b72 100644 --- a/source/libs/parser/test/parSelectTest.cpp +++ b/source/libs/parser/test/parSelectTest.cpp @@ -239,6 +239,19 @@ TEST_F(ParserSelectTest, groupBySemanticCheck) { run("SELECT COUNT(*) cnt, c2 FROM t1 WHERE c1 > 0 GROUP BY c1", TSDB_CODE_PAR_GROUPBY_LACK_EXPRESSION); } +TEST_F(ParserSelectTest, havingCheck) { + useDb("root", "test"); + + run("select tbname,count(*) from st1 partition by tbname having c1>0", TSDB_CODE_PAR_INVALID_OPTR_USAGE); + + run("select tbname,count(*) from st1 group by tbname having c1>0", TSDB_CODE_PAR_GROUPBY_LACK_EXPRESSION); + + run("select max(c1) from st1 group by tbname having c1>0"); + + run("select max(c1) from st1 partition by tbname having c1>0"); +} + + TEST_F(ParserSelectTest, orderBy) { useDb("root", "test"); From fced6270215aa2ba29a936ef1faaed722e25ab5d Mon Sep 17 00:00:00 2001 From: Benguang Zhao Date: Fri, 5 May 2023 19:14:34 +0800 Subject: [PATCH 122/156] enh: add mndTransCommitVgStatus --- source/dnode/mnode/impl/src/mndVgroup.c | 30 ++++++++++++------------- 1 file changed, 15 insertions(+), 15 deletions(-) diff --git a/source/dnode/mnode/impl/src/mndVgroup.c b/source/dnode/mnode/impl/src/mndVgroup.c index e580c78383..83989a22bb 100644 --- a/source/dnode/mnode/impl/src/mndVgroup.c +++ b/source/dnode/mnode/impl/src/mndVgroup.c @@ -2103,6 +2103,18 @@ static int32_t mndAddAdjustVnodeHashRangeAction(SMnode *pMnode, STrans *pTrans, return 0; } +static int32_t mndTransCommitVgStatus(STrans *pTrans, SVgObj *pVg, ESdbStatus vgStatus) { + SSdbRaw *pRaw = mndVgroupActionEncode(pVg); + if (pRaw == NULL) goto _err; + if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _err; + (void)sdbSetRawStatus(pRaw, vgStatus); + pRaw = NULL; + return 0; +_err: + sdbFreeRaw(pRaw); + return -1; +} + int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj *pVgroup) { int32_t code = -1; STrans *pTrans = NULL; @@ -2181,28 +2193,16 @@ int32_t mndSplitVgroup(SMnode *pMnode, SRpcMsg *pReq, SDbObj *pDb, SVgObj *pVgro if (pDb->cfg.replications != newVg1.replica) { if (mndBuildAlterVgroupAction(pMnode, pTrans, pDb, pDb, &newVg1, pArray) != 0) goto _OVER; } else { - pRaw = mndVgroupActionEncode(&newVg1); - if (pRaw == NULL) goto _OVER; - if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER; - (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY); - pRaw = NULL; + if (mndTransCommitVgStatus(pTrans, &newVg1, SDB_STATUS_READY) < 0) goto _OVER; } if (pDb->cfg.replications != newVg2.replica) { if (mndBuildAlterVgroupAction(pMnode, pTrans, pDb, pDb, &newVg2, pArray) != 0) goto _OVER; } else { - pRaw = mndVgroupActionEncode(&newVg2); - if (pRaw == NULL) goto _OVER; - if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER; - (void)sdbSetRawStatus(pRaw, SDB_STATUS_READY); - pRaw = NULL; + if (mndTransCommitVgStatus(pTrans, &newVg2, SDB_STATUS_READY) < 0) goto _OVER; } - pRaw = mndVgroupActionEncode(pVgroup); - if (pRaw == NULL) goto _OVER; - if (mndTransAppendCommitlog(pTrans, pRaw) != 0) goto _OVER; - (void)sdbSetRawStatus(pRaw, SDB_STATUS_DROPPED); - pRaw = NULL; + if (mndTransCommitVgStatus(pTrans, pVgroup, SDB_STATUS_DROPPED) < 0) goto _OVER; memcpy(&dbObj, pDb, sizeof(SDbObj)); if (dbObj.cfg.pRetensions != NULL) { From 028bfd34bf4abd69885af45224c55e839ace47dd Mon Sep 17 00:00:00 2001 From: liuyao <38781207+54liuyao@users.noreply.github.com> Date: Sat, 6 May 2023 14:59:27 +0800 Subject: [PATCH 123/156] Update 14-stream.md --- docs/zh/12-taos-sql/14-stream.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/12-taos-sql/14-stream.md b/docs/zh/12-taos-sql/14-stream.md index d90db3cab0..634f50356d 100644 --- a/docs/zh/12-taos-sql/14-stream.md +++ b/docs/zh/12-taos-sql/14-stream.md @@ -227,7 +227,7 @@ T = 最新事件时间 - DELETE_MARK ## 流式计算支持的函数 1. 所有的 [单行函数](../function/#单行函数) 均可用于流计算。 -2. 以下 19 个聚合/选择函数 不能 应用在创建流计算的 SQL 语句,[系统信息函数](../function/#系统信息函数) 也不能用于流计算中。此外的其他类型的函数均可用于流计算。 +2. 以下 19 个聚合/选择函数 不能 应用在创建流计算的 SQL 语句。此外的其他类型的函数均可用于流计算。 - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) From e48fdceb7c1b64e6f6e0976a6168041409ae88a3 Mon Sep 17 00:00:00 2001 From: liuyao <38781207+54liuyao@users.noreply.github.com> Date: Sat, 6 May 2023 16:03:12 +0800 Subject: [PATCH 124/156] Update 14-stream.md --- docs/en/12-taos-sql/14-stream.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/12-taos-sql/14-stream.md b/docs/en/12-taos-sql/14-stream.md index d99c5bdae4..43c49c03cd 100644 --- a/docs/en/12-taos-sql/14-stream.md +++ b/docs/en/12-taos-sql/14-stream.md @@ -147,7 +147,7 @@ In both of these methods, configuring the watermark is essential for obtaining a ## Supported functions -All [scalar functions](../function/#scalar-functions) are available in stream processing. All [System information functions](../function/#system-information-functions) are not allowed in stream processing. All [Aggregate functions](../function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the followings: +All [scalar functions](../function/#scalar-functions) are available in stream processing. All [Aggregate functions](../function/#aggregate-functions) and [Selection functions](../function/#selection-functions) are available in stream processing, except the followings: - [leastsquares](../function/#leastsquares) - [percentile](../function/#percentile) - [top](../function/#top) From 1f25cc57cdbfebbc755adb35ed4c6d41ffb5af9c Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Sat, 6 May 2023 16:39:40 +0800 Subject: [PATCH 125/156] fix: projection merge issue --- source/libs/planner/src/planLogicCreater.c | 2 +- source/libs/planner/src/planOptimizer.c | 18 ++++++++++++++++-- 2 files changed, 17 insertions(+), 3 deletions(-) diff --git a/source/libs/planner/src/planLogicCreater.c b/source/libs/planner/src/planLogicCreater.c index 6544898be9..d1011cbf3a 100644 --- a/source/libs/planner/src/planLogicCreater.c +++ b/source/libs/planner/src/planLogicCreater.c @@ -1070,7 +1070,7 @@ static int32_t createProjectLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSel TSWAP(pProject->node.pLimit, pSelect->pLimit); TSWAP(pProject->node.pSlimit, pSelect->pSlimit); - pProject->ignoreGroupId = pSelect->isSubquery ? true : (NULL == pSelect->pPartitionByList); + pProject->ignoreGroupId = (pSelect->isSubquery && NULL == pProject->node.pLimit && NULL == pProject->node.pSlimit) ? true : (NULL == pSelect->pPartitionByList); pProject->node.groupAction = (!pSelect->isSubquery && pCxt->pPlanCxt->streamQuery) ? GROUP_ACTION_KEEP : GROUP_ACTION_CLEAR; pProject->node.requireDataOrder = DATA_ORDER_LEVEL_NONE; diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index 4f8b57de5f..5cb3591984 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -2351,6 +2351,17 @@ static EDealRes mergeProjectionsExpr(SNode** pNode, void* pContext) { return DEAL_RES_CONTINUE; } +static int32_t mergeProjectionsLogicNode(SLogicNode* pDstNode, SLogicNode* pSrcNode) { + SProjectLogicNode *pDstPro = (SProjectLogicNode*)pDstNode; + SProjectLogicNode *pSrcPro = (SProjectLogicNode*)pSrcNode; + + if (!pSrcPro->ignoreGroupId) { + pDstPro->ignoreGroupId = pSrcPro->ignoreGroupId; + } + + return TSDB_CODE_SUCCESS; +} + static int32_t mergeProjectsOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SLogicNode* pSelfNode) { SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pSelfNode->pChildren, 0); @@ -2360,8 +2371,11 @@ static int32_t mergeProjectsOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* if (TSDB_CODE_SUCCESS == code) { if (1 == LIST_LENGTH(pChild->pChildren)) { - SLogicNode* pGrandChild = (SLogicNode*)nodesListGetNode(pChild->pChildren, 0); - code = replaceLogicNode(pLogicSubplan, pChild, pGrandChild); + code = mergeProjectionsLogicNode(pSelfNode, pChild); + if (TSDB_CODE_SUCCESS == code) { + SLogicNode* pGrandChild = (SLogicNode*)nodesListGetNode(pChild->pChildren, 0); + code = replaceLogicNode(pLogicSubplan, pChild, pGrandChild); + } } else { // no grand child NODES_CLEAR_LIST(pSelfNode->pChildren); } From dca985214c3bc58311172793b40f379d76cfd50d Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Sat, 6 May 2023 18:44:15 +0800 Subject: [PATCH 126/156] cache/last: new api: merge last with cid --- source/dnode/vnode/src/tsdb/tsdbCache.c | 204 +++++++++++++++++++++++- 1 file changed, 200 insertions(+), 4 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 46f8d85d55..f8077d08eb 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -293,6 +293,9 @@ _exit: return code; } +static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SCacheRowsReader *pr, int16_t *aCols, + int nCols, int16_t *slotIds); + int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, int32_t ltype) { static char const *alstring[2] = {"last_row", "last"}; char const *lstring = alstring[ltype]; @@ -328,6 +331,7 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR taosMemoryFree(errs); for (int i = 0; i < num_keys; ++i) { + SArray *pTmpColArray = NULL; SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i]); int16_t cid = *(int16_t *)taosArrayGet(pCidList, i); SLastCol noneCol = {.ts = TSKEY_MIN, .colVal = COL_VAL_NONE(cid, pr->pSchema->columns[pr->pSlotIds[i]].type)}; @@ -343,18 +347,29 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR } else { // recalc: load from tsdb if (ltype) { - SArray *pArray = NULL; - // mergeLastCid(uid, pTsdb, &pArray, pr, cid); + int16_t aCols[1] = {cid}; + int16_t slotIds[1] = {pr->pSlotIds[i]}; + pTmpColArray = NULL; + + mergeLastCid(uid, pTsdb, &pTmpColArray, pr, aCols, 1, slotIds); + if (pTmpColArray && TARRAY_SIZE(pTmpColArray) >= 1) { + pLastCol = taosArrayGet(pTmpColArray, 0); + } } else { // mergeLastRowCid(uid, pTsdb, &pArray, pr, cid); } - // still null, then make up a null col value - pLastCol = &noneCol; + + // still null, then make up a none col value + if (!pLastCol) { + pLastCol = &noneCol; + } // maybe store it back to rocks cache } + taosArrayPush(pLastArray, pLastCol); + taosArrayDestroy(pTmpColArray); taosMemoryFree(values_list[i]); } taosMemoryFree(values_list); @@ -1831,6 +1846,21 @@ static int32_t initLastColArray(STSchema *pTSchema, SArray **ppColArray) { return TSDB_CODE_SUCCESS; } +static int32_t initLastColArrayPartial(STSchema *pTSchema, SArray **ppColArray, int16_t *slotIds, int nCols) { + SArray *pColArray = taosArrayInit(nCols, sizeof(SLastCol)); + if (NULL == pColArray) { + return TSDB_CODE_OUT_OF_MEMORY; + } + + for (int32_t i = 0; i < nCols; ++i) { + int16_t slotId = slotIds[i]; + SLastCol col = {.ts = 0, .colVal = COL_VAL_NULL(pTSchema->columns[slotId].colId, pTSchema->columns[slotId].type)}; + taosArrayPush(pColArray, &col); + } + *ppColArray = pColArray; + return TSDB_CODE_SUCCESS; +} + static int32_t cloneTSchema(STSchema *pSrc, STSchema **ppDst) { int32_t len = sizeof(STSchema) + sizeof(STColumn) * pSrc->numOfCols; *ppDst = taosMemoryMalloc(len); @@ -2169,6 +2199,172 @@ _err: return code; } +static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SCacheRowsReader *pr, int16_t *aCols, + int nCols, int16_t *slotIds) { + STSchema *pTSchema = pr->pSchema; // metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1, 1); + int16_t nLastCol = nCols; + int16_t noneCol = 0; + bool setNoneCol = false; + bool hasRow = false; + bool ignoreEarlierTs = false; + SArray *pColArray = NULL; + SColVal *pColVal = &(SColVal){0}; + + int32_t code = initLastColArrayPartial(pTSchema, &pColArray, slotIds, nCols); + if (TSDB_CODE_SUCCESS != code) { + return code; + } + SArray *aColArray = taosArrayInit(nCols, sizeof(int16_t)); + if (NULL == aColArray) { + taosArrayDestroy(pColArray); + + return TSDB_CODE_OUT_OF_MEMORY; + } + + for (int i = 0; i < nCols; ++i) { + taosArrayPush(aColArray, &aCols[i]); + } + + TSKEY lastRowTs = TSKEY_MAX; + + CacheNextRowIter iter = {0}; + nextRowIterOpen(&iter, uid, pTsdb, pTSchema, pr->suid, pr->pLoadInfo, pr->pReadSnap, &pr->pDataFReader, + &pr->pDataFReaderLast, pr->lastTs); + + do { + TSDBROW *pRow = NULL; + nextRowIterGet(&iter, &pRow, &ignoreEarlierTs, true, TARRAY_DATA(aColArray), TARRAY_SIZE(aColArray)); + + if (!pRow) { + break; + } + + hasRow = true; + + int32_t sversion = TSDBROW_SVERSION(pRow); + if (sversion != -1) { + code = updateTSchema(sversion, pr, uid); + if (TSDB_CODE_SUCCESS != code) { + goto _err; + } + pTSchema = pr->pCurrSchema; + } + // int16_t nCol = pTSchema->numOfCols; + + TSKEY rowTs = TSDBROW_TS(pRow); + + if (lastRowTs == TSKEY_MAX) { + lastRowTs = rowTs; + + for (int16_t iCol = 0; iCol < nCols; ++iCol) { + if (iCol >= nLastCol) { + break; + } + SLastCol *pCol = taosArrayGet(pColArray, iCol); + if (pCol->colVal.cid != pTSchema->columns[slotIds[iCol]].colId) { + continue; + } + tsdbRowGetColVal(pRow, pTSchema, slotIds[iCol], pColVal); + + *pCol = (SLastCol){.ts = lastRowTs, .colVal = *pColVal}; + if (IS_VAR_DATA_TYPE(pColVal->type) && pColVal->value.nData > 0) { + pCol->colVal.value.pData = taosMemoryMalloc(pCol->colVal.value.nData); + if (pCol->colVal.value.pData == NULL) { + terrno = TSDB_CODE_OUT_OF_MEMORY; + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + memcpy(pCol->colVal.value.pData, pColVal->value.pData, pColVal->value.nData); + } + + if (!COL_VAL_IS_VALUE(pColVal)) { + if (!setNoneCol) { + noneCol = iCol; + setNoneCol = true; + } + } else { + int32_t aColIndex = taosArraySearchIdx(aColArray, &pColVal->cid, compareInt16Val, TD_EQ); + if (aColIndex >= 0) { + taosArrayRemove(aColArray, aColIndex); + } + } + } + if (!setNoneCol) { + // done, goto return pColArray + break; + } else { + continue; + } + } + + // merge into pColArray + setNoneCol = false; + for (int16_t iCol = noneCol; iCol < nCols; ++iCol) { + if (iCol >= nLastCol) { + break; + } + // high version's column value + SLastCol *lastColVal = (SLastCol *)taosArrayGet(pColArray, iCol); + if (lastColVal->colVal.cid != pTSchema->columns[slotIds[iCol]].colId) { + continue; + } + SColVal *tColVal = &lastColVal->colVal; + + tsdbRowGetColVal(pRow, pTSchema, slotIds[iCol], pColVal); + if (!COL_VAL_IS_VALUE(tColVal) && COL_VAL_IS_VALUE(pColVal)) { + SLastCol lastCol = {.ts = rowTs, .colVal = *pColVal}; + if (IS_VAR_DATA_TYPE(pColVal->type) && pColVal->value.nData > 0) { + SLastCol *pLastCol = (SLastCol *)taosArrayGet(pColArray, iCol); + taosMemoryFree(pLastCol->colVal.value.pData); + + lastCol.colVal.value.pData = taosMemoryMalloc(lastCol.colVal.value.nData); + if (lastCol.colVal.value.pData == NULL) { + terrno = TSDB_CODE_OUT_OF_MEMORY; + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + memcpy(lastCol.colVal.value.pData, pColVal->value.pData, pColVal->value.nData); + } + + taosArraySet(pColArray, iCol, &lastCol); + int32_t aColIndex = taosArraySearchIdx(aColArray, &lastCol.colVal.cid, compareInt16Val, TD_EQ); + taosArrayRemove(aColArray, aColIndex); + } else if (!COL_VAL_IS_VALUE(tColVal) && !COL_VAL_IS_VALUE(pColVal) && !setNoneCol) { + noneCol = iCol; + setNoneCol = true; + } + } + } while (setNoneCol); + + // if (taosArrayGetSize(pColArray) <= 0) { + //*ppLastArray = NULL; + // taosArrayDestroy(pColArray); + //} else { + if (!hasRow) { + if (ignoreEarlierTs) { + taosArrayDestroy(pColArray); + pColArray = NULL; + } else { + taosArrayClear(pColArray); + } + } + *ppLastArray = pColArray; + //} + + nextRowIterClose(&iter); + taosArrayDestroy(aColArray); + // taosMemoryFreeClear(pTSchema); + return code; + +_err: + nextRowIterClose(&iter); + // taosMemoryFreeClear(pTSchema); + *ppLastArray = NULL; + taosArrayDestroy(pColArray); + taosArrayDestroy(aColArray); + return code; +} + int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, SCacheRowsReader *pr, LRUHandle **handle) { int32_t code = 0; char key[32] = {0}; From 758bcb901a1209f1638c641b451f88e19c61df63 Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Sat, 6 May 2023 22:55:44 +0800 Subject: [PATCH 127/156] docs: fix example tmp (#21197) --- docs/examples/rust/nativeexample/examples/subscribe_demo.rs | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/examples/rust/nativeexample/examples/subscribe_demo.rs b/docs/examples/rust/nativeexample/examples/subscribe_demo.rs index 7551ad46b1..d54bb60e93 100644 --- a/docs/examples/rust/nativeexample/examples/subscribe_demo.rs +++ b/docs/examples/rust/nativeexample/examples/subscribe_demo.rs @@ -45,7 +45,7 @@ async fn main() -> anyhow::Result<()> { taos.exec_many([ format!("DROP TOPIC IF EXISTS tmq_meters"), format!("DROP DATABASE IF EXISTS `{db}`"), - format!("CREATE DATABASE `{db}`"), + format!("CREATE DATABASE `{db}` WAL_RETENTION_PERIOD 3600"), format!("USE `{db}`"), // create super table format!("CREATE TABLE `meters` (`ts` TIMESTAMP, `current` FLOAT, `voltage` INT, `phase` FLOAT) TAGS (`groupid` INT, `location` BINARY(24))"), From 1526eb2923c7e158b5dbe9ec56efd3e3d4415436 Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Mon, 8 May 2023 10:28:07 +0800 Subject: [PATCH 128/156] fix: projection group merge issue --- source/libs/planner/src/planLogicCreater.c | 2 +- source/libs/planner/src/planOptimizer.c | 18 ++---------------- source/libs/planner/src/planSpliter.c | 21 +++++++++++++++++++++ 3 files changed, 24 insertions(+), 17 deletions(-) diff --git a/source/libs/planner/src/planLogicCreater.c b/source/libs/planner/src/planLogicCreater.c index d1011cbf3a..6544898be9 100644 --- a/source/libs/planner/src/planLogicCreater.c +++ b/source/libs/planner/src/planLogicCreater.c @@ -1070,7 +1070,7 @@ static int32_t createProjectLogicNode(SLogicPlanContext* pCxt, SSelectStmt* pSel TSWAP(pProject->node.pLimit, pSelect->pLimit); TSWAP(pProject->node.pSlimit, pSelect->pSlimit); - pProject->ignoreGroupId = (pSelect->isSubquery && NULL == pProject->node.pLimit && NULL == pProject->node.pSlimit) ? true : (NULL == pSelect->pPartitionByList); + pProject->ignoreGroupId = pSelect->isSubquery ? true : (NULL == pSelect->pPartitionByList); pProject->node.groupAction = (!pSelect->isSubquery && pCxt->pPlanCxt->streamQuery) ? GROUP_ACTION_KEEP : GROUP_ACTION_CLEAR; pProject->node.requireDataOrder = DATA_ORDER_LEVEL_NONE; diff --git a/source/libs/planner/src/planOptimizer.c b/source/libs/planner/src/planOptimizer.c index 5cb3591984..4f8b57de5f 100644 --- a/source/libs/planner/src/planOptimizer.c +++ b/source/libs/planner/src/planOptimizer.c @@ -2351,17 +2351,6 @@ static EDealRes mergeProjectionsExpr(SNode** pNode, void* pContext) { return DEAL_RES_CONTINUE; } -static int32_t mergeProjectionsLogicNode(SLogicNode* pDstNode, SLogicNode* pSrcNode) { - SProjectLogicNode *pDstPro = (SProjectLogicNode*)pDstNode; - SProjectLogicNode *pSrcPro = (SProjectLogicNode*)pSrcNode; - - if (!pSrcPro->ignoreGroupId) { - pDstPro->ignoreGroupId = pSrcPro->ignoreGroupId; - } - - return TSDB_CODE_SUCCESS; -} - static int32_t mergeProjectsOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* pLogicSubplan, SLogicNode* pSelfNode) { SLogicNode* pChild = (SLogicNode*)nodesListGetNode(pSelfNode->pChildren, 0); @@ -2371,11 +2360,8 @@ static int32_t mergeProjectsOptimizeImpl(SOptimizeContext* pCxt, SLogicSubplan* if (TSDB_CODE_SUCCESS == code) { if (1 == LIST_LENGTH(pChild->pChildren)) { - code = mergeProjectionsLogicNode(pSelfNode, pChild); - if (TSDB_CODE_SUCCESS == code) { - SLogicNode* pGrandChild = (SLogicNode*)nodesListGetNode(pChild->pChildren, 0); - code = replaceLogicNode(pLogicSubplan, pChild, pGrandChild); - } + SLogicNode* pGrandChild = (SLogicNode*)nodesListGetNode(pChild->pChildren, 0); + code = replaceLogicNode(pLogicSubplan, pChild, pGrandChild); } else { // no grand child NODES_CLEAR_LIST(pSelfNode->pChildren); } diff --git a/source/libs/planner/src/planSpliter.c b/source/libs/planner/src/planSpliter.c index fd77261818..eed59c8236 100644 --- a/source/libs/planner/src/planSpliter.c +++ b/source/libs/planner/src/planSpliter.c @@ -532,6 +532,24 @@ static int32_t stbSplGetNumOfVgroups(SLogicNode* pNode) { return 0; } +static int32_t stbSplRewriteFromMergeNode(SMergeLogicNode* pMerge, SLogicNode* pNode) { + int32_t code = TSDB_CODE_SUCCESS; + + switch (nodeType(pNode)) { + case QUERY_NODE_LOGIC_PLAN_PROJECT: { + SProjectLogicNode *pLogicNode = (SProjectLogicNode*)pNode; + if (pMerge->node.pLimit || pMerge->node.pSlimit) { + pLogicNode->ignoreGroupId = false; + } + break; + } + default: + break; + } + + return code; +} + static int32_t stbSplCreateMergeNode(SSplitContext* pCxt, SLogicSubplan* pSubplan, SLogicNode* pSplitNode, SNodeList* pMergeKeys, SLogicNode* pPartChild, bool groupSort) { SMergeLogicNode* pMerge = (SMergeLogicNode*)nodesMakeNode(QUERY_NODE_LOGIC_PLAN_MERGE); @@ -563,6 +581,9 @@ static int32_t stbSplCreateMergeNode(SSplitContext* pCxt, SLogicSubplan* pSubpla ((SLimitNode*)pSplitNode->pLimit)->limit += ((SLimitNode*)pSplitNode->pLimit)->offset; ((SLimitNode*)pSplitNode->pLimit)->offset = 0; } + if (TSDB_CODE_SUCCESS == code) { + code = stbSplRewriteFromMergeNode(pMerge, pSplitNode); + } if (TSDB_CODE_SUCCESS == code) { if (NULL == pSubplan) { code = nodesListMakeAppend(&pSplitNode->pChildren, (SNode*)pMerge); From 198ce399d21b477f8a4309a075903735b7222664 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 8 May 2023 10:59:56 +0800 Subject: [PATCH 129/156] cache/get: store recalced column value back into rocks --- source/dnode/vnode/src/tsdb/tsdbCache.c | 31 +++++++++++++++++++------ 1 file changed, 24 insertions(+), 7 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index f8077d08eb..d5c3b2a789 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -365,6 +365,21 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR } // maybe store it back to rocks cache + rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch; + char *value = NULL; + size_t vlen = 0; + tsdbCacheSerialize(pLastCol, &value, &vlen); + char key[ROCKS_KEY_LEN]; + size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, pLastCol->colVal.cid); + rocksdb_writebatch_put(wb, key, klen, value, vlen); + char *err = NULL; + rocksdb_write(pTsdb->rCache.db, pTsdb->rCache.writeoptions, wb, &err); + if (NULL != err) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); + rocksdb_free(err); + } + + taosMemoryFree(value); } taosArrayPush(pLastArray, pLastCol); @@ -2256,7 +2271,7 @@ static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SC if (lastRowTs == TSKEY_MAX) { lastRowTs = rowTs; - for (int16_t iCol = 0; iCol < nCols; ++iCol) { + for (int16_t iCol = noneCol; iCol < nCols; ++iCol) { if (iCol >= nLastCol) { break; } @@ -2264,6 +2279,13 @@ static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SC if (pCol->colVal.cid != pTSchema->columns[slotIds[iCol]].colId) { continue; } + if (slotIds[iCol] == 0) { + STColumn *pTColumn = &pTSchema->columns[0]; + + *pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.val = lastRowTs}); + taosArraySet(pColArray, 0, &(SLastCol){.ts = lastRowTs, .colVal = *pColVal}); + continue; + } tsdbRowGetColVal(pRow, pTSchema, slotIds[iCol], pColVal); *pCol = (SLastCol){.ts = lastRowTs, .colVal = *pColVal}; @@ -2336,10 +2358,6 @@ static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SC } } while (setNoneCol); - // if (taosArrayGetSize(pColArray) <= 0) { - //*ppLastArray = NULL; - // taosArrayDestroy(pColArray); - //} else { if (!hasRow) { if (ignoreEarlierTs) { taosArrayDestroy(pColArray); @@ -2349,11 +2367,10 @@ static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SC } } *ppLastArray = pColArray; - //} nextRowIterClose(&iter); taosArrayDestroy(aColArray); - // taosMemoryFreeClear(pTSchema); + return code; _err: From 2eec3957202c4a804798114323e19be42f8a7f35 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 8 May 2023 11:14:59 +0800 Subject: [PATCH 130/156] cache/commit: flush cache when tsdb commits data --- source/dnode/vnode/src/inc/tsdb.h | 1 + source/dnode/vnode/src/inc/vnodeInt.h | 5 +++-- source/dnode/vnode/src/tsdb/tsdbCache.c | 22 ++++++++++++++++++++++ source/dnode/vnode/src/vnd/vnodeCommit.c | 7 +++++-- 4 files changed, 31 insertions(+), 4 deletions(-) diff --git a/source/dnode/vnode/src/inc/tsdb.h b/source/dnode/vnode/src/inc/tsdb.h index c704472dff..95bce32196 100644 --- a/source/dnode/vnode/src/inc/tsdb.h +++ b/source/dnode/vnode/src/inc/tsdb.h @@ -346,6 +346,7 @@ struct STsdbFS { typedef struct { rocksdb_t *db; rocksdb_options_t *options; + rocksdb_flushoptions_t *flushoptions; rocksdb_writeoptions_t *writeoptions; rocksdb_readoptions_t *readoptions; rocksdb_writebatch_t *writebatch; diff --git a/source/dnode/vnode/src/inc/vnodeInt.h b/source/dnode/vnode/src/inc/vnodeInt.h index 69eacfa46e..94e5f253bf 100644 --- a/source/dnode/vnode/src/inc/vnodeInt.h +++ b/source/dnode/vnode/src/inc/vnodeInt.h @@ -178,6 +178,7 @@ int tsdbClose(STsdb** pTsdb); int32_t tsdbBegin(STsdb* pTsdb); int32_t tsdbPrepareCommit(STsdb* pTsdb); int32_t tsdbCommit(STsdb* pTsdb, SCommitInfo* pInfo); +int32_t tsdbCacheCommit(STsdb* pTsdb); int32_t tsdbCompact(STsdb* pTsdb, SCompactInfo* pInfo); int32_t tsdbFinishCommit(STsdb* pTsdb); int32_t tsdbRollbackCommit(STsdb* pTsdb); @@ -194,9 +195,9 @@ STQ* tqOpen(const char* path, SVnode* pVnode); void tqClose(STQ*); int tqPushMsg(STQ*, void* msg, int32_t msgLen, tmsg_t msgType, int64_t ver); int tqRegisterPushHandle(STQ* pTq, void* pHandle, const SMqPollReq* pRequest, SRpcMsg* pRpcMsg, SMqDataRsp* pDataRsp, - int32_t type); + int32_t type); int tqUnregisterPushHandle(STQ* pTq, const char* pKey, int32_t keyLen, uint64_t consumerId, bool rspConsumer); -int tqStartStreamTasks(STQ* pTq); // restore all stream tasks after vnode launching completed. +int tqStartStreamTasks(STQ* pTq); // restore all stream tasks after vnode launching completed. int tqCommit(STQ*); int32_t tqUpdateTbUidList(STQ* pTq, const SArray* tbUidList, bool isAdd); diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index d5c3b2a789..17aed62241 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -98,12 +98,19 @@ static int32_t tsdbOpenRocksCache(STsdb *pTsdb) { goto _err3; } + rocksdb_flushoptions_t *flushoptions = rocksdb_flushoptions_create(); + if (NULL == flushoptions) { + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err4; + } + rocksdb_writebatch_t *writebatch = rocksdb_writebatch_create(); pTsdb->rCache.writebatch = writebatch; pTsdb->rCache.options = options; pTsdb->rCache.writeoptions = writeoptions; pTsdb->rCache.readoptions = readoptions; + pTsdb->rCache.flushoptions = flushoptions; pTsdb->rCache.db = db; taosThreadMutexInit(&pTsdb->rCache.rMutex, NULL); @@ -122,6 +129,7 @@ _err: static void tsdbCloseRocksCache(STsdb *pTsdb) { rocksdb_close(pTsdb->rCache.db); + rocksdb_flushoptions_destroy(pTsdb->rCache.flushoptions); rocksdb_writebatch_destroy(pTsdb->rCache.writebatch); rocksdb_readoptions_destroy(pTsdb->rCache.readoptions); rocksdb_writeoptions_destroy(pTsdb->rCache.writeoptions); @@ -129,6 +137,20 @@ static void tsdbCloseRocksCache(STsdb *pTsdb) { taosThreadMutexDestroy(&pTsdb->rCache.rMutex); } +int32_t tsdbCacheCommit(STsdb *pTsdb) { + int32_t code = 0; + char *err = NULL; + + rocksdb_flush(pTsdb->rCache.db, pTsdb->rCache.flushoptions, &err); + if (NULL != err) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); + rocksdb_free(err); + code = -1; + } + + return code; +} + SLastCol *tsdbCacheDeserialize(char const *value) { if (!value) { return NULL; diff --git a/source/dnode/vnode/src/vnd/vnodeCommit.c b/source/dnode/vnode/src/vnd/vnodeCommit.c index 847125018c..74168591d2 100644 --- a/source/dnode/vnode/src/vnd/vnodeCommit.c +++ b/source/dnode/vnode/src/vnd/vnodeCommit.c @@ -144,8 +144,8 @@ _exit: } int vnodeShouldCommit(SVnode *pVnode, bool atExit) { - bool diskAvail = osDataSpaceAvailable(); - bool needCommit = false; + bool diskAvail = osDataSpaceAvailable(); + bool needCommit = false; taosThreadMutexLock(&pVnode->mutex); if (pVnode->inUse && diskAvail) { @@ -439,6 +439,9 @@ static int vnodeCommitImpl(SCommitInfo *pInfo) { code = tsdbCommit(pVnode->pTsdb, pInfo); TSDB_CHECK_CODE(code, lino, _exit); + code = tsdbCacheCommit(pVnode->pTsdb); + TSDB_CHECK_CODE(code, lino, _exit); + if (VND_IS_RSMA(pVnode)) { code = smaCommit(pVnode->pSma, pInfo); TSDB_CHECK_CODE(code, lino, _exit); From 5349fac9106b385506b37a6366dbf95f27b45a4b Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 8 May 2023 11:33:01 +0800 Subject: [PATCH 131/156] cache/last: use rowTs instead of lastRowTs --- source/dnode/vnode/src/tsdb/tsdbCache.c | 6 +++--- 1 file changed, 3 insertions(+), 3 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 17aed62241..5a1ddf811d 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -2304,13 +2304,13 @@ static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SC if (slotIds[iCol] == 0) { STColumn *pTColumn = &pTSchema->columns[0]; - *pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.val = lastRowTs}); - taosArraySet(pColArray, 0, &(SLastCol){.ts = lastRowTs, .colVal = *pColVal}); + *pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.val = rowTs}); + taosArraySet(pColArray, 0, &(SLastCol){.ts = rowTs, .colVal = *pColVal}); continue; } tsdbRowGetColVal(pRow, pTSchema, slotIds[iCol], pColVal); - *pCol = (SLastCol){.ts = lastRowTs, .colVal = *pColVal}; + *pCol = (SLastCol){.ts = rowTs, .colVal = *pColVal}; if (IS_VAR_DATA_TYPE(pColVal->type) && pColVal->value.nData > 0) { pCol->colVal.value.pData = taosMemoryMalloc(pCol->colVal.value.nData); if (pCol->colVal.value.pData == NULL) { From a99db2e39a2d48f0339c0a6a1551f8ea8942404f Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 8 May 2023 14:22:41 +0800 Subject: [PATCH 132/156] fix: memory leak --- source/client/src/clientEnv.c | 7 +++---- source/client/src/clientMain.c | 2 +- 2 files changed, 4 insertions(+), 5 deletions(-) diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index c8f3feb2d4..500328ba79 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -388,11 +388,10 @@ void doDestroyRequest(void *p) { deregisterRequest(pRequest); } - if (pRequest->syncQuery) { - if (pRequest->body.param) { - tsem_destroy(&((SSyncQueryParam *)pRequest->body.param)->sem); - } + if (pRequest->body.param) { + tsem_destroy(&((SSyncQueryParam *)pRequest->body.param)->sem); taosMemoryFree(pRequest->body.param); + pRequest->body.param = NULL; } qDestroyQuery(pRequest->pQuery); diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c index 55465f227e..398186603f 100644 --- a/source/client/src/clientMain.c +++ b/source/client/src/clientMain.c @@ -1368,7 +1368,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) { tsem_wait(&pParam->sem); _return: - taosArrayDestroy(catalogReq.pTableMeta); + taosArrayDestroyEx(catalogReq.pTableMeta, destoryTablesReq); destroyRequest(pRequest); return code; } From b156fada258b560038f7d66962aed4d0d3575b5b Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 8 May 2023 14:39:03 +0800 Subject: [PATCH 133/156] fix: memory leak --- source/client/src/clientEnv.c | 7 +++---- source/client/src/clientMain.c | 2 +- 2 files changed, 4 insertions(+), 5 deletions(-) diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index c8f3feb2d4..500328ba79 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -388,11 +388,10 @@ void doDestroyRequest(void *p) { deregisterRequest(pRequest); } - if (pRequest->syncQuery) { - if (pRequest->body.param) { - tsem_destroy(&((SSyncQueryParam *)pRequest->body.param)->sem); - } + if (pRequest->body.param) { + tsem_destroy(&((SSyncQueryParam *)pRequest->body.param)->sem); taosMemoryFree(pRequest->body.param); + pRequest->body.param = NULL; } qDestroyQuery(pRequest->pQuery); diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c index 55465f227e..398186603f 100644 --- a/source/client/src/clientMain.c +++ b/source/client/src/clientMain.c @@ -1368,7 +1368,7 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) { tsem_wait(&pParam->sem); _return: - taosArrayDestroy(catalogReq.pTableMeta); + taosArrayDestroyEx(catalogReq.pTableMeta, destoryTablesReq); destroyRequest(pRequest); return code; } From a049ae6f9d969a555642fba0adc33b60bf6d5450 Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 8 May 2023 14:59:56 +0800 Subject: [PATCH 134/156] chore: revert --- source/client/src/clientEnv.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index 500328ba79..c8f3feb2d4 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -388,10 +388,11 @@ void doDestroyRequest(void *p) { deregisterRequest(pRequest); } - if (pRequest->body.param) { - tsem_destroy(&((SSyncQueryParam *)pRequest->body.param)->sem); + if (pRequest->syncQuery) { + if (pRequest->body.param) { + tsem_destroy(&((SSyncQueryParam *)pRequest->body.param)->sem); + } taosMemoryFree(pRequest->body.param); - pRequest->body.param = NULL; } qDestroyQuery(pRequest->pQuery); From 49ad732517f917e70f8e8f57116a8c6cd4ae9c22 Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 8 May 2023 15:00:40 +0800 Subject: [PATCH 135/156] chore: revert --- source/client/src/clientEnv.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/source/client/src/clientEnv.c b/source/client/src/clientEnv.c index 500328ba79..c8f3feb2d4 100644 --- a/source/client/src/clientEnv.c +++ b/source/client/src/clientEnv.c @@ -388,10 +388,11 @@ void doDestroyRequest(void *p) { deregisterRequest(pRequest); } - if (pRequest->body.param) { - tsem_destroy(&((SSyncQueryParam *)pRequest->body.param)->sem); + if (pRequest->syncQuery) { + if (pRequest->body.param) { + tsem_destroy(&((SSyncQueryParam *)pRequest->body.param)->sem); + } taosMemoryFree(pRequest->body.param); - pRequest->body.param = NULL; } qDestroyQuery(pRequest->pQuery); From 8bd50809c95b4c6be5f9ec75239d3093d2e2c3ca Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 8 May 2023 15:47:10 +0800 Subject: [PATCH 136/156] chore: more code --- source/client/src/clientMain.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c index 398186603f..243a84ca93 100644 --- a/source/client/src/clientMain.c +++ b/source/client/src/clientMain.c @@ -1342,6 +1342,8 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) { goto _return; } + pRequest->syncQuery = true; + STscObj *pTscObj = pRequest->pTscObj; code = transferTableNameList(tableNameList, pTscObj->acctId, pTscObj->db, &catalogReq.pTableMeta); if (code) { From 29adb77d94b7b10bf4217e80c39ec0f719f03764 Mon Sep 17 00:00:00 2001 From: kailixu Date: Mon, 8 May 2023 15:47:48 +0800 Subject: [PATCH 137/156] chore: more code --- source/client/src/clientMain.c | 2 ++ 1 file changed, 2 insertions(+) diff --git a/source/client/src/clientMain.c b/source/client/src/clientMain.c index 398186603f..243a84ca93 100644 --- a/source/client/src/clientMain.c +++ b/source/client/src/clientMain.c @@ -1342,6 +1342,8 @@ int taos_load_table_info(TAOS *taos, const char *tableNameList) { goto _return; } + pRequest->syncQuery = true; + STscObj *pTscObj = pRequest->pTscObj; code = transferTableNameList(tableNameList, pTscObj->acctId, pTscObj->db, &catalogReq.pTableMeta); if (code) { From e0f0536d2dc7377e74764c096f639f7757d2c507 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 8 May 2023 15:56:54 +0800 Subject: [PATCH 138/156] cache/serialize: nullize pData if zero-length --- source/dnode/vnode/src/tsdb/tsdbCache.c | 10 ++++++++-- source/dnode/vnode/src/tsdb/tsdbCacheRead.c | 2 +- 2 files changed, 9 insertions(+), 3 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 5a1ddf811d..1360a272cc 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -159,7 +159,11 @@ SLastCol *tsdbCacheDeserialize(char const *value) { SLastCol *pLastCol = (SLastCol *)value; SColVal *pColVal = &pLastCol->colVal; if (IS_VAR_DATA_TYPE(pColVal->type)) { - pColVal->value.pData = (char *)value + sizeof(*pLastCol); + if (pColVal->value.nData > 0) { + pColVal->value.pData = (char *)value + sizeof(*pLastCol); + } else { + pColVal->value.pData = NULL; + } } return pLastCol; @@ -177,8 +181,10 @@ void tsdbCacheSerialize(SLastCol *pLastCol, char **value, size_t *size) { if (IS_VAR_DATA_TYPE(pColVal->type)) { uint8_t *pVal = pColVal->value.pData; pColVal->value.pData = *value + sizeof(*pLastCol); - if (pColVal->value.nData) { + if (pColVal->value.nData > 0) { memcpy(pColVal->value.pData, pVal, pColVal->value.nData); + } else { + pColVal->value.pData = NULL; } } *size = length; diff --git a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c index 0e03e6e5fb..0d47940582 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCacheRead.c +++ b/source/dnode/vnode/src/tsdb/tsdbCacheRead.c @@ -208,7 +208,7 @@ void* tsdbCacherowsReaderClose(void* pReader) { static void freeItem(void* pItem) { SLastCol* pCol = (SLastCol*)pItem; - if (IS_VAR_DATA_TYPE(pCol->colVal.type)) { + if (IS_VAR_DATA_TYPE(pCol->colVal.type) && pCol->colVal.value.pData) { taosMemoryFree(pCol->colVal.value.pData); } } From 9b275f7421221e006f2865e4ac9144703503b8a4 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Mon, 8 May 2023 17:33:35 +0800 Subject: [PATCH 139/156] cache/last_row cid: first round implementation for last_row with cid --- source/dnode/vnode/src/tsdb/tsdbCache.c | 190 ++++++++++++++++++++++-- 1 file changed, 181 insertions(+), 9 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 1360a272cc..63b0084849 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -324,6 +324,9 @@ _exit: static int32_t mergeLastCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SCacheRowsReader *pr, int16_t *aCols, int nCols, int16_t *slotIds); +static int32_t mergeLastRowCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SCacheRowsReader *pr, int16_t *aCols, + int nCols, int16_t *slotIds); + int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsReader *pr, int32_t ltype) { static char const *alstring[2] = {"last_row", "last"}; char const *lstring = alstring[ltype]; @@ -374,17 +377,18 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR } } else { // recalc: load from tsdb - if (ltype) { - int16_t aCols[1] = {cid}; - int16_t slotIds[1] = {pr->pSlotIds[i]}; - pTmpColArray = NULL; + int16_t aCols[1] = {cid}; + int16_t slotIds[1] = {pr->pSlotIds[i]}; + pTmpColArray = NULL; + if (ltype) { mergeLastCid(uid, pTsdb, &pTmpColArray, pr, aCols, 1, slotIds); - if (pTmpColArray && TARRAY_SIZE(pTmpColArray) >= 1) { - pLastCol = taosArrayGet(pTmpColArray, 0); - } } else { - // mergeLastRowCid(uid, pTsdb, &pArray, pr, cid); + mergeLastRowCid(uid, pTsdb, &pTmpColArray, pr, aCols, 1, slotIds); + } + + if (pTmpColArray && TARRAY_SIZE(pTmpColArray) >= 1) { + pLastCol = taosArrayGet(pTmpColArray, 0); } // still null, then make up a none col value @@ -392,7 +396,7 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR pLastCol = &noneCol; } - // maybe store it back to rocks cache + // store result back to rocks cache rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch; char *value = NULL; size_t vlen = 0; @@ -2410,6 +2414,174 @@ _err: return code; } +static int32_t mergeLastRowCid(tb_uid_t uid, STsdb *pTsdb, SArray **ppLastArray, SCacheRowsReader *pr, int16_t *aCols, + int nCols, int16_t *slotIds) { + STSchema *pTSchema = pr->pSchema; // metaGetTbTSchema(pTsdb->pVnode->pMeta, uid, -1, 1); + int16_t nLastCol = nCols; + int16_t noneCol = 0; + bool setNoneCol = false; + bool hasRow = false; + bool ignoreEarlierTs = false; + SArray *pColArray = NULL; + SColVal *pColVal = &(SColVal){0}; + + int32_t code = initLastColArrayPartial(pTSchema, &pColArray, slotIds, nCols); + if (TSDB_CODE_SUCCESS != code) { + return code; + } + SArray *aColArray = taosArrayInit(nCols, sizeof(int16_t)); + if (NULL == aColArray) { + taosArrayDestroy(pColArray); + + return TSDB_CODE_OUT_OF_MEMORY; + } + + for (int i = 0; i < nCols; ++i) { + taosArrayPush(aColArray, &aCols[i]); + } + + TSKEY lastRowTs = TSKEY_MAX; + + CacheNextRowIter iter = {0}; + nextRowIterOpen(&iter, uid, pTsdb, pTSchema, pr->suid, pr->pLoadInfo, pr->pReadSnap, &pr->pDataFReader, + &pr->pDataFReaderLast, pr->lastTs); + + do { + TSDBROW *pRow = NULL; + nextRowIterGet(&iter, &pRow, &ignoreEarlierTs, true, TARRAY_DATA(aColArray), TARRAY_SIZE(aColArray)); + + if (!pRow) { + break; + } + + hasRow = true; + + int32_t sversion = TSDBROW_SVERSION(pRow); + if (sversion != -1) { + code = updateTSchema(sversion, pr, uid); + if (TSDB_CODE_SUCCESS != code) { + goto _err; + } + pTSchema = pr->pCurrSchema; + } + // int16_t nCol = pTSchema->numOfCols; + + TSKEY rowTs = TSDBROW_TS(pRow); + + if (lastRowTs == TSKEY_MAX) { + lastRowTs = rowTs; + + for (int16_t iCol = noneCol; iCol < nCols; ++iCol) { + if (iCol >= nLastCol) { + break; + } + SLastCol *pCol = taosArrayGet(pColArray, iCol); + if (pCol->colVal.cid != pTSchema->columns[slotIds[iCol]].colId) { + continue; + } + if (slotIds[iCol] == 0) { + STColumn *pTColumn = &pTSchema->columns[0]; + + *pColVal = COL_VAL_VALUE(pTColumn->colId, pTColumn->type, (SValue){.val = rowTs}); + taosArraySet(pColArray, 0, &(SLastCol){.ts = rowTs, .colVal = *pColVal}); + continue; + } + tsdbRowGetColVal(pRow, pTSchema, slotIds[iCol], pColVal); + + *pCol = (SLastCol){.ts = rowTs, .colVal = *pColVal}; + if (IS_VAR_DATA_TYPE(pColVal->type) && pColVal->value.nData > 0) { + pCol->colVal.value.pData = taosMemoryMalloc(pCol->colVal.value.nData); + if (pCol->colVal.value.pData == NULL) { + terrno = TSDB_CODE_OUT_OF_MEMORY; + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + memcpy(pCol->colVal.value.pData, pColVal->value.pData, pColVal->value.nData); + } + + if (COL_VAL_IS_NONE(pColVal)) { + if (!setNoneCol) { + noneCol = iCol; + setNoneCol = true; + } + } else { + int32_t aColIndex = taosArraySearchIdx(aColArray, &pColVal->cid, compareInt16Val, TD_EQ); + if (aColIndex >= 0) { + taosArrayRemove(aColArray, aColIndex); + } + } + } + if (!setNoneCol) { + // done, goto return pColArray + break; + } else { + continue; + } + } + + // merge into pColArray + setNoneCol = false; + for (int16_t iCol = noneCol; iCol < nCols; ++iCol) { + if (iCol >= nLastCol) { + break; + } + // high version's column value + SLastCol *lastColVal = (SLastCol *)taosArrayGet(pColArray, iCol); + if (lastColVal->colVal.cid != pTSchema->columns[slotIds[iCol]].colId) { + continue; + } + SColVal *tColVal = &lastColVal->colVal; + + tsdbRowGetColVal(pRow, pTSchema, slotIds[iCol], pColVal); + if (COL_VAL_IS_NONE(tColVal) && !COL_VAL_IS_NONE(pColVal)) { + SLastCol lastCol = {.ts = rowTs, .colVal = *pColVal}; + if (IS_VAR_DATA_TYPE(pColVal->type) && pColVal->value.nData > 0) { + SLastCol *pLastCol = (SLastCol *)taosArrayGet(pColArray, iCol); + taosMemoryFree(pLastCol->colVal.value.pData); + + lastCol.colVal.value.pData = taosMemoryMalloc(lastCol.colVal.value.nData); + if (lastCol.colVal.value.pData == NULL) { + terrno = TSDB_CODE_OUT_OF_MEMORY; + code = TSDB_CODE_OUT_OF_MEMORY; + goto _err; + } + memcpy(lastCol.colVal.value.pData, pColVal->value.pData, pColVal->value.nData); + } + + taosArraySet(pColArray, iCol, &lastCol); + int32_t aColIndex = taosArraySearchIdx(aColArray, &lastCol.colVal.cid, compareInt16Val, TD_EQ); + taosArrayRemove(aColArray, aColIndex); + } else if (COL_VAL_IS_NONE(tColVal) && !COL_VAL_IS_NONE(pColVal) && !setNoneCol) { + noneCol = iCol; + setNoneCol = true; + } + } + } while (setNoneCol); + + if (!hasRow) { + if (ignoreEarlierTs) { + taosArrayDestroy(pColArray); + pColArray = NULL; + } else { + taosArrayClear(pColArray); + } + } + *ppLastArray = pColArray; + + nextRowIterClose(&iter); + taosArrayDestroy(aColArray); + + return code; + +_err: + nextRowIterClose(&iter); + + *ppLastArray = NULL; + taosArrayDestroy(pColArray); + taosArrayDestroy(aColArray); + return code; +} + int32_t tsdbCacheGetLastrowH(SLRUCache *pCache, tb_uid_t uid, SCacheRowsReader *pr, LRUHandle **handle) { int32_t code = 0; char key[32] = {0}; From 064c46bc7bc4ff2f5c3ab61167e75e18bf01f944 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 9 May 2023 09:17:26 +0800 Subject: [PATCH 140/156] cache/dclp: double check to load from tsdb --- source/dnode/vnode/src/tsdb/tsdbCache.c | 97 ++++++++++++++++--------- 1 file changed, 64 insertions(+), 33 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index 63b0084849..a0c61a8699 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -190,6 +190,25 @@ void tsdbCacheSerialize(SLastCol *pLastCol, char **value, size_t *size) { *size = length; } +static SLastCol *tsdbCacheLookup(STsdb *pTsdb, tb_uid_t uid, int16_t cid, char const *lstring) { + SLastCol *pLastCol = NULL; + + char *err = NULL; + size_t vlen = 0; + char key[ROCKS_KEY_LEN]; + size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":%s", uid, cid, lstring); + char *value = NULL; + value = rocksdb_get(pTsdb->rCache.db, pTsdb->rCache.readoptions, key, klen, &vlen, &err); + if (NULL != err) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); + rocksdb_free(err); + } + + pLastCol = tsdbCacheDeserialize(value); + + return pLastCol; +} + int32_t tsdbCacheUpdate(STsdb *pTsdb, tb_uid_t suid, tb_uid_t uid, TSDBROW *pRow) { int32_t code = 0; @@ -362,6 +381,7 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR taosMemoryFree(errs); for (int i = 0; i < num_keys; ++i) { + bool freeCol = true; SArray *pTmpColArray = NULL; SLastCol *pLastCol = tsdbCacheDeserialize(values_list[i]); int16_t cid = *(int16_t *)taosArrayGet(pCidList, i); @@ -376,48 +396,59 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR } } } else { - // recalc: load from tsdb - int16_t aCols[1] = {cid}; - int16_t slotIds[1] = {pr->pSlotIds[i]}; - pTmpColArray = NULL; + taosThreadMutexLock(&pTsdb->rCache.rMutex); - if (ltype) { - mergeLastCid(uid, pTsdb, &pTmpColArray, pr, aCols, 1, slotIds); - } else { - mergeLastRowCid(uid, pTsdb, &pTmpColArray, pr, aCols, 1, slotIds); - } - - if (pTmpColArray && TARRAY_SIZE(pTmpColArray) >= 1) { - pLastCol = taosArrayGet(pTmpColArray, 0); - } - - // still null, then make up a none col value + pLastCol = tsdbCacheLookup(pTsdb, uid, cid, lstring); if (!pLastCol) { - pLastCol = &noneCol; + // recalc: load from tsdb + int16_t aCols[1] = {cid}; + int16_t slotIds[1] = {pr->pSlotIds[i]}; + pTmpColArray = NULL; + + if (ltype) { + mergeLastCid(uid, pTsdb, &pTmpColArray, pr, aCols, 1, slotIds); + } else { + mergeLastRowCid(uid, pTsdb, &pTmpColArray, pr, aCols, 1, slotIds); + } + + if (pTmpColArray && TARRAY_SIZE(pTmpColArray) >= 1) { + pLastCol = taosArrayGet(pTmpColArray, 0); + } + + // still null, then make up a none col value + if (!pLastCol) { + pLastCol = &noneCol; + freeCol = false; + } + + // store result back to rocks cache + rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch; + char *value = NULL; + size_t vlen = 0; + tsdbCacheSerialize(pLastCol, &value, &vlen); + char key[ROCKS_KEY_LEN]; + size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":%s", uid, pLastCol->colVal.cid, lstring); + rocksdb_writebatch_put(wb, key, klen, value, vlen); + + taosMemoryFree(value); + + char *err = NULL; + rocksdb_write(pTsdb->rCache.db, pTsdb->rCache.writeoptions, wb, &err); + if (NULL != err) { + tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); + rocksdb_free(err); + } } - // store result back to rocks cache - rocksdb_writebatch_t *wb = pTsdb->rCache.writebatch; - char *value = NULL; - size_t vlen = 0; - tsdbCacheSerialize(pLastCol, &value, &vlen); - char key[ROCKS_KEY_LEN]; - size_t klen = snprintf(key, ROCKS_KEY_LEN, "%" PRIi64 ":%" PRIi16 ":last", uid, pLastCol->colVal.cid); - rocksdb_writebatch_put(wb, key, klen, value, vlen); - char *err = NULL; - rocksdb_write(pTsdb->rCache.db, pTsdb->rCache.writeoptions, wb, &err); - if (NULL != err) { - tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); - rocksdb_free(err); - } - - taosMemoryFree(value); + taosThreadMutexUnlock(&pTsdb->rCache.rMutex); } taosArrayPush(pLastArray, pLastCol); taosArrayDestroy(pTmpColArray); - taosMemoryFree(values_list[i]); + if (freeCol) { + taosMemoryFree(pLastCol); + } } taosMemoryFree(values_list); taosMemoryFree(values_list_sizes); From 412fc9b0bc596cf5b79f8725a53276df9ccb3af7 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Tue, 9 May 2023 11:09:53 +0800 Subject: [PATCH 141/156] cache/memory: fix double free issue with tsim/parser/last_cache.sim --- source/dnode/vnode/src/tsdb/tsdbCache.c | 1 + 1 file changed, 1 insertion(+) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index a0c61a8699..df7b940f61 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -413,6 +413,7 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR if (pTmpColArray && TARRAY_SIZE(pTmpColArray) >= 1) { pLastCol = taosArrayGet(pTmpColArray, 0); + freeCol = false; } // still null, then make up a none col value From a1eefd25eaf9b20b870a4dd98f17caf752eb9e92 Mon Sep 17 00:00:00 2001 From: dapan1121 Date: Tue, 9 May 2023 11:33:47 +0800 Subject: [PATCH 142/156] fix: count wrong group number issue --- include/libs/nodes/plannodes.h | 2 ++ source/libs/command/src/explain.c | 5 ++++ source/libs/executor/src/sortoperator.c | 8 +++++- source/libs/nodes/src/nodesCloneFuncs.c | 1 + source/libs/nodes/src/nodesCodeFuncs.c | 7 +++++ source/libs/nodes/src/nodesMsgFuncs.c | 9 ++++++- source/libs/planner/src/planPhysiCreater.c | 1 + source/libs/planner/src/planSpliter.c | 3 ++- tests/script/tsim/query/partitionby.sim | 30 ++++++++++++++++++++++ 9 files changed, 63 insertions(+), 3 deletions(-) diff --git a/include/libs/nodes/plannodes.h b/include/libs/nodes/plannodes.h index ad4b59714c..197a5ecaf9 100644 --- a/include/libs/nodes/plannodes.h +++ b/include/libs/nodes/plannodes.h @@ -185,6 +185,7 @@ typedef struct SMergeLogicNode { int32_t numOfChannels; int32_t srcGroupId; bool groupSort; + bool ignoreGroupId; } SMergeLogicNode; typedef enum EWindowType { @@ -444,6 +445,7 @@ typedef struct SMergePhysiNode { int32_t numOfChannels; int32_t srcGroupId; bool groupSort; + bool ignoreGroupId; } SMergePhysiNode; typedef struct SWinodwPhysiNode { diff --git a/source/libs/command/src/explain.c b/source/libs/command/src/explain.c index 4302302d7a..c5b9eb7475 100644 --- a/source/libs/command/src/explain.c +++ b/source/libs/command/src/explain.c @@ -1128,6 +1128,11 @@ int32_t qExplainResNodeToRowsImpl(SExplainResNode *pResNode, SExplainCtx *ctx, i EXPLAIN_ROW_END(); QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1)); + EXPLAIN_ROW_NEW(level + 1, EXPLAIN_OUTPUT_FORMAT); + EXPLAIN_ROW_APPEND(EXPLAIN_IGNORE_GROUPID_FORMAT, pMergeNode->ignoreGroupId ? "true" : "false"); + EXPLAIN_ROW_END(); + QRY_ERR_RET(qExplainResAppendRow(ctx, tbuf, tlen, level + 1)); + EXPLAIN_ROW_NEW(level + 1, EXPLAIN_MERGE_KEYS_FORMAT); if (pMergeNode->groupSort) { EXPLAIN_ROW_APPEND(EXPLAIN_STRING_TYPE_FORMAT, "_group_id asc"); diff --git a/source/libs/executor/src/sortoperator.c b/source/libs/executor/src/sortoperator.c index cb0f1aa068..df62a7fa77 100644 --- a/source/libs/executor/src/sortoperator.c +++ b/source/libs/executor/src/sortoperator.c @@ -545,6 +545,7 @@ typedef struct SMultiwayMergeOperatorInfo { SSDataBlock* pIntermediateBlock; // to hold the intermediate result int64_t startTs; // sort start time bool groupSort; + bool ignoreGroupId; uint64_t groupId; STupleHandle* prefetchedTuple; } SMultiwayMergeOperatorInfo; @@ -694,7 +695,11 @@ SSDataBlock* getMultiwaySortedBlockData(SSortHandle* pHandle, SSDataBlock* pData } pDataBlock->info.rows = p->info.rows; - pDataBlock->info.id.groupId = pInfo->groupId; + if (pInfo->ignoreGroupId) { + pDataBlock->info.id.groupId = 0; + } else { + pDataBlock->info.id.groupId = pInfo->groupId; + } pDataBlock->info.dataLoad = 1; } @@ -785,6 +790,7 @@ SOperatorInfo* createMultiwayMergeOperatorInfo(SOperatorInfo** downStreams, size blockDataEnsureCapacity(pInfo->binfo.pRes, pOperator->resultInfo.capacity); pInfo->groupSort = pMergePhyNode->groupSort; + pInfo->ignoreGroupId = pMergePhyNode->ignoreGroupId; pInfo->pSortInfo = createSortInfo(pMergePhyNode->pMergeKeys); pInfo->pInputBlock = pInputBlock; size_t numOfCols = taosArrayGetSize(pInfo->binfo.pRes->pDataBlock); diff --git a/source/libs/nodes/src/nodesCloneFuncs.c b/source/libs/nodes/src/nodesCloneFuncs.c index d9a4c5178f..0f4e7bde63 100644 --- a/source/libs/nodes/src/nodesCloneFuncs.c +++ b/source/libs/nodes/src/nodesCloneFuncs.c @@ -455,6 +455,7 @@ static int32_t logicMergeCopy(const SMergeLogicNode* pSrc, SMergeLogicNode* pDst COPY_SCALAR_FIELD(numOfChannels); COPY_SCALAR_FIELD(srcGroupId); COPY_SCALAR_FIELD(groupSort); + COPY_SCALAR_FIELD(ignoreGroupId); return TSDB_CODE_SUCCESS; } diff --git a/source/libs/nodes/src/nodesCodeFuncs.c b/source/libs/nodes/src/nodesCodeFuncs.c index 0aeb83ce08..3e83ef4291 100644 --- a/source/libs/nodes/src/nodesCodeFuncs.c +++ b/source/libs/nodes/src/nodesCodeFuncs.c @@ -2027,6 +2027,7 @@ static const char* jkMergePhysiPlanTargets = "Targets"; static const char* jkMergePhysiPlanNumOfChannels = "NumOfChannels"; static const char* jkMergePhysiPlanSrcGroupId = "SrcGroupId"; static const char* jkMergePhysiPlanGroupSort = "GroupSort"; +static const char* jkMergePhysiPlanIgnoreGroupID = "IgnoreGroupID"; static int32_t physiMergeNodeToJson(const void* pObj, SJson* pJson) { const SMergePhysiNode* pNode = (const SMergePhysiNode*)pObj; @@ -2047,6 +2048,9 @@ static int32_t physiMergeNodeToJson(const void* pObj, SJson* pJson) { if (TSDB_CODE_SUCCESS == code) { code = tjsonAddBoolToObject(pJson, jkMergePhysiPlanGroupSort, pNode->groupSort); } + if (TSDB_CODE_SUCCESS == code) { + code = tjsonAddBoolToObject(pJson, jkMergePhysiPlanIgnoreGroupID, pNode->ignoreGroupId); + } return code; } @@ -2070,6 +2074,9 @@ static int32_t jsonToPhysiMergeNode(const SJson* pJson, void* pObj) { if (TSDB_CODE_SUCCESS == code) { code = tjsonGetBoolValue(pJson, jkMergePhysiPlanGroupSort, &pNode->groupSort); } + if (TSDB_CODE_SUCCESS == code) { + code = tjsonGetBoolValue(pJson, jkMergePhysiPlanIgnoreGroupID, &pNode->ignoreGroupId); + } return code; } diff --git a/source/libs/nodes/src/nodesMsgFuncs.c b/source/libs/nodes/src/nodesMsgFuncs.c index 6c6b6c0e81..c06eb62771 100644 --- a/source/libs/nodes/src/nodesMsgFuncs.c +++ b/source/libs/nodes/src/nodesMsgFuncs.c @@ -2512,7 +2512,8 @@ enum { PHY_MERGE_CODE_TARGETS, PHY_MERGE_CODE_NUM_OF_CHANNELS, PHY_MERGE_CODE_SRC_GROUP_ID, - PHY_MERGE_CODE_GROUP_SORT + PHY_MERGE_CODE_GROUP_SORT, + PHY_MERGE_CODE_IGNORE_GROUP_ID, }; static int32_t physiMergeNodeToMsg(const void* pObj, STlvEncoder* pEncoder) { @@ -2534,6 +2535,9 @@ static int32_t physiMergeNodeToMsg(const void* pObj, STlvEncoder* pEncoder) { if (TSDB_CODE_SUCCESS == code) { code = tlvEncodeBool(pEncoder, PHY_MERGE_CODE_GROUP_SORT, pNode->groupSort); } + if (TSDB_CODE_SUCCESS == code) { + code = tlvEncodeBool(pEncoder, PHY_MERGE_CODE_IGNORE_GROUP_ID, pNode->ignoreGroupId); + } return code; } @@ -2563,6 +2567,9 @@ static int32_t msgToPhysiMergeNode(STlvDecoder* pDecoder, void* pObj) { case PHY_MERGE_CODE_GROUP_SORT: code = tlvDecodeBool(pTlv, &pNode->groupSort); break; + case PHY_MERGE_CODE_IGNORE_GROUP_ID: + code = tlvDecodeBool(pTlv, &pNode->ignoreGroupId); + break; default: break; } diff --git a/source/libs/planner/src/planPhysiCreater.c b/source/libs/planner/src/planPhysiCreater.c index e2c2e4c655..be43bb008c 100644 --- a/source/libs/planner/src/planPhysiCreater.c +++ b/source/libs/planner/src/planPhysiCreater.c @@ -1559,6 +1559,7 @@ static int32_t createMergePhysiNode(SPhysiPlanContext* pCxt, SMergeLogicNode* pM pMerge->numOfChannels = pMergeLogicNode->numOfChannels; pMerge->srcGroupId = pMergeLogicNode->srcGroupId; pMerge->groupSort = pMergeLogicNode->groupSort; + pMerge->ignoreGroupId = pMergeLogicNode->ignoreGroupId; int32_t code = addDataBlockSlots(pCxt, pMergeLogicNode->pInputs, pMerge->node.pOutputDataBlockDesc); diff --git a/source/libs/planner/src/planSpliter.c b/source/libs/planner/src/planSpliter.c index eed59c8236..504db0d07b 100644 --- a/source/libs/planner/src/planSpliter.c +++ b/source/libs/planner/src/planSpliter.c @@ -538,7 +538,8 @@ static int32_t stbSplRewriteFromMergeNode(SMergeLogicNode* pMerge, SLogicNode* p switch (nodeType(pNode)) { case QUERY_NODE_LOGIC_PLAN_PROJECT: { SProjectLogicNode *pLogicNode = (SProjectLogicNode*)pNode; - if (pMerge->node.pLimit || pMerge->node.pSlimit) { + if (pLogicNode->ignoreGroupId && (pMerge->node.pLimit || pMerge->node.pSlimit)) { + pMerge->ignoreGroupId = true; pLogicNode->ignoreGroupId = false; } break; diff --git a/tests/script/tsim/query/partitionby.sim b/tests/script/tsim/query/partitionby.sim index 8babd1aa8d..4c221e02d3 100644 --- a/tests/script/tsim/query/partitionby.sim +++ b/tests/script/tsim/query/partitionby.sim @@ -36,4 +36,34 @@ if $rows != 0 then return -1 endi +sql insert into tb0 values (now, 0); +sql insert into tb1 values (now, 1); +sql insert into tb2 values (now, 2); +sql insert into tb3 values (now, 3); +sql insert into tb4 values (now, 4); +sql insert into tb5 values (now, 5); +sql insert into tb6 values (now, 6); +sql insert into tb7 values (now, 7); + +sql select * from (select 1 from $mt1 where ts is not null partition by tbname limit 1); +if $rows != 8 then + return -1 +endi + +sql select count(*) from (select ts from $mt1 where ts is not null partition by tbname slimit 2); +if $rows != 1 then + return -1 +endi +if $data00 != 2 then + return -1 +endi + +sql select count(*) from (select ts from $mt1 where ts is not null partition by tbname limit 2); +if $rows != 1 then + return -1 +endi +if $data00 != 8 then + return -1 +endi + system sh/exec.sh -n dnode1 -s stop -x SIGINT From 73ecae67b65665a790dfce498cf3097d3bf1fafb Mon Sep 17 00:00:00 2001 From: Adam Ji Date: Tue, 9 May 2023 15:48:02 +0800 Subject: [PATCH 143/156] docs: add cmd to check status of taosd for mac --- docs/en/05-get-started/03-package.md | 2 ++ docs/zh/05-get-started/03-package.md | 2 ++ 2 files changed, 4 insertions(+) diff --git a/docs/en/05-get-started/03-package.md b/docs/en/05-get-started/03-package.md index 3282f600ac..159760ed21 100644 --- a/docs/en/05-get-started/03-package.md +++ b/docs/en/05-get-started/03-package.md @@ -208,6 +208,8 @@ The following `launchctl` commands can help you manage TDengine service: - Check TDengine Server status: `sudo launchctl list | grep taosd` +- Check TDengine Server status details: `launchctl print system/com.tdengine.taosd` + :::info - Please use `sudo` to run `launchctl` to manage _com.tdengine.taosd_ with administrator privileges. - The administrator privilege is required for service management to enhance security. diff --git a/docs/zh/05-get-started/03-package.md b/docs/zh/05-get-started/03-package.md index b327d33350..1cd0076ba5 100644 --- a/docs/zh/05-get-started/03-package.md +++ b/docs/zh/05-get-started/03-package.md @@ -207,6 +207,8 @@ Active: inactive (dead) - 查看服务状态:`sudo launchctl list | grep taosd` +- 查看服务详细信息:`launchctl print system/com.tdengine.taosd` + :::info - `launchctl` 命令管理`com.tdengine.taosd`需要管理员权限,务必在前面加 `sudo` 来增强安全性。 From 50a24667cd85f546f23097773af7de61631e0e03 Mon Sep 17 00:00:00 2001 From: huolibo Date: Tue, 9 May 2023 16:50:34 +0800 Subject: [PATCH 144/156] docs(driver): jdbc 3.2.1 document and error code --- docs/en/14-reference/03-connector/04-java.mdx | 413 ++++++++++++++---- docs/examples/java/pom.xml | 2 +- .../java/com/taos/example/SubscribeDemo.java | 4 +- docs/zh/08-connector/14-java.mdx | 412 +++++++++++++---- 4 files changed, 683 insertions(+), 148 deletions(-) diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx index 2847bf20f8..59729bff20 100644 --- a/docs/en/14-reference/03-connector/04-java.mdx +++ b/docs/en/14-reference/03-connector/04-java.mdx @@ -36,6 +36,104 @@ REST connection supports all platforms that can run Java. Please refer to [version support list](/reference/connector#version-support) +## Recent update logs + +| taos-jdbcdriver version | major changes | +| :---------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------: | +| 3.1.0 | JDBC REST connection supports schemaless/prepareStatement over WebSocket | +| 3.2.0 | This version has been deprecated | +| 3.1.0 | JDBC REST connection supports subscription over WebSocket | +| 3.0.1 - 3.0.4 | fix the resultSet data is parsed incorrectly sometimes. 3.0.1 is compiled on JDK 11, you are advised to use other version in the JDK 8 environment | +| 3.0.0 | Support for TDengine 3.0 | +| 2.0.42 | fix wasNull interface return value in WebSocket connection | +| 2.0.41 | fix decode method of username and password in REST connection | +| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters | +| 2.0.38 | JDBC REST connections add bulk pull function | +| 2.0.37 | Support json tags | +| 2.0.36 | Support schemaless writing | + +**Note**: adding `batchfetch` to the REST connection and setting it to true will enable the WebSocket connection. + +### Handling exceptions + +After an error is reported, the error message and error code can be obtained through SQLException. + +```java +try (Statement statement = connection.createStatement()) { + // executeQuery + ResultSet resultSet = statement.executeQuery(sql); + // print result + printResult(resultSet); +} catch (SQLException e) { + System.out.println("ERROR Message: " + e.getMessage()); + System.out.println("ERROR Code: " + e.getErrorCode()); + e.printStackTrace(); +} +``` + +There are four types of error codes that the JDBC connector can report: + +- Error code of the JDBC driver itself (error code between 0x2301 and 0x2350), +- Error code of the native connection method (error code between 0x2351 and 0x2360) +- Error code of the consumer method (error code between 0x2371 and 0x2380) +- Error code of other TDengine function modules. + +For specific error codes, please refer to. + +| Error Code | Description | Suggested Actions | +| ---------- | --------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------------------------------- | +| 0x2301 | connection already closed | The connection has been closed, check the connection status, or recreate the connection to execute the relevant instructions. | +| 0x2302 | this operation is NOT supported currently! | The current interface does not support the connection. You can use another connection mode. | +| 0x2303 | invalid variables | The parameter is invalid. Check the interface specification and adjust the parameter type and size. | +| 0x2304 | statement is closed | The statement is closed. Check whether the statement is closed and used again, or whether the connection is normal. | +| 0x2305 | resultSet is closed | result set The result set is released. Check whether the result set is released and used again. | +| 0x2306 | Batch is empty! | prepare statement Add parameters and then execute batch. | +| 0x2307 | Can not issue data manipulation statements with executeQuery() | The update operation should use execute update(), not execute query(). | +| 0x2308 | Can not issue SELECT via executeUpdate() | The query operation should use execute query(), not execute update(). | +| 0x2309 | invalid sql for executeQuery: (?) | - | +| 0x230a | Database not specified or available | - | +| 0x230b | invalid sql for executeUpdate: (?) | - | +| 0x230c | invalid sql for execute: (?) | - | +| 0x230d | parameter index out of range | The parameter is out of bounds. Check the proper range of the parameter. | +| 0x230e | connection already closed | The connection has been closed. Please check whether the connection is closed and used again, or whether the connection is normal. | +| 0x230f | unknown sql type in tdengine | Check the data type supported by TDengine. | +| 0x2310 | can't register JDBC-JNI driver | The native driver cannot be registered. Please check whether the url is correct. | +| 0x2311 | can't register JDBC-RESTful driver | - | +| 0x2312 | url is not set | Check whether the REST connection url is correct. | +| 0x2313 | invalid sql | - | +| 0x2314 | numeric value out of range | Check that the correct interface is used for the numeric types in the obtained result set. | +| 0x2315 | unknown taos type in tdengine | Whether the correct TDengine data type is specified when converting the TDengine data type to the JDBC data type. | +| 0x2316 | unknown timestamp precision | - | +| 0x2317 | | wrong request type was used in the REST connection. | +| 0x2318 | | data transmission exception occurred during the REST connection. Please check the network status and try again. | +| 0x2319 | user is required | The user name information is missing when creating the connection | +| 0x231a | password is required | Password information is missing when creating a connection | +| 0x231b | invalid json format | - | +| 0x231c | httpEntity is null, sql: | Execution exception occurred during the REST connection | +| 0x2350 | unknown error | Unknown exception, please return to the developer on github. | +| 0x2351 | failed to create subscription | - | +| 0x2352 | Unsupported encoding | An unsupported character encoding set is specified under the native Connection. | +| 0x2353 | internal error of database, please see taoslog for more details | An error occurs when the prepare statement is executed on the native connection. Check the taos log to locate the fault. | +| 0x2354 | JNI connection is NULL | When the command is executed, the native Connection is closed. Check the connection to TDengine. | +| 0x2355 | JNI result set is NULL | The result set is abnormal. Please check the connection status and try again. | +| 0x2356 | invalid num of fields | The meta information of the result set obtained by the native connection does not match. | +| 0x2357 | empty sql string | Fill in the correct SQL for execution. | +| 0x2358 | fetch to the end of resultSet | - | +| 0x2359 | JNI alloc memory failed, please see taoslog for more details | Memory allocation for the native connection failed. Check the taos log to locate the problem. | +| 0x2371 | consumer properties must not be null! | The parameter is empty when you create a subscription. Please fill in the correct parameter. | +| 0x2372 | configs contain empty key, failed to set consumer property | The parameter key contains a null value. Please enter the correct parameter. | +| 0x2373 | failed to set consumer property, | The parameter value contains a null value. Please enter the correct parameter. | +| 0x2374 | consumer config error | - | +| 0x2375 | topic reference has been destroyed | The topic reference is released during the creation of the data subscription. Check the connection to TDengine. | +| 0x2376 | failed to set consumer topic, topic name is empty | During data subscription creation, the subscription topic name is empty. Check that the specified topic name is correct. | +| 0x2377 | consumer reference has been destroyed | The subscription data transfer channel has been closed. Please check the connection to TDengine. | +| 0x2378 | consumer create error | Failed to create a data subscription. Check the taos log according to the error message to locate the fault. | +| - | can't create connection with server within: | Increase the connection time by adding the httpConnectTimeout parameter, or check the connection to the taos adapter. | +| - | failed to complete the task within the specified time : | Increase the execution time by adding the messageWaitTimeout parameter, or check the connection to the taos adapter. | + +- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java) + + ## TDengine DataType vs. Java DataType TDengine currently supports timestamp, number, character, Boolean type, and the corresponding type conversion with Java is as follows: @@ -82,7 +180,7 @@ Add following dependency in the `pom.xml` file of your Maven project: com.taosdata.jdbc taos-jdbcdriver - 3.1.0 + 3.2.1 ``` @@ -97,7 +195,7 @@ cd taos-connector-jdbc mvn clean install -Dmaven.test.skip=true ``` -After you have compiled taos-jdbcdriver, the `taos-jdbcdriver-3.0.*-dist.jar` file is created in the target directory. The compiled JAR file is automatically stored in your local Maven repository. +After you have compiled taos-jdbcdriver, the `taos-jdbcdriver-3.2.*-dist.jar` file is created in the target directory. The compiled JAR file is automatically stored in your local Maven repository. @@ -333,35 +431,6 @@ while(resultSet.next()){ > The query is consistent with operating a relational database. When using subscripts to get the contents of the returned fields, you have to start from 1. However, we recommend using the field names to get the values of the fields in the result set. -### Handling exceptions - -After an error is reported, the error message and error code can be obtained through SQLException. - -```java -try (Statement statement = connection.createStatement()) { - // executeQuery - ResultSet resultSet = statement.executeQuery(sql); - // print result - printResult(resultSet); -} catch (SQLException e) { - System.out.println("ERROR Message: " + e.getMessage()); - System.out.println("ERROR Code: " + e.getErrorCode()); - e.printStackTrace(); -} -``` - -There are four types of error codes that the JDBC connector can report: - -- Error code of the JDBC driver itself (error code between 0x2301 and 0x2350), -- Error code of the native connection method (error code between 0x2351 and 0x2360) -- Error code of the consumer method (error code between 0x2371 and 0x2380) -- Error code of other TDengine function modules. - -For specific error codes, please refer to. - -- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java) - - ### Writing data via parameter binding TDengine has significantly improved the bind APIs to support data writing (INSERT) scenarios. Writing data in this way avoids the resource consumption of SQL syntax parsing, resulting in significant write performance improvements in many cases. @@ -369,9 +438,11 @@ TDengine has significantly improved the bind APIs to support data writing (INSER **Note:** - JDBC REST connections do not currently support bind interface -- The following sample code is based on taos-jdbcdriver-3.1.0 +- The following sample code is based on taos-jdbcdriver-3.2.1 - The setString method should be called for binary type data, and the setNString method should be called for nchar type data -- both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition + + + ```java public class ParameterBindingDemo { @@ -599,21 +670,7 @@ public class ParameterBindingDemo { } ``` -The methods to set TAGS values: - -```java -public void setTagNull(int index, int type) -public void setTagBoolean(int index, boolean value) -public void setTagInt(int index, int value) -public void setTagByte(int index, byte value) -public void setTagShort(int index, short value) -public void setTagLong(int index, long value) -public void setTagTimestamp(int index, long value) -public void setTagFloat(int index, float value) -public void setTagDouble(int index, double value) -public void setTagString(int index, String value) -public void setTagNString(int index, String value) -``` +**Note**: both setString and setNString require the user to declare the width of the corresponding column in the size parameter of the table definition The methods to set VALUES columns: @@ -630,17 +687,203 @@ public void setString(int columnIndex, ArrayList list, int size) throws public void setNString(int columnIndex, ArrayList list, int size) throws SQLException ``` + + + +```java +public class ParameterBindingDemo { + private static final String host = "127.0.0.1"; + private static final Random random = new Random(System.currentTimeMillis()); + private static final int BINARY_COLUMN_SIZE = 30; + private static final String[] schemaList = { + "create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)", + "create table stable2(ts timestamp, f1 float, f2 double) tags(t1 float, t2 double)", + "create table stable3(ts timestamp, f1 bool) tags(t1 bool)", + "create table stable4(ts timestamp, f1 binary(" + BINARY_COLUMN_SIZE + ")) tags(t1 binary(" + BINARY_COLUMN_SIZE + "))", + "create table stable5(ts timestamp, f1 nchar(" + BINARY_COLUMN_SIZE + ")) tags(t1 nchar(" + BINARY_COLUMN_SIZE + "))" + }; + private static final int numOfSubTable = 10, numOfRow = 10; + + public static void main(String[] args) throws SQLException { + + String jdbcUrl = "jdbc:TAOS-RS://" + host + ":6041/?batchfetch=true"; + Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata"); + + init(conn); + + bindInteger(conn); + + bindFloat(conn); + + bindBoolean(conn); + + bindBytes(conn); + + bindString(conn); + + conn.close(); + } + + private static void init(Connection conn) throws SQLException { + try (Statement stmt = conn.createStatement()) { + stmt.execute("drop database if exists test_ws_parabind"); + stmt.execute("create database if not exists test_ws_parabind"); + stmt.execute("use test_ws_parabind"); + for (int i = 0; i < schemaList.length; i++) { + stmt.execute(schemaList[i]); + } + } + } + + private static void bindInteger(Connection conn) throws SQLException { + String sql = "insert into ? using stable1 tags(?,?,?,?) values(?,?,?,?,?)"; + + try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t1_" + i); + // set tags + pstmt.setTagByte(1, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE)))); + pstmt.setTagShort(2, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE)))); + pstmt.setTagInt(3, random.nextInt(Integer.MAX_VALUE)); + pstmt.setTagLong(4, random.nextLong()); + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(1, new Timestamp(current + j)); + pstmt.setByte(2, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE)))); + pstmt.setShort(3, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE)))); + pstmt.setInt(4, random.nextInt(Integer.MAX_VALUE)); + pstmt.setLong(5, random.nextLong()); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } + + private static void bindFloat(Connection conn) throws SQLException { + String sql = "insert into ? using stable2 tags(?,?) values(?,?,?)"; + + try(TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t2_" + i); + // set tags + pstmt.setTagFloat(1, random.nextFloat()); + pstmt.setTagDouble(2, random.nextDouble()); + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(1, new Timestamp(current + j)); + pstmt.setFloat(2, random.nextFloat()); + pstmt.setDouble(3, random.nextDouble()); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } + + private static void bindBoolean(Connection conn) throws SQLException { + String sql = "insert into ? using stable3 tags(?) values(?,?)"; + + try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t3_" + i); + // set tags + pstmt.setTagBoolean(1, random.nextBoolean()); + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(1, new Timestamp(current + j)); + pstmt.setBoolean(2, random.nextBoolean()); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } + + private static void bindBytes(Connection conn) throws SQLException { + String sql = "insert into ? using stable4 tags(?) values(?,?)"; + + try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t4_" + i); + // set tags + pstmt.setTagString(1, new String("abc")); + + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(1, new Timestamp(current + j)); + pstmt.setString(2, "abc"); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } + + private static void bindString(Connection conn) throws SQLException { + String sql = "insert into ? using stable5 tags(?) values(?,?)"; + + try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t5_" + i); + // set tags + pstmt.setTagNString(1, "California.SanFrancisco"); + + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(0, new Timestamp(current + j)); + pstmt.setNString(1, "California.SanFrancisco"); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } +} +``` + + + + +The methods to set TAGS values: + +```java +public void setTagNull(int index, int type) +public void setTagBoolean(int index, boolean value) +public void setTagInt(int index, int value) +public void setTagByte(int index, byte value) +public void setTagShort(int index, short value) +public void setTagLong(int index, long value) +public void setTagTimestamp(int index, long value) +public void setTagFloat(int index, float value) +public void setTagDouble(int index, double value) +public void setTagString(int index, String value) +public void setTagNString(int index, String value) +``` + ### Schemaless Writing TDengine supports schemaless writing. It is compatible with InfluxDB's Line Protocol, OpenTSDB's telnet line protocol, and OpenTSDB's JSON format protocol. For more information, see [Schemaless Writing](../../schemaless). -Note: - -- JDBC REST connections do not currently support schemaless writes -- The following sample code is based on taos-jdbcdriver-3.1.0 + + ```java -public class SchemalessInsertTest { +public class SchemalessJniTest { private static final String host = "127.0.0.1"; private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000"; private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0"; @@ -668,6 +911,41 @@ public class SchemalessInsertTest { } ``` + + + +```java +public class SchemalessWsTest { + private static final String host = "127.0.0.1"; + private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000"; + private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0"; + private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1626846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}"; + + public static void main(String[] args) throws SQLException { + final String url = "jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata&batchfetch=true"; + Connection connection = DriverManager.getConnection(url); + init(connection); + + SchemalessWriter writer = new SchemalessWriter(connection, "test_ws_schemaless"); + writer.write(lineDemo, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS); + writer.write(telnetDemo, SchemalessProtocolType.TELNET, SchemalessTimestampType.MILLI_SECONDS); + writer.write(jsonDemo, SchemalessProtocolType.JSON, SchemalessTimestampType.SECONDS); + System.exit(0); + } + + private static void init(Connection connection) throws SQLException { + try (Statement stmt = connection.createStatement()) { + stmt.executeUpdate("drop database if exists test_ws_schemaless"); + stmt.executeUpdate("create database if not exists test_ws_schemaless keep 36500"); + stmt.executeUpdate("use test_ws_schemaless"); + } + } +} +``` + + + + ### Data Subscription The TDengine Java Connector supports subscription functionality with the following application API. @@ -711,8 +989,9 @@ TaosConsumer consumer = new TaosConsumer<>(config); ```java while(true) { ConsumerRecords records = consumer.poll(Duration.ofMillis(100)); - for (ResultBean record : records) { - process(record); + for (ConsumerRecord record : records) { + ResultBean bean = record.value(); + process(bean); } } ``` @@ -765,8 +1044,9 @@ public abstract class ConsumerLoop { while (!shutdown.get()) { ConsumerRecords records = consumer.poll(Duration.ofMillis(100)); - for (ResultBean record : records) { - process(record); + for (ConsumerRecord record : records) { + ResultBean bean = record.value(); + process(bean); } } consumer.unsubscribe(); @@ -841,8 +1121,9 @@ public abstract class ConsumerLoop { while (!shutdown.get()) { ConsumerRecords records = consumer.poll(Duration.ofMillis(100)); - for (ResultBean record : records) { - process(record); + for (ConsumerRecord record : records) { + ResultBean bean = record.value(); + process(bean); } } consumer.unsubscribe(); @@ -968,20 +1249,6 @@ The source code of the sample application is under `TDengine/examples/JDBC`: [JDBC example](https://github.com/taosdata/TDengine/tree/3.0/examples/JDBC) -## Recent update logs - -| taos-jdbcdriver version | major changes | -| :---------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------: | -| 3.1.0 | JDBC REST connection supports subscription over WebSocket | -| 3.0.1 - 3.0.4 | fix the resultSet data is parsed incorrectly sometimes. 3.0.1 is compiled on JDK 11, you are advised to use other version in the JDK 8 environment | -| 3.0.0 | Support for TDengine 3.0 | -| 2.0.42 | fix wasNull interface return value in WebSocket connection | -| 2.0.41 | fix decode method of username and password in REST connection | -| 2.0.39 - 2.0.40 | Add REST connection/request timeout parameters | -| 2.0.38 | JDBC REST connections add bulk pull function | -| 2.0.37 | Support json tags | -| 2.0.36 | Support schemaless writing | - ## Frequently Asked Questions 1. Why is there no performance improvement when using Statement's `addBatch()` and `executeBatch()` to perform `batch data writing/update`? diff --git a/docs/examples/java/pom.xml b/docs/examples/java/pom.xml index 01d0ce936c..4bdb2062b2 100644 --- a/docs/examples/java/pom.xml +++ b/docs/examples/java/pom.xml @@ -22,7 +22,7 @@ com.taosdata.jdbc taos-jdbcdriver - 3.1.0 + 3.2.1 diff --git a/docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java b/docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java index 8da6f77bae..b5cdedc34f 100644 --- a/docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java +++ b/docs/examples/java/src/main/java/com/taos/example/SubscribeDemo.java @@ -1,5 +1,6 @@ package com.taos.example; +import com.taosdata.jdbc.tmq.ConsumerRecord; import com.taosdata.jdbc.tmq.ConsumerRecords; import com.taosdata.jdbc.tmq.TMQConstants; import com.taosdata.jdbc.tmq.TaosConsumer; @@ -64,7 +65,8 @@ public class SubscribeDemo { consumer.subscribe(Collections.singletonList(TOPIC)); while (!shutdown.get()) { ConsumerRecords meters = consumer.poll(Duration.ofMillis(100)); - for (Meters meter : meters) { + for (ConsumerRecord recode : meters) { + Meters meter = recode.value(); System.out.println(meter); } } diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index bf89b1df83..b11c2074c0 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -36,6 +36,104 @@ REST 连接支持所有能运行 Java 的平台。 请参考[版本支持列表](../#版本支持) +## 最近更新记录 + +| taos-jdbcdriver 版本 | 主要变化 | +| :------------------: | :----------------------------------------------------------------------------------------------------------------------------------------------------: | +| 3.2.1 | 新增功能:WebSocket 连接支持 schemaless 与 prepareStatement 写入。变更:consumer poll 返回结果集为 ConsumerRecord,可通过 value() 获取指定结果集数据。 | +| 3.2.0 | 存在连接问题,不推荐使用 | +| 3.1.0 | WebSocket 连接支持订阅功能 | +| 3.0.1 - 3.0.4 | 修复一些情况下结果集数据解析错误的问题。3.0.1 在 JDK 11 环境编译,JDK 8 环境下建议使用其他版本 | +| 3.0.0 | 支持 TDengine 3.0 | +| 2.0.42 | 修在 WebSocket 连接中 wasNull 接口返回值 | +| 2.0.41 | 修正 REST 连接中用户名和密码转码方式 | +| 2.0.39 - 2.0.40 | 增加 REST 连接/请求 超时设置 | +| 2.0.38 | JDBC REST 连接增加批量拉取功能 | +| 2.0.37 | 增加对 json tag 支持 | +| 2.0.36 | 增加对 schemaless 写入支持 | + +**注**:REST 连接中增加 `batchfetch` 参数并设置为 true,将开启 WebSocket 连接。 + +### 处理异常 + +在报错后,通过 SQLException 可以获取到错误的信息和错误码: + +```java +try (Statement statement = connection.createStatement()) { + // executeQuery + ResultSet resultSet = statement.executeQuery(sql); + // print result + printResult(resultSet); +} catch (SQLException e) { + System.out.println("ERROR Message: " + e.getMessage()); + System.out.println("ERROR Code: " + e.getErrorCode()); + e.printStackTrace(); +} +``` + +JDBC 连接器可能报错的错误码包括 4 种: + +- JDBC driver 本身的报错(错误码在 0x2301 到 0x2350 之间) +- 原生连接方法的报错(错误码在 0x2351 到 0x2360 之间) +- 数据订阅的报错(错误码在 0x2371 到 0x2380 之间) +- TDengine 其他功能模块的报错。 + +具体的错误码请参考: + +| Error Code | Description | Suggested Actions | +| ---------- | --------------------------------------------------------------- | --------------------------------------------------------------------------------------- | +| 0x2301 | connection already closed | 连接已经关闭,检查连接情况,或重新创建连接去执行相关指令。 | +| 0x2302 | this operation is NOT supported currently! | 当前使用接口不支持,可以更换其他连接方式。 | +| 0x2303 | invalid variables | 参数不合法,请检查相应接口规范,调整参数类型及大小。 | +| 0x2304 | statement is closed | statement 已经关闭,请检查 statement 是否关闭后再次使用,或是连接是否正常。 | +| 0x2305 | resultSet is closed | resultSet 结果集已经释放,请检查 resultSet 是否释放后再次使用。 | +| 0x2306 | Batch is empty! | prepareStatement 添加参数后再执行 executeBatch。 | +| 0x2307 | Can not issue data manipulation statements with executeQuery() | 更新操作应该使用 executeUpdate(),而不是 executeQuery()。 | +| 0x2308 | Can not issue SELECT via executeUpdate() | 查询操作应该使用 executeQuery(),而不是 executeUpdate()。 | +| 0x2309 | invalid sql for executeQuery: (?) | - | +| 0x230a | Database not specified or available | - | +| 0x230b | invalid sql for executeUpdate: (?) | - | +| 0x230c | invalid sql for execute: (?) | - | +| 0x230d | parameter index out of range | 参数越界,请检查参数的合理范围。 | +| 0x230e | connection already closed | 连接已经关闭,请检查 Connection 是否关闭后再次使用,或是连接是否正常。 | +| 0x230f | unknown sql type in tdengine | 请检查 TDengine 支持的 Data Type 类型。 | +| 0x2310 | can't register JDBC-JNI driver | 不能注册 JNI 驱动,请检查 url 是否填写正确。 | +| 0x2311 | can't register JDBC-RESTful driver | - | +| 0x2312 | url is not set | 请检查 REST 连接 url 是否填写正确。 | +| 0x2313 | invalid sql | - | +| 0x2314 | numeric value out of range | 请检查获取结果集中数值类型是否使用了正确的接口。 | +| 0x2315 | unknown taos type in tdengine | 在 TDengine 数据类型与 JDBC 数据类型转换时,是否指定了正确的 TDengine 数据类型。 | +| 0x2316 | unknown timestamp precision | - | +| 0x2317 | | REST 连接中使用了错误的请求类型。 | +| 0x2318 | | REST 连接中出现了数据传输异常,请检查网络情况并重试。 | +| 0x2319 | user is required | 创建连接时缺少用户名信息 | +| 0x231a | password is required | 创建连接时缺少密码信息 | +| 0x231b | invalid json format | - | +| 0x231c | httpEntity is null, sql: | REST 连接中执行出现异常 | +| 0x2350 | unknown error | 未知异常,请在 github 返回给开发人员。 | +| 0x2351 | failed to create subscription | - | +| 0x2352 | Unsupported encoding | 本地连接下指定了不支持的字符编码集 | +| 0x2353 | internal error of database, please see taoslog for more details | 本地连接执行 prepareStatement 时出现错误,请检查 taos log 进行问题定位。 | +| 0x2354 | JNI connection is NULL | 本地连接执行命令时,Connection 已经关闭。请检查与 TDengine 的连接情况。 | +| 0x2355 | JNI result set is NULL | 本地连接获取结果集,结果集异常,请检查连接情况,并重试。 | +| 0x2356 | invalid num of fields | 本地连接获取结果集的 meta 信息不匹配。 | +| 0x2357 | empty sql string | 填写正确的 SQL 进行执行。 | +| 0x2358 | fetch to the end of resultSet | - | +| 0x2359 | JNI alloc memory failed, please see taoslog for more details | 本地连接分配内存错误,请检查 taos log 进行问题定位。 | +| 0x2371 | consumer properties must not be null! | 创建订阅时参数为空,请填写正确的参数。 | +| 0x2372 | configs contain empty key, failed to set consumer property | 参数 key 中包含空值,请填写正确的参数。 | +| 0x2373 | failed to set consumer property, | 参数 value 中包含空值,请填写正确的参数。 | +| 0x2374 | consumer config error | - | +| 0x2375 | topic reference has been destroyed | 创建数据订阅过程中,topic 引用被释放。请检查与 TDengine 的连接情况。 | +| 0x2376 | failed to set consumer topic, topic name is empty | 创建数据订阅过程中,订阅 topic 名称为空。请检查指定的 topic 名称是否填写正确。 | +| 0x2377 | consumer reference has been destroyed | 订阅数据传输通道已经关闭,请检查与 TDengine 的连接情况。 | +| 0x2378 | consumer create error | 创建数据订阅失败,请根据错误信息检查 taos log 进行问题定位。 | +| - | can't create connection with server within: | 通过增加参数 httpConnectTimeout 增加连接耗时,或是请检查与 taosAdapter 之间的连接情况。 | +| - | failed to complete the task within the specified time : | 通过增加参数 messageWaitTimeout 增加执行耗时,或是请检查与 taosAdapter 之间的连接情况。 | + +- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java) + + ## TDengine DataType 和 Java DataType TDengine 目前支持时间戳、数字、字符、布尔类型,与 Java 对应类型转换如下: @@ -82,7 +180,7 @@ Maven 项目中,在 pom.xml 中添加以下依赖: com.taosdata.jdbc taos-jdbcdriver - 3.1.0 + 3.2.1 ``` @@ -97,7 +195,7 @@ cd taos-connector-jdbc mvn clean install -Dmaven.test.skip=true ``` -编译后,在 target 目录下会产生 taos-jdbcdriver-3.0.\*-dist.jar 的 jar 包,并自动将编译的 jar 文件放在本地的 Maven 仓库中。 +编译后,在 target 目录下会产生 taos-jdbcdriver-3.2.\*-dist.jar 的 jar 包,并自动将编译的 jar 文件放在本地的 Maven 仓库中。 @@ -336,35 +434,6 @@ while(resultSet.next()){ > 查询和操作关系型数据库一致,使用下标获取返回字段内容时从 1 开始,建议使用字段名称获取。 -### 处理异常 - -在报错后,通过 SQLException 可以获取到错误的信息和错误码: - -```java -try (Statement statement = connection.createStatement()) { - // executeQuery - ResultSet resultSet = statement.executeQuery(sql); - // print result - printResult(resultSet); -} catch (SQLException e) { - System.out.println("ERROR Message: " + e.getMessage()); - System.out.println("ERROR Code: " + e.getErrorCode()); - e.printStackTrace(); -} -``` - -JDBC 连接器可能报错的错误码包括 4 种: - -- JDBC driver 本身的报错(错误码在 0x2301 到 0x2350 之间) -- 原生连接方法的报错(错误码在 0x2351 到 0x2360 之间) -- 数据订阅的报错(错误码在 0x2371 到 0x2380 之间) -- TDengine 其他功能模块的报错。 - -具体的错误码请参考: - -- [TDengine Java Connector](https://github.com/taosdata/taos-connector-jdbc/blob/main/src/main/java/com/taosdata/jdbc/TSDBErrorNumbers.java) - - ### 通过参数绑定写入数据 TDengine 的 JDBC 原生连接实现大幅改进了参数绑定方式对数据写入(INSERT)场景的支持。采用这种方式写入数据时,能避免 SQL 语法解析的资源消耗,从而在很多情况下显著提升写入性能。 @@ -372,9 +441,11 @@ TDengine 的 JDBC 原生连接实现大幅改进了参数绑定方式对数据 **注意**: - JDBC REST 连接目前不支持参数绑定 -- 以下示例代码基于 taos-jdbcdriver-3.1.0 +- 以下示例代码基于 taos-jdbcdriver-3.2.1 - binary 类型数据需要调用 setString 方法,nchar 类型数据需要调用 setNString 方法 -- setString 和 setNString 都要求用户在 size 参数里声明表定义中对应列的列宽 + + + ```java public class ParameterBindingDemo { @@ -602,21 +673,7 @@ public class ParameterBindingDemo { } ``` -用于设定 TAGS 取值的方法总共有: - -```java -public void setTagNull(int index, int type) -public void setTagBoolean(int index, boolean value) -public void setTagInt(int index, int value) -public void setTagByte(int index, byte value) -public void setTagShort(int index, short value) -public void setTagLong(int index, long value) -public void setTagTimestamp(int index, long value) -public void setTagFloat(int index, float value) -public void setTagDouble(int index, double value) -public void setTagString(int index, String value) -public void setTagNString(int index, String value) -``` +**注**:setString 和 setNString 都要求用户在 size 参数里声明表定义中对应列的列宽 用于设定 VALUES 数据列的取值的方法总共有: @@ -633,17 +690,202 @@ public void setString(int columnIndex, ArrayList list, int size) throws public void setNString(int columnIndex, ArrayList list, int size) throws SQLException ``` + + + +```java +public class ParameterBindingDemo { + private static final String host = "127.0.0.1"; + private static final Random random = new Random(System.currentTimeMillis()); + private static final int BINARY_COLUMN_SIZE = 30; + private static final String[] schemaList = { + "create table stable1(ts timestamp, f1 tinyint, f2 smallint, f3 int, f4 bigint) tags(t1 tinyint, t2 smallint, t3 int, t4 bigint)", + "create table stable2(ts timestamp, f1 float, f2 double) tags(t1 float, t2 double)", + "create table stable3(ts timestamp, f1 bool) tags(t1 bool)", + "create table stable4(ts timestamp, f1 binary(" + BINARY_COLUMN_SIZE + ")) tags(t1 binary(" + BINARY_COLUMN_SIZE + "))", + "create table stable5(ts timestamp, f1 nchar(" + BINARY_COLUMN_SIZE + ")) tags(t1 nchar(" + BINARY_COLUMN_SIZE + "))" + }; + private static final int numOfSubTable = 10, numOfRow = 10; + + public static void main(String[] args) throws SQLException { + + String jdbcUrl = "jdbc:TAOS-RS://" + host + ":6041/?batchfetch=true"; + Connection conn = DriverManager.getConnection(jdbcUrl, "root", "taosdata"); + + init(conn); + + bindInteger(conn); + + bindFloat(conn); + + bindBoolean(conn); + + bindBytes(conn); + + bindString(conn); + + conn.close(); + } + + private static void init(Connection conn) throws SQLException { + try (Statement stmt = conn.createStatement()) { + stmt.execute("drop database if exists test_ws_parabind"); + stmt.execute("create database if not exists test_ws_parabind"); + stmt.execute("use test_ws_parabind"); + for (int i = 0; i < schemaList.length; i++) { + stmt.execute(schemaList[i]); + } + } + } + + private static void bindInteger(Connection conn) throws SQLException { + String sql = "insert into ? using stable1 tags(?,?,?,?) values(?,?,?,?,?)"; + + try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t1_" + i); + // set tags + pstmt.setTagByte(1, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE)))); + pstmt.setTagShort(2, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE)))); + pstmt.setTagInt(3, random.nextInt(Integer.MAX_VALUE)); + pstmt.setTagLong(4, random.nextLong()); + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(1, new Timestamp(current + j)); + pstmt.setByte(2, Byte.parseByte(Integer.toString(random.nextInt(Byte.MAX_VALUE)))); + pstmt.setShort(3, Short.parseShort(Integer.toString(random.nextInt(Short.MAX_VALUE)))); + pstmt.setInt(4, random.nextInt(Integer.MAX_VALUE)); + pstmt.setLong(5, random.nextLong()); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } + + private static void bindFloat(Connection conn) throws SQLException { + String sql = "insert into ? using stable2 tags(?,?) values(?,?,?)"; + + try(TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t2_" + i); + // set tags + pstmt.setTagFloat(1, random.nextFloat()); + pstmt.setTagDouble(2, random.nextDouble()); + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(1, new Timestamp(current + j)); + pstmt.setFloat(2, random.nextFloat()); + pstmt.setDouble(3, random.nextDouble()); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } + + private static void bindBoolean(Connection conn) throws SQLException { + String sql = "insert into ? using stable3 tags(?) values(?,?)"; + + try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t3_" + i); + // set tags + pstmt.setTagBoolean(1, random.nextBoolean()); + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(1, new Timestamp(current + j)); + pstmt.setBoolean(2, random.nextBoolean()); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } + + private static void bindBytes(Connection conn) throws SQLException { + String sql = "insert into ? using stable4 tags(?) values(?,?)"; + + try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t4_" + i); + // set tags + pstmt.setTagString(1, new String("abc")); + + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(1, new Timestamp(current + j)); + pstmt.setString(2, "abc"); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } + + private static void bindString(Connection conn) throws SQLException { + String sql = "insert into ? using stable5 tags(?) values(?,?)"; + + try (TSWSPreparedStatement pstmt = conn.prepareStatement(sql).unwrap(TSWSPreparedStatement.class)) { + + for (int i = 1; i <= numOfSubTable; i++) { + // set table name + pstmt.setTableName("t5_" + i); + // set tags + pstmt.setTagNString(1, "California.SanFrancisco"); + + // set columns + long current = System.currentTimeMillis(); + for (int j = 0; j < numOfRow; j++) { + pstmt.setTimestamp(0, new Timestamp(current + j)); + pstmt.setNString(1, "California.SanFrancisco"); + pstmt.addBatch(); + } + pstmt.executeBatch(); + } + } + } +} +``` + + + + +用于设定 TAGS 取值的方法总共有: + +````java +public void setTagNull(int index, int type) +public void setTagBoolean(int index, boolean value) +public void setTagInt(int index, int value) +public void setTagByte(int index, byte value) +public void setTagShort(int index, short value) +public void setTagLong(int index, long value) +public void setTagTimestamp(int index, long value) +public void setTagFloat(int index, float value) +public void setTagDouble(int index, double value) +public void setTagString(int index, String value) +public void setTagNString(int index, String value) + ### 无模式写入 TDengine 支持无模式写入功能。无模式写入兼容 InfluxDB 的 行协议(Line Protocol)、OpenTSDB 的 telnet 行协议和 OpenTSDB 的 JSON 格式协议。详情请参见[无模式写入](../../reference/schemaless/)。 -**注意**: - -- JDBC REST 连接目前不支持无模式写入 -- 以下示例代码基于 taos-jdbcdriver-3.1.0 + + ```java -public class SchemalessInsertTest { +public class SchemalessJniTest { private static final String host = "127.0.0.1"; private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000"; private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0"; @@ -669,8 +911,43 @@ public class SchemalessInsertTest { } } } +```` + + + + +```java +public class SchemalessWsTest { + private static final String host = "127.0.0.1"; + private static final String lineDemo = "st,t1=3i64,t2=4f64,t3=\"t3\" c1=3i64,c3=L\"passit\",c2=false,c4=4f64 1626006833639000000"; + private static final String telnetDemo = "stb0_0 1626006833 4 host=host0 interface=eth0"; + private static final String jsonDemo = "{\"metric\": \"meter_current\",\"timestamp\": 1626846400,\"value\": 10.3, \"tags\": {\"groupid\": 2, \"location\": \"California.SanFrancisco\", \"id\": \"d1001\"}}"; + + public static void main(String[] args) throws SQLException { + final String url = "jdbc:TAOS-RS://" + host + ":6041/?user=root&password=taosdata&batchfetch=true"; + Connection connection = DriverManager.getConnection(url); + init(connection); + + SchemalessWriter writer = new SchemalessWriter(connection, "test_ws_schemaless"); + writer.write(lineDemo, SchemalessProtocolType.LINE, SchemalessTimestampType.NANO_SECONDS); + writer.write(telnetDemo, SchemalessProtocolType.TELNET, SchemalessTimestampType.MILLI_SECONDS); + writer.write(jsonDemo, SchemalessProtocolType.JSON, SchemalessTimestampType.SECONDS); + System.exit(0); + } + + private static void init(Connection connection) throws SQLException { + try (Statement stmt = connection.createStatement()) { + stmt.executeUpdate("drop database if exists test_ws_schemaless"); + stmt.executeUpdate("create database if not exists test_ws_schemaless keep 36500"); + stmt.executeUpdate("use test_ws_schemaless"); + } + } +} ``` + + + ### 数据订阅 TDengine Java 连接器支持订阅功能,应用 API 如下: @@ -714,8 +991,9 @@ TaosConsumer consumer = new TaosConsumer<>(config); ```java while(true) { ConsumerRecords records = consumer.poll(Duration.ofMillis(100)); - for (ResultBean record : records) { - process(record); + for (ConsumerRecord record : records) { + ResultBean bean = record.value(); + process(bean); } } ``` @@ -766,8 +1044,9 @@ public abstract class ConsumerLoop { while (!shutdown.get()) { ConsumerRecords records = consumer.poll(Duration.ofMillis(100)); - for (ResultBean record : records) { - process(record); + for (ConsumerRecord record : records) { + ResultBean bean = record.value(); + process(bean); } } consumer.unsubscribe(); @@ -844,8 +1123,9 @@ public abstract class ConsumerLoop { while (!shutdown.get()) { ConsumerRecords records = consumer.poll(Duration.ofMillis(100)); - for (ResultBean record : records) { - process(record); + for (ConsumerRecord record : records) { + ResultBean bean = record.value(); + process(bean); } } consumer.unsubscribe(); @@ -971,20 +1251,6 @@ public static void main(String[] args) throws Exception { 请参考:[JDBC example](https://github.com/taosdata/TDengine/tree/3.0/examples/JDBC) -## 最近更新记录 - -| taos-jdbcdriver 版本 | 主要变化 | -| :------------------: | :--------------------------------------------------------------------------------------------: | -| 3.1.0 | WebSocket 连接支持订阅功能 | -| 3.0.1 - 3.0.4 | 修复一些情况下结果集数据解析错误的问题。3.0.1 在 JDK 11 环境编译,JDK 8 环境下建议使用其他版本 | -| 3.0.0 | 支持 TDengine 3.0 | -| 2.0.42 | 修在 WebSocket 连接中 wasNull 接口返回值 | -| 2.0.41 | 修正 REST 连接中用户名和密码转码方式 | -| 2.0.39 - 2.0.40 | 增加 REST 连接/请求 超时设置 | -| 2.0.38 | JDBC REST 连接增加批量拉取功能 | -| 2.0.37 | 增加对 json tag 支持 | -| 2.0.36 | 增加对 schemaless 写入支持 | - ## 常见问题 1. 使用 Statement 的 `addBatch()` 和 `executeBatch()` 来执行“批量写入/更新”,为什么没有带来性能上的提升? From 0958fe56352a313e1617dfadfe6fe19aeb445677 Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 10 May 2023 10:15:48 +0800 Subject: [PATCH 145/156] cache/binary: memcpy binary pData if pLastCol is from rocks lookup --- source/dnode/vnode/src/tsdb/tsdbCache.c | 15 ++++++++++++++- 1 file changed, 14 insertions(+), 1 deletion(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index df7b940f61..fb1c9514ce 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -439,6 +439,15 @@ int32_t tsdbCacheGet(STsdb *pTsdb, tb_uid_t uid, SArray *pLastArray, SCacheRowsR tsdbError("vgId:%d, %s failed at line %d since %s", TD_VID(pTsdb->pVnode), __func__, __LINE__, err); rocksdb_free(err); } + } else { + SColVal *pColVal = &pLastCol->colVal; + if (IS_VAR_DATA_TYPE(pColVal->type)) { + uint8_t *pVal = pColVal->value.pData; + pColVal->value.pData = taosMemoryMalloc(pColVal->value.nData); + if (pColVal->value.nData) { + memcpy(pColVal->value.pData, pVal, pColVal->value.nData); + } + } } taosThreadMutexUnlock(&pTsdb->rCache.rMutex); @@ -1381,7 +1390,11 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow, bool *pIgnoreEarlie *pIgnoreEarlierTs = false; tBlockDataReset(state->pBlockData); TABLEID tid = {.suid = state->suid, .uid = state->uid}; - code = tBlockDataInit(state->pBlockData, &tid, state->pTSchema, aCols, nCols); + int nTmpCols = nCols; + /*if (aCols[0] == PRIMARYKEY_TIMESTAMP_COL_ID && nCols == 1) { + nTmpCols = 0; + }*/ + code = tBlockDataInit(state->pBlockData, &tid, state->pTSchema, aCols, nTmpCols); if (code) goto _err; code = tsdbReadDataBlock(*state->pDataFReader, &block, state->pBlockData); From c1e1e5d831c66dba7ffe9f01a8776b32d8586e1c Mon Sep 17 00:00:00 2001 From: huolibo Date: Wed, 10 May 2023 11:51:53 +0800 Subject: [PATCH 146/156] docs(driver): fix format --- docs/en/14-reference/03-connector/04-java.mdx | 4 ++-- docs/zh/08-connector/14-java.mdx | 1 + 2 files changed, 3 insertions(+), 2 deletions(-) diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx index 59729bff20..93cbae702a 100644 --- a/docs/en/14-reference/03-connector/04-java.mdx +++ b/docs/en/14-reference/03-connector/04-java.mdx @@ -688,7 +688,7 @@ public void setNString(int columnIndex, ArrayList list, int size) throws ``` - + ```java public class ParameterBindingDemo { @@ -912,7 +912,7 @@ public class SchemalessJniTest { ``` - + ```java public class SchemalessWsTest { diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index b11c2074c0..7eb4dfedb6 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -876,6 +876,7 @@ public void setTagFloat(int index, float value) public void setTagDouble(int index, double value) public void setTagString(int index, String value) public void setTagNString(int index, String value) +``` ### 无模式写入 From 84987c494aacf30952c3ae04c39127c9a51d4a4c Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 10 May 2023 12:25:36 +0800 Subject: [PATCH 147/156] cache/fs: fix primary ts column loading --- source/dnode/vnode/src/tsdb/tsdbCache.c | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index fb1c9514ce..cbf0ebe3ec 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -1391,9 +1391,10 @@ static int32_t getNextRowFromFS(void *iter, TSDBROW **ppRow, bool *pIgnoreEarlie tBlockDataReset(state->pBlockData); TABLEID tid = {.suid = state->suid, .uid = state->uid}; int nTmpCols = nCols; - /*if (aCols[0] == PRIMARYKEY_TIMESTAMP_COL_ID && nCols == 1) { + if (aCols[0] == PRIMARYKEY_TIMESTAMP_COL_ID && nCols == 1) { nTmpCols = 0; - }*/ + skipBlock = false; + } code = tBlockDataInit(state->pBlockData, &tid, state->pTSchema, aCols, nTmpCols); if (code) goto _err; From 785f8764d1ef4cd6dd9a7851f2f0d17129ab0b04 Mon Sep 17 00:00:00 2001 From: huolibo Date: Wed, 10 May 2023 12:57:44 +0800 Subject: [PATCH 148/156] fix: change version --- docs/en/14-reference/03-connector/04-java.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx index 93cbae702a..95a68d091b 100644 --- a/docs/en/14-reference/03-connector/04-java.mdx +++ b/docs/en/14-reference/03-connector/04-java.mdx @@ -40,7 +40,7 @@ Please refer to [version support list](/reference/connector#version-support) | taos-jdbcdriver version | major changes | | :---------------------: | :------------------------------------------------------------------------------------------------------------------------------------------------: | -| 3.1.0 | JDBC REST connection supports schemaless/prepareStatement over WebSocket | +| 3.2.1 | JDBC REST connection supports schemaless/prepareStatement over WebSocket | | 3.2.0 | This version has been deprecated | | 3.1.0 | JDBC REST connection supports subscription over WebSocket | | 3.0.1 - 3.0.4 | fix the resultSet data is parsed incorrectly sometimes. 3.0.1 is compiled on JDK 11, you are advised to use other version in the JDK 8 environment | From c777733b3a3e41678e697e0d264d54a88209ae4c Mon Sep 17 00:00:00 2001 From: gccgdb1234 Date: Wed, 10 May 2023 13:08:21 +0800 Subject: [PATCH 149/156] docs: fix format error --- docs/zh/08-connector/14-java.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index 7eb4dfedb6..326c8bf4d0 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -912,7 +912,7 @@ public class SchemalessJniTest { } } } -```` +``` From 59ec57bbfc1d38dc76badc3ead4a22ed1378f7a4 Mon Sep 17 00:00:00 2001 From: danielclow <106956386+danielclow@users.noreply.github.com> Date: Wed, 10 May 2023 13:21:23 +0800 Subject: [PATCH 150/156] remove trailing slash on internal link --- docs/en/07-develop/01-connect/index.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/07-develop/01-connect/index.md b/docs/en/07-develop/01-connect/index.md index d420f15957..8cf3c463af 100644 --- a/docs/en/07-develop/01-connect/index.md +++ b/docs/en/07-develop/01-connect/index.md @@ -288,6 +288,6 @@ Prior to establishing connection, please make sure TDengine is already running a :::tip -If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](../../train-faq/faq/). +If the connection fails, in most cases it's caused by improper configuration for FQDN or firewall. Please refer to the section "Unable to establish connection" in [FAQ](../../train-faq/faq). ::: From db4110c7883bbd8ffa4c2d9ed9a4de5fc247b30c Mon Sep 17 00:00:00 2001 From: huolibo Date: Wed, 10 May 2023 13:43:26 +0800 Subject: [PATCH 151/156] docs: formate and note of java doc --- docs/en/14-reference/03-connector/04-java.mdx | 1 + docs/zh/08-connector/14-java.mdx | 5 +++-- 2 files changed, 4 insertions(+), 2 deletions(-) diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx index 95a68d091b..bd799470ee 100644 --- a/docs/en/14-reference/03-connector/04-java.mdx +++ b/docs/en/14-reference/03-connector/04-java.mdx @@ -440,6 +440,7 @@ TDengine has significantly improved the bind APIs to support data writing (INSER - JDBC REST connections do not currently support bind interface - The following sample code is based on taos-jdbcdriver-3.2.1 - The setString method should be called for binary type data, and the setNString method should be called for nchar type data +- Do not use `db.?` in prepareStatement, should directly use `?`, then specify the database in setTableName, for example: `prepareStatement.setTableName("db.t1")`. diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index 7eb4dfedb6..d41ae60130 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -443,6 +443,7 @@ TDengine 的 JDBC 原生连接实现大幅改进了参数绑定方式对数据 - JDBC REST 连接目前不支持参数绑定 - 以下示例代码基于 taos-jdbcdriver-3.2.1 - binary 类型数据需要调用 setString 方法,nchar 类型数据需要调用 setNString 方法 +- 预处理语句中指定子表名称不要使用 `db.?`,应直接使用 `?`,然后在 setTableName 中指定数据库,如:`prepareStatement.setTableName("db.t1")`。 @@ -864,7 +865,7 @@ public class ParameterBindingDemo { 用于设定 TAGS 取值的方法总共有: -````java +```java public void setTagNull(int index, int type) public void setTagBoolean(int index, boolean value) public void setTagInt(int index, int value) @@ -912,7 +913,7 @@ public class SchemalessJniTest { } } } -```` +``` From 4bdc7e89510ff7e567213f84b76c23cbff7383c8 Mon Sep 17 00:00:00 2001 From: huolibo Date: Wed, 10 May 2023 13:47:14 +0800 Subject: [PATCH 152/156] fix: change some description --- docs/en/14-reference/03-connector/04-java.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/en/14-reference/03-connector/04-java.mdx b/docs/en/14-reference/03-connector/04-java.mdx index bd799470ee..b878d47a1c 100644 --- a/docs/en/14-reference/03-connector/04-java.mdx +++ b/docs/en/14-reference/03-connector/04-java.mdx @@ -440,7 +440,7 @@ TDengine has significantly improved the bind APIs to support data writing (INSER - JDBC REST connections do not currently support bind interface - The following sample code is based on taos-jdbcdriver-3.2.1 - The setString method should be called for binary type data, and the setNString method should be called for nchar type data -- Do not use `db.?` in prepareStatement, should directly use `?`, then specify the database in setTableName, for example: `prepareStatement.setTableName("db.t1")`. +- Do not use `db.?` in prepareStatement when specify the database with the table name, should directly use `?`, then specify the database in setTableName, for example: `prepareStatement.setTableName("db.t1")`. From 677b95d66dbd22fea6a48620c222708709b1a045 Mon Sep 17 00:00:00 2001 From: huolibo Date: Wed, 10 May 2023 13:49:17 +0800 Subject: [PATCH 153/156] fix: change some description --- docs/zh/08-connector/14-java.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index d41ae60130..074584195e 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -443,7 +443,7 @@ TDengine 的 JDBC 原生连接实现大幅改进了参数绑定方式对数据 - JDBC REST 连接目前不支持参数绑定 - 以下示例代码基于 taos-jdbcdriver-3.2.1 - binary 类型数据需要调用 setString 方法,nchar 类型数据需要调用 setNString 方法 -- 预处理语句中指定子表名称不要使用 `db.?`,应直接使用 `?`,然后在 setTableName 中指定数据库,如:`prepareStatement.setTableName("db.t1")`。 +- 预处理语句中指定数据库与子表名称不要使用 `db.?`,应直接使用 `?`,然后在 setTableName 中指定数据库,如:`prepareStatement.setTableName("db.t1")`。 From 08320613f5b65afd6a62875a71045f0232ce7c9d Mon Sep 17 00:00:00 2001 From: wade zhang <95411902+gccgdb1234@users.noreply.github.com> Date: Wed, 10 May 2023 14:33:08 +0800 Subject: [PATCH 154/156] docs: resolve format error --- docs/zh/08-connector/14-java.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index 326c8bf4d0..6aa918c522 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -864,7 +864,7 @@ public class ParameterBindingDemo { 用于设定 TAGS 取值的方法总共有: -````java +```java public void setTagNull(int index, int type) public void setTagBoolean(int index, boolean value) public void setTagInt(int index, int value) From 9bbf36d35dec69990a58f771a049baebb5c684ae Mon Sep 17 00:00:00 2001 From: Minglei Jin Date: Wed, 10 May 2023 14:39:51 +0800 Subject: [PATCH 155/156] cache/serialize: keep original col value untouched --- source/dnode/vnode/src/tsdb/tsdbCache.c | 7 ++++--- 1 file changed, 4 insertions(+), 3 deletions(-) diff --git a/source/dnode/vnode/src/tsdb/tsdbCache.c b/source/dnode/vnode/src/tsdb/tsdbCache.c index cbf0ebe3ec..7d22e59b4a 100644 --- a/source/dnode/vnode/src/tsdb/tsdbCache.c +++ b/source/dnode/vnode/src/tsdb/tsdbCache.c @@ -180,11 +180,12 @@ void tsdbCacheSerialize(SLastCol *pLastCol, char **value, size_t *size) { *(SLastCol *)(*value) = *pLastCol; if (IS_VAR_DATA_TYPE(pColVal->type)) { uint8_t *pVal = pColVal->value.pData; - pColVal->value.pData = *value + sizeof(*pLastCol); + SColVal *pDColVal = &((SLastCol *)(*value))->colVal; + pDColVal->value.pData = *value + sizeof(*pLastCol); if (pColVal->value.nData > 0) { - memcpy(pColVal->value.pData, pVal, pColVal->value.nData); + memcpy(pDColVal->value.pData, pVal, pColVal->value.nData); } else { - pColVal->value.pData = NULL; + pDColVal->value.pData = NULL; } } *size = length; From 62278ceeb4df56d38e0479482fcf59183bd59ea9 Mon Sep 17 00:00:00 2001 From: wade zhang <95411902+gccgdb1234@users.noreply.github.com> Date: Wed, 10 May 2023 14:43:39 +0800 Subject: [PATCH 156/156] Update 14-java.mdx --- docs/zh/08-connector/14-java.mdx | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/docs/zh/08-connector/14-java.mdx b/docs/zh/08-connector/14-java.mdx index 074584195e..0bc113e58b 100644 --- a/docs/zh/08-connector/14-java.mdx +++ b/docs/zh/08-connector/14-java.mdx @@ -54,7 +54,7 @@ REST 连接支持所有能运行 Java 的平台。 **注**:REST 连接中增加 `batchfetch` 参数并设置为 true,将开启 WebSocket 连接。 -### 处理异常 +## 处理异常 在报错后,通过 SQLException 可以获取到错误的信息和错误码: